

Saturday, December 14, 2019

Brainstorms: Philosophical Essays on Mind and Psychology, by Daniel C. Dennett. Overview: This collection of 17 essays offers a comprehensive theory of mind. When Brainstorms was published in , the interdisciplinary field of cognitive science was just emerging. Daniel Dennett was a young scholar who wanted to get philosophers out of their armchairs—and into conversations.

Daniel Dennett Brainstorms Pdf


Daniel C. Dennett, Tufts University. Abstract: This collection of 17 essays by the author offers a comprehensive theory of mind, encompassing traditional issues. An excerpt: "Now that I've won my suit under the Freedom of Information Act, I am at liberty to reveal for the first time a curious episode in my life that may…"

Where you come out on this issue depends, of course, on whether you consider people to be naturally bad or naturally good. Dennett certainly does not prejudge this question. But Dyson, like The New Yorker reviewer, believes that all debates about religion—not only the question of innate goodness or badness—are orthogonal to scientific inquiry. Dennett's proposal to establish a scientific dialog on religion is thus rejected by some of the very people he is trying so hard to reach.

As a behaviorist I find it hard to muster any sympathy for Dennett's failure in this regard because, in one crucial area, the behavior of individual organisms, Dennett is a thoroughgoing creationist. Just as some critics unfairly accuse Dennett of trivializing religion, Dennett has unfairly accused behaviorists in general and Skinner in particular of trivializing human cognition. As I said above, the intentional stance is unquestionably necessary in our everyday interactions with each other.

I may believe that you believe that I believe that you are telling the truth—and this chain of my beliefs in the form of overt verbal and nonverbal behavioral patterns may be reinforced by your behavior as it interacts with mine. Consider a friend of mine who is a professional actor. I see him at very long intervals—10 years on the average.

Invariably I come away confused. I don't know if he's really a nice guy or is just acting like a nice guy. I'm not sure whether he knows either. Having a conversation with a professional actor is like sparring with a professional boxer; they're in absolute control. How could I have resolved my confusion after my conversations with my actor friend?

What information did I need that I didn't have? According to Dennett, the information I needed was inside my friend at the time of my conversations with him in the form of a set of mechanisms in his brain which, if I only knew how they were organized and their state at the time, would tell me what he was really thinking as he said what he said.

It is the collective state of these mechanisms that constitutes, for Dennett, the actor's mental state. And it is his mind that directly causes him to say what he says; that is, his behavior is created by his mind, and his mind is inside his head. That seems to me nonsense. What I need is not information about my friend's internal state but information about his overt behavior over extended periods during the previous 10 years and, as it comes in, information about his overt behavior over the next 10 years.

A frank conversation about him with his children and wife would tell me far more about what he was really thinking at the time we met than would any kind of examination of his insides.

A behaviorist would have to say that, like my intentional stance with respect to the behavior of water (it seeks its own level) and the behavior of my computer (it hates me), my intentional stance with respect to my own behavior and that of other people, while convenient for everyday life, is a hindrance to scientific understanding.


Dennett thinks, on the other hand, that although my intentional stance toward inanimate objects, plants, and most animals is certainly unscientific, my intentional stance toward people, and especially toward myself, is the very basis of scientific psychology. Behaviorists, following Skinner, are far more consistent Darwinians than Dennett is.

For us, behavioral patterns within the lifetime of an individual person evolve by a Darwinian process just as genetic and cultural patterns do. For an excellent discussion of the evolution and maintenance of religion consistent with this behavioral outlook (and of course ignored by Dennett), see Baum. For Baum, memes are transmitted not from one internal mind to another but by a history of discriminative stimuli, behavior, and reinforcement.

He says: "No understanding is gained by imagining that the units of cultural transmission are [internal] mental entities…or unknown neural structures. Such explanatory fictions remain superfluous as ever and cannot explain how cultural practices originate and change, a question that demands attention to history and behavior over time for its answer…" (p. ). For Dennett, memes are passed down from the minds of parents to the minds of children.

But what exactly are memes and where exactly are they located? Dennett admits (p. ). Instead, he argues, each meme, like each thought, wish, belief, etc., is instantiated by a set of mechanisms in the brain. He quotes himself approvingly as follows (p. ). Thus, for Dennett, our beliefs reside not in our verbal and nonverbal behavioral patterns but in a set of mechanisms (the tiny robots) in our brains. The data Dennett recommends for cognitive science are behavioral; cognitive science is distinct from neurophysiology.

But those data are to be interpreted as evidence for internal mechanisms (the tiny robots), not indeed neural connections but flow diagrams where the boxes have labels like memory, imagination, thought, and so forth. Conducting an experiment would be like typing the keys in certain patterns, observing the patterns on the screen, and trying to infer, from their relationships, what the computer's program (its software) must be to have produced just those outputs from those inputs.

It would be up to the neurophysiologist then to take the computer apart to discover the wiring diagram (the hardware) that instantiates the program. Extending Dennett's analogy to a behavioral analysis, the human soul would consist not of a bunch of tiny robots but of the behavior of a single big robot—the person as a whole. The behavior analyst turns the dials and presses the levers, as it were, to discover, not what goes on inside the robot, but how the robot as a whole functions in its environment.

That is, the behavior analyst approaches the study of a human being in exactly the same way as the evolutionary biologist approaches the study of a nonhuman animal. Once these are discovered, the Dennettian cognitivist's task is finished.

But, granted that no complete understanding of human behavior can be achieved without understanding internal mechanisms, if you knew everything there is to know about those tiny robots (and the tinier robots inside them, and those inside them) you would still not understand why people do the things they do and why they say the things they say.

You will have ignored the most important scientific fact—the most important Darwinian fact—about those patterns (including religious patterns): their function in the person's environment (including the social environment). Behaviorists disagree with each other about whether complex behavioral patterns of whole organisms are usefully labeled by terms from our mental vocabulary.

Skinner thought not.

I believe, on the contrary, that mental terms are useful in behavior analysis (Rachlin). You could call this the teleological stance. Imagination, for example, may be seen, from this perspective, not as an image in your head but as a functional mode of behavior—behaving in the absence of some state of affairs as you normally would in its presence.

Suppose two people in a room are both asked to imagine a lion. The first person is imagining a picture or a movie of a lion but the second is imagining a lion itself. What is the function of such behavior?

Imagination is a necessary part of perception. If perception (as distinct from sensation) is current discrimination of complex, temporally extended sequences of stimuli (as distinct from simpler, immediate stimuli), then the immediate discriminative response, especially if made early in the sequence, involves a sort of gamble—behaving as if the extended sequence had occurred.

For example, at any given moment I treat my wife as the person she is in the long run not as the particular bundle of sensations she presents to me at that moment.

It is in connection with such premature but necessary discrimination (the universal arising out of particular instances) that Aristotle gives us his famous analogy of soldiers in a rout turning one by one and making a stand (Rachlin, p. ).

The function of the soldiers' behavior is to create an abstraction (the renewed formation) out of individual actions. The first soldier to turn is behaving as he would if all the others had already turned; he is imagining that they had already turned.

His imagination is what he does, not what the robots in his head are doing. The functions of our ordinary imaginations are to allow us to get around in the world on the basis of partial information. We do not have to carefully test the floor of every room we walk into.

Imagination is also necessary in self-control. One cigarette refusal by a smoker is utterly worthless—like only one soldier in a rout turning and making a stand.

Refusal of an individual cigarette is never reinforced—not now, not later, not symbolically, not internally. Only an extended series of cigarette refusals is reinforced. Refusal of the first cigarette is thus an act of imagination—behaving as you would if a state of affairs existed when it does not yet exist.

Such complex long-term imaginative acts would be shaped from simpler short-term acts. The function of such behavior is clear. Getting up in the morning, at least for me, is an act of imagination.


In answering these two questions, type identity theory attempted to discharge two obligations, one "metaphysical" and the other "scientific".

The first answer amounts to the mere denial of dualism, the insistence that we don't need a category of non-physical things in order to account for mentality.

Few today would quarrel with the first answer, but the second answer is hopelessly too strong. The claim it makes is that for every mentalistic term, every "mental" predicate "M", there is some predicate "P" expressible in the vocabulary of the physical sciences such that a creature is M if and only if it is P.

This is all utterly unlikely.

Every clock and every can-opener is no doubt nothing but a physical thing, but is it remotely plausible to suppose or insist that one could compose a predicate in the restricted language of physics and chemistry that singled out all and only the can-openers or clocks?

What is the common physical feature in virtue of which this grandfather clock, this digital wristwatch, and this sundial can be ascribed the predicate "registers A.

What can-openers have peculiarly in common is a purpose or function, regardless of their physical constitution or even their design, and the same is true of clocks. This recognition led to the second wave of physicalism: Turing machine functionalism. The minimal denial of dualism was maintained (every mental event was a physical event), but the requirements for answering the second question were revised: for every "mental" predicate "M" there is some predicate "F" expressible in some language that is physically neutral, but designed to specify abstract functions and functional relations.

The obvious candidates for such a language were the systems used for describing computers or programs. The most general functional language is the system for describing computers as "Turing machines".

An elementary introduction to the concept of a Turing machine is provided in Chapter . The states and activities of any digital computer or program can be given a mathematical description as states and activities of a unique numbered Turing machine, and this description is its mathematical fingerprint that will distinguish it from all functionally different computers or programs, but not from computers and programs that differ only in "physical realization".

The "reduction" of mental predicates to physical predicates attempted by type identity theory has been replaced in this view by a reduction of mental predicates to Turing machine predicates. While the resulting theory is only a token identity theory (each individual mental event is identical with some individual physical brain event or other), it is a type functionalism (each mental type is identifiable as a functional type in the language of Turing machine description).

But alas, this second answer is still too strong as I argue in Chapter 2. There is really no more reason to believe you and I "have the same program" in any relaxed and abstract sense, considering the differences in our nature and nurture, than that our brains have identical physico-chemical descriptions. What could be done to weaken the requirements for the second answer still further?

Consider what I will call token functionalism, the view that while every mental event is indeed some physical event or other, and moreover some functional event or other (this is the minimal denial of epiphenomenalism; see footnote on p. ). How will we answer the Socratic question? What do two people have in common when they both believe that snow is white?

This appears to be blatantly circular and uninformative: "A horse is any animal to which the term 'horse' truly applies." What has happened to the goal of reduction? It was, I submit, a mistaken goal. Consider the parallel case of Turing machines. What do two different realizations or embodiments of a Turing machine have in common when they are in the same logical state?

Just this: there is a system of description such that according to it both are described as being realizations of some particular Turing machine, and according to this description, which is predictive of the operation of both entities, both are in the same state of that Turing machine's machine table.
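The idea of two physically different realizations sharing a machine-table state can be sketched concretely. The following is a minimal illustration; the machine, its states, and the tape encoding are invented for this example (not drawn from the text): one machine table describes a toy Turing machine that flips bits, and two "realizations" whose tapes differ physically are nonetheless described by the same table and begin in the same logical state.

```python
# A Turing machine "machine table": each (state, symbol) entry gives
# (symbol to write, head movement, next state). Toy machine: flip bits
# left to right until a blank ("_") is read, then halt.
MACHINE_TABLE = {
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", "_"): ("_", 0, "halt"),
}

def step(table, state, tape, head):
    """Apply one transition from the machine table."""
    symbol = tape.get(head, "_")            # blank where nothing is written
    write, move, next_state = table[(state, symbol)]
    tape[head] = write
    return next_state, head + move

def run(table, tape, state="flip", head=0):
    """Run the machine until it reaches the halting state."""
    while state != "halt":
        state, head = step(table, state, tape, head)
    return tape

# Two different "realizations" of the same machine: the tapes hold
# different contents, yet both are described by the same machine table
# and both start in the same logical state, "flip".
tape_a = {0: "0", 1: "1", 2: "0"}
tape_b = {0: "1", 1: "1"}
run(MACHINE_TABLE, tape_a)
run(MACHINE_TABLE, tape_b)
```

What the two realizations "have in common" is nothing physical about their tapes; it is that one description system (the machine table) predicts the behavior of both, which is the point of the analogy in the text.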

One doesn't reduce Turing machine talk to some more fundamental idiom; one legitimizes Turing machine talk by providing it with rules of attribution and exhibiting its predictive powers.

If we can similarly legitimize "mentalistic" talk, we will have no need of a reduction. That is the point of my concept of an intentional system (see Chapter 1). Intentional systems are supposed to play a role in the legitimization of mentalistic predicates parallel to the role played by the abstract notion of a Turing machine in setting down rules for the interpretation of artifacts as computational automata.

I fear my concept is woefully informal and unsystematic compared with Turing's, but then the domain it attempts to systematize (our everyday attributions in mentalistic or intentional language) is itself something of a mess, at least compared with the clearly defined mathematical field of recursive function theory, the domain of Turing machines.

The analogy between the theoretical roles of Turing machines and intentional systems is more than superficial. Consider that warhorse in the philosophy of mind, Brentano's Thesis that intentionality is the mark of the mental: all mental phenomena exhibit intentionality and no physical phenomena exhibit intentionality.

This has been traditionally taken to be an irreducibility thesis: the mental, in virtue of its intentionality, cannot be reduced to the physical. According to Church's Thesis, every "effective" procedure in mathematics is recursive, that is, Turing-computable.

The idea, metaphorically, is that any mathematical task for which there is a clear recipe composed of simple steps can be performed by a very simple computer, a universal Turing machine, the universal recipe-follower. Church's Thesis is not provable, since it hinges on the intuitive and unformalizable notion of an effective procedure, but it is generally accepted, and it provides a very useful reduction of a fuzzy-but-useful mathematical notion to a crisply defined notion of apparently equivalent scope and greater power.

Analogously, the claim that every mental phenomenon is intentional-system-characterizable would, if true, provide a reduction of the mental (a domain whose boundaries are at best fixed by mutual acknowledgment and shared intuition) to a clearly defined domain of entities, whose principles of organization are familiar, relatively formal and systematic.

In Chapter 1 the question is posed: are there mental treasures that cannot be purchased with intentional coin? The negative answer, like Church's Thesis, cannot be proved, but only made plausible by the examination of a series of "tough" cases in which mental phenomena are (I claim) captured in the net of intentional systems. That is the major burden of the book, and individual essays tackle particular phenomena: invention in Chapter 5, dreams in Chapter 8, mental images and some of their kin in Chapters 9 and 10, pain in Chapter 11, and free will in Chapters 12 through . This is hardly a complete list of mental treasures, but reasons are given along the way, in these chapters and in others, for thinking that parallel treatments can be devised for other phenomena.

Complete success in this project would vindicate physicalism of a very modest and undoctrinaire sort: all mental events are in the end just physical events, and commonalities between mental events or between people sharing a mentalistic attribute are explicated via a description and prediction system that is neutral with regard to physicalism, but just for that reason entirely compatible with physicalism.

We know that a merely physical object can be an intentional system, even if we can't prove either that every intentional system is physically realizable in principle, or that every intuitively mental item in the world can be adequately accounted for as a feature of a physically realized intentional system. If one insisted on giving a name to this theory, it could be called type intentionalism: every mental event is some functional, physical event or other, and the types are captured not by any reductionist language but by a regimentation of the very terms we ordinarily use (we explain what beliefs are by systematizing the notion of a believing-system, for instance).

This theory has the virtues of fitting neatly into a niche left open by its rivals and being expressible in a few straightforward general statements, but in that clean, uncomplicated form it is unacceptable to me. Sadly for the taxonomists, I cannot rest content with "type intentionalism" as it stands, for it appears to assume something I believe to be false: viz, that our ordinary way of picking out putative mental features and entities succeeds in picking out real features and entities.

Type intentionalism as so far described would assume this by assuming the integrity of the ordinary mentalistic predicates used on the left-hand side of our definition schema. One might uncritically suppose that when we talk, as we ordinarily do, of peoples' thoughts, desires, beliefs, pains, sensations, dreams, experiences, we are referring to members in good standing of usefully distinct classes of items in the world ("natural kinds"). Why else would one take on the burden of explaining how these "types" are reducible to any others?

But most if not all of our familiar mentalistic idioms fail to perform this task of perspicuous reference, because they embody conceptual infelicities and incoherencies of various sorts. I argue for this thesis in detail with regard to the ordinary concepts of pain in Chapter 11, belief in Chapters 6 and 16, and experience in Chapters 8, 9, and 10, but the strategic point of these criticisms is more graphically brought out by a fanciful example.

Suppose we find a society that lacks our knowledge of human physiology, and that speaks a language just like English except for one curious family of idioms. When they are tired they talk of being beset by fatigues, of having mental fatigues, muscular fatigues, fatigues in the eyes and fatigues of the spirit. Their sports lore contains such maxims as "too many fatigues spoils your aim" and "five fatigues in the legs are worth ten in the arms".

When we encounter them and tell them of our science, they want to know what fatigues are. We can see that they are off to a bad start with these questions, but what should we tell them?


One thing we might tell them is that there simply are no such things as fatigues; they have a confused ontology. We can expect some of them to retort: "You don't think there are fatigues? Run around the block a few times and you'll know better! There are many things your science might teach us, but the non-existence of fatigues isn't one of them." We could then give as best we could the physiological conditions for the truth and falsity of those claims, but refuse to take the apparent ontology of those claims seriously; that is, we could refuse to attempt any identification of fatigues.

Depending on how much we choose to reform their usage before answering their questions at all, we will appear to be countenancing what is called the disappearance form of the identity theory, or eliminative materialism, for we legislate the putative items right out of existence.

Fatigues are not good theoretical entities, however well entrenched the term "fatigues" is in the habits of thought of the imagined society. The same is true, I hold, of beliefs, desires, pains, mental images, experiences, as all these are ordinarily understood. Not only are beliefs and pains not good theoretical things (like electrons or neurons), but the state-of-believing-that-p is not a well-defined or definable theoretical state, and the attribute, being-in-pain, is not a well-behaved theoretical attribute.

Some ordinary mental-entity terms (but not these) may perspicuously isolate features of people that deserve mention in a mature psychology; about such features I am a straightforward type-intentionalist or "homuncular functionalist", as Lycan calls me, for reasons that will be clear from Chapters 5, 7, 9 and . About the theoretical entities in a mature psychology that eventually supplant beliefs, desires, pains, mental images, experiences I am also a type-intentionalist or homuncular functionalist.

About other putative mental entities I am an eliminative materialist. The details of my view must for this reason be built up piecemeal, by case studies and individual defenses that are not intended to generalize to all mental entities and all mental states. It is no easier to convince someone that there are no pains or beliefs than it would be to convince our imaginary people that there are no fatigues.

If it can be done at all (supposing for the moment that one would want to, that it is true!). The foundation for that task is laid in Part I, where the concept of an intentional system is defined and subjected to a preliminary exploration in Chapter 1. Chapter 2 develops arguments against type functionalism and for type intentionalism, and in the second half provides a first look at some of the themes about consciousness explored in detail in Part III.

Chapter 3 examines the prospects of a very tempting extension of intentionalism: the brain-writing hypothesis. If we can predict someone's behavior only by ascribing beliefs and other intentions to him, mustn't we suppose those beliefs are somehow stored in him and used by him to govern his behavior, and isn't a stored sentence a good model (if not our only model) for a stored belief?

I argue that while it might turn out that there is some such brain writing that "encodes" our thoughts, the reasons for believing so are far from overwhelming. Further caveats about brain writing are developed in other chapters, especially Chapter 6. It is important to protect type intentionalism, as a general theory of the nature of mentalistic attributions, from the compelling but problem-ridden "engineering" hypothesis that all sophisticated intentional systems must share at least one design feature: they must have an internal system or language of mental representation.

In some very weak sense, no doubt, this must be true, and in a variety of strong senses it must be false. What intermediate sense can be made of the claim is a subject of current controversy to which I add fuel in several of the chapters.
