Chapter 1: Introduction

 
Overview: This chapter is arranged in four major sections. The first presents several ideas having to do with the nature of science, and what researchers and scientists attempt to do. The second presents a brief introduction to the philosophical background of several different schools of psychology that emerged around the beginning of the century, as psychology was starting to establish itself as a science. The third discusses two of these schools of psychology in some further detail, and introduces several ideas from each that are part of the current understanding of most psychologists about what psychological research ought to involve. Finally, the fourth section more formally introduces the topics of learning and memory, and relates them to the previous discussions of philosophical background and research interests.
 

I. Science and the Search for Explanations

        A number of people have a fairly naive idea about what science is, and about the purpose of research and experiments. In this view, scientists do experiments to find out what happens, and science is a simple accumulation of all the facts that are obtained from controlled experiments and careful observation. So, a discovery is a new fact that can be added to the stockpile of already-known facts. All that is required to make a discovery is to look for something no one else has ever seen.

        The problem with this account is that it doesn't say anything about theories. Science isn't really about the accumulation of observations or facts at random, as a simple thought experiment will demonstrate. Suppose I tell you (as many introductory classes on scientific methodology have been told to do, as an exercise) to go out and observe. You simply will be unable to follow through on that instruction without further information. Your first thought should be, Observe what?! and your second thought should be, For what purpose? The problem is, there are too many things that can be observed (an infinite number of things, in fact), and without knowing why I want you to observe something, even if you know what to observe, you still will be unlikely to have an idea of what you should be looking for. So, if I give you a more explicit instruction such as, Observe me!, do you look for any unusual behavior that I might exhibit? Or do you look for what I do most commonly? Or is the point how I'm dressed, or whether I speak with an accent, or whether I move around when I lecture? Without some idea of what and why I want you to observe, the instruction is so broad as to be useless.

        When scientists make controlled observations to collect facts, they are generally guided in their observations by theories. Whereas a fact tells you what happened, a theory attempts to tell you why it occurred. Theories are attempts at explanations, at making sense of the world around us. To use a metaphor, a good theory can be regarded as an engine or device that is capable of generating all (and only) the possible observations or facts that can occur. As an example, take the area of linguistics, which is concerned with explaining what language involves. Linguists try to come up with grammars of languages by observing native speakers of those languages, and by testing out certain statements to see whether the native speakers agree that those statements are acceptable. Grammar is an ambiguous word that means several different things. In many grammar classes, grammar is taught as style: a prescriptive approach that tells people how they ought to speak in order to be regarded as proper. So, one rule you learn is not to say ain't; another is to say ask instead of axt. But, this isn't the type of grammar linguists are primarily concerned with (although what a dominant culture regards as a proper style is, of course, a relevant topic for linguists and other social scientists to study). Linguists use a descriptive approach: How people actually speak (including using ain't and axt), rather than what one or another group regards as being proper. On this account, a grammar is a theory of a particular language. That is, a grammar is a set of rules that will generate any possible acceptable sentence in that language, and only the possible sentences. And that is what a good theory in any area attempts to do: It should be a system of rules that will generate all and only the permissible phenomena that theory attempts to explain. So, linguistic theories are meant to generate the actual language data a linguist might observe, and ideal gas theory is meant to generate all and only the behavior of ideal gases, as pressure, temperature, or volume of an ideal gas changes.
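        To make the generator metaphor more concrete, here is a minimal sketch in Python of a toy set of rewrite rules acting as a miniature grammar. The rules and vocabulary are invented purely for illustration, and are nothing like a serious grammar of any real language, but they show how a small system of rules can generate sentences, including ones no one has ever produced:

import random

# A toy "theory of a language": rewrite rules that generate sentences.
# The rules and vocabulary are invented for illustration only.
RULES = {
    "S":   [["NP", "VP"]],
    "NP":  [["the", "N"], ["the", "ADJ", "N"]],
    "VP":  [["V"], ["V", "NP"]],
    "N":   [["linguist"], ["theory"], ["sentence"]],
    "ADJ": [["new"], ["curious"]],
    "V":   [["explains"], ["generates"], ["predicts"]],
}

def generate(symbol="S"):
    """Expand a symbol into a string of words using the rewrite rules."""
    if symbol not in RULES:            # a terminal word: nothing left to expand
        return [symbol]
    expansion = random.choice(RULES[symbol])
    words = []
    for part in expansion:
        words.extend(generate(part))
    return words

for _ in range(5):
    print(" ".join(generate()))

Every line the program prints is licensed by the rules, and nothing outside the rules can ever be produced; change a rule and the set of permissible sentences changes with it. That is the sense in which a grammar, or any good theory, generates all and only the phenomena it is meant to explain.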

        There is one very important point in the description above that might easily have been overlooked: A theory can generate something that has never before been observed. So, a grammar developed by a linguist could generate a sentence that no speaker of that language has ever before uttered. Similarly, a theory of heated gases might generate a state that has never before been observed, because that particular temperature has never been produced in either nature or the laboratory. And that gets us back to the purpose of experiments. The fact that theories can generate never-before-seen results means that theories can predict what ought to happen under certain conditions. Thus, scientists, relying in part on observations, attempt to construct theories to explain those observations, and test those theories by getting them to make novel predictions. Theories are the preferred level at which the work of science occurs, because theories try to make sense of the world around us: They provide a framework for integrating and interrelating all those different facts and observations we might make, and they guide us by suggesting further observations we ought to make.

        The rules that make up a theory include principles or laws that represent generalizations, and bridging principles that tie abstract terms in the theory to real events in the world. Bridging principles are needed because theories are descriptions of what should happen under ideal rather than real conditions. A grammar of a language might generate a very long sentence that no real speaker would ever utter because it would take 24 hours to say. An ideal gas law neglects to deal with friction caused by the material of the container's wall, or with the possibility that a given container might be made of a material that chemically reacts with a given gas, producing results different from those normally expected. Or in psychology, for example, if a psychologist wants to claim that fear will disrupt the normal course of learning an association, then bridging principles must be included to define how we know that a rabbit or a human is in a state of fear, and how we know whether or not the rabbit or human has learned an association. In addition, since theories (particularly of complex systems like living organisms) can be quite complex, scientists will use bridging principles to set up models of a theory that typically test just a small part of it.

        So what happens when a theory makes a prediction that turns out not to be true? Has the theory thereby been proven wrong? In actual fact, things are much more complicated. When a theory makes an incorrect prediction, there are typically several other possibilities that need to be considered. While the theory may be wrong, it may also be the case that the researcher chose a bad model, or that the problem lies in the bridging principles. A third possibility, having to do with the fact that experiments (particularly in psychology) are conducted in the real world, is that there are too many variables that are uncontrolled, and that might, in unusual circumstances, influence the outcome of an experiment. This is in part where statistics comes in. Inferential statistics (as opposed to descriptive statistics that summarize and characterize group results such as how many people in this class are wearing t-shirts with fish designs) attempt to assign probabilities to given results, so that we can look at a result and decide whether it might have occurred by chance.
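        To make that statistical logic concrete, here is a minimal sketch in Python of one simple way of deciding whether a difference between two groups might plausibly be due to chance. The scores are invented for illustration (imagine memory-test scores for two groups trained with different methods), and the approach, a permutation test, is only one of many inferential techniques:

import random

# Made-up scores for two groups; the numbers are invented for illustration.
group_a = [9, 11, 10, 12, 9, 10]
group_b = [7, 8, 7, 9, 8, 7]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(group_a) - mean(group_b)

# Shuffle the group labels many times; count how often chance alone yields
# a difference at least as large as the one actually observed.
pooled = group_a + group_b
extreme = 0
n_shuffles = 10_000
for _ in range(n_shuffles):
    random.shuffle(pooled)
    fake_a, fake_b = pooled[:len(group_a)], pooled[len(group_a):]
    if abs(mean(fake_a) - mean(fake_b)) >= abs(observed):
        extreme += 1

p_value = extreme / n_shuffles
print(f"observed difference = {observed:.2f}, p = {p_value:.4f}")

A small p value suggests the observed difference is unlikely to have arisen by chance alone; a large one means chance remains a live possibility, which is one reason a single experiment rarely settles anything.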

        The point is that there are generally no such things as killer experiments: experiments that by themselves can completely destroy a theory. Trying to evaluate whether a theory is true or false is a bit like weighing evidence in court: Some of the evidence at first appears contradictory or ambiguous, and you have to use the preponderance of the evidence to decide what happened. When you have made that decision, then an additional step is to go back to the ambiguities and seeming contradictions to see whether there might be another explanation for them that does away with the problems. And as you will see in class and in later chapters of this text, that is true of scientific research as well. A very common pattern is for Theorist A to claim that an experiment disproves an opposing theory held by a colleague, only to have that colleague come out later with an alternative explanation of the experiment that shows it doesn't prove anything of the sort! That is not to say that anything goes; it doesn't. Over the long haul, enough findings and predictions come in to start tilting most researchers towards or away from a given theory. But the point is that doing good science is a long-term investment that requires continual testing and reassessment of a theory's predictions. Understanding that is one of the most important things you can do in terms of evaluating all the claims you will constantly come across, in newspapers, radio, and TV, about this major new discovery, or that set of experiments regarding why MouthSoFresh is the best mouthwash you can buy.

        Two more points regarding science and experimentation will prove relevant to us. The first is that scientists (and psychologists, in particular) generally engage in what is called competitive hypothesis testing. And the second is that there are a number of levels at which theories may be generated.

        We briefly discussed above the problems posed by negative evidence, evidence against a theory. But there are also problems posed by positive evidence, evidence that is consistent with a prediction. The major problem here is that there are typically many theories that make the same prediction, at least when a limited set of data is being looked at. For example, a well-known finding in learning research is that a response that has been learned with continuous reinforcement (that is, an animal or human has been rewarded each time it made that response in the presence of a certain stimulus or context) will take longer to go away when the reinforcement is small rather than large. That finding strikes a number of people who first come across it as being counter-intuitive; after all, shouldn't the learning be better or stronger with large reinforcements? And shouldn't that mean that the response lasts longer? But since the result does happen, we need to explain it. And in fact, there is a simple explanation that will make sense to you, once you've heard it. It is this: According to Amsel's Frustration Theory, animals will stop doing things that cause them frustration. Not getting an expected large reward is more frustrating than not getting an expected small reward. So, the theory can easily explain the results we find.

        So far, so good. But here's another theory: According to Mowrer and Jones's Discrimination Theory, responses generalize to similar situations or contexts. That is, when you've learned a response in one situation, you are likely to make that same response in another, similar situation. The less similar the new situation is, the less likely you are to make the response. And the point they make (among many other theorists who adopt a discrimination theory approach) is that having a small reinforcement is more like having no reinforcement than having a large reinforcement is like having no reinforcement. So, on the basis of similarity, we would expect more responding in the small-reinforcement group when the reinforcement is no longer there. And that, of course, is exactly the same prediction made by Amsel's Frustration Theory.

        There are other theories that could be brought up here that also make the same prediction. So, the issue becomes, how do you know which of these theories is in fact correct? And the answer to that is that we normally try to avoid testing just a single theory. Instead, we typically try to set up an experiment so that at least two theories are involved. We create a model in which the theories make different predictions from one another, so that we have the theories compete against one another. If Discrimination Theory and Frustration Theory make the same prediction in the example I gave you above, then we try to find some circumstance in which they seem to predict different outcomes. And we test the theories on these differences. That is what is meant by competitive hypothesis testing: If the results simultaneously support one theory while disconfirming another, then we have much stronger confidence in the theory. As this example shows, in doing science, there is a bit of a disconfirmation bias: Because of the problem of several theories making the same prediction, we try to look for evidence that will disconfirm a theory: Such evidence is normally regarded as being more important than evidence that confirms a theory. And with competitive hypothesis testing, of course, we have the best of both worlds, since we potentially get to use both types of evidence, confirming (positive) and disconfirming (negative).

        Finally, what should the principles of a theory look like? Or to put it another way, what counts as a legitimate theory, and what doesn't? This turns out to be a very tricky question to answer. It is tricky in part because what you consider a legitimate explanation will depend on your own background and interests, and will most certainly depend on some perhaps hidden assumptions you carry around with you about what, philosophically speaking, constitutes proper scientific description. As you will shortly see, people who come from very different philosophical backgrounds can have radically different ideas about what constitutes a legitimate explanation. So, your philosophy of science will be a major determining factor in deciding what a legitimate theory looks like. But, over and above that, the level of description at which you are working will also have a strong influence. People who work at the physiological level will come up with very different theories of behavior than people who work at a higher level. These levels need not be incompatible with one another, and there are certainly fruitful interplays among them. But there is sometimes a tendency for people to argue that their own level is the best level for theorizing, so that there are sometimes very heated arguments about which is the appropriate level.

        For our purposes, I will identify three broad levels, although you should not think that these are the only levels. These are the physiological, representational, and behavioral levels. At the physiological level, we describe what goes on in terms of body chemistry and neurophysiological processes. Learning at this level, for example, may involve changes in neurotransmitters and the modification of synaptic transmission among neuronal assemblies. At the representational level, we discuss causes of events in terms of the symbolic knowledge representations of an organism. Learning at this level involves acquisition and organization of knowledge, and we discuss causes of responses in part as being due to the goals of an organism, and its current states of information relevant to achieving its desired goals. This, of course, is a much more cognitive level of description. At the behavioral level, we take a yet more externally oriented approach. At this level, an organism's actions, including its learning, are strongly determined by evolutionary-like mechanisms that adapt it to its environment. Behavior that improves an organism's ability to survive and successfully compete for food and mates is likely to be acquired, whereas behavior that has the opposite effect is likely to be stamped out. Learning on this level involves environmental pressures that lead to better adaptation. So, we discuss external behavior in terms of those mechanisms (like reinforcement) that select some responses as being useful, and so lead to their continued performance. In the extreme (as in some forms of Behaviorism), we need not refer to any internal representational or cognitive states of the organism at all.

        We will concentrate on the representational and behavioral levels in this text, although we will occasionally talk about the physiological level, as well. My take on these is that all three are compatible, and that each has something valuable to offer. But since I'm a cognitive psychologist, you ought not to be surprised if I concentrate primarily on the representational level. We all agree that theories at the representational or behavioral level that are inconsistent with the biological and physiological realities are simply not on: The physiological level places important constraints on what legitimate theories at the other levels can allow. Similarly, each of the other levels is capable of discovering realities or phenomena that the other levels will have to deal with. So, there should be constant cross-talk among levels, as what happens at each has implications for the other.

        This notion is a little bit similar to a notion that the computer scientist and vision specialist David Marr expressed several years back. Marr also identified three levels, though not completely the same levels as those discussed above. He called his levels the computational level, the algorithmic level, and the implementational level. Marr said that we needed to start with the computational level. At this level, we have to ask what the purpose of something is. What goal does it accomplish? Next, at the algorithmic level, we consider the many ways or algorithms in which that goal may be accomplished. Finally, at the implementational level, the details of the medium in which a specific algorithm occurs are of interest. At the computational level, for example, the goal of a visual system involves being able to react to incoming visual information. At the algorithmic level, there are numerous algorithms for processing visual information, including light-sensitive spots on worms, lensless pinhole eyes in the Nautilus, a lens-based eye that can form an image at various focal lengths in humans, etc. At the implementational level, vision in human eyes and in camcorders is obviously implemented by very different structures. Or to take a different example that Marr is fond of, the computation or goal of flight can be realized by many different algorithms, as you will note when you think of the many different ways of flying there are. If flight is realized by an algorithm that includes wings, however, then we can start to consider implementational details such as whether the wings have feathers, etc. The point of this will become clear when you look at Marr's (1982, p. 27) rather nice quote below:
 

trying to understand perception by studying only neurons is like trying to understand bird flight by studying only feathers: it just cannot be done. In order to study bird flight we have to understand aerodynamics; only then do the structure of feathers and the different shapes of bird wings make sense.
Or in other words, all levels have to be considered in building a proper explanation; each by itself gives only part of the picture.

        Although Marr and a number of other people argue for a basic compatibility among levels, this is a view that has not been shared by all researchers. For some curious historical reasons, practitioners of the three psychological levels I've listed above have tended to claim that their level was the only proper level of description. Fortunately, most of us tend to be a bit more ecumenical today. But, it may be difficult to understand the nature of research into learning and memory without first understanding that much of the research and many of the theories came out of exclusionary approaches. Part of the reason for this, especially with respect to the behavioral and representational levels, was that scientists who in the past were attracted to these came from very different philosophical backgrounds, so that level and philosophy became intertwined. Arguments over levels thus arose that took on some of the religious-war fervor that arguments over philosophy sometimes inspire.
 

II. Dueling Philosophies: Rationalism & Empiricism

        There are many different branches of philosophy that have developed over the years, but the one we will be concerned with here is epistemology. Epistemology is concerned with exploring and defining the nature of knowledge. If we can capture the aim of epistemology in the written equivalent of a sound bite, it is this: How do we know when we know something? That is, when can we describe something as being justified true knowledge, as opposed to being an opinion, belief, or prejudice? A number of philosophers have struggled with this issue, with important historical consequences for psychologists. The issue is of obvious significance to the very nature of scientific research, since most of us would like to believe that we are finding out something about the nature of reality in our experiments.

        In this wildly abbreviated and totally inadequate survey of some of the important highlights in philosophy, we will concentrate on two movements that eventually gave rise to two very different approaches to psychological research and theorizing. These two movements are Rationalism and Empiricism. Rationalism is typically traced back to the philosopher Plato (circa 400 BCE), who founded a school of philosophy called The Academy. One of his students, Aristotle, developed Empiricism as an alternative to Plato's ideas. These two streams of thought about the nature of knowledge have been developed down the years, and have had immense influence on the development of sciences that concerned themselves with aspects of learning and knowledge. In a real sense, various sciences emerged from Philosophy as different philosophers and practitioners discovered that some of their philosophical claims could be settled by experimental techniques.

        In asking about what the nature of knowledge might be, Plato claimed that true knowledge could not be derived from our imperfect senses. There were a number of reasons for his belief, including the idea that our view of the real world was at best imperfect. An analogy you may be familiar with that is often used to present this idea is the analogy of Plato's Cave. In this example, there is a road passing in front of a cave opening, and on the road are a number of people carrying a number of different objects. Their shadows are cast on the back wall of the cave. Inside the cave, there are people who can only look at the cave wall. Plato pointed out that their knowledge of what was happening on the road would be imperfect, at best. His point was that what we know from seeing shadows on a cave wall may be very different from what actually happened to cause those shadows in the first place. So, he never thought we could directly grasp true knowledge through our senses. In some sense, he claimed that there was a world of ideal forms or essences that were like the objects casting shadows, whereas the imperfect reflections or shadows of these objects constituted the world of appearances. We never see an ideal triangle; we see a whole bunch of different triangles that are not identical to one another at all, but that somehow all share the important essence of triangleness. Moreover, our senses can easily be fooled, thus providing us with another reason to distrust the senses as a source of real knowledge. So, if you put your left hand in a bucket of cold water and your right hand in a bucket of hot water, and then five minutes later you place both hands in the same bucket of lukewarm water, the water will feel hot to one hand and cold to the other, even though it is the same water.

        But if we cannot rely on our senses to tell us when something is true, what can we rely on? Plato's answer to this was that true knowledge had to be innate. (For students of psychology, this is the nature side of the familiar nature-nurture debate.) That is, whatever we know, we know by virtue of being human.

        An objection to this view may have occurred to you. This appears to be a profoundly anti-learning approach, and thus a surprising view to bring up in a course on learning. If we already know everything, what is there to learn? And it also seems to make a wildly wrong claim. If everyone already knows everything, then why do there appear to be differences in what people know? More to the point, why do babies seem to know very little, if anything, and why does it look as if major learning is occurring with advancing age, at least through early adulthood? Plato, of course, was aware of this problem. His claim essentially was that we had to use our inborn faculty of reasoning in order to uncover that knowledge. Thus, the knowledge was hidden until reasoning disclosed it. In a real sense, learning for Plato was remembering.

        Why would knowledge have to be recovered through reasoning? Plato believed in reincarnation. He thought that the soul, when it was separated from a body, resided in a perfect world of forms and essences, and was aware of all of these (the sum of knowledge). But he also claimed that when it was placed into another body in the cycle of reincarnation, that process was traumatic enough to cause temporary confusion. Thus, education was still needed, but could be conceived of as the process of helping the soul, through reasoning, to remember and recover its knowledge.

        These ideas represent only a very small part of Plato's very rich philosophy. But already we can see a kernel of ideas start to form that will be important to certain later psychologists. One is that we are concerned with a knowledge-level description of what people know: the representational level. That means that we adopt an assumption of mentalism: our interest is in the mental life and its contents, in this case, knowledge in particular. In addition, there is an assumption of nativism, the notion that at least some (all, in Plato) knowledge will be innate. In more modern times, of course, we speak of genetically programmed knowledge. Finally, there is an assumption that reasoning is of critical importance in the mental life. This emphasis on reasoning, of course, accounts for why Plato's philosophy is termed Rationalism. In current parlance, a very similar assumption among many cognitive psychologists is the assumption of information processing: Thinking is goal-based problem-solving that involves an important component of reasoning.

        Plato's student Aristotle, who formed his own group known as The Peripatetics, took issue with many of Plato's philosophical claims. In particular, he rejected the notion of a world of ideals or forms or abstract ideas whose essences were imperfectly represented in specific exemplars in the real world, and which could be apprehended directly by the soul between reincarnations. Instead, he claimed that abstractions (like the idea of triangleness) were the outcome of experience with particular instances in the real world (particular exemplars; particular triangles of all sizes and differing angles). Or to put it another way, we gain knowledge (we learn) through our experiences with the real world, and thus, by relying on our senses, however imperfect or subject to illusion (see the water temperature example above) they might be.

        Empiricism is the idea that knowledge comes from our experiences with the world. In developing his ideas, Aristotle said that the soul could be likened to a block of soft wax: An object (like a bottle) placed on the wax will leave an imprint that will represent the object, but will not be the object (since the imprint, among other things, isn't made of the same material). This idea is sometimes referred to in introductory chapters such as this as the notion of a tabula rasa (Latin for blank slate), another major concept in Empiricism. Technically, however, they are not the same. We will talk about the tabula rasa a bit below, but for now, an important difference is that Aristotle never claimed that all knowledge was gained through experience with the environment. To carry the analogy further, the soul for Aristotle (what we would today call the personality) was structured or organized into different parts and processes from the start, and wasn't really an unformed lump of wax on which structure would be created as a result of objects leaving their imprints over the course of time. As we will see below, later Empiricists sometimes took a radical position that virtually all structure, organization, and knowledge were acquired through interactions with the environment. To enable you to get a more concrete feel for this argument, think of your ability to see in three dimensions, so that you appear immediately to be able to tell that some objects are farther away than others. Is depth perception innate? Or is it the result of very early experiences in childhood? That is the type of issue that is relevant to this discussion.

        Nevertheless, much knowledge for Aristotle resulted from experiences. So, the question became how such learning could take place. In discussing some of these issues, Aristotle formally developed three laws of association (although there is some indication that Plato might have had the first two of these in mind earlier). One of these involved the law of similarity: Given experiences would bring to mind similar experiences from the past. You can see that the law of similarity will be important in helping people perceive the commonalities amongst a number of objects. Thus, being reminded of a number of different triangles may be the basis for our recognizing the abstract notion of triangleness. A second law was the law of contrast: Given experiences bring to mind their opposites. So, an experience of warmth can remind you of an experience of cold; and an experience of being still can remind you of an experience of movement. Similarly, a triangle can remind you of a square or circle, but it should not remind you of warmth or movement. And if you think about these, you will realize that these laws (and the one mentioned below) help us structure our experiences, and organize them in certain ways (thus starting us off on a theory of categorization). Aristotle knew that memory appeared to rely on organization, and the various mechanisms of association he proposed helped establish this organization.

        The third law has been particularly significant in the history of psychology: the law of contiguity. According to this law of association, things that happen at about the same time (temporal contiguity) and at about the same place (spatial contiguity) get bonded together. In later theories of learning, the law of contiguity was regarded as the central mechanism for learning. So, if by dumb luck the first three times you enter Girard Hall, someone steps on your foot, then you should associate these two events. Associating the events then allows you to make a prediction, since seeing Girard Hall should remind you of having your foot stepped on. But, the same accidental associations (Girard Hall, your foot being stepped on) don't normally occur repeatedly, so the strong associations ought to occur for things that are connected through some mechanism other than random chance. That is, associations due to contiguity should generally be more likely or more common for things that in some sense do belong together.
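        A minimal sketch may help make the law of contiguity concrete. In the Python fragment below, events that occur in the same episode become bonded, and repeated pairings come to dominate one-off accidents; the episodes are invented for illustration, and this is not a model anyone has seriously proposed in exactly this form:

from collections import Counter
from itertools import combinations

# Invented episodes: events experienced at about the same time and place.
episodes = [
    {"enter Girard Hall", "foot stepped on"},
    {"enter Girard Hall", "foot stepped on"},
    {"enter Girard Hall", "foot stepped on"},
    {"enter Girard Hall", "hear a bell"},
    {"enter library", "hear a bell"},
]

# Each co-occurrence within an episode strengthens the bond between events.
associations = Counter()
for episode in episodes:
    for pair in combinations(sorted(episode), 2):
        associations[pair] += 1

def recall(cue):
    """Return events associated with the cue, strongest bonds first."""
    linked = {}
    for (a, b), strength in associations.items():
        if cue == a:
            linked[b] = strength
        elif cue == b:
            linked[a] = strength
    return sorted(linked.items(), key=lambda item: -item[1])

print(recall("enter Girard Hall"))
# [('foot stepped on', 3), ('hear a bell', 1)]

The repeated pairing ends up with the strongest bond, which is why contiguity can support prediction rather than merely recording coincidences.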

        This notion of associations doesn't just refer to associating complex events (like Girard Hall, or your foot) with one another, however. And that is where one of its great strengths arises. Association can also account for our experiences of single objects. An object (like a bird) in Aristotle's view can be regarded as a collection of different abstractions or features, some arising in different senses. Birds normally have beaks, feathers, and they fly (to mention some of the common properties or features). By association, we create a representation or category that includes beaks, feathers, and flying, and we can now refer to that category by a single word: Bird. In this sense, the law of contiguity is sensitive to the structure of objects, because objects can be viewed as correlations of more primitive features or properties. Moreover, associations can occur across senses to give us a more rounded knowledge or memory of a complex object: When you deal with an apple, it has a certain feel (sense of touch), a certain flavor (sense of taste), a certain shape, shading, and color (sense of vision), a certain aroma (sense of smell), etc. Since these recur together, they become associated together, and thus part of our common knowledge of what an apple is. The part of the soul in which all these sensations from different senses come together and get bonded together is called the common sense. Now you know where that term comes from.
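        The same kind of bookkeeping can be sketched for building a single complex representation out of sensations from different senses. In the fragment below (the feature values are invented, and the code is only an illustration of the building-block idea), sensations that recur together across encounters accumulate into one bundle that a single word can then name:

from collections import defaultdict

# Invented encounters with the same kind of object, each delivering
# sensations in several senses at once.
encounters = [
    {"touch": "smooth", "taste": "sweet-tart", "vision": "red and round", "smell": "apple aroma"},
    {"touch": "smooth", "taste": "sweet-tart", "vision": "green and round", "smell": "apple aroma"},
    {"touch": "smooth", "taste": "sweet-tart", "vision": "red and round", "smell": "apple aroma"},
]

# Sensations that recur together across encounters accumulate into a bundle.
tallies = defaultdict(lambda: defaultdict(int))
for encounter in encounters:
    for sense, sensation in encounter.items():
        tallies[sense][sensation] += 1

# Keep, for each sense, the sensation that most reliably accompanied the rest.
bundle = {sense: max(counts, key=counts.get) for sense, counts in tallies.items()}
print("apple =", bundle)

The correlated bundle of touch, taste, sight, and smell is what the single word apple then names.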

        As you may expect, Aristotle's emphasis on knowledge arising from the operations of the mind on sense experiences in the real world led him and his students to study biological and natural phenomena, and to classify them according to their basic properties. In a later chapter on categorization, we will briefly return to Aristotle's view of categories.

        In any case, although Aristotle also adopted mentalism, one major assumption that arose out of the approach he started was environmentalism, the idea that knowledge is due to our experiences with the external world or environment (and as you can see, some familiar words in psychology and philosophy sometimes have very different technical meanings than in everyday conversation). Environmentalism, of course, represents the opposite of nativism. We can also point to associationism as part of the kernel of ideas we obtain from Aristotle: the idea that learning involves forming connections. And to the extent that associations form automatically (a point that became a major tenet of Behaviorism), we can add the assumption of automatic learning: Learning is imposed from the outside rather than being under the control of the organism itself (but this is too strong a claim for Aristotle!). Finally, consistent with Aristotle's notion of associations, we have the assumption of atomism (also sometimes referred to as the building block model): Simple sensations (atoms) combine together to create complex experiences. Hence, a complex experience, event, or memory is the sum of its individual components, just as a house might be described as being composed of bricks (its atoms) in certain arrangements or associations.

        I've described what I call the kernels of ideas in Rationalism and Empiricism, because in extreme forms, they can lead you to very different levels of explanation. Thus, to anticipate, an extreme position on environmentalism can be taken to mean that mentalism ought to be rejected. So, whether you wish to allow explaining what someone is doing, and why, in terms of internal mental states or external environmentally-imposed associations may depend on your position on these kernels. These explanations, of course, correspond to the representational and behavioral levels we spoke of earlier. But the point is that an extreme environmentalist who rejects mentalism will also tend to view explanations that are couched in mentalist terms as illegitimate and unscientific. And of course, a mentalist will tend to view an explanation couched solely in environmentalist terms the same way. Whether you are explicitly aware of what your position is on some of these issues, or whether you hold an implicit position that you have never consciously examined or tried to justify, your philosophical assumptions can have profound effects on what you take to be a legitimate theory or explanation of learning and behavior.

        We are now going to skip ahead to the 16th and 17th Centuries, a time of great intellectual excitement. This was the time of the Scientific Revolution, when scientists were refusing to accept Church dogma about the nature of the world, and were making all sorts of observations and studies questioning the old views (and were sometimes being burnt at the stake for it). The heavens were no longer a revolving dome over a stationary Earth; the compound-lens telescope (patented in the Netherlands in 1608) was opening up new vistas of planets and moons potentially like the Earth; and the compound microscope (essentially an inverted telescope) was opening up a world of organisms so tiny as to be invisible to the unassisted eye. And with all the new techniques and discoveries (not to mention one of the first systematizations of scientific methodology by Francis Bacon, who died in 1626), there was renewed ferment in philosophy, and a renewed debate over Rationalism and Empiricism.

        One of the major reassertions and reformulations of Rationalism occurred at this time. This involved the work of René Descartes (who died in 1650). Like Plato, he accepted the doctrine of innate ideas. But he put an interesting twist on it that has continued to affect psychology since. You will be familiar with Descartes from what is probably his most widely quoted saying: cogito ergo sum. This is Latin for I think, therefore I am. The story behind it is that Descartes was trying to develop a method for uncovering the innate ideas. His method involved a kind of boot-strapping whereby the discovery of one innate idea could be used to uncover yet other ideas. So, he asked himself what fundamental thing he knew to be absolutely true, without need for any external experiences or learning at all. And the answer he came up with was that his self-consciousness and his ability to reason gave indubitable proof of his own existence. Hence, the statement that is perhaps one of the most recognized phrases in the world (though perhaps one that relatively few people understand the significance of). Note the implicit emphasis this places on the role of consciousness.

        Descartes is of interest to us because of what is today called Cartesian Dualism. Descartes of course was quite familiar with all of the exciting work being done on what causes bodies to move, and he concluded, in the spirit of the Scientific Revolution, that our own bodies were simply mechanical machines. However, he also claimed that humans had souls. Thus, the actions of the body could be due to pure reflexes or causes in the external world (a behavioral level explanation), but the actions of the soul were not mechanistic, and had to be described in terms of knowledge (a representational level explanation). This dual nature of humans is what is referred to as Cartesian Dualism. And it seems that many psychologists have been chasing down one or the other stream of Descartes' philosophy ever since.

        Descartes also made a sharp distinction between humans and animals. Animals for him were simply machines that were reflexively responding to events or stimuli in the external world. Humans, on the other hand, had a characteristic that set them apart from animals and showed their ability to reason. That characteristic was language. And since ideas were innate, this meant that language had to be innate, as well, an idea that resurfaced 35 or so years ago in the work of the linguist Noam Chomsky.

        Finally, a left-over problem of Cartesian Dualism that psychologists and philosophers still struggle over is what is called the Mind-Body Problem. If Mind and Body are different substances, one material, the other non-material, then how can we account for the fact that Mind and Body normally seem to be in synch? A number of answers have been posed here. They break down into dualist and monist positions. Monist positions deny that there are two different types of substance, mental and physical. Instead, they claim that everything is composed of a single substance, and follows the laws of that substance. A monist position held by a number of people, materialism, claims that all things are physical, and follow the laws of physics. So, on this account, whatever Mind is, it has to have a physical basis, and can't really be different from Body. According to dual-aspect theory, Mind and Body are two aspects of the same thing, much as a coin has two faces. In fact, some theorists simply get rid of the problem of Mind by denying that there is such a thing as Mind or mental events.

        Dualist positions maintain two separate entities, Mind and Body. The two most well-known dualist positions are interactionism and parallelism. According to interactionism, one substance can influence the other. Descartes held an interactionist position, and thought that the soul could influence the body through the pineal gland. But the problem that arises is describing how a non-physical substance can have an effect on a physical event, or vice versa. Parallelism, in contrast, claims that Mind and Body happen to have experiences in common, but that neither influences the other. The metaphor that one often reads here is that of two clocks set going at the same time: Though neither influences the other, they will both always give the same time.

        To date, no one has come up with a universally satisfactory answer to the Mind-Body Problem. The issue is how do we treat mental events, and how can they influence physical behavior? Most of us are materialists who believe that mental events have to have physical events underlying them. But a question that continues to perplex a number of researchers is whether the mental level can be completely reduced to the physical level, or whether there are new, emergent properties that can only be described at the mental level. Regardless of what the eventual outcome of this debate turns out to be, many people (myself included) believe that a mental level of explanation will continue to be useful, even if for no other reason than as a short-hand description of some very complex biological and physiological states. That is, it is a lot easier to talk about someone's category of triangles than it is to talk about the huge numbers of neurons and the patterns of brain activation that may correspond to the mental category triangle.

        There was a lot of activity and ferment on the Empiricist side, as well. Thomas Hobbes, a contemporary of Descartes (Hobbes died in 1679), argued against dualism, and for monism. He claimed that Mind was essentially reducible to the actions of the nervous system. So, for Hobbes, all actions had mechanical causes, including our so-called mental ideas. At about the same time, John Locke (he died in 1704) came up with a strong attack against the notion of innate ideas, although there is some thought that he was arguing specifically about innate moral ideas rather than all innate ideas. In any case, he is the one who is responsible for the metaphor of the tabula rasa: The mind, according to this metaphor, is like a blank slate on which experience writes. Also harking back to Aristotle, David Hume (who died in 1776) focused on the importance of the process of association. In his case, he attempted systematically to explore the structure of the mind using the laws of association. Hume believed one could catalogue the contents of mind much as astronomers catalogue the stars in the sky. And, much as the stars and planets are subject to a general principle of gravity which 'organizes' their positions, so did Hume believe that the contents of mind were subject to a general principle of association that served a similar organizing function. Hume explored two laws that will be familiar to you from our discussion of Aristotle: the law of resemblance (or similarity), and the law of contiguity. But he added a third important law: the law of cause and effect. According to this principle, if two events are repeatedly associated in the same temporal order, then we will view the earlier event as the cause of the later event, which we will interpret as a result or outcome of the earlier event. Wings and beaks are not associated in a strict temporal order, but kicking someone and hearing a yell are; thus, kicking causes yelling, but we can't and won't say that beaks cause wings. For Hume, association was a fundamental force or principle in mental life much as gravity served as a fundamental force or principle in the physical world.

        Since I asked you to consider whether depth perception was innate earlier in this chapter, let me also briefly introduce another of the British Empiricists, Bishop Berkeley (d. 1753). Berkeley went far beyond Locke, claiming that we had no reasonable basis on which to conclude that objects existed in the physical world. We don't apprehend objects directly; we just receive sensations. So, he built his system around the notion that sensations or ideas were the only things about which we could learn. His motto was esse est percipi (Latin for to be is to be perceived). If you compare this with Descartes, Berkeley's claim is that what we perceive is what there is. As he was a bishop, he also claimed that what kept the world in existence was a divine entity that was always perceiving everything. But Berkeley was also a strong associationist, as he would have to be, since he starts out with only sensations coming in through our sense organs. One of his very interesting analyses had to do with depth perception, which he claimed involved associations formed between several different senses. As something becomes further away, its image becomes smaller, and our eyes don't angle inwards as much when they focus on it. At the same time, we become increasingly incapable of grasping or touching the object. On the basis of such correlations, Berkeley concluded that even depth perception was learned.

        Three other developments deserve mention in this brief overview of philosophically relevant ideas. Two that fit in with Empiricism include Utilitarianism and Positivism. Utilitarianism, as presented by Jeremy Bentham (d. 1832), was an attempt to develop laws of choice having to do with how much pleasure or pain various choices would bring. And Positivism, as presented by Auguste Comte (d. 1857), was a claim that true scientific explanation in its final, mature stage would avoid all reference to unobservable things. (You can probably guess that Comte loathed then-current thinking having to do with psychology, as it all referred to unobservables of one sort or another.) Both of these ideas became part of the Empiricist kernel. Utilitarianism eventually gave rise to the notion that associations will be affected by pleasure or pain (so that you form associations to do something if the result is pleasure, and to avoid doing something if the result is pain). Positivism became the rallying cry of later Empiricists who adopted the extreme position that since mental events were unobservable (at least, to other people than yourself), they could not legitimately be part of scientific explanation.

        The third development that ought to be mentioned is German Idealism, an offshoot of Rationalism, in some respects. Idealism later gave rise to Romanticism, a movement that became particularly important in literature and the arts, a repudiation of the Enlightenment belief that reasoning could create a successful, utopian world. German Idealism had important roots in the work of the two German philosophers, Gottfried Leibnitz (d. 1716) and Immanuel Kant (d. 1804). Leibnitz repudiated the notion of a material world composed of non-sentient atoms. Instead, he substituted the notion of monads as the building blocks out of which everything was constructed. But he also claimed that monads had some minimal consciousness. So, for Leibnitz, everything had a spark of consciousness, being composed of monads. But this also opened the door to talking about degrees of consciousness, and in particular, a level of consciousness so low as to be properly termed unconscious. Like Plato and Descartes, Leibnitz believed in innate ideas.

        Kant also talked about degrees of awareness, and about the possibility that we sometimes did things for reasons of which we were unaware. In addition, he presented an extraordinarily sophisticated analysis of reasoning and thinking, and made a very strong claim that certain ways in which we experience things had to be a priori, or unlearned. Kant tore into Berkeley's analysis of depth perception. For Kant, perceiving depth was one of the innate mechanisms we have for structuring our experiences about the world. As you may know from some of your other classes, experiments by Eleanor Gibson and her associates in the 1960s using a device called the visual cliff (a cliff-like drop covered by glass so that an infant or animal can safely crawl out over the drop) finally demonstrated that some aspects of depth perception were unlearned, consistent with Kant's claims.

        The change of emphasis to degrees of consciousness and to the possibility of being influenced by unconscious motives changed the focus for later Idealists and Romanticists from reason and consciousness to the unconscious and in certain instances, the irrational. And with these developments, we have set the groundwork for the start of psychology as a science in its own right.

        In the mid to late 1800s, several people started doing research and experimentation on a topic that up to that point had been regarded as strictly the provenance of armchair speculation: psychology. Most notably, Ebbinghaus started a series of studies on memory that is still relevant and worth reading today; and Fechner (who is regarded as the Father of Experimental Psychology) started experiments showing that mental sensations could be reliably scaled and related to physical stimuli. So, the push was on to found a new science of psychology, and various theoreticians were searching for a proper definition of what this science might be. Around the turn of the century, three very different definitions arose, giving rise to three very different schools of psychology. One, arising in part from Idealism and Romanticism with their emphasis on innate irrational forces, was Freud's School of Psychoanalysis. A pivotal work here was Freud's publication of his masterpiece The Interpretation of Dreams in 1900. In Psychoanalysis, of course, irrational instincts that form part of the unconscious Id provide a motive for much of our behavior. Arising from the Rationalist tradition in part (though certainly adopting elements of Empiricism) was a school of psychology later known as Structuralism, developed by Wilhelm Wundt. (Technically, Structuralism refers to the work of Edward Titchener, who studied with Wundt; we'll look at some of the differences in the two systems below.) Wundt focused on consciousness, and explaining the contents of the mind. And finally, in part in reaction to Structuralism, John Watson proclaimed the science of Behaviorism (in a still-worth-reading paper published in 1913, Psychology as the Behaviorist Views It). Behaviorism came directly out of an extreme Empiricist position, and simply denied the existence of mental events. For the psychology of learning, Behaviorism and Structuralism had profound influence.
 

III. Two Founding Psychologies: Wundt & Watson

        In Germany, in the late 1800s, Wilhelm Wundt was working to develop psychology as a systematic science. He in fact published a text, based on the work started by Fechner, that held out the promise of discussing psychological facts in terms of physiological ones. As he did further and further work, this became a less and less important goal. He finally developed a type of psychology that was one of the first precursors to modern-day cognitive psychology. We will follow a perhaps bad practice of referring to Wundt's school as Structuralism, since Wundt's writings have typically been seen through the work of a student, Titchener, who later came to America with his own take on psychology (and whose system had the name Structuralism). In any case, both Wundt and Titchener effectively provided a definition for scientists that told them what kind of research to conduct on individual subjects. We can restate this definition in the following terms: Psychology is the science of the contents of consciousness.

        Definitions are necessary, because they help mark out the territory within which you work. They tell you what to study, and they may even provide hints about what methods to use. In this case, one goal (a goal especially important to Titchener) reflected something that the British Empiricist J. S. Mill (d. 1873) had talked about: establishing a science that would essentially be the equivalent of mental chemistry. In mental chemistry, we attempt to describe the composition of mental elements in terms of simpler mental atoms. Wundt and Titchener spoke about three broad types of mental atoms that populated and permeated conscious experience, and Titchener's research, in particular, was geared towards discovering what these atoms were, and how they combined. The three types included sensations, images, and feelings.

        Sensations refer to the direct mental experiences we have as a result of physical energies stimulating our various senses. Many such sensations are typically present at once. Thus, to go back to an example I mentioned earlier, if you simultaneously hold and look at an apple, there are numerous sensations having to do with texture, shape, color, depth, etc. These are all present simultaneously. So, an experience of seeing or touching an apple is already a complex mental event, rather than a simple one. But how do you determine what basic elements go into this complex event? That is what the Structuralists set out to determine.

        The way in which the Structuralists set out on their mental chemistry project was to do experiments in which people would be presented with objects of various sorts, and would have to describe their mental experiences while perceiving these objects. Now, an objection might immediately occur to you: On seeing an apple, you would be strongly tempted to report back something like I'm looking at an apple when asked to describe your experience. As we have seen earlier, that is in part because we have already learned, through association, to take a number of sensations of a certain sort that occur together, and give the complex a single name that stands for and represents the whole experience. But Wundt and Titchener were aware of this. So, rather than work with naive subjects, they adopted a methodology (a way of conducting scientific experiments to gather data) that was called controlled introspection.

        In controlled introspection, people had to analyze complex sensations; that is, they had to break down a complex experience into its simpler component parts, and then report back those parts. In theory, this should allow us to determine what parts recur in what objects, which should allow us to build an atlas of mental experiences in which all possible objects are listed in terms of their components. But this, in turn, requires fairly sophisticated and trained subjects. Before you use controlled introspection, you need to teach people to analyze their experiences, and they need to know which categories to break those experiences down into. To see the problem, imagine that you are a subject reporting back on what the experience of tasting breaded veal is like. What are the more elementary tastes in veal? Or in boiled crawfish? Or how do you describe the taste of a given wine? (If you've ever read a wine column, then you know that there are some complex categories some people use to describe a taste that seem somewhat bizarre, at first blush.) To address this problem, the experimenter provides the categories (otherwise, everyone is free to develop their own!), and trains people until they are reliable at using these. Some training, of course, may be easier than others (compare color categories to taste categories). But even with taste, people can be trained to report back the amount of bitter, for example, or salt, or sweet, or whatever.

        Notice that this procedure involves the direct and immediate results of elementary perceptions. Both Wundt and Titchener denied that the more complex mental processes involved in, say, thinking and reasoning could be studied through introspection. In fact, Wundt developed another method, one that contained aspects of modern anthropology and social psychology, to study the higher mental processes. He felt that we could learn about these by looking at cultural artifacts, and seeing what they had to tell us about cognitive processes. But as that method did not involve direct experimentation on individual subjects, we will not discuss it further. You should be aware, however, that Wundt thought higher mental processes could be studied, but not through experiments on individual subjects. His arsenal thus included several different types of methodologies, and not just controlled introspection.

        Feelings as elements of consciousness involve emotional or affective states, and are pretty straightforward. But what about the third type of atom, the image? In Structuralism, images were like sensations, but less intense. They also differed from sensations in that they occurred in the absence of objects that would normally give rise to similar sensations. So, if you look at a book, and then close your eyes and mentally picture the book, your experiences in these two situations ought to resemble one another with the exceptions that (1) your mental picture is less clear, and may have fewer details, and (2) there is no physical stimulus in the outside world that is causing your internal image. Both of these points are important. They are important in particular because imagery for Structuralists is the medium through which thinking and reasoning occur. Our ideas essentially consist of images.

        The fact that images can occur in the absence of physical objects that would normally cause similar sensations means that images constitute a memory of those sensations. Thus, we remove ourselves from having to experience just the immediate physical present by being able to call up the past. And that, of course, explains why images have to differ in some respect from sensations: If images were exactly identical to sensations, then how would we ever know that we were actually seeing something rather than hallucinating? And phenomenologically (that is, how things seem to our conscious awareness), images do seem fainter and less crisp than sensations. Note, by the way, that you ought not to assume that imagery is strictly confined to the visual modality. We can have auditory as well as visual images (imagine hearing a song, for example), and indeed, imagery in any sense modality.

        But imagery is also generative, giving it the power to express new thoughts rather than merely record past events. Because images are generally complex assemblies of more elementary images (just as the experience of seeing an apple is a complex assembly of more elementary sensations), new images can be constructed from the component image atoms much as new sentences can be constructed from component words. As an example, I will ask you to imagine a fabulous creature that I will call a felornicorn. You have never seen one of these, because it doesn't exist. But you can easily 'picture' it when I tell you that the creature is like a unicorn, only it is a type of cat. You construct this novel thought/image by taking familiar elements (the images of a horn and a cat), and recombining them.
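
        To make the generativity point concrete, here is a minimal sketch in Python. It is purely my own toy illustration, not anything the Structuralists proposed; the particular feature lists (and the felornicorn composite itself) are hypothetical stand-ins for image atoms.

# Toy illustration (not a Structuralist model): a handful of familiar
# component "atoms" can be recombined into composites never encountered before.
from itertools import product

bodies = ["horse", "cat", "goat"]                    # hypothetical component images
adornments = ["a single horn", "wings", "nothing unusual"]

composites = [f"{body} with {adornment}" for body, adornment in product(bodies, adornments)]

print(len(composites), "composites from", len(bodies) + len(adornments), "atoms")
print("felornicorn ~", "cat with a single horn" in composites)   # novel, yet constructible

        The point is only the combinatorics: a small stock of reusable elements can generate far more composites than were ever directly experienced.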

        So, the goal of Structuralism: Find out what the basic sensations and feelings and images are; see how they combine to give us our complex mental experiences in consciousness; and study how image atoms can recombine to constitute thinking and reasoning. Structuralism is so-named because the goal is to describe the structures of mental experience (mental chemistry!), and in order to do that, you have to know how and out of what those structures are built up.

        I stated earlier that Wundt and Titchener did have different systems, despite their shared use of controlled introspection and their belief that thought involved imagery. One difference in particular is worth noting: The mind in Wundt's system was much more active, including a series of processes that affected what was in consciousness. Wundt, for example, thought that attention was one such important process: Things to which we pay attention phenomenologically appear sharper, more detailed, more in focus. Things to which we pay attention (as you will see in later chapters) also appear to be more easily learned and remembered. For Titchener, however, attention was not a process, but a cluster of sensations that happened to accompany certain tasks. Furrowing your brow or squinting your eyes, for example, causes certain sense organs to be stimulated, and thus deposits certain sensations in consciousness, which can then be given a name (attention) much as another cluster of sensations can be called apple.

        Wundt's and Titchener's work was the precursor of modern cognitive psychology for a number of reasons. It represented a psychology that held that mental life was what had to be explained; it focused on thinking and reasoning; it talked about imagery; it presented a theory of memory; and it led these researchers and their students and colleagues to conduct a number of experiments that, though less sophisticated than those conducted 70 years later, were surprisingly similar to modern experiments (and obtained similar results). Thus, this work adopted some of the Rationalist kernel, operated at a representational level, and anticipated some modern research in the area of human memory.

        It also quickly ran into problems. There were three issues, in particular, that helped bring Structuralism to a halt. These included the imageless-thought controversy, a potentially fatally flawed methodology, and the status of child and animal research.

        We'll take these in reverse order. A question that very quickly arose is whether animals or children could be studied directly, using the Structuralist approach. The problem is that neither is easily trained to analyze and report back on sensations in consciousness. That restricted experimental psychology to fairly sophisticated human subjects. If you read Watson's Psychology as the behaviorist views it, you will see that this de facto denigration of animal researchers particularly rankled him. Watson had one of the first experimental laboratories at Johns Hopkins University, and at one point he was studying color vision in various species of animals. A colleague who was visiting his lab was overheard to mutter something along the lines of And they call this psychology! It was not a comment Watson found endearing. Moreover, to what extent are we even willing to attribute consciousness (and imagery, in particular) to animals? (Note the implicit residue of Cartesian Dualism that allows us to focus on humans rather than animals.)

        Not all psychologists felt that animals' inability to be trained in controlled introspection was a major stumbling block. There were some very cute 'experiments' in which people tried to act like animals to get some insight into animal consciousness. Probably, the less said about the major flaws in this methodology, the better (though you might enjoy searching out one or two of these 'studies'). But even if you grant this sort of ability, it still fails to allow for the type of research that interested people like Watson.

        More seriously, Watson and others started questioning whether the experimental methodology of controlled introspection was valid. If the goal is to identify the basic sensations, images, and feelings out of which everything is constructed, and if you have to teach people what basic sensations and feelings to report back, then aren't you essentially 'discovering' what you tell people to look for? Thus, imagine trying to resolve by experiment how many basic tastes there are. Using this method, it can't be done. One researcher who trains her subjects to report back by analyzing complex tastes down to four basic simple tastes will always find combinations of just those four basic tastes. Another who chooses to train subjects to report back in terms of eleven basic tastes will find combinations of those eleven. The point is, the data themselves will not settle who is right, because the need for training means you have to decide on the basic number before you ever start collecting data from your subjects!

        What finally really helped bury Structuralism, however, was the imageless-thought controversy. This controversy resulted in heated debates about who was a careful experimentalist and who was sloppy, and people finally got tired of all the arguing back and forth. The debate started when a colleague of Wundt's, Külpe (who was at the University of Würzburg), asked a very simple question: Why not use controlled introspection on higher mental processes involving thinking and reasoning, rather than simpler processes like perceiving? Külpe and his group (known as the Würzburgers) discovered something very important: In a lot of cases, their subjects reported no imagery whatsoever before becoming aware of the answer! Or to put it another way, there was imageless thought. Or to put it another way, something seemed to be going on that didn't involve the three types of mental contents the Structuralists posited. Or to put it another way, it appeared that thinking could be unconscious.

        If your definition of psychology is the science of what's in consciousness and it turns out that many of the important things we do are unconscious, then there is a serious problem (1) with the definition, and (2) with an experimental methodology that restricts itself just to conscious processing. Needless to say, Wundt and Titchener were not pleased with this series of results. Titchener's group, in particular, replicated a number of experiments conducted by the Würzburgers and claimed that careful analysis did indeed show evidence of imagery. Külpe's group redid the experiments and claimed that Titchener's group was simply wrong. There was a lot of heated arguing back and forth, and it seemed that no one was getting anywhere convincing anyone else. So, the time was right for someone to step in with an alternative view that promised to settle these problems once and for all. That person was John Watson, and that view was Behaviorism.
 
 
        The philosopher and historian of science Thomas Kuhn argues strongly that science occasionally goes through revolutions in which one paradigm overthrows another. A paradigm in his sense partly represents a complex series of assumptions about what to study, how to study, and what counts as an explanation. It of course includes the types of philosophical assumptions we have been discussing above. A point that Kuhn stresses is that revolutions tend to occur in conditions of crisis, when anomalies have arisen, and there appears to be no way of resolving those anomalies. An anomaly is an unexpected result that doesn't seem to fit the theory. The imageless-thought controversy was one such anomaly, and with the debate over this and the validity of the method of controlled introspection, Structuralism indeed was in a period of crisis. However, one more feature is needed for a revolution to occur, and for a new paradigm to take over. And that is that there has to be an alternative paradigm that seems to hold out the promise of resolving these critical issues. So long as there is no alternative, people will have no choice but to continue working within the confines of the old paradigm. Unfortunately for Structuralism, Watson's Behaviorism cut the Gordian Knot of its problems by denying that psychology should have anything to do with mental events.

        A new paradigm of science basically (and sometimes implicitly) starts out with a new definition of what its scientific area is all about. Based in part on the then-recent work of Ivan Pavlov on Classical Conditioning and of Edward Thorndike on the effects of rewards and practice (Instrumental Conditioning), Watson proclaimed psychology the science of behavior, thus founding the school of psychology known as Behaviorism. The goal of psychology in this new field was no longer to develop the principles of mental chemistry and describe the structures of the mind; instead, the goal was quite simply the prediction and control of behavior. According to this view, when you can demonstrate that you are able to control what happens, then you obviously have found the cause of a given piece of behavior, and thus have explained why it occurs. At the same time, of course, you can predict the occurrence or non-occurrence of that piece of behavior as depending on whether the cause is present or not.

        As I have mentioned above, Watson was a comparative psychologist, a person who studied animal behavior. So, he was less than enthused about being regarded as a second-class citizen in the world of scientific psychology. In addition, he claimed not to have any imagery whatsoever, so the whole baggage of controlled introspection and description of images struck him as nonsense. And, coming out of an Empiricist tradition, Watson also strongly argued for a positivist approach to science. In his interpretation of Positivism, science could only legitimately deal with public events, observations and facts that could be verified independently by anyone. The problem, according to Watson, was that so-called private events like phenomenological reports of images and sensations could not be verified: If you claim to be seeing a certain image or experiencing a certain sensation or feeling, how do I know that you are an accurate observer of your own mental life, or a truthful reporter of it? So, in one fell swoop, Watson rid psychology of the problems the Structuralists had by denying that mental events were a legitimate part of scientific theory. (In fact, he argued that there were no such things as mental events in the first place.)

        But if you don't study mental events, then what is left? The answer is the behavior of an organism: its physical movements, as characterized by muscular actions. These movements can also be described in terms of basic atoms (the assumption of atomism) that combine to give more complex movements. Watson termed these basic atoms responses.

        But what causes a response, either simple or complex? When you observe me make the response of waving my arm, what is the explanation? It is critical to understand that Watson denied that mental events could be used as explanations of physical responses. That is, it would not be legitimate to say that my arm motion was caused by my wanting to wave "hello" to you, because that event, again, is a private event whose existence cannot be verified by an outside observer. Thus, causes of responses must themselves be physical, and must reside in the outside world, the physical world. Watson termed these causes stimuli. Essentially, we can describe a stimulus in terms of physical energies that impinge on our sense receptors, and cause responses in consequence. The goal then becomes one of determining the conditions under which a stimulus will come to control a response. For this reason, Behaviorism is sometimes called stimulus-response psychology (or S-R psychology, for short).

        How do stimuli come to control responses? The basic principle Watson adopted was that of association, in particular, the principle of contiguity (from Aristotle). Under certain circumstances, an association will form that will result in a stimulus triggering a response that it hadn't triggered up until then. The forming of that association is what learning is all about. Whether that association occurs between two stimuli (as has sometimes been claimed in Classical Conditioning) or a stimulus and a response (as has sometimes been claimed in Instrumental and Operant Conditioning), it requires that the two events to be associated occur at about the same time. So, the major experimental techniques for predicting and controlling behavior involved the formation of associations. Classical and Instrumental/Operant Conditioning were the methodologies of the Behaviorist.
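
        To make the contiguity principle concrete, here is a minimal sketch of how repeated pairings might be tallied. This is purely my own toy illustration, not Watson's (or anyone's) actual model; the increment and threshold values are arbitrary assumptions chosen only to show the logic of a stimulus gradually coming to control a response.

# Toy sketch of learning by contiguity (illustrative only, with made-up numbers):
# pairing a previously neutral stimulus with a response-producing event
# strengthens an association until the stimulus alone triggers the response.

ASSOCIATION_INCREMENT = 0.2   # arbitrary gain per contiguous pairing
RESPONSE_THRESHOLD = 0.5      # arbitrary strength needed to trigger the response

def association_after(pairings: int) -> float:
    """Association strength after a given number of contiguous pairings."""
    strength = 0.0
    for _ in range(pairings):
        # the two events occur at about the same time, so the link is strengthened
        strength = min(1.0, strength + ASSOCIATION_INCREMENT)
    return strength

def triggers_response(strength: float) -> bool:
    """Does presenting the stimulus by itself now produce the response?"""
    return strength >= RESPONSE_THRESHOLD

for n in (0, 1, 3, 5):
    s = association_after(n)
    print(f"{n} pairings: strength {s:.1f}, response triggered: {triggers_response(s)}")

        Real conditioning, as later chapters will show, is considerably more complicated than a simple tally, but the sketch captures the Behaviorist framing: the cause of the new response lies entirely in the history of stimulus pairings, not in any mental event.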

        Now that we have presented some of Watson's basic ideas, we can consider how Behaviorism promised to get around the problems the Structuralists were having. It got rid of the seemingly inexhaustible imageless-thought debate by getting rid of thought: If there is no such thing as thinking that involves mental events, then the whole issue of whether thinking is conscious or not simply disappears. It also got rid of the flawed-methodology issue by simply disallowing controlled introspection. I've always found this a curious casualty of the Behaviorist revolution. Your verbal descriptions, for a Behaviorist, certainly count as behavior, and the objects to which you react certainly count as stimuli. In this sense, so-called introspective reports could have been assimilated to a Behaviorist study of perception. But in their enthusiasm to purge science of anything having to do with Structuralism, the early Behaviorists ignored many of the useful (and replicable!) results the Structuralists had obtained (even though replicable results presumably meet the criterion of prediction and control). Behaviorism also got rid of the 'Cartesian Dualism' problem: Behavior can be observed, described, and studied in any organism. Kids, rats, and insects were thus as valid subjects for psychology experiments as adults. Moreover, non-humans became the preferred subjects for many experimentalists. Humans supposedly come to the laboratory with many pre-existing associations; they have a history of learning that the experimenter cannot know completely. But animals can be reared from birth in the lab, and their history of learning can be precisely controlled (at least in theory!). Thus, the hope was that the basic principles of learning that apply to all species could be discovered by studying non-humans. The result was a proliferation of studies on rats and pigeons (as you will discover for yourself from glancing through this or any other text on learning).

        Behaviorism dominated American experimental psychology for about 50 years. It obviously came out of the Empiricist kernel, with its emphasis on associationism, atomism, positivism, and environmentalism. In Instrumental and Operant Conditioning, where an association strengthened or weakened as a result of reinforcement or punishment, it also adopted aspects of Bentham's utilitarianism, and combined these aspects with a Darwinian notion that behavior that was adaptive would survive, and behavior that was mal-adaptive would not. It was a monist position emphasizing strict materialism. And also in contradiction to Cartesian Dualism, it adopted an assumption called general process theory: There is one general process of learning (namely, association), and all things learn by means of that process (this also represented an influence of Darwin's work suggesting continuity and similarity among different species). Thus, while some species may be capable of more complex associations than others, the principles of learning ought to be the same everywhere, so that studying one species informs you about the learning of others. A final assumption in the Behaviorist kernel that wraps together aspects of some of the previous assumptions is the assumption of black box psychology: People and animals are to be regarded as sealed black boxes. We don't and can't know what goes on inside those boxes. What we can do is manipulate inputs to those boxes (that is, present stimuli), and observe the outputs of the boxes (that is, their behavior or responses). The task is then to determine what types of inputs trigger what types of outputs, and why. This assumption is sometimes more formally referred to as peripheralism: We must restrict ourselves to the 'outside' or periphery, since that is all we can know. Peripheralism applied to living things meant going no further than the incoming sensory signals constituting a stimulus, and the outgoing muscular signals resulting in a response.

        If Structuralism ran into problems, so, eventually, did Behaviorism, although there were some very early signs that Watson's radical position went too far. One of the claims Watson made was that our experiences of thinking were really the result of peripheral muscle movements in the larynx, rather than mental events. That is, Watson at one point claimed that thinking was simply very quiet speech (subvocal speech), involving tiny responses, but of the same sort that we would make in speaking out loud. Indeed, he conducted a number of experiments in which he tried to measure tiny throat movements when people were supposedly engaged in thinking.  In an attempt to test this claim of Watson's, one psychologist, Smith, had himself injected with curare, a South American poison that works by paralyzing the voluntary muscles (you die from curare because you can no longer breathe on your own). Smith, of course, had to be kept on a respirator during the experiment. But when he came out of the paralysis, he reported being able to understand the questions he was being asked at the time, and also being able to formulate answers. Since his laryngeal muscles (involved in speech production) were paralyzed at the time, he demonstrated that thinking was not just subvocal speech. Other later experiments have come to very similar conclusions. In particular, thinking very different thoughts does not result in large or systematic differences in laryngeal movements. Also, studies on articulatory suppression (in which people have to continually repeat one or two specific words out loud to deny them the use of subvocal articulation for some other purpose) show that we can still read and comprehend simple passages. In short, whatever thinking is, it cannot be identified strictly with tiny muscle movements in the throat.

        In the late 50s and 60s, there were a number of developments that led American psychologists back to a more cognitive approach (and even back to discussing the role of imagery in thinking). The linguist Noam Chomsky published a scathing attack on the idea that language could be learned through reinforcement and associations; he also presented strong arguments that language had to involve highly abstract rules of grammar (the rules we talked about at the beginning of this chapter, by which all proper sentences of a language can be generated). Also, a trio of psychologists, Bruner, Goodnow, and Austin, published a major book on how people form categories and concepts; their work showed that people were actively testing out hypotheses about what some concept might be like, and that these hypotheses were essentially mentalistic rules that specified a potential definition of a concept or category. People like Miller were busy exploring different memory systems, arguing that we actually had to make guesses about some of the structure inside the so-called black box if we wanted to account for the findings. And a number of people in computer science and psychology (including Miller, again) were showing that computers could be analyzed as devices that perform actions designed to accomplish a goal (a non-physical, non-present concept that could not properly be described as a stimulus hitting the sense organs and causing reflexive responses). So, the pendulum swung once more.

        But revolutions never put an old paradigm back; they always result in new ideas, and new ways of seeing and explaining. Thus, we all agree that Structuralism and similar approaches had their flaws, and were incomplete. In terms of modern-day psychology, most people (there are still Behaviorists around, of course!) adopt something from both the Structuralists and the early Behaviorists. From the Structuralists comes a concern with the mental level, and a belief that any adequate explanation of our actions and learning has to include this level. At the same time, Watson and his group have left us with a belief in methodological behaviorism, an offshoot of his insistence on positivism. Methodological behaviorism is the claim that mental-level events can be studied only if they have clearly specified observable consequences. Thus, we use methodological behaviorism to evaluate the existence of otherwise private events. And sometimes, people even come up with what appear to be strongly associationistic, behavioral theories, except that the associations occur at a central rather than peripheral level (see the chapter on connectionism).

        From an historical perspective, however, the work that arose out of Behaviorism sought behavioral-level explanations, and came to be known collectively as learning theory. In contrast, the work that arose out of Structuralism sought representation-level explanations, and came to be known collectively as studies in memory. It is only relatively recently that these two areas have had a happy reunion in which theorists use the same sort of vocabulary and principles to explain findings in each.
 

IV. Learning & Memory

         Perhaps surprisingly, we have so far avoided giving a general definition of learning or memory. But you should know by now that definitions depend on one's a priori cluster of philosophical assumptions, on the level of explanation which strikes one as particularly appropriate, and indeed, on what the prevailing paradigm is. Not surprisingly, then, the definitions of these terms have changed over the years. Indeed, crack open any text on learning and memory, and the odds are that you will collect a slightly different definition.

        Before getting to a given definition of learning, I think it important to make explicit one of my own assumptions in my particular cluster: I think learning is a mechanism for adapting to novel situations. Because of the stress on novelty, I will exclude other mechanisms of adaptation such as reflexes (though learning can certainly be based on these: see the next several chapters), and instincts (though instincts can certainly include a component of learning). Reflexes and instincts have become part of an organism's genetic makeup through a long process of evolution: Organisms that had a predetermined tendency to react in a certain way to certain events lived longer, had more offspring, and so passed these tendencies on to the species's gene pool at a greater rate.

        Reflexes will prove important in Classical Conditioning: They are simple motoric responses triggered without learning by certain stimuli. Typically, all normal members of a species exhibit the same reflex (thus, while some reflexes may be species specific in that they are limited to a given species, a given reflex is also species characteristic: found in other members of the species, and part of what it means to be a member of that group). A newborn infant's sucking and rooting reflexes are examples in humans (and many other species) of reflexes involving nourishment critical for survival. Another reflex shared by a number of species is the orienting reflex, involving an immediate change of attention (orientation of the visual and auditory receptors) to any sudden and dramatic visual or auditory change: Sudden changes may signal the opportunity for obtaining food, or for avoiding becoming food. Reflexes are adaptive by definition, but limited in their utility. It takes a long time for a species to acquire a reflex, and there are a number of stimuli and situations out there for which a species will have no reflexes.

        Instincts are also species characteristic (and in some cases, species specific). Unlike reflexes, instincts involve more complex patterns of responses. But, they are part of the species's genetic makeup, and thus triggered automatically by certain stimuli or events. When a behavioral pattern of this sort is set off, it tends to go to completion, even when it is no longer appropriate. In some sense, these types of patterns represent a dance consisting of a series of steps, with each triggered by the previous step. Male sticklebacks (a type of fish), for example, have a complex routine they go through to woo and fertilize the eggs of a female stickleback. The male will tend to go through the whole routine even when the female has been removed. However, these complex patterns do sometimes exhibit learning influences. Thus, birds who have not been exposed to the songs of other birds will still sing when they reach sexual maturity (singing serves a number of purposes, including attracting a mate and identifying a foraging area), but their songs (complex patterns of notes) will be deficient, and thus will not be successful in their function. Or to take another example involving a phenomenon termed imprinting, many species of birds (chickens, peacocks, and geese, for example) will follow the first large moving object that comes by within a few hours of hatching. When these imprinted birds reach sexual maturity, they display mating behaviors directed towards other instances of that first object. In imprinting, it appears to be the case that an animal is rapidly acquiring information about the stimuli that are appropriate as releasers for some of its instinctive behaviors. Here, then, is a situation where the response sequence is pre-programmed, but a type of environmental influence or learning can determine the range of stimuli or triggers for that sequence.

        In the case of bird song and imprinting, there appears to be a genetically determined window of opportunity during which this learning may take place, a critical period. If the learning does not occur within that window, then future instinctive behavior will be seriously affected. Normally, birds of a species hear other birds, and so are exposed to their songs; normally, the first large moving object goslings see is the mother goose, so they imprint on the proper species. But as the examples above suggest, things can sometimes go very wrong. There has been some suggestion of critical periods in humans for both attachment behavior, and for normal development of language behavior.

        How do we know that something is genetically programmed, rather than due to learning or interaction with the environment? One clue is that the behavior is relatively inflexible, although this is not an infallible clue. As we will see in a later chapter, a type of learning that results in automatization can also yield inflexible behavior. A more direct approach is to deny the animal any opportunity of learning the behavior pattern. If the pattern still occurs, then we have evidence that it is unlearned. An example here involves an experiment by Eibl-Eibesfeldt, who raised baby squirrels on liquid diets in carpeted environments. When they were first exposed to nuts, they went through the same routine of attempting to bury the nut in front of some large object that squirrels in their natural habitats do. Given the flexibility of much human behavior, and the immorality of conducting experiments in which humans are raised in isolation, the issue of whether adult humans exhibit instinctive behaviors is controversial.

        But instincts and reflexes do not allow reacting to novel stimuli. Thus, while they are essentially time-proven, useful means of dealing with certain stimuli, our world is composed of many situations and events for which we have no built-in reactions. In this sense, a mechanism like learning, one that allows the individual over the course of a lifetime to acquire new ways of reacting to the environment, should prove useful. Moreover, to be useful, such a mechanism must operate relatively rapidly, unlike the slow adaptations of a species. Learning to avoid a dangerous situation should occur with some haste, since otherwise the organism may well succumb to the danger before the learning is ever of any use.

        Moreover, we can add that learning can involve dealing with novelty in two ways. One is that you learn a new way of dealing with some sort of stimulus or event, so that on being exposed to it again, you perform whatever action you have learned to do. We can call this reactive responding, for short. But particularly in humans, there is also proactive responding, the ability to anticipate how you should respond to a novel stimulus before actually physically encountering it. Based on past experiences with somewhat similar situations, we can run mental 'scenarios' involving new stimuli in which we mentally pre-test a response to see what will happen. This ability to run mental models is a very powerful device that allows us to discover potentially dangerous responses before they get us into danger in the first place. But, this also leads us towards the area of memory. For me, learning requires a relatively permanent change in memory, so that I view learning and memory as inextricably linked. Within these constraints, I am willing to adopt the definitions of learning and memory offered by John Anderson in his text (1995, pp. 4-5). Thus,

Learning is the process by which relatively permanent changes occur in behavioral potential as a result of experience.
and:
Memory is the relatively permanent record of the experience that underlies learning.
His definition of learning, in particular, does not radically differ from those given by others.

        So, what is important about the above (and similar) definitions of learning? Textbooks like to point out three aspects of such definitions in particular. One is that whatever learning involves, the result should be somewhat long-lasting. This ties in to my insistence that learning be adaptive. For learning to be useful, it must be something that can potentially apply to future situations, and that, in turn, requires that learning not dissipate. Eventually, new learning may alter or displace old learning, or things may indeed be forgotten. But we would not want to label as learning some sort of change in knowledge or skills that is only part of our immediate consciousness, and is forgotten when we shift attention to something else (though we might want to label this as a memory of some sort that does not become permanent). We have all had the experience of very rapid forgetting out of short-term or working memory, and we may want to exclude from our definition something that never made it past short-term memory (assuming there is a distinction between short-term and long-term memory: see the chapter on memory). So, some relatively long-lasting change in neurophysiology must underlie true learning, and this change must also involve a change in long-term memory.

        Second, learning involves an alteration in behavioral potential, with stress on the word potential. What is meant by this is that new learning need not result in actual observed changes in behavior. Learning is different from performance, the behavior we can observe in any given situation, or over any given interval of time. Performance is affected by a number of variables, including the motivation of an organism to perform (a satiated rat, for example, has little reason to run a maze for some more food), and the appropriateness of the task to what was learned. With respect to the latter, if neither I nor anyone else ever tests you on Külpe and the Würzburgers, then your performance will likely give no clue or hint that you've learned about what he and his group did. That does not mean you didn't learn; it just means I didn't give you an appropriate test. So, we specify potential behavior with the belief that we can come up with appropriate tests and appropriate motivation, if called on, to assess whether you in fact have learned something. But we also acknowledge that in the absence of appropriate tests and motivation, your behavior may not be the best guide to what you've learned.

        Third, all definitions specify that learning is a result of experience. Many things besides experience can have temporary or relatively long-lasting effects on the potential to respond. In terms of temporary effects, for example, fatigue, illness, lack of nourishment, etc., obviously affect what you are capable of doing at the moment; but clearly these are not learning. And in terms of long-lasting effects, maturation, aging, and loss of limbs also affect your potential to respond; but again, we would not wish to describe these factors as learning.

        So, the definition is a bit vague and negative, in the sense that it mostly specifies what learning is not. Perhaps we should make it a bit more positive by stating that learning involves the forming of new connections that would otherwise not have formed had the immediate experiences, whether internal (as in thinking and reasoning about a problem) or external (involving outcomes, for example), been different.

        One more point is critical, and should serve as a theme for this text: There are many different types of learning and memory that we can talk about and explore. Behavioral potential (to use Anderson's phrase) is highly ambiguous, and can refer to a number of very different situations. In particular, people like Larry Squire have claimed that there are a number of different memory systems corresponding to these different types of learning. We will revisit some of these issues regarding memory systems and types of learning in a later chapter on memory. For now, let me simply assert that I take general process theory to be wrong: There are many processes of learning, even in something as simple as Classical Conditioning. Not all species will have all types of learning. Indeed, one of the most important issues that learning theorists will face is whether any species besides the human is capable of declarative learning.

        Let us briefly return to the issue of levels of explanation. At the representational level, we can regard learning as involving the acquisition of knowledge. Your task in this class will be to increase your knowledge of the areas of learning and memory, but that will mean being able to understand and derive implications from various theories and principles, and not just storing away new facts that you come across in this text or in lectures. (Or to tie this in to the beginning of this chapter, I will expect you to be like a scientist, capable of generating predictions from what you are learning, and capable of evaluating this information.) At that level, you are acquiring declarative knowledge, knowledge that you can state. Declarative knowledge is also sometimes called know-that knowledge, because you can put the phrase I know that in front of it: You should now know that Külpe's group claimed there was unconscious thought. Do animals have declarative knowledge? You will see that many theories of learning are now couched in terms of representational-level explanations, and talk about the predictions animals seek to make about potentially significant biological events (these are often referred to as contingency or cognitive theories).

        In contrast, at a behavioral level, we can regard learning as involving the acquisition of novel behaviors, whether this means adding a new response to the repertoire of responses you already have, or taking an old response and hooking it up to a new stimulus. Some responses at the behavioral level in humans appear to involve procedural knowledge or learning. In this type of learning situation, we acquire a skill (often, but not always, motoric) that tends to be executed automatically, with relatively little need for conscious processing or attention (as opposed to declarative knowledge). Procedural knowledge is sometimes called know-how knowledge: To take the classic example, you know how to ride a bike, but you do not have the declarative knowledge that would allow you to explain that skill to a young child trying to learn. Some theorists claim that procedural knowledge must first start out as declarative; regardless of whether that claim holds up in all cases, these two types of knowledge do seem fundamentally different, and often display different time courses of acquisition (with proceduralization of a complex skill taking much longer than learning of a complex piece of knowledge, assuming that we can compare levels of complexity in the two). In speaking of behavioral potential, then, we mean to include both types of learning.

        In any case, you will see some non-cognitive associational or contiguity theories of learning, as well as the cognitive or contingency theories. And it may well be the case that both types of theories are needed for how different species learn, or indeed, for how the same species learns in different situations. Historically, the people who studied learning came out of the Empiricist tradition and concentrated on the study of animal learning. Due to positivism, they took the behavioral-level explanation as being the only legitimate explanation, so they built theories of how organisms form associations among stimuli and responses. In contrast, the people who studied memory came out of the Rationalist tradition and concentrated on humans. They felt that behavioral-level explanations were simply inadequate, and thus they focused exclusively on representation-level explanations. The result resembled two paradigms in which people had nothing in common with one another, despite the obvious interplays of learning and memory. But this exclusive focus on one or the other level is probably counterproductive. Both may be needed, depending on the situation, and it is even conceivable that both have a role to play in the same situation. I like to work complex puzzles like the metal puzzles in which you have to figure out how to remove a certain ring. On some of the very complex puzzles, I have the feeling that I have solved them the first few times by acquiring the right moves without understanding why they work (a behavioral or associational level); but thereafter, I become able to 'see' how the puzzle is put together, and to understand why certain moves have to be done before others (a representational level). That symbolic information, in turn, often helps me with a new puzzle. How to tease these levels apart and understand their interplay is part of what I take the psychology of learning and memory to be about.
 
 

 

Partial Bibliography

Amsel, A. (1958). The role of frustrative nonreward in noncontinuous reward situations. Psychological Bulletin, 55, 102-119.

Anderson, J.R. (1995). Learning and memory: An integrated approach. NY: Wiley.

Bruner, J.S., Goodnow, J.J., & Austin, G.A. (1956). A study of thinking. NY: Wiley.

Chomsky, N. (1957). Syntactic structures. The Hague, Netherlands: Mouton.

Chomsky, N. (1959). Review of Skinner's Verbal Behavior. Language, 35, 26-58.

Freud, S. (1950). The interpretation of dreams. NY: Random House.

Gibson, E.J., & Walk, R.D. (1960). The "visual cliff." Scientific American, 202(4), 64-71.

Kuhn, T.S. (1970). The structure of scientific revolutions (second edition). Chicago: University of Chicago Press.

Marr, D. (1982). Vision: A computational investigation into the human representation and processing of visual information. San Francisco: W. H. Freeman.

Miller, G.A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81-97.

Mowrer, O.H., & Jones, H. (1945). Habit strength as a function of the pattern of reinforcement. Journal of Experimental Psychology, 35, 293-311.

Titchener, E.B. (1909). A text-book of psychology. New York: Macmillan.

Watson, J.B. (1913). Psychology as the behaviorist views it. Psychological Review, 20, 158-177.
 
 

 

Additional:

Three books you might enjoy perusing (two on the History of Psychology and one on the Scientific Revolution):

Butterfield, H. (1957). The origins of modern science. New York: The Free Press.

Hergenhahn, B.R. (1992). An introduction to the history of psychology (second edition). CA: Wadsworth.

Leahey, T.H. (1997). A history of psychology (fourth edition). NJ: Prentice-Hall.
 
 
 
 

Some Relevant Internet Sites (but there are many more out there!):

On Philosophy:

  The Ism Book
  A Dictionary of Philosophical Terms and Names
  Internet Encyclopedia of Philosophy
  Stanford Encyclopedia of Philosophy

On the History of Psychology:

  Classics in the History of Psychology
  History of Psychology Timeline
  History of Psychology
 

 
 

1. © 1998 by Claude G. Cech