Computer Science 371: Cognitive Science
Assistant Professor of Computer Science Douglas Blank
What is intelligence?
What is consciousness?
How do neurons give rise to thinking?
Can a computer think?
Tasks that require humans to concentrate on the steps needed to produce a certain result, such as filing a 1040 form, are easy to program on a computer. This is not the case with the daily decision-making activities that come so naturally that we don't even think about the mental processes involved.
If a computer could simulate the latter, would it have intentions, and would those reside in the machine's hardware or in its software?
These are some of the questions examined in this introduction to cognitive science, the interdisciplinary study of intelligence in mechanical and organic systems. Students explore topics in psychology, philosophy, computer science, linguistics, neuroscience, and mathematics. (No prior knowledge or experience with any of the sub-fields is assumed or necessary.)
During the course, students use Pyro (an acronym for PYthon RObotics), an NSF-funded programming environment created by Assistant Professor Douglas Blank that allows beginning students to experiment with several different types of robots and robotic "brains."
For the first lab project, each student must design a brain in Pyro to make a robot perform an interesting behavior and then demonstrate it to friends without letting them see how the robot's brain actually works. As the robot is running, observers are asked to describe the behavior and then guess how the robot works. Students note what kinds of mental structures or mechanisms (memory, desire, fear, or randomness, for example) their observers may or may not attribute to the robot.
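A controller of the kind students build for this lab can be sketched as a function that maps sensor readings to motor commands. The sketch below is illustrative only, with hypothetical names and thresholds; it does not use Pyro's actual API. It shows why observers may attribute "fear" or "caution" to what is really just a pair of comparisons:

```python
def avoid_brain(left_sonar, right_sonar, too_close=0.5):
    """Map two sonar readings (in meters) to a (translate, rotate) command.

    A crossed, Braitenberg-style wiring: an obstacle sensed on the
    left steers the robot right, and vice versa. Observers watching
    the robot often describe the result as fearful or cautious,
    though the rule is only a few comparisons.
    """
    if left_sonar < too_close and right_sonar < too_close:
        return (-0.2, 0.5)   # boxed in: back up while turning
    if left_sonar < too_close:
        return (0.3, -0.5)   # obstacle on the left: veer right
    if right_sonar < too_close:
        return (0.3, 0.5)    # obstacle on the right: veer left
    return (0.5, 0.0)        # clear path: go straight
```

Called once per control cycle with fresh sensor values, a rule this small can produce wandering, obstacle-avoiding behavior that looks purposeful from the outside.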
Braitenberg, Valentino, Vehicles: Experiments in Synthetic Psychology (MIT, 1984).
Chalmers, David, "The 'Matrix' as Metaphysics" (a paper written for the philosophy section of the official "Matrix" website).
Dennett, Daniel, "Where Am I?," from Brainstorms: Philosophical Essays on Mind and Psychology (MIT, 1978).
Hofstadter, Douglas, "A Coffeehouse Conversation on the Turing Test" (1981).
Hofstadter, Douglas, "How Could a COPYCAT Ever Be Creative?" (1994).
Searle, John R., "Minds, Brains, and Programs," Behavioral and Brain Sciences 3 (3): 417-457 (1980).
Turing, A. M., "Computing Machinery and Intelligence," Mind 59: 433-460 (1950).
This year's course benefited from having two experts in analogy-making computer programs on campus, Blank himself and James B. Marshall, who is visiting Bryn Mawr this year as a research associate to learn about robotics from Blank's team (see sidebar).
Marshall lectured to the class about Metacat, a computer program he developed that has the ability to watch its own behavior and compare its answers.
Assistant professor of computer science at Pomona College, Marshall worked with Douglas R. Hofstadter, author of the Pulitzer Prize-winning book Gödel, Escher, Bach: An Eternal Golden Braid, as a graduate student at Indiana University. Blank and Marshall did their doctorates at IU at the same time, but Blank approached analogy from a different perspective, through neural networks.
Metacat is an extension of Copycat, which was developed by Hofstadter. Both are part of an ongoing research project on computational modeling of the psychological processes underlying creativity. Copycat and Metacat explore analogies between groupings selected from 26 abstract objects, represented by the lowercase letters of the alphabet: for example, "abc is to abd, as xyz is to ?." Only three relations among the 26 objects are meaningful: sameness, predecessorship and successorship; a is understood to have no predecessor and z no successor. (One possible answer to the question above is wyz. Why? What else can you think of?)
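The letter-string microworld can be made concrete in a few lines. The sketch below is not Copycat's actual implementation; it simply encodes the literal rule behind "abc is to abd" and shows why xyz resists it:

```python
def successor(c):
    """Successor relation on the 26 letters; z has no successor."""
    return None if c == "z" else chr(ord(c) + 1)

def apply_rightmost_successor(s):
    """Apply the literal rule 'replace the rightmost letter by its
    successor', the obvious reading of abc -> abd. Returns None when
    the rule cannot apply, as for xyz, where z has no successor."""
    new_last = successor(s[-1])
    return None if new_last is None else s[:-1] + new_last
```

For "abc" the rule yields "abd", but for "xyz" it fails outright, since z has no successor. That failure is exactly what forces a more creative re-description of the problem, such as reading xyz backwards to arrive at wyz.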
"Despite its apparent simplicity, this letter-string microworld harbors an exceedingly rich variety of subtle analogy problems in which many surprisingly creative and non-obvious answers are possible," Marshall said. Unlike Copycat, Metacat can store and retrieve past answers and evaluate analogies made on its own or suggested to it. "As Metacat works on an analogy problem," Marshall said, "it displays a running commentary in English of its ideas and observations about the problem and about its own 'train of thought.' "
An earlier section of the course was devoted to Blank's own "analogator" model, which trains a neural network on analogies; when it is done learning, it is able, Copycat-like, to make novel analogies on its own.
"Cognitive Science 371 is a prototype for courses in our new Computational Minor, which may be taken alongside majors in disciplines such as art history or English," Blank said. "These courses will cater to students with a wide variety of backgrounds. Topics such as geoinformatics, for example, will combine content from geology and the science of computing."
Bringing up robots
"I went into computer science because I was very interested in language, how people think, where culture comes from, and how people evolve to become who they are," said Assistant Professor Douglas Blank, who did an undergraduate major in anthropology at Indiana University before completing a B.S. there in computer science and a Ph.D. in computer science and cognitive science.
Blank became interested in robotics when he "came to believe that you have to have a body to acquire intelligence." His is a controversial view: robots that operate in the real world must evolve in the real world.
He is working with Associate Professor Deepak Kumar and Swarthmore Associate Professor Lisa Meeden to develop an artificial intelligence system that can adapt to behave intelligently within an environment without a "fitness function," that is, without being told how to behave. "Evolution itself has no 'goal' or 'fitness' for a task," Blank noted.
"In 'developmental robotics,' a term we've coined, a robot goes through phases of mental growth, with a result of increasing levels of sophisticated behavior and mental representations," Blank said. "A system interacts with the environment, and that interaction causes changes in the system that drive it to become increasingly complex. We don't want to tell it by programming what it should pay attention to, but a robot that didn't want to do anything or learn wouldn't be any good. What does it mean to make a robot want to explore? Does genuine intelligence require the awareness of one's ignorance?"
Blank is one of a group of faculty and students at Bryn Mawr who are interested in the phenomenon of emergence: systems whose wholes are greater than the sum of their parts. "A family is greater than its individual members because of the interaction between them," he said. "We think that emergence is behind learning and evolution, but we don't really understand the principles of emergence from a scientific point of view. That's because it's very difficult in an interconnected system to ascribe a particular cause to any particular effect."
He conjectures that "the more intelligent an emergent system is, the less possible it will be to understand how it works. An emergent computer model of the human brain may be equally complex and opaque. As a result, science may shift its focus from understanding how a system works to how it develops."
Although Blank's approach differs in core ways from Hofstadter's, especially in emphasizing learning, both Blank and Marshall use an "emergent" method and are working together on a paper arguing that this is the proper approach to AI.
You may order these books from the Bryn Mawr College Bookstore, whose proceeds benefit the College: Elizabeth Morris, Bryn Mawr College Bookshop, New Gulph Road, Bryn Mawr, PA 19010, 610 526 5322, email@example.com
Return to Spring 2004 highlights