Ph.D. General Examinations: General Area Exam
Examiner: Prof. Pattie Maes
Xinyu Hugo Liu


FEELING

Feelings, emotions, and sentiments have, in the history of the intelligence sciences and of Western philosophy, often been derided as secondary to cognition and intelligence. In recent decades, however, it has emerged that feelings actually play a hugely important role in cognition, and participate ubiquitously in all of its areas. This section overviews some of feeling's roles in cognition, and then discusses computational models that support them.

Feeling's meta-cognitive role. Minsky has suggested that feelings play a meta-cognitive role, participating in the control mechanisms for thinking (Minsky, forthcoming). For example, feeling fear toward a situation heightens cognition's attention to possible dangers, while feeling anger influences a person's selection of goals, such as choosing revenge over other goals. Feeling self-conscious emotions such as pride or embarrassment assists in a person's revision of personal goals and participates in the development of self-concept.

Feeling as a means of indexing memories. In the encoding of memories, feelings are an important contextual feature, as much as, if not more than, sensory features like sight, sound, and smell. We can think of feeling as the universal metadata for memory, because it applies even when sights or sounds do not. Gelernter suggests that all memories are linked through feeling, and that it is primarily navigation through memories via feeling pathways which constitutes low-spectrum, dream-like thought. He calls this effect affect linking (Gelernter, 1994).
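To make affect linking concrete, here is a minimal sketch (ours, not Gelernter's implementation) in which memories carry toy three-dimensional affect tags, and low-focus thought wanders from each memory to its affectively nearest unvisited neighbor:

    import math

    memories = {                      # hypothetical affect tags: (valence, arousal, dominance)
        "beach vacation":  (0.9, 0.3, 0.6),
        "missed deadline": (-0.7, 0.8, 0.2),
        "childhood kite":  (0.8, 0.4, 0.5),
        "thunderstorm":    (-0.4, 0.9, 0.1),
    }

    def affect_chain(start, steps=3):
        """Wander from memory to memory along the smallest affective distance."""
        chain, current = [start], start
        while len(chain) <= steps:
            candidates = [m for m in memories if m not in chain]
            if not candidates:
                break
            current = min(candidates,
                          key=lambda m: math.dist(memories[current], memories[m]))
            chain.append(current)
        return chain

    print(affect_chain("beach vacation"))
    # -> ['beach vacation', 'childhood kite', 'thunderstorm', 'missed deadline']

The resulting chain is not goal-directed; it simply drifts along feeling pathways, which is the texture Gelernter ascribes to dreaming and other low-focus thought.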

Feelings arise out of cognition. Furthering the intimate connection between emotions and cognition, Ortony, Clore & Collins (1988) theorize that emotion arises out of cognition, resulting from the cognitive appraisal of a situation. In their model, emotions are directed at, and stay associated with, events, agents, and objects. This stands in contrast to previous conceptualizations of emotions as arising out of the body rather than out of the mind and mental processes.


Representations of feelings. For this computational discourse, we switch to the term "affect," which is more popularly used in the field of affective computing (Picard, 1997). In some ways it is a cleaner word: it is less burdened with the pejorative meanings that have been imbued onto the words "feeling" and "emotion," and unlike "emotion," which is generally forced into a linguistically articulable ontology of emotional states (e.g. "happy," "sad," but nothing arbitrarily in between), "affect" can refer easily to unnamed states.

Computational representations of feelings are of two types: ontological and dimensional. Ontological models provide a list of canonical emotion-states, and the selection of the base vocabulary can be motivated by a diverse range of considerations. For example, Ekman's emotion ontology (1993) of Happy, Sad, Angry, Fearful, Disgusted, and Surprised derives from the study of universal facial expressions. Dimensional models pose emotions as situated within a space whose axes are independent aspects of affect. They carry the advantage of continuous representation, and allow the distance between two affective states to be determined with a simple Euclidean measurement. An example is the dimensional PAD model of affect proposed by Albert Mehrabian (1995), which specifies three nearly orthogonal dimensions of Pleasure (vs. Displeasure), Arousal (vs. Nonarousal), and Dominance (vs. Submissiveness).
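As a small illustration of the dimensional approach, the sketch below places a few named emotions at illustrative PAD coordinates (guesses for exposition, not Mehrabian's published values) and maps an arbitrary, unnamed affective state to its nearest label by Euclidean distance:

    import math

    PAD = {                      # (pleasure, arousal, dominance), illustrative values
        "happy":   ( 0.8,  0.5,  0.4),
        "sad":     (-0.6, -0.4, -0.3),
        "angry":   (-0.5,  0.7,  0.3),
        "fearful": (-0.6,  0.6, -0.6),
    }

    def nearest_named_emotion(state):
        """Map an arbitrary (possibly unnamed) affect vector to the closest label."""
        return min(PAD, key=lambda name: math.dist(PAD[name], state))

    somewhere_in_between = (-0.55, 0.65, -0.1)           # no single English word for this
    print(nearest_named_emotion(somewhere_in_between))   # -> 'angry'

Note that the in-between state is perfectly representable even though the ontological vocabulary has no word for it; that is precisely the advantage of the dimensional scheme.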

CREATIVITY

Creativity is perhaps the cognitive faculty most admired by people. Computationally, simple models of creativity have been developed under three paradigms: variations-on-a-theme, generate-and-test, and analogy-based reasoning. More sophisticated models, such as those in which the test procedure employs aesthetic criticism, remain largely beyond reach.


Variations on a theme. The variations-on-a-theme model represents the most conservative kind of creativity. In The Creative Process (1994), Turner models an author's storytelling process using Schankian conceptual schemas with role slots and values. The implemented system, called MINSTREL, operates in the King Arthur story domain, so examples of slot-value pairs are: "agent – knight," "patient – dragon," "action – kill." MINSTREL introduces creativity through its TRAM mechanism, which recursively mutates one slot value at a time into a variant using a simple subsumption hierarchy of types. For example, "action – kill" mutates to "action – wound" and "patient – dragon" mutates to "patient – troll," via the subsumptions "dragon is-a villain," "troll is-a villain," "kill is-kind-of hurt," "wound is-kind-of hurt." This kind of approach, while more likely to generate sensible results, is subject to the limitations of local hill-climbing: it never reaches better solutions that lie too many changes away from the original.
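A toy rendering of one-slot TRAM-style mutation (a sketch in the spirit of MINSTREL, not Turner's code) makes the mechanism plain: pick a slot whose value has siblings under the same parent in a small subsumption hierarchy, and swap in a sibling:

    import random

    is_a = {                          # tiny hypothetical type hierarchy
        "dragon": "villain", "troll": "villain",
        "kill": "hurt", "wound": "hurt",
    }

    def siblings(value):
        parent = is_a.get(value)
        return [v for v, p in is_a.items() if p == parent and v != value] if parent else []

    def mutate_one_slot(schema):
        """Conservatively vary the theme: change exactly one slot value."""
        schema = dict(schema)
        slot = random.choice([s for s in schema if siblings(schema[s])])
        schema[slot] = random.choice(siblings(schema[slot]))
        return schema

    theme = {"agent": "knight", "patient": "dragon", "action": "kill"}
    print(mutate_one_slot(theme))
    # e.g. {'agent': 'knight', 'patient': 'troll', 'action': 'kill'}

Applied recursively, such mutations explore only a neighborhood around the original story, which is exactly why the method hill-climbs locally rather than leaping to distant solutions.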

Generate-and-test. The generate-and-test paradigm allows more radical changes to be explored, and the creative solution is not tethered to a dominant theme. However, a new onus falls on the test procedure to ensure that the creative solution is both good and workable. In the literature of AI-driven art, Cohen's AARON program and Sims's genetically recombining drawing programs explore more radical mutations (cited in (Boden, 1990)). Sims's program, for example, consists of a set of distributed agents, each capable of achieving the same goal in different aesthetic ways; these agents combine and recombine in the fashion of DNA, creating completely new agent capabilities. Artwork created through some combination of agents is judged by a human (the test procedure, unfortunately, is not automated): a positive judgment promotes the responsible agents, while negative judgments decimate their ranks.
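The skeleton of such a system is simple, as the hedged sketch below shows: candidate "artworks" (here, mere lists of numbers) recombine like DNA, a judge scores them, and the scores promote or cull. The judge is a stub standing in for Sims's human critic:

    import random

    def recombine(a, b):
        # crossover: each "gene" comes from one parent or the other
        return [random.choice(pair) for pair in zip(a, b)]

    def judge(candidate):
        # stub for the human critic: pretends pieces whose parts sum to 10
        # are the most aesthetically pleasing
        return -abs(sum(candidate) - 10)

    population = [[random.randint(0, 5) for _ in range(4)] for _ in range(20)]
    for generation in range(50):
        population.sort(key=judge, reverse=True)
        survivors = population[:10]           # negative judgments cull the ranks
        offspring = [recombine(random.choice(survivors), random.choice(survivors))
                     for _ in range(10)]      # positive judgments propagate genes
        population = survivors + offspring

    print(population[0], judge(population[0]))   # best candidate and its score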


Analogy-based creativity. The paradigm of analogy-based reasoning is to identify a mapping between two unrelated domains, and then to explain aspects of the target domain by examining the corresponding aspects of the source domain. To perform analogy computationally, a common technique called structure-mapping is employed (Gentner, 1983). However, this requires that the system performing the analogy possess thorough knowledge about the source and target domains, their associated features, and the cross-domain relationships between those features. ConceptNet (Liu & Singh, 2004b), a large semantic network of common sense knowledge, is one source of such information. An example of conceptual analogy from ConceptNet is shown below (read: "war is like... fire, murder, etc."):

    war → fire, murder, ...
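A toy version of this idea (ours; not ConceptNet's actual interface) treats two concepts as analogous to the degree that they share relational features:

    knowledge = {   # hypothetical (relation, feature) sets per concept
        "war":    {("CapableOf", "destroy"), ("CapableOf", "kill"), ("HasProperty", "dangerous")},
        "fire":   {("CapableOf", "destroy"), ("CapableOf", "kill"), ("HasProperty", "hot")},
        "murder": {("CapableOf", "kill"), ("HasProperty", "dangerous")},
        "picnic": {("HasProperty", "fun")},
    }

    def analogous_concepts(source):
        """Rank other concepts by how many relational features they share."""
        overlap = {c: len(knowledge[source] & feats)
                   for c, feats in knowledge.items() if c != source}
        return sorted((c for c, n in overlap.items() if n > 0),
                      key=lambda c: -overlap[c])

    print(analogous_concepts("war"))   # -> ['fire', 'murder']

Note that "picnic" never surfaces: with no shared structure, there is no mapping to exploit.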



Aesthetic criticism. In the generate-and-test paradigm, more of the burden for a good creative solution is placed on the test routine. Typically, several aspects must be tested: 1) that the solution is well-formed, 2) that the solution has high utility, and 3) that the solution is elegant. This third criterion poses a particularly interesting challenge to computing. Fortunately, there is some computational literature on aesthetic critics.

In Sims's genetically recombining artist, the aesthetic qualities of the produced artworks had to be judged by people, further illustrating the difficulty of computing aesthetic quality. Hofstadter, however, has investigated aesthetic criticism in two projects. Using analogy, Hofstadter created a computer program called CopyCat (Hofstadter & Mitchell, 1995) capable of answering this question creatively: "Suppose the letter-string abc were changed to abd; how would you change the letter-string xyz in the same way?" A shallow solution is "xyd," but that answer is unsatisfying because it ignores the relationships between the letters, such as succession. A more subtle and aesthetically satisfying answer is "wyz," and CopyCat is capable of judging the aesthetic sophistication of its solutions by knowing which types of solutions feel more profound to people. With McGraw, Hofstadter also explores "the creative act of artistic letter-design" in Letter Spirit (McGraw & Hofstadter, 1993). Their goal is to design fonts which are creative yet aesthetically consistent across the letters. In Letter Spirit, the Adjudicator critic models aesthetic perception and builds a model of style. Here, the generate-and-test method for creativity is elaborated into what McGraw & Hofstadter call a "central feedback loop of creativity."
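Returning to CopyCat's letter-strings, the contrast between the shallow and the satisfying answer is easy to state in code. The toy below is in no way the actual CopyCat architecture (which uses parallel codelets and a slipnet); it only illustrates the two readings:

    def literal_rule(s):
        """Shallow reading of abc -> abd: replace the last letter with 'd'."""
        return s[:-1] + "d"

    def successor_rule(s):
        """Deeper reading: increment the last letter -- but z has no successor,
        so the aesthetically pressured answer mirrors the string instead and
        decrements the *first* letter."""
        if s[-1] == "z":
            return chr(ord(s[0]) - 1) + s[1:]
        return s[:-1] + chr(ord(s[-1]) + 1)

    print(literal_rule("xyz"))      # -> 'xyd' (shallow, unsatisfying)
    print(successor_rule("xyz"))    # -> 'wyz' (the more profound answer)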


CONSCIOUSNESS

Previous sections have built us up to a discussion of conscious experience. Although Minsky (1986, forthcoming) and Dennett (1992) largely regard the perception of consciousness as a mirage, a grand trick, there is an undeniable sense that, mirage or not, people feel it to be real.

Prototypes of consciousness. Consciousness is the driver of the cognitive machine, making the decisions, lending to the perception of free will, and lending coherency and persistence to the self. The classic metaphor used to explain conscious experience is the Cartesian Theatre (after Descartes, 1644): the idea that consciousness is at once the theatre, the play executing on its stage, and the audience watching the play unfold. However, this is an idealization. Gelernter would likely argue that the crispness and polish of the theatre represent only the high-focus end of the thought spectrum, which is also the home of rational thinking. At this end, thought is a serial stream whose navigation accords with our sense of what is and is not rational. As we reduce focus, we reach the middle of the spectrum, creative thought, where analogies and other divergent thoughts are occasionally pursued. Going lower still, we begin to think by simply holding images and remembrances in the mind for a while, traversing to the next memory through affect or other sensorial cues. Here, thought is more of a wandering sojourn than a path toward a goal.

The cognitive faculties of attention and intention are what allow us to gain focus over thoughts. Attention allows us to juggle the same idea in working memory while we "think" about it, giving us the perception of continuity and persistence of thought. Our ability to perceive the intentions of others, the directedness and aboutness of their actions, folds back symmetrically onto ourselves, making us conscious of our own ability to intend.


Our ability to possess and manipulate our self-concept also figures into conscious experience. When we attend to our self-concept, and compare that to our previous self-concepts through remembrance, we experience continuity of self. Self-concept and self-representation are seemingly unique to our own species.

Architectures of the conscious mind. If we combine the ideas about the self as composed of instinct and reaction, the ability to juggle thoughts, and the ability to possess and manipulate a self-concept, we arrive at a common architecture for the mind proposed by Freud (1926), Minsky (forthcoming), and Sloman (1996). Freud called his three tiers of the human psyche the id, ego, and superego, while Minsky and Sloman call their layers reactive, deliberative, and reflective (Minsky also explores an elaborated six-layer hierarchy in The Emotion Machine). The reactive layer is common to all living creatures, while deliberation and reflection are seemingly found only in humans. Deliberation requires the juggling of thoughts with the help of attention, and the fact that there exist crisp thoughts to juggle in the first place is perhaps owed to the presence of language, which is socially developed. This raises the question: what would people be without sociality? Would thought be possible in the same way? Would consciousness? Reflection is a special kind of deliberation which involves thinking about and manipulating representations of the self and of other people's minds.
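A skeletal three-layer agent in this reactive-deliberative-reflective style might look like the sketch below; it is illustrative only, not Minsky's or Sloman's actual design:

    class LayeredAgent:
        def react(self, stimulus):
            """Reactive layer: hard-wired stimulus-response, shared with all creatures."""
            reflexes = {"loud noise": "startle", "food": "approach"}
            return reflexes.get(stimulus)

        def deliberate(self, options):
            """Deliberative layer: juggle candidate plans held in working memory."""
            return max(options, key=lambda plan: plan["expected_utility"])

        def reflect(self, plan, outcome):
            """Reflective layer: think about one's own deliberation and revise it."""
            if outcome == "failure":
                plan["expected_utility"] *= 0.5   # trust this plan less next time

    agent = LayeredAgent()
    plans = [{"name": "ask politely", "expected_utility": 0.7},
             {"name": "demand loudly", "expected_utility": 0.4}]
    chosen = agent.deliberate(plans)
    agent.reflect(chosen, "failure")          # reflection revises the deliberator
    print(agent.react("loud noise"), chosen)  # -> startle {'name': 'ask politely', ...}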

Another way to think about consciousness, computationally, is as a meta-manager of other processes, called demons. This is Minsky's idea of a B-Brain capable of observing A-Brain processes (1986). Selfridge's Pandemonium system (1958) originated the demon-coordination idea in the computational literature, an idea which has since evolved into the problem of distributed agent systems. In Pandemonium, each demon shouts when it can solve a problem, and the demon manager selects the demon with the loudest voice; if a demon then fails at the task, its voice carries less weight with the manager in the future. Sloman conceptualizes the demon-coordination problem similarly. In his scheme (1996), concurrent goal-directed processes run in parallel, each indexed by its function, while a central process coordinates among them, resolving conflicts and deciding, in the face of resource shortages, which processes to keep and which to end.
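A minimal demon manager after Selfridge's scheme (a sketch with competence stubbed out, not the original Pandemonium) looks like this:

    import random

    class Demon:
        """A specialist process that shouts when it thinks it can solve a problem."""
        def __init__(self, name, voice=1.0):
            self.name, self.voice = name, voice
        def shout(self, problem):
            return self.voice              # a real demon would inspect the problem
        def attempt(self, problem):
            return random.random() < 0.6   # stubbed chance of actually succeeding

    def solve(demons, problem):
        loudest = max(demons, key=lambda d: d.shout(problem))
        if loudest.attempt(problem):
            loudest.voice *= 1.1           # demons that deliver grow louder
        else:
            loudest.voice *= 0.5           # a failed demon's voice counts for less
        return loudest.name

    demons = [Demon("edge-finder"), Demon("letter-guesser"), Demon("word-guesser")]
    for _ in range(5):
        print(solve(demons, "recognize the letter A"))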


HOMEOSTASIS

It may seem a bit funny to talk about homeostasis in regard to cognition and the mind, as the word is usually applied to phenomena such as the body's regulation of its temperature or the balancing of ecological systems. However, the mind can also drift toward trouble, and that drift needs to be held in check. If the mind begins to take on cognitive baggage, that baggage needs to be purged; if the mind is tense, it needs to be relaxed; and if certain kinds of errors are frequently committed, they need to be corrected.

Expunging the baggage of failure. When one fails at achieving a goal, the result is not only emotional disappointment but often also cognitive baggage. That is to say, the failure remains in our thoughts, it can become distracting, and the memory and emotions of the failure may recur to disrupt our lives. Sometimes we feel the failure recurring but have lost an exact pointer to its cause; Freud calls this class of traumatic memories repressions (Freud, 1900). As the amount of baggage increases, it becomes a burden. Luckily, there are cultural and personal mechanisms for garbage-collecting this baggage. Freud identified night dreaming as one such mechanism, integrating a day's worth of ideas into memory and discarding the rest. Daydreaming is another. Mueller, in fact, implemented a computational system called DAYDREAMER (1990) which explored the utility of daydreams. Among the purposes Mueller identified, one relates to emotional regulation: if DAYDREAMER has suffered a recent failure and is bothered by it, it can construct a daydream about an alternate, more desirable ending, and experience the pleasures of that ending even though it never actually happened. Sometimes it is just a matter of "getting it out of your system," and imagining the alternate ending is enough to satiate those emotional needs.
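A hedged sketch of this regulatory loop (inspired by Mueller's account, not his implementation) might look like the following, where imagining the wished-for ending discharges part of the stored distress:

    def daydream_alternate_ending(failure):
        """Rewrite the episode with the desired outcome and reduce its distress."""
        alternate = failure["episode"].replace(failure["outcome"], failure["wish"])
        failure["distress"] *= 0.5     # assumed: imagined pleasure partially satisfies
        return alternate, failure["distress"]

    failure = {"episode": "asked for a raise and was refused",
               "outcome": "was refused", "wish": "got it",
               "distress": 0.8}
    print(daydream_alternate_ending(failure))
    # -> ('asked for a raise and got it', 0.4): the baggage weighs less afterwards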


The catharsis of laughter. For Freud, laughing at jokes represents a liberation of the repressed (1905). He found that jokes often cover subjects which are cultural taboos, topics not kosher to discuss in any earnest context because of their unseemliness or frivolity. Formulating the topic as a joke, however, is a way of sneaking it past the mental censors that would inhibit it. The effect of laughing at these jokes is catharsis: relieving the pressure built up in the unconscious pressure cooker. Minsky views jokes similarly but adds that they also serve a practical function. Jokes, he says, are a mechanism for learning about bugs in everyday common sense which, were they not disguised as jokes, would be blocked by the mental censors (Minsky, 1981). Jokes are a means of learning some commonsensical negative expertise.

Getting unstuck from common bugs. Acquiring negative expertise through humor may help us get unstuck from common bugs. Another medium that teaches us about such bugs is the adage, e.g. "closing the barn door after the horse has escaped," or "the early bird gets the worm." In the computational literature, Dyer shows that adages are often buried as the morals of stories we hear. In his BORIS understanding system (1983), TAUs, or Thematic Abstraction Units, represent these adages. According to Dyer, adages all illustrate common planning errors. When one experiences a goal failure, these seemingly harmless and irrelevant adages come to mind and often help us, in the reflective process, to realize the bugs in our plans, suggesting ways of getting unstuck.
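As an illustrative sketch (not the actual BORIS representation), a TAU can be rendered as the pairing of an abstract planning-error pattern with its adage, recalled whenever a new goal failure matches the pattern:

    from dataclasses import dataclass

    @dataclass
    class TAU:
        planning_error: str   # abstract pattern of the bug
        adage: str            # the cultural mnemonic for it

    TAUS = [
        TAU("applied remedy after the damage was done",
            "Closing the barn door after the horse has escaped."),
        TAU("acted too late to seize an advantage",
            "The early bird gets the worm."),
    ]

    def advise(goal_failure):
        """On a goal failure, recall any adage whose error pattern crudely matches."""
        return [t.adage for t in TAUS
                if any(w in goal_failure for w in t.planning_error.split()[:2])]

    print(advise("applied the patch after the server was compromised"))
    # -> ['Closing the barn door after the horse has escaped.']

The matching here is deliberately crude; Dyer's point is the representational one, that the moral of a story is an abstraction over plans and goals, not over its surface content.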

CONCLUSION

In this paper, we dove anecdotally into some of the most interesting problems in humanesque cognition: remembrance, instinct, rationality, attention, understanding, learning, feeling, creativity, consciousness, and homeostasis. Our goal was to tell a story about aspects of the mind and of human behavior using the literature on knowledge representation, reasoning, and user modeling in Artificial Intelligence, Cognitive Science, and other relevant fields. Too often, AI researchers lose sight of the relevance of their computational work to the greater, deeper problems of humanesque cognition, and we feel it is vitally important to tell exactly this kind of story. Each topic covered in this paper is a story of where we have been and where we are computationally, and is suggestive of where there is left to go.

On reflection, the field has come quite far with its ideas, especially in the wake of the birth of Cognitive Science, which often seems to pick up the unfinished business of abandoned deep-AI ventures. A further observation is that some of the most interesting and provocative work seems to come from the fringes of the field, not yet picked up by mainstream research. There is also some deeply important work which threads through the paper; these themes include Gelernter's spectrum theory of thought, Dennett's stances, Drescher's constructivist learning "baby machine," Minsky's Society of Mind, analogical reasoning and metaphor, research on cognitive reading, and the Schankian tradition of understanding.

Above all, what we most wanted to achieve here is a reinvigoration of the spirit which birthed AI in the first place: AI’s first love was the beautiful human mind, its conscious experience, its remarkable ability to focus, attend, intend, learn keenly, think both creatively and rationally, react instinctively, feel deeply, and engage in remembrance and imagination. Reconnecting AI to AI’s original muse, the mind, and realizing where the gaps lie, is a humbling and eye-opening experience. This is a checkpoint. We know where we can go next. Are you ready? Let’s go.

WORKS CITED

F.C. Bartlett: 1932, Remembering. Cambridge: Cambridge University Press.

Paul Bloom: 2000, How Children Learn the Meanings of Words. MIT Press.

Margaret Boden (ed.): 1990, The Philosophy of Artificial Intelligence, Oxford University Press, New York.

G. C. Borchardt: 1990, “Transition space”, AI Memo 1238, Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA.

Rod Brooks: 1991a, “Intelligence Without Representation”, Artificial Intelligence Journal (47), 1991, pp. 139–159.

Rod Brooks: 1991b, “Intelligence without Reason.” Proceedings of the International Joint Conference on Artificial Intelligence '91, pp. 569-595.

Mihaly Csikszentmihalyi, Eugene Rochberg-Halton: 1981, The Meaning of Things: Domestic Symbols and the Self, Cambridge University Press, UK.

Randall Davis, Howard Shrobe, Peter Szolovits: 1993, “What is a Knowledge Representation?” AI Magazine, 14(1):17-33.

S. Deerwester, S.T. Dumais, G.W. Furnas, T.K. Landauer & R. Harshman: 1990, Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6), 391-407.

D.C. Dennett: 1987, The Intentional Stance. MIT Press. Cambridge Massachusetts.

Daniel Dennett: 1992, Consciousness Explained.

R. Descartes: 1644, Treatise on Man. Trans. by T.S.Hall. Harvard University Press, 1972.

Gary Drescher: 1991, Made-Up Minds: A Constructivist Approach to Artificial Intelligence. MIT Press.

M.G. Dyer: 1983, In-depth understanding. Cambridge, Mass.: MIT Press.

Paul Ekman: 1993, Facial expression of emotion. American Psychologist, 48, 384-392.

R. Fikes and N. Nilsson: 1971, STRIPS: A new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2(3-4): 189-208.

Sigmund Freud: 1900, The Interpretation of Dreams, translated by A. A. Brill, 1913. Originally published in New York by Macmillan.

Sigmund Freud: 1905, Jokes and Their Relation to the Unconscious. Penguin Classics.

Sigmund Freud: 1926, Psychoanalysis: Freudian school. Encyclopedia Britannica, 13th Edition.

David Gelernter: 1994, The Muse in the Machine: Computerizing the Poetry of Human Thought. Free Press

D. Gentner: 1983, Structure-mapping: A theoretical framework for analogy. Cognitive Science, 7, pp 155-170.

M. P. Georgeff et al.: 1998, The Belief-Desire-Intention Model of Agency. In N. Jennings, J. Muller, and M. Wooldridge (eds.), Intelligent Agents V. Springer.

I.J. Good: 1971, Twenty-seven principles of rationality, in: V.P. Godambe, D.A. Sprott (Eds.), Foundations of Statistical Inference, Holt, Rinehart, Winston, Toronto, pp. 108--141.

D. Hofstadter & M. Mitchell: 1995, The copycat project: A model of mental fluidity and analogy-making. In D. Hofstadter and the Fluid Analogies Research group, Fluid Concepts and Creative Analogies. Basic Books.

Ray Jackendoff: 1983, “Semantics of Spatial Expressions,” Chapter 9 in Semantics and Cognition. Cambridge, MA: MIT Press.

L.P. Kaelbling, M.L. Littman and A.W. Moore: 1996, "Reinforcement learning: a survey," Journal of Artificial Intelligence Research, vol. 4, pp. 237-285.

Jacques Lacan: 1977, “The agency of the letter in the unconscious or reason since Freud,” A. Sheridan (trans.), Ecrits. New York: W.W. Norton. (Original work published 1966).

George Lakoff, Mark Johnson: 1980, Metaphors We Live by. University of Chicago Press.

George Lakoff & Rafael Nunez: 2000, Where Does Mathematics Come From? New York: Basic Books.

David B. Leake: 1996, Case-Based Reasoning: Experiences, Lessons, & Future Directions. Menlo Park, California: AAAI Press

J. F. Lehman et al.: 1996, A gentle introduction to Soar, an architecture for human cognition. In S. Sternberg & D. Scarborough (eds.) Invitation to Cognitive Science (Volume 4).

D. Lenat: 1995, CYC: A large-scale investment in knowledge infrastructure. Communications of the ACM, 38(11): 33-38.

Hugo Liu: 2004b, ESCADA: An Experimental System for Character Affect Dynamics Analysis. Unpublished Technical Report.

Hugo Liu and Push Singh: 2004b, ConceptNet: A Practical Commonsense Reasoning Toolkit. BT Technology Journal 22(4). pp. 211-226. Kluwer Academic Publishers.

Pattie Maes: 1994, Modeling Adaptive Autonomous Agents, Artificial Life Journal, C. Langton, ed., Vol. 1, No. 1 & 2, MIT Press, 1994.

John McCarthy: 1958, Programs with Common Sense. Proceedings of the Teddington Conference on the Mechanization of Thought Processes.

Gary McGraw and Douglas R. Hofstadter: 1993, Perception and Creation of Diverse Alphabetic Styles. In Artificial Intelligence and Simulation of Behaviour Quarterly, Issue Number 85, pages 42-49. Autumn 1993. University of Sussex, UK.

Albert Mehrabian: 1995, For a comprehensive system of measures of emotional states: The PAD Model. Available from Albert Mehrabian, 1130 Alta Mesa Road, Monterey, CA 93940, USA.

Marvin Minsky: 1974, A framework for representing knowledge (AI Laboratory Memo 306). Artificial Intelligence Laboratory, Massachusetts Institute of Technology.

Marvin Minsky: 1981, Jokes and the logic of the unconscious. In Vaina and Hintikka (eds.), Cognitive Constraints on Communication. Reidel.

Marvin Minsky: 1986, The Society of Mind, New York: Simon & Schuster.

Marvin Minsky: forthcoming, The Emotion Machine. New York: Pantheon.

Erik Mueller: 1990, Daydreaming in humans and computers: a computer model of stream of thought. Norwood, NJ: Ablex.

Srinivas S. Narayanan: 1997, Knowledge-based Action Representations for Metaphor and Aspect (KARMA). Unpublished doctoral dissertation, University of California, Berkeley.

A. Newell: 1990, Unified Theories of Cognition, Cambridge, MA: Harvard University Press.

Nils Nilsson: 1984, Shakey the Robot. SRI Tech. Note 323, Menlo Park, Calif.

A. Ortony, G.L. Clore, A. Collins: 1988, The cognitive structure of emotions, New York: Cambridge University Press.

Rosalind Picard: 1997, Affective Computing, MIT Press.

Martha Pollack: 1992, “The uses of plans,” Artificial Intelligence Journal, 57.

Ashwin Ram: 1994, “AQUA: Questions that drive the explanation process.” In Roger C. Schank, Alex Kass, & Christopher K. Riesbeck (Eds.), Inside case-based explanation (pp. 207-261). Hillsdale, NJ: Erlbaum.

C. K. Riesbeck and R. C. Schank: 1989, Inside Case-Based Reasoning. Lawrence Erlbaum Associates, Hillsdale.

Deb Roy: 2002, Learning Words and Syntax for a Visual Description Task. Computer Speech and Language, 16(3).

Roger C. Schank: 1972, Conceptual Dependency: A Theory of Natural Language Understanding, Cognitive Psychology, 3(4), 552-631.

R.C. Schank & R.P. Abelson: 1977, Scripts, Plans, Goals and Understanding. Erlbaum, Hillsdale, New Jersey, US.

John Searle: 1980, Minds, Brains, and programs, The Behavioral and Brain Sciences 3, 417-457.

O. G. Selfridge: 1958, Pandemonium: A paradigm for learning. In Mechanisation of Thought Processes: Proceedings of a Symposium Held at the National Physical Laboratory, London: HMSO, November.

Push Singh: 2003, Examining the Society of Mind. Computing and Informatics, 22(5):521-543

Aaron Sloman: 1996, What sort of architecture is required for a human-like agent? Cognitive Modeling Workshop, AAAI96, Portland Oregon, August.

C. Stanfill and D. Waltz: 1986, Toward Memory-Based Reasoning, Communications of the ACM, 29: 1213-1228.

Leonard Talmy: 1988, Force Dynamics in Language and Cognition. Cognitive Science 12: 49-100.

E. Tulving: 1983, Elements of episodic memory. New York: Oxford University Press.

Scott Turner: 1994, The Creative Process: A Computer Model of Storytelling and Creativity. NJ: Lawrence Erlbaum.

Joseph Weizenbaum: 1966, ELIZA--A Computer Program For the Study of Natural Language Communication Between Man and Machine. Communications of the ACM, Volume 9, Number 1 (January 1966): 36-45.

P. H. Winston: 1975, Learning Structural Descriptions from Examples. In P. H. Winston (Ed.), The Psychology of Computer Vision. New York: McGraw-Hill, pp. 157-209 (originally published, 1970)

Rolf A. Zwaan & Gabriel A. Radvansky: 1998, Situation models in language comprehension and memory. Psychological Bulletin, 123(2), 162-185.

