ASIC05


Abstracts

Alphabetical listing by speaker


Speaker: Adams, Fred: University of Delaware
Title: Why there still has to be a mark of the mental
Abstract: Proponents of embodied cognition and extended mind have repeatedly stressed the close relationship between the human brain and the human body. It is noted that the brain not only controls mental processes, but that the mental processes it controls are closely connected with the bodily processes it controls. In fact, it has been claimed that it is a mistake to try to talk about mental processes in isolation from the bodily processes that the brain controls. It has even been claimed that one can in some cases "read off" properties of the human body from properties of the human mind, and that one should think of the mind as realized by the entire brain and body. However, surely not every process in the body or brain is a mental process. Without some mark of the mental (or the cognitive), there is no way to distinguish a mental process from a non-mental bodily process. With a mark of the mental, some of the claims about embodied cognition and mind extension may well be false.
Speaker: Aizawa, Ken: Centenary College of Louisiana
Title: Extended Cognition and Cognitive Systems
Abstract: Dotto is 75 years old. He suffers from late onset diabetes. In order to more effectively control his blood sugar level, he sometimes keeps a small 2"x4" notebook. In it he records such things as his measured blood glucose levels, what and how much he eats, and when he takes what medications. Sometimes he uses the notebook to make grocery lists and sundry "to do" lists. Sometimes, when he gets into a routine where his blood sugar is well under control, he stops using his notebook. When he is most active with his notebook, he tries to take it wherever he goes. He carries it in a small black bag that also contains some of his other diabetic supplies, such as insulin, syringes, and hard candies. Figure 1 is a page from his notebook. Part way down the top page he has "11/23" for November 23, 2004. The left hand column is dedicated to his diabetic regimen, the right hand column is about his other medications. On the morning of November 23, he records that his blood glucose (BG) was 135 and he took 9 units of Humerol brand insulin (9H). For breakfast, he had 1 ½ cups of rice, which he calculated contains 67 grams of carbohydrates, and 4 ounces of orange juice, which contains 12 grams of carbohydrates, giving him a total of 79 grams of carbohydrates for breakfast. For lunch, he only lists his blood glucose and how much insulin he took. For dinner, he lists his blood glucose level, the restaurant where he had dinner, and how much insulin he took. In the right hand column, he notes, for example, that at 5:15 a.m., he took Synthroid, a synthetic thyroid hormone. He had his thyroid removed more than 20 years ago. At 6:27 a.m., he took Lortab, a pain reliever for a back problem. At 6:46, he took eye drops that were part of his regimen for recovering from cataract surgery. Lest one worry too much about Dotto, we should note that this disciplined regimen seems to play an important part in his enjoying a high quality of life. He is a capable driver, he goes to exercise class three times a week, plays bridge once a week, and manages a handsome financial portfolio. The common sense thing to say about this situation is that Dotto can't remember all the information he would like to have concerning his medical condition, so that he has to rely on notes. Dotto's cognitive apparatus is not up to managing all this information, so he relies on something else. On this view, Dotto's cognitive processes are confined to his central nervous system (CNS), where supplementary non-cognitive processes are found in his pencil and notebook. There are, of course, causal connections between Dotto and his pencil and notebook, but Dotto's cognitive processing is strictly limited to the confines of his central nervous system. In other words, causal processes are extended beyond the boundaries of the CNS, but cognitive processes are not. In the face of such common sense, a number of philosophers and psychologists have recently been advocating an alternative. They have argued that cognition is extended beyond the boundaries of the brain. According to this view, Dotto's cognitive processes literally extend from his brain into the pencil and paper of his notebook. Dotto's cognitive processing spans his brain, body, and the material pencil and notebook. Stated generally, the Extended Cognition Hypothesis (ECH) maintains that in (certain cases of) tool use, cognitive processes literally extend from the central nervous system into the external tools one uses. 
Many of the very same philosophers who endorse the Extended Cognition Hypothesis also defend a somewhat different hypothesis which we might call the Cognitive System Hypothesis (CSH). This hypothesis maintains that in (certain cases of) tool use, the central nervous system, body, and external tools one uses form a cognitive system. Although it is common to find these two hypotheses articulated almost interchangeably, I want to explore their inter-relations more carefully. In particular, I will argue, first, that the truth of the CSH does not support ECH. Second, and more surprisingly, I will argue that reflection on systems suggests that ECH encourages a pointless conceptualization of things as cognitive. Third, and most surprisingly, the truth of CSH actually conflicts with ECH.
Speaker: Allen, Colin: Indiana University
Title: Mind and world -- an unprincipled distinction?
Abstract: Philosophical arguments about "extended mind" often hinge on the question of whether there are principled reasons for drawing the boundaries of mind and cognition at the boundaries of the brain. Both sides seem to accept as common ground the idea that their position stands or falls on the answer to this question. I will examine this common assumption and argue that its uncritical acceptance may have distracted philosophers' attention away from a proper understanding of recent scientific attempts to model cognition as a continuous process between brain and environment.
Speaker: Anderson, John: Carnegie Mellon University
Title: Mastering a Novel Algebraic Concept
Abstract: I will describe the success students have learning a novel algebraic concept from minimal instruction. They are able to apply the concept in ways in which they were not instructed and to deal with cases that were not anticipated in the instruction. Their success in doing so poses challenges for cognitive architectures like ACT-R and perhaps more generally for current ideas about learning and performance. I will try to highlight these challenges.
Speaker: Biederman, Irv: University of Southern California
Title: The Neural Basis of Object Recognition
Abstract: Almost 20 years ago, a proposal was advanced that a considerable range of behavioral phenomena associated with human object recognition can be understood in terms of a representation positing an arrangement of simple part primitives distinguished by viewpoint invariant properties (= geons). Recent research on optical imaging as well as single unit activity of cells in macaque IT and behavioral and fMRI studies in humans provide a surprisingly strong confirmation of this proposal.
Speaker: Borghi, Anna: University of Bologna, Italy
Title: Language and objects: Simulation and affordances
Abstract: I will describe two experimental lines, one in collaboration with Luca Oggianu and the other with Lucia Riggio. In the first line we investigated the effects of different kinds of visual primes on the processing of action words. Participants were presented with visual primes followed by verbs; their task consisted in deciding whether the verb referred to a concrete or to an abstract action (Buccino et al., 2005). The prime could consist of the photograph of a hand performing an action (e.g., to press), of the object (e.g., the switch), or of the interaction between hand and object (e.g., a hand pressing a switch). Response times were faster in the interaction condition than in the other two conditions, probably due to the fact that both canonical and mirror neurons were activated. More interestingly, even if the task required judging a verb, object primes were processed faster than action primes. This is consistent with studies on motor resonance showing that mirror neurons are activated by goals rather than by means (Umiltà et al., 2001), and with studies showing that actions are encoded at a distal rather than at a proximal level (Hommel et al., 2001). In the second experimental line participants read action or observation sentences (Take the / vs. Look at the /) and had to decide whether or not the target represented the same object described in the sentence. The target was the photograph of an object graspable either with a power (e.g. brush) or with a precision grip (e.g. pen); the objects could be presented either with the affordances in the canonical position or not (e.g., a brush with the handle on the bottom part of the screen or an upright brush). Results showed that action sentences were processed faster than observation sentences, suggesting that during sentence comprehension a simulation process occurs. This was confirmed by the fact that objects with the affordances located in the canonical way were processed faster than objects not affording actions; this was true in particular for objects graspable with a power grip and when objects were preceded by an action sentence. The results are compatible with the idea that action words as well as visual objects activate canonical neurons. The results of both experimental lines will be discussed within the framework of an embodied view of cognition.
Speaker: Cohen, Andrew: University of Massachusetts Amherst
Title: Inducing multiple independent classification templates from stochastic stimuli
Abstract: Current techniques for determining the pixel values used by an observer to classify an image usually assume linear classification and a single template. Psychological evidence suggests, however, that certain image features are detected independently and then combined to produce a classification. We propose a Bayesian network model of classification that encapsulates this more complex structure. The model is used in conjunction with machine learning techniques to discover a set of templates that describe the independent features used by both simulated and human classifiers.
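As a point of reference for the linear, single-template assumption the abstract starts from, the sketch below (illustrative code, not taken from the talk) recovers a simulated linear observer's template by response-conditional averaging of the noise fields; the image size, trial count, and hidden template are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
npix, ntrials = 64, 20000                       # flattened 8x8 "images"

# Hidden template used by the simulated observer (unknown to the analyst).
true_template = np.zeros(npix)
true_template[20:28] = 1.0

noise = rng.normal(size=(ntrials, npix))        # stochastic stimuli
responses = noise @ true_template > 0.0         # linear classifier with criterion 0

# Classification image: mean noise on "yes" trials minus mean noise on "no" trials.
classification_image = noise[responses].mean(axis=0) - noise[~responses].mean(axis=0)

# The estimate correlates strongly with the hidden template for a linear observer.
r = np.corrcoef(classification_image, true_template)[0, 1]
print(f"correlation with the generating template: {r:.2f}")

The Bayesian-network approach described in the abstract is aimed at exactly the cases where this single-template estimate breaks down, namely when several independently detected features are combined into a classification.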
Speaker: Criss, Amy: Carnegie Mellon University
Title: The consequences of differentiation in episodic memory
Abstract: When items on one list are studied longer than items on another list, the improvement in performance typically manifests as an increase in the hit rate and a decrease in the false alarm rate. This finding is referred to as the strength based mirror effect and has been accounted for by assuming that participants adopt a more strict criterion following a list containing items studied several times. An alternative account is found in differentiation models where longer study leads to a more accurate memory representation for the studied item. The more accurate the stored representation, the less it is confusable with a randomly chosen foil, resulting in a decrease in the false alarm rate. Differentiation models make additional a priori predictions that the level of differentiation of a study list interacts with the similarity between the studied items and the foils. These predictions are empirically tested and confirmed.
Speaker: Davelaar, Eddy: University of California, San Diego
Title: Preview benefits in visual selective attention: "Hang on a second!"
Abstract: In the Eriksen flanker paradigm, peripheral flankers can help or harm performance depending on whether they indicate the correct or incorrect response of a central target. With this paradigm it has been observed that immediate preview of the flankers reduces flanker effects. The reported experiments investigated this phenomenon, establishing the separate contributions of identity preview and response preview. Participants made CV judgments on the middle letter of a five-letter string and different durations of preview were examined. The results indicate that response preview effects are small, and apply similarly across location. In contrast, identity preview is location specific, producing results that reverse depending on flanker preview versus target preview. We explain these effects in terms of perceptual discounting that accrues over time as a function of preview durations. Depending on which locations and items are previewed, this can result in a "repetition blindness" for flankers, which reduces flanker interference, or a repetition blindness for the target, which greatly harms performance. This theory is implemented in a model with dynamic neural accommodation within spatially specific identity detectors and spatially non-specific evidence accumulators (response units).
Speaker: Fuller, Gary: Central Michigan University
Title: Empty Names & Pragmatic Implicatures
Abstract: What are the meanings of the empty names "Vulcan," "Pegasus," or "Santa Claus" in sentences such as "Vulcan is the tenth planet," "Pegasus flies," or "Santa Claus does not exist"? Our view (Adams et al. 1992, '94, '97a, 2004) is a direct-reference account on which empty names lack meaning, in combination with a pragmatic-implicature account of why empty names seem to have meaning. The appearance of meaning comes from associated implicated descriptions that do not give the meaning of the names. In a recent article (2005) Mitch Green criticizes our view of empty names. He argues that our "pragmatic defense" fails. He thereby implicitly casts doubt on our whole direct-reference package. According to Green there are a number of familiar mechanisms that can generate pragmatic implicatures: conversational and conventional mechanisms, discussed in detail by Grice (op. cit.), and other mechanisms, such as ones involving expression, that are neither conversational nor conventional. Green argues that none of these familiar mechanisms can generate the implicatures needed by our view. We could try to counter Green by showing that there are unfamiliar mechanisms, mechanisms other than those that he mentions, that can generate the kind of implicatures that we need. Indeed, towards the end of his paper Green allows that this is a possible way to save the pragmatic defense, although he is skeptical of it (23-24). Luckily, we will not have to take this route. The main purpose of our paper is to show that our view works despite Green's suggestions to the contrary. We shall show that Green's arguments that the Gricean mechanisms of conversational or conventional implicature cannot generate the implicatures that our view needs are seriously flawed. Towards the end of our paper we also briefly sketch an account of what the relevant Gricean mechanisms might be.
Speaker: Gonzalez-Vallejo, Claudia: Ohio University
Title: The role of scaling in models of choice: Comparing the proportional difference model to decision field theory in decisions over consumer products
Abstract: Three studies tested the stochastic difference choice model (proportional difference, PD, version in González-Vallejo, 2002) in the domain of decision making under certainty. Consumer services and products (hotels defined by price and quality, and MP3 players defined by price and memory size) served as choice pairs. The ordinal prediction relating the proportional difference variable, d (computed from stimuli pairs), and the observed choice proportions was supported. Model fitting showed that PD's estimated decision threshold measured within-person sensitivity to value attribute differences both at baseline and after persuasion manipulations. The threshold was also related to whether individuals were low or high in Need for Cognition (NFC, Cacioppo & Petty, 1982). Cross-validation strategies also showed PD to be descriptive and robust.
Speaker: Harris, Steve: Indiana University
Title: Extended Cognitivism and Intrinsic Content
Abstract: I defend the thesis of extended cognitivism against a particular type of objection that is insightful but incorrect. I call it the intentionality objection. The extended cognitivist argues that human cognition (and consequently, quite plausibly, the human mind) is something extended, something literally comprised of external as well as internal states and processes. The popular account emphasizes the role that technological artifacts play in extending the human cognitive system by becoming functional parts of that system. The claim is that technologies in use sometimes constitute, in part, the vehicles of cognitive (and perhaps even mental) content. Intentionalist critics take such radical anti-individualist consequences to indicate a failure on the part of the extended cognitivist to properly distinguish what is cognitive from what is not cognitive. According to the internal intentionalist, cognition extends no further than the individual, since cognition requires intentionality and intentionality extends no further than the individual. In this talk I consider a specific version of the intentionality objection that assumes that cognitive systems involve states and processes that have intrinsic content. Since the states and processes implemented in tools (or, "external, nonbiological vehicles" generally) never involve intrinsic content, there is no good reason for believing that cognition ever extends into tools and other artifacts. This objection fails, however, to show that extended cognitivism is false of human cognition. For the purposes of bounding the cognitive system anyway, the appeal to intrinsic content is deeply and systematically problematic. The notion is either too weak to uniquely identify human cognitive agents or else it is too strong to be plausibly applied to them. The missing element in an adequate appeal to the intentional capacities of human cognitive systems, I argue, is wide-world human technology.
Speaker: Huber, Dave: University of California San Diego
Title: Individual Differences in Face Processing as Revealed with Priming
Abstract: Recent experiments with the immediate repetition of words demonstrated that brief prime presentations help target perception whereas long prime presentations harm target perception (Weidemann, Huber, & Shiffrin, 2005). In this paradigm, targets are briefly flashed and masked and performance is assessed through a two-alternative forced choice in which the prime can be identical to the target, the foil, or neither choice. In a series of experiments, we extended this paradigm to face perception, mapping out similar costs and benefits of immediate face repetition as a function of prime duration. Unlike words, qualitative individual differences were observed: the change from positive to negative priming was greater for participants with lower perceptual face thresholds. In a second experiment, this effect was only seen with upright, but not inverted faces. In a third experiment, these individual differences were found to be resistant to manipulations designed to change the basis of strategic responding in terms of featural or configural information. A fourth experiment demonstrated that the target duration needed for perceptual threshold does not itself explain these effects, and, instead, increasing target duration produced stronger negative priming effects. We implemented a multi-layer dynamic neural network model of these results that includes synaptic depression to produce the transition from positive to negative priming. The model assumes that higher level configural processing is less developed in some individuals, forcing them to rely more heavily on featural information.
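The transition from positive to negative priming attributed to synaptic depression can be illustrated with a deliberately simplified, single-detector sketch (not the multi-layer model described in the talk; gains and time constants are arbitrary): a brief prime leaves residual activation that gives the repeated target a head start, while a long prime depletes synaptic resources and weakens the response to the target.

def target_response(prime_steps, target_steps=30, dt=0.01,
                    gain=5.0, tau_rec=2.0, depletion=1.5):
    a, r = 0.0, 1.0                       # detector activity, synaptic resources
    for _ in range(prime_steps):          # prime drives the detector
        out = a * r
        a += dt * gain * (1.0 - a)
        r += dt * ((1.0 - r) / tau_rec - depletion * out)
    total = 0.0
    for _ in range(target_steps):         # the same item presented as the target
        out = a * r
        total += out * dt
        a += dt * gain * (1.0 - a)
        r += dt * ((1.0 - r) / tau_rec - depletion * out)
    return total                          # integrated drive available for identifying the target

baseline = target_response(0)             # unprimed
for steps in (10, 50, 200):               # brief, medium, and long primes
    print(f"prime of {steps:3d} steps: target drive {target_response(steps):.3f} "
          f"(unprimed baseline {baseline:.3f})")

With these illustrative settings, brief primes yield more target drive than the unprimed baseline and long primes yield less, the qualitative pattern the experiments above trace as a function of prime duration.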
Speaker: Lewandowski, Steve: University of Western Australia
Title: Temporal isolation in short-term memory
Abstract: According to temporal distinctiveness models, items that are temporally isolated from their neighbors during list presentation are more distinct and thus should be recalled better. While there is clear evidence that free recall benefits from temporal isolation, we conclude on the basis of several recent studies that no reliable temporal isolation effects exist in serial recall. We present two additional experiments which reconciled those two discrepant outcomes by comparing a retrieval task in which output order is controlled (forward serial reconstruction) with a virtually identical task in which report order is unconstrained (free reconstruction). Temporal isolation effects emerged in the unconstrained task irrespective of actual report order if (and only if) people expected free report at the time of study. The data suggest that people can choose to rely on the temporal dimension at encoding, but do so only when they expect report order to be unconstrained. By contrast, when people expect strict forward serial retrieval, they do not use time to differentiate between items at encoding.
Speaker: Malmberg, Ken: University of South Florida, Tampa
Title: Directed Forgetting in Free Recall and Recognition
Abstract: Forgetting can occur as the result of unconscious or automatic memory processes or as the result of their conscious control. The latter form of forgetting is often referred to as suppression, repression, or inhibition, and it is often investigated in the laboratory using the directed forgetting procedure. The authors describe and empirically test a formal model of directed forgetting, implemented within the framework of the Search of Associative Memory (SAM) theory. The critical assumption is that episodic memory can be suppressed by a conscious attempt to alter the mental context in which new memories are encoded. This model captures much of the data. However, additional assumptions are required to account for serial position and output order effects and the effect of forgetting instructions on recognition memory.
Speaker: Moors, Agnes: Ghent University, Belgium
Title: Five sources and five solutions to the all-or-none view of automaticity
Abstract: Feature-based accounts define the concept automatic in terms of a number of features such as unintentional, goal-independent, autonomous, purely stimulus-driven, unconscious, uncontrolled (in the sense of alter/stop), efficient, and/or fast. The concept nonautomatic covers the opposites of these features. Different feature-based accounts differ with regard to the features they emphasize most and with regard to the amount of coherence they assume among features. The best known feature-based account is the dual-mode view, which assumes a perfect coherence among the features of each mode (automatic processes have all automatic features; nonautomatic processes have all nonautomatic features). The dual-mode view has been criticized on the basis of empirical evidence showing a lack of co-occurrence among the features of each mode (cf. Bargh, 1992; Logan, 1985). Nevertheless, the dual-mode view seems difficult to shake off. I discuss five sources that have been, or can be, held responsible for the creation and/or persistence of the dual-mode view: the capacity view of attention (Shiffrin & Schneider, 1977), the New Look in perception (Bruner & Goodman, 1947), the computational framework of cognition in general, assumptions of conceptual overlap among features, and assumptions of one-to-one modal relations (i.e., the presence of one feature is considered a necessary and/or sufficient condition for another). After that, I discuss five alternative views that have been proposed to replace the dual-mode view: the triple-mode view (e.g., Carver & Scheier, 2002), the decompositional view that does away with the general concept of automaticity (Regan, 1981), the decompositional view that conceives of automaticity as a gradual concept (Logan, 1985; Shiffrin, 1988), the view that chooses one minimal feature of automatic processes (Bargh, 1989), and the construct-based approach (Logan, 1988). A critical review of these alternatives favors a gradual decompositional view. This view faces its own limitations, which I propose can be accommodated by a relative rather than a purely gradual view.
Speaker: Mueller, Shane: Indiana University
Title: REM-II: A model of the formation and use of episodic memory and semantic knowledge
Abstract: Episodic memories form through the interpretation of events by semantic knowledge, while semantic knowledge forms through the accumulation of episodic memories. Through this two-way process, our extensive episodic memory for events in the past co-evolves with our vast knowledge about the world. We present REM-II, a new Bayesian account of episodic and semantic memory that explicitly models the development of these two aspects of our long-term memory. REM-II encodes episodic traces as sets of features with different values, and semantic knowledge as a set of co-occurrences of these features, while assuming that co-occurrence of concepts allows relational and semantic similarity to emerge. The use of feature co-occurrence allows polysemy and connotation of meaning to be encoded within a single structure, based on the distinct contexts in which a concept appears. We demonstrate knowledge formation in REM-II and show the emergence of semantic spaces through experience and the resultant polysemy and biasing of encoding that REM-II produces.
Speaker: Nikolic, Danko: Max-Planck Institute for Brain Research
Title: Phase precedence and time delays in the visual cortex
Abstract: Despite extensive investigation, we still do not fully understand how the brain represents visual information. Here we present results indicating that time delays between action potentials as small as one millisecond can be highly informative and can carry stimulus-related information. Based on cross-correlation analysis of simultaneous extracellular recordings from a large number of neurons in cat area 17, we show that such small temporal delays form precise and repetitive spatio-temporal patterns, such that each neuron has its own preferred time of firing relative to the firing times of the other neurons. The delay between two neurons rarely exceeds 3 ms, and thus the time scale of the resulting spatio-temporal patterns fits within a single cycle of the gamma oscillation. Moreover, the preferred relative firing times of neuronal discharges change with stimulus properties. These changes are partially related to neuronal activation but are, for the most part, not correlated with the other neurophysiological measures that contain stimulus-related information (i.e., neuronal rate responses and synchrony). Instead, these fine spatio-temporal patterns seem to constitute an independent source of information. These results open up the possibility that downstream readout units take advantage of these spatio-temporal patterns during computation. If true, small time delays would play an important role in the cortical representation of visual stimuli.
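A minimal sketch of the kind of pairwise analysis described, using synthetic spike trains (firing rates, the imposed lag, and the coupling strength are illustrative): the preferred delay between two neurons is read off the peak of their spike-train cross-correlogram.

import numpy as np

rng = np.random.default_rng(1)
dt, duration = 0.001, 100.0                  # 1 ms bins, 100 s of simulated recording
nbins = int(duration / dt)

# Neuron A fires at ~20 Hz; neuron B tends to follow A with a ~2 ms lag.
spikes_a = rng.random(nbins) < 20 * dt
lag_bins = 2
spikes_b = np.zeros(nbins, dtype=bool)
spikes_b[lag_bins:] = spikes_a[:-lag_bins] & (rng.random(nbins - lag_bins) < 0.5)
spikes_b |= rng.random(nbins) < 5 * dt       # plus independent background firing

max_lag = 10                                 # examine delays of -10..+10 ms
lags = np.arange(-max_lag, max_lag + 1)
ccg = [np.sum(spikes_a[max(0, -l): nbins - max(0, l)] &
              spikes_b[max(0, l): nbins - max(0, -l)]) for l in lags]

print("preferred delay of B relative to A:", lags[int(np.argmax(ccg))], "ms")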
Speaker: Ohnesorge, Clark: Carleton College
Title: The modulation of visual attention by emotion-eliciting stimuli
Abstract: The emotional valence evoked by visual stimuli has been shown to influence performance across a large range of cognitive and perceptual tasks. Generally the concept of attention is invoked in developing theoretical explanations for this phenomenon with a conclusion that negative stimuli attract or receive more attention than do positive or neutral stimuli. In prior research using lexical stimuli we explored the temporal unfolding of an attentional window that differed with the valence of the eliciting stimulus and opened and closed within a time envelope of about 500 milliseconds. In the current research, we extend our manipulation to include arousal as well as valence and adopt pictorial stimuli to further address the phenomenon of attentional modulation.
Speaker: Pecher, Diane: University Rotterdam, The Netherlands
Title: Retrieval Induced Forgetting
Abstract: Retrieval practice with particular items from memory can impair the recall of related items on a later memory test. Both inhibitory and associative explanations have been offered for this retrieval-induced forgetting effect. The independent probe technique has been developed to distinguish between these two accounts. In this paradigm, memory of a suppressed item is tested with an extralist cue that is unrelated to the practiced item. We argue that different versions of this paradigm cannot adequately distinguish between inhibitory and associative accounts. Using an adapted version of the paradigm, we demonstrate that retrieval-induced forgetting of both semantic and episodic memory items does not occur using item-specific independent cues, but does occur when related cues are used. These results pose problems for inhibitory accounts.
Speaker: Raaijmakers, Jeroen: University of Amsterdam, The Netherlands
Title: A non-inhibitory explanation of retrieval inhibition
Abstract: Retrieval inhibition refers to the phenomenon that practicing some items leads to generalized (cue-independent) inhibition of related items. The cue-independent nature of these effects appears to make them difficult to explain by traditional interference accounts based on competitive retrieval processes. An alternative explanation based on the REM model will be presented. I will also present the results of new experiments aimed at testing this alternative account of inhibition effects.
Speaker: Reder, Lynne: Carnegie Mellon University
Title: The interaction of implicit and explicit memory processes in learning and behavior
Abstract: I have long argued that most tasks labeled as part of implicit memory operate on the same representations that are used for explicit memory tasks. I will review the evidence for why these assumed separate memory systems are really not separate at all and speculate as to why the erroneous assumptions developed. I will go on to argue why so much of learning is implicit and should be implicit and when learning should be explicit.
Speaker: Rieskamp, Joerg: Max Planck Institute for Human Development, Berlin, Germany
Title: Perspectives of probabilistic inferences: Reinforcement learning and an adaptive network compared
Abstract: The assumption that people possess a strategy repertoire for inferences has been raised repeatedly. The strategy selection learning theory specifies how people select strategies from this repertoire. The theory assumes that individuals select strategies proportional to their subjective expectations of how well the strategies solve particular problems; such expectations are assumed to be updated by reinforcement learning. The theory is compared to an adaptive network model that assumes people make inferences by integrating information according to a connectionist network. The network's weights are modified by error correction learning. The theories were tested against each other in an experimental study with a dynamic environment in which the performance of the inference strategies changes. When the environment changed, quick adaptation to the new situation was not observed; rather, individuals got stuck on the strategy they had successfully applied previously. This "inertia effect" was most strongly predicted by the strategy selection learning theory.
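A minimal sketch of the kind of reinforcement-learning strategy selection the theory assumes (illustrative parameters, not the fitted model): strategies are chosen with probability proportional to accumulated expectancies, and only the chosen strategy's expectancy is updated with the payoff it produced. Because new payoffs accumulate against a large prior total, switching after the environment changes is slow, which is one way such an inertia effect can arise.

import numpy as np

rng = np.random.default_rng(2)
expectancies = np.array([10.0, 10.0])        # initial expectancies for two strategies
payoffs = {0: 1.0, 1: 0.2}                   # strategy 0 performs better at first

def choose(exp):
    p = exp / exp.sum()                      # proportional (Luce) selection rule
    return rng.choice(len(exp), p=p)

for trial in range(300):
    if trial == 150:                         # environment changes: strategy 1 now pays off
        payoffs = {0: 0.2, 1: 1.0}
    s = choose(expectancies)
    reward = payoffs[s] + rng.normal(0, 0.1)
    expectancies[s] += reward                # accumulate obtained payoffs for the chosen strategy

print("final selection probabilities:", expectancies / expectancies.sum())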
Speaker: Rupert, Rob: University of Colorado
Title: A dilemma for the extended mind
Abstract: The hypothesis that human cognition extends into the environment can be understood as a claim about the subjects of cognitive states or, instead, as a claim about the implementation or realization of cognitive states. In this paper, I argue that neither approach offers a promising theoretical framework within which to pursue empirical psychology. Considered as a theory of the subjects of cognitive states-i.e., as a claim about the systems that instantiate cognitive properties-the extended framework does substantial violence to productive research programs and methods in cognitive psychology. Such research compares systems' reactions across different experimental conditions, constructs theories to account for subjects' various responses, and designs new experiments to test those theories. These methods presuppose that the same systems (or the same kinds of system) persist through variations in conditions and experiments. In contrast, the actual extended systems discussed in the literature are fleeting systems, often composed of organisms together with items presented in experimental conditions. The extended view fares no better when interpreted as a claim about the realizers of cognitive states. Not just anything causally related to a cognitive state can count as part of the realizer of that state. The causal role of the realizer of a personal-level cognitive state must mirror the causal profile of the state so realized. Extended realizers typically do not satisfy this requirement, and for principled reasons.
Speaker: Sanborn, Adam: Indiana University
Title: Alternative algorithms for the rational model of categorization
Abstract: The rational model of categorization (RMC; Anderson, 1990) assumes that categories are learned by clustering similar stimuli together using Bayesian inference. As computing the posterior distribution over all assignments of stimuli to clusters is intractable, an approximation algorithm needs to be used. The original algorithm used in the RMC was an incremental procedure that had no guarantees for the quality of the resulting approximation. Drawing on connections between the RMC and models used in nonparametric Bayesian density estimation, we present two alternative algorithms for the RMC that are asymptotically correct. Using these alternative algorithms allows the effects of the assumptions of the RMC and the particular inference algorithm to be explored separately. We look at how the choice of inference algorithm changes the strength of predicted order effects.
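For concreteness, the sketch below illustrates the kind of incremental, locally greedy assignment procedure the original RMC relied on: each stimulus is committed to the cluster with the highest posterior score under a Chinese-restaurant-style prior and a simple binary-feature likelihood, and is never reassigned. The prior weight, smoothing, and likelihood form are illustrative rather than Anderson's exact equations.

import numpy as np

ALPHA = 1.0            # prior weight on opening a new cluster
BETA = 1.0             # Beta(BETA, BETA) smoothing for binary features

def local_map_clusters(stimuli):
    clusters = []                                          # each cluster is a list of member stimuli
    for x in stimuli:
        scores = []
        for members in clusters:                           # posterior score for each existing cluster
            m = np.array(members)
            prior = len(members)                           # proportional to cluster size
            theta = (m.sum(axis=0) + BETA) / (len(members) + 2 * BETA)
            lik = np.prod(np.where(x == 1, theta, 1 - theta))
            scores.append(prior * lik)
        scores.append(ALPHA * 0.5 ** len(x))               # score for a brand-new cluster
        best = int(np.argmax(scores))
        if best == len(clusters):
            clusters.append([x])
        else:
            clusters[best].append(x)
    return clusters

stimuli = [np.array(s) for s in
           ([1, 1, 1, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 1, 1, 1], [1, 0, 0, 0])]
print("number of clusters found:", len(local_map_clusters(stimuli)))
# Because assignments are greedy and permanent, presenting the same stimuli in a different
# order can yield a different partition -- the order effects examined in the talk.

Gibbs sampling or particle filtering over the same generative model replaces this greedy commitment with sampled assignments, which is what makes the alternative algorithms asymptotically correct and lets the contribution of the inference algorithm be separated from the model's assumptions.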
Speaker: Sasaki, Yuka: Harvard Medical School
Title: The primary visual cortex fills in color
Abstract: One of the most important goals of visual processing is to reconstruct adequate representations of surfaces in a scene. Surface representation is thought to be produced mainly in mid-level vision, with V1 activity being solely due to feedback from the mid-level stage. However, contradicting empirical and theoretical reports have also been proposed. One reason for this controversy may be the tacit assumption that surface representation arises from a single process rather than from multiple processes. Surface representation could be a result of many different aspects of processing. Another reason for the controversy may be that most studies have not controlled the effects of attention on the surface. Thus, it is necessary to examine how subcomponents of a surface contribute to surface representation with attentional effects controlled. Here, we measured fMRI signals corresponding to "neon color spreading," which is thought to be due to interactions between mechanisms for two surface subcomponents: color filling-in and illusory contours. In the present study, we used 3T fMRI, which provides fine spatial resolution, so that brain activity corresponding to the two surface subcomponents, illusory contours and filling-in, could be spatially dissociated if surface representation occurs in the retinotopic visual areas. To eliminate or decrease the attentional component of feedback signals, subjects performed an attentionally challenging task unrelated to the surface perception. Activity for filling-in was observed only in the primary visual cortex, whereas activity for illusory contours was observed in multiple visual areas. These findings indicate that surface representation is produced by multiple processes rather than a single process, and that V1 activity for surface representation does not arise solely from feedback from higher cortical stages.
Speaker: Schooler, Lael: Max Planck Institute for Human Development, Berlin, Germany
Title: Why you think Milan is larger than Modena: Neural correlates of the recognition heuristic.
Abstract: When people rank two alternatives on some criterion and only one of the alternatives is recognized, they overwhelmingly adopt the strategy, termed the recognition heuristic (RH), of choosing the recognized alternative. Understanding the neural correlates underlying decisions that follow the RH could help determine whether people make judgments about the RH's applicability or simply choose the recognized alternative.
Speaker: Seitz, Aaron: Boston University
Title: Reinforcement and blinks in perceptual learning
Abstract: We are constantly learning new things as we go about our lives. In addition to learning new facts, procedures and concepts, we are also refining our sensory abilities. How and when these sensory modifications take place is the focus of intense study and debate. While sensory improvements were thought only to occur when attention is focused on the stimuli to be learned (task-relevant learning), recent studies demonstrate performance improvements independent of the focus of attention (task-irrelevant learning). I will present research showing that task-irrelevant learning can occur for motion stimuli that are paired with the targets of a letter identification task. These results are consistent with a learning model in which long-term sensitivity enhancements to task-relevant or irrelevant stimuli occur as a result of timely interactions between diffuse signals triggered by task performance and signals produced by stimulus presentation. To test this model and demonstrate that high-level processing is necessary for this unconscious, automatic learning, research on a "blink" in attentional processing, which occurs when subjects must process two task targets presented in rapid succession, is adapted to study perceptual learning. This "blink" has been shown to result from a bottleneck in high-level processing (such as decision making and memory encoding) but does not affect perceptual and semantic processing. Results show that while subjects obtain sensory improvements for motion stimuli presented outside the time window of this "attentional blink", no learning occurs for stimuli presented during the attentional blink.
Speaker: Shiffrin, Richard: Indiana University
Title: Paradoxes real and imagined
Abstract: I use a variant of the 'Exchange Paradox' to motivate a discussion of the psychological basis of rationality, and the consequent appearance and actuality of paradox. To whet the reader's appetite, the paradox follows: Suppose one flips a coin until a heads appears on flip n, and places 10^n dollars in one sealed envelope and 10^(n+1) dollars in the other. The envelope with the larger amount is handed to you with probability 0.8; otherwise you are handed the other envelope. You open your envelope and observe $X. You are to either keep this amount or irrevocably exchange it for the contents of the other envelope, with the goal of maximizing expected payoff. Strangely, it can be shown that one should exchange regardless of X. It seems paradoxical to 'always' exchange what you know to be the envelope with the higher probability of having the larger amount.
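A short calculation, under the setup exactly as stated, makes the claim concrete (illustrative code, not from the talk): whatever amount X = 10^k you observe, Bayes' rule says the expected contents of the other envelope exceed X.

def exchange_value(k):
    # You see 10^k. Either n = k-1 and you were handed the larger amount (prob 0.8),
    # or n = k and you were handed the smaller amount (prob 0.2). P(n) = (1/2)^n.
    w_larger = 0.0 if k == 1 else (0.5 ** (k - 1)) * 0.8
    w_smaller = (0.5 ** k) * 0.2
    p_larger = w_larger / (w_larger + w_smaller)
    x = 10.0 ** k
    expected_other = p_larger * (x / 10) + (1 - p_larger) * (10 * x)
    return p_larger, expected_other / x

for k in range(1, 6):
    p, ratio = exchange_value(k)
    print(f"see $10^{k}: P(holding larger) = {p:.3f}, E[other]/X = {ratio:.2f}")

For k = 1 exchanging is a sure gain, and for every k >= 2 the expected ratio is 1.2, even though the envelope you hold is, a priori, the larger one with probability 0.8.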
Speaker: Sperling, George: University of California Irvine
Title: How the two eyes combine information: A neurally-plausible mathematical theory and some supporting evidence
Abstract: In binocular combination, light images on the two retinas are combined to form a single "cyclopean" perceptual image, in contrast to binocular rivalry, which occurs when the two eyes have incompatible ("rivalrous") inputs and only one eye's stimulus is perceived. We propose a computational theory for binocular combination based on two neurally plausible principles of interaction: in every spatial neighborhood (1) each eye exerts gain control on the other eye's signal in proportion to the contrast energy of its own input and (2) each eye additionally exerts gain control on the other eye's gain control. For stimuli of ordinary contrast, when either eye is stimulated alone, the predicted cyclopean image is the same as when both eyes are stimulated equally -- a significant nonlinearity that coincides with an easily observed property of natural vision. The gain-control theory is contrast dependent: Very low-contrast stimuli to the left and right eyes add linearly to form the predicted cyclopean image. The intrinsic nonlinearity manifests itself only as contrast increases. To test the theory more precisely, 48 combinations of horizontal sinewave gratings that differ in phase and contrast are presented to each eye; the apparent phase of the cyclopean sinewave indicates the relative contributions of the two eyes. In another experiment, noise added to the stimulus in one eye is shown to cause that eye to dominate in binocular combination. In all, six experiments define the parameters of the theory; the theory accounts for 96.9 to 99.4% of the variance of these data. Conclusion: A simple, robust, physiologically plausible gain-control theory accurately describes an early stage of binocular combination.
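One common way to write down the two interaction principles is sketched below (illustrative only; the talk's exact parameterization, including any fitted exponents and constants, may differ): each eye's signal is divided by a gain-control term driven by the other eye's contrast energy, and that gain-control term is itself divided by the first eye's energy.

import numpy as np

def cyclopean(i_left, i_right, e_left, e_right):
    # Gain control on the signal, plus gain control on the gain control.
    gl = 1.0 / (1.0 + e_right / (1.0 + e_left))
    gr = 1.0 / (1.0 + e_left / (1.0 + e_right))
    return gl * i_left + gr * i_right

x = np.linspace(0, 2 * np.pi, 200)
for contrast in (0.005, 0.5):
    energy = (contrast / 0.02) ** 2          # illustrative contrast-energy scaling
    grating = contrast * np.sin(x)
    mono = cyclopean(grating, 0.0 * x, energy, 0.0)
    binoc = cyclopean(grating, grating, energy, energy)
    print(f"contrast {contrast}: binocular/monocular amplitude ratio = "
          f"{binoc.max() / mono.max():.2f}")

With these illustrative numbers, binocular and monocular presentation produce nearly the same cyclopean amplitude at ordinary contrast, while very low-contrast inputs add nearly linearly, matching the two behaviors described above.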
Speaker: Stewart, Neil: University of Warwick, Warwick, UK
Title: Decision by sampling
Abstract: We present a theory of decision by sampling (DbS) in which, in contrast with traditional models, there are no underlying psychoeconomic scales. Instead, we assume that an attribute's subjective value is constructed from a series of binary, ordinal comparisons to a sample of attribute values drawn from memory and is its rank within the sample. We assume that the sample reflects both the immediate distribution of attribute values from the current decision's context and also the background, real-world distribution of attribute values. DbS accounts for concave utility functions; losses looming larger than gains; hyperbolic temporal discounting; and the overestimation of small probabilities and the underestimation of large probabilities.
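A minimal sketch of the rank-based valuation DbS assumes (the background distribution, sample size, and context values below are illustrative): a target amount is scored by the proportion of binary, ordinal comparisons it wins against a sample drawn from memory plus the immediate decision context. With a positively skewed background distribution of gains, the resulting subjective values increase in a concave fashion, which is the DbS route to concave utility.

import numpy as np

rng = np.random.default_rng(3)

def subjective_value(target, context, sample_size=20, reps=2000):
    # Average over repeated small memory samples drawn from a skewed "real world"
    # distribution of gains (small amounts far more common than large ones),
    # mixed with the attribute values present in the current choice.
    vals = []
    for _ in range(reps):
        background = rng.exponential(scale=50.0, size=sample_size)
        sample = np.concatenate([background, context])
        vals.append(np.mean(target > sample))   # proportion of comparisons won
    return float(np.mean(vals))

context = np.array([10.0, 200.0])               # attribute values in the current decision
for amount in (10, 50, 100, 200, 400):
    print(f"${amount:>3}: subjective value ~ {subjective_value(amount, context):.2f}")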
Speaker: Thomas, Rick: University of Oklahoma
Title: A model of hypothesis generation, probability judgment, and information search.
Abstract: The talk will present a summary of our work with HyGene, a cognitive process model of how decision makers generate, evaluate, and test hypotheses. The model is an integration of theoretical constructs from the long-term memory, working memory, and judgment and decision making literatures. In simulations, we illustrate HyGene's account of several judgment phenomena, including the effects of WM limitations and time pressure on hypothesis generation and probability judgment. Implications of the model for understanding the conditions that lead to diagnostic information search versus positive testing and pseudodiagnostic information search will be discussed. In general, our work with HyGene is focused on understanding how well-known memory constructs (e.g., similarity-graded retrieval and limited working-memory capacity) systematically constrain judgment processes. Much of our work has found that memory processes lead to hypothesis generation, evaluation, and testing behaviors that are quite adaptive.
Speaker: Tjan, Bosco: University of Southern California
Title: Classification images, spatial uncertainty, and visual crowding
Abstract: Invariance or constancy is a hallmark of visual processing. Linear techniques such as classification images and spike-triggered averaging are thought to be incapable of recovering the front-end template or receptive-field structure of a higher-order visual mechanism whose response may be invariant to the position, size, or orientation of a target. Higher-order techniques (such as spike-triggered covariance) also cannot handle even the simplest kind of positional invariance for spatially broadband stimuli. Using the max-pooling property of a typical uncertainty model, we show analytically, in simulations, and with human experiments (single-letter identification in fovea and periphery, with and without positional uncertainty) that the effect of intrinsic uncertainty (i.e. invariance) can be reduced or even eliminated by embedding a signal of sufficient strength in the masking noise of a classification-image experiment. We refer to this technique as "signal clamping". We show that the signal-clamped classification images from the error trials contain a clear high-contrast image that is negatively correlated with the perceptual template associated with the presented signal; they also contain a low-contrast "haze" that is positively correlated with the superposition of all the templates associated with the erroneous response. In the case of positional uncertainty, we show that this "haze" provides an estimate of the spatial extent of the uncertainty. With the effect of intrinsic uncertainty significantly reduced by signal clamping, we further show that a covariance analysis can be applied to different regions of a classification image to reveal the elementary features that are the components of the perceptual template seen in the classification image. We applied this technique to study the change in features and feature integration during visual crowding.
Speaker: Wagenmakers, Eric-Jan: University of Amsterdam, The Netherlands
Title: Modeling choice behavior in the Iowa gambling task
Abstract: The purpose of the Iowa gambling task, developed by Damasio and Bechara, is to mimic real-life decision making in an experimental context. The Iowa gambling task has recently been used to assess decision making deficiencies in several different clinical populations. Busemeyer and Stout proposed a reinforcement learning model to account for choice behavior in the Iowa gambling task. Their model incorporates three components of decision making: weighing of gains versus losses, memory for past payoffs, and response consistency. A test of specific influence demonstrates the validity of the model. Based on a large-sample study, it is argued that despite the validity of the model, care should be taken when the model is applied to clinical diagnosis on the level of the individual. Several extensions of the model are discussed.
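A schematic sketch of a learning model with the three components listed above (weighting of gains versus losses, memory for past payoffs, and response consistency); the functional forms and parameter values below are illustrative and should not be read as Busemeyer and Stout's exact specification.

import numpy as np

rng = np.random.default_rng(4)

def choice_probabilities(choices, wins, losses, w=0.3, a=0.2, c=1.0):
    # w: attention to losses vs gains; a: recency of past payoffs; c: response consistency.
    ev = np.zeros(4)                             # expectancies for the four decks
    probs = []
    for t, (deck, win, loss) in enumerate(zip(choices, wins, losses), start=1):
        theta = (t / 10.0) ** c                  # consistency changes over trials
        z = theta * ev
        z -= z.max()                             # numerically stable softmax
        p = np.exp(z) / np.exp(z).sum()
        probs.append(p[deck])
        valence = (1 - w) * win - w * loss       # weighting of gains versus losses
        ev[deck] += a * (valence - ev[deck])     # recency-weighted update of the chosen deck
    return np.array(probs)

# Toy data: 20 trials of deck choices with their experienced wins and losses.
choices = rng.integers(0, 4, size=20)
wins = rng.uniform(50, 100, size=20)
losses = rng.uniform(0, 250, size=20)
print("mean predicted probability of the observed choices:",
      choice_probabilities(choices, wins, losses).mean())

Fitting the three parameters to an individual's trial-by-trial choices is what allows deficits to be attributed to motivational, memory, or consistency components, which is also why parameter estimates for single individuals must be interpreted with care.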
Speaker: Watanabe, Takeo: Boston University
Title: Perceptual learning without perception is not passive and results in robust perception
Abstract: The brain demonstrates an amazing ability to become increasingly sensitive to important stimuli. It is often claimed that we become more sensitive only to the critical signals in the tasks we attend to. However, our recent series of experiments has shown that perceptual learning occurs with little attention. First, mere exposure to sub-threshold and task-irrelevant motion coherence signals led to enhancement in sensitivity to the motion direction. This finding indicates that attention is not necessary for perceptual learning (Watanabe, Nanez & Sasaki, 2001, Nature). Second, exposure to two types of task-irrelevant motion that are processed at different levels of visual processing improved sensitivity only at the lower level. These results suggest that task-irrelevant perceptual learning occurs at a very low level (Watanabe et al, 2002, Nat Neuroscience). Third, we addressed the question as to whether such task-irrelevant learning occurs purely passively (caused by stimulus exposure). During exposure, we presented four different directions of motion an equal number of times, but the direction of interest (DOI) was paired with the task targets. If learning is purely passive, thresholds should improve equally for all the presented directions. Surprisingly, the threshold improved only for the DOI. These results show that learning of a task-irrelevant and sub-threshold feature is not purely passive, but occurs only when the feature is correlated with a task target (Seitz & Watanabe, 2003, Nature). Finally, we have recently found that such learning is so robust that it sometimes results in perception of the exposed direction even when nothing is presented (Seitz, et al, 2005, PNAS). Based on these findings, we propose a model in which diffuse reinforcement learning signals play an important role, complementary to focused attention, in perceptual learning.
Speaker: Yu, Chen: Indiana University
Title: Statistical Cross-Situational Learning to Build Word-to-World Mappings
Abstract: There are an infinite number of possible word-to-world pairings in naturalistic learning environments. This mapping problem might be constrained in the task of word learning by computing distributional statistics across words, across referents, and, most importantly, across the co-occurrences of these two at multiple moments. As a preliminary test of this idea, we briefly exposed adults to multiple spoken words and multiple pictures of individual objects with no information about word-picture correspondences. Nonetheless, subjects learned the word-picture mappings over trials through cross-trial statistical relations. Different learning conditions varied the degree of within-trial reference uncertainty, the number of trials, and the length of trials. We also propose and implement a computational model and feed it the same training data used in the different learning conditions of the experimental studies, to shed light on the possible mechanisms underlying statistical learning. Overall, these results suggest that statistical cross-situational learning may be one of the fundamental mechanisms for tackling the word-to-world mapping problem.
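A minimal sketch of the cross-situational idea (not the computational model from the talk; vocabulary size, trial size, and trial count are illustrative): word-referent co-occurrences are accumulated over individually ambiguous trials, and choosing each word's most frequent co-occurring referent recovers the correct pairings.

import random
from itertools import product

random.seed(5)

words = ["ball", "dog", "cup", "shoe", "book", "key"]
referents = {w: w.upper() for w in words}            # the true word-to-world pairing

# Each training trial presents several words and their referents with no
# information about which word goes with which picture.
counts = {}
for _ in range(60):
    trial_words = random.sample(words, 3)
    trial_refs = [referents[w] for w in trial_words]
    for w, r in product(trial_words, trial_refs):    # all within-trial co-occurrences
        counts[(w, r)] = counts.get((w, r), 0) + 1

# A learner that picks, for each word, the referent it co-occurred with most often
# recovers the mapping despite the ambiguity within any single trial.
learned = {w: max(referents.values(), key=lambda r: counts.get((w, r), 0)) for w in words}
print("correct mappings:", sum(learned[w] == referents[w] for w in words), "of", len(words))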
Speaker: Zeelenberg, Rene: Erasmus University Rotterdam, The Netherlands
Title: False recognition of nonwords
Abstract: Participants studied lists of nonwords (e.g., froost, floost, stoost, etc.) that were orthographically and phonologically similar to a nonpresented critical lure, which was also a nonword (e.g., ploost). Experiment 1 showed a high level of false recognition for the critical lure. Experiment 2 showed that the false recognition effect was also present for forewarned participants who were informed about the nature of the false recognition effect and told to avoid making false recognition judgments. Experiment 3 used an overt-rehearsal paradigm and showed that the lure was almost never rehearsed during study. The present results show that false recognition effects can be obtained even when the critical lure itself is not stored during study. These findings are problematic for currently popular accounts of false recognition that attribute false memories to implicit associative responses or spreading activation, but they are easily explained by global familiarity models of recognition memory.
