Ninth Annual Summer Interdisciplinary Conference

Authors, Titles, Abstracts


(When sufficient titles and abstracts for talks and posters arrive, I will begin posting them in this section.)

Listing by speaker


B
Speaker's Name:Gordon Brown
First Author's Name:Gordon Brown
First Author's Affiliation:University of Warwick
Title:The Rank Principle in Economics and Psychology
Abstract:Several cognitive models of judgement and decision-making suggest that the relative ranked position of an option within a comparison set is an important determinant of judgement and choice. I will summarise evidence for the relative rank principle in a number of different contexts, ranging from actual product choice to social attitudes, including how life satisfaction and propensity to mental illness relate to income; self-perception of alcohol consumption and body image; and student satisfaction (work carried out in collaboration with Alex Wood). I will also describe some agent-based simulations that show how rank-based social comparison can lead to herd behaviour in interconnected networks of status-conscious agents.
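
The rank-based comparison process described above lends itself to a compact illustration. The following is only a minimal sketch under assumed dynamics, not Brown's actual simulations: hypothetical agents on a ring network judge their own consumption by its rank among their neighbours and imitate higher-ranked neighbours when they rank low, which can ratchet the group mean upward (a herd-like outcome).

```python
import random

# Minimal sketch (assumed dynamics, not the speaker's model): agents on a ring
# network evaluate their consumption by its relative rank among neighbours and
# nudge it upward when they rank low, pushing the whole group's mean up.
N, STEPS = 50, 200
consumption = [random.gauss(10, 2) for _ in range(N)]

def neighbours(i, k=2):
    return [(i + d) % N for d in range(-k, k + 1) if d != 0]

for _ in range(STEPS):
    i = random.randrange(N)
    local = [consumption[j] for j in neighbours(i)]
    rank = sum(c < consumption[i] for c in local) / len(local)  # relative rank in the comparison set
    if rank < 0.5:                        # below the local median: imitate higher-ranked neighbours
        consumption[i] += 0.1 * (max(local) - consumption[i])

print("mean consumption after simulation:", round(sum(consumption) / N, 2))
```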

Speaker's Name:Tom Busey
First Author's Name:Tom Busey
First Author's Affiliation:Indiana University
Title:Machine Learning Approaches to the Information Content of Latent Fingerprints
Abstract:Latent print examiners have the difficult task of matching noisy, distorted latent prints collected at crime scenes to inked prints taken from suspects. Although practiced for more than 100 years, latent print examinations have few standards for training and reliability. Recent criticism from the National Academies has prompted a reevaluation of the accuracy and methods of these examinations. In this talk I will discuss the nature of expertise as inferred from eye gaze data, and use machine learning approaches to infer the basis or feature set used by experts. This feature set has applications for approaches that seek to quantify the amount of information in latent prints, and therefore their utility for law enforcement and the judicial system.

C
Speaker's Name:Allison Chapman
First Author's Name:Allison Chapman
First Author's Affiliation:The Ohio State University
Second Author's Name:Simon Dennis
Second Author's Affiliation:The Ohio State University
Title:The Null List Length Effect in a Sternberg Paradigm
Abstract:Debate continues regarding the extent to which contextual and item-related noise contribute to interference in recognition memory. A number of recent studies (Dennis & Humphreys, 2001; Dennis, Lee, & Kinnell, 2008) demonstrate that experimental constraints such as a retention interval nullify the list length effect. These findings suggest that pure-exemplar models must be augmented to account for categorical representation of study items (Dennis & Chapman, under revision). Sternberg (1966) demonstrated a positive linear relationship between number of items at study and reaction time in a recognition test. The aim of the current research is to apply a retention interval to the Sternberg paradigm to explore our list length findings in a short-term task. The results from experiment 1 demonstrate that although there are no strong recency effects, longer list length increases reaction times in both immediate and delayed conditions. Experiment 2 implements feedback during the distractor task to examine the assumption that the filler was ineffective. Preliminary results from experiment 2 demonstrate a null effect of length in the delayed condition. These findings suggest that a retention interval, when filled with an engaging task, is sufficient to induce contextual reinstatement. Final conclusions will be drawn regarding the relative involvement of item and context noise across short and long term memory.

Speaker's Name:Eliana Colunga
First Author's Name:Eliana Colunga
First Author's Affiliation:University of Colorado, Boulder
Title:Understanding word learning in early- and late-talkers
Abstract:In typical development, word learning goes from slow and laborious to seemingly rapid and effortless. Typically developing 3-year-olds are so skilled at learning noun categories that they seem to intuit the whole range of things in the category from hearing a single instance named, for example, consistently extending the name of a novel solid object to others of the same shape regardless of color, material, or texture. Unlike typically developing children, late talkers (children below the 15th-20th percentile on productive vocabulary) do not consistently extend new names for objects by shape, and in fact some of them show a consistent preference for texture. In this talk I will discuss the developmental interplay between these two things: vocabulary and novel noun generalizations. I will present data from early- and late-talking toddlers showing different patterns of generalization for novel words and simulations showing how these different patterns of generalization can emerge from differently structured vocabularies.

Speaker's Name:Rosie Cowell
First Author's Name:Rosie Cowell
First Author's Affiliation:University of California, San Diego
Second Author's Name:David Huber
Second Author's Affiliation:University of California, San Diego
Third Author's Name:John Serences
Third Author's Affiliation:University of California, San Diego
Title:Virtual Multi-Unit Recording: Inferring neural response profiles from fMRI data
Abstract:Functional Magnetic Resonance Imaging (fMRI) is the premier non-invasive experimental technique for localizing brain responses in humans, but its spatial resolution is poor relative to in vivo electrophysiological recording of single neuron responses. We propose an analytic tool that may greatly increase the utility of fMRI, by enabling us to 'look inside' the fMRI voxel. Voxels in visual cortex exhibit fMRI responses that are selective for the orientation of a visual stimulus, similar to orientation-selective tuning functions observed for single neurons. We assume that voxel tuning functions (VTFs) arise from some underlying population of neurons that are characterized by a set of neural tuning functions (NTFs). Manipulation of a scanned subject's cognitive state (e.g., viewing a visual stimulus with or without selective attention) can induce a change in the shape of a VTF via modulation of the underlying NTFs. However, there are several plausible mechanisms of changes at the neural level that can produce seemingly equivalent changes in the VTF. This is because the voxel is composed of different subpopulations of neurons, each with their own response function, but the distribution of these subpopulations is unknown. Distinguishing between alternative models of neural modulation by examining only changes in the BOLD signal is thus an inverse problem in which voxel activations must be mapped back to neural activations. We present feasibility studies for a technique to solve this inverse problem, employing model fitting and model recovery to determine the quantity of fMRI data required to distinguish between alternative models of neuromodulation.
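
The inverse problem described above can be illustrated with a small numerical sketch. The code below uses assumed Gaussian tuning shapes and a random mixture of subpopulations (none of this is the authors' implementation): a voxel tuning function (VTF) is built as a weighted sum of neural tuning functions (NTFs), and two different neural modulations, multiplicative gain versus tuning sharpening, produce qualitatively similar changes at the voxel level.

```python
import numpy as np

# Minimal sketch (assumed forms, not the authors' method): the VTF is a weighted
# sum of Gaussian NTFs; gain change and tuning sharpening both alter the VTF,
# illustrating why voxel-level changes alone underdetermine the neural mechanism.
theta = np.linspace(-90, 90, 181)                  # stimulus orientation (deg)
centres = np.linspace(-90, 90, 13)                 # preferred orientations of subpopulations
weights = np.random.default_rng(0).dirichlet(np.ones(len(centres)))  # unknown mixture within the voxel

def ntf(pref, gain=1.0, width=20.0):
    return gain * np.exp(-0.5 * ((theta - pref) / width) ** 2)

def vtf(gain=1.0, width=20.0):
    return sum(w * ntf(c, gain, width) for w, c in zip(weights, centres))

baseline  = vtf()
gain_up   = vtf(gain=1.3)     # attention modelled as a multiplicative gain increase
sharpened = vtf(width=15.0)   # attention modelled as tuning sharpening
print("peak VTF change, gain model:      ", round(gain_up.max() - baseline.max(), 3))
print("peak VTF change, sharpening model:", round(sharpened.max() - baseline.max(), 3))
```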

Speaker's Name:Greg Cox
First Author's Name:Greg Cox
First Author's Affiliation:Indiana University
Second Author's Name:Richard Shiffrin
Second Author's Affiliation:Indiana University
Title:On the Formation of Associative Structure in Knowledge
Abstract:Extending the thesis research of Angela Nelson (2009), we seek to bring the structure of knowledge under experimental control, with the aim of investigating how associations are formed, represented, and used by memory. Participants with no prior knowledge of Chinese are trained over several weeks to match Chinese characters to themselves, trying to detect slight physical changes. Although it is incidental to the change-detection task, each training trial presents a pair of characters, permitting incidental associations between characters to form. The pairs are drawn from a set of associative structures, including linear chains of varying length (e.g., A-B-C-D-E) and binary trees. The nature of any associations that may have formed between characters as a result of training is probed with a primed pseudo-lexical decision task, as well as perceptual discrimination and episodic recognition tasks. In primed pseudo-lexical decision (PPLD), a non-diagnostic prime character is presented prior to a target character, requiring a decision whether the target character had appeared during training. Although paired and unpaired characters are equally discriminable, priming is observed in PPLD when the target character was paired with the prime during training (A-B), and inhibition is observed for other trained prime characters, particularly when the prime is one-removed from the target (A-C). These results suggest that although direct associations can form under these highly-constrained conditions, indirect associations do not, or require different tasks to be evident.

Speaker's Name:Amy Criss
First Author's Name:William Aue
First Author's Affiliation:Syracuse University
Second Author's Name:Amy Criss
Second Author's Affiliation:Syracuse University
Third Author's Name:Nicholas Fischetti
Third Author's Affiliation:Syracuse University
Title:Item repetition within (but not across) pair type helps and harms cued recall
Abstract:The impact of item and associative information on memory was investigated in four experiments. Employing a list discrimination paradigm, items that were presented in one pair-type in an initial study list were rearranged to create new pairs of either the same or a different pair-type in a second study list. Repeating items in the same pair-type across lists led to a greater number of correct and incorrect responses when participants were tested using cued recall. When items were presented in different pair-types across lists, the effect was eliminated. Likewise, when the test did not require associative information (i.e., single item recognition), there was no difference between same/different pair-type conditions. These results suggest that information specific to the pair-type (i.e., associative information) is stored and utilized strategically during memory searches.

D
Speaker's Name:Eddy Davelaar
First Author's Name:Eddy Davelaar
First Author's Affiliation:Birkbeck, University of London
Title:Modelling semantic fluency
Abstract:Semantic fluency is a test in which participants report as many exemplars of a given category as possible within a fixed time limit (1 or 2 minutes). This simple task attracted much attention in the neuropsychological literature in the early 1990s for its use in determining the structure of semantic memory. Various concerns about the statistical procedure used to achieve this have been voiced. These concerns relate to statistical reliability, sensitivity to other processes that influence the order in which items are reported, and simply the lack of validation studies. In this talk, I will show by means of computer simulation how the commonly accepted interpretation of performance is incorrect. A SAM model of semantic fluency is gradually built up, showing at each step the difficulties with capturing performance in this task. This process is paralleled by a series of experiments and norming studies to obtain the necessary data sets. The main conclusion of this project so far is that in order to simulate semantic fluency, the model needs a very rich repertoire of retrieval mechanisms. These results have dramatic implications for models of free recall that assume contextual retrieval. In addition, the model suggests a way forward beyond the debates on the valid use of semantic fluency in screening for dementia.

Speaker's Name:Simon Dennis
First Author's Name:Simon Dennis
First Author's Affiliation:Ohio State
Title:What do cows drink?
Abstract:Lower level language tasks like spoken and written word identification are often modeled using relatively simple associative models. As one moves up the scale to sentence processing, however, more complicated representations such as trees and propositional units are often invoked. In this talk, I will make the case that this is unnecessary - that hierarchical phrase structure and propositional knowledge can be captured by the right kind of simple associative model. Furthermore, an associative model can be easily trained from unannotated input and accounts for data in recognition priming and cued recall tasks that are not easily accommodated under other representational schemes.

Speaker's Name:Stephen Denton
First Author's Name:Stephen Denton
First Author's Affiliation:Indiana University
Second Author's Name:Richard Shiffrin
Second Author's Affiliation:Indiana University
Title:Short-term Word Priming Across Eye Movements
Abstract:The authors conducted short-term priming experiments (using a visual, forced-choice, word identification task) that compared a standard priming condition, where prime and target words appeared in the same spatial location, with an experimental condition in which prime and target words were separated enough to necessitate an eye movement. Prime presentation duration was manipulated and, in both eye movement conditions, short primes produced a preference to choose a prime alternative, whereas, for longer duration primes this preference was absent. Based on modeling results, it is argued that prime and target features from separate fixations are still confusable and that evidence regarding prime features must still be discounted.

Speaker's Name:Christopher Donkin
First Author's Name:Christopher Donkin
First Author's Affiliation:Indiana University
Second Author's Name:Denis Cousineau
Second Author's Affiliation:University of Montreal
Third Author's Name:Richard Shiffrin
Third Author's Affiliation:Indiana University
Title:Guided Visual Search
Abstract:Cousineau and Shiffrin (2004) presented a model for the control conditions of a visual search study. Multiple modes were observed for target present trials and the model posited serial terminating comparisons, partially and probabilistically guided to the target location by a separate parallel automatic process. We expand and extend this model to the experimental conditions in which the visual display objects are presented successively, at speeds fast enough that the displays appear simultaneous. These successive display conditions place strong constraints on models, and allow us to develop a quite precise account for parallel and serial processes operating together in visual search.

Speaker's Name:Barbara Anne Dosher
First Author's Name:Barbara Anne Dosher
First Author's Affiliation:Memory, Attention, Perception Lab (MAP-Lab), Department of Cognitive Sciences, University of California, Irvine, CA
Second Author's Name:Wilson Chu
Second Author's Affiliation:Memory, Attention, Perception Lab (MAP-Lab), Department of Cognitive Sciences, University of California, Irvine, CA
Third Author's Name:Zhong-Lin Lu
Third Author's Affiliation:Laboratory of Brain Processes (LOBES), Department of Psychology, University of Southern California, Los Angeles, CA
Title:How attention interacts with precision of judgment.
Abstract:Focused attention, whether spatially cued or object attention, improves visual identification, especially in the presence of visual noise in the stimulus. Spatial cues and division of attention over objects both show larger effects on discrimination accuracy for judgments of somewhat higher precision. At the same time, the precision of stimulus representations available for immediate report can be assessed for the same attention paradigms using immediate report methods similar to those recently used to study visual working memory. One example of variation in judgment precision requires distinguishing between patterns differing in orientation by larger or smaller angles. In a related analysis, precision of an immediate report representation may be assessed. There are distinct but inter-related effects on perception and its interaction with attention and representational or test precision across a range of testing conditions. Perceptual models that integrate precision into the perceptual decision reveal important effects in the impact of attention on performance.

E
Speaker's Name:Samantha Emerson
First Author's Name:Samantha Emerson
First Author's Affiliation:Middle Tennessee State University
Second Author's Name:Cyrille Magne
Second Author's Affiliation:N/A
Title:Learning New Rhythms: The Relationship between Music Aptitude and Second Language Proficiency
Abstract:Research indicates a close link between the neural and cognitive processes involved in both music and language. In fact, music aptitude has been correlated with language skills, and musical expertise has been shown to improve the perception of certain aspects of language such as pitch and rhythm. However, little research has been done on the relationship between music aptitude and the learning of a second language (L2). The purpose of the present study was to extend research on the effect of music aptitude on L2 proficiency by investigating the perception of speech rhythm using both behavioral and Event-Related Potential (ERP) measures. English-speaking learners of French with either high music aptitude (HMA) or low music aptitude (LMA) listened to spoken French sentences ending with a trisyllabic noun that was either rhythmically congruous or incongruous. ERPs time-locked to the onset of these critical words were recorded and the results were compared between the two groups. In comparison to the LMA group, the HMA group exhibited an N400-like component in response to the semantically incongruous words, which was interpreted as reflecting the difficulty of integrating the word into the context of the sentence. The HMA group also displayed a late positivity in response to the rhythmically incongruous words, reflecting an increased sensitivity to the rhythmic structure of French. The practical implications of these results for extracurricular music programs and L2 learning are discussed.

G
Speaker's Name:Tarun Gangwani
First Author's Name:Tarun Gangwani
First Author's Affiliation:Indiana University
Second Author's Name:George Kachergis
Second Author's Affiliation:Indiana University
Third Author's Name:Chen Yu
Third Author's Affiliation:Indiana University
Title:Simultaneous Cross-situational Learning of Category and Object Names
Abstract:Previous research shows that people can acquire an impressive number of word-referent pairs after viewing a series of ambiguous trials by accumulating co-occurrence statistics (e.g., Yu & Smith, 2007). The present study extends the cross-situational word learning paradigm, which has primarily been used to investigate the acquisition of 1-to-1 word-referent mappings, and shows that humans can concurrently acquire both 1-to-1 and 1-to-many mappings (i.e., a category relation), even when the many referents of a single word have no unifying perceptual features. Thus, humans demonstrate an impressive ability to simultaneously apprehend hierarchical regularities in their environment.

Speaker's Name:Steven Gibson
First Author's Name:Steven Gibson
First Author's Affiliation:California State University - Northridge
Title:Modeling building blocks for language production and processing
Abstract:This paper describes the design and implementation of software that models an aspect of language use. The software postulates that human linguistic activities can be treated as modules that can be assembled and joined together, and it represents the activity of these modules in the production of language behaviors. The model asserts that these modules interpret sensory data and respond with language behaviors. This paper seeks to offer a unified model to explain language production and processing. The software demonstrates how elements are added to fulfill language needs. The model also expresses the coexistence of multiple modules that interact with and support each other. While this model does not offer a complete solution to all language use, it presents indications that language processing can be modeled computationally. Future work to build on the model will be suggested.

Speaker's Name:Robert Goldstone
First Author's Name:Robert Goldstone
First Author's Affiliation:Indiana University
Second Author's Name:David Landy
Second Author's Affiliation:University of Richmond
Third Author's Name:Ji Son
Third Author's Affiliation:California State University - Los Angeles
Title:The Education of Perception
Abstract:While the field of perceptual learning has mostly been concerned with low- to middle-level changes to perceptual systems due to experience, we consider high-level perceptual changes that accompany learning in algebraic reasoning. We find that proficiency in mathematics involves executing spatially explicit transformations to notational elements. People learn to attend to mathematical operations in the order in which they should be executed, and the extent to which students employ their perceptual attention in this manner is positively correlated with their mathematical experience. Relatively sophisticated performance is achieved not by ignoring perceptual features in favor of deep conceptual features, but rather by adapting perceptual processing so as to conform with and support formally sanctioned responses. These “Rigged Up Perceptual Systems” (RUPS) offer a promising approach to educational reform.

Speaker's Name:Reyna Gordon
First Author's Name:Reyna Gordon
First Author's Affiliation:Center for Complex Systems and Brain Sciences, Florida Atlantic University
Second Author's Name:Cyrille Magne
Second Author's Affiliation:Psychology Department, Middle Tennessee State University
Third Author's Name:Edward Large
Third Author's Affiliation:Center for Complex Systems and Brain Sciences, Florida Atlantic University
Title:The influence of metrical alignment on the perception of song lyrics
Abstract:Language and music each have their own metrical structures, but in song these rhythmic patterns are unified to form one prosodic realization. The present study was designed to explore the idea that well-aligned textsettings, in which strong syllables occur on strong beats, capture listeners’ attention and help them to better understand song lyrics. EEG was recorded while participants listened to well-aligned and misaligned sung sentences and performed a lexical decision task on subsequently presented visual targets. Results showed that induced beta and evoked gamma synchronization were modulated differently for well-aligned and misaligned syllables, and that task performance was adversely affected when visual targets followed misaligned sentences. These findings suggest that alignment of linguistic stress and musical meter in song enhances beat tracking and linguistic segmentation by entraining periodic neural oscillations to the stimulus.

Speaker's Name:Tom Griffiths
First Author's Name:Tom Griffiths
First Author's Affiliation:University of California, Berkeley
Title:Effects of inductive biases on the creation of communication systems
Abstract:Accounts of language evolution have tended to focus on two kinds of forces that can change the structure of a language: cultural transmission, and the goal of producing a shared communication system. Both of these forces rely on learning, as people need to infer the structure of a language from the utterances of other people in both cases. However, the effects of inductive biases - those factors that make some languages easier to learn than others - have only been explored in the context of cultural transmission. I will present a mathematical analysis of a simple model of the creation of a communication system by Bayesian agents, and describe the results of two experiments testing the predictions of this model in the laboratory with human learners. Both model and data suggest that inductive biases can have a strong influence on the creation of communication systems.

H
Speaker's Name:Kameko Halfmann
First Author's Name:Kameko Halfmann
First Author's Affiliation:St. Olaf College
Second Author's Name:Clark Ohnesorge
Second Author's Affiliation:St. Olaf College
Title:The effect of the manipulation of emotion on the perceptual representation of space
Abstract:Emotion plays an adaptive role in mediating our attentional resources by filtering sensory information, providing meaning to situations, and selectively guiding information already available to the system prior to the allocation of focal attention. The relative activation hypothesis states that positive/approach and negative/withdrawal emotions are neuroanatomically dissociable. Positive/approach emotions are associated with relatively greater activity in the left frontal hemisphere than the right frontal hemisphere, whereas negative/withdrawal emotions are associated with relatively greater activity in the right frontal hemisphere than the left frontal hemisphere (Davidson, 1992). Right pseudoneglect is an attentional phenomenon resulting from the dominance of the right hemisphere in visuo-spatial attention. Phenomenologically it yields a pronounced leftward bias in the traditional line bisection task (McCourt & Olafson, 1997). This experiment employs a mood induction procedure to elicit emotions prior to the completion of a standard pen and paper line bisection task to study the effect of the relative activation of the frontal hemispheres due to emotion condition on visual attention. The results suggest that the manipulation of frontal activation alters the perceptual representation of space.

Speaker's Name:Pernille Hemmer
First Author's Name:Pernille Hemmer
First Author's Affiliation:University of California, Irvine
Second Author's Name:Mark Steyvers
Second Author's Affiliation:University of California, Irvine
Title:Estimating Individual Differences in a Rational Model of Memory
Abstract:In recalling the size of fruits and vegetables, the height of people, objects in scenes, and the order of events, people appear to use a strategy of incorporating prior knowledge to improve average recall performance. This behavior can be explained by a rational model of memory that assumes that people seek to optimize recall performance by combining the available episodic and semantic information. It is, however, unclear whether people always perform rationally within a given recall task. In the current approach we seek to extend our previous work using a rational model of memory that allows for individual differences. We develop this approach within the framework of an empirical study in which people recall the height of females, males and ambiguous silhouettes. We assume that individuals can have varying levels of internal noise in memory as well as different levels of prior knowledge about the height of men and women. Individual differences are estimated using a Bayesian data analysis applied to the rational model. This approach therefore combines two kinds of Bayesian analysis. The Bayesian inference procedure from the perspective of the participant motivates a simple rational model of memory. The Bayesian inference procedure from the perspective of the experimenter allows individual differences within the rational model to be estimated.
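
The kind of prior-plus-episodic combination described above can be illustrated with a one-line posterior-mean calculation. This is only a sketch under assumed Gaussian forms and made-up parameter values, not the authors' exact model: recall is the precision-weighted average of a semantic prior over heights and a noisy episodic trace, and individuals differ in memory noise and prior knowledge.

```python
# Minimal sketch (assumed Gaussian prior and memory noise, hypothetical values):
# recalled height is the precision-weighted combination of prior and episodic trace.
def rational_recall(observed_cm, prior_mean_cm, prior_sd_cm, memory_noise_sd_cm):
    """Posterior mean: precision-weighted average of prior knowledge and the episodic trace."""
    prior_prec = 1.0 / prior_sd_cm ** 2
    mem_prec = 1.0 / memory_noise_sd_cm ** 2
    return (prior_prec * prior_mean_cm + mem_prec * observed_cm) / (prior_prec + mem_prec)

# A participant with noisy memory regresses a tall male silhouette toward the prior:
print(rational_recall(observed_cm=190, prior_mean_cm=178, prior_sd_cm=7, memory_noise_sd_cm=10))
# -> about 182; a participant with low memory noise (sd=3) would stay much closer to 190.
```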

Speaker's Name:Douglas Hintzman
First Author's Name:Douglas Hintzman
First Author's Affiliation:University of Oregon
Title:Fads and Fallacies of Memory
Abstract:For the past three decades, research and theory on human memory have been dominated by a focus on the recognition-memory task. I argue that this focus has been bad for the field. It perpetuates misconceptions about how memory works, and ignores many basic functions that memory serves. To achieve a real understanding of memory, theorists will have to cast a much wider net. They need to consider a variety of laboratory tasks, observations from the study of everyday memory, and even the phenomenological perspective on what remembering is like.

Speaker's Name:Markus J. Hofmann
Add. Speaker's Name:Lars Kuchinke
First Author's Name:Markus J. Hofmann
First Author's Affiliation:Free University Berlin
Second Author's Name:Lars Kuchinke
Second Author's Affiliation:Free University Berlin
Third Author's Name:Sascha Tamm
Third Author's Affiliation:Free University Berlin
Fourth Author's Name:Chris Biemann
Fourth Author's Affiliation:University Leipzig
Fifth Author's Name:Arthur M. Jacobs
Fifth Author's Affiliation:Free University Berlin
Title:The Associative Read-Out Model predicts recognition memory performance
Abstract:The present study extended the Multiple Read-Out Model (Grainger & Jacobs, 1996) by an associative layer and tested the predictions of this Associative Read-Out Model (AROM) for recognition memory. Associative relations between words were implemented in terms of co-occurrence statistics. The AROM predicts not only false memory effects, but also that successful learning profits a great deal from associative relations: words with many associatively linked items in the stimulus set elicit more OLD responses, for both NEW and OLD stimuli, than words with a lower associative density. Moreover, it correctly predicts the standard OLD/NEW effect. These predictions were examined in a study-test task in which 30 participants rated their recognition performance on a 6-point confidence scale from ‘sure NEW’ to ‘sure OLD’. The AROM was shown to successfully account for item-level variance in OLD-response probabilities as well as Receiver Operating Characteristics.

Speaker's Name:Jared Hotaling
First Author's Name:Jared Hotaling
First Author's Affiliation:Indiana University
Second Author's Name:Andrew Cohen
Second Author's Affiliation:University of Massachusetts
Third Author's Name:Jerome Busemeyer
Third Author's Affiliation:Indiana University
Fourth Author's Name:Richard Shiffrin
Fourth Author's Affiliation:Indiana University
Title:Models of Information Integration in Perceptual Decision Making
Abstract:In cognitive science there is a seeming paradox: On the one hand, researchers studying judgment and decision making (JDM) have repeatedly shown that people employ simple and often sub-optimal strategies when integrating information from multiple sources. On the other hand, another set of researchers has had great success using Bayesian optimal models to explain information integration in fields such as categorization, perception, and memory. One impediment to reconciling this paradox lies in the different experimental methods each group has used. Recently, Hotaling, Cohen, Busemeyer, & Shiffrin (submitted) conducted a perceptual decision making study designed to bridge this methodological divide and test whether the sub-optimal integration found in verbal problems stated in terms of probabilities may also appear in perceptual tasks. Their results indicate that a classic JDM finding, the dilution effect, does arise in perceptual decision making. Observers were given strong evidence X favoring A over B, and weak evidence Y also favoring A over B. According to Bayesian analysis, the odds in favor of A should be multiplied, resulting in an increased likelihood of A. Instead, Hotaling et al. found that the weak evidence diluted the strong evidence, producing lower judgments and choice probabilities favoring A given X & Y than given X alone. I review these empirical findings and test both rational and cognitive models of the integration process.
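
The Bayesian benchmark against which the dilution effect is measured can be made concrete with a small worked example (illustrative numbers only, not the study's stimuli): with conditionally independent sources, likelihood ratios multiply, so adding weak evidence Y for A should increase, not decrease, the probability of A.

```python
# Minimal worked example (hypothetical likelihood ratios, not the study's values):
# under Bayes' rule, independent evidence combines multiplicatively on the odds scale.
prior_odds = 1.0       # A and B equally likely a priori
lr_strong_X = 4.0      # strong evidence favouring A over B
lr_weak_Y = 1.5        # weak evidence also favouring A over B

odds_X = prior_odds * lr_strong_X
odds_XY = prior_odds * lr_strong_X * lr_weak_Y

to_prob = lambda odds: odds / (1 + odds)
print(f"P(A | X)    = {to_prob(odds_X):.2f}")    # 0.80
print(f"P(A | X, Y) = {to_prob(odds_XY):.2f}")   # 0.86 -- the Bayesian answer goes up, not down
```

The dilution effect is the empirical finding that observers' judgments given X and Y fall below their judgments given X alone, contrary to this multiplication rule.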

Speaker's Name:David Huber
First Author's Name:Nitin Gupta
First Author's Affiliation:University of California, San Diego
Second Author's Name:Sara Mednick
Second Author's Affiliation:University of California, San Diego
Third Author's Name:Yoonhee Jang
Third Author's Affiliation:University of California, San Diego
Fourth Author's Name:David Huber
Fourth Author's Affiliation:University of California, San Diego
Title:The road not taken: Performance on the Remote Associates Test is best when word frequency is ignored
Abstract:The Remote Associates Test (the RAT) has been used for nearly 40 years as a reliable measure of creativity. Each test question in the RAT consists of three cue words, all of which associate to or from a common answer, and creativity is measured by the proportion of questions answered correctly. In 1962, Sarnoff Mednick proposed that good performance on the RAT occurs when people use a “flat associative hierarchy,” in which remote associations are considered rather than only strong associations. However, there are currently no theories that explain why people differ in their associative hierarchy. To provide the data needed to model the RAT, we normed 48 questions across hundreds of participants. We did so with speeded forced response instructions to induce many wrong answers. The model we developed explains both correct and wrong answers as neighbors within a high dimensional semantic space based on the positioning of the three cue words. This computational model performs well on the RAT, and it explains individual differences in terms of the degree to which word frequency biases some answers over others. Thus, a flat associative hierarchy is one in which frequency plays no role.

J
Speaker's Name:Kimberly A. Jameson
First Author's Name:Kimberly A. Jameson
First Author's Affiliation:Institute for Mathematical Behavioral Sciences, U.C. Irvine
Title:Human potential for tetrachromacy?
Abstract:Scientific advances over the past twenty years have led to a far better understanding of the relevance and physiological basis of color experience than ever before. Recent research in molecular genetics, color perception and cognitive psychology is clarifying the underpinnings of human color sensations, how color experience has evolved, and along which perceptual paths we might be headed as a species of color-experiencing individuals. Together, such advances suggest that extensions of color perception theory are needed to account for retinal photopigment diversity unanticipated by accepted models of color vision trichromacy. This paper summarizes relevant empirical and theoretical research on this subject from current human and animal literatures.

Speaker's Name:Matt Jones
First Author's Name:Matt Jones
First Author's Affiliation:University of Colorado
Second Author's Name:Darrell Worthy
Second Author's Affiliation:University of Texas
Third Author's Name:Shaw Ketels
Third Author's Affiliation:University of Colorado
Fourth Author's Name:Ross Otto
Fourth Author's Affiliation:University of Texas
Title:The phenomenology of multiple learning systems
Abstract:A primary defining criterion in dual-system theories of learning is whether the representations being learned are verbalizable. Thus a natural prediction is that verbal report should be accurate as to which system is controlling behavior. That is, subjects should be able to correctly report whether they are using explicit, verbalizable hypotheses or implicit, procedural associations. This prediction was tested using categorization tasks and model-based analyses that reliably identify which learning system is operating for each subject. Results show no correlation between self-perceived rule use and actual rule use. However, self-report is correlated with performance, with higher-performing subjects more likely to believe they used rules. Although these findings can be taken as an argument against multiple-system theories, they can also be interpreted in terms of limitations on metacognition, wherein cognitive self-perception operates via "third-person" inference, much as in classical theories of affective self-perception.

K
Speaker's Name:George Kachergis
First Author's Name:George Kachergis
First Author's Affiliation:Indiana University
Second Author's Name:Chen Yu
Second Author's Affiliation:Indiana University
Third Author's Name:Richard Shiffrin
Third Author's Affiliation:Indiana University
Title:Modeling Cross-situational Learning of Word-Referent Mappings
Abstract:Several studies have found that adults can acquire word-referent pairings after experiencing a series of individually-ambiguous trials (i.e., trials containing multiple words and referents). To disambiguate pairings -- which many learners do with great success -- word-referent co-occurrences must be integrated across trials. We discuss a variety of factors that have empirically been found to affect learning performance, such as pair frequency, pairs per trial, temporal contiguity, and contextual diversity (how many pairs a given pair appears with during training). Using this data, a variety of associative models were constructed, implementing diverse attention, memory, and inference mechanisms. We find that some surprising empirical results (e.g., a null frequency effect when pairs of different frequency co-occur) can be explained by shifting attention and limited memory. Finally, we demonstrate that although learners do assume mutually exclusive pairings in some situations (to their great advantage), they can also relax this assumption and learn more complex mappings.
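
The simplest baseline among the associative accounts mentioned above is a plain co-occurrence counter. The following sketch uses hypothetical trials and is not the attention- or memory-limited models the authors fit: every word on a trial is credited with every visible referent, and integrating those counts across individually ambiguous trials is enough to recover the correct 1-to-1 pairings.

```python
from collections import defaultdict
from itertools import product

# Minimal sketch (hypothetical 2-word, 2-referent trials, not the authors' designs):
# accumulate word-referent co-occurrence counts and pick each word's best referent.
trials = [
    (["ball", "dog"], ["BALL", "DOG"]),
    (["dog", "cup"],  ["DOG", "CUP"]),
    (["cup", "ball"], ["CUP", "BALL"]),
]
counts = defaultdict(float)
for words, referents in trials:
    for w, r in product(words, referents):
        counts[(w, r)] += 1.0   # every word is credited with every co-present referent

for w in ["ball", "dog", "cup"]:
    best = max(["BALL", "DOG", "CUP"], key=lambda r: counts[(w, r)])
    print(w, "->", best)        # the correct pairings win once counts are integrated
```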

Speaker's Name:Mike Kalish
First Author's Name:Mike Kalish
First Author's Affiliation:University of Louisiana at Lafayette
Second Author's Name:Ben Newell
Second Author's Affiliation:UNSW
Third Author's Name:John Dunn
Third Author's Affiliation:University of Adelaide
Title:Multiple systems in categorization
Abstract:Cognitive science aims to study how people’s cognitive capacities work. Categorization appears to be one such capacity but, like other high-level capacities (memory, perception, etc.), it is not clear if categorization should be considered as one capacity or many. In particular, the issue arises in the context of the analysis of category learning ‘systems’. I will suggest that there are two distinct questions raised by this analysis, one having to do with the nature of category learning as a capacity, and the other with the nature of the brain structures that provide the mechanism upon which the capacity rests. I have nothing to say about the latter question. The former question needs clarification, and I will restrict my attention to the subset of category learning problems commonly identified as ‘perceptual’. A series of recent experimental results support the view that perceptual category learning is a single cognitive ‘system’.

Speaker's Name:Shaw Ketels
First Author's Name:Shaw Ketels
First Author's Affiliation:University of Colorado at Boulder
Second Author's Name:Matt Jones
Second Author's Affiliation:University of Colorado at Boulder
Title:Language is not always helpful: Labels do not facilitate the learning of information-integration category structures
Abstract:Recent research has shown that verbal labels can facilitate learning of new categories (Lupyan, Rakison, & McClelland, 2007). This project investigates whether the advantage of verbal labels is universal, or whether it is specific to certain types of category representations. A growing body of evidence suggests that there are at least two category-learning systems, one that uses a declarative, rule-based approach and another that uses a procedural approach (e.g., Ashby & Maddox, 2005). The rule system tends to drive learning of rule-based (RB) category structures that admit verbal description, whereas the procedural system tends to drive learning of nonverbalizable information-integration (II) category structures. Numerous dissociations between the two systems have been reported, which show rule-based learning is symbolically or verbally mediated, whereas procedural learning is more perceptually grounded and directly tied to the specific experimental context (i.e., hand or key of response). This distinction suggests that the procedural system may not be able to take advantage of verbal labels during learning in the same way that the rule system can. We tested this hypothesis in a simple category-learning experiment that crossed two category structures (one RB and one II) with two types of response. Half the subjects responded with verbal labels for the categories (i.e., species of fish), whereas the other half responded with actions to be performed on the stimuli (i.e., ways to cook them). Our results reveal an interaction, such that learning was better with verbal labels for the RB structure but not for the II structure. This finding suggests that the benefit of verbal labels in category learning is limited to the rule-based system.

Speaker's Name:Eileen Kowler
First Author's Name:Eileen Kowler
First Author's Affiliation:Rutgers University
Title:The management of time in saccadic decisions
Abstract:There has been a lot of attention paid to studying where people look when scanning pictures, or performing various natural tasks, but almost no attention devoted to understanding the temporal properties of the saccadic eye movement patterns, even though time is crucial for making accurate perceptual decisions. I will describe several results that show a pronounced preference to minimize fixation (inter-saccadic) pause time in favor of a strategy of maintaining a rapid rate of scanning. The examples will be taken from studies of: (1) ‘center of gravity’ saccades; (2) saccades during visual search; (3) saccades during counting; and (4) saccades during reading. The strategy of maintaining a rapid rate of scanning is compatible with two fundamental characteristics of the saccadic system. One allows concurrent (parallel) planning of multiple saccades, thus facilitating rapid generation of secondary ‘corrective’ saccadic eye movements in the event that a hastily planned saccade lands at a useless location. The other is a recently-discovered effect of stimulus context on fixation durations, such that decisions about how long to dwell in a given location are made according to the predicted (rather than actual) difficulty of the visual judgment (Wilder, Aitkin, Schnitzer, Kowler, 2010). Taken together, these findings show that the saccadic system has built-in biases to explore the visual environment, devoting minimal resources to careful planning, and instead preferring the use of corrective saccades and re-visits to previously seen locations when needed.

Speaker's Name:Jake Kurczek
First Author's Name:Jake Kurczek
First Author's Affiliation:University of Iowa
Second Author's Name:Clark Ohnesorge
Second Author's Affiliation:St. Olaf College
Title:Evaluating the Impact of Spatial Frequencies on the Perception of Gender
Abstract:Previous studies have suggested that gender categorization of human faces may take place in high level processing areas. We examined how lower level processing, such as spatial frequency adaptation (which takes place in lower areas of the visual cortex) affects the perception of gender. It has been shown that male and female faces are generally composed of greater levels of lower and higher spatial frequencies respectively. In adaptation aftereffect studies, it is known that a person adapted to a spatial frequency will tend to judge novel frequencies in the direction opposite of that to which they were adapted. Thus, if a person is adapted to a certain spatial frequency and then asked to judge a gender neutral picture, we assumed that their responses would be shifted in the opposite direction (high frequencies producing a shift in the male direction and low frequencies to the female direction). In the present study, subjects were randomly adapted to five different spatial frequencies and asked to judge 10 gender morph series (five Caucasian series and five Asian series) which varied from 100% male to 100% female in 25% steps. Our results showed that perception of gender could be shifted in the masculine direction by high spatial frequencies, but adaptation to low frequencies did not result in a similar shift toward feminine.

L
Speaker's Name:Soo-Young Lee
First Author's Name:Jung-Hui Im
First Author's Affiliation:KAIST
Second Author's Name:Soo-Young Lee
Second Author's Affiliation:KAIST
Title:Top-Down Attention for the Integration of Audio-Visual Perception in Lip Reading
Abstract:Speech is inherently bimodal, relying on cues from the acoustic and visual speech modalities for perception. The McGurk effect demonstrates that when humans are presented with conflicting acoustic and visual stimuli, the perceived sound may not exist in either modality. This effect has formed the basis for modeling the complementary nature of acoustic and visual speech, encapsulated in the relatively new research field of audio-visual speech recognition (AVSR). A top-down selective attention model has previously been proposed for isolated word recognition in noisy environments and for sequential recognition of superimposed patterns. In this paper we further extend the top-down attention model to integrate audio and visual information. The model successfully explains the McGurk effect and results in much better performance for noisy speech recognition.

Speaker's Name:Tania Lombrozo
First Author's Name:Tania Lombrozo
First Author's Affiliation:Department of Psychology, UC Berkeley
Title:Causal-explanatory pluralism
Abstract:Both philosophers and psychologists have argued for the existence of distinct kinds of explanations, including teleological explanations that cite functions or goals, and mechanistic explanations that cite causal mechanisms. For example, a tiger’s stripes can be explained mechanistically by appeal to variation in pigments, or teleologically by appeal to camouflage. Theories of causation, in contrast, have generally been unitary, with dominant theories focusing either on counterfactual dependence or on physical connections. This paper argues that both approaches to causation are psychologically real, with different modes of explanation promoting judgments more or less consistent with each approach. Four experiments isolate the contributions of counterfactual dependence and physical connections in causal ascriptions involving events with people, artifacts, or biological traits, and manipulate whether the events are construed teleologically or mechanistically. The findings suggest that when events are construed teleologically, causal ascriptions are sensitive to counterfactual dependence and relatively insensitive to the presence of physical connections, but when events are construed mechanistically, causal ascriptions are sensitive to both counterfactual dependence and physical connections. These findings reinforce the value of characterizing different kinds of explanations, and provide a new way of thinking about causal reasoning and representation.

M
Speaker's Name:Don MacKay
First Author's Name:Don MacKay
First Author's Affiliation:UCLA
Title:Relations between Visual Cognition and Memory: Evidence from Amnesic H.M.
Abstract:This talk reports a figure detection experiment comparing visual cognition in amnesic H.M. and memory-normal controls matched for age, background, intelligence, and education. On the hidden-figure task, H.M. exhibited deficits relative to the controls when detecting unfamiliar targets but not when detecting familiar targets, e.g., circles, squares and right-angle triangles. H.M.’s visual cognition deficits were not due to his well-known problems in explicit learning and recall, inability to comprehend or remember the instructions, general slowness, motoric difficulties, low motivation, low IQ relative to the controls, or working memory limitations. Parallels between H.M.’s selective deficits in visual cognition, language, and memory are discussed. These parallels contradict the standard “systems theory” account of H.M.’s condition but are readily explained under binding theory and the hypothesis that H.M. has difficulty representing unfamiliar but not familiar information in visual cognition, language, and memory.

Speaker's Name:Cyrille Magne
First Author's Name:Cyrille Magne
First Author's Affiliation:Psychology Department, Middle Tennessee State University
Title:Exploring the role of metrical expectancy during silent word reading using EEG
Abstract:The present study investigates to what extent metrical structure in English plays a role in silent word reading. The electroencephalogram (EEG) was recorded while participants were visually presented with lists of five bisyllabic words ending with one word that had either the same or different stress pattern as the previous four words. Results revealed that final words that did not match the stress pattern of the previous words elicited specific electrophysiological responses, thus suggesting that speech rhythm is processed automatically even when reading.

Speaker's Name:Shane Mueller
First Author's Name:Shane Mueller
First Author's Affiliation:Applied Research Associates
Title:Exploring the role of consistent belief systems in group opinion dynamics
Abstract:Much recent research on opinion dynamics has focused on modeling knowledge as a simple (one-dimensional rational) value, while simulating large populations of simple agents to demonstrate hypothesized group-level phenomena. One particular focus of this research has relied on the bounded influence conjecture: an agent can only be influenced by agents whose opinions are sufficiently similar to its own. This assumption makes intuitive sense, because it reflects our perception of political debates in which two opposing sides engage in arguments with one another but never appear to listen to the opposing beliefs. The bounded influence conjecture has been shown to produce commonly-reported social dynamics effects, such as the formation of stable groups of disagreement. However, it has little direct evidence, and it is at odds with cognitive research suggesting that people have difficulty discounting and ignoring information. Furthermore, the conjecture effectively builds in as an assumption the effect it intends to produce. I will explore another account of these effects: that they stem from the desire to maintain a consistent system of beliefs. In this framework, extreme opinions will be listened to, and will have a chance to impact the beliefs of an individual, but only if doing so maintains a coherent set of beliefs. I will report analysis and simulation showing the conditions under which consistent belief systems (represented as a knowledge space graph) can produce the same types of phenomena as the bounded influence conjecture, and show how this account supports new, previously unexplored hypotheses about opinion dynamics.

Speaker's Name:Bennet Murdock
First Author's Name:Bennet Murdock
First Author's Affiliation:University of Toronto
Title:The Terrace Simultaneous Chaining Paradigm
Abstract:Herb Terrace has developed a method for getting rhesus monkeys to learn short serial lists which seem to be ordered by positional associations rather than item-to-item or chaining cues. I will report my attempts to develop an application of the TODAM working memory model using position cues rather than chaining to explain how the monkeys might be able to perform this serial learning task successfully.

Speaker's Name:Jay Myung
First Author's Name:Jay Myung
First Author's Affiliation:Ohio State University
Second Author's Name:Daniel Cavagnaro
Second Author's Affiliation:Ohio State University
Third Author's Name:Mark Pitt
Third Author's Affiliation:Ohio State University
Title:Squeezing every ounce of information from an experiment: Adaptive design optimization
Abstract:Experimentation is fundamental to the advancement of science, whether one is interested in studying the neuronal basis of a sensory process in cognitive neuroscience or assessing the efficacy of a new drug in clinical trials. Adaptive design optimization, in which the information learned from each experiment is used to inform subsequent experiments, is a particularly attractive methodology because it can potentially reduce the time required for data collection while simultaneously increasing the informativeness of the knowledge learned in the experiment. More concretely, the problem to be solved in adaptive sequential design optimization for model discrimination is to identify an experimental design under which one can infer the underlying model, among a set of candidate models of interest, in the fewest possible steps. In this presentation, addressing the design optimization problem in discrimination of formal models of forgetting, we demonstrate the success of a Bayesian adaptive method in improving design decisions by implementing the method in an experiment with human participants.
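
The design-selection step described above can be illustrated with a toy calculation. The sketch below uses made-up forgetting curves, fixed parameter values, and a one-trial utility, so it is only a caricature of the authors' Bayesian machinery: among a set of candidate retention intervals, it picks the one that maximizes the mutual information between the model indicator (power vs. exponential forgetting) and a single binary recall outcome.

```python
import math

# Minimal sketch (toy models and utility, not the authors' full adaptive procedure):
# choose the retention interval expected to best discriminate two forgetting models.
def p_power(t, a=0.9, b=0.5):   return a * (t + 1) ** (-b)     # power-law forgetting
def p_exp(t, a=0.9, b=0.15):    return a * math.exp(-b * t)    # exponential forgetting

def entropy(p):
    return 0.0 if p in (0.0, 1.0) else -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def utility(t, prior=(0.5, 0.5)):
    probs = (p_power(t), p_exp(t))
    p_mix = sum(w * p for w, p in zip(prior, probs))            # predictive probability of recall
    return entropy(p_mix) - sum(w * entropy(p) for w, p in zip(prior, probs))  # mutual information

candidate_lags = [0, 1, 2, 5, 10, 20, 40]
best = max(candidate_lags, key=utility)
print("most informative retention interval (toy example):", best)
```

In a genuinely adaptive procedure, the prior over models (and over their parameters) would be updated after each observation, and the utility recomputed before choosing the next design.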

N
Speaker's Name:Angela Nelson
First Author's Name:Angela Nelson
First Author's Affiliation:UCSD
Second Author's Name:James Fowler
Second Author's Affiliation:UCSD
Third Author's Name:Derek Ruths
Third Author's Affiliation:McGill University
Title:Modeling the Contributions of Influence and Homophily in Social Network Dynamics
Abstract:Previous research has shown that health behaviors (like obesity) tend to spread through social connections within networks. Traditional analyses of network properties do not distinguish between several possible reasons for this phenomenon. The current project uses a model of network dynamics to study the relative contribution of two possible causal factors in the spread of behaviors through social networks: homophily and influence. We used a modified social space model to simulate networks at varying levels of homophily and influence, and found the proportion of motif types in a network is indeed dependent on these two properties to varying degrees. Further research will use this finding to apply our modeling techniques to an analysis of real world data (for example, the Add Health dataset).

O
Speaker's Name:Clark Ohnesorge
First Author's Name:Clark Ohnesorge
First Author's Affiliation:St Olaf College
Second Author's Name:Seth Greenberg
Second Author's Affiliation:Carleton College
Third Author's Name:Doug Sylvester
Third Author's Affiliation:Carleton College
Title:The Cross-Race effect
Abstract:The Cross-Race effect refers to the finding that Caucasian subjects routinely are faster to identify the skin color of African Americans at the same time that they are less able to recognize individual members of that racial group. This finding has been replicated frequently. Theoretical accounts typically rely on an asymmetry between the impact of the presence or absence of definitional features for out-group membership. Interestingly, prior studies have not included a manipulation of familiarity. In two studies we asked Caucasian subjects to identify skin color for famous and non-famous African-American and Caucasian faces. We find that the speed with which they made the judgment varied with familiarity, and far more strongly for African-American faces than Caucasian faces. Additionally, the effect was mediated by such parameters as blocked vs. random presentation. In a third study we trained subjects to recognize previously unfamiliar images in order to test our proposal that the effect can be understood within the framework of a Stroop-like response competition. The results of these studies converge on an explanation in which both response competition and perceptual fluency play a role.

Speaker's Name:Adam Osth
First Author's Name:Adam Osth
First Author's Affiliation:The Ohio State University
Second Author's Name:Simon Dennis
Second Author's Affiliation:The Ohio State University
Third Author's Name:Vladimir Sloutsky
Third Author's Affiliation:The Ohio State University
Title:Developmental Changes in Recognition Memory: The Effects of Categorization on Eye Movements
Abstract:Dennis and Chapman (under revision) demonstrated in a yes/no recognition memory experiment that increasing the number of exemplars on a list while keeping the number of categories constant led to a decrease in unrelated false alarms and argued that category information may be used to reject non-category probes at test. Additionally, Sloutsky and Fisher (2004) found that when both adults and children categorized at study, only adults exhibited an increase in related false alarms. We argue that if children are less sensitive in using category information at test, they should not experience an advantage in discriminating between study items and unrelated distracters in long lists. While adults did not show an effect in their memory performance in our experiment, they exhibited a significant decrease in their reaction times to unrelated distracters in the long list without decrementing their accuracy. Converging evidence came from an eyetracking experiment, which demonstrated that during unrelated distracter trials adults looked longer at the relevant category feature in the long list while the difference was not significant for the children. This evidence suggests that adults may be more strategic than children in how they use category information in their judgments during the test phase.

P
Speaker's Name:Stephen Palmer
First Author's Name:Stephen Palmer
First Author's Affiliation:Psychology Department, UC Berkeley
Second Author's Name:Karen Schloss
Second Author's Affiliation:Psychology Department, UC Berkeley
Title:Human Color Preferences: An Ecological Valence Theory
Abstract:Color preference is an important aspect of human behavior, but little is known about why people generally like some colors more than others (e.g., blues more than browns). Recent results from the Berkeley Color Project provide detailed measurements of preferences among 32 chromatic colors and enable us to fit several models of color preference, including ones based on cone contrasts, color-emotion associations, color appearance, and Palmer & Schloss's ecological valence theory (EVT). The EVT postulates that color serves an adaptive "steering" function, analogous to taste preferences, that biases organisms toward approaching advantageous objects (e.g., blue sky and water) and avoiding disadvantageous ones (e.g., brown feces and rotting food). It predicts that people will tend to like colors to the extent that they like the objects that are characteristically that color, averaged over all such objects. We tested this prediction by having four different groups of participants (1) rate their preferences for 32 colors, (2) give verbal descriptions of objects of each color, (3) rate their preferences for those object descriptions, and (4) rate the similarity of the objects to the colors. The Weighted Affective Valence Estimate (WAVE) derived from measurements 2, 3, and 4 predicted 80% of the variance in average color preference ratings, much more than any of the other models. If time permits, we will also describe how hue preferences for single colors differ as a function of object-type, gender, expertise, culture, social institutions, and perceptual experience.
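A minimal sketch of how a weighted affective valence estimate of the kind described above could be computed follows. The function, the weighting rule, and the toy ratings are illustrative assumptions, not the Berkeley Color Project data or the published WAVE procedure.

def wave(object_valence, color_object_match):
    """object_valence: {object: mean preference rating}
    color_object_match: {color: {object: color-object similarity weight}}"""
    estimates = {}
    for color, matches in color_object_match.items():
        num = sum(w * object_valence[obj] for obj, w in matches.items())
        den = sum(matches.values())
        estimates[color] = num / den if den else float("nan")
    return estimates

# Made-up ratings purely for illustration.
valence = {"sky": 0.9, "water": 0.8, "feces": -0.9, "mud": -0.4}
match = {
    "blue":  {"sky": 0.9, "water": 0.8},
    "brown": {"feces": 0.7, "mud": 0.9},
}
print(wave(valence, match))   # blue should come out well above brown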

Speaker's Name:Diane Pecher
First Author's Name:Diane Pecher
First Author's Affiliation:Erasmus University Rotterdam, The Netherlands
Title:Motor affordance and visual working memory
Abstract:Neuro-imaging studies (e.g., Mecklinger, Gruenewald, Weiskopf, & Doeller, 2004) have indicated that motor affordances play a role in visual working memory for objects. Mecklinger et al. found greater premotor cortex activation when participants remembered manipulable objects (e.g., a picture of a hammer) than when they remembered non-manipulable objects (e.g., a picture of a polar bear), suggesting that motor affordances form an additional component of working memory. If such motor memory indeed is a component of working memory for visual information, one should expect interference from concurrent motor tasks. In particular, motor interference should be found for pictures of manipulable objects but not, or to a lesser degree, for non-manipulable objects. I will present a series of experiments in which participants held object pictures in working memory while performing concurrent tasks such as articulation of nonsense syllables and hand movements. While concurrent tasks did interfere with working memory performance, in none of the experiments did I find any evidence that concurrent motor tasks affected memory for manipulable objects differently than for non-manipulable objects. I conclude that motor affordance is not used for visual working memory.

Speaker's Name:Bill Prinzmetal
First Author's Name:Bill Prinzmetal
First Author's Affiliation:University of California Berkeley
Second Author's Name:Jordan Taylor
Second Author's Affiliation:University of California Berkeley
Title:What Causes Involuntary Attention, Contingent Capture and Inhibition of Return?
Abstract:We studied involuntary attention with the spatial cueing paradigm. In this paradigm, a noninformative spatial cue, preceding a visual target, is said to “capture” attention such that RTs are faster if the target appears at the cued location than at an uncued location. We hypothesized two mechanisms for this cueing effect. The first is a (serial) search mechanism, which predicts that the cueing effect will be larger with more display locations. The second is a decision mechanism, described by a competitive accumulator model, which predicts that the cueing effect will be smaller with more display locations. We found that when there were distractors in the display, the search model predicted performance. However, when there were no distractors in the display, the decision mechanism accounted for performance. We further investigated two effects associated with involuntary attention: contingent capture and inhibition of return (IOR). Contingent capture is the finding that the more similar the cue and target, the greater the cueing effect. IOR is the finding that as the time between the onset of the cue and the target increases, the cueing effect reverses, so that RT is faster at an uncued location. We found that contingent capture was caused by the search mechanism whereas IOR was caused by the decision mechanism. Furthermore, model fits confirm that IOR can be described as an increase in threshold for targets at the cued location.
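As a toy illustration of the decision-mechanism idea, the following is a minimal leaky competing accumulator sketch in which one accumulator per display location races to threshold and the cue gives one location a head start. This is a generic illustration with arbitrary parameter values, not the authors' fitted model, and whether the simulated cueing effect grows or shrinks with the number of locations depends on the parameters chosen.

import numpy as np

def simulate_rt(n_locations, cue_valid, head_start=0.3, drift=1.0, leak=0.2,
                inhibition=0.15, noise=0.3, threshold=2.0, dt=0.01, rng=None):
    rng = rng or np.random.default_rng(0)
    x = np.zeros(n_locations)
    x[0 if cue_valid else 1] = head_start   # cue boosts the target or a distractor location
    inputs = np.full(n_locations, 0.2)      # weak input at non-target locations
    inputs[0] = drift                       # the target appears at location 0
    t = 0.0
    while x[0] < threshold:                 # simplification: wait for the target accumulator
        dx = (inputs - leak * x - inhibition * (x.sum() - x)) * dt \
             + noise * np.sqrt(dt) * rng.standard_normal(n_locations)
        x = np.maximum(x + dx, 0.0)
        t += dt
    return t

rng = np.random.default_rng(1)
for n in (2, 6):
    valid = np.mean([simulate_rt(n, True, rng=rng) for _ in range(200)])
    invalid = np.mean([simulate_rt(n, False, rng=rng) for _ in range(200)])
    print(n, "locations, cueing effect (s):", round(invalid - valid, 3))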

R
Speaker's Name:Cory Rieth
First Author's Name:Cory Rieth
First Author's Affiliation:UCSD
Second Author's Name:David Huber
Second Author's Affiliation:UCSD
Title:Adaptation to the temporal statistics of spatial cueing
Abstract:Spatial cueing experiments find facilitation at short cue-target delays (< 150 ms) but deficits at long delays (> 300 ms). These deficits have been termed ‘Inhibition of Return’ (IOR), which is thought to be an automatic process within the attention or oculomotor systems. We consider the hypothesis that IOR reflects an environmental regularity in the appearance of sought objects: the appearance of an object (the cue) is likely to signal interesting nearby objects (the target), but if none are found, then it is unlikely another interesting object will appear at that location. To test this hypothesis, we trained participants in a particular environment where the time of target appearance determined its location and then tested them in environments where all conditions were equally likely. After training that targets would appear on the cued side after a delay, the IOR effect was reversed. In contrast, IOR was enhanced after training that targets would appear on the opposite side after a delay. However, no training effects were observed for cueing at the short SOAs. A subsequent experiment used probabilistic training with only short SOA trials so that cue-based expectations about target location occurred reliably at just one cue-target delay. With this adjustment, we found that training changed the nature of spatial cueing even at short SOAs. These results are compatible with the claim that spatial cueing is a learned behavior that incorporates not only expectations of location but also expectations of time.

S
Speaker's Name:Richard Shiffrin
First Author's Name:Angela Nelson
First Author's Affiliation:UCSD
Second Author's Name:Richard Shiffrin
Second Author's Affiliation:Indiana University
Third Author's Name:Amy Criss
Third Author's Affiliation:Syracuse University
Fourth Author's Name:Ken Malmberg
Fourth Author's Affiliation:University of South Florida
Title:Effects of Event Traces and Their Frequency in Recognition Memory
Abstract:The strong and reliable effect of normative word frequency upon recognition performance does not lead to clear interpretations because word frequency is confounded with every other measure definable for words. Angela Nelson therefore trained participants on novel Chinese characters for two weeks (in two studies), with characters trained to different degrees in the ratios 1:3:9:27. In study 1 training was in the context of visual search. Because higher frequency characters occurred more often with other higher frequency characters, and because our modeling says a knowledge trace for one character accumulates features from co-occurring characters, we infer that higher frequency characters became more similar to each other. Our initial modeling was able to explain recognition frequency effects on this basis. However, study 2 eliminated this effect of character context by training with a task requiring a character to be matched to itself. Similar recognition frequency results were obtained, suggesting another effect of training frequency: the greater number of training-session event traces for the higher frequency characters, equivalent to what Dennis and Humphreys called ‘context noise’. If activation of training-session traces produced noise that lowered performance particularly for high-frequency characters, then a six-week delay until the recognition study/test should reduce such activation and reduce frequency effects. This is what we found. D&H proposed that the only traces activated by a word recognition test are traces of the test word, both from the list and from past history. (It is unclear whether they would make such a claim for Chinese characters.) Criss, Malmberg and I report results from word recognition studies showing large decreases in performance as testing continues after the list. If testing leaves event traces, and if these are activated by a subsequent test of some other word, then we would predict the observed decrease over testing. The studies strongly argue that recognition probes activate many types of event traces, not just those matching the test item.

Speaker's Name:Vladimir Sloutsky
Add. Speaker's Name:Xin Yao
First Author's Name:Vladimir Sloutsky
First Author's Affiliation:Ohio State University
Second Author's Name:Xin Yao
Second Author's Affiliation:Ohio State University
Title:Selective Attention and the Development of Categorization
Abstract:Although the role of selective attention in categorization tasks is widely acknowledged, the developmental immaturity of selective attention has often been ignored by theories of the development of categorization. The current study examined the role of developing visual attention in the development of categorization. In Experiment 1, young children and adults were presented with a match-to-sample task in which perceptual features were in conflict with the matching rule. Eye-tracking data suggested that 3- and 4-year-olds had difficulty inhibiting visual attention to distracters, whereas adults exhibited near-optimal performance. These findings were then used to predict and explain developmental differences in performance on a variety of categorization tasks (Experiments 2-4).

Speaker's Name:George Sperling
First Author's Name:George Sperling
First Author's Affiliation:Department of Cognitive Sciences University of California, Irvine
Title:Towards a theory of the perception of motion direction. Plaids.
Abstract:As with most biological processes, the more the visual computation of the perceived direction of a moving visual stimulus has been studied, the more complex it has turned out to be. Studies of the motion of simple sinewave gratings revealed three concurrent motion-analysis systems (first-, second-, and third-order motion systems). Combinations of two moving sinewaves (called plaids) have led to hundreds of publications but not to any defensible theory of plaid motion perception. The three-systems theory of motion-direction perception will be reviewed (with demonstrations). It is shown that, when plaid stimuli are directed exclusively to the first-order motion system (by using only stimuli with very high temporal frequencies), the plaid combination rule is remarkably simple and robust. Parameter-free, a priori predictions of the perceived direction of new plaid stimuli account for 97% of the variance of the data once the experiments are actually performed. The perceived motion direction of slower (lower temporal frequency) plaids is shown to consist of two components, first-order plus third-order, with zero contribution from the second-order system. The methods described herein can ultimately yield complete descriptions of the first- and third-order motion systems. *The plaid experiments were carried out by Dantian T. Liu as part of her Ph.D. thesis; the data were recovered and analyzed by Ling Lin. I'm looking for a student or postdoc to continue this line of research.

Speaker's Name:Vishnu Sreekumar
First Author's Name:Vishnu Sreekumar
First Author's Affiliation:Department of Psychology, Ohio State University
Second Author's Name:Yuwen Zhuang
Second Author's Affiliation:Department of Computer Science and Engineering, Ohio State University
Third Author's Name:Simon Dennis
Third Author's Affiliation:Department of Psychology, Ohio State University
Fourth Author's Name:Mikhail Belkin
Fourth Author's Affiliation:Department of Computer Science and Engineering, Ohio State University
Title:The dimensionality of episodic images
Abstract:A correlation dimension analysis was performed on images that can be thought of as representative of the visual input that goes into forming an individual's episodic memory. A two-scale structure was observed uniformly across the sets of images provided by different individuals: the dimension at small length scales is lower than the dimension at larger length scales. This two-scale behavior has also been observed in another study, on natural language discourse structures. Finding the same two-scale structure in the raw visual input suggests that this constraint is more basic than one imposed by the cognitive system. We propose that the way we navigate through events itself imposes certain constraints on the data that the cognitive system then takes as input.
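For readers unfamiliar with the method, the following is a minimal Grassberger-Procaccia style sketch of a correlation dimension estimate: the dimension over a range of scales is the slope of log C(r) versus log r, where C(r) is the fraction of point pairs closer than r. The feature representation, the choice of length scales, and the synthetic data below are illustrative assumptions, not the analysis pipeline used in the study.

import numpy as np
from scipy.spatial.distance import pdist

def correlation_integral(points, radii):
    d = pdist(points)                       # all pairwise distances
    return np.array([(d < r).mean() for r in radii])

def correlation_dimension(points, r_lo, r_hi, n_r=20):
    radii = np.logspace(np.log10(r_lo), np.log10(r_hi), n_r)
    c = correlation_integral(points, radii)
    mask = c > 0
    slope, _ = np.polyfit(np.log(radii[mask]), np.log(c[mask]), 1)
    return slope                            # estimated dimension over [r_lo, r_hi]

# Synthetic check: points on a 2-D plane embedded in 10-D should give roughly 2.
rng = np.random.default_rng(0)
plane = rng.normal(size=(2000, 2)) @ rng.normal(size=(2, 10))
print(correlation_dimension(plane, 0.5, 3.0))

Estimating the slope separately over small and large radius ranges is one way to look for the kind of two-scale structure reported above.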

Speaker's Name:Mark Steyvers
First Author's Name:Mark Steyvers
First Author's Affiliation:University of California, Irvine
Second Author's Name:Brent Miller
Second Author's Affiliation:University of California, Irvine
Third Author's Name:Rob Goldstone
Third Author's Affiliation:Indiana University
Title:Bayesian approaches to aggregate knowledge across individuals; exploring the effect of information sharing
Abstract:We analyze the collective performance of individuals in a series of general knowledge tasks involving the judgment of percentages and rankings of events and items. We compare situations in which a group of individuals independently answer these questions with two information-sharing situations: an iterated learning environment, in which individuals pass their solution to the next person in a chain, and a Delphi-like method, in which individuals can revise their answers after the solutions have been shared with all members of the group. We introduce Bayesian models for these group decision-making environments and treat the collective group knowledge as a latent variable that can be estimated from the observed judgments across individuals. Importantly, the models allow for individual differences in expertise and in confidence in other individuals' judgments. Our initial results suggest that information-sharing environments lead to better collective performance (a stronger "wisdom of crowds" effect) despite the fact that information sharing increases correlations between judgments.
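As a rough illustration of the latent-knowledge idea (not the hierarchical Bayesian models referred to above), the following heuristic sketch alternates between estimating a consensus ranking and re-weighting individuals by their agreement with it; the weighting rule and the toy data are assumptions made only for illustration.

import numpy as np
from scipy.stats import kendalltau, rankdata

def aggregate_rankings(rank_matrix, n_iter=10):
    """rank_matrix: people x items, entry = rank given to that item (1 = best)."""
    n_people, _ = rank_matrix.shape
    weights = np.ones(n_people) / n_people
    for _ in range(n_iter):
        consensus = rankdata(weights @ rank_matrix)            # weighted mean ranks
        taus = np.array([kendalltau(consensus, r)[0]           # agreement per person
                         for r in rank_matrix])
        weights = np.clip(taus, 0, None) + 1e-6                # "expertise" weights
        weights /= weights.sum()
    return consensus, weights

# Toy example: 4 people rank 6 items; the last person answers almost at random.
ranks = np.array([[1, 2, 3, 4, 5, 6],
                  [2, 1, 3, 4, 6, 5],
                  [1, 3, 2, 4, 5, 6],
                  [6, 4, 1, 5, 2, 3]])
consensus, weights = aggregate_rankings(ranks)
print(consensus, weights.round(2))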

Y
Speaker's Name:Hyungwook Yim
First Author's Name:Hyungwook Yim
First Author's Affiliation:The Ohio State University
Second Author's Name:Simon Dennis
Second Author's Affiliation:The Ohio State University
Third Author's Name:Vladimir Sloutsky
Third Author's Affiliation:The Ohio State University
Title:The Development of Three-way Binding in Episodic Memory
Abstract:Episodic memory refers to people's ability to determine when and where they have seen particular stimuli, and it must involve the binding of items and contexts. However, two-way bindings between items and contexts are not sufficient. In the ABABr task, subjects study two lists of paired items, where the two lists have identical items but differ in their pairing patterns. To recall which item was paired with which in a given list, a three-way binding between the context and the two items must be formed (Humphreys, Bain & Pike, 1989). The current study shows that children (4- to 5-year-olds) are not able to form three-way bindings reliably. Both adults and children showed high accuracy in the control task, which resembles the procedure of the ABABr task but uses different items in the two lists. In the ABABr task, however, the decrement in accuracy was much smaller for adults than for children. The results suggest that the ability to either form or store three-way bindings matures with development.
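As an illustration of why a three-way binding is needed in the ABABr design, the following is a minimal tensor-binding sketch in the spirit of the matrix/tensor models cited above (Humphreys, Bain & Pike, 1989). The random vectors, the dimensionality, and the retrieval rule are illustrative assumptions rather than the model as published.

import numpy as np

rng = np.random.default_rng(0)
dim = 64

def vec():
    # Random unit vector standing in for an item or context code.
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

ctx1, ctx2 = vec(), vec()
a, b, c, d = vec(), vec(), vec(), vec()

# List 1 pairs A-B and C-D in context 1; list 2 re-pairs A-D and C-B in context 2.
memory = (np.einsum('i,j,k->ijk', ctx1, a, b) + np.einsum('i,j,k->ijk', ctx1, c, d)
        + np.einsum('i,j,k->ijk', ctx2, a, d) + np.einsum('i,j,k->ijk', ctx2, c, b))

# Cue with context 1 and item A: contract out the context and cue dimensions.
retrieved = np.einsum('ijk,i,j->k', memory, ctx1, a)
print("match to B:", retrieved @ b, " match to D:", retrieved @ d)
# With only two-way (item-item) bindings, A would be associated with B and D
# equally, so the list-specific pairing could not be recovered.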

Z
Speaker's Name:Rene Zeelenberg
First Author's Name:Rene Zeelenberg
First Author's Affiliation:Erasmus University Rotterdam
Second Author's Name:Bruno Bocanegra
Second Author's Affiliation:Erasmus University Rotterdam
Title:Effects of Emotion on Low-Level Vision
Abstract:It is generally assumed that emotion facilitates human vision in order to promote adaptive responses to a potential threat in the environment. Surprisingly, we recently found that emotion in some cases impairs the perception of elementary visual features (Bocanegra & Zeelenberg, 2009). Here, we demonstrate that emotion improves fast temporal vision at the expense of fine-grained spatial vision. We tested participants’ threshold resolution with Landolt circles containing a small spatial or brief temporal discontinuity. The prior presentation of a fearful face cue, compared to a neutral face cue, impaired spatial resolution but improved temporal resolution. In addition, we show that these benefits and deficits were triggered selectively by the global configural properties of the faces, which were transmitted only through low spatial-frequencies. Critically, the common locus of these opposite effects suggests a trade-off between magno- and parvocellular type visual channels, which contradicts the common assumption that emotion invariably improves vision. Rather than a general ‘boost’ for all visual features, we show that our affective neural circuits sacrifice the slower processing of small details for a coarser but faster visual signal.

Speaker's Name:Xiaoyu Zhang
Add. Speaker's Name:Alfred Owens
First Author's Name:Alfred Owens
First Author's Affiliation:Franklin and Marshall College
Second Author's Name:Megan Hunter
Second Author's Affiliation:Franklin and Marshall College
Third Author's Name:Xiaoyu Zhang
Third Author's Affiliation:Franklin and Marshall College
Title:Assessment of Nighttime Visibility of Realistic Targets Using a Video-based Simulation
Abstract:A dynamic driving simulation investigated the visibility of realistic targets under variable levels of artificial illumination. Twenty video clips were recorded on the Virginia Smart Road using a high-resolution digital camera. Test clips contained roadway targets such as deer, tires, and pedestrians who entered the road from either the left or the right side. Road illumination varied from clip to clip: some clips included overhead luminaires, while others were illuminated only by the vehicle's low-beam headlights. Sixteen licensed drivers watched the video clips and responded when they recognized a target. Participants’ response times were recorded, and their verbal identifications of target type and location were documented by the experimenter. The results showed that target size and motion significantly affected recognition time in dark conditions (no luminaires): walking pedestrians were recognized at the greatest distance, followed by stationary pedestrians, deer, and the tire. Recognition was also affected by the side of the road from which pedestrians entered: pedestrians entering from the right were recognized from significantly longer distances than those entering from the left. Furthermore, recognition times for pedestrians under overhead lighting were significantly longer than for pedestrians illuminated only by headlights. These findings provide an assessment of the effects of target structure, motion, and lighting on drivers’ recognition of realistic objects. These results will be compared with the performance of drivers currently being tested while driving on the Smart Road, to clarify the validity of video-based night-driving simulators.

Speaker's Name:Yuwen Zhuang
First Author's Name:Yuwen Zhuang
First Author's Affiliation:Department of Computer Science and Engineering, Ohio State University
Second Author's Name:Vishnu Sreekumar
Second Author's Affiliation:Department of Psychology, Ohio State University
Third Author's Name:Mikhail Belkin
Third Author's Affiliation:Department of Computer Science and Engineering, Ohio State University
Fourth Author's Name:Simon J. Dennis
Fourth Author's Affiliation:Department of Psychology, Ohio State University
Title:The network properties of episodic graphs
Abstract:We present statistical analyses of the small-world properties of two types of episodic graphs: one built from the paragraph space of the Internet Movie Database (IMDb) and the other from images collected as subjects engaged in their activities of daily living. We show that both have a small-world structure characterized by sparse connectivity, short average path lengths between nodes, and a high global clustering coefficient. However, degree distribution analyses show that they are not scale-free graphs. For the analyses, we constructed networks by selecting different proportions of edges; this series of analyses reveals how the two episodic graphs grow.
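The statistics mentioned above can be illustrated with a short sketch (Python, using networkx). The random Watts-Strogatz graph here is only a stand-in for the IMDb and image graphs, and the parameter values are arbitrary.

import networkx as nx
import numpy as np

g = nx.watts_strogatz_graph(n=1000, k=10, p=0.1, seed=0)   # stand-in episodic graph
giant = g.subgraph(max(nx.connected_components(g), key=len))

density = nx.density(g)                                    # sparse connectivity
avg_path = nx.average_shortest_path_length(giant)          # average path length
clustering = nx.transitivity(g)                            # global clustering coefficient
degrees = np.array([d for _, d in g.degree()])

print("density:", round(density, 4), "avg path:", round(avg_path, 2),
      "clustering:", round(clustering, 3))
# A heavy-tailed (power-law) degree distribution would indicate a scale-free
# graph; inspecting the empirical degree counts is a simple first check.
print(np.bincount(degrees))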