Twelfth Annual Summer Interdisciplinary Conference

Authors, Titles, Abstracts


Listing by speaker

Speaker: Allen, Colin
Author 1: Allen, Colin
Indiana University Cognitive Science and History and Philosophy of Science
colallen@indiana.edu
Title: Reconceptualizing Marr's 3 levels
Abstract: Marr's 3-level account of explanation in cognitive science posits a computational (functional) level, an algorithmic (or process) level, and an implementational (neural or hardware) level. The textbook story is that these levels constitute a single explanation because they are unified by the same input-output relations. While textbook stories have their uses, the actual relationship between cognitive models is more complicated. I will illustrate this with an example or two drawn from the recent literature in cognitive science with the goal of gathering more ideas from ASIC participants about the extent to which they think the Marrian framework is outdated and should be abandoned, still useful despite its limitations, or worth refining into something that better reflects the actual practice of cognitive modelers.


Speaker: Anderson, John
Author 1: Anderson, John
Carnegie Mellon
ja@cmu.edu
Title: Identifying the Locus of Learning in Complex Mathematical Problem Solving
Abstract: A combination of multivariate pattern analysis and hidden Markov models was applied to fMRI and behavioral data to identify a sequence of 5 major phases that students go through in solving a type of complex mathematical problem: an Orientation phase where they identify the problem to be solved, an Encoding phase where they encode the needed information, a Computation phase where they perform the necessary arithmetic calculations, a Transformation phase where they perform the necessary algebraic operations, and a Response phase where they generate the answer. Because of the problem structure and because of variability in solution strategy, these phases are broken out into 15 different states that are separated in time. States from the same phase share the same activation patterns. The durations of the Computation and Transformation phases distinguish different problem types. There are temporal and activation signatures that distinguish the trial on which participants master a particular trial type: reflecting on the answer they have just determined, they take more time to output the answer in the Response phase and show increased activation in a number of brain areas, particularly the right rostrolateral prefrontal cortex. These results illustrate the power of the HMM-MVPA approach to identify the critical events in a long problem-solving sequence.
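As a purely illustrative sketch of the general approach (not the authors' actual pipeline), a Gaussian-emission hidden Markov model can be fit to trial-wise multivoxel pattern features to recover a sequence of putative phases; the hmmlearn package and the synthetic data below are assumptions made only for this example.

    import numpy as np
    from hmmlearn import hmm

    rng = np.random.default_rng(0)
    n_scans, n_features, n_phases = 400, 20, 5     # scans, MVPA features, hypothesized phases

    # synthetic stand-in for whitened fMRI pattern features over the course of problem solving
    X = rng.normal(size=(n_scans, n_features))

    # each hidden state corresponds to a putative cognitive phase (Orientation, Encoding, ...)
    model = hmm.GaussianHMM(n_components=n_phases, covariance_type="diag", n_iter=200,
                            random_state=0)
    model.fit(X)

    states = model.predict(X)                      # most likely phase on every scan
    print("scans spent in each phase:", np.bincount(states, minlength=n_phases))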


Speaker: Annis, Jeffrey
Author 1: Annis, Jeffrey
University of South Florida
jannis@mail.usf.edu
Author 2: Malmberg, Kenneth
University of South Florida
malmberg@usf.edu
Title: Overcoming Sequential Dependencies in Recognition Memory
Abstract: Sequential dependencies (SDs) occur when the current response is correlated with previous responses, and have been observed in recognition memory tasks (Malmberg and Annis, 2012). Annis & Malmberg (submitted) modeled SDs in recognition memory in the REM framework (Shiffrin & Steyvers, 1997) by assuming that some of the features from the retrieval cue on trial n carry over to the retrieval cue on trial n + 1. The model also assumes that on each trial, there is a probability that carryover may not occur. During these trials, the retrieval cue is “refreshed,” and the information from previous trials does not inform the decision on the current trial. We hypothesized that in order to refresh the retrieval cue on the current trial, attention must be shifted away from the information held in the previous retrieval cue. To test this, participants were presented with recognition memory test trials with interpolated lexical decision (LD) trials. If attention shifts away from the previous recognition test trial to complete the LD test trial, then on the subsequent recognition test trial we should observe a decrease in the magnitude of SDs. In the standard framework of Null Hypothesis Significance Testing, it is not possible to find evidence in favor of the null hypothesis. In order to overcome this limitation, a Bayesian t-test (Kruschke, 2012) was conducted. The Bayesian analyses suggest interpolating LD test trials between recognition memory test trials reduces SDs.


Speaker: Bernhardt-Walther, Dirk
Author 1: Bernhardt-Walther, Dirk
Department of Psychology
bernhardt-walther.1@osu.edu
Title: Scene categorization is based on structural, not textural features
Abstract: Humans can categorize complex natural scenes quickly and accurately. Which properties of scenes enable such an astonishing feat? Line drawings of natural scenes provide us with comparably easy access to these properties, while still being comparable to photographs in their neural representation of scene category (Walther et al., PNAS 2011). We extracted five sets of scene properties from line drawings of natural scenes: contour length, orientation, and curvature, and type and angle of contour junctions. We then categorized natural scenes based on the statistical distributions of these properties. Orientation was the property that allowed for the highest categorization accuracy. However, we found that the pattern of categorization errors for curvature, junction type and angle provided the best match with human behavior. Thus, junctions and curvature appear to be particularly relevant for the human ability to categorize scenes. We verified this computational prediction in a behavioral experiment with manipulated line drawings of scenes, in which the junctions were modified while preserving length, orientation and curvature. As expected, this manipulation led to a significant decrease in categorization accuracy. Our results indicate that the human ability to categorize complex natural scenes is to a large extent driven by the structure of scenes, which is described by junctions and curvature. Line orientation, which is tightly linked to the spatial frequency spectrum, is useful for computational scene categorization but does not match human behavior. This finding challenges the popular view that natural scene categorization relies on statistical regularities of the spatial frequency spectrum.
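A minimal sketch, with invented data, of the kind of analysis described above: classify scenes from the statistical distributions of contour properties and inspect the confusion matrix against human errors. The scikit-learn calls are assumed for illustration, and feature extraction from the line drawings is taken as already done.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import confusion_matrix
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(1)
    n_scenes, n_bins, n_categories = 300, 16, 6           # e.g., beaches, forests, highways, ...
    X = rng.dirichlet(np.ones(n_bins), size=n_scenes)     # per-scene histogram of, e.g., junction angles
    y = rng.integers(n_categories, size=n_scenes)         # scene category labels

    clf = LogisticRegression(max_iter=1000)
    pred = cross_val_predict(clf, X, y, cv=5)
    print(confusion_matrix(y, pred))                      # error pattern to compare with human confusions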


Speaker: Boehm, Udo
Author 1: Boehm, Udo
University of Groningen
Udoboehm1@gmail.com
Author 2: Van Maanen, Leendert
University of Amsterdam
Author 3: Forstmann, Birte
University of Amsterdam
Author 4: Van Rijn, Hedderik
University of Groningen
Title: Model-Based Estimates of Response Caution Predict Single-Trial EEG Data
Abstract: Recent theories of decision-making under time constraints suggest that the pre-supplementary motor area (pre-SMA) modulates the activity of the basal ganglia to increase the activation of an emerging action plan on the cortex and thus facilitate fast but potentially faulty responses (Lo & Wang, 2006; Forstmann et al., 2008, 2010). This idea is supported by a number of fMRI studies that related response caution, the amount of evidence individuals gather before engaging in a decision, to the activity of the pre-SMA. EEG studies have linked the contingent negative variation (CNV), a well-studied slow potential, to the ease with which participants can trigger a response (Elbert, 1990). Source localisation studies have suggested that the CNV originates from brain regions in close proximity to the pre-SMA (Leuthold & Jentzsch, 2001). To test whether the CNV reflects adjustments of response caution implemented by the pre-SMA, we conducted an EEG experiment in which participants performed a random dot motion task. At the onset of each trial, participants were cued to either focus on quick or on accurate responding. We obtained estimates of participants' response caution for every trial from a version of the linear ballistic accumulator model (Brown & Heathcote, 2008) that we fit to their reaction time data. Our results show the CNV amplitude to correlate with fluctuations in response caution under speed but not under accuracy instructions, implying that the CNV reflects the pre-SMA's mediation of action planning. Moreover, our data indicate that response caution is set before participants engage in a decision task.


Speaker: Brown, Gordon
Author 1: Brown, Gordon
University of Warwick
g.d.a.brown@warwick.ac.uk
Author 2: Lewandowsky, Stephan
University of Bristol and University of Western Australia
stephan.lewandowsky@uwa.edu.au
Title: Social Sampling Theory: A Model of Social Norms, Segregation, and Polarisation
Abstract: An agent-based model of social norm effects and polarisation is described. The model is cast within a utility-maximising framework. It is assumed that, when choosing an action, agents located within a social network observe the behaviour of social network neighbours and hence infer the social distribution of particular attitudes. Agents are assumed to dislike behaviours that are extreme within their neighbourhood (social extremeness aversion), and hence have a tendency to conform. However, agents are also assumed to prefer choices that are consistent with their own true beliefs (authenticity preference). Behavioural choice reflects a compromise between these opposing principles. The model explains a range of social phenomena, including homophily and the development of segregated neighbourhoods, polarisation, and certainty and confidence effects on social conformity, among others.
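The sketch below is a toy agent-based rendering of the two opposing principles named above: each agent chooses an expressed behaviour that trades off authenticity preference (staying near its true attitude) against social extremeness aversion (not deviating from its network neighbours). The ring network, quadratic costs, and parameter values are illustrative assumptions, not the published model.

    import numpy as np

    rng = np.random.default_rng(2)
    n_agents, n_rounds, k = 100, 50, 5            # agents, update rounds, neighbours per side
    w_auth = 0.6                                  # weight on authenticity vs. conformity
    attitude = rng.uniform(-1, 1, n_agents)       # private "true" attitudes
    behaviour = attitude.copy()                   # publicly expressed behaviour
    candidates = np.linspace(-1, 1, 201)

    for _ in range(n_rounds):
        for i in range(n_agents):
            neighbours = [(i + d) % n_agents for d in range(-k, k + 1) if d != 0]
            local_mean = behaviour[neighbours].mean()
            # each candidate behaviour is scored by a weighted sum of the two squared costs
            cost = (w_auth * (candidates - attitude[i]) ** 2
                    + (1 - w_auth) * (candidates - local_mean) ** 2)
            behaviour[i] = candidates[np.argmin(cost)]

    print("spread of expressed behaviour:", round(float(behaviour.std()), 3))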


Speaker: Cheng, Patricia
Author 1: Cheng, Patricia
UCLA
cheng@lifesci.ucla.edu
Author 2: Liljeholm, Mimi
California Institute of Technology
mlil@caltech.edu
Author 3: Sandhofer, Catherine
UCLA
sandhof@psych.ucla.edu
Title: Causal invariance in intuitive and scientific causal inference
Abstract: Scientists' concern with objectivity has led to the dominance of associative statistics in scientific journals, with the basic concept of independence being defined on observations only. Our analysis reveals that to infer causes of a binary outcome (e.g., whether or not a tumor cell is malignant), the associative definition of independence (based on observations alone) results in a logical inconsistency -- even for data from an ideal experiment -- for both frequentist and Bayesian statistics. Removing the logical error requires defining independence on counterfactual causal events. We report experiments showing that natural causal discovery in humans adopts the coherent though more complex causal definition. Our findings together suggest that the causal definition is adaptive, and that introducing a causal statistics would result in more consistent and generalizable causal discoveries in medicine and other sciences.


Speaker: Cook, John
Author 1: Cook, John
University of Queensland, University of Western Australia
j.cook3@uq.edu.au
Author 2: Lewandowsky, Stephan
University of Western Australia
stephan.lewandowsky@uwa.edu.au
Title: The Biasing Influence of Worldview on Climate Change Attitudes and Belief Updating
Abstract: It is well established that political ideology has a strong influence on public opinion about climate change. There is also evidence of ideologically driven belief polarization, where two people receiving the same evidence update their beliefs in opposite directions. Presenting scientific evidence can result in a "backfire effect" where conservatives become more sceptical of climate change. It is possible to model (and hence better understand) the backfire effect using Bayesian Networks, which simulate rational belief updating using Bayes' law. In this model, trust in scientists is the driving force behind polarization and worldview is the knob that influences trust. Experimental data comparing the effectiveness of various interventions are presented and discussed in the context of the Bayesian Network model.
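A small numerical sketch of the Bayesian-network intuition above, with invented probabilities: the same scientific report moves belief in opposite directions depending on the prior probability that scientists are trustworthy, which is where worldview enters. The likelihood table, especially its low-trust entries, is an assumption made purely for illustration.

    # P(report says "climate change is real" | climate real c, scientists trustworthy t).
    # The t=0 entries encode the assumed low-trust worldview: untrustworthy scientists are
    # believed to push the claim especially when it is false.
    LIK = {(1, 1): 0.90, (0, 1): 0.10,
           (1, 0): 0.70, (0, 0): 0.95}

    def posterior_climate(prior_climate, prior_trust):
        joint = {}
        for c in (0, 1):
            for t in (0, 1):
                p_c = prior_climate if c else 1 - prior_climate
                p_t = prior_trust if t else 1 - prior_trust
                joint[(c, t)] = LIK[(c, t)] * p_c * p_t
        total = sum(joint.values())
        return (joint[(1, 0)] + joint[(1, 1)]) / total

    print(posterior_climate(0.5, 0.9))   # high trust: belief in climate change rises (~0.83)
    print(posterior_climate(0.5, 0.1))   # low trust: the same report backfires (~0.45)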


Speaker: Cottrell, Gary
Author 1: Cottrell, Gary
UCSD
gary@ucsd.edu
Author 2: Shan, Honghao
Experian
hshan@cs.ucsd.edu
Title: Efficient Coding: From Retina Ganglion Cells To V2 Cells
Abstract: We use a combination of our Recursive Independent Components Analysis (RICA) algorithm and sparse principal components analysis (sPCA) to provide the first model that learns, in an unsupervised fashion, the first four visual processing layers in the brain: center-surround cells in the retina and Lateral Geniculate Nucleus (LGN), simple cells in V1, complex cells in V1, and finally, receptive fields that accord with data concerning cells in V2. In most applications of the efficient coding theory, which states roughly that cells in the visual system act to reduce the redundancy in their inputs by learning features that are independent from one another, there is a step where PCA is applied. While PCA can be thought of as a neural network, this step (and the receptive fields that are learned) is usually not reported in detail. Recent work by Vincent et al. has shown that sparse PCA applied to natural images can learn the center-surround receptive fields of retina and LGN cells, and that ICA on top of this still learns the edge detectors that have been seen as the result of these algorithms since Bell & Sejnowski and Olshausen & Field's pioneering work. Our contribution is to use sparse PCA in our hierarchical ICA model, and show that sparse PCA applied to the edge detectors gives the local pooling properties seen in complex cells in V1. Finally, ICA applied to the result of this gives cells resembling V2 cells in their receptive field properties.
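A compressed sketch of the alternating sparse-PCA/ICA idea described above, substituting scikit-learn's SparsePCA and FastICA for the authors' sPCA and RICA implementations, and random data for whitened image patches; with real natural-image patches the first stage would yield center-surround filters and the second oriented edge filters.

    import numpy as np
    from sklearn.decomposition import SparsePCA, FastICA

    rng = np.random.default_rng(3)
    patches = rng.normal(size=(500, 16 * 16))        # stand-in for whitened 16x16 image patches

    spca = SparsePCA(n_components=64, alpha=1.0, random_state=0)
    stage1 = spca.fit_transform(patches)             # "retina/LGN"-like code

    ica = FastICA(n_components=32, random_state=0, max_iter=500)
    stage2 = ica.fit_transform(stage1)               # "V1 simple cell"-like code

    print(stage1.shape, stage2.shape)
    # Repeating the sPCA + ICA pair on (rectified) outputs is what yields the complex-cell-
    # and V2-like receptive fields described in the abstract.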


Speaker: Cox, Greg
Author 1: Cox, Greg
Indiana University
gregcox7@gmail.com
Author 2: Lewis, Nick
Indiana University
njlewis@indiana.edu
Author 3: Shiffrin, Richard
Indiana University
shiffrin@indiana.edu
Title: A Dynamic Model for Episodic Memory Retrieval
Abstract: We present a dynamic retrieval model: When a test stimulus is presented, its features are extracted over time, and the current set of features, plus context, are used at each moment to probe memory. Thus the response from memory changes in accord with the probe, dynamically. The model explains a variety of surprising findings in episodic recognition, including 'fluency' (an increase in 'old' responding due to a subliminal matching prime), coherent responding in the face of large test-to-test changes in stimulus type, and the effects caused by presenting some test item features subliminally prior to others when the earlier features vary in diagnosticity.


Speaker: Dennis, John Lawrence
Author 1: Dennis, John Lawrence
University of Perugia, Perugia, Italy | Catholic University of the Sacred Heart, Milan, Italy
j.lawrence.dennis@gmail.com
Title: Labor and investment: A tale of a core ownership principle.
Abstract: Property ownership is enormously important in people's lives. Ownership influences how much we value objects (i.e., the endowment effect) (Kahneman, Knetsch, & Thaler, 1990), responsibility judgments when those objects harm others or damage objects (Elkind & Dabek, 1977), how memorable those objects are (Cunningham, Turk, Macdonald, & Macrae, 2008), and our preferences, such that owned objects are preferred over similar non-owned objects (Beggan, 1992). In a series of online, iPad and lab studies with children and adult participants, labor/investment influenced ownership judgments while ownership assignment influenced labor/investment. In one study, labor/investment that "caused a change" in an object was associated with ownership assignment. In another study, labor/investment influenced responsibility judgments for positive/negative consequences associated with that object, while in another set of studies both ownership assignment and responsibility judgments were significantly influenced by whether that labor was in the first or third person. A set of studies reveals that labor/investment significantly influences participants' judgments associated with "stealing" pirated materials. Another study reveals that when objects were described as being "owned" by participants, the labor/investment used to change the object increased and the changed object was judged as being more creative.


Speaker: Dixon, Peter
Author 1: Dixon, Peter
Dept. of Psychology, Univ. of Alberta
peter.dixon@ualberta.ca
Author 2: Bortolussi, Marisa
Dept. of Modern Languages and Cultural Studies, Univ. of Alberta
marisa.bortolussi@ualberta.ca
Title: The Mediated Nature of Narrative Comprehension
Abstract: Accounts of narrative comprehension often neglect the importance of the narrator and memory in the mental representation of the story world. In contrast, we argue that readers generate a representation of the narrator, that is, the implied speaker of the words of the text. In turn, readers use their representation of the narrator to interpret the events of the story world and to decide what is important in the story. We will review experiments demonstrating that subtle variations in the manner in which perception and speech are presented change readers' representation of the narrator, the interpretation of characters and events, and memory for the text.


Speaker: Donkin, Chris
Author 1: Donkin, Chris
University of New South Wales
christopher.donkin@gmail.com
Author 2: Nosofsky, Robert
Indiana University
nosofsky@indiana.edu
Author 3: Shiffrin, Richard
Indiana University
shiffrin@indiana.edu
Author 4: Gold, Jason
Indiana University
jgold@indiana.edu
Title: Discrete-Slots Models of Visual Working-Memory Response Times
Abstract: Much recent research has aimed to establish whether visual working memory (WM) is better characterized by a limited number of discrete all-or-none slots, or by a continuous sharing of memory resources. To date, however, researchers have not considered the response-time (RT) predictions of discrete-slots vs. shared-resources models. To complement the past research in this field, we formalize a family of mixed-state, discrete-slots models for explaining choice and RTs in tasks of visual WM change detection. In the tasks under investigation, a small set of visual items is presented, followed by a test item in one of the studied positions for which a change judgment must be made. According to the models, if the studied item in that position is retained in one of the discrete slots, then a memory-based evidence-accumulation process determines the choice and the RT; if the studied item in that position is missing, then a guessing-based accumulation process operates. Observed RT distributions are therefore theorized to arise as probabilistic mixtures of the memory-based and guessing distributions. We formalize an analogous set of continuous shared-resources models. The model classes are tested on individual subjects both with qualitative contrasts and quantitative fits to RT-distribution data. The discrete-slots models provide much better qualitative and quantitative accounts of the RT and choice data than do the shared-resources models, although there is some evidence for “slots plus resources” when memory set size is very small.
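As a generative illustration of the mixed-state idea (with invented distributions and parameter values, not the fitted models), each change-detection trial below is either memory-based, when the probed item occupies one of K discrete slots, or guessing-based otherwise, and the predicted RT distribution is the resulting probabilistic mixture.

    import numpy as np

    rng = np.random.default_rng(4)

    def wald_rt(drift, threshold, n):
        # first-passage times of a single-boundary accumulator (inverse-Gaussian distribution)
        return rng.wald(threshold / drift, threshold ** 2, size=n)

    def simulate_trials(set_size, K=3, n=10_000, t0=0.3):
        p_slot = min(1.0, K / set_size)               # probability the probed item holds a slot
        in_slot = rng.random(n) < p_slot
        rts = np.where(in_slot,
                       wald_rt(drift=2.5, threshold=1.0, n=n),   # memory-based: fast
                       wald_rt(drift=1.0, threshold=1.0, n=n))   # guessing-based: slower
        return t0 + rts

    for set_size in (2, 5, 8):
        print(set_size, round(float(np.median(simulate_trials(set_size))), 3))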


Speaker: Dunn, John
Author 1: Dunn, John
University of Adelaide
john.c.dunn@adelaide.edu.au
Author 2: Kalish, Michael
University of Louisiana at Lafayette
kalish@louisiana.edu
Title: Why there can be no such thing as the face-inversion effect: The problem of nomic measurement in psychological science.
Abstract: Cueing with an inverted (rotated) stimulus leads to a decrement in memory accuracy and this decrement is greater for pictures of faces than for pictures of other mono-oriented stimuli such as houses. This is called the face-inversion effect and is of interest as it suggests that faces are perceived, represented, or processed differently from, say, houses. However, the existence of this effect rests on the implicit assumption that the relationship between memory strength and accuracy is the same for faces and houses. This illustrates a more general problem whereby constructs of interest, such as memory strength, attention, or affect, must be inferred from changes in some observable feature of human behaviour. Remarkably, this problem is not confined to psychology but affects all of science and has been called by Chang (2004) the problem of nomic measurement. I outline how this problem affected attempts to measure temperature by physicists over a 250-year period and draw some stern lessons for psychology consonant with earlier admonitions by Loftus (1978). I conclude that at our present level of development, there can be no such thing as a face-inversion effect or other materials-based effects, such as the picture-superiority effect, or the word-frequency mirror effect. Chang, H. (2004). Inventing temperature: Measurement and scientific progress. New York: Oxford University Press. Loftus, G. R. (1978). On interpretation of interactions. Memory & Cognition, 6(3), 312-319.


Speaker: Erkelens, Casper
Author 1: Erkelens, Casper
Utrecht University
c.j.erkelens@uu.nl
Title: The power of linear perspective in slant perception and its implication for the neural processing of orientation
Abstract: Virtual slant is defined here as the slant of a surface based on the assumption of linear perspective. Virtual slants of obliquely viewed 2D figures consisting of skewed columnar grids were computed as a function of depicted slant and slant of the picture surface. Computations were based on an assumption of parallelism. Virtual slants were compared with perceived slants in binocular viewing conditions. Perceived slant was highly correlated with virtual slant. Contributions of screen-related cues, including disparity and vergence, were negligibly small. The results imply that many past findings of both transformation and (apparent) compensation in pictorial viewing are straightforwardly explained by virtual slant. Analysis shows that slant is perceived from converging lines whose angular differences are smaller than the limits that have been measured in orientation discrimination tasks. Slant perception on the basis of linear perspective implies non-local comparisons between line orientations. The power of linear perspective suggests an as yet unproposed role for the elaborate network of long-range connections between the abundance of orientation detectors in the visual cortex.


Speaker: Foster, James
Author 1: Foster, James
University of Colorado, Boulder
jmfoster@gmail.com
Author 2: Jones, Matt
University of Colorado, Boulder
mcj@colorado.edu
Title: Analogical Reinforcement Learning
Abstract: The goal of the present work is to develop a computational understanding of how people learn abstract concepts. Research in analogical reasoning suggests that higher-order cognitive functions such as abstract reasoning, far transfer, and creativity are founded on recognizing structural similarities among relational systems. However, we argue that a critical element is missing from these theories, in that their operation is essentially unsupervised, merely seeking patterns that recur in the environment, rather than focusing on the ones that are predictive of reward or other important outcomes. Here we integrate theories of analogy with the computational framework of reinforcement learning (RL). We propose a computational synergy between analogy and RL, in which analogical comparison provides the RL learning algorithm with a measure of relational similarity, and RL provides feedback signals that can drive analogical learning. We formalized this integration in a model that learns to play tic-tac-toe. The model uses RL to incrementally learn value estimates of stored exemplars and schemas. These estimates are used to predict win probabilities for different game states by similarity-weighted averaging, where similarity is determined by the quality of analogical mappings. On some trials, especially useful analogies produce new schemas that are added to the pool. Initial simulation results support the power of this approach.
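A stripped-down sketch of the proposed synergy, with all specifics invented: stored exemplars carry learned values, a new state's value is the similarity-weighted average of those values, and a TD-style error updates the contributing exemplars in proportion to their similarity. The exponential similarity function below is only a placeholder for the quality of analogical mappings in the full model.

    import numpy as np

    rng = np.random.default_rng(5)
    n_exemplars, dim, alpha, trials = 20, 9, 0.1, 500

    exemplars = rng.normal(size=(n_exemplars, dim))    # stand-ins for stored relational structures
    values = np.zeros(n_exemplars)                     # learned value of each exemplar/schema

    def similarity(state):
        d = np.linalg.norm(exemplars - state, axis=1)
        return np.exp(-d)                              # placeholder for analogical mapping quality

    for _ in range(trials):
        state = rng.normal(size=dim)
        w = similarity(state)
        w = w / w.sum()
        v_hat = w @ values                             # similarity-weighted value estimate
        reward = float(state.sum() > 0)                # toy "win" signal
        values += alpha * w * (reward - v_hat)         # credit assigned by analogical similarity

    print(np.round(values, 2))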


Speaker: Goldstone, Robert
Author 1: Braithwaite, David
Indiana University
baixiwei@gmail.com
Author 2: Goldstone, Robert
Indiana University
rgoldsto@indiana.edu
Title: Example variability is beneficial if you’re mathematically strong enough to take it
Abstract: When teaching a concept from multiple examples, a potent variable to manipulate is the variability of the examples. Low variability may help learners see the structural patterns held in common by the examples. High variability may help learners generalize these patterns to new examples. Some proposals have advocated increasing the variability of examples over time to capitalize on both of these advantages (Elio & Anderson, 1984; Kotovsky & Gentner, 1996). We have studied the possible moderating effect of individual differences on variability in the domain of mathematical combinatorics problems. Learners with relatively strong prior knowledge of combinatorics, as measured by self-reports or pre-test scores, benefitted from high levels of example variability more than learners with low prior knowledge. High variability also increased the abstractness, but not correctness, of learners' descriptions of the general method for solving combinatorics problems, suggesting two separable components involved in generating mathematizations: identification of structural patterns, and abstraction of those patterns from the details of specific examples.


Speaker: Hanson, Andrew J
Author 1: Hanson, Andrew J
Indiana University
hansona@indiana.edu
Title: Multitouching the Fourth Dimension
Abstract: We demonstrate and explain how a functional cognitive understanding of simple 4D objects can be cultivated by interactive graphics methods available on a multitouch handheld device such as an iPhone. As our prototype example, we employ the 4D analog of a rolling die, and present the results of a user study showing that navigation to a specific goal state is 50 percent faster for our new multitouch interface design compared to our best mouse/keyboard-based interface.


Speaker: Hemmer, Pernille
Author 1: Hemmer, Pernille
Rutgers University
pernille.hemmer@rutgers.edu
Author 2: Criss, Amy
Syracuse University
acriss@syr.edu
Title: Evaluating Word Frequency as a Continuous Variable in Recognition Memory
Abstract: The word frequency mirror effect, higher hit rates and lower false alarm rates for low compared to high frequency words, is one of the hallmarks of recognition memory. However, this regularity of memory is limited because normative word frequency (WF) has been treated as discrete (low vs. high). We treat WF as a continuous variable and find a radically different pattern of performance. Hit rates show a clear non-monotonic U-shaped relationship. That is, hit rates are higher at both the high and low end of the frequency continuum. False alarm rates increase with increasing WF. We discuss the constraints these data place on the Retrieving Effectively from Memory (REM) model and other models of episodic memory.


Speaker: Hendrickson, Andrew
Author 1: Hendrickson, Andrew
University of Adelaide
drew.hendrickson@adelaide.edu.au
Author 2: Navarro, Daniel
University of Adelaide
Author 3: Perfors, Amy
University of Adelaide
Title: Conservatism in generalization across domains
Abstract: The degree to which individuals are more or less conservative in generalising is increasingly being used in clinical and educational assessment, but the consistency of individual differences in generalisation conservatism across cognitive tasks has not been systematically assessed. In the current work, we report the results of an assessment of conservatism in generalisation across a wide array of domains. Those domains include: probability assessment of which distribution items were sampled from, generalisation of grammatical rules to new instances in a grammar learning task, categorisation of new instances in an environment with a shifting category boundary, and a probability assessment of category membership for new items in a one-dimensional category space. Conservatism in each task is quantified as a set of parameters in a cognitive model specific to that task, and these assessments reveal a pattern of conservatism across individuals that is not consistent across all tasks. This suggests that a single measure of conservatism in generalisation across all cognitive domains might not be appropriate. More complex structures of conservatism and domain interaction will be discussed.


Speaker: Hoffmann, Janina Anna
Author 1: Hoffmann, Janina Anna
University of Basel
janina.hoffmann@unibas.ch
Author 2: von Helversen, Bettina
University of Basel
bettina.vonhelversen@unibas.ch
Author 3: Rieskamp, Jörg
University of Basel
joerg.rieskamp@unibas.ch
Title: How episodic and working memory affect rule- and exemplar-based judgments
Abstract: Making accurate judgments, such as correctly diagnosing a patient, is an essential skill in everyday life. However, little is known about the basic cognitive skills required for accurate judgments. When making judgments, people often rely on two kinds of strategies: rule-based and exemplar-based strategies. These strategies differ in the cognitive abilities they require. Specifically, high working memory capacity may benefit rule-based judgments, whereas long-term memory may be crucial for exemplar-based judgments. To investigate this hypothesis, 279 participants performed two judgment tasks that were either best solved by a rule-based or an exemplar-based strategy. Additionally, we measured working memory capacity, episodic memory, and implicit memory with three tests. Consistent with our hypothesis, structural equation modeling showed that working memory capacity predicted judgment accuracy in the rule-based task, whereas episodic memory predicted judgment accuracy in the exemplar-based task. Implicit memory was not related to judgment accuracy. Apparently, different memory abilities are essential for successfully adopting different judgment strategies.


Speaker: Holden, John
Author 1: Holden, John
University of Cincinnati
john.holden@uc.edu
Title: Cognitive Effects as Time Dilation
Abstract: Cognitive manipulations stretch response time distributions rather than simply shifting their location. Understanding the basis of this shape change promises to inform cognitive theory. In the context of cognitive tasks that measure response time, distribution rescaling refers to a proportional and self-similar re-sizing of a response time distribution. Empirical response time distributions resulting from several standard cognitive manipulations are examined for evidence of distribution rescaling. One possible basis for the emergence of self-similar distributions arising from cognitive activity is the fundamental mismatch between relative biological time and absolute clock time. Chemical and biological processes govern all neurophysiological and behavioral activity. These often rate-limited neurochemical and physiological processes do not generally unfold at truly fixed time scales, but rather at variable and often proportional rates, across fractal resource networks. Thus, relative to the absolute clock time of a laboratory computer, cognitive and neurophysiological time may tend to express proportional time dilation or stretching.


Speaker: Hotaling, Jared
Author 1: Hotaling, Jared
Indiana University
jhotalin@indiana.edu
Title: Decision Field Theory-Dynamic: A Cognitive Model of Planning On-The-Fly
Abstract: Humans are often faced with complex choices involving many interrelated decisions and events. In these situations, achieving one's goals usually requires planning a sequence of actions, rather than a single decision. I apply Decision Field Theory-Dynamic (DFT-D), a formal model of planning and multistage choice, to account for individuals' actions in a dynamic decision making study. DFT-D is based on the idea that people plan future choices on-the-fly, through quick, repeated mental simulations of potential future outcomes. Its mechanisms provide insight into how people collect and process information, and by fitting the model at the individual level we can begin to explain individual differences in these terms. DFT-D is compared to several simpler models that assume no mental simulation. I find, through model comparisons, that DFT-D provides the best account of individuals' behavior.


Speaker: Jones, Matt
Author 1: Jones, Matt
University of Colorado
mcj@colorado.edu
Author 2: Curran, Tim
University of Colorado
tim.curran@colorado.edu
Author 3: Mozer, Michael
University of Colorado
mozer@colorado.edu
Author 4: Wilder, Matthew
University of Colorado
mattwilder.cu@gmail.com
Title: Sequential Effects in Response Time Reveal Learning Mechanisms and Event Representations
Abstract: Binary choice tasks such as two-alternative forced choice show a complex yet consistent pattern of sequential effects, whereby responses and response times depend on the detailed pattern of prior stimuli going back at least five trials. We show this pattern is well explained by simultaneous incremental learning of two simple statistics of the trial sequence: the base rate and the repetition rate. Subtler aspects of the data that are not explained by these two mechanisms alone are explained by their interaction, via learning from joint error correction. We also find that these learning mechanisms are dissociated into stimulus and response processing, as indicated by event-related potentials, manipulations of stimulus discriminability, and reanalysis of past experiments that eliminated stimuli or prior responses. Thus sequential effects in these tasks appear to be driven by learning the response base rate and the stimulus repetition rate. Connections are discussed between these findings and previous research attempting to separate stimulus- and response-based sequential effects, and research using sequential effects to determine mental representations. We conclude that sequential effects offer a powerful means for uncovering representations and learning mechanisms.
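The two proposed learning mechanisms can be written down in a few lines: exponentially decaying (delta-rule) estimates of the base rate and of the repetition rate, updated after every trial. The way the two predictions are linked to an expected RT below is an arbitrary illustrative assumption, not the authors' fitted model.

    import numpy as np

    rng = np.random.default_rng(6)
    alpha_base, alpha_rep = 0.2, 0.2
    p_base, p_rep = 0.5, 0.5                 # current estimates: P(stimulus = 1), P(repetition)

    stimuli = rng.integers(0, 2, size=1000)
    prev = stimuli[0]
    for s in stimuli[1:]:
        predicted = p_base if s == 1 else 1 - p_base
        predicted_rep = p_rep if s == prev else 1 - p_rep
        expected_rt = 450 - 60 * (predicted + predicted_rep)   # toy linking function (ms)
        p_base += alpha_base * (s - p_base)                    # delta-rule updates
        p_rep += alpha_rep * (float(s == prev) - p_rep)
        prev = s

    print(round(float(p_base), 2), round(float(p_rep), 2))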


Speaker: Kachergis, George
Author 1: Kachergis, George
Leiden University
george.kachergis@gmail.com
Author 2: de Kleijn, Roy
Leiden University
kleijnrde@fsw.leidenuniv.nl
Author 3: Hommel, Bernhard
Leiden University
hommel@fsw.leidenuniv.nl
Title: Towards a Spiking Neural Model for Sequential Action Control
Abstract: Action selection, planning, and execution are continuous processes that evolve over time, responding to perceptual feedback as well as evolving top-down constraints. The Theory of Event Coding (Hommel et al., 2001) posits that actions and perceptions share a common representation with bidirectional associations between the two. Thus, in this view, not only does perception select actions (along with task context), but also actions are used to generate perceptions (i.e., intended effects). We propose a spiking neural network model that implements the Theory of Event Coding to carry out sequential action control in hierarchically structured tasks such as coffee-making. Unlike traditional neural network models, which use discrete percepts to generate discrete outputs, spiking models accept real-time input and output (e.g., Natschläger, Maass, and Markram, 2002). The internal state reflects the input and network history, and the continuous output can become more fine-tuned as further perceptual input is received (i.e., discriminating “BLood” vs. “BLack”) and as the internal context evolves. Thus, this model can show a variety of context effects for sequential actions that humans show. Moreover, embedding both the perceptions and actions in time—as they are in the real world—shows that the model generalizes well to time-warped sequences, and even makes mistakes resembling human errors.


Speaker: Ketels, Shaw
Author 1: Ketels, Shaw
University of Colorado at Boulder
shaw.ketels@colorado.edu
Author 2: Healy, Alice
University of Colorado at Boulder
alice.healy@colorado.edu
Author 3: Bromwell, Alan
University of Colorado at Boulder
alan.bromwell@colorado.edu
Author 4: Jones, Matt
University of Colorado at Boulder
mcj@colorado.edu
Title: Training away anchoring in a centroid judgment task
Abstract: Initial impressions are lasting, and thus initial misunderstandings in classroom situations can hinder subsequent learning. In previous work we described evidence of the anchoring bias in a centroid judgment task involving sequentially arriving targets, varying in spatial location. In decisions based on sequentially arriving pieces of information, the anchoring bias has been suggested to lead to the primacy, or inordinate influence of the first item presented on the subsequent decision, which is almost always observed in these decisions, as well as the recency, or inordinate influence of the last item or items, which is sometimes also observed. I'll describe four experiments in which we attempted debiasing of this anchoring. The first three experiments describe declarative and nondeclarative approaches to debiasing these anchoring effects, with results suggesting that no debiasing technique can ameliorate the strong primacy that is consistently evident in this paradigm. The fourth experiment explores the effects of articulatory suppression on the centroid judgment. Results suggest that anchoring may not be a preverbal decisional bias as was previously accepted, as articulatory suppression attenuated the primacy bias seen in every other case. Implications for education are discussed.


Speaker: Kitto, Kirsty
Author 1: Kitto, Kirsty
Queensland University of Technology
kirsty.kitto@qut.edu.au
Title: Towards a unified treatment of cognitive context
Abstract: How are we to model a system which responds differently to the same inputs? The context of such a system is usually to blame, changing across the two scenarios, although not in a manner that could be designated as a direct input. Such contextual behaviour plagues a wide range of fields, and many different models of it have been attempted. Problematically, these solutions are often ad hoc in nature, and almost universally require that an explicit listing of each contextual scenario be made before a model can even be constructed. As contexts generally evolve and change, such a requirement is unreasonable, and often unachievable. While it is possible to assume that there are many different forms of contextuality, and that each field is grappling with its unique variety, it could instead be surmised that it is our fundamental modelling frameworks that have led to these problems. For example, contextual systems are difficult to separate into well-defined components, and this non-separability leaves them resistant to reductive analysis. This talk will discuss recent work towards a unified treatment of contextuality, inspired by quantum theory. Examples will be drawn from recent models of conceptual combination, attitude change in a social context, and semantic memory models.


Speaker: Kouider, Sid
Author 1: Kouider, Sid
Ecole Normale Supérieure & CNRS
sid.kouider@ens.fr
Title: Novel approaches to nonconscious perception
Abstract: Subliminal influences exist, but they are usually weak when measured in laboratory contexts. Does that reflect reliance on inappropriate, non-ecological methodologies (masking, flashing)? I will present two more natural approaches, relying either on gaze-contingent peripheral displays or on electrophysiological responses in the sleeping brain, and revealing stronger nonconscious emotional and semantic influences.


Speaker: Love, Bradley
Author 1: Love, Bradley
UCL
b.love@ucl.ac.uk
Author 2: Mack, Michael
University of Texas at Austin
Author 3: Preston, Alison
University of Texas at Austin
Title: Decoding the Brain's Algorithm for Categorization from its Neural Implementation
Abstract: Acts of cognition can be described at different levels of analysis: what behavior should characterize the act, what algorithms and representations underlie the behavior, and how the algorithms are physically realized in neural activity. Theories that bridge levels of analysis offer more complete explanations by leveraging the constraints present at each level. Despite the great potential for theoretical advances, few studies of human cognition bridge levels of analysis. For example, formal cognitive models of category decisions are known to accurately predict human decision making, but whether model algorithms and representations supporting category decisions are consistent with underlying neural implementation remains unknown. This uncertainty is largely due to the hurdle of forging links between theory and brain. Here, we tackle this critical problem by using brain response to characterize the mental computations and representations that support category decisions, and thereby evaluate two dominant, and opposing, formal models of categorization. We found that brain states during category decisions were significantly more consistent with latent model representations from exemplar rather than prototype theory. Representations of individual experiences, not the abstraction of these experiences, are critical for category decision making. Holding models accountable for behavior and neural implementation provides a means for advancing more complete descriptions of the algorithms of cognition.


Speaker: McLaughlin, Anne
Author 1: McLaughlin, Anne
North Carolina State University
anne_mclaughlin@ncsu.edu
Author 2: Sprufera, John
North Carolina State University
jsprufe@ncsu.edu
Title: The cognition of rock climbing: a human factors analysis of the human role in accidents
Abstract: A post-hoc analysis was performed on climbing accidents reported in Accidents in North American Mountaineering 2007, 2008, and 2009. These accident reports are often semi-structured narratives with analysis by witnesses or local experts. Only accidents during technical rock climbing were examined - ice climbing and mountaineering were excluded - resulting in a total of 73 accidents with enough information to code. These were coded by two independent raters on forty-seven codes adapted from the Human Factors Analysis and Classification System (HFACS; Wiegmann & Shappell, 2003). This accident analysis system was derived from Reason's (1990) “Swiss cheese” model of the accident causal chain where preconditions, unsafe acts, unsafe supervision, and organizational influences interact to allow accidents to occur. Changes were made to the original HFACS codes due to the unregulated nature of adventure sports compared to traditionally analyzed domains such as aviation (Sprufera & McLaughlin, 2012). Inter-rater reliability exceeded 95% and disagreements were resolved in meetings. In general, climbers involved in accidents tended to be experienced. Complacency of skilled climbers and willful disregard of standards were two of the most commonly coded contributions to accidents. We will discuss the most prevalent skill-based errors and the relatively high contribution of cognitive factors such as attentional overload and decision-making under mental or physical stress, compared to perceptual factors or equipment failure. Many of the injuries and fatalities were preventable, such as by wearing a helmet, but reducing accidents will require a change in the culture of the climbing community.


Speaker: McNamara, Timothy
Author 1: McNamara, Timothy
Vanderbilt University
t.mcnamara@vanderbilt.edu
Author 2: Chen, Xiaoli
Vanderbilt University
Author 3: He, Qiliang
Vanderbilt University
Author 4: Fiete, Ila
University of Texas
Author 5: Kelly, Jonathan
Iowa State University
Title: Bias in Path Integration in Response to Changes in Environmental Geometry
Abstract: Effective wayfinding depends on the ability to maintain spatial orientation during locomotion. One of the ways that humans and other animals maintain spatial orientation is via path integration, which operates by integrating self-motion cues over time, providing relative information about displacement. The neural substrate of path integration in mammals may exist in grid cells, which are found in the dorsomedial entorhinal cortex (dMEC) and pre- and parasubiculum in the rat. Grid cells are found in rats, mice, and bats, and signatures of grid-cell activity have been identified in humans. Grid cells have multi-peak receptive fields that form the vertices of a triangular grid spanning the environment. When a familiar environment expands or contracts, the periods of grid cells rescale in the same direction. We found that distance estimation by humans using path integration was sensitive to recent deformations of environmental geometry, and showed that patterns of error were explained by a model in which locations in the environment are represented in the brain by grid cell activity.


Speaker: Montag, Jessica
Author 1: Montag, Jessica
University of Wisconsin-Madison
montag@wisc.edu
Author 2: MacDonald, Maryellen
University of Wisconsin-Madison
mcmacdonald@wisc.edu
Title: Production of complex sentences across development: A possible role for emerging literacy
Abstract: Experience-based accounts of language processing emphasize the role of child-directed speech in infants and young children and the ongoing role of text exposure in adulthood, but the emergence of literacy in childhood is rarely investigated as a qualitatively distinctive event in language development. This is despite the fact that lexical and structural differences between written and spoken language are well documented. We investigate the effect of text exposure on production of complex sentences in eight- and twelve-year-old children and adults. Study 1 consists of corpus analyses of child-directed speech (CHILDES; MacWhinney, 2000) and child-directed literature (COCA; Davies, 2008-). We investigate the frequencies of two complex sentence types: active (The book that the woman read) and passive relative clauses (The book that was read by the woman), which convey similar messages and are thus production alternatives in Study 2. We find that spoken language has a 96:1 ratio of active to passive relatives, whereas this ratio is only 2.5:1 in written language, so an enormous amount of experience with passive relatives comes from reading. Study 2 examined 8-YO, 12-YO, and adult (all N=30) productions of relative clauses in a picture-based production task that elicited active and passive relative clauses. Passive use (z=2.69, p<0.01) and other features more frequent in written language increased with age. Individual differences in text exposure also predicted choice of specific constructions consistent with their frequencies in written versus spoken language. These findings suggest that literacy may play a role in language development not previously investigated.


Speaker: Munro, Paul
Author 1: Munro, Paul
University of Pittsburgh
pwm@pitt.edu
Title: A neural network architecture that learns structural analogies.
Abstract: A method for training overlapping feed-forward networks on analogous tasks is extended and analyzed. The network architecture consists of distinct input and output units for the separate tasks, and requires shared weights (not just shared nodes) in the hidden layers; thus there must be at least two layers of hidden units. The learning dynamics of simultaneous (interlaced) training of similar tasks interact at the shared connections of the networks. The output of one network in response to a stimulus to the other network can be interpreted as an analogical inference. In a similar fashion, the networks can be explicitly trained to map specific items in one domain to specific items in the other domain. The method has been applied to spatial tasks in simple environments and to tree structures.


Speaker: Musca, Serban
Author 1: Musca, Serban
CRPCC, EA 1285, European University of Brittany, Rennes, France
serbancmusca@gmail.com
Author 2: Ferrand, Ludovic
LAPSCO, CNRS & Blaise Pascal University, Clermont-Ferrand, France
ludovic.ferrand@univ-bpclermont.fr
Title: Factors of picture naming accuracy in healthy elderly people: a model comparison approach
Abstract: Twenty healthy French elderly adults (age range: 69-89 years, mean=78, SD=5.75) completed a confrontation naming task using 172 of the 190 line drawings of Snodgrass and Vanderwart's (1980) set. A model comparison approach using logistic regression models was carried out with age, gender, sociocultural level and MMSE score as subject descriptors, and name agreement, image agreement, concept familiarity, visual complexity, imageability (all from Alario & Ferrand, 1999), objective age of acquisition (Chalard, Bonin, Méot, Boyer, & Fayol, 2003), printed frequency, number of phonemes, number of syllables (LEXIQUE: New, Pallier, Ferrand, & Matos, 2001), printed frequency during childhood (MANULEX: Lété, Sprenger-Charolles, & Colé, 2004), frequency trajectory (Zevin & Seidenberg, 2002) and animacy as predictors that describe the items. The Bayesian Information Criterion (BIC) was used to choose the best model among the candidate models. The best model that was found included, in addition to a random subject factor, animacy, objective age of acquisition, image agreement, name agreement and number of phonemes. Naming accuracy was best for inanimate items. Naming accuracy was directly related to name agreement and image agreement, and inversely related to objective age of acquisition and number of phonemes of the target word. None of the other predictors, including frequency and frequency trajectory, predicted picture naming performance in the elderly. These results are discussed and compared to those extant in the literature on picture naming in young adults.
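A schematic sketch of the model-comparison strategy (omitting, for brevity, the random subject factor and using random stand-ins for the item norms): fit competing logistic regression models of naming accuracy and retain the one with the lowest BIC. The statsmodels calls are assumed for illustration.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    n = 20 * 172                                       # subjects x items
    X_full = rng.normal(size=(n, 3))                   # e.g., AoA, name agreement, n. of phonemes
    y = rng.integers(0, 2, size=n)                     # correct / incorrect naming

    def bic(X):
        return sm.Logit(y, sm.add_constant(X)).fit(disp=0).bic

    print("AoA only         :", round(bic(X_full[:, :1]), 1))
    print("AoA + name agree.:", round(bic(X_full[:, :2]), 1))
    print("full model       :", round(bic(X_full), 1))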


Speaker: Navarro, Dan
Author 1: Navarro, Dan
University of Adelaide
daniel.navarro@adelaide.edu.au
Author 2: Vong, Wai Keen
University of Adelaide
waikeen.vong@adelaide.edu.au
Author 3: Hendrickson, Andrew
University of Adelaide
drew.hendrickson@adelaide.edu.au
Author 4: Perfors, Amy
University of Adelaide
amy.perfors@adelaide.edu.au
Title: Sampling assumptions in categorization and generalization
Abstract: Inductive generalization, where people go beyond the data provided, is a basic cognitive capability, and underpins theoretical accounts of learning, categorization and decision-making. Bayesian models in particular make clear that when people acquire new data, the manner in which their generalizations change is connected to the assumptions they make about how those data were generated. The literature has tended to focus on two different kinds of assumption, usually termed "strong" and "weak" sampling. In strong sampling, observed exemplars are assumed to be generated from a target category, and must necessarily belong to the generating category. In weak sampling, exemplars are generated by a process assumed to be independent of the category, and so the fact that exemplars belong to a particular category is accidental. In this talk I discuss a more general family of sampling models as they pertain to concept learning tasks involving one or more target categories, the extent to which sampling assumptions depend on prior beliefs versus statistical learning, and the extent to which human inductions are in fact consistent with the standard Bayesian accounts that exist in the literature.
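The contrast between the two assumptions can be made concrete with the classic interval-hypothesis setup on a single dimension (a sketch with invented data): under strong sampling each consistent hypothesis is weighted by the size principle, 1/|h| per exemplar, whereas under weak sampling all consistent hypotheses are weighted equally.

    import numpy as np

    hypotheses = [(lo, hi) for lo in range(0, 10) for hi in range(lo + 1, 11)]   # intervals [lo, hi]
    data = [4.0, 5.0, 5.5]                                                       # observed exemplars

    def posterior(strong):
        post = []
        for lo, hi in hypotheses:
            if all(lo <= x <= hi for x in data):
                post.append((1.0 / (hi - lo)) ** len(data) if strong else 1.0)
            else:
                post.append(0.0)
        post = np.array(post)                      # uniform prior over hypotheses
        return post / post.sum()

    def p_generalize(y, post):
        return sum(p for (lo, hi), p in zip(hypotheses, post) if lo <= y <= hi)

    for label, strong in (("strong", True), ("weak", False)):
        print(label, round(p_generalize(7.0, posterior(strong)), 3))
    # strong sampling yields tighter generalization around the observed exemplars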


Speaker: Neufeld, R. W. J. (Jim)
Author 1: Neufeld, R. W. J. (Jim)
University of Western Ontario
rneufeld@uwo.ca
Title: Monitoring Cognition-Related Treatment-Regimen Efficacy using Cognitive- and Statistical-Science Principled Measurement Technology
Abstract: Cognitive performance potentially bearing on clinical symptomatology is integrated into a mixture-model design. The mixture model's base distribution is stipulated by a model of latency and/or accuracy of individual task performance; mixing distributions (hyper-distributions) are those of individual differences in base-distribution parameter values. Hyper-parameters of mixing distributions differ systematically over diagnostic groups having varying symptom severity. Cognitive-performance specimens (a representative of which is denoted {*}) on the symptom-significant cognitive task are repeatedly obtained from sampled individuals over the course of treatment. As combined with the respective hyper-distributions, the probability of each performance specimen, given group membership g, is available for each of the G varyingly symptomatic groups. These values become likelihood functions [i.e., Pr({*}|g)] for Bayesian posterior estimates, comprising Pr(g|{*}). Along with their successive renderings of cognitive performance, sampled individuals are separately re-assessed with the same method (e.g., diagnostic interview) used to construct groups originally supplying the sets of hyper-parameters. Now poised for estimation during each measurement phase are the base rates, or group-wise priors Pr(g), g = 1,2, … G, presently constituting the population of treated individuals. The desired Pr(g) values are those that maximize the multinomial likelihood of the diagnostic classification procedure's current symptom-group assignments. The methodology thus synthesizes information on cognition-related symptomatology and symptom-significant cognition, to monitor shifts in Pr(g) estimates, thereby evaluating whether the administered treatment is edging clients toward healthier functioning. Potential assets in assessing CNS-directed pharmaceuticals are noted. The methodology is numerically illustrated using memory-search probe encoding in schizophrenia.
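The core computation can be illustrated numerically (with invented likelihood values, and an EM-style base-rate update standing in for the multinomial-likelihood maximization described above): performance specimens supply Pr({*}|g), these combine with the current priors Pr(g) to give Pr(g|{*}), and the priors are then re-estimated from the treated sample.

    import numpy as np

    lik = np.array([                  # rows: sampled clients; columns: Pr(specimen | group g)
        [0.020, 0.010, 0.002],
        [0.004, 0.015, 0.012],
        [0.001, 0.008, 0.030],
        [0.018, 0.012, 0.003],
    ])
    pr_g = np.array([1 / 3, 1 / 3, 1 / 3])        # current base rates Pr(g)

    for _ in range(50):                           # iterate: posteriors -> updated base rates
        post = lik * pr_g
        post /= post.sum(axis=1, keepdims=True)   # Pr(g | {*}) for each client
        pr_g = post.mean(axis=0)                  # re-estimated Pr(g) for the treated population

    print("estimated base rates Pr(g):", np.round(pr_g, 3))
    print("client posteriors Pr(g|{*}):", np.round(post, 3), sep="\n")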


Speaker: Oberauer, Klaus
Author 1: Oberauer, Klaus
University of Zurich
k.oberauer@psychologie.uzh.ch
Author 2: Lewandowsky, Stephan
University of Bristol and University of Western Australia
stephan.lewandowsky@uwa.edu.au
Title: Modeling working-memory updating
Abstract: Complex working memory tasks such as operation span, n-back, or memory-updating tasks involve retention of relevant material while minimizing interference from irrelevant material (e.g., the arithmetic equations in operation span, outdated memoranda in updating tasks). We present a measurement-modeling framework for identifying parameters of theoretical interest, including the strength of activation of relevant and irrelevant representations, and the strength of binding of (relevant and irrelevant) representations to contexts that could serve as retrieval cues. We illustrate the modeling framework with an application to a working-memory updating experiment. Participants initially encoded four words presented in four different frames. They were then presented with a series of additional words presented one by one, each in one frame. Participants were instructed that each new word replaced the previous word in its frame. After an unpredictable number of updating steps, the last word in each frame was tested. People recalled each word by selecting it from a set of candidates, which comprised the last word in each frame, the next-to-last word in each frame, and four words not presented in the entire trial. To separate the contributions of removal of old words and encoding of new words to updating, each new word was preceded by a cue in the same frame. We varied the time between cue and word (available for removing the old word) and the time between word presentation and onset of the cue for the next updating step (available for encoding the word). We comparatively tested two models within the modeling framework, one assuming decay and rehearsal, the other assuming interference and removal of no-longer relevant representations. The interference-removal model proved superior in a Bayesian hierarchical model comparison.


Speaker: Palmeri, Thomas
Author 1: Palmeri, Thomas
Vanderbilt University
thomas.j.palmeri@vanderbilt.edu
Title: Cognitive and neural models of perceptual decisions
Abstract: Stochastic accumulator models of perceptual decisions have been linked to neural activity in awake behaving non-human primates. Our recent work has used these models to account for response probabilities and response times of saccade decisions and to predict the temporal dynamics of single unit neural activity. I will describe current work that considers how to scale our current models with small numbers of accumulators predicting activity of individual neurons to models with large numbers of accumulators predicting activity observed within large ensembles of neurons. I will also describe current work that considers how best to quantitatively compare and evaluate predicted model dynamics with observed neural dynamics.


Speaker: Pezzulo, Giovanni
Author 1: Pezzulo, Giovanni
National Research Council of Italy
giovanni.pezzulo@istc.cnr.it
Author 2: Barca, Laura
National Research Council of Italy
laura.barca@istc.cnr.it
Author 3: Lepora, Nathan
University of Sheffield
n.lepora@sheffield.ac.uk
Title: The costs of action within dynamic models of decision-making
Abstract: Dynamic models of decision-making such as the drift-diffusion model have mainly addressed tasks where the motor aspects are simple (e.g., selecting between two buttons to press). We recently performed a series of experiments (e.g., lexical decisions, perceptual discriminations) using a slightly more complex set-up (e.g., buttons are 40-50 centimeters from the subject and have to be reached and pressed with a mouse, within a deadline). Using this experimental set-up we observe that decisions are not completed before starting the action; rather, subjects start moving very soon and often revise their decision before pressing a button. Uncertainty in the decision is often reflected in the movement trajectories. We propose a formalization of decision-making in which the costs of action (e.g., reaching and pressing a button with a mouse) are considered as proper parts of the decision to be optimized. The choice balances the benefits of making the right choice (i.e., pressing the right button) against its costs (e.g., time and biomechanical costs). In this framework, accumulated evidence can be used for motor preparation and to start the action before the decision is completed, so as to minimize the risk of missing the deadline and the biomechanical costs of executing abrupt movements. Furthermore, the currently executed movements influence the choice, because when an action is initiated the costs of 'changing one's mind' depend on the motor costs of changing trajectory.


SpeakerRatcliff, Roger
Author 1Ratcliff, Roger
Ohio State University
ratcliff.22@osu.edu
Author 2McKoon, Gail
Ohio State University
mckoon.1@osu.edu
TitleIndividual Differences in Speed and Accuracy
AbstractWe examined individual differences in a number of numerosity experiments and found that accuracy and RT were not significantly correlated with each other. It might have been expected that faster subjects would be more accurate subjects, but this was not the case. (Although within an experiment, accuracy was negatively correlated with RT, such that easier conditions produced more accurate responses and faster RTs.) We show how a diffusion model analysis assigns individual differences to model parameters: Accuracy is largely governed by drift rate (the quality of the information on which a decision is based) and speed is largely governed by boundary settings (speed/accuracy criteria) and the time taken by nondecision processes. In a further experiment, speed instructions were used in an attempt to equate boundary settings across individuals. We conclude with analyses of other experiments using different age groups and speed-accuracy manipulations that demonstrate the generality of the experimental results.
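To illustrate the parameter-to-behavior mapping described above, the following sketch simulates a basic diffusion process with assumed parameter values (Euler approximation; this is not the authors' fitting code): raising the drift rate mainly raises accuracy, whereas widening the boundary separation mainly lengthens RTs.

import numpy as np

def simulate_ddm(v, a, ter=0.3, s=1.0, dt=0.002, n=1000, rng=None):
    # Unbiased starting point (a/2); Euler approximation of the diffusion.
    rng = np.random.default_rng(0) if rng is None else rng
    rt = np.empty(n)
    correct = np.empty(n, dtype=bool)
    for i in range(n):
        x, t = a / 2.0, 0.0
        while 0.0 < x < a:
            x += v * dt + s * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rt[i] = t + ter            # decision time plus nondecision time
        correct[i] = x >= a        # upper boundary coded as the correct response
    return rt, correct

# Illustrative parameter values (not fitted to any data set):
for v, a in [(1.0, 1.0), (2.0, 1.0), (1.0, 2.0)]:
    rt, correct = simulate_ddm(v, a)
    print(f"drift={v}, boundary={a}: accuracy={correct.mean():.2f}, mean RT={rt.mean():.2f}s")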


SpeakerRouder, Jeffrey
Author 1Rouder, Jeffrey
University of Missouri
rouderj@missouri.edu
TitleA note on the affordances of ROC analysis
AbstractAnalysis of ROC plots remains important in understanding latent processing in memory, perception, and attention. Most psychologists learn a "standard story" for interpreting these plots in which the curvature and asymmetry of a single isosensitivity curve are the target of analysis. For example, most psychologists believe that discrete-state models predict straight-line curves, and that asymmetry in curves licenses the possibility of two mnemonic processes. In this talk I show that this standard story is based on tenuous assumptions that have no psychological content. For instance, the straight-line prediction is predicated on a detection state that leads with certainty to the correct response. Without this assumption, isosensitivity curves from discrete-state models may not be straight lines. Likewise, all signal detection model predictions are predicated on parametric assumptions, say that latent strength is distributed as a normal. I show that rather than focusing on the shape and symmetry of individual curves, the important constraint in ROCs is in the relationships among several curves, such as among the family of curves that results when the strength of the signal is manipulated. I introduce two new formal constraints on the relationship among curves. One, termed discrete-state representability, must hold if processing is mediated by discrete states. The other, termed shift-representability, is a good benchmark for latent-strength theories. I show that recognition memory and perceptual identification of briefly flashed words yield response data that are better characterized by discrete states, while the detection of the orientation of Gabor patches is better characterized by a latent-strength account.
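The point about the straight-line prediction can be illustrated with a short simulation (illustrative parameter values, not Rouder's analysis): a high-threshold model in which detection leads to the correct response with certainty traces a straight-line ROC, but letting the mapping from the detect state to the response vary with bias bends the curve, much as an equal-variance signal detection model does.

import numpy as np
from scipy.stats import norm

bias = np.linspace(0.02, 0.98, 49)        # sweep of response bias / criterion

# High-threshold (discrete-state) model: old items enter a detect state with
# probability d; undetected items (and all new items) are guessed "old" with
# probability equal to the bias.
d = 0.6
fa = bias
hit_certain = d + (1 - d) * bias          # textbook assumption: detect -> "old" with certainty
m = bias ** 0.3                           # relaxed: detected items called "old" with a bias-dependent probability
hit_relaxed = d * m + (1 - d) * bias

# Equal-variance signal detection model with d' = 1.5 and a swept criterion.
c = norm.ppf(1 - bias)
hit_sdt, fa_sdt = norm.cdf(1.5 - c), norm.cdf(-c)

# Only the first ROC is a straight line; curvature alone therefore cannot
# separate discrete-state from latent-strength accounts.
for name, h, f in [("HT, certain mapping", hit_certain, fa),
                   ("HT, relaxed mapping", hit_relaxed, fa),
                   ("equal-variance SDT", hit_sdt, fa_sdt)]:
    print(name, np.round(f[::12], 2), "->", np.round(h[::12], 2))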


SpeakerScheibehenne, Benjamin
Author 1Scheibehenne, Benjamin
University of Basel
benjamin.scheibehenne@unias.ch
Author 2Pachur, Thorsten
Max Planck Institute for Human Development
pachur@mpib-berlin.mpg.de
TitleCognitive Models of Choice: (When) Do Hierarchical Bayesian Estimates Pay Off?
AbstractParameters of cognitive models are often used to study, measure, and describe meaningful individual differences and to gain insight into underlying cognitive processes. Using individually fitted parameters relies on the assumption that the parameter values estimated for a person remain relatively invariant across time. Using two prominent models of risky decision making—cumulative prospect theory (CPT; Tversky & Kahneman, 1992) and the transfer-of-attention-exchange model (TAX; Birnbaum & Chavez, 1997)—we compare the use of Bayesian hierarchical versus independent, non-hierarchical estimation techniques for assessing two aspects of model generalizability: parameter consistency and predictive accuracy. Results indicate that hierarchical techniques did not improve parameter stability (measured as test-retest correlations) and yielded a decrease in posterior predictive accuracy. Further analyses suggest that this is because the shrinkage induced by hierarchical estimation over-corrected for extreme yet reliable parameter values at the individual level. In the case at hand, hierarchical techniques were only advantageous in particular conditions, for example when data at the individual level were limited.
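The over-correction argument can be seen in a toy partial-pooling example (a normal-normal sketch with made-up values, not the authors' hierarchical Bayesian models): an extreme but genuine individual parameter is pulled toward the group mean in proportion to the assumed sampling noise.

import numpy as np

# Illustrative values, not the authors' data: five people, one of whom has a
# genuinely extreme parameter, each estimated with some sampling noise.
rng = np.random.default_rng(1)
true_theta = np.array([0.2, 0.4, 0.5, 0.6, 1.4])
sigma_obs = 0.15
obs = true_theta + sigma_obs * rng.standard_normal(true_theta.size)

# Empirical-Bayes style partial pooling in a normal-normal model.
mu = obs.mean()
tau2 = max(obs.var(ddof=1) - sigma_obs**2, 0.0)   # estimated between-person variance
w = tau2 / (tau2 + sigma_obs**2)                  # pooling weight (1 = no pooling)
shrunk = w * obs + (1 - w) * mu

print("independent estimates: ", np.round(obs, 2))
print("hierarchical estimates:", np.round(shrunk, 2))
# The extreme but real value (1.4) is pulled toward the group mean; if that
# person's parameter is stable over time, shrinkage hurts test-retest
# correlations and out-of-sample predictions for that person.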


SpeakerShiffrin, Richard
Author 1Johns, Brendan
Indiana University
johns4@indiana.edu
Author 2Shiffrin, Richard
Indiana University
shiffrin@indiana.edu
TitleOrthographic and Semantic Visual Priming
AbstractEight non-diagnostic subliminal or visible word primes appeared in a 3x3 grid, followed by a central word briefly flashed and masked, followed by target and foil choices. The primes were related semantically, orthographically, or both to the target choice, the foil choice, or both. Semantic primes were chosen from Deese-Roediger-McDermott lists; orthographic primes overlapped in letters and position. In some conditions, one choice was related semantically to four primes and the other orthographically to four other primes. The results were similar for semantic and orthographic priming: Brief masked primes produced a bias to choose the related item, and improved perceptual processing. Visible primes added enough noise to the decision process, and/or reduced perceptual processing enough, to harm performance (even with both-priming). In addition, long-duration primes produced discounting, reducing the tendency to select the primed choice. We report a model for the findings.


SpeakerSikstrom, Sverker
Author 1Sikstrom, Sverker
Department of Psychology, Lund University
sverker.sikstrom@psychology.lu.se
Author 2Hellman, Johan
Department of Psychology, Lund University
hellman.hell@gmail.com
TitleThe Generalized Signal Detection Theory
AbstractSignal detection theory (SDT) and the Dual Process SDT (Yonelinas, 2001) are the most influential theoretical frameworks for quantifying the underlying familiarity distributions. However, neither provides a detailed account of the basic finding that the old-item distribution has larger variability than the new-item distribution, a phenomenon that has been accounted for by the idea of encoding variability (Wixted, 2007) or an additional retrieval process (Yonelinas, 2001). We present the Generalized Signal Detection Theory (GSDT), in which the familiarity of an item is a sum of signals that each pass through a sigmoidal non-linear activation function. This theory suggests that the underlying distributions can be described by a binomial density function. The GSDT accounts for the larger variability of the old distribution when the non-linearities are emphasized, but reduces to the standard SDT when the non-linearities are attenuated. A gain parameter determines the slope of the non-linear activation function, and the resulting ratio of new- to old-item variability is estimated from the slope of the z-ROC. Because the gain parameter has previously been shown to reflect changes in catecholaminergic states (Servan-Schreiber et al., 1998), the GSDT predicted that changes in attention would result in changes in z-slope. We tested this prediction on attentive and inattentive participants and found a difference in z-slope as a result of the difference in attentional performance. The slope of the z-ROC can be related to neural encoding density, as it is directly related to the number of active nodes used for representing new and old stimuli.
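A small simulation conveys the core mechanism (the node count, gain, and input values below are illustrative assumptions, not the fitted model): summing node signals passed through a steep sigmoid yields approximately binomial familiarity distributions, the old distribution is wider than the new one, and the z-ROC slope falls below 1.

import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_items, gain = 100, 20000, 4.0        # illustrative values

def familiarity(mean_input):
    # Each node receives a noisy input and passes it through a sigmoid with
    # the given gain; familiarity is the sum over nodes. With a steep sigmoid
    # each node is close to Bernoulli, so the sum is approximately binomial.
    x = mean_input + rng.standard_normal((n_items, n_nodes))
    return (1.0 / (1.0 + np.exp(-gain * x))).sum(axis=1)

new, old = familiarity(-1.0), familiarity(0.0)  # old items receive stronger input
print("var(new) =", round(new.var(), 1), " var(old) =", round(old.var(), 1))
print("z-ROC slope ~ sd(new)/sd(old) =", round(new.std() / old.std(), 2))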


SpeakerSloutsky, Vladimir
Author 1Sloutsky, Vladimir
Ohio State University
sloutsky.1@osu.edu
TitleLanguage and Cognition: The Role of Category Labels in Categorization and Induction
AbstractHow do words affect generalization, and how do these effects change during development? One theory posits that even early in development, linguistic labels function as category markers and thus are different from the features of the stimuli they represent. Another theory holds that early in development, labels are akin to other features, but that they may become category markers in the course of development. We addressed this issue in experiments with infants, 4- to 5-year-olds and adults. In these experiments, participants learned categories and associated labels. They were then presented with a test, in which the category label was pitted against a highly salient feature. Results indicated that infants and children relied on the salient feature when performing induction, whereas adults relied on the category label. These results suggest that early in development, labels are features of items, but that they may become category markers in the course of development.


SpeakerSperling, George
Author 1Sperling, George
University of California, Irvine
sperling@uci.edu
Author 2Sun, Peng
University of California, Irvine
peng.sun@uci.edu
Author 3Wright, Charles E.
University of California, Irvine
cewright@uci.edu
Author 4Chubb, Charles
University of California, Irvine
c.chubb@uci.edu
TitleUsing centroid judgments to measure attention filters
AbstractSubjects use a mouse to position a pointer at the centroid--the center of gravity--of a briefly displayed cloud of dots. They receive precise feedback. Trained subjects judge the centroid of 2, 4, 8, or 16 dots as accurately as the position of a single dot. In attention experiments, a subset of dots in a large dot cloud is distinguished by some characteristic, such as a different color, and subjects judge the centroid of only the distinguished subset, e.g., dots of a particular color. The analysis computes the precise contribution to the centroid of every color relative to the target color, i.e., the attention filter for that particular color, and thereby the selectivity of attention for that feature in that context. A further computation of the minimum number of dots the subject must extract from the display in order to achieve the observed accuracy gives the "efficiency" of the attention filter. The procedure itself is efficient, yielding an accurate attention filter in a single session. Measured attention filters for selecting dots of one color from a mixture of isoluminant colors are remarkably precise. Filters for selecting dots of a particular gray level or saturation from among similarly colored dots of different saturations are less precise. As time permits, results will be shown for attention filters that select for combinations of colors, for lines of a particular slant, for dots in particular depth planes, and for various other features and combinations thereof, in quest of a general theory of attention to features.
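A simplified reconstruction of the filter-estimation logic (my own toy version, with an assumed linear weighted-centroid response model and illustrative weights): regress the reported centroid on the per-color mean dot positions and read off each color's weight relative to the target color.

import numpy as np

rng = np.random.default_rng(0)
n_trials = 400
true_filter = np.array([1.0, 0.3, 0.1])    # target color and two distractor colors (assumed)

# Each trial: the mean (x, y) position of the dots of each color, and a centroid
# response assumed to be a weighted average of those means plus motor noise.
mean_pos = rng.uniform(-1, 1, size=(n_trials, 3, 2))
weights = true_filter / true_filter.sum()
response = np.einsum("tcd,c->td", mean_pos, weights) \
           + 0.02 * rng.standard_normal((n_trials, 2))

# Recover the filter by least squares over the x and y coordinates jointly.
A = np.vstack([mean_pos[:, :, 0], mean_pos[:, :, 1]])   # (2 * trials, colors)
b = np.concatenate([response[:, 0], response[:, 1]])
w_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated filter, relative to target:", np.round(w_hat / w_hat[0], 2))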


SpeakerTeale, Julia
Author 1Teale, Julia
University of St Andrews
jct22@st-andrews.ac.uk
Author 2MacLeod, Malcolm
University of St Andrews
mdm@st-andrews.ac.uk
TitleDo older people suffer from an inhibitory deficit during retrieval?
AbstractAn inhibitory control deficit has been suggested as a major contributor to general age-related memory decline in older people. The basic idea is that, as we grow older, we are less able to deal with interfering information which, in turn, affects our ability to selectively retrieve target memories. Recent research in this field has produced mixed results. Using different modified versions of the retrieval practice paradigm as a measure of memory inhibition, the present studies set out to determine whether the forgetting effects typically observed under standard retrieval practice conditions might have more to do with non-inhibitory mechanisms than with inhibition per se. Retrieval-induced forgetting emerged for both younger adults (mean age 20 years) and older adults (mean age 70 years), indicating that age-related deficits in memory are unlikely to be a function of any loss in inhibitory control. Older adults, however, reported twice as many covert intrusions as young adults on a post-experimental questionnaire, suggesting that covert cuing may also be partly driving the retrieval-induced forgetting effect in older adults.


SpeakerTrueblood, Jennifer
Author 1Trueblood, Jennifer
University of California, Irvine
jstruebl@uci.edu
TitleModeling Reference-dependent Preference Reversals
AbstractNumerous studies have demonstrated that preferences among options in riskless choice are often influenced by reference points. That is, an existing reference level or status quo can bias preferences towards new alternatives. Reference-dependent effects have typically been attributed to loss aversion (Tversky & Kahneman, 1991). This research provides new experimental evidence that three standard reference-dependent effects arise in a low-level perceptual decision task with nonhedonic stimuli. This casts doubt on explanations such as loss aversion, which are limited to high-level decisions with hedonic stimuli, and indicates that reference-dependent effects may be amenable to a general explanation at the level of the basic decision process. As an alternative to loss aversion, a dynamic model of preference called the multi-attribute linear ballistic accumulator model is presented. The model accounts for the three reference-dependent effects and makes new predictions about the influence of time pressure on the effects, which are confirmed experimentally.
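As a sketch of the decision core of such a model, the following code simulates a standard linear ballistic accumulator race with assumed parameter values (in the multi-attribute version the mean drift rates would be derived from attribute-wise comparisons that include the reference option; here they are simply given), and shows how lowering the threshold, as under time pressure, speeds responses and changes choice proportions.

import numpy as np

def lba_race(mean_drifts, b=1.0, A=0.5, s=0.25, t0=0.2, n=10000, rng=None):
    # Linear ballistic accumulator race: each option's accumulator starts at a
    # uniform point in [0, A], rises linearly with a normally distributed drift,
    # and the first to reach threshold b determines the choice and the RT.
    rng = np.random.default_rng(0) if rng is None else rng
    k = len(mean_drifts)
    start = rng.uniform(0, A, size=(n, k))
    drift = rng.normal(mean_drifts, s, size=(n, k))
    drift[drift <= 0] = 1e-6                  # non-positive drifts never win
    t = (b - start) / drift
    choice = t.argmin(axis=1)
    return choice, t.min(axis=1) + t0

for b in (1.0, 0.6):                          # lower threshold ~ time pressure
    choice, rt = lba_race([1.2, 1.0, 0.9], b=b)
    shares = np.round(np.bincount(choice, minlength=3) / len(choice), 2)
    print(f"threshold b={b}: choice shares={shares}, mean RT={rt.mean():.2f}s")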


SpeakerTurner, Brandon
Author 1Turner, Brandon
Stanford University
turner.826@gmail.com
Author 2Van Maanen, Leendert
University of Amsterdam
lvmaanen@gmail.com
Author 3Forstmann, Birte
University of Amsterdam
buforstmann@gmail.com
TitleA Mechanistic Account of the Default Mode Network
AbstractSpontaneous activity of the default mode network (DMN) has important implications for trial-to-trial performance within a task. Previous examinations of the DMN have focused on relating brain activation patterns directly to behavioral measures such as accuracy or response time, aggregating across many trials. In this work, we use a flexible Bayesian framework for combining neural and cognitive models to form the Neural Drift Diffusion Model (NDDM). We fit the model to experimental data consisting of a speed-accuracy manipulation on a random dot motion task, where the stimulus on every trial is uniquely difficult. We use a hierarchical version of our model to map single-trial brain activity onto the cognitive mechanisms assumed by our model. By combining accuracy, response time, and the blood-oxygen-level-dependent (BOLD) response in a single, unified model, the link between cognitive abstraction and neurophysiology can be better understood. We use our cognitive modeling approach to show how pre-stimulus brain activity -- specifically, activity within the DMN -- can be used to simultaneously predict response accuracy and response time. Furthermore, we provide a mechanistic explanation for how activity in a brain region affects the dynamics of the underlying decision process.


SpeakerUsher, Marius
Author 1Bronfman, Zohar
Tel-Aviv University
zoharbronfman@gmail.com
Author 2Brezis, Noam
Tel-Aviv University
noam@madeira.co.il
Author 3Usher, Marius
Tel-Aviv University
marius@post.tau.ac.il
TitleCan we distinguish Phenomenal from Access Consciousness within a Sperling paradigm?
AbstractThe distinction between two types of conscious awareness, access vs. phenomenal consciousness, is a topic of intensive debate. According to one view, visual experience is rich and overflows the capacity of the attentional and working-memory system. On the other view, this richness is an illusion caused by a sparse representation of the scene, with only the attended items popping into rich phenomenology whenever the attentional spotlight hits them. To examine this issue we use a variant of the Sperling paradigm in which, while observers usually can report 3-4 items, they also report that they saw more than that. To test this introspection we presented observers with an array of colored letters, with one row (random from trial to trial) pre-cued, which had to be remembered for recall. After the letter recall, the participants were asked to report the color diversity of either the cued row or the rest of the display. The color of the letters was manipulated so as to correspond to high or low color diversity, and this variable was manipulated independently for the cued row and the rest of the display. The results indicate that people can access about 3 letters for recall in all the conditions and that they can report simultaneously the color diversity of both the cued row and of the rest of the array. Moreover, the accuracy for the color diversity of the cued row increased (decreased) when the rest of the array had consistent (inconsistent) color diversity. These results suggest that color diversity -- a phenomenal content -- is registered automatically across the array without the resources of the attentional spotlight and of working memory.


Speakervan Fraassen, Bas
Author 1van Fraassen, Bas
San Francisco State University
fraassen@princeton.edu
TitleUpdating Probability: Tracking Statistics as a Criterion
AbstractIf opinion is represented by an assignment of probabilities to propositions, the criterion proposed is that the assignment should match a possible assignment of proportions in a population. This criterion implies limitations on policies for updating in response to a wide range of types of new input. Satisfying the criterion is shown equivalent to the principle that the prior must be a convex combination of the possible posteriors. It is conjectured that this is equivalent to the requirement that prior expectation values must fall in the range spanned by possible posterior expectation values. The criterion is liberal; it allows for but does not require a policy such as Bayesian Conditionalization. It is offered as a general constraint on policies for managing opinion over time. We note that the much discussed policy of updating by maximizing relative entropy yields cases that violate the criterion.
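In symbols (notation mine, restating the abstract's wording): writing p for the prior and q_1, ..., q_n for the posteriors the updating policy could output, the convexity principle is

\[
  p \;=\; \sum_{i=1}^{n} \lambda_i\, q_i, \qquad \lambda_i \ge 0, \quad \sum_{i=1}^{n} \lambda_i = 1,
\]

and the conjectured equivalent condition on expectations is that, for every random variable X,

\[
  \min_i \, \mathbb{E}_{q_i}[X] \;\le\; \mathbb{E}_{p}[X] \;\le\; \max_i \, \mathbb{E}_{q_i}[X].
\]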


Speakervan Ravenzwaaij, Don
Author 1van Ravenzwaaij, Don
University of New South Wales
d.vanravenzwaaij@unsw.edu.au
Author 2Boekel, Wouter
University of Amsterdam
Author 3Forstmann, Birte
University of Amsterdam
Author 4Ratcliff, Roger
Ohio State University
Author 5Wagenmakers, Eric-Jan
University of Amsterdam
TitleAction Video Games Do Not Improve the Speed of Information Processing in Simple Perceptual Tasks
AbstractPrevious research suggests that playing action video games improves performance on sensory, perceptual, and attentional tasks. For instance, Green, Pouget, and Bavelier (2010) used the diffusion model to decompose data from a motion detection task and estimated the contribution of several underlying psychological processes. Their analysis indicated that playing action video games leads to faster information processing, reduced response caution, and no difference in motor responding. Because perceptual learning is generally thought to be highly context-specific, this transfer from gaming is surprising and warrants replication in a large-scale training study. We conducted two experiments in which participants practiced either an action video game or a cognitive game in five separate, supervised sessions. Prior to each session and following the last session, participants performed a perceptual discrimination task. In our second experiment we included a third condition in which no video games were played at all. Behavioral data and diffusion model parameters showed similar practice effects for the action gamers, the cognitive gamers, and the non-gamers, suggesting that, in contrast to earlier reports, playing action video games does not improve the speed of information processing in simple perceptual tasks.


SpeakerVandekerckhove, Joachim
Author 1Vandekerckhove, Joachim
University of California, Irvine
joachim@uci.edu
TitleCognitive latent variable models
AbstractWe introduce cognitive latent variable models, a broad category of formal models that can be used to aggregate information regarding cognitive parameters across participants and tasks. Latent structures are borrowed from a vast literature in the field of psychometrics, and robust cognitive process models can be drawn from the cognitive science literature. The new modeling approach allows model fitting with smaller numbers of trials per task if there are multiple participants, and is ideally suited for uncovering correlations between latent task abilities as they are expressed in experimental paradigms. Example applications deal with the structure of cognitive abilities underlying a perceptual task and with executive functioning.
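A generative sketch of the idea (illustrative loadings and noise levels, not the talk's applications): each participant has a latent ability, and each task's cognitive parameter, here a drift rate, is a task-specific loading on that ability plus noise, so that the latent structure pools information across tasks.

import numpy as np

rng = np.random.default_rng(2)
n_subj, n_tasks = 50, 3

# One latent ability per person; task parameters load on it (values assumed).
ability = rng.standard_normal(n_subj)              # latent factor scores
loadings = np.array([0.9, 0.7, 0.5])               # illustrative factor loadings
intercepts = np.array([2.0, 1.5, 1.0])
drift = intercepts + ability[:, None] * loadings + 0.2 * rng.standard_normal((n_subj, n_tasks))

# With few trials per task each drift estimate is noisy on its own, but the
# cross-task correlations implied by the loadings let a latent model pool them.
print(np.round(np.corrcoef(drift, rowvar=False), 2))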


SpeakerWagenmakers, Eric-Jan
Author 1Wagenmakers, Eric-Jan
University of Amsterdam
EJ.Wagenmakers@gmail.com
TitleA Bayesian Perspective on Replication Research
AbstractHere I outline a three-step paradigm for replication research. In the first step, the intended analyses are preregistered, allowing one to discriminate between exploratory and confirmatory tests. In the second step, evidence is monitored using one or more Bayes factors. In the third step, a sensitivity analysis inspects the robustness of the conclusions to changes in the statistical model. I illustrate the paradigm with recent replication attempts, including the effect of clipboard weight on the assessment of a job candidate, the effect of rotating paper towels on openness to experience, and the effect of horizontal eye-movements on memory.
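As a toy illustration of the monitoring step (a deliberately simple normal model, not the replication analyses from the talk): with data assumed N(delta, 1), H0: delta = 0, and H1: delta ~ N(0, 1), the Bayes factor based on the running sample mean has a closed form and can be tracked as observations accumulate.

import numpy as np
from scipy.stats import norm

# Toy sequential Bayes factor: both marginal likelihoods of the sample mean
# are normal densities under these assumptions.
rng = np.random.default_rng(0)
data = rng.normal(0.3, 1.0, size=200)     # a small true effect, for illustration

for n in (20, 50, 100, 200):
    xbar = data[:n].mean()
    bf10 = norm.pdf(xbar, 0, np.sqrt(1 + 1 / n)) / norm.pdf(xbar, 0, np.sqrt(1 / n))
    print(f"n={n:3d}: BF10 = {bf10:8.2f}")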


SpeakerWeidemann, Christoph
Author 1Jacobs, Joshua
Drexel University
joshua.jacobs@drexel.edu
Author 2Weidemann, Christoph
Swansea University
ctw@cogsci.info
Author 3Kahana, Michael
University of Pennsylvania
kahana@psych.upenn.edu
TitleDirect recordings of grid-like neuronal activity in human spatial navigation
AbstractGrid cells in the entorhinal cortex appear to represent spatial location via a triangular coordinate system. Such cells, which have been identified in rats, bats, and monkeys, are believed to support a wide range of spatial behaviors. By recording neuronal activity from neurosurgical patients performing a virtual-navigation task, we identified cells exhibiting grid-like spiking patterns in the human brain, suggesting that humans and simpler animals rely on homologous spatial-coding schemes.


SpeakerWillits, Jon
Author 1Willits, Jon
Indiana University
jwillits@indiana.edu
TitleAssociative Learning Processes CAN learn abstract, rule-based knowledge
AbstractA criticism of associative models is that they are incapable of learning and representing abstract rules (Bever, Fodor, & Garrett, 1968). I will present evidence that this criticism is incorrect for classes of associative models that posit internal, mediating variables. I will make this argument using two simulations with recurrent neural networks (RNNs), testing their ability to learn two classic types of rule-based structures. In Simulation 1, I present an RNN that learns non-adjacent sequential dependencies and transfers knowledge of those dependencies to distances on which the model has not been trained. This ability to learn distance-invariant representations of sequential structure is critical for representing knowledge of events, language, and motor plans. In Simulation 2, I present an RNN that learns abstract grammars (such as whether items in a sequence follow ABA, AAB, or ABB repetition patterns) and then transfers knowledge of the grammar to novel stimuli. It has been argued that associative models are fundamentally incapable of learning this kind of knowledge (Marcus, 1999). I will explain why RNNs succeed, and argue that the nature of this learning provides evidence that, as a general rule, learning rule-like representations will not be difficult for associative models with mediating variables. I will then discuss how this argument applies to some other mediated associative models, such as statistical models using latent variables like Latent Semantic Analysis (Landauer & Dumais, 1997) and Probabilistic Topic Models (Griffiths et al., 2007), as well as Hull's behavioristic learning theory.
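A minimal version of the Simulation 2 setup might look like the following (my own toy encoding and architecture, using distributed token vectors and PyTorch's built-in Elman RNN; whether this particular toy transfers depends on training details, so it illustrates the setup rather than reproducing the reported result).

import torch
import torch.nn as nn

torch.manual_seed(0)
dim, hidden = 16, 32

def make_batch(tokens, n):
    # Build n ABA sequences (label 0) and n ABB sequences (label 1) from
    # randomly paired, distinct tokens.
    i = torch.randint(len(tokens), (n,))
    j = (i + 1 + torch.randint(len(tokens) - 1, (n,))) % len(tokens)
    a, b = tokens[i], tokens[j]
    x = torch.cat([torch.stack([a, b, a], dim=1), torch.stack([a, b, b], dim=1)])
    y = torch.cat([torch.zeros(n, dtype=torch.long), torch.ones(n, dtype=torch.long)])
    return x, y

train_tokens = torch.randn(20, dim)     # distributed token vectors used in training
test_tokens = torch.randn(20, dim)      # novel tokens, never seen during training

rnn = nn.RNN(dim, hidden, batch_first=True)
readout = nn.Linear(hidden, 2)
opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()), lr=1e-3)

for step in range(2000):
    x, y = make_batch(train_tokens, 64)
    _, h = rnn(x)                        # h: final hidden state, shape (1, batch, hidden)
    loss = nn.functional.cross_entropy(readout(h[0]), y)
    opt.zero_grad(); loss.backward(); opt.step()

x, y = make_batch(test_tokens, 500)
with torch.no_grad():
    _, h = rnn(x)
    acc = (readout(h[0]).argmax(dim=1) == y).float().mean().item()
print("classification accuracy on sequences built from novel tokens:", acc)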


SpeakerZednik, Carlos
Author 1Zednik, Carlos
University of Osnabrueck
czednik@uos.de
TitleThe role of rational analysis in cognitive scientific explanation
AbstractWhat role does Rational Analysis play in cognitive scientific explanation? Although it is often characterized as a form of computational-level analysis to be contrasted with algorithmic-level and implementation-level analysis, this characterization is only partially helpful: it remains largely unclear how the computational level informs and constrains lower levels of analysis. The philosophical framework of mechanistic explanation can be used to clarify this issue. As I will argue in my talk, Rational Analysis often plays a heuristic role in the development of algorithmic-level mechanism sketches, as well as a justificatory role when selecting from competing implementation-level accounts.


SpeakerZhang, Shunan
Author 1Zhang, Shunan
UCSD
s6zhang@ucsd.edu
Author 2Yu, Angela
UCSD
ajyu@ucsd.edu
TitleForgetful Bayes and myopic planning: Human learning and decision-making in a bandit setting
AbstractHow people achieve long-term goals in an imperfectly known environment, via repeated tries and noisy outcomes, is an important problem in cognitive science. There are two inter-related questions: how humans represent information, both what has been learned and what can still be learned, and how they choose actions, in particular how they negotiate the tension between exploration and exploitation. In this work, we examine human behavioral data in a multi-armed bandit setting, in which the subject chooses one of four "arms" to pull on each trial and receives a binary outcome (win/lose). We compare human behavior to a variety of models that vary in their representational and computational complexity. Our results show that subjects' choices, on a trial-to-trial basis, are best captured by a "forgetful" Bayesian iterative learning model (Yu and Cohen, 2009) in combination with a partially myopic decision policy known as the Knowledge Gradient (Frazier et al., 2008). This model accounts for subjects' trial-by-trial choices better than a number of other previously proposed models, including optimal Bayesian learning and risk minimization, epsilon-greedy, and win-stay-lose-shift. It has the added benefit of coming closer in performance to the optimal Bayesian model than any of the other heuristic models of comparable computational complexity (all are significantly less complex than the optimal model). These results constitute an advance in the theoretical understanding of how humans negotiate the tension between exploration and exploitation in a noisy, imperfectly known environment.
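A sketch of the two ingredients (with assumed parameter values, and a simple exponential-forgetting update standing in for the Dynamic Belief Model's probabilistic reset): beliefs about each arm decay toward the prior, and the choice rule adds a knowledge-gradient bonus that values what can still be learned.

import numpy as np

rng = np.random.default_rng(0)
true_p = np.array([0.2, 0.4, 0.6, 0.8])   # reward probabilities of the four arms
T, gamma = 100, 0.9                        # horizon; forgetting rate (illustrative)
a0, b0 = np.ones(4), np.ones(4)            # Beta(1, 1) priors
alpha, beta = a0.copy(), b0.copy()

def knowledge_gradient(alpha, beta):
    # One-step value of information for each arm under Beta-Bernoulli beliefs.
    mean = alpha / (alpha + beta)
    kg = np.empty(len(mean))
    for a in range(len(mean)):
        up = (alpha[a] + 1) / (alpha[a] + beta[a] + 1)     # posterior mean after a win
        down = alpha[a] / (alpha[a] + beta[a] + 1)         # posterior mean after a loss
        others = np.delete(mean, a).max()
        kg[a] = mean[a] * max(others, up) + (1 - mean[a]) * max(others, down) - mean.max()
    return kg

for t in range(T):
    mean = alpha / (alpha + beta)
    choice = np.argmax(mean + (T - t - 1) * knowledge_gradient(alpha, beta))
    reward = rng.random() < true_p[choice]
    alpha[choice] += reward
    beta[choice] += 1 - reward
    # Simplified exponential forgetting toward the prior (a stand-in for the
    # Dynamic Belief Model's probabilistic reset, not the exact model):
    alpha = gamma * alpha + (1 - gamma) * a0
    beta = gamma * beta + (1 - gamma) * b0

print("final posterior means:", np.round(alpha / (alpha + beta), 2))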