Cogn Process (2009) 10 (Suppl 2):S148–S165 DOI 10.1007/s10339-009-0327-2
ABSTRACTS
Oral presentations (in alphabetical order, according to the first authors’ last names)
Spatial problem solving: assembling three-dimensional puzzles in real and virtual environments Sarwan Abbasi1, Jean-Marie Burkhardt1, Michel Denis2 1 ECI, Université Paris Descartes, Paris, France 2 LIMSI-CNRS, Orsay, France Three-dimensional puzzles are a very special kind of problem-solving situation, one that calls on a wide range of cognitive skills, including visuo-spatial capacities. Participants were invited to solve a task that consisted of assembling seven wooden blocks of various shapes, sizes, and colors to form a cube. The analysis of individual protocols, including the participants' verbal reports, indicates that the task comprises two major steps: the first consists of mentally visualizing the problem in a constrained work space, and the second of reaching the target state by adequate manipulations. Performance was shown to depend on the classification of individual blocks according to their perceived complexity. Performance was also affected by the participants' scores on two visuo-spatial tests (the Minnesota Paper Form Board and the Mental Rotations Test). Based on these behavioral data, we are developing a further experiment in which participants execute the same task in a virtual environment. Our objective is to identify micro- and macro-strategies and to integrate them into a computerized assistance system for spatial problem solvers.
Alignment of spatial perspective Elena Andonova SFB/TR 8 Spatial Cognition, University of Bremen, Bremen, Germany Spatial perspective refers to the use of reference systems in extended spatial descriptions. A route and an environment can be described from an 'external', or allocentric, view in a survey perspective, or from an embedded, or egocentric, view in a route perspective. Previous studies have shown the influence of several individual, environmental, and learning factors on perspective choices (Taylor and Tversky 1996; Taylor et al. 1999). Here, we examine whether a speaker's perspective choice depends on their interlocutor's previous use of perspective, i.e., priming across dialogue partners, in a route description task. To what extent can we find evidence of alignment of spatial perspective? Participants and confederates took turns describing routes on schematic maps. The confederate's scripted descriptions manipulated spatial perspective in an early and a later phase: perspective was either consistent or alternating, and confederates switched perspective halfway through the experimental session. Analyses of the ratio of route-perspective choices out of all valid responses for each participant revealed alignment of spatial perspective in the early but not in the later phase, indicating that speakers are influenced by their interlocutors' choices, yet in a limited way (initial bias),
and that alignment is weakened with inconsistent bias or perspective shifts by the interlocutor. These results are discussed in relation to a separate study which demonstrated spatial perspective priming in a route memory task.
Spatial and temporal cognition for the sense of agency. Neuropsychological evidence Michela Balconi, Davide Crivelli Laboratory of Cognitive Psychology, Department of Psychology, Catholic University of Milan, Milan, Italy Spatial and temporal dimensions are fundamental benchmarks for acting. Action implies an agent who moves in space and time, interacting with objects and modifying their spatial relations. This agent is characterized by an embodied sense of movement ownership and by the feeling of being the cause of an action's effects. Disruption of spatio-temporal feedback may have a major effect on agency and action. We explored the consequences of such disruption, which concerns anomalous cause–effect relationships. By introducing a systematic delay in the feedback, or a spatial distortion of the executed action, we tested the neuropsychological correlates of a coherent versus an incoherent sense of agency. The latter is represented as a mismatch between expected and actual feedback, with a concomitant dissonance in the experience of agency. Specific cognitive and neural processes are known to be associated with veridical error detection, such as the electrophysiological phenomenon of error-related negativity (ERN). We tested the hypothesis that similar processes may be found in response to erroneous external feedback on subjects' responses, and that different mechanisms and processes may be active for spatial versus temporal discordance, the former being monitored more externally and the latter more internally. Twenty subjects were tested with an ERP procedure, and the data were statistically analysed with repeated-measures ANOVAs. A specific ERN-analogous cortical phenomenon was found for both temporal and spatial features following induced erroneous feedback. The results are discussed in the light of spatial and temporal neuropsychological models.
Action simulation and spatial representation: an ERPs analysis applied to a motor imagery task Michela Balconi, Edoardo Santuccia Laboratory of Cognitive Psychology, Department of Psychology, Catholic University of Milan, Milan, Italy In the field of spatial cognition, motor imagery (MI) plays a significant role for its theoretical and empirical implications. MI can be defined as a dynamic state during which a subject mentally simulates a given action (Decety 1996). This cognitive
process can be experienced by focusing on kinaesthetic information about the action (internal perspective) or, alternatively, by directing attentional resources to visuo-spatial information about the act (external perspective). MI can therefore be envisaged as an intersection of perception, action and cognition, in which motor and visual simulative elements are critical. This study analysed ERP (event-related potential) variations obtained during a task in which 25 subjects had to generate mental motor images of everyday acts. We used complex actions grouped according to the primary effectors required for the achievement of the act (upper extremities vs. whole body). In order to analyse late movement-related cortical potentials and the perceptive/cognitive components of MI, we defined an epoch of 400 ms (−100 to +300 ms around onset). Repeated-measures ANOVAs were applied to peak and latency measures. We found a pre-motion positivity over the vertex of the scalp 55 ms before onset and a motor potential over the occipital sites at 30 ms. A wide negative amplitude was found 100 ms after onset in parieto-occipital areas, followed by a P3 over the fronto-central regions. We interpret these results in the light of motor system involvement and spatial working memory recruitment during the generation, maintenance and change of motor images.
Towards testing route descriptions for plausibility using a Markov chain Andrea Bauer, Dirk Wollherr, Martin Buss Institute of Automatic Control Engineering (LSR), Technische Universität München, 80290 Munich, Germany When humans are asked for the way, descriptions may vary between persons or even be incorrect in some cases. We envision a system that assesses route descriptions as they are given and that calculates the probability of the plausibility of every new input. To this end a user study was conducted, recording ten correct route descriptions. The data were evaluated to obtain the probabilities of transitions from each relative position in the route to the next relative position, as given in the descriptions. From these probabilities a transition matrix was deduced for the states 'straight', 'left', 'right', and 'back'. The transition matrix is used in a Markov chain to calculate the probabilities of the respective directions, which are compared to the input in a route description and thus used for a plausibility check of the input. An evaluation of the plausibility-check method using a Markov chain is presented.
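A minimal sketch of such a plausibility check, assuming a first-order Markov chain over the four direction states. The transition probabilities below are invented placeholders for illustration, not the values estimated in the study:

```python
# Hypothetical row-stochastic transition matrix:
# P[current][next] = P(next direction | current direction).
P = {
    "straight": {"straight": 0.5, "left": 0.2, "right": 0.2, "back": 0.1},
    "left":     {"straight": 0.6, "left": 0.1, "right": 0.2, "back": 0.1},
    "right":    {"straight": 0.6, "left": 0.2, "right": 0.1, "back": 0.1},
    "back":     {"straight": 0.7, "left": 0.1, "right": 0.1, "back": 0.1},
}

def step_plausibility(current, proposed):
    """Probability the chain assigns to the proposed next direction."""
    return P[current][proposed]

def route_plausibility(directions):
    """Joint probability of a whole direction sequence under the chain."""
    p = 1.0
    for prev, nxt in zip(directions, directions[1:]):
        p *= P[prev][nxt]
    return p
```

A running description could then be flagged as implausible whenever the transition probability of a new instruction falls below some chosen threshold.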
Voluntary modification of musical execution by neurofeedback training O. M. Bazanova1, A. Kondratenko2, O. Kondratenko2, E. M. Mernaya3 1 State-Research Institute of Molecular Biology and Biophysics, Siberian Branch, Russian Academy of Medical Sciences, Novosibirsk, Russia 2 Academy of Music, Skopje, Macedonia 3 State Novosibirsk Musical College, Novosibirsk, Russia Statement: Our previous investigation showed that musical performance in highly skilled musicians is accompanied by decreased tension in muscles not involved in execution, together with a simultaneous increase in EEG alpha-2 power. The goal of this study was to compare the impact of usual practice and of practice combined with simultaneous individual alpha-2-EEG-stimulating and EMG-
decreasing biofeedback (Alpha-EEG/EMG-BFB) on musical performance quality and on individual alpha-activity indices. Materials and methods: With the task "to achieve a state of high-quality musical performance complemented by a feeling of comfort", 56 music students (aged 16–25) were trained over 20 musical practice sessions. Among them, 29 had a low (<10 Hz) and 27 a high (≥10 Hz) individual alpha peak frequency (IAPF): LAF and HAF subjects, respectively. The sample was randomized (by age, gender and IAPF) into two groups: in the experimental group, musical practice was combined with Alpha-EEG/EMG-BFB; in the control group, with mock biofeedback. Results: Initially, HAF students demonstrated higher musical performance, learning efficiency and self-actualization scores than LAF students. The first Alpha-EEG/EMG-BFB session was more efficient in HAF than in LAF students. Two months of Alpha-EEG/EMG-BFB, in comparison with mock biofeedback training, yielded significantly greater improvements in the quality of musical performance, learning efficiency and self-actualization in LAF than in HAF students. Individual alpha-activity level did not change in any participants in the rest condition, but was enhanced during musical performance. Conclusion: Alpha-EEG/EMG biofeedback training increases musical performance quality. The higher musical skills that HAF students demonstrated initially can be achieved by LAF students too, through practice combined with Alpha-EEG/EMG-BFB.
Language and spatial cue integration: evidence from verbal interference studies Judith Bek1, Rosemary Varley1, Mark Blades2, Michael Siegal3,4 1 Department of Human Communication Sciences, University of Sheffield, Sheffield, UK 2 Department of Psychology, University of Sheffield, Sheffield, UK 3 Department of Psychology, University of Sheffield, Sheffield, UK 4 University of Trieste, Trieste, Italy Spatial reorientation involves representing conjunctions of environmental cues such as geometry and landmarks. Verbal shadowing has been reported to disrupt spatial reorientation in adults, and it has been claimed that natural language is necessary for integrating geometric and landmark information (Hermer-Vazquez et al. 1999). However, investigations with non-human species have demonstrated combined use of landmarks and geometry, and a number of researchers have failed to replicate the verbal interference findings. We investigated the effect of verbal shadowing on reorientation. 150 healthy adults were tested in experimental variations of the Hermer-Vazquez et al. reorientation paradigm. Verbal interference was found only when the task situation was ambiguous and extraneous visual features were present in the room. We also compared the effects on reorientation of continuous prose shadowing and non-word syllable shadowing. Non-word shadowing should occupy the phonological loop without disrupting the core linguistic mechanisms (syntax and lexicon) proposed to be involved in spatial cue integration. We found greater interference from syllables than from prose shadowing. Our results suggest that syntax and lexicon are not crucial to reorientation, but inner speech might aid reorientation when cognitive demands are high.
Motor control in young patients with unilateral brain lesions: an MEG study Paolo Belardinelli1,2,3,7, Luca Ciancetta3, Martin Staudt6, Vittorio Pizzella2,3, Alessandro Londei7, Niels Birbaumer4,5, Gian Luca Romani2,3, Christoph Braun4,5,6,8
1
Brain Research Unit, Low Temperature Laboratory, Helsinki University of Technology, Finland 2 ITAB, Institute for Advanced Biomedical Technologies, ''G. D'Annunzio'' University Foundation, Chieti, Italy 3 Department of Clinical Sciences and Biomedical Imaging, University of Chieti, Italy 4 MEG Center, University of Tuebingen, Germany 5 Institute of Medical Psychology and Behavioral Neurobiology, University of Tuebingen, Germany 6 Department of Pediatric Neurology and Child Development, University Children's Hospital, Germany 7 ECONA, Interuniversity Center for Research on Cognitive Processing in Natural and Artificial Systems, University of Rome ''La Sapienza'', Italy 8 CIMeC, Center for Mind/Brain Sciences, University of Trento, Italy Four young hemiparetic patients with pre- or perinatally acquired brain lesions were investigated during hand motor control. MEG signals were recorded while patients performed a pinch grip (1-N isometric contraction) with their paretic and nonparetic hands alternately. EMG was recorded simultaneously as a basis for coherence calculations. A set of different algorithms was developed for the data analysis. Coherence mapping was performed in the β frequency range by means of sLORETA and DICS. It was found that the effects of a peculiar cerebral reorganization allow the patients to move their paretic arm: the relocation of motor functions has shifted the motor control of the paretic arm from the lesioned to the contralesional hemisphere. Whereas for the nonparetic pinch grip coherent activity originated exclusively from contralateral M1, the paretic grip produced coherent activity in ipsilateral M1 as well as in the ipsilateral cerebellum. The functional connectivity between M1 and the cerebellum was also investigated for the paretic pinch grip. Coupling-direction analysis revealed that the functional directionality goes from M1 to the cerebellum throughout the whole pinch-grip duration.
Nevertheless, if only the first few hundred milliseconds of the motor action are taken into account, the coupling direction appears inverted. The present study verified the assumption that, in the presence of an early unilateral brain lesion, the intact hemisphere takes over motor control of the paretic (ipsilateral) hand. Moreover, the role of the cerebellum in motor deficit compensation, and its close interaction with the ipsilateral primary motor cortex, was analyzed thoroughly.
The effects of ‘order’ and ‘disorder’ on human cognitive perception in navigating through urban environments Alexandra Brettel The Bartlett School of Graduate Studies, University College London, London, UK This paper investigates how ‘order’, ‘structure’ or ‘disorder’ in street layouts is perceived when navigating through an urban environment. It builds on the assumption that the perception of ‘rhythm’, under movement, is a key factor for the understanding of an urban context. Knowledge about urban layouts can be accrued by the traveller in different ways: from static viewpoints, from top-down maps, and from travelling through the scenery. Cognitive processes involved in the organisation of information retrieved from the built environment are known to simplify and schematise information. Research into movement behaviour has so far been conducted either through wayfinding experiments, or through investigations into the configuration of street networks. This paper, however, relates two different modes of
cognising navigational choices to each other: the configurational knowledge of all possible routes, and the experience of 'event' sequences along a particular route. It thereby employs a methodological framework that, while focussing on the perception of 'order', 'structure' and 'disorder', allows for the comparison of quantitative measures from both perspectives: from 'above' and from 'along within' an urban environment. Space Syntax analytical tools and a virtual movement experiment with pre-chosen routes are applied to six city samples, delivering a body of theoretical and empirical data which is evaluated in relation to configurational (view from above) and sequential (moving through the scenery) embodiments of 'order' and 'disorder'. In conclusion, further exploration of this methodological approach is encouraged, introducing the application of sequential string-code computation in the spirit of probabilistic information theory.
Gestalt's principles in the brain: left frontal eye field and the rule of good continuation Gianluca Campana1, Giorgio Fuggetta2 1 Dipartimento di Psicologia Generale, University of Padova, Padua, Italy 2 School of Psychology, University of Leicester, UK In a previous paper (Fuggetta et al. 2007) we found that the Gestalt principle of good continuation of target positions across successive trials can guide visual search, facilitating targets that appear in an implicitly expected position. In this study we aimed to investigate the neural basis of this effect. Using rTMS during the inter-trial interval to disrupt the implicit short-term memory of the sequence of positions that the target occupied during the previous trials, we found that this effect is nulled by TMS over the right FEF, and even reversed by TMS over the left FEF. Given the role of the left FEF in spatial cognition (e.g. priming of spatial position, as found by Campana et al. 2007), we discuss our present findings in terms of state-dependent effects of TMS (Silvanto et al., in press).
Moving attention between objects and across space: how perceptual visual characteristics influence spatial conflict resolution Paola Cappucci1,2, Marta Olivetti Belardinelli1,3, Juan Lupiáñez2 1 Department of Psychology, University of Rome ''Sapienza'', Rome, Italy 2 University of Granada, Granada, Spain 3 ECONA, Interuniversity Center of Research on Cognitive Processing in Natural and Artificial Systems, Rome, Italy The relationship between perceptual visual characteristics and space- versus object-based attention components has been thoroughly investigated. After the study by Egly et al. (1994) showing that attention moves within object boundaries, several paradigms replicated the results only partially (or even failed to replicate them), so that the same-object effect was observed only depending on the perceptual properties of the objects and on task demands. In two experiments we used a paradigm fusing the perceptual organization procedure of Avrahami (1999) with another recent experimental paradigm investigating the modulation of space- versus object-based attention in spatial conflict resolution. A peripheral cue was presented in a context of vertical versus horizontal irrelevant background lines. After a short SOA, a left- or right-pointing arrow was presented and participants responded to its direction. Spatial
Stroop was modulated by cueing and by line orientation, in opposite directions, depending on whether the cue was predictive or non-predictive. In general, these findings highlight the importance of spatial orientation organization in eliciting and resolving spatial conflict in interaction with exogenous versus endogenous attention.
Assessing orientation categorisation in the left and right hemispheres of the infant brain D. Catherwood1, A. Franklin2, E. Axelson2, J. Alvarez2 1 University of Gloucestershire, Gloucestershire, UK 2 University of Surrey, Surrey, UK In adults, there is a left-hemisphere bias in the categorization of orientation. This experiment assesses whether orientation categorisation is also handled differently by the two hemispheres of the infant brain. Five-month-old infants were familiarised to displays of two identical lines arrayed one above the other, with stimulus presentations of 250 ms lateralised to the right or left visual field (between subjects). Based on a method used by Quinn and Bomba (1986), the lines were at an oblique angle of −20°, −21°, −22°, −23°, −24° or −25° leftward from vertical (0°), giving an average familiarisation angle of −22.5°. Following familiarisation, a test phase was presented with displays consisting of pairs of lines either from the same orientation category (both oblique: leftward −22.5° and rightward +45°) or from different categories (leftward oblique vs. vertical: −22.5° and 0°), so that the within-category pair was more dissimilar in angle than the between-category pair. The pattern of infant fixation was monitored to determine whether more interest was shown to the novel angle in the between-category as opposed to the within-category displays. To assess whether there is a hemispheric asymmetry in orientation categorisation, the size of the category effect was compared for right and left visual field presentations. The results are considered in terms of whether there are developmental changes in the way in which orientation is categorised in the hemispheres of the brain. The implications for the development of spatial cognition in the natural visual environment are addressed.

Clothing terms and the body space: a cross-linguistic analysis of image-schemas and lexical fields Maria Catricalà, Annarita Guidi Roma Tre University, Rome, Italy The lexical field of clothing (CF) is permeated by an iconic component (Eco 2003; Geeraerts et al. 1994) and is an interesting one to investigate from the perspective that sees language as embodied in experience (Johnson 1987) and founded on basic domains (Langacker 1987). The relation between body space and CF can be detected across languages, and the way image schemas are expressed in a language is considered a ''typological feature''. CF can be seen as an ''amplified'' body space in which the salience of some image-schemas, understood as psychologically real (Gibbs 2005) recurring dynamic patterns of perceptual interactions (exteroception) and motor programs (proprioception), is related to semantic categorization. Within CF, body parts are bounded in lexical roots (e.g., cavigliera, leggings); some image-schemas (e.g., UP-DOWN) emerge through affixes (e.g., sottogonna; overcoat; anteojos). Other image-schemas structure the clothing field in two ways: through presupposition (e.g., the 'covering' meaning implied in the lexical root of cappotto), and through the salience of different features (e.g., piumino and giacca a vento categorize the same referent from different points of view in terms of the CENTER-PERIPHERY image-schema). We present an analysis of CF in terms of the grammaticalization and lexicalization of different frames of spatial reference (Dodge and Lakoff 2005): the intrinsic one (inherent features of the object) and the relative one (elements associated with the viewer of the scene). The interface between semantics, morphosyntax and the lexicon is described through corpus data from languages belonging to different families and morphological typologies.
Some reflections about the so-called ''mirror paradox'' P. Chistolini, I. Ruspantini Technologies and Health Department, Istituto Superiore di Sanità, Rome, Italy Why is a mirror image seen as reversed right to left but not, for instance, top to bottom? This question, well known for centuries, may seem irrelevant and banal. Precisely because of its apparent simplicity, we are all inclined to think of a simple and often hasty solution to this paradox. On the contrary, we do not think there is, at present, a completely satisfactory solution to it. Today, we know that the paradox, as enunciated above, is an ill-posed question; nevertheless, in everyday life we forget that, in geometrical terms, rotations and reflections in space are mutually incompatible and irreducible to one another. Ignoring this difference produces a host of ''strange perceptions''. Even in scientific papers, rotations and reflections are often confused with one another; a relevant example is the ''improper'' definition of mirror neurons. In order to clarify this paradox better, we set up a series of pictures of words, objects and human body sketches to be presented rotated and reflected. The observation of such pictures allows one: (1) to understand the nature of the paradox straightforwardly; (2) to disentangle the different aspects of the problem (i.e. optics, mathematics, psychology, etc.); (3) to provide a common framework for the plethora of explanations reported in the literature; (4) to help understand the origin of frequent ambiguities within such explanations.
Spoken versus written route directions Marie-Paule Daniel LIMSI-CNRS, Orsay, France Finding one's way is one of the most common activities that humans have to undertake. The information about an itinerary can be presented verbally (verbal route descriptions) or graphically (maps or sketches). The verbal description of a route can be spoken or written. In both cases, working memory is involved, especially its visuo-spatial component, but a major difference between spoken and written language lies in their differential working memory limitations. Spoken language places significant demands on memory and may require special strategies to make the information memorable. Written language does not require specialized mnemonic devices: the continuous presence of the written word greatly reduces memory load and makes repeated scanning possible. The spoken situation, because of these specific characteristics, therefore implies an increase in the cognitive load needed to achieve the route description. In this study, we conducted experiments intended, first, to check whether this difference of modality would result in noticeable differences in the route descriptions generated and, second, to examine what role the spatial skills of the describer would play in the characteristics of these route directions. To what extent would spoken route directions differ from written ones? Would the effect of modality be
different according to the spatial skills of the describers? Answering these questions was the main aim of our study. The results show that the spoken situation leads to descriptions that are more prescriptive than written ones, but less enriched with landmarks. All the results are discussed in relation to individual visuo-spatial differences.
Photometric, figural and crossmodal factors in the perception of transparency and in the depth stratification of layers Franco Delogu1,2,3, Marta Olivetti Belardinelli2,3, Cees van Leeuwen1 1 Laboratory for Perceptual Dynamics, RIKEN Brain Science Institute, Wako-Shi, Saitama, Japan 2 Department of Psychology, Sapienza University of Rome, Rome, Italy 3 ECONA, Rome, Italy Perceptual transparency is inextricably linked to the perception of depth. Two models, the Contrast Polarity Model and the Transmittance Anchoring Principle (TAP), describe the principles governing the emergence of perceptual transparency and the depth ordering of layers. These models predict that under certain contrast conditions there is considerable room for ambiguity in the interpretation of which layer is transparent and of their depth ordering. In this work, we summarize and discuss the results of two recent experimental studies by our group demonstrating that non-photometric factors, namely the relative size of the displayed regions, their animation, and auditory crossmodal cues, affect the depth stratification of the transparent layer. We found that the perceptual system makes precise and coherent decisions about the depth ordering of multiple planes in the presence of transparency, with little room left for depth-ordering ambiguity. This decision, even if principally based on photometric conditions related to the computation of region contrast differences, can also rely on figural–geometrical and crossmodal factors.

Effects of ageing and selective attention on movement speed Martin G. Edwards1, Jason A. Martin1, Chris Hughes2, Derek M. Peters2 1 School of Sport and Exercise Sciences, College of Life and Environmental Sciences, University of Birmingham, Birmingham, UK 2 Institute of Sport and Exercise Science, University of Worcester, Worcester, UK There is clear and consistent evidence across many published studies showing that as people get older, their movement speed reduces. In the data presented here, we report two experiments that explored the relationship between participant age and movement speed. In both studies, 66 adult participants were tested, grouped into younger-, middle- and older-aged groups. In the first study, participants completed the Purdue Pegboard, Box and Block, and Trail Making screening tests. The data were consistent with the literature, showing slower movement speeds on all tests for the older- compared to the younger-aged participant groups. In addition, the older adults were particularly slow on the Trail Making B compared to the A test, suggesting that selective attention may be associated with the age and movement speed relationship. In the second study, a modified Box and Block test was created, and participants' movements were tracked using motion analysis. This allowed for the assessment of sub-movements within the task. Of interest, the data showed that the movements requiring selective attention (in reduced-capacity phases of the action) showed particular slowing in the older- compared to the younger-adult age groups. The data are discussed in relation to how attention and capacity seem critically important when considering the effects of ageing on movement.

Spatial language for route-based humanoid robot navigation Mohammed Elmogy1, Christopher Habel2, Jianwei Zhang3 1 Technical Aspects of Multimodal Systems (TAMS) Group, Informatics Department, University of Hamburg, Hamburg, Germany 2 Knowledge and Language Processing (WSV) Group, Informatics Department, University of Hamburg, Hamburg, Germany 3 Technical Aspects of Multimodal Systems (TAMS) Group, Informatics Department, University of Hamburg, Hamburg, Germany A more natural interaction between humans and mobile robots can be achieved by bridging the gap between their respective forms of spatial knowledge, enabling them to communicate using this shared knowledge. This spatial knowledge can be presented in various ways; one of the most effective is to describe the route to the robot verbally. In this paper, we present a proposed spatial language, the Route Instruction Language (RIL), for describing route-based navigation tasks to a humanoid robot. The language is implemented to provide an intuitive interface that is easy and natural for novice users to use when describing a route for a mobile robot in indoor environments. The route description is parsed and processed by the command interpreter to generate spatial relationships and the main navigational actions, which are classified into locomotion, position, notification, and change-of-orientation actions. The resulting actions and spatial relationships are grounded, using perceptual anchoring, in an abstract representation (a topological map) that describes relationships among features of the route without any absolute reference system. The resulting topological map prevents the robot from getting trapped in local loops or dead ends in unknown environments, and it is used to generate an initial path trajectory for the route, which is supplied to the robot's motion planner. The RIL structure, the command interpreter and its lexicon, and the topological map generation are elucidated in detail.
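As a rough illustration of the first step of an RIL-style pipeline, verbal route instructions can be mapped to the four navigational action classes the abstract names. The keywords, function names and matching scheme below are hypothetical, since the abstract does not specify RIL's actual grammar:

```python
# Hypothetical keyword-to-action-class mapping (illustrative only; RIL's
# real lexicon and interpreter are not given in the abstract).
ACTION_CLASSES = {
    "go": "locomotion",
    "turn": "change_of_orientation",
    "stop": "position",
    "you will see": "notification",
}

def interpret(instruction):
    """Map one natural-language route instruction to (action class, text)."""
    text = instruction.lower()
    for keyword, action_class in ACTION_CLASSES.items():
        if keyword in text:
            return (action_class, text)
    return ("unknown", text)

def build_route(instructions):
    """An ordered list of interpreted actions: a crude stand-in for the
    topological route representation described in the abstract."""
    return [interpret(i) for i in instructions]
```

In a real interpreter, each action would additionally be grounded to perceived landmarks; here the ordered action list merely mimics the relative, reference-system-free structure of the topological map.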
Learning a route in a virtual environment: the effects of differing cues on the performance of typical children and individuals with Williams syndrome Emily Farran, Yannick Courbois, Alice Cruickshank Université de Lille 3, Lille, France Typically developing (TD) 6- and 9-year-olds, and participants with Williams syndrome (WS), viewed a virtual environment on a 30-inch computer monitor and navigated through mazes using a joystick. Participants were shown a route through a maze with six decision points (one correct and one incorrect direction). The floor of each path section was a different colour (route-learning cues). Path-section floors were either verbalisable colours (focal condition) or non-verbalisable colours (non-focal condition), thus invoking verbal and visual memory, respectively. In a third, focal–verbal condition, path-section floors were focal colours, verbalised by the experimenter when demonstrating the route (prompting verbal memory). Memory for the colours employed for each route was tested once each route
had been learnt. Results showed similar patterns across groups: the route in the focal–verbal condition was learnt after fewer attempts and received higher memory scores than the other conditions, for which learning and memory scores were similar. The WS group needed more attempts to learn each route, and had lower memory scores, than the TD groups. TD performance differed between groups for memory scores only (learning attempts: 6-year-olds = 9-year-olds; memory score: 6-year-olds < 9-year-olds). Results are discussed in relation to visual and verbal memory and route-learning.
Geometry and space: the spatial-angle association of response codes (SANARC effect) Antonia Fumarola1, Konstantinos Priftis2, Flavio Sartoretto3, Carlo Umiltà2 1 Department of General Psychology, University of Trieste, Trieste, Italy 2 Department of General Psychology, University of Padova, Padua, Italy 3 Computer Science Department, Università Ca' Foscari of Venezia, Venice, Italy Dehaene et al. (1993) reported that participants respond faster to small numbers (e.g., 1–4) when responses are executed in the left hemispace, whereas they respond faster to larger numbers (e.g., 6–9) when responses are executed in the right hemispace (Spatial Number Association of Response Codes; SNARC effect). Similar effects have also been reported for non-numerical ordered sequences (Gevers et al. 2003). We investigated whether there is an association between the magnitude of angle size and the side of response execution as a function of education, and whether this association has the properties of a spatially oriented mental line. We tested two groups of participants: students of structural engineering and students of psychology. In a first task, participants were asked to judge whether angles presented in the centre of the screen were before or after 90°. In a second task, participants were asked to judge whether the apex of the angle was oriented upwards or downwards and whether the arms of the angle were continuous or not. Results showed an association between angles and side of response execution only in students of structural engineering: they responded faster with the left hand to large angles (105°–150°) and with the right hand to small angles (30°–75°). This association has the properties of a right-to-left oriented mental line, resembling the sequence of angles on the goniometer. We suggest that education may give rise, in the brain, to a spatial representation of geometric magnitudes such as angles.
Solving three spatial conflict types at a time: evidence for multiple conflict control mechanisms María Jesús Funes, María Rodríguez Bailón, Rocío Ruiz Pérez, Juan Lupiáñez Departamento de Psicología Experimental y Fisiología del Comportamiento, Universidad de Granada, Granada, Spain There is controversy on whether the resolution of different types of conflict is mediated by a single, shared control mechanism or by multiple, independent conflict-specific control mechanisms. We present three experiments employing a factorial design that combines different types of spatial conflict within a single task. The finding of independent conflict effects within the same domain (space), together with specific conflict adaptation effects for each conflict type, would
support the existence of conflict-specific control mechanisms. In experiment 1, we combined Spatial Stroop and Simon types of conflict and found significant conflict effects for both types of conflict, which were independent from each other. In addition, conflict adaptation effects were specific for each conflict type. In subsequent experiments we added a third source of spatial conflict by introducing flankers surrounding the target (experiment 2) or distracter stimuli appearing at each location other than the target's (experiment 3). The introduction of flankers completely eliminated the Spatial Stroop and Simon effects. However, a pattern of independence across conflict types was obtained in experiment 3. While the pattern of results obtained for the flanker experiment is consistent with an explanation in terms of target spatial coding decay prior to the filtering process, the results obtained in experiments 1 and 3 point towards the existence of conflict-specific control mechanisms working in parallel, able to solve up to three different types of conflict at a time.
A study on a shared control navigation system: human/robot collaboration for assisting people in mobility Francesco Galluppi1, Cristina Urdiales2, Isabel Sanchez-Tato3, Francisco Sandoval2, Marta Olivetti Belardinelli1 1 Department of Psychology, Università di Roma ‘‘La Sapienza’’, Rome, Italy 2 ETSI Telecomunicación, Universidad de Málaga, Malaga, Spain 3 Centro Andaluz de Innovación y Tecnologías de la Información y las Comunicaciones, Malaga, Spain Loss of mobility is one of the most common concerns for elderly people; robotics can help in this respect while preserving individuals' independence and autonomy. This study focuses on efficiency in driving a power wheelchair when navigation control is constantly shared between the user and a purely reactive robotic navigation system. The shared control navigation system seeks to obtain an emergent behaviour that is an efficiency-weighted mix of the inputs received. The trajectories proposed by the human and those proposed by the reactive robotic navigation system are continuously mixed and weighted by local efficiency, thus combining the human's and the robot's input in the most efficient, seamless way. The advantages of using such a collaborative system as an Assistive Technology (AT) are discussed, both from an efficiency perspective and from the user's satisfaction point of view. The system allows the user to feel constantly in control of the device, enhancing her experience and comfort with the AT, while helping her more when needed. In fact, the system always adapts its behaviour to the human performance it receives, and is capable of varying the amount of its contribution according to the user's needs and conditions. In the final section, the empirical evidence is outlined.
The navigation system was implemented on a robotic Meyra power wheelchair, and tests were conducted at Fondazione Santa Lucia (FSL) in Rome with the collaboration of volunteers affected by various disabilities. Results are presented, along with trajectory and efficiency analyses.
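The efficiency-weighted mixing of human and robot commands can be sketched in a few lines. This is an illustrative sketch only, not the authors' implementation: the function name, the (linear, angular) velocity representation, and the use of efficiency scores in [0, 1] as mixing weights are assumptions made for the example.

```python
# Hypothetical sketch of efficiency-weighted shared control: blend a
# human steering command with a reactive controller's command, weighted
# by a local efficiency score assigned to each input.

def blend_commands(human_cmd, robot_cmd, human_eff, robot_eff):
    """Return an efficiency-weighted mix of two (linear, angular) commands.

    human_cmd, robot_cmd: (v, w) velocity tuples
    human_eff, robot_eff: local efficiency scores in [0, 1]
    """
    total = human_eff + robot_eff
    if total == 0:  # no usable input from either source: stop
        return (0.0, 0.0)
    wh, wr = human_eff / total, robot_eff / total
    return (wh * human_cmd[0] + wr * robot_cmd[0],
            wh * human_cmd[1] + wr * robot_cmd[1])

# Example: a skilled user (efficiency 0.8) dominates the mix, but the
# robot's input still contributes and can take over as user efficiency drops.
v, w = blend_commands((1.0, 0.2), (0.5, -0.4), 0.8, 0.2)
```

Because the weights are recomputed continuously, the blend shifts smoothly toward whichever agent is locally more efficient, which matches the adaptive behaviour described in the abstract.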
Effects on space representation of using a tool and a button Ilaria Gaudiello1, Daniele Caligiore2, Giuseppina Schiavone3, Antonino Salerno3, Fabrizio Sergi3, Loredana Zollo3, Eugenio Guglielmelli3, Domenico Parisi2, Gianluca Baldassarre2, Roberto Nicoletti1, Anna M. Borghi1
1 University of Bologna, Bologna, Italy 2 Istituto di Scienze e Tecnologie della Cognizione, CNR, Rome, Italy 3 Campus Bio-Medico, Rome, Italy
Recent studies on monkeys and humans reveal that tool use may affect space representation, or at least engender an attentional shift. Studies with patients reveal that far space is remapped as near space when subjects actively use a tool. Here, we present an experiment in which subjects will be required to use either a tool or a button to reach a target object located at different distances from the subject, that is, either in peripersonal or extrapersonal space. Behavioural data will be collected through both a kinematic sensor system and non-intrusive eye-tracking technology. We predict that, compared to a baseline reaching condition, both the tool and the button will affect space representation, remapping far space as near space. We predict a stronger effect with the tool, which is supposed to be incorporated within the body schema, than with the button, which is supposed to be perceived as an external device. More generally, our results will contribute to assessing whether actions are encoded in proximal or distal terms, that is, in terms of effector movements or of the overall action goal.
Using space to represent data: diagrammatic reasoning Valeria Giardino Marie Curie Fellow, Institut Jean Nicod (CNRS-EHESS-ENS), Paris, France In this talk, I present and discuss the criteria I propose for a classification of diagrams. Such a classification is useful for exploring the domain of diagrammatic reasoning in detail. Diagrams can be classified in terms of the use we make of them—static or dynamic—and of the correspondence between their space and the space of the data they are intended to represent. The investigation is guided not by the opposition between visual and non-visual, but by the idea that there is a continuous interaction between diagrams and language. Diagrammatic reasoning is characterized by a duality: it refers to an object, the diagram, with its spatial characteristics, and at the same time to a subject, the user, who has to interpret them. A particular place in the classification is occupied by constructional diagrams, which present the user with instructions for applying certain procedures to the space, both perceptual and conceptual, of the diagram.
Exploring the relationship between spatial language and spatial cognition using the Spanish copula estar Monica Gonzalez-Marquez Psychology Department, Cornell University, Ithaca, USA Studies abound on the ways that languages can encode space. Questions remain as to the grounding of spatial terms. Do naïve speakers grasp the spatial nature of spatial terms? Is the relationship between space and language merely descriptive? Might there be implications beyond those for language? For example, there are well-documented sex differences in spatial abilities. Does this difference also appear in spatial language? Finally, considering that different languages encode space differently, what about bilinguals? We asked these questions with regard to the Spanish copula estar (a copula is a verb like 'to be'). Estar is also a locative verb, stemming from the Latin 'stare' ('to stand'). The term is commonly believed to identify temporary relationships between a noun and its adjective. Contrastively, viewing it as a spatial term, as the etymology indicates, suggests that the term is used to mark spatial relationships. For example, 'Elisa está triste' means that Elisa is in a sadness space. This is similar to the English sentence 'He's in a bad space'. In our experiment, subjects matched a spoken sentence containing estar with the best of four image-schematic drawings. These were designed so that some contained a marked spatial quality. Results support our hypothesis that estar is significantly sensitive to spatially qualified schemas. The results also contain preliminary evidence that this sensitivity might not be strictly a linguistic phenomenon. We found that results differed between males and females, and that these in turn differed between monolinguals and bilinguals.
Task complexity modulates trade-off between locomotion and working memory usage in a large copying paradigm G. Hardiess, K. Basten, H. A. Mallot Cognitive Neuroscience, Department of Biology, University of Tübingen, Tübingen, Germany Our capacity to remember prior visual impressions is limited. Visual information can either be stored in working memory (WM) or be recovered by re-fixating objects of interest. The balance between these two strategies depends on their costs for the observer. Overall costs may include metabolic costs for storing information, costs for redirecting attention, or perceptual load. Much work has been done to investigate the trade-off between different costs for gaze movements. In a comparative visual search task, reduced gaze movement patterns were found in conditions where costs for acquiring visual information were increased (Hardiess et al. 2008). Additionally, the balance point of the trade-off between gaze movements and WM was shifted towards increased memory use. Here, we tested the existence and properties of this trade-off with respect to costs for locomotion. To this end, subjects had to copy patterns of colored blocks from a model area to a distant place—the workspace area (Ballard et al. 1995). Replicating the model, subjects could pick up one block at a time from a third place—the resource area. The three areas were arranged in an equilateral triangle. In a between-subjects factorial design we varied the costs for locomotion (i.e., distance 2.25 vs. 4.5 m) and the costs for WM load using two different complexities of block models (i.e., simple vs. complex). The results reveal an increase in memory use (higher number of visits to the model area) with longer walking distances. In addition, memory use is also elevated in the complex pattern conditions.
Graphic strategies in Williams syndrome and typically developing children Kerry Hudson1, Emily Farran2 1 Department of Psychology and Clinical Language Sciences, University of Reading, UK 2 Department of Psychology and Human Development, Institute of Education, UK A developmental progression is observed in the graphic strategies used to replicate simple and complex shapes in typical development (TD), reflecting the manner in which individuals use pre-determined rules and tactics to replicate figures. The use of such strategies is poorly conceptualised in individuals with Williams syndrome (WS). This study was the first to attempt to understand the integration and parsing of simple and complex geometric forms in WS.
Participants (WS, N = 19; TD children matched by non-verbal ability, N = 19) drew a series of simple (cross-like) and complex embedded two- and four-shape figures from models. Subsequently, participants constructed the forms from pre-drawn lines and shapes printed onto acetate sheets to assess construction ability. This allowed for determination of the homology of strategies across groups and the effects of differing task demands. It was hypothesised that the reduced graphomotor demand of the construction conditions, compared to drawing, would lead to the use of more developmentally advanced strategies by individuals with WS. It was found that, for two-shape complex figures and simple shapes, the groups did not differ in terms of strategy-use, though the WS group showed a longer response time. This suggests that individuals with WS are able to comprehend global spatial arrays in both drawing and construction as cohesive replications were produced in all conditions. Group differences emerged in the four-shape embedded figures in which the WS group showed poorer adherence to graphic strategies than the TD group. The root of this may lie in poor planning ability.
Background knowledge in human navigation: a case study in a supermarket
Christopher Kalff, Gerhard Strube Center for Cognitive Science, University of Freiburg, Friedrichstr. 50, 79098 Freiburg, Germany
The importance of background knowledge has widely been demonstrated for a range of cognitive tasks, e.g., in text comprehension, learning, and nowhere more than in all kinds of expertise. It has not played a major role in spatial cognition, however. While much spatial cognition research, for sound methodological reasons, employs environments devoid of realistic features (e.g., Gillner and Mallot 1998), even in real-life scenarios and tasks the consideration of background knowledge as an influencing factor is neglected. The aim of the study we conducted was to address this shortcoming by exposing participants to an environment with a high semantic density—a medium-sized supermarket housing 15,000 products. Thirty-eight participants had to search for 15 food objects in a real supermarket. These objects were divided into three categories. One group comprised products that were placed in a manner congruent with background knowledge (according to a food item categorization study we carried out with different participants, cf. Kalff and Strube 2008). The other two groups contained products that were placed in discordance with people's background knowledge to varying degrees. Our results clearly show a search cost advantage for products adhering to background knowledge. While search time, travelled route distance and number of stops are fairly low for the congruency group, an incongruent placement results in significantly worse performance measures (controlling for visits to this particular store and to stores of the same brand in general). We take this as (first) experimental evidence that background knowledge plays an important role in human navigation.
An investigation into the semantics of English topological prepositions
John Kelleher, Colm Sloan, Brian Mac Namee School of Computing, Dublin Institute of Technology, Dublin, Ireland
The topological prepositions ‘‘at’’, ‘‘on’’ and ‘‘in’’ constitute a fundamental set of prepositions in English. The primary constraint on the applicability of these prepositions is the proximity of the object they locate to the landmark being used. This shared constraint often results in an overlap in their range of applicability. Differentiating between the applicability of these prepositions is a problematic issue requiring recourse to both conceptual (Herskovits 1986) and functional (Coventry and Garrod 2004) information. This paper describes a psycholinguistic experiment that investigates whether the distinctions and overlaps between the applicability of these prepositions can be captured by distinctions among the topological relations between 3D objects. The stimuli for the experiment were drawings of topological relationships between two objects, based on Borrmann et al.'s (2008) and Zliansyah et al.'s (2008) algebras of 3D topological relationships. During a trial, a subject was presented with one of the drawings and an English sentence describing the spatial relation between the two objects using one of the topological prepositions ‘‘at’’, ‘‘on’’ or ‘‘in’’. Subjects were asked to rate the applicability of the sentence to the drawing on a 10-point scale, with zero denoting not acceptable at all, four or five moderately acceptable, and nine perfectly acceptable.
How fine-grained spatial analyses help us understand navigation behavior and performance with maps
Gregory Kuhnmuench Center for Cognitive Science, University of Freiburg, Freiburg, Germany
When and where do people encounter difficulties when route-following with maps? A field experiment with three types of maps as a between-participants factor was conducted to answer this question. The maps differed in the amount of information they contained (e.g., reduced to turning directions at decision points vs. a network map). Each participant used one of these maps while wayfinding and was asked to follow exactly the given route to reach the destination as quickly as possible. Locations and durations of stops were measured as an indicator of difficulties, as well as task completion time and the number of detours. A post-test was administered to measure inter-individual variability in sense of direction, situational anxiety, survey knowledge acquired in the experiment, and previous experience with everyday kinds of maps. As expected, some decision points were problematic (many branches or non-rectangular branches). Unexpectedly, the type of map affected neither where people had to stop nor task completion time. Previous map experience was not a significant predictor of overall performance. Rather than the inter-individual differences measured in the post-test, the differences in patterns of stops mattered: three groups of such patterns were identified using spatial cluster analyses. They differed significantly in the location and number of detours and in task completion time, the latter applying even to participants without any detours. Furthermore, an interaction between map type and cluster was found. These results demonstrate the usefulness of fine-grained spatial methods for data acquisition and analysis and have several implications for customized map design.
Actual versus perceived embodiment of a rhythmic pulse
G. Luck, P. Toiviainen, M. R. Thompson University of Jyväskylä, Jyväskylä, Finland
Background: When tapping in-time with point-light representations of rhythmic human movement, people tend to synchronise with certain kinematic features of the movements presented, most notably with peaks in acceleration along the trajectory. When moving spontaneously to auditory rhythmic stimuli, however, people tend to synchronise vertical movements, specifically peaks in downward velocity, with the rhythm. In the former case, a visual pulse is perceived as being embodied by peaks of acceleration along the movement trajectory; in the latter case, an auditory pulse is actually embodied as peaks in vertical velocity. This suggests either that people embody rhythmic features differently depending on the precise nature of the task (indicating the pulse, or following the pulse), or that perceived embodiment of such features differs from actual embodiment. Aims: Here, we examine differences between perceived and actual embodiment of a rhythmic pulse. Method: Participants tapped in-time with point-light representations of rhythmic full-body movement derived from motion-capture recordings of adults moving spontaneously to beat-driven music (stimuli were visual-only). From the movement data, instantaneous velocity and acceleration of joint locations, as well as angular velocity and acceleration of limb segment direction vectors and joint angles, were estimated by numerical differentiation. Multiple linear regression was then employed to examine relationships between these features and the timing of participants' taps. Results and conclusions: Data collection is underway, and detailed results and conclusions will be reported at the conference.
Investigating spatial behaviour: an application of space analysis to criminal investigations Lorenzo P. Luini, Serena Mastroberardino, Francesco S. Marucci Department of Psychology, Sapienza University of Rome, Rome, Italy Geographical profiling (GP) is an investigative methodology which uses the crime-related scenes of a criminal series to determine the area in which the offender is likely to live or to conduct relevant activities (e.g., work and exercise). The idea underlying GP is that the offender tends to use familiar places and his own spatial representations to carry out his criminal activities. The methodology is most often applied to serial crimes such as rapes, homicides and arson, but also to single crimes involving more than one place, and it is based on the connection between geographic information, knowledge of the victim, and the crime frame. The analyst uses tools and digital representations, visualised as maps, that indicate where the offender is most likely to live; investigators can use these to narrow search areas and to select investigative strategies, such as assigning priority within a list of suspects, implementing address-based searches, patrolling, and so on. The aim is to reconstruct the offender's cognitive map and to study the distance variable in his explorative strategies and mobility. In this study, we present simulation data on real cases to assess whether the output map could be a useful tool in crime investigations. Results are discussed in the light of cognitive mapping and crime pattern approaches.
Solid waste incineration and distance distortion: distance estimation as an adaptation tool Sílvia Luís1, Nuno Marques2, Dalila Antunes2, José Palma-Oliveira1 1 FPCE, University of Lisbon, Lisbon, Portugal 2 Factor Social, Lda., Lisbon, Portugal This proposal concerns the relations between variables studied in the context of a Monitoring Program of a Solid Urban Waste Treatment
Station (SUWTS), in operation since 1999 to assure residents' well-being, and a systematic distortion of the distance between individuals' residences and the SUWTS (a municipal solid waste incinerator). The theoretical approach that guides this program assumes that the psychosocial impacts typically associated with such facilities may stem not only from objective environmental changes due to the SUWTS's functioning, but also from the way these are interpreted by individuals, a familiar perspective in psychology but quite ‘‘exotic’’ in the context of Environmental Impact Studies. One can hypothesize that the influence of objective environmental features (e.g., noise/air pollution) on psychosocial well-being indicators (e.g., anxiety) is mediated by individuals' assessment of these objective features (e.g., annoyance regarding pollution/risk perception). That assessment is influenced by psychosocial variables that go well beyond those objective features and are related to the meaning of the SUWTS for the individuals (e.g., perceived distance/local identity). The analysis of almost 5,000 phone interviews conducted since 1999, using a questionnaire composed of Likert-type scales, demonstrates, for example, a systematic underestimation of the distance between the SUWTS and individuals' residences when these lie within 1 km of the SUWTS. We suggest that this self-related distance distortion might act as an illusory factor that promotes individuals' well-being in a typically stressful and unwanted situation, and that the cognitive factors that explain it could be strategically used in project development.
Considerations in designing research to evaluate wayfinding technologies James R. Marston Research Unit on Spatial Cognition and Choice, Department of Geography, UC Santa Barbara, Santa Barbara, USA The rapid increase in computer processing speed, video and audio streaming, satellite navigation systems, and overall developments in electronic devices has led to many prototypes to assist the blind in wayfinding and navigation. However, few of these products have enjoyed successful adoption by the target population. One problem might be that the lack of benchmarks or standardization for testing and evaluation has made it hard to compare devices and systematically improve and refine them. We suggest some areas where more standardization is needed. These include:
• Subject selection. Age: the product's target audience, and should others also be tested? Degree of vision: use of objective clinical data, self-reports of users' ability to see various objects pertinent to the task, or researcher-observed performance in the field prior to testing? Other disabilities: the majority of people becoming blind in the developed world are older; how do we, or should we, include subjects with hearing loss, lack of stamina, or other infirmities?
• Use of a control group: is one needed, and how can a control task be designed more objectively?
• Measures of evaluation: going beyond time and distance to include task error variables, subjective measures of success, increases in spatial understanding, contingent valuations, user suggestions, and open-ended questions.
• Additional suggestions for ‘‘real-world’’ experiments: safety, preparedness, equipment problems, precautions, special needs and adaptations.
Clear and objective evaluations, including deficiencies or negative findings, are key to enhancing the state of the science.
Spatial learning from tactile maps and virtual GPS audio guidance: evaluation and comparison James R. Marston Research Unit on Spatial Cognition and Choice, Department of Geography, UC Santa Barbara, Santa Barbara, USA Maps increase spatial awareness and provide access to critical navigation information that may otherwise be inaccessible to the blind traveler. The use of maps for pre-trip planning is particularly beneficial for blind people, as maps facilitate self-paced, offline learning of routes and spatial relations in a safe and low-stress context. Two non-visual methods for presenting spatial information for pre-trip planning are tactile maps and the virtual mode available on several speech-enabled GPS-based navigation systems for the blind. The information conveyed by these tools is somewhat different. Tactile maps provide the user with a global (survey) representation of the environment. With the virtual guidance mode on the digital map, the user inputs the origin and destination and then receives first-person, step-by-step speech descriptions of the route connecting these points. The main objective of this experiment is to identify what types of spatial knowledge can be gained using the two different methods and to determine whether using both methods together leads to increased spatial knowledge and understanding. We hope to show when it is best to use each method and what the disadvantages of each are, and to determine whether a combination of the two methods provides the best opportunity for accurate spatial knowledge acquisition and environmental awareness. To address these objectives, we are conducting an experiment that compares and contrasts the two most viable techniques for assisting the blind with trip planning and spatial awareness in new environments: tactile maps and virtual route guidance via audio descriptions.
Aging in mental representation from spatial descriptions: role of spatial perspective and visuo-spatial ability Chiara Meneghetti, Felicia Fiore, Rossana De Beni General Psychology Department, University of Padua, Padua, Italy The characteristics of spatial mental representations derived from spatial descriptions are investigated in the elderly, with a focus on the effect of spatial perspective (survey vs. route) on mental representation and its relation with visuo-spatial abilities. Analysis of age differences will cast light on the nature of spatial mental models. Three groups of participants—young (23–28 years), adult (50–60) and elderly (61–77)—listened to route and survey texts. Recall of the spatial texts was then measured and a series of visuo-spatial tasks administered. Verification test results showed the typical congruency effect in all groups, both young and old, i.e., better performance on questions posed from the same perspective as the text studied. All recall measures showed that older people had more difficulty accurately recalling spatial information than young participants. Furthermore, visuo-spatial abilities were differently related to spatial text recall in the three groups: text recall was associated with perspective-taking ability in the young, but with spatial-visualization ability in the adult and elderly groups. Overall, these results show that the spatial mental representation of older participants, as for the young, is influenced by spatial perspective (see e.g., Perrig and Kintsch 1985; Pazzaglia et al. 1994). However, the elderly encounter more difficulty than the young in integrating spatial information into a mental model (e.g., Copeland and
Radvansky 2007). Visuo-spatial abilities are, at least in part, responsible for these age differences in the construction of spatial mental representation.
Place cognition as an example of situated cognition. A study with evolved agents Orazio Miglino, Michela Ponticorvo Department of Relational Sciences, University of Naples ‘‘Federico II’’, Naples, Italy In the present paper, we describe a study on place cognition with evolved robots. Many studies have investigated how place, distance and direction are represented in the brain's network, which comprises place cells and head direction cells of the hippocampus and parahippocampus and grid cells in the dorsocaudal medial entorhinal cortex. In some cases, the spatial selectivity of place cells is partially or totally lost. In the experiments where spatial selectivity was impaired, rats were not allowed to move freely. This suggests the hypothesis that the activity of place cells may emerge through the interaction of the animal with its environment. We tested this hypothesis by reproducing these experimental conditions with simulated agents controlled by Artificial Neural Networks whose weights were selected with a Genetic Algorithm. The evolved agents were tested in both free-movement and restrained conditions while the activity of their spatial recognition mechanism was recorded. Results show that the spatial recognition mechanism only works in the free-movement condition, thus supporting an Embodied and Situated Cognition interpretation of place cognition. In fact, at least for these simulated agents, knowledge of space cannot be isolated from action. The activity of place and grid cells, often considered the building block and neural substrate of map-like representation, can be reproduced in a framework of active perception.
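The weight-selection scheme mentioned above (network weights evolved by a genetic algorithm) can be sketched as follows. This is a toy sketch under stated assumptions, not the authors' simulator: the population size, mutation parameters, and the stand-in fitness function are illustrative choices only.

```python
# Minimal illustration of evolving controller weights with a genetic
# algorithm: each candidate is a fixed-size weight vector; the best
# candidates are kept (elitism) and mutated copies fill the population.
import random

N_WEIGHTS = 10    # weights of a small feed-forward controller (assumed)
POP_SIZE = 20
GENERATIONS = 50

def fitness(weights):
    # Stand-in for evaluating an agent in its environment; here we simply
    # reward weight vectors close to an arbitrary target profile.
    return -sum((w - 0.5) ** 2 for w in weights)

def mutate(weights, rate=0.1, sigma=0.2):
    # Perturb each weight with probability `rate` by Gaussian noise.
    return [w + random.gauss(0, sigma) if random.random() < rate else w
            for w in weights]

random.seed(0)
population = [[random.uniform(-1, 1) for _ in range(N_WEIGHTS)]
              for _ in range(POP_SIZE)]
initial_best = max(fitness(ind) for ind in population)

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    elite = population[: POP_SIZE // 4]   # truncation selection, elitist
    population = elite + [mutate(random.choice(elite))
                          for _ in range(POP_SIZE - len(elite))]

best = max(population, key=fitness)       # evolved controller weights
```

In the actual study, the fitness function would be the agent's performance in the spatial task, and the evolved vector would parameterize the Artificial Neural Network controlling the agent.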
Street morphology and its effect on pedestrian movement in Historical Cairo Nabil Mohareb Department of Architectural Engineering, College of Engineering, United Arab Emirates University, UAE In order to understand the effect of morphological cross-sections on pedestrian movement inside historical Cairo, this research examines building-height to street-width ratios, highlighting their effect on pedestrian movement speed and density. The two-dimensional morphology used by space syntax theory and methods, based on axial lines and convex spaces, does not consider the influence of height and how it affects social interaction with the spatial configuration and the land-use distribution in historical sites. The paper carries out three experiments to determine which cross-section data are most highly correlated with pedestrian movement and densities. The experiments are as follows: cross-section area, cross-section data categorized by street width only, and finally cross-section data categorized by the ratio of building heights to street width. These experiments are compared with field survey data on pedestrian counts, number of entrances, and the speed of movement for different users. The results show that the cross-section categorized by the ratio of building heights to street width is more highly correlated with pedestrian movement and density inside historical Cairo than the other cross-sections.
Reorientation by slope cues in humans
Daniele Nardi, Amanda Y. Funk, Nora S. Newcombe, Thomas F. Shipley Spatial Intelligence and Learning Center, Department of Psychology, Temple University, Philadelphia, USA
Spatial orientation in birds and mammals has been relatively well studied in the two-dimensional, horizontal plane, but little is known about the possible importance of the vertical dimension. This dimension has a very distinctive characteristic compared to the horizontal ones because it is parallel to the force of gravity. Locomotion on a surface extending in the vertical dimension provides a suite of multimodal sensory activations, which differ from locomotion on a horizontal surface, as a consequence of kinesthetic, vestibular, proprioceptive and visual stimuli. The increased effort of movements with a vertical component potentially renders this dimension more salient than the horizontal dimensions. The simplest surface extending in the vertical dimension is a slope. As a gradient, slope can be considered a source of directional information, enabling the navigator to establish an allocentric reference frame based on the vertical axis (up–down, the direction of steepest descent) and the derived orthogonal axis (left–right, the direction of no descent) of the slope. Experiments on pigeons show that, in the presence of a slope, learning a place-finding task proceeds more rapidly, and that subjects rely heavily on a slope-based goal representation to solve the task. Furthermore, using a real-world experimental environment that can be tilted, new data are presented investigating to what degree humans can use a slope to encode a goal location, and the nature of the spatial representation that humans extract from terrain slope.
Development of spatial cue integration between and within modalities
M. Nardini, R. Bedford, D. Mareschal Birkbeck College, University of London, London, UK
Spatial cue integration is the ability to combine information from multiple sources (e.g. visual and vestibular) to maximise accuracy on spatial tasks such as reaching to a target or relocating a hidden object. Previous research shows that adults can be "statistically optimal" in combining multiple information sources appropriately to maximise spatial accuracy (Ernst and Banks 2002). I will present recent work on the developmental basis for this ability. Although even young infants know the correspondences between multisensory stimuli, and respond more rapidly to multimodal than unimodal events, the ability to combine multiple cues to improve spatial accuracy (reduce the variance of spatial estimates) seems to emerge very late. Whether given multiple cues across senses (Nardini et al. 2008) or multiple cues within a single sense (vision), children below 10 years seem not to improve their accuracy relative to the level achieved with just a single cue. This suggests that young children either (1) cannot compute weighted averages based on multiple cues, or (2) use a perceptual decision rule (a criterion for how much sensory evidence to collect before responding) that is optimised for speed rather than accuracy. I will discuss experimental and modelling work on the time course of spatial perceptual decisions that addresses this second hypothesis.
Barrier effects in real-world compared to virtual reality macro-environments
Eva Neidhardt1, Michael Popp2 1 University of Lüneburg, Lüneburg, Germany 2 University of the Bundeswehr, Munich, Germany
Kindergarten children, second-grade children and young adults walked a path about 1 km long in a real macro-environment (RE). At six locations, subjects stopped and were asked to point with their outstretched arm and finger in the direction of the path origin. Half of the pointing locations were classified as "barrier points" because large walls stood close to these locations in the correct pointing direction. These "barriers" were expected to decrease pointing accuracy. In the virtual reality (VR) condition a similar procedure was realized: the same environment was projected onto the inner wall of a half globe. The projected environment moved as subjects walked on a treadmill in the centre of this half globe. In the VR condition, pointing was done with a laser pointer. The effects of age, barrier condition and environment condition on pointing accuracy are analyzed. Results are discussed with respect to subjects' technical mastery of the treadmill and with respect to similarities and differences between spatial orientation in RE and in VR.
A study on the spatial perception in contemporary Japan through the analysis of the relationship between graphic and architectural language
Maria Livia Olivetti Roma Tre University, Rome, Italy
Writing in Japan has always been a matter of art, surrounded by a very long tradition. Calligraphy is only black and white; in this way it is possible to perceive immediately, within a single composition, the contrast generated between the background and the sign. Calligraphy consists of the graphic overlap of three different scripts: Chinese ideograms, Japanese ideograms and the more recent Japanese alphabet. In fact, calligraphy can combine on the same page (by virtue of its flexibility and of the overlaying capability of Japanese culture) a written text together with vertical and horizontal directions. In the same way, regarding architectural language, it is possible to detect more than one relation with this characteristic cultural flexibility and with the overlaying capability of calligraphy. Both contemporary and traditional Japanese housing spaces are marked by a continuum between internal space and the outdoors. Spatial perception in Japanese culture is shaped by a dynamic and reversible attitude. Moreover, depth is due to the background empty space rather than to any other pictorial technique, exactly as in both calligraphic and figurative portrayals. In this way, in architectural language the third dimension owes much more to the contrast between light and darkness than to a space structure obtained by means of orthogonal axes.
Compared influences of controlled and automatic spatial attention on the visuomotor priming effect
Gérard Olivier1, Guy Labiale2, Laurent Casanova1 1 Cognitive and Social Psychology Laboratory, University of Nice Sophia-Antipolis, Nice, France 2 Research Team "Development, Cognition and Acquisition", University of Montpellier, Montpellier, France
During a perceptual decision task, right-handed subjects visually fixated the center of a chessboard displayed on a computer screen. A central arrow oriented their attention either rightward or leftward, 300 ms before two chessmen appeared simultaneously, one on the right side and one on the left side. Moreover, both chessmen could be placed either on a proximal or on a distal chessboard row. Subjects were asked to grasp, as fast and accurately as possible, a proximal or distal switch depending on the colour of the chessman situated in the chessboard half previously indicated by the central arrow. Results confirm the effect on reaction times of the interaction between the target position on the chessboard (proximal versus distal) and the manual response (proximal versus distal): spatial compatibility led to faster reactions than incompatibility, suggesting that subjects mentally simulated the manual movement required to reach the target (Olivier 2006). Moreover, this visuomotor priming effect was only observed: (1) when spatial attention was initially directed to the right half of the chessboard but not to the left one; (2) when the distractor was in a distal position but not in a proximal position. These data suggest that: (1) mental simulation of a right-hand reaching movement needs to be coordinated with a rightward, controlled orientation of visual attention; (2) a proximal distractor constituted an obstacle preventing subjects from mentally reaching the distal target. This research helps to specify the complex links between action and both controlled (toward the target) and automatic (toward the distractor) spatial attention.
Mental imagery generation in different modalities activates sensory-motor areas
Massimiliano Palmiero1,2,3, Marta Olivetti Belardinelli1,2, Davide Nardo1,2, Carlo Sestieri4,5, Rosalia Di Matteo2,4, Alessandro D'Ausilio2, Gian Luca Romani4,5 1 Department of Psychology, "Sapienza" University of Rome, Italy 2 ECONA, Interuniversity Centre for Research on Cognitive Processing in Natural and Artificial Systems 3 Perceptual and Dynamic Laboratory, Riken Brain Science Institute, Japan 4 Department of Clinical Sciences and Bio-Imaging, "G. d'Annunzio" University, Chieti, Italy 5 ITAB, Institute for Advanced Biomedical Technologies, "G. d'Annunzio" University Foundation, Chieti, Italy
According to the embodied cognition perspective, sensory-motor experience supplies the basis for all conceptual knowledge formation, without any possibility of separating perception and action. In order to investigate whether imagery in different sensory modalities might be accounted for by the embodied perspective, involving sensory-motor activations, we analysed the activation elicited by the auditory presentation of 12 sentences for each imagery modality (visual, auditory, tactile, gustatory, olfactory, kinaesthetic, and somatic), contrasted with 12 abstract sentences. In general, sensory-specific area activations were found for visual, tactile, gustatory, kinaesthetic, and proprioceptive imagery, according to subjects' vividness levels (as measured by the Italian version of the QMI). These results show that mental imagery also provides the building blocks for cognition, reproducing conceptual knowledge grounded in sensory-motor systems. Interestingly, the contrast between each imagery modality and the resting condition revealed a significant activation in the left premotor cortex (BA 6) for all imagery modalities. This result confirms that mental imagery also embodies intentional mental states of action.
Enhancing navigation performance by cognitively motivated representation of angle information in virtual environments. The case of 3D wayfinding choremes
Denise Peters, Tassilo Glander Transregional Collaborative Research Center SFB/TR 8 Spatial Cognition, Universität Bremen, Bremen, Germany
Existing results in cognitive science (e.g. Klippel et al. 2005; Meilinger et al. 2007) suggest that navigation and orientation can be enhanced by a cognitively motivated graphical representation. Such representations of spatial information aim to align external representations with a user's mental representation, to enhance legibility and reduce the cognitive effort of processing the information. In this paper, we transfer an existing and tested 2D schematization approach called wayfinding choremes (Klippel et al. 2005) to a virtual 3D environment. We first discuss the theoretical background of this transfer and then present an evaluation study. The subjects in our study have to learn a route in a virtual urban environment. One group will learn the route in a choremized world (Glander et al. 2009), in which all angles of the intersections along the route are replaced by their prototypes. The prototypes are defined by an eight-sector model (Freksa 1991): at every intersection, we rotated the outgoing leg to align it with prototypical directions in 45° increments. The second group will learn the route in an unchoremized version of the virtual environment. We will analyse whether wayfinding choremes enhance navigation performance with regard to error rate and navigation time. Additionally, we will consider how subjects remember the route and whether facade information influences recognition. We will also test whether subjects remember each intersection along the route and their action decisions.
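The schematization step described above (rotating each outgoing intersection leg onto the nearest prototypical direction of an eight-sector model) can be sketched as follows. The function names and the planar coordinate convention are illustrative assumptions, not the implementation of Glander et al.:

```python
import math


def choremize_angle(angle_deg):
    """Snap a direction to the nearest prototype of an eight-sector
    model: 0, 45, 90, ..., 315 degrees."""
    return (round(angle_deg / 45.0) * 45.0) % 360.0


def rotate_leg(junction, endpoint):
    """Rotate an outgoing intersection leg onto its 45-degree prototype,
    preserving the leg's length; points are (x, y) tuples."""
    dx, dy = endpoint[0] - junction[0], endpoint[1] - junction[1]
    length = math.hypot(dx, dy)
    proto = math.radians(choremize_angle(math.degrees(math.atan2(dy, dx))))
    return (junction[0] + length * math.cos(proto),
            junction[1] + length * math.sin(proto))
```

For example, a branch leaving a junction at 100° would be redrawn at the 90° prototype, while a branch at 350° snaps to 0°.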
Influence of motion and itinerary control on episodic memory Gaën Plancher1, Julien Barra1, Alain Berthoz2, Pascale Piolino1 1 Groupe de recherche mémoire et apprentissage, Laboratoire de Psychologie et de Neurosciences Cognitives, CNRS UMR 8189, Université Paris Descartes, Paris, France 2 Laboratoire de Physiologie de la Perception et de l'Action, CNRS UMR 7152, Collège de France, Paris, France Episodic memory enables conscious recall of events in their spatiotemporal context. It has been shown that action improves verbal memory; however, the effect of action on the spatial component remains uncertain. We developed a paradigm in virtual reality testing the influence of action on all episodic memory components: what, where and when. We dissociated the motion control from the itinerary control in order to determine what in action actually improves memory. In our experiment, the motion control refers to the body motion and the itinerary control to a cognitive decision. The participants were immersed in a virtual town where they encountered different scenes and events. In the first condition, the subjects manipulated a steering wheel and pedals to move in the town and decided where to turn (MI). In the second condition, they manipulated the steering wheel and the pedals but the experimenter indicated
where to turn (M). In the third condition, the experimenter drove the car and the subjects indicated where to turn (I). In the last condition, the experimenter drove the car and decided where to turn (0). The results showed that memory performance decreased when the subjects had to control the movement (MI and M). However, control of the itinerary (I) yielded better spatial memory compared to the condition without control (0). Our findings suggest that driving leads to a cognitive overload that decreases memory. However, itinerary control improves spatial memory, suggesting that making a decision is what is advantageous in action.
Neural encoding underlying human action understanding and decision processes spreads beyond the cerebral cortex: the dynamics of basal ganglia oscillations Alberto Priori1, Gabriella Pravettoni2 1 Centro Clinico per le Neuronanotecnologie e la Neurostimolazione, Fondazione IRCCS Ospedale Maggiore Policlinico, Mangiagalli e Regina Elena e Dipartimento di Scienze Neurologiche, Università di Milano, Milan, Italy 2 Dipartimento di Scienze Politiche e Sociali, Università degli Studi di Milano, Milan, Italy The human basal ganglia link external stimuli and movement processing, controlling both movement preparation and execution. Whether basal ganglia dynamics contribute to higher-level mechanisms, such as those underlying action understanding and decision making, however, lacks a neurophysiological demonstration. We recorded subthalamic electrical activity in 12 patients with Parkinson's disease, undergoing stereotactic neurosurgical procedures for deep brain stimulation electrode implantation, during the observation of an action and of objects related and unrelated to the action context, and during a decision-making task. We then studied the dynamics of oscillations recorded directly from the deep brain electrodes in the subthalamic nucleus (STN), particularly in the beta range (low-beta 10–20 Hz and high-beta 20–35 Hz). Action observation desynchronized the low-beta oscillations but synchronized the high-beta oscillations. Conversely, only the high-beta oscillations increased during the observation of action-related objects, showing that the STN recognizes the action context. When patients performed the decision-making task, both the low- and the high-beta oscillations increased, thus providing neurophysiological demonstration that the neural encoding of decisional processes also involves the basal ganglia.
In conclusion, our results provide the first neurophysiological evidence in the human brain that the neural encoding underlying action understanding and decision processes spreads beyond the cerebral cortex, involving the basal ganglia with specific temporal dynamics. Given the high temporal resolution of neurophysiological measures, our findings will contribute to the development of novel models of action understanding and decisional processes based on dynamic interactions between the cerebral cortex and the basal ganglia.
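The band definitions above (low-beta 10–20 Hz, high-beta 20–35 Hz) can be illustrated with a minimal periodogram-based band-power computation. The sampling rate and the synthetic test signal are assumptions for illustration only, not the recording parameters of the study:

```python
import math


def band_power(signal, fs, lo, hi):
    """Mean power of the periodogram bins of `signal` (sampling rate
    `fs` Hz) that fall within the [lo, hi] Hz band. Naive DFT; fine
    for short signals."""
    n = len(signal)
    total, count = 0.0, 0
    for k in range(n // 2 + 1):
        f = k * fs / n
        if lo <= f <= hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n)
                     for t in range(n))
            im = sum(signal[t] * math.sin(2 * math.pi * k * t / n)
                     for t in range(n))
            total += (re * re + im * im) / n
            count += 1
    return total / count if count else 0.0


# Assumed toy signal: a strong 15 Hz (low-beta) plus a weak 28 Hz
# (high-beta) component, one second at an assumed 128 Hz sampling rate.
fs = 128
signal = [math.sin(2 * math.pi * 15 * t / fs)
          + 0.2 * math.sin(2 * math.pi * 28 * t / fs) for t in range(fs)]
low_beta = band_power(signal, fs, 10, 20)    # 10-20 Hz, as in the abstract
high_beta = band_power(signal, fs, 20, 35)   # 20-35 Hz
```

On this toy signal the low-beta band power dominates, since the 15 Hz component has five times the amplitude of the 28 Hz component.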
Assigning perspective in human-computer route dialogues: A contextual factors model
Robert J. Ross Transregional Collaborative Research Center SFB/TR 8 Spatial Cognition, Universität Bremen, Bremen, Germany
It is well known that speakers make use of multiple perspectives and reference frames in the description of physical space and action (Levinson 2003; Taylor and Tversky 1996). While some spatial relational terms are perspective-independent, others, including the so-called projective relations, show considerable variability in perspective and reference-frame dependence. Such variability poses a problem for language interpretation in spatial language dialogue systems, where the goal of maintaining common ground is crucial but overly explicit, and hence unnatural, descriptions should be avoided. In this paper, we report on an implemented model of perspective assignment in the interpretation of verbal route descriptions. The model is motivated by empirical findings which show notable variability in perspective choice even in a simple schematized environment. The model treats the assignment of perspective to projective relation terms in route dialogues as influenced by a number of contextual factors. These factors include physical constraints, such as the agent's pose in its environment and the feasibility of interpretations as licensed by the environment's affordances, as well as discourse factors, such as recently salient perspectives. The model has been implemented as part of an extension to a spatial dialogue framework (Ross et al. 2008). As such, the perspective assignment model also takes advantage of the complete linguistic system to engage a user when an assigned perspective does not meet a minimal acceptance threshold. We see the presented model as an important step towards more fluid use of perspective in spatial dialogue systems.
The role of vision in egocentric and allocentric spatial frames of reference
Gennaro Ruggiero, Francesco Ruotolo, Tina Iachini Department of Psychology, Second University of Naples, Naples, Italy
Spatial frames of reference are necessary to encode the position of objects in space and to maintain this information in memory. Egocentric frames define spatial information in relation to the body, whereas allocentric frames use external landmarks (Paillard 1991). What is the role of vision in this spatial processing? Some data show that blind people do not differ from controls in memory for haptically explored spatial layouts, but others reveal specific limitations linked to different ages of onset (Thinus-Blanc and Gaunet 1997). Here, an experiment is reported that compared blind and sighted people on egocentric and allocentric judgments of spatial arrangements, on the basis of locomotor/haptic or visual exploration of actual 3-D objects (dimensions from 42 × 42 cm to 56 × 77 cm). Different degrees of visual experience were tested: blind from birth (congenital), late onset of blindness (adventitious), short-term deprivation (blindfolded sighted) and full vision (sighted). Participants were guided by the experimenter to learn the position of three objects via movement alone (congenital, adventitious and blindfolded) or also with vision (sighted). Afterwards, they had to give egocentric ("What object was closer to you?") and allocentric ("What object was closer to the cube?") judgments. Congenitally blind participants were less accurate than both sighted groups in allocentric judgments. Presumably, congenitally blind people rely mostly on egocentric strategies because the amount of distal information from the environment is limited, whereas adventitiously blind persons can benefit from a residual ability to understand a spatial arrangement as a whole.
A visual sonificated web search clustering engine
Alessio Rugo1, Maria Laura Mele2, Giuseppe Liotta1, Francesco Trotta1, Emilio Di Giacomo1, Simone Borsci3, Stefano Federici2,3,4
1 DIEI, Department of Electronic and Information Engineering, University of Perugia, Perugia, Italy 2 CIRID, Interdisciplinary Research Centre on Disability and Technologies for Autonomy, "Sapienza" University of Rome, Rome, Italy 3 ECoNA, Interuniversity Centre for Research on Cognitive Processing in Natural and Artificial Systems, "Sapienza" University of Rome, Rome, Italy 4 Department of Human and Education Sciences, University of Perugia, Perugia, Italy
This work aimed at expanding the architecture of WhatsOnWeb (WoW; Di Giacomo et al. 2007) following state-of-the-art accessibility and usability criteria. WoW is a visual Web search engine that conveys the indexed dataset using graph-drawing methods on semantically clustered data. In previous studies (Federici et al., under revision; Federici et al. 2008), we found that the top–down representation of the most widespread Web search engines does not take into account the accessibility of information, and this ranking highlights the distance between the quantitative order of Web Popularity (WP) and the qualitative level of accessibility of the retrieved information. Conversely, WoW semantically analyzes the search results and automatically links them in a network of concepts and sub-concepts. All of the retrieved information is presented to the user simultaneously in an interactive visual map, overcoming the gap between the quantitative order and the qualitative level of information. The redesign of WoW was performed following the user-centered design methodology (Hutchins et al. 1985) and in compliance with the WCAG 1.0. In this way, we implemented a sonification algorithm that converts data relations and their spatial coordinates into non-speech sounds. Moreover, considering future developments in BCI, a two-state navigation architecture has been implemented.
With WhatsOnWeb, we aim at providing an autonomous assistive technology tool that allows for independent navigation, based on both an integrated synthesizer for screen reading and a sonification system by which geometrical–spatial information is conveyed. The result is an innovative web search and result-targeting approach that also reduces the gap between quantitative and qualitative result ranking, as seen on major web search platforms.
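A mapping of this kind, from the spatial coordinates of graph nodes to non-speech audio parameters, might look as follows. This is an illustrative sketch of the general idea, not the WhatsOnWeb sonification algorithm itself; the frequency range and the linear panning law are assumptions:

```python
def sonify_node(x, y, f_min=220.0, f_max=880.0):
    """Map normalized map coordinates (x, y in [0, 1]) to a non-speech
    audio cue: vertical position controls pitch on a logarithmic scale,
    horizontal position controls stereo panning (left gain, right gain)."""
    freq = f_min * (f_max / f_min) ** y   # y = 0 -> f_min, y = 1 -> f_max
    pan = (1.0 - x, x)
    return freq, pan
```

For instance, a node at the top-left corner of the map would be rendered as a high-pitched tone fully panned to the left channel.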
Failure of spatial remapping underlies constructional apraxia after right hemisphere stroke Charlotte Russell1,2, Cristiana Deidda2, Paresh Malhotra3,4, Sheila Merola2, Masud Husain4 1 Centre for Cognition and Neuroimaging, Department of Psychology, Brunel University, Uxbridge, Greater London, UK 2 Laboratorio di Neuropsicologia, Fondazione Santa Lucia IRCCS, Rome, Italy 3 Imperial College London, London, UK 4 Institute of Cognitive Neuroscience, UCL, London, UK Constructional apraxia (CA) is revealed through the inability of patients to accurately copy drawings or 3-D constructions. It is common after right parietal stroke, often persisting after initial problems such as visuo-spatial neglect have resolved. We examined whether a key deficit might be a failure to integrate visual information from one fixation to the next; specifically, whether this failure concerns the remapping of spatial locations, since the right parietal lobe is involved in maintaining stable representations of locations across saccades. A group of patients with CA, a group of patients without CA and a group of healthy controls were compared. Participants judged whether a pattern changed in form (non-spatial) or shifted position (spatial) across intervening two-saccade sequences or an equivalent pause. Results revealed CA patients to be specifically
impaired in judging whether patterns had shifted position in the saccade condition. The two-saccade sequence was crucial, as performance was inferior when the first saccade of the sequence was to the right. A second study confirmed that single saccades to the right selectively impair CA patients' perception of location shifts. These data suggest that rightward eye movements result in a loss of spatial information from previous fixations, presumably due to damage to specific parts of the right hemisphere involved in remapping locations across saccades. The importance of these remapping deficits in CA was confirmed by a correlation between performance on the present task and constructional impairment on standard neuropsychological tasks. These studies provide a groundbreaking delineation of the specific mechanisms underlying right-hemisphere CA.
The individual alpha EEG indices and nonverbal creativity in young children E. A. Sapina, O. M. Bazanova State Research Institute of Molecular Biology and Biophysics, Siberian Branch, Russian Academy of Medical Sciences, Moscow, Russia Statement: This research addresses the question of whether creativity develops in children and what its EEG predictors are. Fink et al. (2006, 2008) showed in a number of studies that EEG alpha power changes are observed during creative problem solving. Bazanova and Aftanas (2007) showed that individual EEG alpha activity indices (i.e. posterior alpha peak frequency, alpha-spindle features and eyes-open reactivity) are associated with creative ability in adults. Taking into account previous research showing that individual alpha activity indices rise with age in children, we proposed that creativity scores connected with alpha activity would rise too. The main limitation of creativity evaluation in young children is the lack of psychometric methods. The goal of this study is to search for EEG predictors of nonverbal creativity in ontogenesis. Methods: The Williams creativity test was adapted to be appropriate for children under 5 years old. EEG was recorded in 356 children in four age groups (3–4, 4–5, 5–6 and 6–7 years) of both sexes. Results: The ANOVA showed that the impact of the AGE factor on originality in creative task performance was not significant in girls but significant in boys. Fluency and flexibility in creative task performance were higher in older than in younger children. The individual alpha peak frequency and alpha-band width, as predictors of fluency and flexibility respectively, also increased with age. Conclusion: The higher individual alpha activity correlated with fluency and flexibility found in older children provides evidence of the ontogenetic development of a specific neural network related to these creative processes.
Trajectory reconstruction with human behavioral route choice heuristics Falko Schmid SFB/TR 8 Spatial Cognition, University of Bremen, Bremen, Germany When trajectories, i.e., temporally ordered samples from positioning sensors such as GPS, are recorded under everyday conditions, they are usually highly fragmented (e.g. owing to weak signal reception). When we require coherent trajectory data, we have to find means to plausibly reconstruct the missing parts. Research in spatial cognition suggests that humans select their routes according to cognitively beneficial route choice heuristics (e.g. Golledge 1995). However, these findings have been produced in paper-and-pencil tasks, artificial environments, or very limited street networks (e.g.
parts of university campuses). So far it is unknown whether the identified strategies scale and hold for urban route choice across complex street networks. To answer this question, we developed a trajectory reconstruction framework based on human behavioral route choice heuristics. We recorded a set of everyday trajectories across an irregular urban street network and generated data gaps of up to 95% within the original trajectories. By means of an algorithmic reformulation of prominent route choice heuristics, and taking into account the human-shortest-path phenomenon (e.g. Wiener et al. 2008), we reconstructed the gaps computationally. In general, we identified the Fewest-Turns heuristic as the most accurate explanation for the path selection of our subjects. However, if the trajectory was degraded to a high degree or followed a specific layout, no heuristic could restore the original course completely. We argue that existing route choice heuristics must be enriched with local optimization criteria to explain route selection across complex street networks.
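A Fewest-Turns heuristic with a shortest-path tie-breaker can be sketched as a scoring rule over candidate gap-bridging paths. The turn threshold, data structures and function names below are illustrative assumptions, not the authors' reconstruction framework:

```python
import math


def count_turns(path, nodes, threshold_deg=30.0):
    """Number of direction changes above `threshold_deg` along a path
    of node ids, given a node -> (x, y) coordinate mapping."""
    turns = 0
    for a, b, c in zip(path, path[1:], path[2:]):
        h1 = math.atan2(nodes[b][1] - nodes[a][1], nodes[b][0] - nodes[a][0])
        h2 = math.atan2(nodes[c][1] - nodes[b][1], nodes[c][0] - nodes[b][0])
        diff = abs(math.degrees(h2 - h1))
        if min(diff, 360.0 - diff) > threshold_deg:
            turns += 1
    return turns


def reconstruct(candidates, nodes):
    """Pick the candidate path bridging a trajectory gap with the fewest
    turns, breaking ties by path length (the human-shortest-path
    tendency as a secondary criterion)."""
    def length(path):
        return sum(math.dist(nodes[a], nodes[b])
                   for a, b in zip(path, path[1:]))
    return min(candidates, key=lambda p: (count_turns(p, nodes), length(p)))
```

For example, given a straight candidate A-B-C and a dog-leg candidate A-D-C across the same gap, the straight path wins on both criteria.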
The effects of endogenous and exogenous spatial cueing in a sustained attention task
Mara Sebastiani, Maria Casagrande, Lisa Maccari, Antonino Raffone Department of Psychology, Sapienza University of Rome, Rome, Italy
The decrease in sustained attention performance over time has been ascribed to the involvement of top-down attentional processes. However, to our knowledge, this hypothesis has never been experimentally tested. In order to differentiate the respective involvement of endogenous and exogenous spatial attention processes in the decrement of sustained attention performance over time, we devised a new experimental paradigm that combines the X-Type Continuous Performance Task (Rosvold et al. 1956) with the Spatial Cueing paradigm (Posner 1980). This task involves both attentional orienting and sustained attention processes. Two experiments were then conducted, one with exogenous cueing and the other with endogenous cueing. We expected performance in the endogenous condition to be more vulnerable to deterioration. Indeed, we observed an RT increase over time in the endogenous cueing task, but no such effect with exogenous cueing. The observed effects are consistent with endogenous control of sustained attention.
Mechanisms of large-scale environmental search: probability cueing depends on the relationship between landmarks and target distribution
Alastair D. Smith1, Felicity Wallace2, Bruce Hood2, Iain D. Gilchrist2 1 School of Psychology, University of Nottingham, Nottingham, UK 2 Department of Experimental Psychology, University of Bristol, Bristol, UK
Finding an object in our environment is an important human ability and also represents a critical component of foraging behaviour. One type of information that aids efficient search is the likelihood of the object being in one location rather than another. Here, we used a novel large-scale search paradigm to investigate the conditions under which individuals respond to this likelihood. Participants searched an array of locations, on the floor of a blank chamber, for a hidden target by pressing switches at each location. The probability of target location was manipulated so that the target appeared on one side of the display (left/right) on 80% of trials. Participants started each search at one of two ends of the display, so that the probability was fixed in allocentric coordinates but not in egocentric coordinates. In Experiment 1, we found that participants did not learn the display statistics when there were no landmarks in the environment. In Experiment 2, a single landmark wall was added, either adjacent to or opposite the cued side; in this case, participants showed a strong learning effect. In Experiment 3, the landmark was positioned orthogonally to the probability axis, and participants did not learn the target distribution. These findings suggest that the presence of a landmark assists participants in learning environmental statistics. However, the relationship between the landmark and the target distribution is integral: the landmark was useful only when it was positioned along the same axis as the probability distribution.
A model to characterise re-orientation strategies in Natural Orifice Translumenal Endoscopic Surgery (NOTES)
Mikael H. Sodergren1, Felipe Orihuela-Espina1,2, James Clark1, Ara Darzi1, Guang-Zhong Yang2 1 Department of Biosurgery and Surgical Technology, Imperial College London, St. Mary's Hospital, London, UK 2 Institute of Biomedical Engineering, Imperial College London, London, UK
NOTES represents a new paradigm in minimally invasive surgery whereby an operator uses a flexible instrument to navigate within a large anatomical space, similar to navigation in desktop virtual environments. Disorientation during NOTES is a significant problem because, as in laparoscopic surgery, the operator uses a 2D camera-monitor interface for visualisation, but in addition the positional cue offered by the external component of the camera is absent and camera navigation is more cumbersome. When the operator is disorientated, it is important to be able to re-orientate efficiently to minimise danger to the patient during surgery. Our hypothesis was that when surgeons become disorientated there exist discrete patterns in psychophysical behaviour which are associated with effective re-orientation, and that these patterns are recognisable and describable. In this experiment, we profile visual re-orientation behaviour in 18 subjects using eye-tracker data in a NOTES model comprising selective image manipulation of everyday objects in a box. We characterise effective behaviour using a fixation-sequence similarity-based hidden Markov model. We show that the output of this algorithm reliably differentiates visual behavioural sequences and that there are specific behavioural patterns and strategies associated with successful re-orientation in this model. An effective re-orientation strategy appears to rely on identifying and focusing heavily on a central object within the scene and judging the position of peripheral objects relative to it, suggesting a systematic integration of both geometric and featural information. Using selective featural cues for re-orientation was associated with less effective re-orientation.
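The fixation-sequence hidden Markov model can be illustrated with a minimal sketch. Everything below is hypothetical: the two hidden states (fixating a central anchor object vs. scanning peripheral objects), the coarse fixation labels 'C'/'P', and all probabilities are invented for illustration and are not taken from the study. The standard forward algorithm then scores how well a recorded fixation sequence fits the model; a sequence that keeps returning to a central anchor scores higher under this model than one that never anchors.

```python
import math

def _log_sum(xs):
    """Numerically stable log(sum(exp(x) for x in xs))."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def forward_log_likelihood(obs, states, start_p, trans_p, emit_p):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm in log space."""
    # Initialise with the first observation.
    log_alpha = {s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]])
                 for s in states}
    for o in obs[1:]:
        log_alpha = {
            s: math.log(emit_p[s][o]) + _log_sum(
                [log_alpha[r] + math.log(trans_p[r][s]) for r in states])
            for s in states
        }
    return _log_sum(list(log_alpha.values()))

# Hypothetical two-state model: effective re-orientation alternates between
# fixating a central anchor object ('C') and judging peripheral objects ('P').
states = ('anchor', 'scan')
start_p = {'anchor': 0.7, 'scan': 0.3}
trans_p = {'anchor': {'anchor': 0.6, 'scan': 0.4},
           'scan':   {'anchor': 0.5, 'scan': 0.5}}
emit_p = {'anchor': {'C': 0.9, 'P': 0.1},
          'scan':   {'C': 0.2, 'P': 0.8}}

effective = ['C', 'C', 'P', 'C', 'P', 'C']   # keeps returning to the anchor
drifting  = ['P', 'P', 'P', 'P', 'P', 'P']   # never anchors

ll_eff = forward_log_likelihood(effective, states, start_p, trans_p, emit_p)
ll_drift = forward_log_likelihood(drifting, states, start_p, trans_p, emit_p)
```

In a full pipeline one would fit separate models to sequences from effective and ineffective operators and classify a new sequence by comparing its log-likelihood under each; here a single model merely scores two contrasting sequences.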
Developing spatial skills for adolescents
Sheryl Sorby Michigan Technological University, Houghton, USA
Spatial skills are critical to success in most scientific and technical fields. The author has developed software and a workbook focused on helping students improve their 3-D spatial skills and has taught a course on spatial skills development for first-year engineering students since 1993. In recent studies she tested the training materials with middle school students in the US (age approximately 13). Middle school students were trained in three separate years during a required Integrated Technology course. In the first year, the materials were pilot-tested with a group of honors students (primarily girls). In the second and third years, the materials were tested in regular Integrated Technology classes, with students in the third year spending more time on task with selected spatial skills problems compared to the previous years. Performance in follow-on math and science courses was also tracked for the students in the pilot study group. In each year, the students demonstrated significant gains in spatial abilities. In the third year, when students spent more time on spatial tasks, the gains for the girls in the class were significantly higher than they had been the previous year, narrowing the gender gap somewhat. Students in the pilot study group attempted more advanced mathematics and science courses than did a comparable group of students who did not undergo spatial skills training.
Surgical wayfinding and navigation processes in the human body
Thomas Stüdeli Faculty of Industrial Design Engineering, Delft University of Technology, Delft, The Netherlands
During minimally invasive therapies (MITs), surgeons and interventionists rely on medical images as cartographic overviews of the working and navigation area in the patient's body. MITs are reported to place high demands on the surgeon's fine motor skills as well as on his cognitive capacities. Therefore, considerable effort is currently being devoted to the development of tools that aim to support these medical professionals in their wayfinding and navigation processes. In a series of observations and interviews, the information needs of surgeons and interventionists were investigated. Results of these cognitive task analyses show that navigation and wayfinding processes in the patient's body rely heavily on spatial mental models and are strongly influenced by risk assessments that control actions and secure the safety of the patient. Medical wayfinders sometimes seem to live in two worlds: the world in which the necessary surgical actions and navigation processes take place, and the world in which the patient's health is kept under surveillance and safety decisions are made. A cognitive model for safe, or prudent, navigation processes is introduced. It combines the classical navigation process, with its focus on spatial and orientation aspects, with an iterative problem-solving and quality-control process focused on aspects of safety and accuracy. The aim of the proposed model is to support the design of surgical decision support and navigation systems, though the model might also be used for other navigation environments and contexts with special demands on accuracy or prudence.
Contrast sets in spatial and temporal language
Thora Tenbrink, Christian Freksa SFB/TR8 Spatial Cognition, University of Bremen, Bremen, Germany
One central aspect of interpreting the meaning of utterances lies in identifying those aspects that the utterance contrasts with. Each term is part of a (conceptually based semantic) network of options that is generally available; contextual (pragmatic and situational) factors then delimit the specific options and their associated interpretations. The precise choice of words reflects the speaker's concept of the situation as well as their assessments of relevance in systematic ways. This concerns, for instance, the chosen level of granularity in terms of semantic hierarchies, choices from a set of semantically similar items (as with a thesaurus), as well as the relevant aspects of the encyclopedic lexical entries. Our aim in this paper is to take a novel systemic perspective on spatial and temporal linguistic expressions. We re-analyze insights from various sources in order to specify the contrast set that determines the interpretation of spatiotemporal terms according to context. In particular, we address the potential impact of linguistic context and (prosodic) focus, the current domain and thematic context, the situational context, and conceptual (hierarchical, functional, and causal) relations. All of these areas contribute in different ways to the identification of a relevant contrast set. The resulting comprehensive account is useful for various purposes, including text and discourse analysis, interpreting and modelling spatiotemporal thought as represented in language, natural language engineering, and specification of the semantics–pragmatics interface for a subset of linguistic terms that reflect humans' representations of basic domains of thought (space and time).
Changes in spatial anxiety when learning a new environment
Valeria Tierno1,2, Stijn Massar1, Albert Postma1 1 Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands 2 Department of Psychology, University of Rome ‘‘La Sapienza’’, Rome, Italy
Spatial anxiety (e.g. anxiety when finding one's way through an unfamiliar environment) may lead to feelings of severe distress and can potentially hamper navigation. We studied how spatial anxiety develops over multiple encounters with a new environment and how it relates to memory performance and electrodermal measures of arousal. Twenty-two participants first filled in an anxiety questionnaire. Next, they were exposed to movies of routes in a VR environment, and subsequently their memory for various aspects of these routes was tested. This procedure was repeated over three sessions, with 1 week between subsequent sessions. Finally, the spatial anxiety questionnaire was filled in a second time. Correlations between galvanic skin response measurements taken during exposure to the VR routes and the various memory tests were relatively weak, though a positive correlation between anxiety and GSR amplitude while watching the VR routes was found. Interestingly, higher anxiety scores were accompanied by higher error rates on the memory tests. In general, wayfinding performance improved over the three VR sessions. We will discuss these results in terms of how our cognitive maps build up with experience and how this is accompanied by changes in experienced spatial anxiety.
Cognitive mapping analysis and regional identity
Renato Troffa1, Marina Mura1, Ferdinando Fornara2, Pierluigi Caddeo1 1 Department of Economic and Social Research, DRES, University of Cagliari, Cagliari, Italy 2 Department of Psychology, University of Cagliari, Cagliari, Italy
The term cognitive map refers to an internal representation of the spatial models of the environment (Tolman 1966; Gould and White 1974). The cognitive mapping process, in turn, can be defined as a mental construction used to understand the environment by remembering and processing spatial information (Kitchin and Freundschuh 2000). The wide body of research on cognitive maps provides a useful instrument based on the graphical representation of space: the sketch map. This technique is suitable for understanding a community's awareness of its own environment and for approaching the ‘‘interior mind landscape’’ (di Castri and Balaji 2002). In the literature, sketch maps are often used to detect those psychosocial dimensions (such as social and place identity) which express the transactional pattern between people and their socio-physical environment. The present study aims to investigate different representations of the environment through the analysis of sketch maps of different geographical regions. We expected that the higher the regional identification, the more detailed the sketch maps. 200 inhabitants of two Italian regions participated in the research. They filled in a questionnaire including a measure of local identification and drew sketch maps of the regions involved. The maps were evaluated both qualitatively and quantitatively by two independent judges on the basis of a specific coding grid developed by design experts (Masala 2000). The mapping data are currently under analysis. The overall results and the importance of the drawn elements will be discussed in the light of social and place identity theories.
Attentional interference facilitates skilled anticipatory action
Stefano Valenzi1,2, Marta Olivetti-Belardinelli1, Cees van Leeuwen2 1 Sapienza University of Rome, Rome, Italy 2 Laboratory for Perceptual Dynamics, RIKEN Brain Science Institute, Tokyo, Japan
We investigated prospective motor control in a task involving interception of a moving target. In practiced subjects, excessive focused attention may become a hindrance in sensorimotor tasks (Beilock et al. 2002). We studied whether such interference would enhance the timing of movement initiation in an interception task. Participants pointed to a touch screen or used a mouse button to intercept a target moving across a computer screen when it reached an interception zone. We asked them to reach the target in two modalities: with a fast arm movement (i.e., in less than 500 ms) or by shooting a missile. A target size based on individual skill was used. The moving target took at least 2 s to reach the interception area, which means that participants had ample time to evaluate the velocity of the target and to determine the onset and velocity of their own reaching movement prior to its initiation (Lee and Port 1997). We recorded and evaluated the temporal distance from the interception point in conditions with and without an Attentional Interference Task and with explicit (Rich) or implicit (Sparse) performance score rules. Participants completed three sessions. Our results show that interception performance improved with repetition. No differences were observed between the Hand and Missile interception modalities. Maximal performance was reached in the last session under conditions of attentional interference. This finding supports the hypothesis that excessive focused attention becomes a hindrance for prospective action (Beilock et al. 2002). Moreover, our results show that attentional interference facilitates the skilled execution of the action.
Spatial processes in mobile robot tele-operation Alberto Valero-Gomez1, Chiara Saracini2, Fabiano Botta2, Gabriele Randelli1 1 Department of Computer and Systems Sciences, Sapienza University of Rome, Rome, Italy 2 Department of Psychology, Sapienza University of Rome, Rome, Italy
Intra-scenario operator mobility is claimed to be a strong advantage when acquiring situational awareness (SA) in robot tele-operation. In particular, human–robot awareness involves location awareness, a map-based concept that allows the user to localize the robot in the scenario, and surroundings awareness, concerning obstacle avoidance, which allows the recognition of the immediate surroundings of the robot. The spatial environment is processed through two different types of knowledge: survey (aerial, map-like view) and route (first-person perspective). Location awareness is related to survey knowledge, while surroundings awareness relates to route knowledge. This paper presents results comparing the usefulness of a PDA and a desktop interface in different space types, within navigation and exploration tasks involving a human operator driving a robot. In the PDA condition, the operator had three different visibility conditions: total, partial, and no visibility. Our aim was to determine the optimal way of distributing the control of a robot between a mobile operator using a hand-held device and a stationary operator using a desktop computer, taking into account the features of the navigated environment and the visibility. The results showed that (a) in the navigation task there is no difference between the two interfaces in the partial and no visibility conditions, but (b) in the total visibility condition, the PDA interface in general showed better performance than the desktop interface, and (c) in the exploration task, subjects using the desktop interface covered a wider area in the same time interval than subjects using the PDA device.
The neural correlates of spatial reference frame processing
Michal Vavrečka Department of Cybernetics, FEL CVUT, Prague, BioDat Research Group, Gerstner Laboratory, Prague, Czech Republic
The research described here focuses on EEG correlates of navigation in a virtual tunnel task. Subjects tend to represent the tunnel traverse by adopting either an egocentric or an allocentric reference frame (the two differ in the origin of the coordinate system). We explore features in the EEG signal that discriminate the employment of the particular reference frames. Features extracted from the EEG signal serve as input for clustering algorithms. We adopted clustering methods, including hierarchical clustering and self-organizing maps, to select the features that best discriminate between the two reference frames. We identified different activity in Brodmann area 7, in agreement with a similar study (Gramann et al. 2006), but we also detected other brain areas involved in this task. There are differences for navigation in the horizontal and vertical planes as well.
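The clustering step of such a pipeline can be sketched in a few lines. The feature vectors below are invented toy values (e.g. band power at two electrode sites) rather than the study's actual EEG features, and single-linkage agglomerative clustering is just one of the hierarchical methods the abstract mentions:

```python
import math

def agglomerative(points, k):
    """Naive single-linkage hierarchical clustering, merging the two closest
    clusters until only k clusters remain. Returns lists of point indices."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Single linkage: distance between the closest pair of members.
                d = min(math.dist(points[p], points[q])
                        for p in clusters[i] for q in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters[j]
        del clusters[j]
    return clusters

# Hypothetical per-trial feature vectors; the first three mimic one reference
# frame's EEG signature, the last three the other's.
egocentric  = [(1.0, 0.2), (1.1, 0.3), (0.9, 0.25)]
allocentric = [(0.2, 1.0), (0.3, 1.1), (0.25, 0.95)]
features = egocentric + allocentric

clusters = agglomerative(features, k=2)
```

With well-separated features, the two recovered clusters coincide with the two reference frames; feature sets can then be ranked by how cleanly they induce this separation.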
Influences of map view and environmental conditions on mental maps of map users
Markus Wunderlich Department of Geography, University of Bonn, Bonn, Germany
As navigation devices have become popular mobile devices (Raper et al. 2008), their map displays and environmental influences need to become a focus of cartographic research, as do the needs of the map user (Hockenberry et al. 2006). This paper presents a study comprising several empirical tests with different map views and different environments. In one test, the 113 participants had to follow a predefined route in the computer-simulated town of Heidelberg with the help of the maps. In another test, only the maps were shown to the participants. Differences in map view and environmental conditions within the study allow us to investigate possible
influences of a map view and of the environment on the mental map of a map user. The screen size of mobile devices is an important limitation on cartographic visualization for mobile maps (Dillemuth 2007). As the tested perspective map view shows a greater area at the same map size, the study hypothesizes an advantage of perspective maps in building a more precise mental map of a previously unknown region. A further hypothesis is that environmental influences, such as a confusing road arrangement or the simulation used, will affect the mental maps via increased cognitive workload (for the influence of virtual environments on mental maps, see, e.g., Gyselinck et al. 2006). A first analysis of the study results shows no significant differences in the mental maps between the two types of map view, but seems to indicate an influence of the environment.
Ability of healthy standing subjects in orienting after imposed rotations
Giulia Zanelli1,2, Maurizio Petrarca3, Paolo Cappa1, Alain Berthoz2 1 Department of Mechanics and Aeronautics, ‘‘Sapienza’’ University of Rome, Rome, Italy 2 Laboratoire de Physiologie de la Perception et de l'Action, Collège de France, Paris, France 3 Neuro-Rehabilitation Division, Children's Hospital ‘‘Bambino Gesù’’ IRCCS, Rome, Italy
Many studies have evaluated the ability of sitting and standing subjects to orient themselves. With imposed CCW rotations, results showed overestimation, even though a bias could affect them; when both CW and CCW rotations were imposed, the two directions were pooled. The aim of this study was to evaluate the perception and reproduction of imposed rotations in standing subjects. Ten subjects stood on a motorized platform rotating around the yaw axis. Movements were recorded using eight cameras (VICON MX). Blindfolded subjects were asked to estimate the imposed angles and to return to their starting orientation. The input was a cosinusoidal velocity profile with amplitudes of ±45°, ±90°, ±135°, ±180° and ±360°; two conditions were planned: peak velocity dependent on amplitude with constant duration (4 s), and constant peak velocity (57°/s) with duration dependent on amplitude. The first condition was performed three times; the second condition served for control purposes. The perception angle was estimated by the subjects; the reproduction angle was computed as the difference between the orientation at the onset and at the offset of the active movement. For perception, no significant difference was noted between CW and CCW values. In the first condition, each angle was significantly different from the others, while in the second there were significant differences only between 360° and the 45°–180° group. For reproduction, there was no significant difference between CW and CCW values, while significant differences were found between each angle and the others in both conditions, except between −135° and 180° in the second condition. Moreover, when the input ranged from −180° to 180°, perception was similar to reproduction and subjects overestimated it; instead, when the input was ±360°, subjects perceived the rotation well and underestimated it in reproduction. Healthy subjects show a good orienting ability in the whole space.
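The abstract does not give the equation of the cosinusoidal velocity profile; one common raised-cosine form, v(t) = (2A/T)·sin²(πt/T), integrates to the amplitude A over the duration T, which makes the two experimental conditions easy to compute (peak velocity 2A/T). The sketch below assumes that form:

```python
import math

def cosinusoidal_velocity(t, amplitude, duration):
    """Assumed raised-cosine velocity profile: v(t) = (2A/T) * sin^2(pi*t/T).
    This exact form is an assumption; the abstract gives no equation."""
    peak = 2.0 * amplitude / duration
    return peak * math.sin(math.pi * t / duration) ** 2

def rotation_angle(amplitude, duration, steps=10000):
    """Numerically integrate the velocity profile over the trial;
    under the assumed form this recovers the amplitude."""
    dt = duration / steps
    return sum(cosinusoidal_velocity((i + 0.5) * dt, amplitude, duration) * dt
               for i in range(steps))

amplitudes_deg = (45, 90, 135, 180, 360)

# Condition 1: duration fixed at 4 s, so peak velocity grows with amplitude.
peaks_cond1 = {a: 2.0 * a / 4.0 for a in amplitudes_deg}
# Condition 2: peak velocity fixed at 57 deg/s, so duration grows with amplitude.
durations_cond2 = {a: 2.0 * a / 57.0 for a in amplitudes_deg}
```

Under this assumed form, the constant-duration condition yields peak velocities from 22.5°/s (±45°) to 180°/s (±360°), and the constant-peak condition yields durations from about 1.6 s (±45°) to about 12.6 s (±360°).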