
Behavior/Performance and Human Factors

Stephen Ellis and Lawrence Palinkas, Co-Chairs
Platform Presentations, Vine Room, January 17 and 18
File# | Title | Authors
001 | Crew Member and Crew-Ground Interactions During NASA/Mir | N. Kanas, V. Salnitskiy, E. M. Grund, D. S. Weiss, V. Gushin, O. Kozerenko, A. Sled, C. R. Marmar
002 | Psychiatric Morbidity After Extended Isolation and Confinement in an Extreme Environment: The Antarctic-Space Analog | L. Palinkas, F. Glogower, M. Dembert, K. Hansen, R. Smullen
003 | Accessing Cognitive State from Physiological Data | T.-P. Jung, S. Makeig, D. Stillwell, D. Harm
004 | Towards Monitoring Cognitive Brain Function During Flight Missions | A. Gevins, M. Smith, L. McEvoy
005 | Protecting Scientific Integrity Through Disclosure of Conflicts of Interest | B. Brody, S. V. McCrary, C. Anderson, L. McCullough, N. Wray
006 | Meaningful Measurement of Visual Quality for Digital Imaging Applications | A. B. Watson, L. Kreslake, C. Ramirez
007 | Computer Modeling of Real-Time Dynamic Lighting | J. Maida, J. Pace, J. Novak
008 | Assessing and Optimizing the Perceptual Utility of Immersive Display Media | M. K. Kaiser, D. R. Proffitt
009 | Advanced Displays and Controls for 6 DOF Orientation and Navigation in Virtual Microgravity | C. M. Oman, A. M. Liu, J. Marquez, W. B. Sachtler, W. E. Hutchison, A. C. Beall, A. Natapoff
010 | Haptic Interfaces to Augment Human Machine Interactions in Space Activities | A. Raj, L. Roetzer, R. Cholewiak, S. Kass, A. Rupert
011 | Kinesthetic Compensation for Sensorimotor Rearrangements | S. R. Ellis, B. D. Adelstein, R. B. Welch
012 | Reduced Uncertainty Accounts for the Enhanced Sensitivity to Motion Trajectories | P. Verghese, S. P. McKee, D. Vreven
013 | Computer Vision-Based Quantitative Assessment of Astronaut Activities | D. Metaxas, D. Newman
014 | Technology Development for the Microgravity Investigation of Crew Reactions in 0-Gravity (Micro-G) | S. Lody, D. Newman, G. Baroni, G. Ferigno, A. Pedotti
015 | Predicting Strength and Fatigue for Suited and Unsuited Conditions from Empirical Data | J. Maida, L. J. Gonzalez, S. Rajulu
016 | A Shared Augmented Reality System for Communication Across Environments | W. P. Stadermann
017 | Support of Crew Problem-Solving and Performance with Augmented Reality | A. E. Majoros, U. Neumann
018 | Augmented Reality for Tele-Operations | L. Zamorano, A. Pandya, M. Siadat, J. Gong, Q. Li, J. Maida, I. Kakadiaris
019 | Teleoperation of Life-Science Experiments with Telecommunication Time Delay | V. Shastri, D. Nitzan, J. DeCurtins, Y. Gorfu, P. Garcia, L. Mortensen, L. Hettinger
020 | Human Performance During Simulated Space Operations Under Varied Levels of System Autonomy | B. Lorenz, F. Di Nocera, R. Parasuraman
021 | Knowledge Sharing to Support Distributed Mission Control Communications | B. Caldwell
022 | Real-Time Embodied Agents for Multi-Person Task Simulation | N. Badler, M. Palmer, A. Joshi
023 | Event Representations: Communicating Change Information to Support Human-Computer and Human-Human Cooperation | D. D. Woods, K. Christoffersen, R. Chow
024 | Framework Assessing Notorious Contributing Influences for Error (FRANCIE): Taxonomy Development, User Procedures, and Software Implementation | L. N. Haney
025 | Effectiveness of Principal Investigator-in-a-Box as an Astronaut Advisor for a Sleep Experiment | A. Atamer, M. Delaney, L. Young
026 | WARP Communications System as Tool for Situational Information Display | A. Devereaux, D. Carr, T. Rathjen
027 | Noise Reduction Headsets for Otoacoustic Hearing Assessment of Space Station Crews | R. Kline-Schoder, J. Buckey, F. Musiek
028P | Optical Computer Recognition of Behavioral Stress | D. Dinges, N. Rogers
029P | Cultural and Personality Determinants of Performance and Error Management | R. Helmreich, D. Musson
030P | Malleable Human Interfaces | D. Russo, J. Whiteley
031P | Integrated Crew Performance Assessment and Training | R. E. Schlegel, R. L. Shehab, K. Gilliland
032P | Individuals and Cultures in Social Isolation | J. Wood

BEHAVIOR/PERFORMANCE AND HUMAN FACTORS


Steven R. Ellis, Ph.D. Lawrence A. Palinkas, Ph.D.

INTRODUCTION
The posters and presentations included in this discipline area represent two distinct intellectual traditions. Behavior and performance (B&P) has traditionally been concerned with the development of empirically based scientific principles that identify the environmental, individual, group, and organizational requirements for long-term occupancy of space by humans. Space human factors engineering (SHFE) has traditionally focused on the role of humans in complex systems, the design of equipment and facilities for human use, and the development of environments for comfort and safety.

Perhaps the underlying theme of the discipline presentations as a whole was the need to view the critical path roadmap not as a hierarchical arrangement of discrete and independent issues, arrayed in tiers that determine their importance to the overall endeavor of long-duration space flight, but rather as a continuum framed by the concept of prevention. This continuum exists in a number of different but interrelated dimensions. First, behavior and performance has traditionally been concerned with the prevention of performance decrements, in much the same way that physiologists have been concerned with the prevention of bone loss or muscle atrophy in microgravity, while space human factors engineering has been concerned with performance enhancement (e.g., using wireless communications technology to improve communication among crewmembers as well as between the crew and ground control). However, performance enhancement and prevention of performance decrement are inextricably linked along the same continuum of performance research, such that one inevitably involves the other. This axiom is as true in the other bioastronautics disciplines as it is in BP/HF. Second, BP/HF research illustrates that extreme environments have a way of transforming the trivial into the catastrophic.
A story that has become embodied in the legend of polar exploration concerns two members of a winter-over crew at Russia's Vostok Station in the Antarctic. For months, the two played chess as a way of passing the time when they were not engaged in work. One player had a habit of taking his time when making his next move. This irritated his fellow crewmember, but nothing was ever said other than an occasional friendly jibe. One day, in late winter, the irritated crewmember had had enough, grabbed an ice axe, and attacked his offending opponent, mortally wounding him in the process. The story illustrates rather dramatically that whether it is growing irritation with the way a fellow crewmember eats his food or how loudly she laughs, or whether it is a poorly designed workstation, poor lighting, or high background noise, seemingly minor things can become magnified into potential showstoppers in isolated environments. The goal of BP/HF research is to prevent this from occurring in a cost-effective manner. Hence, addressing Tier 3 and 4 issues is important to preventing Tier 1 and 2 issues.

SUMMARY OF PRESENTATIONS
The papers and presentations at this workshop fall within three categories: 1) macro-level issues related to behavior and performance; 2) micro-level issues related to the human-system interface; and 3) combined micro/macro-level issues related to the application of human factors engineering solutions to behavior and performance problems.

From NASA Ames Research Center, Human Information Processing Research Branch, Moffett Field, CA (S. Ellis) and University of California San Diego, Department of Family and Preventive Medicine, La Jolla, CA (L. Palinkas).

Behavior and performance


Presentations and posters at this workshop addressed three levels of behavior and performance: individual, interpersonal, and organizational. Individual-level issues were examined from both a psychosocial and a neurobehavioral perspective. The three presentations addressing psychosocial issues all concerned the challenge of screening and selecting personnel for long-duration missions in isolated and confined extreme environments. The presentation by Palinkas and colleagues examined the risk of psychiatric morbidity associated with prolonged isolation and confinement in an extreme environment, using data collected in an analogue environment, the Antarctic. The study demonstrated that an estimated 5% of personnel on long-duration missions are likely to experience a clinically significant psychiatric disturbance, primarily mood, adjustment, and circadian rhythm disorders. The implication of these findings is that select-out procedures alone are insufficient to prevent the occurrence of clinically significant psychiatric disorders. The two projects by Wood and colleagues and Helmreich and colleagues will collect data from members of the Australian National Antarctic Research Expeditions (ANARE) and from American astronauts to assess the association between personality characteristics and measures of performance, including task performance, social compatibility, and emotional stability. The same personality characteristics will be assessed in both groups to test the cross-analog validity of using personality characteristics to select in personnel best suited for long-duration missions in isolation and confinement.

Four projects addressed the need for valid but non-intrusive techniques for monitoring neurobehavioral performance. The papers by Jung and Gevins were concerned with the assessment of cognitive performance during spaceflight using technology that has been tested and validated in ground-based studies.
Jung and colleagues reported on their efforts to accurately estimate shifts in an operator's cognitive state in a visual tracking task by monitoring changes in EEG power spectra. They also summarized their work on the development of an efficient, video-based eye-tracking system that enables them to non-intrusively extract blink activity, which is highly correlated with performance on a compensatory tracking task. Gevins' work focused on the use of electroencephalogram recordings to monitor and measure working memory and other indicators of cognitive ability. He noted that such techniques provide information not available from conventional psychometric tests and carry low cultural bias. A new study to be conducted by Dinges and colleagues was summarized in a poster presentation. The goal of the project is to develop and test an optically based computer algorithm to effectively detect emotional distress, neurocognitive degradation, and neuroendocrine responses to behavioral stressors. Of particular concern in this project is documenting the validity of this approach across age, gender, and ethnicity. Schlegel et al. have taken this monitoring effort one step further in their program to develop and validate a methodology for self-assessment of cognitive and sensorimotor state that could be integrated with prescriptions for in-flight training and countermeasures. This new research effort builds upon their previous work developing a Performance Assessment Work Station (PAWS) for use in monitoring the cognitive performance of crewmembers in-flight.

At the interpersonal level, Kanas and colleagues examined interpersonal relations within crews on the Mir Space Station and between astronauts/cosmonauts and ground control personnel. The study noted the displacement of unpleasant emotions from crew members to mission control personnel, as well as displacement of tension from mission control to management.
Important cultural differences in the sources of tension were also noted. A new study by Helmreich and colleagues will develop measures to assess team coordination and communication during spaceflight simulation, including assessments of threat and error management. These data are to be collected through interviews with astronauts and other subject-matter experts in conjunction with structured observation of space operations training. Wood and colleagues will be examining the impact of group adaptation on individual adaptation. Weekly measures of several neuropeptides and other health outcomes will be compared with group characteristics such as leader traits, cultural variation, and group tensions.

At the organizational level, Brody reported on the failure of a substantial number of federal agencies, research institutions, and scientific journals to establish and implement a policy regarding potential conflicts of interest when working with human subjects. The results suggest that more external accountability is required without necessarily imposing excessively rigid rules. This research has organizational relevance to all disciplines involved in bioastronautics research, not merely to behavior and performance or space human factors. Helmreich and colleagues will also be addressing perceived problem areas in the multi-organizational and multi-cultural environment of international space operations from the perspective of ground and flight personnel participating in the ISS. This research builds upon previous studies of the effects of national, organizational, and professional culture on human performance in commercial aircrews, medical teams, and Antarctic research stations, and takes as its starting point a model of national culture developed by the Principal Investigator.

These presentations also point the way to the development of effective countermeasures to enhance performance and prevent performance degradation associated with long-duration spaceflight. These countermeasures include select-in psychological screening techniques to supplement existing select-out procedures (Palinkas, Helmreich, Wood), pre-flight training in individual coping (Palinkas) and group living (Kanas, Helmreich, Wood), and in-flight monitoring of psychosocial and neurobehavioral measures of performance (Palinkas, Kanas, Jung, Gevins, Dinges, Schlegel). They also include institutional, administrative, and organizational changes to more effectively ensure ethical as well as valid research on astronaut personnel and other human subjects (Brody), and the development of an assessment tool for identifying cultural differences in safety-relevant attitudes for use with ISS personnel (Helmreich).
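The EEG-based monitoring approaches described above (Jung, Gevins) rest on estimating power in the classical frequency bands and tracking how band ratios shift with cognitive state. The following numpy sketch illustrates only that core computation; the sampling rate, band edges, and the (theta + alpha)/beta index are common conventions chosen for illustration, not the investigators' actual pipelines.

```python
import numpy as np

FS = 256  # sampling rate in Hz (assumed for illustration)

def band_power(signal, fs, lo, hi):
    """Mean spectral power in the [lo, hi] Hz band, via an FFT periodogram."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def drowsiness_index(eeg, fs=FS):
    """Crude cognitive-state proxy: rising theta/alpha power relative to
    beta is commonly associated with lowered alertness."""
    theta = band_power(eeg, fs, 4, 7)
    alpha = band_power(eeg, fs, 8, 12)
    beta = band_power(eeg, fs, 13, 30)
    return (theta + alpha) / beta

# Synthetic example: a strong 10 Hz (alpha-band) rhythm plus noise
t = np.arange(0, 30, 1.0 / FS)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)
print(drowsiness_index(eeg))  # alpha-dominated signal yields a high index
```

On real multichannel EEG this computation would run per channel over short sliding windows, with the resulting indices fed to whatever model maps spectral features to performance.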

Space Human Factors Issues


Space human factors primarily involves the study of the interaction of individuals with physical systems. It was classically concerned with physical interaction but may also extend to the logical and cognitive aspects of user-system interactions. The papers summarized in this section deal entirely with the physical aspects of interaction. The general thrust of the research is to better understand users' information processing capabilities so as to improve their physical interaction with the displays and controls relevant to human performance in spacecraft and aircraft and in the mission control centers that oversee their operation. The products of the research may be seen as providing a kind of impedance match between the system and its user, a match currently prevented by the lack of precise engineering knowledge of the relevant user characteristics. The 16 reports summarized in this section do not describe work designed to answer a single overarching question, such as the design of a new space suit for building the International Space Station. Rather, they address specific human factors deficiencies that interfere with the ideal user-system interaction required for long-term crewed space missions, both on the vehicle and in mission control. Finally, several papers regarding a new form of user interface called augmented reality are collected and used to argue that a radically new form of spacecraft will soon be possible to design. These new spacecraft could significantly reduce mass and power requirements while providing astronauts with increasingly complete and flexible spacecraft control. In terms of disciplinary focus, the papers in this section fell into four categories: 1) vision (3 papers); 2) audition (1 paper); 3) haptics (4 papers); and 4) spatial orientation (2 papers). There were also six papers on general system development: 1) simulator development (1 paper); 2) teleoperator development (1 paper); and 3) augmented reality displays (4 papers).
Human Vision: Verghese, McKee, and Vreven described a new model of observer sensitivity to visual motion that captures the human ability to predict future positions of a moving target. Models of this sort can be incorporated into video compression algorithms and into automatic video quality assessments, leading to more efficient use of video bandwidth, e.g., lower latencies or higher frame rates. Such an application was illustrated by the second vision paper, by Watson, Kreslake, and Ramirez. Visual simulations designed to more completely exploit human visual perception were also reported by Maida, Pace, and Novak, who showed how computer simulations using real-time shadows and glare enhance the realism of simulation training, especially of orbital daylight.

Audition: Noise reduction headsets are being developed by Kline-Schoder, Buckey, and Musiek, using feed-forward cancellation techniques for objective hearing assessment in noisy environments. This technology is important because classical hearing assessment techniques do not work in the noisy environments often found in spacecraft, yet crews are at risk for hearing loss at the noise levels found there, so techniques to track this loss during the mission are needed. Consequently, assessment techniques using otoacoustic hearing evaluation are being integrated into a noise-reducing headset.

Haptics: Tactile displays (tactors) arranged on the user's body have been developed by Raj, Roetzer, Cholewiak, Kass, and Rupert to provide orientation and spatial situation awareness. Tactors are being evaluated using psychophysical tests to optimize the stimulus for 6 DOF EVA spatial situation awareness. Kinesthetic cues from a noncontrolling hand are shown by Ellis, Adelstein, and Welch to be a potential countermeasure to the breakdown of visual-motor coordination during teleoperation. The breakdown addressed is a form of visual-motor rearrangement caused by a remote camera not being oriented to preserve alignment between display and control axes. It can be largely removed when the noncontrolling hand is positioned to reveal the misalignment angle with respect to the user's torso. Available strength and muscular fatigue during suited and unsuited user movement can be predicted from empirical data collected by Maida, Gonzalez, and Rajulu from subjects at the JSC Precision Air Bearing Facility. These data are hard to obtain precisely and should aid predictions of the physical difficulty of EVA missions. Reaction forces caused by crew movement in micro-G will be wirelessly recorded along with joint angles by Lody, Newman, Baroni, Ferigno, and Pedotti to determine how to preserve very low levels of micro-G for life science experimentation on crewed spacecraft.
These measurements should help crews answer the complaint that they may disturb micro-G-requiring experiments when they rattle around inside the vehicle.

Spatial orientation: Spatial orientation abilities have been studied by Kaiser and Proffitt in displays with differing levels of immersion, from desktop displays to immersive virtual environments, to determine the type required to match user abilities and disabilities in the real world. The level of immersion necessary for simulation and training of spatially oriented tasks needs to be determined to guide decisions about in situ training and simulation systems. Display and control aids are being developed by Oman, Liu, Sachtler, Hutchison, Beall, and Natapoff to support 6 DOF way-finding in virtual environment (VE) training for EVA. The aids are based on 3D versions of "you are here" maps, worlds in miniature, live video texture-mapped onto an enveloping spherical projection surface, and an understanding of users' tendency to overestimate rotary motion.

Simulators: As illustrated by Whiteley, simple simulators need to be developed to allow crews to quickly synthesize and evaluate common crew interfaces that can be utilized for many different tasks during a mission, including in situ training.

Teleoperation System Development: A supervisory-control teleoperation system dividing manipulative tasks into subtasks is being developed by Shastri, Nitzan, DeCurtins, Gorfu, Garcia, Mortensen, and Hettinger to overcome the time-delay problem that interferes with the ability of PIs on the ground to directly operate manipulators on orbit. Low-level machine vision algorithms, such as stereoscopic tracking, have been introduced to reduce the precision with which operators must specify desired positions of end effectors.

Augmented Reality Systems: Augmented reality (AR) systems are display systems that use either optical or electronic means to superimpose synthetic, spatially conformal imagery onto a user's view of the world or of real-world imagery. They have many possible applications in spacecraft operation, simulation training, and ground operations, especially while the user's hands are occupied. Low-cost approaches to augmented reality based on video mixers, used for test-bed development for online tutoring across distributed environments, were shown by Stadermann. Camera-generated video streams were shown by Majoros and Neumann to have image content that can be quickly annotated with text or symbols using feature tracking and uplinked to users. This capability allows AR users to mark up dynamic video in ways heretofore available only for static imagery and should assist crew-mission ops pictorial communication about dynamic events. Alternative tool and camera tracking technologies have been evaluated by Zamorano, Pandya, Siadat, Gong, Li, Maida, and Kakadiaris for the precise registration between real and synthetic imagery needed for AR displays. This group has demonstrated precise AR registration, which has been implemented for neurosurgery and can be adapted for use in Remote Manipulator System (RMS) type tasks. A combination of visual pattern recognition and forward robot kinematics may be required for RMS tasks. Devereaux, Carr, and Rathjen showed wireless technology to be adaptable to developing an untethered, portable display appropriate for augmented reality. The refinement of personal, portable AR displays paired with miniature, wearable computers could dramatically transform the internal appearance of spacecraft by replacing the many large, fixed-location workstations with smaller, portable, personalized AR displays, dramatically empowering the crew while lowering power consumption and mass.
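The registration problem running through the AR papers above reduces, at each video frame, to projecting points known in a world (or robot) frame through the tracked camera pose onto the image, so that synthetic annotations land on the right pixels. The following is a minimal pinhole-camera sketch of that step; the intrinsic matrix and pose values are illustrative, not drawn from any of the cited systems.

```python
import numpy as np

def project(point_w, K, R, t):
    """Map a 3D world point to pixel coordinates given camera intrinsics K
    and extrinsics (R, t): world frame -> camera frame -> image plane."""
    p_cam = R @ point_w + t          # world frame -> camera frame
    u, v, w = K @ p_cam              # perspective projection
    return np.array([u / w, v / w])  # normalize by depth

# Illustrative 640x480 camera with 500-pixel focal length,
# looking down the world +Z axis from the origin
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.zeros(3)

# A point 2 m straight ahead should land at the image center
print(project(np.array([0.0, 0.0, 2.0]), K, R, t))  # → [320. 240.]
```

Any error in the tracked pose (R, t) translates directly into pixel misregistration of the overlay, which is why evaluating alternative tracking technologies, as Zamorano and colleagues did, matters for conformal AR.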

BP-HF Interface Issues


As with the behavior and performance presentations, the presentations reflecting an integration of BP-HF issues also addressed the individual, interpersonal, and organizational levels of performance. Two presentations addressed the use of expert systems for assessing and improving individual cognitive performance in-flight. The Principal Investigator-in-a-Box, an expert system developed by Young and his colleagues at MIT, provides an excellent model of how expert systems could be used in space to improve cognitive performance, not merely in the completion of a specific task, but in training and in reviewing performance effectiveness. The PI-in-a-Box significantly improved individuals' ability to monitor biomedical instrumentation in a sleep experiment. Its potential, however, extends beyond single experiments. Another human factors engineering effort dedicated to enhancing cognitive performance was described by Bernd Lorenz. The objective of this project has been to examine performance of complex cognitive tasks under varying levels of automation (LOA) using a modified version of the Cabin Air Management System (CAMS). The findings suggest that the distance between levels of automation has a modulating effect on individual performance, which is stronger under multiple-task than under single-task conditions. Another project involves the use of engineering technology to monitor and manage motion during the performance of critical tasks. Metaxas et al. reported on efforts to monitor astronaut activity for real-time analysis and safety assessment based on computer vision methods, optimal control, and customized wearable hardware. Recursive dynamics and optimal control are being used to model optimal motion in IVA and EVA tasks. Computer vision methods based on single- and two-camera input have been developed for use in establishing analytic and predictive measures of IVA and EVA space human factors.
At the interpersonal level, two projects reported on the use of intelligent systems to create event representations for studying the communication of change information within groups, especially in situations involving anomaly response and replanning. Christoffersen and colleagues reviewed studies in space mission control of how groups collaborate in anomaly response and replanning, and of how intelligent systems for event representation can be used to study the communication of change information. Their research demonstrates why capturing and displaying information about changes in events is difficult and how experts are able to extract information from raw telemetry data. The goal of this work is to describe principles and techniques for communicating change information in displays and in cooperative work tools. The focus of this project is on enhancing communication within the group. Badler et al. reported on efforts to use computer graphics and virtual environments (VE) to train crew interpersonal interactions for procedure validation, task allocation, and unusual-procedure simulation. In these VE systems, at least one person is a live participant while several virtual human agents, represented by human-like embodied models, are engaged in activities in the same virtual space. This project focuses on enhancing cooperation within the group during the performance of specific tasks.

Finally, two projects described the potential for applying human factors engineering solutions to behavior and performance issues that span more than one level. Haney reported on the development of a taxonomy of human errors, procedures for use, and implementation of software for FRANCIE (Framework Assessing Notorious Contributing Influences for Error), a methodology and tool for reducing or minimizing the risk of human error in space missions. The framework for this analytic tool is a hierarchy of error types, Generic Errors, and associated contributing influences known as Performance Shaping Factors (PSFs). This work addresses issues of cognition, communication, and cooperation at the individual, interpersonal, and organizational levels. Caldwell reported on efforts to examine Flight Control Room coordination across phases of flight and rates of event dynamics, with a focus on the transmission and distribution of information. This work addresses issues of communication at both the interpersonal and organizational levels by focusing on the multiple feedback processes that must occur to support distributed supervisory control tasks in the Mission Control Center (MCC).

IMPLICATIONS FOR FUTURE RESEARCH


All of the papers presented in this session addressed the need to develop countermeasures that can improve performance on the one hand and prevent the occurrence of significant performance decrements on the other. Despite differences in terminology, approach, and intellectual tradition, the acknowledgment of a common set of goals and objectives and the innovative use of space human factors engineering to address important issues of behavior and performance both suggest that much is to be gained through collaborative, multidisciplinary approaches to issues traditionally confined to one discipline or the other. In combination, Behavior/Performance and Human Factors research on the needs of long-duration missions represents a whole that is greater than the sum of its individual disciplines.

CREW MEMBER AND CREW-GROUND INTERACTIONS DURING NASA/MIR


N. Kanas1, V. Salnitskiy2, E. M. Grund1, D. S. Weiss1, V. Gushin2, O. Kozerenko2, A. Sled2, C. R. Marmar1 1 University of California and V.A. Medical Center, San Francisco, California 94121 2 Institute for Biomedical Problems, Moscow, Russia

INTRODUCTION
Anecdotal reports from space and results from simulation studies on Earth have suggested that space crew members may experience decrements in their mood and interpersonal environment during the 2nd half of a mission and that negative emotions may be displaced to outside monitoring personnel. The objectives of this study were to measure and characterize changes over time in a number of important interpersonal factors, such as tension, cohesion, leadership role, and the relationship between space crews and monitoring personnel on Earth.

CURRENT STATUS OF RESEARCH Methods


Five American astronauts, eight Russian cosmonauts, and 42 American and 16 Russian mission control personnel who were involved in the NASA/Mir space program provided informed consent and participated in the study. The subjects completed a computerized or hardcopy questionnaire on a weekly basis before, during, and after each mission. The questionnaire consisted of three standard mood and interpersonal group climate measures (Profile of Mood States, Group Environment Scale, Work Environment Scale) and a critical incident log; each questionnaire took 15-20 minutes to complete.

Results
Support for the displacement of unpleasant emotions from the crew members to mission control personnel was found in all six measures testing this effect, and displacement of unpleasant emotions from mission control personnel to management was found in five of the six measures. Limited support was found for the expected decrements in mood and interpersonal environment during the 2nd half of the missions: the crew members perceived a decline in support from the commander in the 2nd half, and for American crew members several measures showed an initial novelty effect in the first few months. Significant response differences were found between crew members and mission control personnel and between American and Russian participants, with Americans expressing more unhappiness with their work environment. All subject groups mentioned negative on-board events as one of their two most frequently cited critical incidents; Americans also listed interpersonal difficulties, and Russian ground subjects also listed resource and salary problems.

CONCLUSIONS
Countermeasures need to be developed to deal with the displacement of unpleasant emotions to outside individuals by both crew members and mission control personnel. Further work needs to be done assessing changes in mood and interpersonal functioning over time. Differences in American and Russian responses point out the importance of cultural factors during international space missions.

FUTURE PLANS
Crew and ground tension, cohesion, leadership role, and displacement will be further evaluated in our current NASA-funded International Space Station study. This study will allow us to evaluate the impact of the cultural and language background of our subjects on their mood and interpersonal interactions during the ISS missions. We also are developing a pre-mission psychosocial education training module that is intended to improve the interactions and performance of crew members and mission control personnel who will be involved in future long-duration manned space missions.

INDEX TERMS:
Crew Interactions, Crew Performance, Crew-ground Interactions, Psychosocial Issues, Displacement, Cultural Issues, Psychological Stress

PSYCHIATRIC MORBIDITY AFTER EXTENDED ISOLATION AND CONFINEMENT IN AN EXTREME ENVIRONMENT: THE ANTARCTIC-SPACE ANALOG PROGRAM
L.A. Palinkas1, F. Glogower2, M. Dembert3, K. Hansen4, and R. Smullen3
1Department of Family and Preventive Medicine, University of California, San Diego, La Jolla CA 92093-0807, 2National Naval Medical Center, Bethesda MD, 3Naval Medical Center, Portsmouth, VA, 4Naval Medical Center, Bremerton, WA

INTRODUCTION
As manned space missions increase in both frequency and duration, an understanding of the psychosocial risks associated with prolonged isolation and confinement in an extreme environment becomes paramount. Personnel selected for such missions are generally not considered at risk for psychiatric disorders because they are typically required to undergo psychiatric evaluations prior to their isolation. Those with a history of psychiatric illness, including illness that is managed through medication, are subject to select-out screening procedures and disqualified from participating in such missions. Nevertheless, anecdotal evidence of psychological disturbances in space and an increased rate of clinically significant depressive symptoms and symptoms of subsyndromal seasonal affective disorder in the Antarctic suggest that isolation and confinement in an extreme environment may pose certain risks, even in men and women who are clinically asymptomatic and who have no history of psychiatric disorder at the time they are screened and selected for such assignments.

METHODS
We assessed the psychological status of a cohort of American men and women who spent an austral winter in Antarctica to address two important questions: (1) what proportion of a non-psychiatric population will experience psychiatric morbidity subsequent to an extended period of isolation and confinement in an extreme environment; and (2) are certain individuals or groups of individuals more likely than others to experience such morbidity? Subjects for this study were 313 military and civilian personnel who spent an austral winter at South Pole Station (90°S) or McMurdo Station (78°51'S), Antarctica over a four-year period.
All members of one winter-over crew at South Pole and four crews at McMurdo were invited to participate in a debriefing conducted by a U.S. Navy psychiatrist or clinical psychologist. Twenty-six of the 27 (96.3%) crewmembers at South Pole and 111 of the 112 (99.1%) military crewmembers at McMurdo agreed to participate. Only 202 of the 766 (26.4%) civilian winter-over crew members at McMurdo Station agreed to participate. The lower response rate among civilians at McMurdo was due to the voluntary nature of the debriefing process. Many chose not to undergo debriefing for a variety of reasons, including mistrust of psychologists whose findings might interfere with prospects of continued employment with the United States Antarctic Program, a perceived lack of need for psychological support, or the inconvenience of scheduling an appointment during a period of increased activity in preparation for the austral summer season. In contrast, military personnel are required to undergo psychological debriefing as a condition of their service in the U.S. Navy. Prevalence rates for the entire crew were therefore weighted using a formula based on the correlation between prevalence and participation rates for each group of civilian personnel. All winter-over crewmembers were required to undergo a medical and psychological evaluation prior to their deployment to Antarctica. The psychological evaluation consisted of an interview with a U.S. Navy psychiatrist and clinical psychologist and completion of a battery of standardized clinical assessments, including the MMPI, the 16PF, and the MAST. Study subjects arrived in Antarctica between late August and January of the following year and remained in Antarctica for approximately 12 months. Informed consent was obtained from each participant after the study objectives and data collection procedures had been fully explained.
Navy psychiatrists and psychologists also conducted psychological debriefings at the end of the austral winter using a standardized protocol for conducting the debriefing. The protocol includes completion of a questionnaire by the crewmember describing his or her experiences over the winter, a brief interview by the clinician to address any issues that may have arisen over the course of the winter, and administration of the Structured Interview Guide for the Hamilton Depression Rating Scale, Seasonal Affective Disorders (SIGH-SAD). The clinicians were requested to assign a DSM-IV diagnosis to crewmembers who satisfied the criteria for such a diagnosis.

RESULTS
Thirty-eight (12.1%) of the 313 men and women who participated in the psychiatric debriefings conducted at the end of an austral winter in the Antarctic presented with symptoms that met the criteria for a DSM-IV disorder. After weighting the prevalence to account for the low participation rate of civilian personnel, the prevalence of DSM-IV disorders was 5.2%. Mood disorders and adjustment disorders were the most common diagnoses, each accounting for 31.6% of all diagnoses, followed by sleep-related disorders (21%), substance-related disorders (10.5%), and personality disorders (7.9%). All of the DSM-IV disorders were identified in personnel who overwintered at McMurdo Station. Military personnel were 3.70 (95% C.I. = 1.83-7.42) times as likely to have a DSM-IV disorder as civilians. The prevalence of DSM-IV disorders was unrelated to age, sex, year, level of education, and prior winter-over experience (Table 1). As expected, mean Hamilton Depression Rating Scale and SIGH-SAD scores were significantly associated with a DSM-IV diagnosis. Military personnel had significantly higher mean scores than civilians; women had significantly higher mean scores than men; and personnel at McMurdo had significantly higher scores than personnel at South Pole. Depressive symptom scores also varied significantly by year.

Table 1. Prevalence of DSM-IV Psychiatric Disorders and Mean Depressive Symptom Scores by Demographic Characteristics of Antarctic Winter-Over Personnel

                      DSM-IV           HDRS-21        SAD-8          SIGH-29
                      (rate per 100)   Mean (S.D.)    Mean (S.D.)    Mean (S.D.)
Year
  1994                5.8              2.9 (3.9)      1.6 (2.8)      4.5 (6.3)
  1995                5.5              4.0 (4.6)      2.3 (2.6)      6.3 (6.5)
  1996                6.6              4.9 (3.4)      3.9 (3.0)      8.8 (5.5)
  1997                2.0              3.0 (2.5)**    1.9 (1.8)***   4.9 (3.5)***
Age
  < 35 years          6.0              3.4 (3.3)      2.4 (2.8)      5.8 (5.5)
  35+ years           4.5              3.7 (4.0)      2.3 (2.7)      6.0 (6.0)
Sex
  Men                 5.4              3.2 (3.3)      2.1 (2.7)      5.3 (5.2)
  Women               5.0              4.7 (4.6)**    3.1 (2.7)***   7.8 (6.7)***
Occupation
  Military            13.5             4.0 (3.9)      2.8 (2.8)      6.8 (6.1)
  Civilian            4.0***           3.0 (3.1)*     1.6 (2.5)***   4.6 (5.0)***
Station
  South Pole          0.0              3.3 (3.4)      2.2 (2.5)      5.5 (5.3)
  McMurdo             5.4              7.3 (4.3)***   4.3 (4.1)**    11.6 (7.7)***
Education
  < 14 years          6.6              4.2 (4.0)      2.8 (2.9)      7.0 (6.2)
  14+ years           4.4              3.2 (3.5)      2.2 (2.6)*     5.7 (5.1)
Prior winter-over
  No                  6.5              4.0 (4.4)      3.0 (2.7)      7.0 (6.4)
  Yes                 3.5              3.7 (3.3)      2.3 (2.8)**    6.0 (5.4)
DSM-IV diagnosis
  No                                   3.1 (3.2)      2.1 (2.5)      5.3 (5.1)
  Yes                                  7.2 (5.1)***   4.2 (3.6)      11.4 (7.5)***

* p < 0.05, ** p < 0.01, *** p < 0.001

CONCLUSION
Select-out procedures are generally considered successful in reducing the risk of psychiatric morbidity associated with prolonged isolation and confinement in an extreme environment. Nevertheless, these results suggest that such procedures should be supplemented by additional countermeasures designed to provide psychological support to personnel during extended missions in isolated and confined extreme environments. Such countermeasures are especially likely to benefit the estimated 5% of crewmembers who experience clinically significant symptoms during such missions.

(Supported by NASA: NAG 5-4571 and NSF: OPP-9019131)

ACCESSING COGNITIVE STATE FROM PHYSIOLOGICAL DATA
Tzyy-Ping Jung1,2, Scott Makeig1,2,3, Don Stillwell4, and Deborah Harm4
1Institute for Neural Computation, University of California, San Diego, La Jolla CA 92093, 2Computational Neurobiology Lab, The Salk Institute, La Jolla CA 92037, 3Naval Health Research Center, San Diego CA, 4Johnson Space Center, NASA, Houston TX

INTRODUCTION
On long and ultra-long duration missions (e.g., a Mars mission), the spacecraft and its inhabitants will be totally dependent on the proper operation of thousands of control loops in thousands of pieces of equipment to maintain the delicate, un-buffered homeostasis of their remote existence. Without automation, such a mission would be impossible. But automated systems place the user in a mere monitoring role, and decades of human vigilance research show that for most or all operators engaged in attention-intensive tasks, retaining a constant level of alertness is rare if not impossible. Accurate and non-intrusive real-time monitoring of operator alertness is thus highly desirable. The ultimate goal of this project is to develop a dynamic, human-centered interface that will use operator psychophysiological measures, during training and regular operational routines, (1) to cue operators to their own cognitive states, and/or (2) to adjust information transfer rate and presentation based on the operators' vigilance.

CURRENT STATUS OF RESEARCH
During the past six years, we (Makeig & Jung, 1995; Jung et al., 1997) have demonstrated the feasibility of accurately estimating shifts in an operator's level of alertness in a simple auditory detection task by monitoring changes in EEG power spectra. A natural question is whether the alertness monitoring technique we developed for a simple auditory task can be generalized to estimate the cognitive state of subjects performing more complex tasks.
To this end, we have implemented a visual Compensatory Tracking Task (CTT) paradigm of Makeig & Jolley (1996) on a portable laptop computer and have used it to collect pilot data to test the feasibility of estimating an operator's cognitive state in a continuous visuomotor tracking task.

Methods
Compensatory Tracking Task
In the CTT, subjects are instructed to manipulate a trackball to produce forces (proportional to velocity) countering unseen quasi-random forces that tend to "blow" a circular disk off an invisible "slippery hill" at screen center, whose apex is marked by a target ring. Subjects are asked to use trackball movements to maintain the disk as near as possible to the ring (Makeig & Jolley, 1996). In constructing the moving-mean performance measure (as a behavioral correlate of alertness level), the 14/s disk-distance time series were first linearly smoothed, using a 1-min window moved through the data in 2-s steps, and then passed through an erf() sigmoid (Van Orden et al., 2000).

Data Collection and Preprocessing
Fifteen volunteer subjects performed at least two 30-60 min CTT experiments on different days. Data sets from 10 subjects comprising at least two sessions containing wide variations in task performance were selected for further analysis. Concurrent EEG and performance data were collected during each CTT experiment. Thirty-two EEG channels (placed according to the International 10-20 System), referred to the right mastoid (A1), were recorded. For fair comparison with the results shown in Jung et al. (1997), only EEG data recorded at midline sites Cz (vertex) and Pz (central parietal) were used to estimate alertness. EEG spectra were extracted by Hanning-windowed FFTs performed on overlapping 2-s epochs from the continuous EEG data record, converted to dB power, and then linearly smoothed using a 1-min window moved through the data in 2-s steps. Principal component analysis (PCA) was then applied to the EEG log spectrum to extract the directions of largest variance for each session. Projections of the EEG log spectral data on the subspace formed by the eigenvectors corresponding to the largest four eigenvalues were then used as inputs to train a three-layer perceptron neural network to estimate the behavioral alertness (disk-distance) time series. During training, two-thirds of the data points were used as training samples, and the remaining one-third were used as a cross-validation set to prevent over-fitting. PCA eigenvectors and neural network weights derived from one session were used to process data from the other session from the same subject.
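The feature pipeline described above can be sketched roughly as follows. This is a hedged illustration on synthetic data: the sampling rate, the 50% epoch overlap, and the closed-form ridge-regression readout (standing in here for the three-layer perceptron, and omitting the 1-min smoothing) are illustrative assumptions, not the project's actual parameters.

```python
import numpy as np

# Sketch of the spectral pipeline: Hanning-windowed FFTs on overlapping
# 2-s EEG epochs -> dB power -> PCA to 4 components -> linear readout.
# All signals here are synthetic; the sampling rate is assumed.

FS = 128            # assumed sampling rate (Hz); not stated in the abstract
EPOCH = 2 * FS      # 2-s analysis epochs
STEP = FS           # 50% overlap (assumption)

def log_spectra(eeg):
    """dB power spectra of overlapping, Hanning-windowed 2-s epochs."""
    win = np.hanning(EPOCH)
    starts = range(0, len(eeg) - EPOCH + 1, STEP)
    epochs = np.array([eeg[s:s + EPOCH] * win for s in starts])
    power = np.abs(np.fft.rfft(epochs, axis=1)) ** 2
    return 10.0 * np.log10(power + 1e-12)

def pca_project(X, k=4):
    """Project rows of X onto the k directions of largest variance."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(0)
t = np.arange(60 * FS) / FS                      # one minute of fake EEG
drowsy = 0.5 * (1 + np.sin(2 * np.pi * t / 30))  # slow "alertness" drift
eeg = drowsy * np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(len(t))

X = log_spectra(eeg)
feats = pca_project(X, k=4)

# Target: the slow drift, sampled at epoch midpoints.
mids = np.arange(0, len(eeg) - EPOCH + 1, STEP) + EPOCH // 2
y = drowsy[mids]

# Ridge-regression readout (stand-in for the MLP in the abstract).
A = np.hstack([feats, np.ones((len(feats), 1))])
w = np.linalg.solve(A.T @ A + 1e-3 * np.eye(A.shape[1]), A.T @ y)
est = A @ w
r = np.corrcoef(y, est)[0, 1]
print(f"correlation between drift and estimate: {r:.2f}")
```

Because the 10 Hz "alpha" amplitude of the synthetic EEG is modulated by the drift, its log power dominates the leading PCA directions, and even a linear readout tracks the drift closely.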

Results
Figure 1 shows sample actual and estimated error-rate time series for the two sessions from a typical subject. Note that in both cross-session estimations, the neural network model estimated changes in local error rate occurring throughout the sessions very well. These results suggest that accurate alertness estimation based on EEG spectral data appears realistic, a finding consistent with our previous results using an auditory target detection task.

Figure 1. Error-rate estimates for the two sessions from a normal subject. The error-rate estimates were obtained by training neural networks with two-channel EEG log power spectra projected on the four principal components. (Left panel) This model was trained on session 2 data and tested on EEG data from session 1. (Right panel) This model was trained on session 1 and tested on session 2. Correlations between the actual and estimated error are 0.94 and 0.90, respectively.

Advances in Physiological Measurement for Cognitive State Monitoring


To be practical for routine use in the workplace, the human-centered interface must use bio-sensors that are lightweight and easily donned and doffed. We report here a significant advance in EEG sensor technology.

Non-invasive Dry Electrode Recording: We are collaborating with Don Tucker of Electrical Geodesics, Inc. (Eugene, OR) to develop non-intrusive, easily wearable dry EEG sensors. Recent initial testing showed that new dry sensors (Integrated Biosensing Technologies, Menlo Park, CA) can provide good-quality EEG data with little or no skin preparation. We collected data in a 45-min CTT session and applied our algorithm to estimate the subject's behavioral alertness throughout the session (Figure 2).

Figure 2. Alertness estimates for the session recorded by dry EEG sensors placed near F7 and F8. The error-rate estimates were obtained by training a three-layer feedforward neural network using the projections of two-channel EEG log power spectra on the four principal directions derived by PCA.

As can be seen, the neural network again estimated changes in task performance occurring throughout the session impressively well (with an overall correlation between the actual and estimated error of 0.86).

Video-based Eye Tracking: Our collaborator, Javier Movellan of UCSD, has successfully developed a prototype of a highly efficient and robust eye detector based on the Sparse Network of Winnows computational architecture (Carlson et al., 1999). The system has been trained and tested on the FERET database (Phillips et al., 1998) with excellent results (98.4% accuracy). We plan to use this non-intrusive eye tracking method to extract eye activity from subject face images to estimate slow changes in alertness (Van Orden et al., 2000).

CONCLUSION
We have successfully demonstrated the feasibility of accurately estimating shifts in an operator's cognitive state in a visual tracking task by monitoring the changes in EEG power spectra collected with wet or dry EEG electrodes. We have also developed an accurate, efficient video-based eye-tracking system that allows us to non-intrusively extract blink activity, which has been shown highly correlated with CTT task performance (Van Orden et al., 2000).

FUTURE PLANS
We will fuse multiple streams of psychophysiological information (tonic and phasic EEG spectral and changes in eye closure rate and duration) to improve accuracy of our human-centered interface. We will then test the effectiveness of auditory and/or visual feedback in helping operators maintain their best performance, and to adjust operator workload to combat drowsiness and/or to prevent lapses in concentration.

LITERATURE CITED
Carlson A, Cumby C, Rosen J, & Roth D (1999) SNoW Users Guide. UIUC Tech Report UIUC-DCS-R-99-210.
Jung T-P, Makeig S, Stensmo M, & Sejnowski TJ (1997) IEEE Trans. on Biomedical Engineering, 44(1):60-69.
Makeig S & Jung T-P (1995) NeuroReport 7:213-216.
Makeig S & Jolley M (1996) COMPTRACK: A compensatory tracking task for monitoring alertness. Technical Document 96-3C, San Diego: Naval Health Research Center.
Phillips P, Wechsler H, Huang J, & Rauss P (1998) Image and Vision Computing Journal, 16(5):295-306.
Van Orden K, Jung T-P, & Makeig S (2000) Biological Psychology, 52(3):221-240.

INDEX TERMS:
cognitive state assessment, alertness, vigilance, fatigue, human-centered interface, non-invasive measurement, countermeasure, neural network, EEG, eye activity.

PROTECTING SCIENTIFIC INTEGRITY THROUGH DISCLOSURES OF CONFLICTS OF INTEREST


Baruch A. Brody, Ph.D., Baylor College of Medicine, PI
S. Van McCrary, Ph.D., J.D., Baylor College of Medicine, PM
Cheryl Anderson, Ph.D., Baylor College of Medicine, Co-PI
Larry McCullough, Ph.D., Baylor College of Medicine, Co-PI
Nelda Wray, M.D., M.P.H., Baylor College of Medicine, Co-PI

INTRODUCTION TO THE RESEARCH PROJECT:


A broad consensus has emerged that disclosure policies can play a major role in managing conflicts of interest in research, but little is known about how current disclosure policies work and even less is known about how they might be improved. In Phase I of this project, the full range of currently existing disclosure policies will be collected and analyzed using deductive content analysis. In Phase II of the project, a questionnaire will be distributed to the various stakeholders ascertaining their views about proposed alternative policies. In Phase III of the project, a panel of experts will develop a set of ethically justified alternative policies, and a national sample of stakeholders will be surveyed to determine which alternatives are both ethically justified and acceptable. Agencies such as NASA can use these results in formulating their own policies.

CURRENT STATUS OF RESEARCH:


Phase I has been completed, and a manuscript summarizing its results was accepted by the New England Journal of Medicine on September 19, 2000. Because of its rules, the results cannot be published in advance in the abstract book, but a description of the study is presented below and we anticipate being able to distribute the full results at the meeting. Phase II is well under way, and will be described below.

PHASE I OF THE STUDY
Methods:


We surveyed all medical schools and all research institutions receiving more than $5,000,000 in federal research support, 48 journals (basic science and clinical) with the highest immediacy index, and 17 federal agencies (the Common Rule agencies, including NASA, and the FDA) to obtain their policies on conflicts of interest. We identified the following domains of content: (a) the existence of disclosure requirements; (b) the type of conflict that must be disclosed; (c) the persons/entities for whom disclosure must be made; (d) the persons/institutions to whom disclosures must be made; (e) the timing of required disclosure; (f) the use of the information by officials to whom it is disclosed; and (g) penalties for nondisclosure. 250 of the 297 institutions, 16 of the 17 agencies, and 47 of the 48 journals responded.

Results and Conclusions:


For reasons indicated above, this will be distributed at the meeting.

PHASE II OF THE STUDY
Methods:


A questionnaire was developed involving, for each domain of content listed above, a variety of options drawn from the policies collected and analyzed in Phase I. Respondents were asked to indicate their agreement/disagreement with each option on a 1-7 scale. Two mailings have been sent to potential respondents and a third mailing is underway. The respondents include research institution officials (the vice presidents of research and the chairs of the IRBs), researchers (heads of NIH study sections and PIs on NIH-funded multicenter studies), and bioethicists (a sampling from the membership of ASBH). One of the major issues to be studied is whether the different groups respond differently. The responses of all of these groups will also be compared, in the next part of Phase II, which will begin shortly, with the responses of a sampling of research subjects from Baylor College of Medicine research protocols. Finally, a special study will be done of the attitudes of the astronaut corps, the crucial subjects in NASA life science research.

Results and Conclusions:


We have received responses from close to 40% of the three groups of respondents surveyed thus far. A preliminary analysis has shown that, even after correcting for the multiple-look issue, there are statistically significant differences on most items. This will be reconfirmed after the third-mailing results come in. We will then also examine whether these differences are significant from a policy perspective (greater than a 1-point difference on the scale) or merely statistically significant.
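The distinction between policy significance and mere statistical significance can be made concrete with a toy calculation (invented data, not project results): with large samples, a mean difference well under 1 scale point can still yield a large test statistic.

```python
import numpy as np

# Toy illustration (invented data) of statistical vs. policy significance
# on a 1-7 agreement scale: two simulated respondent groups differ by
# well under 1 point, yet the Welch t-statistic is large.

rng = np.random.default_rng(42)
researchers = np.clip(np.round(rng.normal(4.2, 1.4, 400)), 1, 7)
bioethicists = np.clip(np.round(rng.normal(4.6, 1.4, 400)), 1, 7)

def welch_t(a, b):
    """Welch's t-statistic for two independent samples."""
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (b.mean() - a.mean()) / np.sqrt(va + vb)

diff = bioethicists.mean() - researchers.mean()
t = welch_t(researchers, bioethicists)
print(f"mean difference {diff:.2f} points, Welch t = {t:.1f}")
print("policy-significant (> 1 point)?", abs(diff) > 1.0)
```

Here the group difference is statistically detectable but falls short of the abstract's 1-point threshold for policy relevance.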

FUTURE PLANS:
(a) complete Phase II; (b) do Phase III

INDEX TERMS:
Conflicts of interest, research ethics.

MEANINGFUL MEASUREMENT OF VISUAL QUALITY FOR DIGITAL IMAGING APPLICATIONS


Andrew B. Watson1, Lindsay Kreslake2, Cesar Ramirez2
1NASA Ames Research Center, Moffett Field, CA 94035; 2Foothill College, Los Altos, CA

INTRODUCTION
The study of visual quality in digital communications, and the development of automated quality metrics, require accurate and meaningful measurement of visual impairment. A natural unit for impairment is the JND (just-noticeable difference). In many cases, what is required is a measure of an impairment scale, that is, the growth of the subjective impairment, in JNDs, as some physical parameter (such as the amount of artifact) is increased. We have developed a new method for efficient measurement of impairment scales, and we have applied it to a large set of video sequences.

CURRENT STATUS OF RESEARCH
Methods


Measurement of sensory scales is a classical problem in psychophysics. In the method of pair comparison, each trial consists of a pair of samples, and the observer selects the one perceived to be greater on the relevant scale. This may be regarded as an extension of the method of forced choice: from measurement of threshold (one JND), to measurement of the larger sensory scale (multiple JNDs). While simple for the observer, pair comparison is inefficient because if all samples are compared, many comparisons will be uninformative. In general, samples separated by about 1 JND are most informative. We have developed an efficient adaptive method for selection of sample pairs. As with the QUEST adaptive threshold procedure, the method is based on Bayesian estimation of the sensory scale after each trial. We call the method EASE ("to make less painful", or Efficient Adaptive Scale Estimation).

We have used the EASE method to measure impairment scales for digital video. Each video was derived from an original source (SRC) by the addition of a particular artifact, produced by a particular codec at a specific bit rate, called a hypothetical reference circuit (HRC). Different amounts of artifact were produced by linear combination of the source and compressed videos. On each pair-comparison trial the observer selected which of two sequences, containing different amounts of artifact, appeared more impaired. The scale is estimated from the pair-comparison data using a maximum likelihood method. At the top of the scale, when all of the artifact is present, the scale value is the total number of JNDs corresponding to that SRC/HRC condition. We have measured impairment scales for twenty-five video sequences, derived from five SRCs combined with each of five HRCs. We used three viewing distances (3H, 5H, and 7H, where H = picture height).
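The maximum-likelihood scale-estimation step can be sketched as follows. This is a hedged illustration with invented stimulus values, assuming a Thurstone-style response model in which P(i judged more impaired than j) = Φ((s_i − s_j)/√2) with s in JND units; EASE's Bayesian adaptive selection of pairs is not reproduced here.

```python
import numpy as np
from math import erf, sqrt

# Sketch (invented data, not the study's) of maximum-likelihood
# estimation of a JND scale from pair-comparison counts, under a
# Thurstone-style model: P(i > j) = Phi((s_i - s_j) / sqrt(2)).

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(x / sqrt(2)))

rng = np.random.default_rng(1)
true_s = np.array([0.0, 0.8, 1.7, 2.9, 4.2])   # hypothetical JND scale
n = len(true_s)

# Simulate 200 comparisons per pair.
trials = []
for i in range(n):
    for j in range(i + 1, n):
        p = phi((true_s[i] - true_s[j]) / sqrt(2))
        wins_i = rng.binomial(200, p)
        trials.append((i, j, wins_i, 200 - wins_i))

def neg_log_lik(s):
    """Negative binomial log-likelihood of the pair-comparison counts."""
    ll = 0.0
    for i, j, wi, wj in trials:
        p = np.clip(phi((s[i] - s[j]) / sqrt(2)), 1e-9, 1 - 1e-9)
        ll += wi * np.log(p) + wj * np.log(1 - p)
    return -ll

# Crude finite-difference gradient descent; s[0] is pinned at 0
# to fix the origin of the scale.
s = np.zeros(n)
for _ in range(2000):
    g = np.zeros(n)
    for k in range(1, n):
        e = np.zeros(n)
        e[k] = 1e-5
        g[k] = (neg_log_lik(s + e) - neg_log_lik(s - e)) / 2e-5
    s -= 0.001 * g

print("recovered scale (JNDs):", np.round(s, 2))
```

With enough trials per pair, the recovered values approach the generating scale; in practice a proper optimizer would replace the hand-rolled descent, and EASE concentrates trials on the most informative (roughly 1-JND-apart) pairs.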

Results
We find EASE to be a reliable method for measuring impairment scales and JNDs for processed video sequences. We will describe the variation of JND with SRC, HRC, and viewing distance. We have compared our JND measurements with mean opinion scores for the same sequences obtained at one viewing distance using the DSCQS method by the Video Quality Experts Group (VQEG), and we find that the two measures are highly correlated. The advantages of the JND measurements are that they are in absolute and meaningful units (JNDs) and are unlikely to be subject to context effects.

CONCLUSIONS
We note that JND measurements offer a means of creating calibrated artifact samples, and of testing and calibrating video quality models. These new methods and measurements mark a new era in the science of visual quality.

INDEX TERMS: vision, visual quality, digital video, digital imaging, impairment, image compression, video compression

COMPUTER MODELING OF REAL-TIME DYNAMIC LIGHTING


J. Maida, M.S.1, J. Pace, M.S.2, J. Novak, Ph.D.3
1NASA Johnson Space Center, SF5, Houston, TX 77058, 2Johnson Engineering, 3NSBRI

INTRODUCTION
Space Station tasks involve procedures that are very complex and highly dependent on the availability of visual information. In many situations, cameras are used as tools to help overcome the visual and physical restrictions associated with space flight. However, these cameras are affected by the dynamic lighting conditions of space, and training for these conditions is necessary. The current project builds on the findings of an earlier NRA-funded project, which revealed improved performance by humans when trained with computer graphics and lighting effects such as shadows and glare. Previously, only the case of static lighting was examined. However, rapidly changing lighting conditions are common with a 45-minute (on average) orbital day. The current project will extend this effort to include dynamic lighting conditions. In addition, to enhance training and task execution during poorly or ambiguously illuminated cases, the project will also examine the use of augmented reality techniques, or dynamic overlays, to provide just-in-time assistance.

An example of dynamic lighting on orbit. Elapsed time is approximately 7 minutes.

CURRENT STATUS OF RESEARCH


Activities to date include the migration from a Unix-based SGI workstation to an NT-based SGI workstation. This was required in order to reduce the cost of upgrading from the older system used on the previous project, which investigated enhancement of training using static lighting. The software development activities required to create dynamic lighting have been underway on the NT-based system. In addition, the testing area has been moved to a single room, a more appropriate enclosure than the large, high-ceiling, open area previously used. This move facilitates an improved networking configuration, as well as an improved subject testing environment. Finally, all the hardware components, such as the camera pan/tilt units and translation table, are being re-interfaced to the new image generation system in support of the dynamic overlaying system also under development.

Methods
Using OpenGL(TM) features, such as stencil planes, "fast shadowing" techniques are being employed. The software development process uses existing public domain software, which is being modified to handle the required lighting and shadowing conditions and to utilize specific geometric models required for the scene. To implement dynamic shadowing, careful arrangement of the objects in the scene is necessary. Handling all possible shadow projections would be too slow, so the selection of critical shadowing projections in the field of view is required. The light source will be the sun, moving at the appropriate rate expected on orbit. Polygon reduction technology is being used to optimize geometric model representations in a controlled manner for faster scene generation.
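For readers unfamiliar with the technique, the core of the classic stencil-buffer "fast shadow" approach is a 4x4 matrix that flattens geometry onto a receiving plane along the rays of a directional light (here, the sun). The sketch below is an illustration of that standard construction, not the project's actual code; the plane and sun direction are arbitrary example values.

```python
import numpy as np

# Illustrative sketch of the classic planar projected-shadow matrix
# commonly paired with OpenGL stencil buffers: geometry is flattened
# onto a plane along the rays of a directional light.

def planar_shadow_matrix(plane, light_dir):
    """plane = (a, b, c, d) with ax + by + cz + d = 0; light_dir is the
    direction TOWARD the light, homogeneous (x, y, z, 0) for a sun at
    infinity."""
    plane = np.asarray(plane, float)
    light = np.append(np.asarray(light_dir, float), 0.0)
    dot = plane @ light
    return dot * np.eye(4) - np.outer(light, plane)

ground = (0.0, 1.0, 0.0, 0.0)   # the y = 0 ground plane (example)
sun = (0.3, 1.0, 0.2)           # hypothetical sun direction

M = planar_shadow_matrix(ground, sun)

# Flatten one model vertex and check it lands on the ground plane.
v = np.array([1.0, 2.0, -1.0, 1.0])
s = M @ v
s = s / s[3]                    # homogeneous divide
print("shadow point:", np.round(s[:3], 3))  # lies in the y = 0 plane
```

In a renderer, this matrix is pushed onto the modelview stack and the occluding geometry redrawn in the shadow color, with the stencil buffer confining the darkened pixels to the receiving surface; the sun direction is simply updated each frame to animate the orbital lighting.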

Results
With careful selection of critical shadow projections in the field of view, dynamic lighting conditions (i.e. "fast shadowing") can be created. The standard strategy of controlled polygon reduction, employed by training simulators, is being used to optimize scene geometry for higher scene generation rates.

CONCLUSION
Previous tests of training performance compared traditional computer generated scenes with lighting enhanced computer generated scenes using shadowing and static lighting. A statistically significant improvement in actual task performance was measured. Using dynamic lighting conditions, such as "fast shadowing", will provide an even more realistic depiction of orbital daytime.

FUTURE PLANS
With implementation of the dynamic lighting software and the dynamic overlay software, subject testing will begin.

INDEX TERMS
Dynamic Lighting, Fast Shadowing, Dynamic Overlays

ASSESSING AND OPTIMIZING THE PERCEPTUAL UTILITY OF IMMERSIVE DISPLAY MEDIA


Mary K. Kaiser, NASA Ames Research Center
Dennis R. Proffitt, University of Virginia

Advances in display technologies offer interface designers an increasing number of options, including the ability to create immersive virtual environments for use in training, simulation, and data visualization. Our work focuses on enhancing the functional utility of immersive interfaces. To that end, we are applying our understanding of human perception to optimize the choice of interface (including whether, in fact, the application justifies the expenses and constraints of immersive technologies), as well as to tune the display and simulation characteristics to the requirements of the user and task. Presently, our research focuses on ensuring effective orientation and locomotion in virtual environments. We are examining the roles played by landmarks (both designer- and user-defined), spatial structure, and gravitational cues in users' acquisition of local and global spatial representations. Further, we are investigating what transitional dynamics enhance users' ability to effectively navigate and explore these virtual spaces, and to generalize their experiences to real-world environments.

Mary K. Kaiser, Ph.D. Mail Stop 262-2 NASA Ames Research Center Moffett Field, CA 94035-1000 (650) 604-4448 (voice) (650) 604-0255 (FAX)

ADVANCED DISPLAYS AND CONTROLS FOR SIX DEGREE OF FREEDOM ORIENTATION AND NAVIGATION IN VIRTUAL MICROGRAVITY
C. M. Oman1, A. M. Liu1, J. Marquez1, W. B. Sachtler2, W. E. Hutchison1, A. C. Beall3, and A. Natapoff1
1Man Vehicle Laboratory, MIT, Cambridge, MA 02139, 2Research Lab of Electronics, MIT, 3Dept. of Psychology, University of California, Santa Barbara

INTRODUCTION
Our goal is to develop improved displays and controls for orientation and navigation in virtual environments, particularly for use by astronauts in simulations of weightlessness. We are working on three related projects: (1) We are conducting experiments to define the role of the rotatory component of virtual viewpoint motion on 2D and 3D path integration error. Klatzky et al. (1999) have shown that blind or sighted observers performing a triangle completion path integration task using real head movements make systematic errors, as if the ranges of turns and distances are compressed compared to actual values. In our experiments, subjects viewed a virtual environment without unique landmarks through a color stereo head-mounted display (HMD). In one condition, they were translated along three-legged parallelogram paths, and rotated (twice) so they always faced along the direction of motion. In the second condition, they were translated along the same paths, but without viewpoint rotation. In a third condition, their viewpoint was rotated through angles identical to those in the first condition, but without translation. In all cases, they then made a real head rotation to face the point of origin. Results show that subjects overestimated the rotatory component of virtual viewpoint motion, presumably due to the absence of vestibular and haptic cues (Sachtler et al., in preparation). We suggest an extension of the encoding-error model incorporating this effect. Our data support the view that VR training systems should be designed so that real head movements accompany visual viewpoint rotations. We have also tested subjects on a 2D triangle completion task in the sagittal plane where they make a head rotation in pitch rather than yaw when pointing back to the point of origin.
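The geometry of the completion task just described can be sketched in a few lines. This is a toy illustration, not the experiments' code: the path legs are hypothetical example values, and it computes only the correct homing response against which subjects' errors would be measured.

```python
import numpy as np

# Toy sketch of path-completion geometry: integrate a 2D path of turns
# and translations, then compute the head rotation needed to face the
# point of origin (the response subjects produce). Example path only.

def integrate_path(legs):
    """legs = [(turn_deg, distance), ...]; start at origin facing +y.
    Counterclockwise turns are positive. Returns final position and
    final heading in degrees."""
    pos = np.zeros(2)
    heading = 90.0                       # facing +y
    for turn, dist in legs:
        heading += turn
        rad = np.radians(heading)
        pos += dist * np.array([np.cos(rad), np.sin(rad)])
    return pos, heading

def homing_turn(pos, heading):
    """Smallest rotation (deg) from the current heading to face the origin."""
    bearing = np.degrees(np.arctan2(-pos[1], -pos[0]))
    return (bearing - heading + 180.0) % 360.0 - 180.0

# Hypothetical three-legged path: out 2 units, turn 90 deg left, across
# 1 unit, turn 90 deg left again, back 1 unit.
legs = [(0.0, 2.0), (90.0, 1.0), (90.0, 1.0)]
pos, heading = integrate_path(legs)
turn = homing_turn(pos, heading)
print(f"final position {np.round(pos, 2)}, correct homing turn {turn:.1f} deg")
```

Comparing subjects' executed turns against this correct value, leg by leg and condition by condition, is what exposes the systematic compression described by the encoding-error model.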
Results from these studies indicate that subjects can in fact perform path integration along paths involving rotations about non-vertical axes and that subjects make systematic errors not unlike those found by Klatzky et al. (2) We are extending the World In Miniature navigation tool concept (Pausch et al., 1995) for use in 3D navigation of large virtual spacecraft (Spacecraft in Miniature or SIM). Based on the results from our previous experiments (see above), we eliminate the rotatory component of visual motion whenever the user's viewpoint flies into the model. Instead, the rotations are encoded with proprioception in the physical movement of the SIM. In our experiments to determine whether the use of a SIM reduces the path integration error, subjects perform a 3D path completion task in a virtual starfield similar to that previously used and again point toward the point of origin. (3) We have defined a panoramic viewing system concept (Virtual Video), which could be employed in windowless cockpit (e.g., X-38), teleoperation, RPV, immersive teleconferencing, or onboard training applications. Our approach extends the concept of the desktop panoramic static image viewer (e.g., QuickTime VR; IPIX) by presenting a real-time, dynamic, stereo video image to an HMD wearer. Video images of a real scene are captured and used to texture the interior of a virtual surface surrounding the observer. A graphics accelerator then renders the view of the virtual surface in head coordinates, minimizing the perceptual lag associated with head tracking. We have built a proof-of-concept HMD-based system (Hutchison, 2000).

ACKNOWLEDGEMENTS
Supported by NASA Contract NAG9-1004.

REFERENCES
W.E. Hutchison (2000) The Development of a Hybrid Virtual Reality/Video View-Morphing Display System for Teleoperation and Teleconferencing. Master's Thesis, Program in System Design and Management, Massachusetts Institute of Technology, June.
W.L. Sachtler, C.M. Oman, A.C. Beall, and A. Natapoff (in preparation) Path integration in virtual environments: effects of rotation and course estimates.
R. Pausch, T. Burnette, D. Brockway, and M.E. Weiblen (1995) Navigation and locomotion in Virtual Worlds via Flight into Hand-Held Miniatures. ACM SIGGRAPH '95 Conference Proceedings, Computer Graphics, July.
R.L. Klatzky, A.C. Beall, J.M. Loomis, R.G. Golledge, and J.W. Philbeck (1999) Human navigation ability: Tests of the encoding-error model of path integration. Spatial Cognition and Computation, 1: 31-65.

INDEX TERMS
Orientation; Navigation; Virtual environments; Path integration error; Encoding-error model; Visual viewpoint rotations; Spacecraft in Miniature; Panoramic viewing system; Virtual Video

HAPTIC INTERFACES TO AUGMENT HUMAN MACHINE INTERACTIONS IN SPACE ACTIVITIES


A. Raj1, L. Roetzer1, P. Fatolitis1, R. Cholewiak2, S. Kass1, and A. Rupert3 1Institute for Human and Machine Cognition, University of West Florida, Pensacola, FL 32501, 2Cutaneous Laboratory, Princeton University, 3Neurosciences Laboratory, Johnson Space Center.

INTRODUCTION
When determining orientation in 1-g environments, the human brain utilizes input from a number of sensory channels, predominantly proprioception (which includes vestibular and somatosensory inputs) and vision. In aerospace environments, however, the proprioceptive sense often provides useless or illusory information. As a result, orienting in such environments is currently performed by vision alone. This leads to significant workload levels for individuals flying aircraft, performing extravehicular activities (EVA), or interacting with robotic devices, as all actions, including orientation, must be performed visually. We are developing the Tactile Situation Awareness System (TSAS) as a method to allow an operator to utilize the sense of touch to perform some of the orientation and situation awareness tasks. In this manner, TSAS allows the operator to utilize multiple sensory inputs to reduce overall workload (as measured by performance on multiple tasks) and allows tasks that require the sense of vision (such as reading alphanumeric displays) to be performed more effectively. The sense of touch has several limitations, however; these include limited bandwidth (the peak sensitivity of the Pacinian corpuscles, for instance, is only 250 Hz; other receptors are sensitive to even lower frequencies), limited resolution (e.g., two-point discrimination), and susceptibility to habituation (which occurs centrally as the brain filters out constant signals) and adaptation (which occurs peripherally when the skin itself becomes less sensitive to constant stimuli). We are endeavoring to determine the ideal tactile transducer (tactor) that will minimize these limitations. We have developed appropriate experimental hardware and have conducted three pilot studies to evaluate the sensory thresholds of various tactors across a number of frequencies.

CURRENT STATUS OF RESEARCH

Methods


In all three tests, male and female subjects aged 18 to 56 were tested after giving informed consent (eleven subjects in the first study: 3 female, 8 male; ten in the second: 1 female, 9 male; ten in the third: 1 female, 9 male). Baseline sensory threshold levels were obtained using a Brüel & Kjær (Denmark) vibration transducer in contact (200 g constant force) with each subject's left thenar eminence. Digitally generated white noise was low-pass filtered to 180 Hz (Optimus 12-2112, Ft. Worth, TX) and then attenuated until the sensory threshold was reported by the subject. Repeated measures were used, with the signal attenuation increased or decreased to determine the average sensory threshold level (SL). Two tactors in each study were presented in balanced order for comparison to the baseline SL. The tactors were activated with waveforms according to manufacturer specifications at 20, 40, 60, 80, and 120 Hz in the first two studies; measurements were also taken at 90, 100, and 110 Hz in the third. In the first study, two different pneumatically activated tactors were evaluated, one with a hard nylon casing and one with a soft vinyl casing (models P1H and P1S, respectively, Steadfast Technologies, Tampa, FL). Sinusoidal waves at 80% maximum amplitude by specification (8 V DC) were presented via either the P1H or P1S to each subject for 200 ms, alternating with 200 ms bursts of white noise on the Brüel & Kjær transducer. The tactors under test were placed on the torso over the costovertebral angle on either the right or left side using an elastic belt. The white noise was attenuated (again in decreasing or increasing fashion) until a sensory match was reported by the subject. The attenuation match determined for each repeated measure on each tactor at each frequency was corrected for attenuator error and then compared to the corrected SL.
A similar procedure was used in the second study, where the P1H was tested against a solenoid-type electromechanical tactor (TDI-15, Trans Dimension, Inc., Irvine, CA). The TDI-15 was activated with a unipolar square wave with a 50% duty cycle (according to the manufacturer's specification). Since the perceived strength of the TDI-15 at the manufacturer-specified signal strength exceeded the maximum strength of the Brüel & Kjær full signal, the TDI-15 was run at 40% of the specification (+2.8 V DC). The P1H was run at 100% of the specification (10 V DC) in this comparison. The third study compared a button-type electromechanical tactor, the C1 (Engineering Acoustics, Inc., Winter Park, FL), run at 100% of the manufacturer's specification (6 V DC sine wave), with the P1H, also run at 100% of specification. The moving contactor size was 7 mm for the Brüel & Kjær transducer, 15 mm for the P1H, 13 mm for the P1S, 1 mm for the TDI-15, and 6 mm for the C1.

Results
No significant difference was seen between the P1H and P1S tactors at any of the tested frequencies. Peak sensitivity above SL was at 80 Hz for the P1S and 90 Hz for the P1H. The TDI-15 and P1H comparison showed no significant differences at 40, 60, or 120 Hz; however, the TDI-15 was significantly stronger at 20 Hz (p<.01) and the P1H was significantly stronger at 80 Hz (p<.01). The C1 was marginally stronger than the P1H at 110 Hz (p<.05), significantly stronger at 120 Hz (p<.01), and otherwise equivalent.

Conclusions
The two pneumatically actuated tactors studied can be used interchangeably in applications utilizing the tested frequencies on the torso. In the second test, the P1H showed a higher loudness at 80 Hz; however, it was running at 100% capability while the TDI-15 was running at only 40% capability. Since the TDI-15 showed no significant difference from the P1H across most frequencies (and superior performance at 20 Hz), it would be the better choice for applications that require a range of intensities across low frequencies. The C1 tactor showed increasing loudness with frequency; the manufacturer's specifications claim peak loudness at 250 Hz (the resonant frequency for this design), so it would be a good candidate for flexibility in the higher frequency ranges (up to 300 Hz).

FUTURE PLANS
The next series of experiments will determine the response characteristics of an additional pneumatic tactor as well as a hydraulic tactor. These new tactors produce a signal at up to 300 Hz and will be tested in comparison with the C1 tactor. In addition, a separate study will be performed to isolate the range of peak response for each tactor. The final series of experiments will utilize the tactor determined to be most versatile in an array on the torso to evaluate countermeasures for adaptation and habituation to continuous tactile stimulation.

INDEX TERMS: Tactile Interface, Situation Awareness, Multimodal Display, Spatial Orientation, Tactile Transducer, Tactile Adaptation, Tactile Habituation.

KINESTHETIC COMPENSATION FOR SENSORIMOTOR REARRANGEMENTS


Stephen R. Ellis, Bernard D. Adelstein & Robert B. Welch, NASA Ames Research Center

A newly proposed technique to allow users to control a computer screen cursor in the presence of large rotational sensorimotor rearrangements is being evaluated. We call this technique "kinesthetic cueing." Such rearrangements commonly occur when users teleoperate manipulators while remote cameras provide views of the worksite that are rotated with respect to the direction of command action. If the direction and position of the camera were sensed, consistent display and control actions could be achieved computationally. This remapping is, however, not always possible, and users of teleoperation systems often must train to work in rotated coordinate systems. The development of portable, body-worn information appliances such as head-mounted see-through and partially occlusive displays has introduced a new environment in which rotated coordinate systems can interfere with system usability. These portable systems often have graphical user interfaces for which the user inputs spatially continuous information with a cursor control device. Since these appliances are portable, the cursor control device must be hand-held or body-mounted. Our main question is whether kinesthetic information from holding or body-mounting the cursor control device assists users who operate under conditions of rotational sensorimotor rearrangement. Kinesthetic cueing will be provided to our experimental users by having them contact the cursor control device with the hand not controlling the cursor. This hand, usually the nondominant left hand, will be rotated to copy the rotation, i.e., the misalignment angle, with which the operator must deal. Preserving its rotated position, it is then placed in contact with the cursor control device. Informal observations suggest that the proposed technique works. Experiment 1 is intended to formally test the idea and determine whether the cueing hand needs to be the nondominant hand.
If this technique can be successfully developed, it will solve two problems associated with training users to deal with sensorimotor rearrangements. First, it could reduce or eliminate the need for adaptation training. Second, it could remove concern about the perceptual aftereffects of adaptation. To examine this second possibility, Experiment 2 is specifically intended to measure the size of possible perceptual aftereffects experienced by users whose training to operate under conditions of sensorimotor rearrangement has been assisted by kinesthetic cueing. Finally, Experiment 3 will investigate the benefits of kinesthetic cueing with a variety of cursor input devices besides the graphics tablet used in Experiment 1. These will include trackpads, isometric joysticks, mice, and tablet devices.
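When the camera pose is known, the computational remapping mentioned above is simply a plane rotation of the commanded cursor displacement into display coordinates. A minimal sketch (the function name and angle convention are ours):

```python
import math

def remap_displacement(dx, dy, misalignment_deg):
    """Rotate a commanded cursor displacement (dx, dy) into display
    coordinates when the control frame is rotated counterclockwise by
    misalignment_deg relative to the display frame."""
    a = math.radians(misalignment_deg)
    return (dx * math.cos(a) - dy * math.sin(a),
            dx * math.sin(a) + dy * math.cos(a))
```

With a 90-degree misalignment, a rightward hand motion (1, 0) becomes an upward screen motion (0, 1); kinesthetic cueing is aimed at exactly the cases where this transform cannot be applied in software and the operator must internalize it instead.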

REDUCED UNCERTAINTY ACCOUNTS FOR THE ENHANCED SENSITIVITY TO MOTION TRAJECTORIES


P. Verghese, S. P. McKee, and D. Vreven Smith-Kettlewell Eye Research Institute, San Francisco, CA 94115

INTRODUCTION
Humans can detect a single dot moving along a straight trajectory among dense noise dots in Brownian motion. This sensitivity far exceeds the prediction based on combining information from local independent motion units (Verghese et al., 1999, Vision Research). It appears that the first part of the trajectory alerts the visual system to any consistent motion in the vicinity. Although a trajectory is most detectable when it continues along the same path, the sensitivity to any motion in close spatiotemporal proximity to the first part is enhanced (McKee & Verghese, ECVP, 1998). Our goal in this study is to determine the basis for the enhanced sensitivity to motion trajectories.

CURRENT STATUS OF RESEARCH

Methods


Four observers participated in this study. We compared sensitivity to contrast increments at the beginning (first 70 ms) or the end (last 70 ms) of a 200 ms trajectory. Proportion correct was measured as a function of contrast increment. The data were fit with a Weibull function with two parameters: contrast threshold and slope. Contrast threshold is the contrast required for 82% correct performance; the slope is the log-log slope of the Weibull fit.
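The 82%-correct convention falls directly out of the standard two-parameter Weibull form for a two-alternative forced-choice task; a sketch (parameter names are ours, and the numeric values below are illustrative, not the observers' fitted values):

```python
import math

def weibull_2afc(contrast, threshold, slope):
    """Two-parameter Weibull psychometric function for 2AFC detection:
    chance (0.5) at zero contrast, rising toward 1. At
    contrast == threshold the predicted proportion correct is
    1 - 0.5/e, about 0.816, i.e. the ~82%-correct point."""
    return 1.0 - 0.5 * math.exp(-((contrast / threshold) ** slope))
```

A larger `slope` steepens the transition from chance to ceiling on log-log axes, which is the quantity compared between the beginning and end of the trajectory in the Results.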

Results
When the trajectory was presented in isolation, thresholds at the beginning and at the end of the trajectory were almost identical. When the trajectory location was randomized by 2°, thresholds for the beginning of the trajectory were significantly higher. Furthermore, the slope of the psychometric function for the beginning of the trajectory was much steeper than that for the end. This is the classic uncertainty effect: observers appear much more uncertain about the beginning of the trajectory than about the end. The uncertainty effect can also be mimicked by adding noise dots to a trajectory in a known location.
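The classic uncertainty effect can be checked against a max-rule signal detection observer: when more candidate channels must be monitored (because location is uncertain), accuracy at a fixed signal strength drops and the psychometric function steepens. A numerical sketch of that model, not a fit to the data reported here; the channel counts and d' values are illustrative:

```python
import math

def phi(x):
    # Standard normal cumulative distribution function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_correct_max_rule(d_prime, m, lo=-8.0, hi=8.0, n=4000):
    """2AFC max-rule observer monitoring m channels, only one of which
    carries the signal: P(max of signal interval > max of noise interval),
    integrated numerically over the signal-interval maximum."""
    step = (hi - lo) / n
    p = 0.0
    prev_fs = phi(lo - d_prime) * phi(lo) ** (m - 1)   # CDF of signal-interval max
    for i in range(1, n + 1):
        x = lo + i * step
        fs = phi(x - d_prime) * phi(x) ** (m - 1)
        fn = phi(x - 0.5 * step) ** m                  # CDF of noise-interval max (midpoint)
        p += (fs - prev_fs) * fn
        prev_fs = fs
    return p
```

Under this model, `p_correct_max_rule(d, m)` falls as `m` grows at fixed `d`, so restoring performance requires a higher signal level: the threshold elevation attributed above to uncertainty about the trajectory's beginning.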

CONCLUSION
These results show that the first part of a trajectory reduces uncertainty about the spatiotemporal location of subsequent segments, leading to enhanced detectability for predictable motion trajectories.

FUTURE PLANS
We will further investigate how prediction along the motion path is achieved, and how specific it is with respect to space, time, direction and speed. We will also determine whether the ability to predict future motion paths depends on prior experience with smooth motion. We will compare the data with the simulations of motion models that include predictive filters with varying degrees of prior knowledge.

INDEX TERMS
Uncertainty, motion trajectory, prediction, prior knowledge, contrast threshold.

COMPUTER VISION-BASED QUANTITATIVE ASSESSMENT OF ASTRONAUT ACTIVITIES


D. Metaxas(1) and D. Newman(2) (1) Dept of CIS, Univ. of Pennsylvania, Philadelphia, PA 19104-6389 (2) Department of Aeronautics and Astronautics, MIT, 33-119, 77 Massachusetts Avenue, Cambridge, MA 02139.

INTRODUCTION
Our goal is to produce computerized human performance measurement, evaluation, and modeling techniques for intravehicular (IVA) and extravehicular (EVA) activities. Performance modeling is needed to assess task feasibility for routine, novel, or unexpected tasks in space. Crewmember tasks include EVA suited as well as unsuited activities. The target is to make tangible progress toward non-invasive video motion capture of astronaut activity for real-time task analysis, monitoring, and safety assessment based on computer vision methods, optimal control, and customized wearable hardware. By building computer methods to capture, estimate, analyze, and optimize human motions and interactions in space, we will attain our proposed objectives. The emphasis of the proposed research activities will be on the development of real-time algorithms.

CURRENT STATUS OF RESEARCH

Methods


We have developed a carefully planned collaborative research program between MIT, UPenn, and NASA to extend our existing technology. In particular, we have developed 1) performance models based on both empirical (NASA studies) and computer simulation data, 2) initial real-time computer vision methods for the real-time monitoring and quantitative assessment of IVA and EVA tasks, 3) dynamically accurate simulations of motions in microgravity, 4) efficient optimal control methods coupled with the computer vision methods for generating improved minimum-torque IVA and EVA tasks, and 5) miniaturized hardware so that the real-time algorithms can be used in future spaceflight missions. Our methods are based on the use of recursive dynamics and optimal control methods for modeling optimal motions in IVA and EVA tasks. Computer vision methods based on single- or two-camera input have been developed and used for tracking certain points on a human's body. Computer vision-based motion estimation will be used to establish analytic and predictive measures for IVA and EVA space human factors. Such simulations will provide direct inputs to force feedback devices for VR-based training.

Results
We will demonstrate our results by showing a video of optimal control methods for modeling and assessing IVA and EVA astronaut activities. In addition, we will show real-time processing of video based on our computer vision algorithms. We will also demonstrate the use of our software on the miniaturized wearable computer used at MIT.

CONCLUSION
We have developed a new class of optimal control and computer vision methods for the performance assessment of astronauts in IVA and EVA. Our computer vision algorithms run in close to real time and can be used in real time monitoring of human activity.

FUTURE PLANS
We plan to further develop our methods and their use in the assessment of astronaut performance based on computer vision and optimal control methods. In particular we will use strength data and video footage from our consultant Dr. Jim Maida to further tune our algorithms to NASA IVA and EVA task performance assessment.

INDEX TERMS
IVA and EVA Space Human Factors, Computer Vision, Simulation, Quantitative, Performance Assessment, Wearable Computers, VR-based training

TECHNOLOGY DEVELOPMENT FOR THE MICROGRAVITY INVESTIGATION OF CREW REACTIONS IN 0-GRAVITY (MICR0-G)
Sylvie Lody, Dava Newman, Guido Baroni, Giancarlo Ferrigno, and Antonio Pedotti

External disturbances to a spacecraft in orbit, such as aerodynamic drag or thruster activity, can be described in a simple analytical form and estimated well from vehicle and environmental parameters. Similarly, disturbances inside the spacecraft due to the operation of mechanical equipment such as pumps, fans, and valves can be foreseen and computed. Predicting astronaut-induced disturbances represents a far more challenging task due to their inherent randomness. An experiment on board the Skylab station in 1973 provided the first data collected in space on astronaut-induced disturbances and verified that astronauts can produce significant disturbance forces and moments if they so desire. During STS-32 in January 1990, acceleration measurements of the Orbiter quiescent periods in the middeck and payload bay were made. Additional experiments on STS-40 measured the disturbances caused by rotating chair operations and crew sleep periods. The DLS (Dynamic Load Sensors) experiment on board the Space Shuttle in March 1994, and the follow-on EDLS (Enhanced Dynamic Load Sensors) experiment on board the Mir space station from May 1996 to May 1997, were the first efforts to describe, quantify, and predict astronauts' motions during normal on-orbit activities. The ultimate goal was to define how to obtain a stable, very low-level microgravity environment, necessary to assure that life science, material science, and astronomical investigations on board ISS yield the most accurate data. As the ISS is being developed, NASA has defined a need for assessing human performance and astronaut-induced disturbances in microgravity.
By collecting and evaluating the kinematic and kinetic data of astronauts in space, it becomes possible to characterize human motor strategies and postural behavior in weightlessness, improve the design of orbital modules, help maintain a quiescent microgravity environment for acceleration-sensitive science experiments, and optimize human operative capabilities during long-duration space missions. Consequently, there is a need for precise measurement of the forces and moments exerted by the astronauts on the space station and for quantification of their postures and movements. An integrated system of advanced kinematic and kinetic instruments for the ISS is being developed jointly by the Massachusetts Institute of Technology (MIT), NASA, Politecnico di Milano University, and the Italian Space Agency (Agenzia Spaziale Italiana, ASI) in a project known as Microgravity Investigation and Crew Reactions in 0-Gravity (MICR0-G). Astronaut-induced forces and moments will be measured by an advanced version of the Dynamic Load Sensors that have flown on the Space Shuttle during Mission STS-62 and on the Russian orbital complex Mir. Crew motions will be captured by the ELITE-S2 system, an enhanced version of the real-time optoelectronic motion analyzers ELITE-S and Kinelite, flown respectively on Mir as part of the EuroMir '95 Mission and on Neurolab. ELITE-S2 is the human motion analysis system proposed to the European Space Agency (ESA) for the Experimental Physiology Module by ASI in collaboration with the Department of Bioengineering at Politecnico di Milano University, with the contribution of the French Space Agency (Centre National d'Etudes Spatiales, CNES). The design of a ground-based prototype for the MICR0-G project is discussed. The new generation of sensors will be operational in January 2001, with initial tests on the KC-135 in January. Future plans include completion of space qualification for the ground-based system.
SPACE SUIT MOBILITY WITH APPLICATIONS TO EXTRAVEHICULAR ACTIVITY OPERATIONS

Patricia Schmidt, Dava Newman, and Ed Hodgson

Computer simulation of extravehicular activity (EVA) is increasingly being used in planning and training for EVA. The space suit model is an important, but often overlooked, component of an EVA simulation. Because of the inherent difficulties in collecting angle and torque data for space suit joints in realistic conditions, little data exists on the torques that a space suit's wearer must provide in order to move in the suit. A joint angle and torque database was compiled on the Extravehicular Mobility Unit (EMU) with a novel measurement technique that used both human test subjects and an instrumented robot. Based on the data collected in the experiment, a mathematical model that predicts EMU joint torques from joint angular positions was developed. The mathematical model was then applied to EVA operations by mapping out the reach and work envelopes for the EMU, based on the EMU angle-torque models and experimental data.

PREDICTING STRENGTH AND FATIGUE FOR SUITED AND UNSUITED CONDITIONS FROM EMPIRICAL DATA

J. Maida, M.S.1, L. J. Gonzalez, Ph.D.2, S. Rajulu, Ph.D.3, E. Miles, B.S.4 1NASA Johnson Space Center, SF5, Houston, TX 77058, 2Spacehab, 3NSBRI, 4Lockheed Martin

INTRODUCTION
Longer and more labor-intensive extravehicular activities (EVA) are required for construction and maintenance of the International Space Station (ISS). Issues pertaining to human performance while wearing a space suit (EMU) for prolonged periods have therefore become more important. This project was conducted to investigate how a pressurized Extravehicular Mobility Unit (EMU) affects human upper body joint strength and fatigue, and how to predict these effects from computer models based on the data collected.

CURRENT STATUS OF RESEARCH
The suited and unsuited data collection phase has been completed. Three female and three male subjects, experienced in the use of the EMU, were measured. Testing was conducted in the Precision Air Bearing Facility (PABF) at Johnson Space Center, allowing for rapid configuration changes involving heavy test equipment, such as the support system required for suited subjects. The collected data were processed into tables so that a particular direction of motion over time could be investigated. This permitted the angle, torque, and time data to be better visualized and allowed for better application of curve-fitting techniques. In addition to comparing torque and torque decay for suited and unsuited conditions, total work was computed for each isolated joint case.

Methods
Using a dynamometer, dynamic torque was measured while subjects worked at 100% and 80% of their maximum voluntary torque (MVT). Dynamic torque was measured until the subjects reached 50% of MVT for three repetitions. Five isolated isokinetic joint motions were measured: (1) shoulder flexion/extension, (2) shoulder abduction/adduction, (3) shoulder internal/external rotation, (4) elbow flexion/extension, and (5) wrist flexion/extension. All of the testing was performed on the subject's right side, with the subject secured in an upright (standing) posture. It was found that the experimentally measured torque decay could be predicted by a logarithmic equation using the average torque per repetition over time for all subjects. Similarly, for all subjects, a torque surface was developed from the combination of two normalized curves: the curve representing the torque decay as a function of time, and the curve representing the torque as a function of joint angle for the first repetition of the fatigue test. The torque surface can be used to predict a specific torque at a specific angle and time.

Results
The average error in the torque decay predictions was found to be 9.2% and 9.5% for the unsuited and suited subjects, respectively, working at 100% MVT, and 11.3% and 9.4%, respectively, working at 80% MVT. The average error in the torque surface predictions was found to be 12.2% and 16.1% for the unsuited and suited subjects, respectively, working at 100% MVT, and 13.5% and 18.7%, respectively, working at 80% MVT.

Conclusions
A logarithmic function can be used to predict torque decay with a reasonable degree of accuracy. A torque surface representation can be used to predict torque as a function of angle and time with a reasonable degree of accuracy.

FUTURE PLANS
The collection of EMU-suited data is difficult, and it is anticipated that the number of subjects will always remain small; therefore, extrapolation from the unsuited cases to the suited cases is needed. Strength of lower body joints and of upper body isolated joints at specific postures needs to be measured at different joint velocities. Finally, predictions of strength and fatigue need to be further integrated into a human computer model.

INDEX TERMS
Extravehicular Activities (EVA), Extravehicular Mobility Unit (EMU), maximum voluntary torque (MVT)
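A logarithmic torque-decay prediction of the kind described can be sketched as a closed-form least-squares fit. The model form a − b·ln(t+1) and the sample values below are our assumptions for illustration, not the study's fitted coefficients:

```python
import math

def torque_decay(t, a, b):
    """Logarithmic decay model: predicted torque (as a fraction of the
    initial MVT) after t seconds of repeated isokinetic effort."""
    return a - b * math.log(t + 1.0)

def fit_log_decay(times, torques):
    """Least-squares fit of torque = a - b*ln(t+1). Closed form, since
    the model is linear in a and b after the log transform."""
    xs = [math.log(t + 1.0) for t in times]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(torques) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, torques))
    b = -sxy / sxx          # slope of torque on ln(t+1) is -b
    a = my + b * mx
    return a, b
```

A torque surface in the sense described is then the product of this normalized time curve with a normalized torque-versus-angle curve, giving a predicted torque at any (angle, time) pair.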

A SHARED AUGMENTED-REALITY SYSTEM FOR COMMUNICATION ACROSS ENVIRONMENTS


W. P. Staderman1 1Virginia Tech, 208 Turner Street, Suite 101, Blacksburg, VA 24060

INTRODUCTION
Development of a system of shared augmented reality (SAR) that links operators, who may not be experts in a particular domain, with experts across environments is proposed. Such a system will allow operators to communicate a particular environment itself, rather than mere audio or video of that environment. This system can be exploited in a variety of situations where experts' access to a particular environment is restricted (e.g., extravehicular activities, mining, medicine). Similar systems are being developed that automate the augmentation; SAR, in contrast, maintains an active role for a human expert in providing the augmentation.

PROPOSED SYSTEM: SHARED AUGMENTED REALITY (SAR)


Augmented Reality (AR) typically uses head-mounted displays (HMDs) and high-precision head tracking to accurately overlay graphics and spatially locate sounds of simulated entities in the real world.

Unlike virtual reality technologies, augmented reality maintains the situational awareness of the operator by augmenting the real world, rather than substituting it with a virtual world. Most current approaches to such augmentation strive to automate the process, and the majority of research in the AR domain has concentrated on resolving the technical issues of combining real and computer-generated environments (McGarrity and Tuceryan, 1999). The registration errors that this research focuses on are associated with serious operator problems, such as the induction of motion sickness or simulator sickness caused by conflicting visual and vestibular cues. The popular approach to resolving this registration problem focuses on designing and building increasingly expensive hardware (e.g., trackers) and software to improve the synchronization of real-world and computer-generated information. This synchronization is especially difficult when the real-world image shifts with the movements of the operator. The proposed system instead allocates the synchronizing function to the operator, who is expected to abstract from an augmented scene to the real world. The proposed system offers several benefits compared to traditional systems. Since expensive hardware and software are not the driving force behind system success, SAR is much cheaper. Further, since software development is a slow and environmentally specific process, SAR is quicker to implement in a given context and more flexible than traditional approaches to augmented reality.
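Registration, the hard problem that SAR sidesteps, reduces in the simplest case to projecting a world feature through the tracked head pose so the overlay lands on it. A pinhole, yaw-only sketch; all names and the focal-length/center values are illustrative assumptions (real AR systems track full 6-DOF pose):

```python
import math

def world_to_display(point, head_pos, head_yaw_deg, f=500.0, cx=320.0, cy=240.0):
    """Project a world point into head-tracked display pixels, assuming
    yaw-only head rotation and a pinhole display model. Any tracker error
    in head_pos or head_yaw_deg shows up directly as overlay misregistration."""
    a = math.radians(head_yaw_deg)
    x = point[0] - head_pos[0]
    y = point[1] - head_pos[1]
    z = point[2] - head_pos[2]
    xh = x * math.cos(a) - z * math.sin(a)   # component along head's right axis
    zh = x * math.sin(a) + z * math.cos(a)   # component along head's forward axis
    if zh <= 0.0:
        return None                          # point is behind the viewer
    return (cx + f * xh / zh, cy - f * y / zh)
```

Because the overlay must be recomputed for every head movement, small pose errors produce visible swim; allocating the final alignment to the human operator, as SAR proposes, removes that dependency on tracker precision.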

FUTURE PLANS
The proposed system relies on a few key presumptions that call for empirical study. First, an augmentation system as described must be constructed. Next, the ability of operators to abstract from one image to another must be studied. Are there particular cues? What moderates this process?

INDEX TERMS
augmented reality, abstraction, information display, communication

SUPPORT OF CREW PROBLEM-SOLVING AND PERFORMANCE WITH AUGMENTED REALITY


A. E. Majoros1 and U. Neumann2
1The Boeing Company, 2University of Southern California

INTRODUCTION Cognitive models suggest that scenes or viewpoints merging real and synthetic features (augmented reality) will complement human information processing by controlling attention, supporting short- and long-term memory, and aiding information integration. Augmented reality (AR) may therefore present an effective solution when long-duration space flight demands problem solving, inventiveness, and recall (e.g., for the resolution of in-flight anomalies). CURRENT STATUS OF RESEARCH Methods Augmented realities involve an end-user interface and a ground authoring function. The interface will incorporate features that direct and control attention as suggested by cognitive models, and the ground authoring function will be designed for rapid production and compatibility with mission control communications standards. Our initial approach is the development of a sample authoring function that builds sample end-user augmented realities. Results Preliminary authoring design accomplishments include input of substrate video and very rapid annotation of video through exploitation of a unique feature tracking capability. On the interface side, design decisions are based on awareness of cognitive capabilities (e.g., easy interaction with annotation allows for rehearsal; stable graphic links between annotations and world features support the development of associations and encoding of content) and interface design principles (e.g., appearance and interpretability of content). The sample program will allow demonstration to, and design input from, astronauts, mission control and mission support personnel and other subject matter experts. To date, an iterative design and development process has advanced the sample authoring/interface system, including user mode selection, comprehensive interface logic, and interface appearance upgrades. 
Concurrently, an analysis of mission support functions is relating the AR system to existing resources for in-flight resolution of anomalies.

Conclusion
AR can complement human information processing in just-in-time training and on-orbit maintenance procedures. A systems engineering approach appears likely to produce a practical tool for authoring and using augmented reality in human space flight.

FUTURE PLANS
Design and demonstrations will enable subject matter experts' input on the description of a system model that encompasses communications and authoring/end use of augmented realities.

INDEX TERMS

anomaly, augmented reality, cognitive psychology, communication, human information processing, human computer interface, just-in-time training, on-orbit maintenance, video

AUGMENTED REALITY FOR TELE-OPERATIONS


L. Zamorano, M.D.*, A. Pandya, M.S.*, M. Siadat, M.S.*, J. Gong, Ph.D.*, Q. Li, M.D., Ph.D.*, J. Maida, M.S.#, I. Kakadiaris, Ph.D.+
*Wayne State University, Neurosurgery Department, 4160 John R, Suite 930, Detroit, MI 48201; Phone: (313) 966-0364; Fax: (313) 966-0368; Email: apandya@neurosurgery.wayne.edu
#NASA/Johnson Space Center, Houston, TX
+University of Houston, Houston, TX

INTRODUCTION
Human task performance in remote operations often depends entirely on indirect visual feedback. The overall goal of this proposal is the human factors analysis and improvement of the 3-dimensional, image-guided visualization aspects of telepresence. This technical proposal aims to develop and evaluate more advanced image data display capability, and to advance cost-effective technologies that support seamless integration of the image data, the operator, and the system elements. In this paper, we show the relationship between a medical (neurosurgery) telepresence system and Shuttle Remote Manipulator operations. We present our progress in the technology of Augmented Reality (fusing real images with 3D graphics images) and show how it can potentially assist both of these applications.

CURRENT STATUS OF RESEARCH


An Augmented Reality (AR) system generates a composite view for the user: a combination of the real scene viewed by the user and a virtual scene generated by the computer (a 3D model) that augments the scene with additional information. A relatively new field, computer-controlled image-guided stereotactic neurosurgery, blends computer-based medical imaging data with real-time instrument position data capture to assist surgeons in localizing and removing lesions. Surgeons also use robots (we are the first and only site in the US to use robots for neurosurgery) that can be guided to remote locations of the brain very precisely. Throughout every operation, a neurosurgeon must maintain a precise sense of complex three-dimensional relationships. Astronauts performing remote functions, such as operating the Remote Manipulator System (RMS), must likewise maintain a mental 3-dimensional, real-time environment map for successful operations. The science of presenting and displaying complex 3-dimensional images in an operationally meaningful way is the basis of this human factors research proposal (see Figures 1 and 2). Our goal is to have a tracked camera system and to superimpose the camera's view with 3D graphical structures of interest related to the application: the brain correctly registered with the patient, or the Space Shuttle graphics models registered with live video from the various RMS cameras. In this paper we discuss one of the most important problems associated with AR technology: camera (viewpoint) tracking methods.

Methods
One of the most important issues for a very accurate AR application is the method for tracking the various elements of the environment, such as the video camera. For camera tracking we have compared three different methods: 1) infrared camera tracking using a FlashPoint™ 3000 camera; 2) a camera mounted on the precise Neuromate neurosurgery robot arm; and 3) single-camera calibration using pattern recognition techniques. To implement the single-camera/pattern-recognition method, we used the ARToolKit. Presented here is a comparison of these methods of providing an AR environment.

Results/Conclusions
Each of the systems outlined for camera tracking has limitations and strengths. Line-of-sight and lighting condition issues exist for both the pattern recognition and infrared tracking: the virtual objects appear only when the tracking marks are in view and the lighting conditions are properly adjusted. A robotics-based camera overcomes both of these problems. There are also range issues. For the pattern recognition system, the larger the physical pattern, the farther away it can be detected, and so the greater the tracking volume. The infrared camera's distance to the phantom limits the IR tracking (typically the range is 1 meter). The robotic solution depends on the robot kinematics and the range of motion of each of the joints. For the restricted volume needed for neurosurgery applications, all the mentioned methods could potentially be used. For space applications, a pattern recognition method combined with forward kinematics available from the robotic system may be needed.
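The pattern-size/range trade-off noted above can be illustrated with a simple pinhole-camera sketch. This is an illustrative assumption of ours, not the ARToolKit's actual detection criterion: we assume detection fails once the marker's projected footprint drops below some minimum pixel count.

```python
def max_detection_distance(pattern_size_m: float,
                           focal_length_px: float,
                           min_footprint_px: float) -> float:
    """Pinhole-camera estimate of marker tracking range.

    A square marker of physical size s projects to roughly
    f * s / d pixels at distance d, so the farthest detectable
    distance is f * s / p_min. Doubling the physical pattern
    size therefore doubles the usable tracking range.
    """
    return focal_length_px * pattern_size_m / min_footprint_px
```

Under these assumptions, a 20 cm marker viewed by a camera with an 800-pixel focal length and a 20-pixel minimum footprint would be trackable out to roughly 8 m.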

INDEX TERMS: Human Factors, Augmented Reality, Computer-Assisted Surgery, Visualization, Telesurgery,
Telemedicine, Telepresence, Remote Operations, Robotics


Figure 1: A neurosurgeon guiding a tracked endoscopic probe deep into the patient's brain uses a monitor to view the location of the probe and the endoscopic view.

Figure 2: An astronaut controlling the Shuttle RMS, relying on camera and out-the-window views of the remote site.

TELEOPERATION OF LIFE-SCIENCE EXPERIMENTS WITH TELECOMMUNICATION TIME DELAY


V. Shastri, D. Nitzan, J. DeCurtins, Y. Gorfu, P. Garcia, L. Mortensen, and L. Hettinger SRI International, 333 Ravenswood Ave, Menlo Park, California 94025

INTRODUCTION
Life-science experiments, to be carried out by astronauts, are scheduled for the upcoming International Space Station. The cost and time entailed in astronaut training and experiment execution could be reduced considerably if scientists on the ground were able to conduct these experiments directly by teleoperating robotic arms in the space station. However, a telecommunication time delay (up to 8 seconds) renders such teleoperation impractical. The objective of this research is to minimize the adverse effect of the time delay so that scientists can perform their experiments efficiently from the ground. We are developing a teleoperation system using supervisory control: each task is divided into subtasks, and each subtask is commanded by the teleoperator and, after a single time delay, is executed autonomously by a remote sensor-based robot. Sensory feedback from the remote site allows the teleoperator either to specify corrections to any errors or to proceed to the next subtask.

CURRENT STATUS OF RESEARCH

Methods


We are utilizing SRI's existing telesurgical system, which includes left and right pairs of identical 6 degree-of-freedom (dof) master/slave arms (4 arms altogether). We have modified this system so that supervisory control is applicable to it: a master arm is used as a local arm for generating subtask commands (including joint trajectories), and a slave arm, equipped with sensors, is used as a computer-controlled remote robot for executing each subtask. To generate subtask commands, the teleoperator moves the local arm, as required by the experiment, while watching a stereo image of a simulated remote site. The subtask commands are then transmitted to the remote site and, upon arrival, are executed by the remote robot. Using video and force feedback, the teleoperator checks the resulting state of the remote site and then proceeds with the next commands, either for error correction (of the current subtask) or for the next subtask.

Results
Task Selection: (1) We visited NASA ARC and learned about life-science experiments to be performed in a glove box in the upcoming International Space Station. (2) We analyzed these tasks and selected two for the development of our system: Wet Swab Sampling and Bee-Stick Application to Flower Pollination.

Development of a Supervisory-Control System: (1) We have separated the local and remote 6-dof master/slave arms in SRI's telesurgical system so that the local arm can be used as an input device for generating arm motion, and the remote robot can be independently controlled by a remote computer as an output device for subtask execution. (2) We have implemented a command-and-control protocol for communication between the local arm and the remote robot. (3) We have modeled the local arm so that hands-on manipulation of this arm is simulated graphically in 3D in a virtual glove-box scene; as a result of this manipulation, joint-trajectory data are filtered, recorded, and, upon teleoperator command, sent to the remote robot for execution. (4) We have implemented time-delayed execution of a sequence of poses sent to the remote robot; intermediate poses between neighboring poses are interpolated to maintain smoothness of motion and user-specified velocity. (5) We have implemented a 3D stereo image of a virtual glove-box scene, including the local arm, using shuttered LCD goggles.
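The pose interpolation in item (4) might be sketched as follows. This is a hypothetical simplification in joint space: the function name is ours, and a per-step motion bound stands in for the user-specified velocity.

```python
import math

def interpolate_poses(start, end, max_step):
    """Generate intermediate joint-space poses between two commanded
    poses so that no joint moves more than max_step per pose, keeping
    the remote robot's motion smooth at a bounded (user-set) rate."""
    deltas = [e - s for s, e in zip(start, end)]
    n_steps = max(1, math.ceil(max(abs(d) for d in deltas) / max_step))
    return [[s + d * k / n_steps for s, d in zip(start, deltas)]
            for k in range(1, n_steps + 1)]
```

The remote computer would execute such a sequence after the single communication delay, with sensor-based checks possible at each intermediate pose.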

Issues
While several issues were foreseen before the development of the supervisory-control system, others have become apparent during its development. We are currently encountering the following issues: (1) Simulation of the Remote Site: accuracy; range of movable viewpoint; force feedback to a moving local arm when a simulated arm contacts a simulated surface; object momentum, collisions, and stiffness in zero gravity. (2) Command and Control: usefulness of voice commands vs. keyboard commands; sensor-based control and manipulation at the remote site. (3) Error Correction: remote-site camera poses and control for best assessment of subtask execution; representation of force feedback to the teleoperator.

Conclusion
We have been developing a basic robotic system for time-delayed teleoperation of life-science experiments in the space station. While more remains to be developed, we will soon be ready to begin such experiments as we focus on the human factors associated with them.

FUTURE PLANS
We will complete the development of the supervisory-control system, address the above issues, employ subjects in the performance of the selected life-science experiments, and analyze the human factors associated with that performance.

INDEX TERMS
Life-science experiments, telepresence, human factors, telecommunication time delay, remote manipulation, teleoperation, time-delayed teleoperation, virtual reality, graphical simulation.

HUMAN PERFORMANCE DURING SIMULATED SPACE OPERATIONS UNDER VARIED LEVELS OF SYSTEM AUTONOMY
Bernd Lorenz, Francesco Di Nocera, and Raja Parasuraman
Cognitive Science Laboratory, The Catholic University of America, Washington, DC 20064

INTRODUCTION
To accommodate future manned long-duration space missions (ISS, Lunar outpost, or Mars), a variety of systems will be developed with fully autonomous control capability. These include rovers and other vehicles, life-support systems, and propellant production plants. Such autonomous systems are expected to operate efficiently in response to changing situations. It is widely acknowledged, however, that these systems should be designed to allow a shift from fully autonomous operation to some sort of team-shared, tightly coordinated control between the artificial and the human agent (Malin, 2000). According to the concept of adjustable autonomy (Dorais et al., 1998), human intervention at various levels should be enabled, ranging from low-level manual actions through mid-level procedure execution to high-level planning activities (Malin et al., 2000). Following a model proposed by Parasuraman et al. (2000), automated systems differ not only as a function of different levels of automation (LOA) but also as a function of different types of automation (TOA), corresponding to which of four information processing stages is supported by automation: information acquisition, information analysis, decision selection, and action implementation. Regardless of their degree of involvement, humans will be required to rely on automation to maintain the integrity and health of their living environment. Such reliance raises the issue of determining the behavioral consequences of particular LOA within different TOA, including the impact of dynamic LOA changes. The relevant issues include trust in automation, complacency, situation awareness, maintenance of manual skill, and the potential for increased mental workload for the space crew. Here, we describe two experimental settings that investigated these issues.
CURRENT STATUS OF RESEARCH

Methods
We developed two simulation platforms. First, a telerobot workstation where subjects were asked to control a remote mobile rover equipped with robotic manipulators to pick up rock samples from the surface of a planet; at the same time, they were required to monitor several parameters associated with the maintenance and fault management of a space vehicle's life-support functions. Second, we used a modified version of the Cabin Air Management System (CAMS) developed by Hockey et al. (1998). This task required monitoring of an autonomous air management system and performing fault management in case of system malfunction. The symptoms associated with potential faults vary in complexity and demand a high level of knowledge-based decisions about, first, appropriate immediate safety actions (changing automation settings or assuming manual control of subsystems) while, second, deriving a diagnosis about the cause of the fault. The task environment allowed the analysis of strategies in information gathering and in secondary tasks (alarm reaction time, prospective memory), supplementing primary task performance data (percentage of out-of-range parameter values). We added a second automation layer to that task, representing a fault diagnosis and recovery agent. Using the telerobot workstation we investigated the effects of two factors on performance as a function of LOA changes: first, the "distance" between different LOA, and second, the direction of a change in LOA (e.g., from low to high vs. high to low). Four different LOA were introduced: 1) absence of automation: the subject did everything without system support; 2) notification: the system provided information about what happened; 3) suggestion: the system notified and suggested the appropriate action to take; and 4) action: the system resolved the problem and then notified the subject.
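The four telerobot LOA above can be sketched as a simple dispatch. This is an illustrative sketch only, not the actual experiment software; the names and message strings are ours.

```python
from enum import IntEnum

class LOA(IntEnum):
    NONE = 1     # absence of automation: subject acts unaided
    NOTIFY = 2   # system reports what happened
    SUGGEST = 3  # system notifies and recommends the action to take
    ACT = 4      # system resolves the problem, then notifies

def automation_response(loa: LOA, event: str):
    """Return the automation's message for a detected event, or None
    when the subject must notice and handle the event alone."""
    if loa == LOA.NONE:
        return None
    msg = f"event detected: {event}"
    if loa == LOA.SUGGEST:
        msg += "; suggested action: execute recovery procedure"
    elif loa == LOA.ACT:
        msg += "; resolved automatically"
    return msg
```

Framing the levels as an ordered type like this also makes the "distance" between two LOA a simple integer difference, which is the manipulated variable in the telerobot study.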
Although all subjects encountered all four LOA, four groups of subjects differed according to which LOA was dominant. Using the modified CAMS we extended the investigation of dynamic LOA changes to their impact on more complex higher-order reasoning and decision-making functions, represented by the third information processing stage in the model of Parasuraman et al. (2000). Trust in automation and subjective workload were also assessed. Fault management (FM) was to be performed under four LOA: 1) no support is given; 2) the automation assists the operator in deriving root causes from system irregularities by providing a computerized trouble-shooting guide resulting in an FM sequence of actions to be performed by the operator; 3) the automation suggests a fault diagnosis, including an FM sequence of actions, but leaves the operator to decide and to act; 4) the automation provides the fault diagnosis and executes the appropriate FM sequence after some time to allow the operator to veto.

Results
Data collection for both experiments is still ongoing. Results obtained so far with the telerobot workstation confirm our hypothesis that the distance between LOA has a modulating effect on the subjects' performance: moving from a lower to a higher LOA, and vice versa, determines proportional changes in performance. This effect is stronger under multiple-task than under single-task conditions, suggesting an interpretation in terms of changes in resource allocation.

Conclusion
Dynamically changing LOA itself has a modulating effect on performance. Subjects in our telerobot study, however, were not aware of the series of LOA changes they underwent during the experiment. Thus, the results may also reflect a lack of mode awareness (Sarter & Woods, 1995), which will become a critical issue in the design of adaptive automation or adjustable autonomy. Subjects in the CAMS experiments were informed about the LOA under which they operated. It remains to be seen whether the findings of the first study, if substantiated, can be generalized across both simulation environments.

FUTURE PLANS
The results of these studies will inform the design of our future research and development efforts on adaptive automation systems for space operations. We will carry out a study that examines the impact of adaptive changes in LOA and TOA on system performance in both the telerobot and the CAMS simulations. In defining an optimal scheme for adaptive automation, definitions of crew task scheduling and workload will be framed for long-duration space exploration. We will also investigate whether the temporary re-introduction of manual operation of automated tasks, found to be beneficial in aviation by Parasuraman et al. (1996), will extend to benefits for fault management during space operations. A crew task schedule that includes such prescribed manual operation intervals may increase overall mission safety. Following this work, we will examine the use of physiological measurement in adaptive systems. In addition to operator performance measures, we will use the following measures as real-time indicators of human cognitive processing: eye movements (e.g., point-of-gaze and visual scanning), EEG (including ERPs), and heart rate variability (HRV).

INDEX TERMS
Adaptive automation, adjustable autonomy, level of automation, telerobotics, life-support systems, fault management, human performance, automation trust, operator workload, physiological measures.

REFERENCES
Dorais, G., Bonasso, R.P., Kortenkamp, D., Pell, B., & Schreckenghost, D. (1998). Adjustable autonomy for human-centered autonomous systems on Mars. 1st International Mars Society Convention Proceedings.
Hockey, G.R.J., Wastell, D., & Sauer, J. (1998). Effects of sleep deprivation and user interface on complex performance: A multilevel analysis of compensatory control. Human Factors, 40, 233-253.
Malin, J.T. (2000). Preparing for the unexpected: Making remote autonomous agents capable for interdependent teamwork. Paper presented at the 44th Annual Meeting of the Human Factors and Ergonomics Society, San Diego.
Malin, J.T., Kowing, J., Schreckenghost, D., Bonasso, P., Nieten, J., Graham, J.S., Flemming, L., MacMahon, M., & Thronesbery, C. (2000). Multi-agent diagnosis and control of an air revitalization system for life-support in space. Proceedings of the 2000 IEEE Aerospace Conference.
Parasuraman, R., Mouloua, M., & Molloy, R. (1996). Effects of adaptive task allocation on monitoring of automated systems. Human Factors, 38, 665-679.
Parasuraman, R., Sheridan, T.B., & Wickens, C.D. (2000). A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 30, 286-297.
Sarter, N.B., & Woods, D.D. (1995). How in the world did we ever get into that mode? Mode error and awareness in supervisory control. Human Factors, 37, 5-19.

KNOWLEDGE SHARING TO SUPPORT DISTRIBUTED MISSION CONTROL COMMUNICATIONS


B. Caldwell School of Industrial Engineering, Purdue University, West Lafayette, IN 47907

INTRODUCTION
The present research study examines coordination between Flight Controllers communicating within and between multiple Mission Control Center (MCC) Flight Control Rooms (FCRs) controlling Space Shuttle (STS) and Space Station (ISS) operations. Our work examines FCR coordination across phases of flight and rates of event dynamics. The FCR consoles in our study include spacecraft subsystem experts (electricity and power: EGIL/PHALCON); spacecraft communications and telemetry experts (INCO/CATO); local astronaut crew expertise (A/G, S/G); FCR computer system coordination (GC); and overall Flight Director global expertise (FD).

CURRENT RESEARCH PROGRESS


Based on Sheridan's descriptions of human-interactive and task-interactive systems, my research team has focused on the multiple feedback processes that must occur to support distributed supervisory control tasks in an MCC environment. Our research to date has highlighted distinct elements of knowledge sharing, as well as a hierarchy of data, information, knowledge, and expertise in a complex information environment, which must be coordinated within and across time scales. Knowledge sharing can be described along two dimensions of coordination:

domain / process: knowledge of a content area (such as computer science) vs. knowledge of implementation processes (such as debugging local computer software environments)

formal-technical / informal-social: knowledge of theory and design specifications vs. knowledge of local expertise and needs for optimizing the system-as-implemented

In the MCC environment, there are multiple time scales of system behavior and performance to which controllers must be sensitive. These time scales range from computer subroutine processing and telemetry update rates (measured in milliseconds to seconds), through spacecraft event processes, Flight Controller recognition and response patterns, and local Flight Director decision cycles, to organizational evolution of Flight Rules and FCR configurations (measured in months to years). Task coordination in response to changing spacecraft conditions may require Flight Controller sensitivity to multiple time scales, and the need to conduct problem-solving tasks that are suboptimal within any one time scale but optimized for performance in a multiple-scale task environment. In addition, coordination tasks across FCRs require sensitivity to multiple sources of asynchrony. These sources of asynchrony may require a controller to wait, perform other preparation or contingency analysis tasks, or increase local expertise in order to respond better to new information when it becomes available. Controller response to asynchrony depends heavily on expectations for real-time vs. delayed response, as well as on time constraints limiting the ability to wait for confirmation or system updates before critical decision-making tasks based on current information.

FUTURE PLANS
Currently, there is no reliable means for MCC technology systems to distinguish either time scales for information sharing, or information asynchronies and local controller options for dealing with asynchrony. This paper will conclude with suggestions for information support technologies (using metadata techniques such as XML) to provide increased context knowledge support and knowledge sharing for controllers operating in a distributed MCC environment.
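As a sketch of the XML-metadata idea, an update could carry explicit time-scale and latency context so a receiving console can decide whether to wait, act, or schedule contingency work. The element and attribute names here are hypothetical, not an MCC standard.

```python
import xml.etree.ElementTree as ET

def wrap_update(payload: str, time_scale: str, expected_latency_s: float) -> str:
    """Wrap a raw update in hypothetical context metadata: the time
    scale it belongs to (e.g. "telemetry", "event", "decision") and
    how stale the receiver should expect it to be."""
    msg = ET.Element("mcc-update", {
        "time-scale": time_scale,
        "expected-latency-s": str(expected_latency_s),
    })
    ET.SubElement(msg, "payload").text = payload
    return ET.tostring(msg, encoding="unicode")
```

A receiving console could then branch on the `time-scale` attribute to choose among the asynchrony-handling options discussed above (wait, act on current data, or prepare contingencies).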

INDEX TERMS
Communication; distributed performance; group dynamics; information flow; information technology; knowledge engineering; knowledge sharing; mission control; supervisory control; task coordination

REAL-TIME EMBODIED AGENTS FOR MULTI-PERSON TASK SIMULATION


N. Badler, M. Palmer, A. Joshi
Computer and Information Science Department, University of Pennsylvania, Philadelphia, PA 19104-6389

A challenging research area for computer graphics and virtual environments (VE) is training interpersonal interactions, for example during cooperative tasks. In such a system, at least one person is the VE participant while several more virtual human agents (represented by human-like, embodied models) are engaged in activities in the same virtual space. The participants, whether live or virtual, should interact as if all were real. While superficial appearance (shape, attire, equipment complement) is certainly important to the live participant's perception of the other virtual beings, their appropriate behavior is arguably more crucial to mission success. Behaviors must execute in real time and be responsive to other participants as well as to the physical context, including machinery and equipment. In recently initiated NASA NRA 00-HEDS-01 research, we will be studying crew task simulation for maintenance, training, and safety. Ambitious space missions will present new challenges for space human factors as crews interact with each other and maintain complex equipment and automation. This demands that procedures for task design and rehearsal be based on ground-validated procedure instructions. We wish to investigate requirements for formulating, interpreting, and validating procedure instructions. Crew instructions should express clearly and unambiguously complex actions and their expected results. As NASA moves toward a "what-to-do" rather than a "how-to-do" approach, it is essential that textual instructions convey what is meant and correlate with equipment function and construction. Natural language instructions must coordinate with graphical simulations to provide both textual and visual guidance and context sensitivity for procedures, especially when the equipment functions and features are accessible to the simulation.
Current manuals and procedures require all alternatives to be made explicit, leading to clumsy documentation that must be laboriously prepared in advance. Any documentation for a complex procedure should reflect an understanding of the equipment and the human factors of the crew. One important aspect of what-to-do procedure documentation is that purposes and termination conditions of continuous processes are very frequently cited. These need to be generated from, or interpreted by, simulated actions. Although our focus in this project will be on ground-based procedure execution and validation, we envision future applications to spaceflight crews who will use natural language to communicate with each other, with the on-board automation, and with training and refresher simulations. Our research proposal is to use high-level user interfaces, such as natural language instructions, to control virtual crewmembers so that they may be used for procedure formulation, validation, in-flight training, crew task allocation, and unusual-procedure simulation.

INDEX TERMS: Computer graphics, natural language understanding, virtual environments, virtual reality, task simulation, embodied agents, autonomous agents.

EVENT REPRESENTATIONS: COMMUNICATING CHANGE INFORMATION TO SUPPORT HUMAN-COMPUTER AND HUMAN-HUMAN COOPERATION
David D. Woods, Klaus Christoffersen, and Rene Chow
Institute for Ergonomics, The Ohio State University, Columbus, Ohio, USA

There is a paradox in designing computer tools, automation, and CSCW systems to support anomaly response and replanning. On the one hand, computer tools are grounded in collecting, manipulating, and displaying base data in the form of telemetry displays, alarms, and logs. On the other hand, from studies of expert performance and high-proficiency teams, we find that practitioners work with and communicate event-level information as the base level of analysis of the processes they control. Practitioners' expertise is devoted to integrating the available data into an event which they recognize as anomalous relative to expectations. Practitioners share information about surprising events and the analyses and activities they spawn to coordinate within and across teams. In this paper we review a series of studies in space mission control about what experts find informative and how they collaborate in anomaly response and replanning. One implication of these studies is the need for more advanced event representations to support evolving human roles and new forms of collaborative work. Ironically, little work in visualization or intelligent systems has dealt with the need for representing events and change. We analyze why capturing and displaying information about change is difficult and how expert practitioners are able to extract change information from raw telemetry data. These results are then used to describe principles and techniques for communicating change information in displays and in cooperative work tools.

FRAMEWORK ASSESSING NOTORIOUS CONTRIBUTING INFLUENCES FOR ERROR (FRANCIE): TAXONOMY DEVELOPMENT, USER PROCEDURES, AND SOFTWARE IMPLEMENTATION
Lon N. Haney Human Systems Engineering and Sciences Department, Idaho National Engineering and Environmental Laboratory, PO Box 1625, Idaho Falls, ID 83415-3855

INTRODUCTION Background
The objective of the project is to provide a methodology and analytic tool to reduce, or minimize the impact of, human errors in space missions and mission-related activities. The FRamework Assessing Notorious Contributing Influences for Error (FRANCIE) is a framework and methodology for the systematic analysis, characterization, and prediction of human errors. It was developed (for analysis of airline maintenance tasks) in a prior NASA Advanced Concepts Project by the Idaho National Engineering and Environmental Laboratory (INEEL), NASA Ames Research Center, Boeing, and America West Airlines, with input from United Airlines and Idaho State University. The development method included capturing the expertise of human factors and domain experts in the framework, and ensuring that the approach addresses issues identified by project partners as important for future human error analysis. In the current Advanced Human Support Technology Project, FRANCIE is being refined for analysis of ground-based maintenance and assembly activities for spacecraft and launch vehicles, and then for activities performed in microgravity. The project partners for the current project are INEEL, NASA Ames Research Center, State University of New York/University at Buffalo, and Idaho State University, with project administration and support by Johnson Space Center. A primary focus of FRANCIE is the analysis of tasks to identify potential errors (or characterize observed errors), to identify and characterize the associated contributing influences to the errors, to provide a level of prioritization, and to facilitate determination of appropriate countermeasures to reduce the occurrence and impact of the errors.

Framework Structure, Content, and Use


The framework is formed by a hierarchy of elements useful for the analysis of human errors. Its core is a hierarchy of Error Types, Generic Errors, and associated contributing influences to those errors called Performance Shaping Factors (PSFs). Each Generic Error is associated with a specific set of PSFs identified as important for that error. The PSFs are organized into eight General PSF Categories: 1) Procedures, 2) Design, 3) Tools/Equipment, 4) Personnel, 5) Environment, 6) Organizational, 7) Work Group, and 8) Task Related. In addition to the core framework, a hierarchy of task analysis elements may be placed at the top of the framework, and detailed information that supports development of error reduction strategies is attached to each Specific PSF at the bottom. From top to bottom, the elements that form the basic structure of the framework are: 1) Task, 2) Subtask, 3) Generic Task Steps, 4) Error Types, 5) Generic Errors, 6) General PSF Categories, 7) Intermediate PSFs, 8) Specific PSFs, and 9) Human Factors-Based Countermeasures. Human factors and domain subject matter expertise is captured in the content and structure of the framework and in the linkages between items. Human error analyst expertise is captured in the way the framework is used and in linkages to cognitive models, psychometrics, ergometrics, and error reduction strategies. To perform a human error analysis, the user selects items from the framework and assembles them into a model of human performance for a specific activity. The analysis develops the logic of an error event tree to support easy visualization of the structure of a task in terms of recovery actions, error chains, and error-influencing dependencies. FRANCIE is designed to expand to other domains of human activity (e.g., operations, medicine, process control, and other transportation industries) through taxonomy refinement and development. The structure of the framework and the procedures for applying it (for human error and human performance analyses) remain standard across domains. Expansion to other domains can be accomplished through actual use of the framework for performing analyses, or through sponsored efforts such as the current project. Expansion through actual use is demonstrated by the application of FRANCIE to an aviation operations scenario for a new precision landing aid during Federal Aviation Administration certification of that system.
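To make the linkage structure concrete, the core hierarchy could be represented in software roughly as follows. This is a hypothetical Python data model; the example task step, error, PSF, and countermeasure entries are invented for illustration and are not taken from the FRANCIE taxonomy.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical minimal data model of the FRANCIE core hierarchy:
# Generic Errors link to the Specific PSFs judged important for them,
# and each Specific PSF carries candidate countermeasures.

@dataclass
class SpecificPSF:
    name: str
    category: str                     # one of the eight General PSF Categories
    countermeasures: List[str] = field(default_factory=list)

@dataclass
class GenericError:
    description: str
    error_type: str                   # e.g. omission, commission
    psfs: List[SpecificPSF] = field(default_factory=list)

@dataclass
class TaskStep:
    name: str
    errors: List[GenericError] = field(default_factory=list)

# Assemble a toy model for one generic task step (all entries invented).
lighting = SpecificPSF("Inadequate lighting", "Environment",
                       ["Provide portable task lighting"])
step = TaskStep("Inspect fastener torque",
                [GenericError("Fastener left loose", "omission", [lighting])])

# An analysis walks the hierarchy from step to countermeasure.
for err in step.errors:
    for psf in err.psfs:
        print(step.name, "->", err.description, "->", psf.name,
              "->", psf.countermeasures[0])
```

In the actual framework, each Generic Error would link to many Specific PSFs across the eight General PSF Categories, and an analysis would assemble such records into an error event tree.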

CURRENT STATUS OF PROJECT

Taxonomy Development


Tasks and activities observed during expendable launch vehicle assembly facilitated identification of several new Generic Errors and Performance Shaping Factors that were added to the taxonomy. The initial airline maintenance taxonomy contained helps/descriptions for approximately one third of the PSFs. These descriptions were expanded, and descriptions were developed for the remaining PSFs, with a focus on human factors-based countermeasures for reducing the occurrence of human error.

User Procedures
Procedures for the performance of FRANCIE analyses were developed. The procedures are composed of modular elements that are arranged in different ways to support different types of analyses. They include detailed procedures and procedure guides for the task expert (or an error analyst working with a task expert) and for analysts who need to perform a task analysis, as well as procedure guides for use during procedure writing, incident investigation, and design. A draft FRANCIE Users Manual was developed. The Users Manual presents FRANCIE background and theory very briefly; its primary focus is the presentation of procedure guides, detailed procedures, and supporting information and tables for performing analyses using the FRANCIE framework and methodology. Additional information about the background and development of FRANCIE is contained in a journal article, conference papers, and a technical report referenced in the manual. The Users Manual was reviewed by relevant project staff and by potential users at organizations outside the project, including the Federal Aviation Administration. Testing of the procedures was performed in conjunction with the testing of the FRANCIE software.

Software Implementation
A software application titled Basic User Task Analysis Application (BUTAA) was developed during the first project year. The application supports performance of custom task analysis for maintenance tasks and the selection of initial errors, associated PSFs, and corrective actions for the tasks. The development of BUTAA served as a pilot for the development of complete FRANCIE software in year two of the project. The FRANCIE software supports performance of analyses outlined in the FRANCIE Users Manual including characterization of task structure in terms of recovery actions, error chains, and disruptive dependencies between individuals or task steps, and also the assessment of associated PSFs and identification of human factors-based countermeasures. The FRANCIE software helps the user navigate the framework, access taxonomy lists, select taxonomy elements, and arrange the elements into a model of the task. Standardized reports generated by the software document the analyses.

FUTURE PLANS
The project research plan for the third year is development of a Generic Error and Performance Shaping Factor taxonomy for maintenance and assembly activities performed in microgravity. Information and data from task experts will be used to create the new taxonomy for activities performed in microgravity. The existing taxonomy for ground-based activities can be used as a starting point for taxonomy refinement and development.

INDEX TERMS
FRANCIE, Human Error, Human Reliability, Human Factors, Human Engineering, Human Performance

EFFECTIVENESS OF PRINCIPAL INVESTIGATOR-IN-A-BOX AS AN ASTRONAUT ADVISOR FOR A SLEEP EXPERIMENT*


Allen Atamer, Research Assistant**; Mindy Delaney, Research Assistant**; Laurence Young, Sc.D., Apollo Professor of Astronautics**

The expert system, Principal Investigator-in-a-Box, or [PI], was designed to assist astronauts or other operators in performing experiments outside their expertise. Currently, it helps calibrate instruments for a Sleep and Respiration Experiment that flew on Space Shuttle missions STS-90 and STS-95. [PI] displays electrophysiological signals in real time, alerts astronauts via light emitting diodes (LEDs) when poor signal quality is detected, and advises astronauts how to restore good signal quality. This ground-based study sought to assess the utility of on-board expert systems, in general, for performing experiments and troubleshooting complex instrumentation systems. Results from this study can be applied to nuclear power plant control rooms, airplane cockpits, or wherever complex human-computer interaction is present. Thirty subjects received training on the sleep instrumentation and the [PI] interface. Each subject was then tested in two separate sessions with the instrumentation, once with [PI] assistance and once without. Results indicate a beneficial effect of [PI] in reducing anomaly troubleshooting time. Further, questionnaires showed that most subjects preferred monitoring the [PI] LEDs together with the waveforms to monitoring the waveforms alone.

*This work was supported by the National Space Biomedical Research Institute through a cooperative agreement with the National Aeronautics and Space Administration (NCC 9-58).
**Man-Vehicle Lab, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Bldg. 37-219, Cambridge, MA 02139. Corresponding author: Laurence R. Young, (Tel) 617-253-7759, (E-mail) LRY@mit.edu
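The kind of signal-quality rule [PI] applies can be illustrated with a minimal sketch; the channel name, amplitude thresholds, and advice strings below are invented for illustration and are not the actual [PI] rule base.

```python
# Hypothetical sketch of a rule-based quality check: flag a channel as
# poor quality when its signal amplitude drifts out of an expected
# physiological range, and suggest a corrective action.

GOOD_RANGE_UV = (5.0, 200.0)   # assumed acceptable amplitude band, microvolts

def check_channel(name, samples_uv):
    """Return (led_on, advice) for one channel of samples."""
    peak = max(abs(s) for s in samples_uv)
    if peak < GOOD_RANGE_UV[0]:
        return True, f"{name}: signal too flat - check electrode contact"
    if peak > GOOD_RANGE_UV[1]:
        return True, f"{name}: signal saturated - reapply electrode gel"
    return False, f"{name}: signal quality OK"

led, advice = check_channel("EEG-C3", [0.5, -0.8, 1.2, -0.3])  # flat signal
print(led, advice)
```

A real system would of course apply channel-specific criteria (impedance, noise spectra, artifact detection) rather than a single amplitude test.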

WARP COMMUNICATIONS SYSTEM AS TOOL FOR SITUATIONAL INFORMATION DISPLAY


A. Devereaux1, D. Carr2, T. Rathjen2 1 Communications Research Division, Jet Propulsion Laboratory, Pasadena, CA 91109, 2Advanced Human Support Technologies, Johnson Space Center, Houston, TX 77058.

INTRODUCTION
In the dynamic environment of Shuttle and ISS, real-time information retrieval and communications are critical. The WARP system provides a high rate data link between a computer and a wearable electronics package, with dedicated video, audio and data channels. The Phase II WARP prototype system was integrated into the JSC Shuttle Engineering Simulator for evaluation of the usefulness of the heads-up display of information for simulated docking and robotic arm manipulation scenarios.

CURRENT STATUS OF RESEARCH

Methods


This is a hardware evaluation/proof of concept for a wearable communications prototype called WARP, built by JPL. The JPL WARP electronics are mated with a Sony Glasstron head-mounted LCD display for video feed; the Glasstron has been modified to include a camera, microphone, and speaker. The WARP links wirelessly back to a stationary base station computer, which may be located anywhere in the vicinity of the user and may be used to run Shuttle system software packages for remote display on the Glasstron. Included with the WARP system is an evaluation of an Ames Research Center biosensor prototype from the Sensors 2000! program, comprising a respiration monitor, an activity monitor (motion detector), a body temperature sensor, and a pulse oximeter. The Ames biosensor prototype is also worn by the test participant, and the data from the biosensor is sent through the WARP communicator back to the base computer for storage or forwarding. Test participants wear the WARP communicator and the Ames biosensors while working through a close proximity operations (Prox Ops) or robotic arm manipulation simulation with the Shuttle Engineering Simulator (SES) (aka dome simulator) in building 16 at JSC. During the simulations, the Glasstron display will allow test participants to view auxiliary data from the Rendezvous and Proximity Operations Program (RPOP), for Prox Ops scenarios, or from the Robotics Situational Awareness Display (RSAD), for manipulation scenarios. Commands to RPOP or RSAD are accomplished via a voice recognition interface. The Glasstron display can be adjusted so it covers the upper third of a test participant's visual field, leaving the lower two thirds available to observe the simulation. The WARP and Ames biosensor prototype units and battery packs to be worn by test participants weigh about 4 lbs.
A separate battery pack for the Sony viewer weighs just over 1 lb (this unit is being evaluated by JPL, and its functionality may be combined with the WARP unit to save weight and volume). Approximately 5 test participants, both trainers and crew, will use the WARP with RPOP or RSAD and wear the sensors during 30 minutes of simulation activity. Their qualitative comments will then be collected on a post-test questionnaire. As mentioned earlier, this phase of the evaluation represents a demonstration and proof of concept for this technology. Neither the performance of test participants in the simulated docking or grapple maneuver nor the actual biosensor readings is a focus of this evaluation.

Results
Evaluations of the system validated the usefulness of a heads-up display of remote information, which did not distract from the task scan pattern, and confirmed the desire for a tetherless, robust communications system. Human factors considerations were the main issues for the test subjects. The size of the prototype electronics packages and the limited integration at this point between the communicator, the biosensor package, and the commercial heads-up display made for some awkwardness in wearability, though this did not limit the subjects' performance of the tasks. The Sony heads-up display was considered clear and easy to read, but the form factor of its visor-type facepiece could hinder overhead vision, as in viewing through the overhead porthole at the Shuttle aft cabin console. The Sony display's color capability was beneficial for quickly discerning important information from the display without interrupting scan patterns.

CONCLUSION
Wireless communications, especially with video capability, can be an important addition to efficient performance of flight tasks. The WARP system shows that a wearable wireless system can provide video, audio and data communications without use of tether cables or hands-on laptops.

FUTURE PLANS
The next-phase WARP electronics are being designed to address the form factor issues of the prototype and to extend capability to multiple users. In parallel, a flight-suitable heads-up display will be commissioned or developed to address long-term wearability and avoidance of eye fatigue. This Phase IIIL upgrade system will be developed for DTO flight testing, potentially on the Shuttle R-2 mission in 2002. Ames human performance researchers will review the data from the JSC tests and incorporate it into their ongoing investigations of information display and user interfaces for a WARP-type communicator system. Ames and JSC input will go into heads-up display development or modification, as the heads-up display, along with an audio-only headset, will be a primary interface to the WARP communications system.

INDEX TERMS
Communications, audio, video, data display, wearable computer, heads-up display, remote information retrieval, human factors

NOISE REDUCTION HEADSETS FOR OTOACOUSTIC HEARING ASSESSMENT OF SPACE STATION CREWS

R. Kline-Schoder1, J. Buckey2, and F. Musiek3, 1 Creare Incorporated, 2Dartmouth Medical School, 3Dartmouth-Hitchcock Medical Center

INTRODUCTION
Acoustic data from the International Space Station (ISS) indicate that there is a significant risk of damage to crew members' hearing. However, no method currently exists to monitor the hearing acuity of ISS crews. The objective of this project is to provide a means for reliable hearing assessment of ISS crews during extended space missions in the presence of moderately high noise levels. We are working to achieve this objective with an innovative design that combines digital active noise reduction and otoacoustic hearing evaluation hardware integrated within a functional headset. The noise reduction system will attenuate background noise and thus permit hearing assessment tests in space and in relatively high noise environments on Earth. We conducted a pilot, ground-based clinical study using passive noise reduction headsets whose results support the feasibility of our concept.

CURRENT STATUS OF RESEARCH

Methods
In order to perform a pilot clinical study, we designed and fabricated a custom passive headset that is intimately integrated with otoacoustic hearing evaluation hardware. The Distortion Product Otoacoustic Emission (DPOAE) probe is connected to the base electronics unit with a cable that is passed through a hole in the headset. The hole is plugged to reduce the leakage of environmental noise from outside the headset to inside it. The probe consists of two speakers and a microphone. The speakers are used to excite the ear with two pure tone signals, and the microphone is used to sense the sound contained within the ear canal. The sensed sound is then transformed into the frequency domain where, if the background noise is low enough, the DPOAE amplitude can be measured.
Using this hardware, we performed a pilot clinical evaluation with the custom passive headset described above combined with a DPOAE hearing evaluation test system in noise. We performed otoacoustic hearing evaluation on six test subjects, both with and without the custom hearing protectors, while they were subjected to ISS Service Module noise. The noise used during the evaluation was obtained from a recording of the actual ISS Service Module and was played at the noise level measured in the module (74 dBA).

Results
We obtained data in Creare's acoustic testing facility using a simulated head and a B&K microphone located near the exit of the simulated ear canal. The data show that without hearing protection, the noise at the simulated ear canal is over 65 dB from 500 Hz to 3 kHz. After applying the custom headset and repeating the measurement, the measured noise at the simulated ear canal is under 40 dB from 100 Hz up to 4 kHz. These data show the noise reduction that can be achieved with a purely passive headset for this particular setup and hardware.
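The frequency-domain measurement step can be sketched as follows. The microphone signal here is synthetic and all amplitudes are invented; a single-bin DFT at each frequency of interest stands in for the full spectral analysis, and a signal-to-noise ratio of at least 10 dB is treated as a valid measurement.

```python
import math, random

# Sketch of the DPOAE measurement: excite with two pure tones f1 and f2,
# then look for the distortion product at 2*f1 - f2 in the mic signal.

FS, N = 8000, 800                      # sample rate (Hz), samples (0.1 s)
F1, F2 = 2000.0, 2400.0                # stimulus tone frequencies
F_DP = 2 * F1 - F2                     # 1600 Hz distortion product

def amplitude_at(x, f):
    """Single-bin DFT: amplitude of the component at frequency f."""
    c = sum(v * math.cos(2 * math.pi * f * n / FS) for n, v in enumerate(x))
    s = sum(v * math.sin(2 * math.pi * f * n / FS) for n, v in enumerate(x))
    return 2.0 * math.hypot(c, s) / len(x)

random.seed(1)
mic = [math.sin(2 * math.pi * F1 * n / FS)             # stimulus tone 1
       + math.sin(2 * math.pi * F2 * n / FS)           # stimulus tone 2
       + 0.01 * math.sin(2 * math.pi * F_DP * n / FS)  # emission from the ear
       + random.gauss(0.0, 0.001)                      # background noise
       for n in range(N)]

dp = amplitude_at(mic, F_DP)
# Estimate the noise floor from frequencies adjacent to the DP.
floor = (amplitude_at(mic, F_DP - 50) + amplitude_at(mic, F_DP + 50)) / 2
snr_db = 20 * math.log10(dp / floor)
print(f"DPOAE SNR = {snr_db:.1f} dB, valid = {snr_db >= 10}")
```

With the headset suppressing ambient noise, the noise floor drops and the same emission amplitude yields a higher, valid SNR; this is the effect reported in the clinical results below.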

The pilot clinical data are shown in Figure 1. The data in this figure show the average signal-to-noise ratio (SNR) of the DPOAE amplitude during the tests, which were performed three times on each subject. An SNR of the DPOAE amplitude that is greater than or equal to 10 dB is generally considered large enough for valid test conditions. Below approximately 10 dB, the results of the test become suspect and may not be valid.
[Figure 1 appears here: bar chart of DPOAE signal-to-noise ratio (dB), scale 10-35 dB, at the 2 kHz, 3 kHz, and 4 kHz test frequencies under four conditions: No Headset/No Noise, Headset/No Noise, Headset/Noise, and No Headset/Noise.]
Figure 1. Signal-to-Noise Ratio from DPOAE Measurement for Six Subjects (noise cases are with ISS noise at 74 dBA)

Conclusion
The data in Figure 1 show that, for the cases without noise, the DPOAE SNR is between 20 and 32 dB without the headset and between 25 and 31 dB with the headset. Both of these resulted in valid DPOAE amplitude measurements. For the case with noise and without the headset, the DPOAE SNR is between 1 and 6 dB; the measured SNR values indicate that these measurements resulted in invalid DPOAE amplitude data. For the case with noise and with the headset, the DPOAE SNR is between 10 and 21 dB; these measurements were valid at 3 and 4 kHz and marginally valid at 2 kHz. Thus, for the six subjects tested: (1) otoacoustic hearing evaluation could not be performed in noise without the custom hearing protector, and (2) otoacoustic hearing evaluation could be performed at most frequencies in noise with the custom hearing protector.

FUTURE PLANS
The remaining work on this project will be to ensure that valid DPOAE measurements can be made at all testing frequencies in an environment characterized by the ISS Service Module acoustic background, by combining active and passive noise reduction technologies.

INDEX TERMS
digital noise control, feedforward control, otoacoustic testing, active noise reduction

OPTICAL COMPUTER RECOGNITION OF BEHAVIORAL STRESS

David F. Dinges, Ph.D., Dimitris Metaxas, Ph.D., Naomi L. Rogers, Ph.D., Martin P. Szuba, M.D., Nicholas J. Price, B.S., University of Pennsylvania

Manned space flights of increasingly longer durations (including inter-planetary missions) are being planned. There is evidence from both U.S. and Russian missions that astronauts involved in long-duration space flight will be exposed to stressors that can adversely affect subjective well-being, physiology, and operational performance capability. In order to identify and provide countermeasures for stressor-induced impairments in astronauts, objective, unobtrusive measures of the presence of stress reactions are needed. It is well established that human emotion and distress are universally expressed via neural control of facial muscles. This project involves collaboration between established laboratories at the University of Pennsylvania with demonstrated expertise in optical computer recognition of human subjects' subtle anatomical and motoric changes in facial expressions and gestures (Prof. D. Metaxas), and in neurobehavioral performance under stressful and non-stressful conditions (Prof. D. Dinges). The goal is to develop and test an optically based computer recognition algorithm of the face to reliably detect the presence of stress during performance demands. The project specifically addresses critical path questions aimed at ways to objectively and unobtrusively identify emotional distress and its accompanying neurocognitive difficulties, and neuroendocrine responses to behavioral stressors during long-duration space flight. Video input to the system will be provided from an experiment in which n = 60 healthy adult subjects (males and females of different ages and ethnic backgrounds) will be exposed initially to both a control (no stress) condition and a standard, validated behavioral stressor (i.e., the Trier Social Stress Test) for algorithm development.
The developed optical computer recognition algorithms will then be prospectively tested for their accuracy in predicting both the presence and absence of stress reactions in the same 60 subjects exposed to three different types of behavioral stressors: (1) performance in the face of a physiological deficit (i.e., sleep-inertia challenge); (2) performance of very difficult cognitive tasks (i.e., workload difficulty challenge); and (3) performance in the face of an aversive expectation (i.e., venipuncture anticipation challenge). The importance to algorithm accuracy of age, gender, and ethnic background, as well as psychological (i.e., stress ratings, mood states), behavioral (i.e., cognitive performance), and physiological (i.e., cortisol secretion, heart rate variability) responses to the behavioral stressors will be explored. Development and validation of an optically based computer recognition algorithm will provide a critically needed method for detecting the development of stress responses in astronauts, and it will form a key component in the prevention and countermeasure strategies for stress.
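As a purely illustrative sketch of the final classification stage such an algorithm might contain: facial-motion features extracted by optical tracking could be combined into a logistic stress score and thresholded. The feature names, weights, and threshold below are invented and are not the project's actual method.

```python
import math

# Hypothetical logistic combination of facial-motion features into a
# stress probability.  All names and coefficients are invented.

WEIGHTS = {"brow_lowering": 2.0, "blink_rate_z": 1.2, "lip_press": 1.5}
BIAS = -2.5

def stress_probability(features):
    """Logistic score over weighted facial-motion features."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

calm = {"brow_lowering": 0.1, "blink_rate_z": 0.0, "lip_press": 0.2}
stressed = {"brow_lowering": 1.4, "blink_rate_z": 1.8, "lip_press": 1.1}
for label, f in [("calm", calm), ("stressed", stressed)]:
    p = stress_probability(f)
    print(label, round(p, 2), "stress detected" if p >= 0.5 else "no stress")
```

In the planned work, such weights would be fit to the Trier Social Stress Test data and then prospectively validated on the three held-out stressor challenges.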

Project Title: Cultural and Personality Determinants of Performance and Error Management

Principal Investigator: Robert L. Helmreich, Ph.D., The University of Texas Human Factors Research Project, 1609 Shoal Creek Blvd. Suite 101, Austin, Texas 78701, Phone: (512) 480-9997, Fax: (512) 480-0234, Email: helmreich@psy.utexas.edu

Abstract

This research project consists of three related components. 1) Cultural issues in space operations. There has been growing awareness that cultural factors can influence the nature of crew interactions and can have significant effects on mission safety and effectiveness. The proposed research builds on investigations by our project into the effects of national, organizational, and professional culture on human performance in commercial aircrews, medical teams, and Antarctic research stations. Using an established model of national culture developed by the PI, structured interviews will be conducted with ground and flight personnel of countries participating in the ISS. These data will be used to identify perceived problem areas in the multicultural environment of international space operations, and to develop an assessment tool for identifying cultural differences in safety-relevant attitudes among ISS personnel. 2) Personality as a predictor of astronaut performance. Previous research by our project has demonstrated that personality is highly predictive of performance in high-stress and technically challenging environments. An initial investigation of the predictive power of personality on job performance in the astronaut corps was conducted by the PI and colleagues in the early 1990s. The long-duration nature of ISS missions further increases the need to understand this relationship between personal characteristics and astronaut performance. An assessment of personality and performance will be completed on active astronauts who volunteer to participate in this study. Personality will be assessed by a battery of psychometric measures developed by the PI in the early 1990s. Performance will be assessed using a combination of peer and supervisory ratings. For astronauts who were active in 1992, at the time of our initial assessment, longitudinal comparisons of performance stability will be made.
In the PI's earlier work, 77% of active astronauts completed the predictive personality battery, along with 379 astronaut candidates in three selections. These data will be used to predict the multiple dimensions of performance relevant to astronaut selection and crew composition. The results should allow refinement of current select-in strategies for astronaut and crew selection for long-duration missions. Concurrent investigations by our research group have involved the application of similar testing batteries to aircrew and Antarctic station personnel. Comparisons will be made between personality profiles and performance predictors in these various environments. 3) Team coordination and communication during simulation. Previous research by our group has shown that team communication and coordination are effective countermeasures to the various threats to safety found in complex and demanding environments. The final component of this project will develop measures to assess team coordination and communication during spaceflight simulation, including assessments of threat and error management. These data will be collected through interviews with astronauts and other subject matter experts in conjunction with structured observation of space operations training. The resulting data could be employed in the development of a model of error management that could be used in mission design and in the refinement of training and procedures.
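In its simplest form, the prediction step in the personality component reduces to regressing a performance rating on a composite personality-battery score. The sketch below uses fabricated numbers purely to illustrate the computation; it is not astronaut data.

```python
import statistics as st

# Toy ordinary-least-squares fit of a peer-rated performance score on a
# composite personality score.  All values are fabricated.

personality = [3.1, 4.0, 2.5, 3.8, 4.4, 2.9, 3.5]   # battery composite
performance = [6.0, 7.4, 5.1, 7.0, 8.1, 5.6, 6.5]   # peer rating (1-10)

mx, my = st.mean(personality), st.mean(performance)
sxx = sum((x - mx) ** 2 for x in personality)
syy = sum((y - my) ** 2 for y in performance)
sxy = sum((x - mx) * (y - my) for x, y in zip(personality, performance))

slope = sxy / sxx                       # change in rating per unit score
intercept = my - slope * mx
r = sxy / (sxx * syy) ** 0.5            # Pearson correlation

print(f"rating ~ {intercept:.2f} + {slope:.2f} * personality, r = {r:.2f}")
```

The actual analysis would be multivariate (several personality scales predicting several performance dimensions), but the logic is the same.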

MALLEABLE HUMAN INTERFACES

D. Russo, Ph.D.1, J. Whiteley, Ph.D.2, 1 NASA Johnson Space Center, SF1, Houston, TX 77058, 2 Johnson Engineering

INTRODUCTION
Traditionally, most NASA systems, including human-centered human/machine interfaces, have been discrete, purpose-built systems. Astronaut crews would face an enormous maintenance and training burden if each exploration mission had different equipment, supplies, and spares. Central and very critical to the successful performance of these missions are the human/machine system issues that must be resolved to enable continued successful human space flight missions.

CURRENT STATUS OF RESEARCH
This is the culmination of the first year of a three-year project. The first activity to be accomplished was to identify several tasks critical to the successful performance of exploration-type missions that had human/machine system issues which must be resolved to enable continued successful human space flight. Of the tasks identified, performing a landing with a Surface Lander type vehicle on the surface of a planet/moon, and identifying and incorporating display and navigation/safety display ideas for a rover type vehicle, were selected for further depth and evaluation. This selection was made with the help of the Exploration Office at JSC, as they had identified planetary landings, especially in rough terrain, as a critical need for their continuing activities. Our team selected this task, and secondarily the Rover task, on the basis that both had a good mix of human interface issues as well as technology and automation content. Once the initial tasks were selected, we focused on determining whether we could actually host the task and accomplish the activity within the Concept Exploration Laboratory using the virtual reality equipment that was available. This activity has taken the better part of the first year to accomplish and is still underway.
One of the largest activities within this realm was learning the Pilot Reconfigurable Intelligent Symbology Management System (PRISMS) and attempting to expand this virtual reality program from an Earth-centered helicopter flying environment into space-based scenarios that would allow the development and inclusion of other environments, such as Mars or the Moon, while still allowing performance data to be collected. This approach substantially saved money by avoiding the custom construction of an entire virtual environment, and it allowed the transfer of previously gained PRISMS knowledge into this project.

Methods
To begin the first phase of the three-part project, critical tasks were determined through interviews and discussions with subject matter experts and team members alike. Once several tasks were proposed, an investigation into the capabilities of the laboratory hardware and software was conducted to determine whether particular tasks were better suited to the virtual environment than others. Two candidate tasks have been selected, based upon top-level comparisons and in-depth examination of the software architecture and programming capabilities of PRISMS. As the project progresses, the PRISMS software will be modified and/or used as currently configured to create scenarios that are, in some manner, representative of the critical task(s). Now that it has been determined that the visual information and/or performance data can be collected or viewed, more detailed analyses will be performed on the task. This will take us into the second phase of the three-part project.

Results
Actual Mars terrain data have been successfully migrated into our terrain database, and we have additionally incorporated the atmospheric characteristics and gravitational field of the Martian surface within this model. The team has effectively converted a Comanche helicopter into a quasi-land-roving model and has been able to have this model interact with the terrain. For example, the Rover vehicle can go through a simulated rocky/rough terrain area and interact with the rocks such that the Rover bounces as if it were, in fact, contacting an object. There has been limited experimentation with notional navigational techniques to help users traverse the surface without the aid of detailed maps, using only virtual cues. At the conclusion of one of the discussion sessions with the Exploration Office at Johnson Space Center, they offered a copy of the 3-D conceptual Lunar Lander vehicle model they had developed for use in the simulator. After converting the 3-D model into a usable format, it was migrated into the virtual environment and manipulated. Viewing the conceptual Lunar Lander in the virtual environment while wearing the head-mounted display provided the Exploration Office personnel with unique information, since it allowed the developers to view the inside of their vehicle for the first time. Although these individuals had constructed the 3-D model, they had never been able to view the inside of the Lander from the perspective of someone inside it looking around. This was a very enlightening experience and has significantly helped further the collaboration between the two groups.
Conclusion
This first year was a proof of concept to determine whether we could model aspects of the identified critical tasks using existing laboratory hardware and software, without requiring expensive additions to the existing laboratory complement. Our initial tests have demonstrated that we can be successful using the virtual environment we have created within PRISMS as a foundation for our performance and baseline testing. Task analysis and display requirements work, along with experimenting with the performance measurement capabilities of our system, are scheduled activities that will demonstrate the utility of performing concept exploration projects/studies prior to making large commitments of research and development funding.

FUTURE PLANS
Proceed into phase 2 of this project by selecting the primary critical task and focusing upon performing a top-level task analysis.

INDEX TERMS
Human/Machine Interface, Concept Exploration, Virtual Reality

INTEGRATED CREW PERFORMANCE ASSESSMENT AND TRAINING

Robert E. Schlegel, Randa L. Shehab, Kirby Gilliland, University of Oklahoma

INTRODUCTION
As future space missions become longer and more complex, the human performance requirements associated with those missions will change. Detailed training for all mission tasks will not be completed pre-flight but in many cases will be delivered "just-in-time" (JIT). The anticipated changes in mission requirements, coupled with the anticipated changes in operator state associated with longer missions, will accentuate the importance of an advanced, accurate performance assessment system. An integrated system that would also present recommendations for new or refresher training for upcoming activities would provide even greater utility. Full implementation of the integrated system would be based on knowledge of all planned mission activity, the relevant timeline, and identification of all critical tasks. Task analysis (including cognitive task analysis) applied to each critical task would be used to determine the component capabilities and skills essential for successful completion of the task. A project management critical path approach would be applied to determine critical lead times for new or refresher training in the requisite skills (perhaps involving on-board simulator training). Based on a model of skill acquisition and decay/retention for the critical skills, the level and timing of refresher training would be prescribed to guarantee that the minimum level of proficiency is achieved prior to the scheduled time for the operational task. A similar framework exists with respect to the diagnosis of risk factor effects and the prescription of countermeasure interventions. Critical to the success of the overall system is the development of the assessment instrument. The development of this instrument forms the starting point for our research.
The autonomous nature of long-duration space flight will necessitate self-assessment of psychophysical, sensorimotor, cognitive, and social states, and it is envisioned that appropriate training and countermeasure interventions will be selected and administered in real-time during flight. These interventions are likely to encompass adaptable training programs for specific upcoming flight activities as well as generic, yet challenging, mental and psychomotor gymnastics exercises. Specific countermeasures, such as expedient changes to astronaut sleep and work/rest schedules and targeted pharmaceutical treatments, may also be prescribed. The objective of the planned research is to develop and validate a methodology for the self-assessment of astronaut cognitive and sensorimotor state and to establish a framework for integrating the assessment results with prescriptions for training and countermeasure interventions. The research will provide the assessment methodology and implementation for one dimension of cognitive/sensorimotor performance (Year 1), implementation of the methodology with a simulated NASA operational task (Years 1 and 2), a ground-based validation (Year 2), and a multivariate expansion of the system (Year 3). At the end of Year 2, the basic assessment system will be available for incorporation in tests of the BIO-Plex to monitor cognitive and sensorimotor health.
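The refresher-timing logic described above can be sketched under one simplifying assumption, exponential skill decay after training; the decay constant, proficiency levels, and task date below are invented for illustration.

```python
import math

# Assume proficiency decays exponentially after training and back-solve
# the latest refresher day that still guarantees the minimum proficiency
# at the scheduled task time.  All constants are illustrative.

TAU_DAYS = 30.0         # assumed decay time constant
P_AFTER_TRAINING = 1.0  # proficiency right after a refresher (normalized)
P_MIN = 0.7             # minimum proficiency required at task time

def proficiency(days_since_training):
    """Exponential decay model of retained skill."""
    return P_AFTER_TRAINING * math.exp(-days_since_training / TAU_DAYS)

def latest_refresher_day(task_day):
    """Latest training day such that proficiency at task_day >= P_MIN."""
    max_gap = TAU_DAYS * math.log(P_AFTER_TRAINING / P_MIN)
    return task_day - max_gap

task_day = 45.0
t = latest_refresher_day(task_day)
print(f"refresh no later than day {t:.1f}; gap = {task_day - t:.1f} days")
```

In the planned system, the decay parameters would come from empirically fitted acquisition/retention models per skill, and the critical path analysis would schedule refreshers no later than these deadlines.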

RESEARCH PROGRAM

Previous Research


One of the most comprehensive studies of on-orbit cognitive and psychomotor performance changes was conducted by Schiflett, Eddy, Schlegel, and Shehab using the NASA Performance Assessment Workstation (PAWS), a computerized battery of performance assessment tasks. The research focused on the effects of microgravity and other space flight risk factors on cognitive and psychomotor performance during short-duration space shuttle missions (IML-2 and LMS). Reductions in cognitive performance as a function of fatigue were most noticeable when assessed using complex tasks involving directed attention and time-sharing resources. In this and other space flight research, simple cognitive tasks have demonstrated minimal sensitivity to space flight risk factors.

Even for workers in safety-critical jobs on Earth, the impact of impaired performance due to various risk factors can be serious. To minimize problems associated with on-the-job impairment, many employers have implemented screening programs designed to assess an employee's readiness to perform (RTP) at the workplace. Gilliland and Schlegel (1993) defined readiness to perform as "that state in which a person is prepared for a job, is capable of performing it, and is free of any transient risk factors that might influence performance." RTP testing has been undertaken with the goals of identifying changes in performance that may have been driven by exposure to risk factors and of determining the specific aspects of performance that are immediately impacted.

The RTP concept can be extended to apply to the need for training, either JIT training for new tasks or refresher training for critical skills. Thus, RTP assessments can jointly certify (1) freedom from risk factor impact and (2) training proficiency. Computer-based RTP testing represents an application of computer-based tasks for cognitive performance assessment with a directed focus on the identification of risk factor effects in the work environment. Consequently, the RTP concept can be readily adapted to the development of an integrated system for (1) the self-assessment of cognitive and sensorimotor state by astronauts during space flight, followed by (2) the prescription of countermeasure and training interventions. The self-assessment system can be applied according to a predetermined schedule, or as needed to personally monitor cognitive and visuomotor state.

Planned Research Objectives


The objective of the planned research is to further the self-assessment of astronaut cognitive and sensorimotor state as it relates to crew performance, and to prescribe training and countermeasure interventions based on the assessments. In self-assessment, it is important for each astronaut to determine his or her current state in relation to individual capacity. This necessitates an assessment of reserve capacity in a relevant mission setting. Traditionally, reserve capacity has been assessed using a secondary-task method. In contrast, the solution proposed in this research is the development of "dynamic load" tasks, whose level of difficulty changes as the task progresses. The planned research approach involves an innovative transformation of traditional cognitive assessment tasks to measure critical limits at the time of testing relative to an unstressed baseline obtained at maximum trained proficiency. In addition, an appropriate metric to indicate an "at risk" condition will address resource depletion.

The focus of the planned research is on the development of an integrated system for self-assessment of cognitive state followed by intervention prescription. The research hypothesis is that appropriately constructed dynamic load tasks and highly integrated complex tasks will reveal cognitive decrements sooner than simple tasks (i.e., at an earlier point of training proficiency loss or at a lower exposure to the risk factor). This hypothesis will be tested by comparing the sensitivity of dynamic load (i.e., critical-type) tasks, integrated complex tasks, and simple static load tasks comprising similar task elements with respect to time-induced skill decay and fatigue-induced changes in cognitive and visuomotor state.
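A "dynamic load" task in the sense described above can be sketched as a controller that keeps raising task difficulty until performance breaks down; the highest level sustained is the session's critical-limit score, to be compared against the unstressed baseline. Everything here (the ramping rule, the stopping criterion, the `respond` callback standing in for the operator) is an illustrative assumption, not the tasks under development.

```python
def run_dynamic_load_trial(respond, start_level=1.0, step=0.1,
                           max_failures=3, max_steps=200):
    """Sketch of a dynamic-load task controller.

    `respond(level)` stands in for the operator: it returns True if the
    task element presented at difficulty `level` was handled correctly.
    Difficulty ramps upward after each success; a run of consecutive
    failures marks the critical limit for this session.
    """
    level = start_level
    failures = 0
    best = start_level
    for _ in range(max_steps):
        if respond(level):
            failures = 0
            best = max(best, level)
            level += step          # ramp difficulty after success
        else:
            failures += 1
            if failures >= max_failures:
                break              # critical limit reached
    return best
```

A fatigued operator would break down at a lower level than the same operator at baseline, and the gap between the two scores is the kind of reserve-capacity measure the abstract describes.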

Research Activity - Year 1


The first year of the research comprises several major activities: evaluation of assessment strategies based on additional literature review and further strategy evolution; selection of an appropriate complex NASA operational task with an existing laptop or workstation simulator version (e.g., the Remote Manipulator System Dynamic Skills Trainer); development of a dynamic load task for the self-assessment of visuomotor state; and an initial evaluation of that task, using the complex simulator task as a reference criterion.

Research Activity - Year 2


In Year 2, participants will be tested using the self-assessment system. In addition to expanding the database for evaluating task reliability and validity, this segment of the study forms the primary test of the research hypotheses. Once trained, participants will also engage in one of two additional studies to evaluate the effects of training decay or risk factors. By the end of Year 2, a validated cognitive performance assessment and training intervention system in the area of visuomotor skill will be available for incorporation in the BIO-Plex, an excellent test-bed for isolation and confinement effects. The self-assessment system can be introduced as part of the daily routine, including extended periods with minimal training on future operational tasks, and can assess the impact of risk factors related to fatigue and sleep loss.

Research Activity - Year 3


Year 3 will focus on a multivariate expansion of the system, developing additional dynamic load tasks in the areas of spatial processing, directed attention, time-sharing, and resource allocation, along with a multivariate scoring approach to assess readiness to perform.
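One simple form such multivariate scoring could take (an assumption for illustration, not the approach the project will necessarily adopt) is to standardize each task measure against the operator's own baseline sessions and average the standardized scores into a single readiness index.

```python
from statistics import mean, stdev

def rtp_score(baseline_sessions, current_session):
    """Composite readiness-to-perform score (illustrative).

    Each task measure in the current session is standardized against
    the operator's own baseline distribution, and the z-scores are
    averaged. A score near 0 indicates baseline-level performance;
    a large negative score flags a possible "at risk" state.

    `baseline_sessions` is a list of {task: score} dicts from training;
    `current_session` is one {task: score} dict. Higher raw scores are
    assumed to mean better performance.
    """
    z_scores = []
    for task, current_value in current_session.items():
        history = [session[task] for session in baseline_sessions]
        mu, sigma = mean(history), stdev(history)
        z_scores.append((current_value - mu) / sigma)
    return sum(z_scores) / len(z_scores)
```

An operational system would likely weight measures by their sensitivity to specific risk factors rather than averaging them equally, but the per-measure standardization against an individual baseline is the core of the idea.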

INDEX TERMS: cognitive performance, countermeasures, Performance Assessment Workstation (PAWS), readiness to perform (RTP), remote manipulator system (RMS), self-assessment, training, visuomotor performance.

Individuals and Cultures in Social Isolation
Dr. JoAnna Wood


The proposed research is designed to study the roles of personality, culture, and group influences on behavior, performance, and health outcomes in winter-over crews at Antarctic research stations. These remote and isolated habitations provide an environment analogous to long-duration space missions, such as those planned for the International Space Station and, eventually, a piloted expedition to Mars. The ultimate objectives of this project are to:

1. Increase our understanding of the effects of personality, culture, and group characteristics on both individual and group performance in extreme environments.
2. Identify those elements of leadership that maximize crew functioning in extreme environments.
3. Understand how individual and group factors affect physical health under prolonged stress.

We will examine changes in weekly self-assessments of individual and group adaptation, monthly levels of several neuropeptides, and other health outcomes, as a function of individual characteristics (personality, demographics, personal history), group characteristics (leader traits, culture mix, group tensions), and local events. The study will use Hierarchical Linear Modeling to partition variance in the dependent variables among relevant individual, group, and time factors.
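Hierarchical Linear Modeling would normally be fit with dedicated statistical software; as a minimal illustration of the underlying variance-partitioning idea, the sketch below performs a one-way random-effects decomposition of a single measure into between-group and within-group components (balanced groups assumed; the function and all data are hypothetical).

```python
from statistics import mean

def variance_components(groups):
    """One-way random-effects variance decomposition (the simplest
    case of a hierarchical model): splits variance in a measure into
    a between-group component and a within-group component.

    `groups` maps group id -> list of individual scores; this sketch
    assumes every group has the same number of members.
    """
    n = len(next(iter(groups.values())))   # members per group
    k = len(groups)                        # number of groups
    group_means = {g: mean(scores) for g, scores in groups.items()}
    grand = mean(group_means.values())
    # mean squares, as in a one-way ANOVA
    ms_between = n * sum((m - grand) ** 2
                         for m in group_means.values()) / (k - 1)
    ms_within = sum((x - group_means[g]) ** 2
                    for g, scores in groups.items()
                    for x in scores) / (k * (n - 1))
    var_between = max((ms_between - ms_within) / n, 0.0)
    icc = var_between / (var_between + ms_within)  # intraclass correlation
    return var_between, ms_within, icc
```

A high intraclass correlation would mean that station-level factors (leadership, culture mix, group tensions) explain much of the variation in an outcome, which is precisely the kind of question the multilevel analysis is designed to answer.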
