In light of the U.S. Army’s intent to leverage advances in artificial intelligence (AI) to augment dismounted Soldier lethality through in-scope and heads-up display-based augmented target recognition (ATR) systems, the Combat Capabilities Development Command (CCDC) – U.S. Army Research Laboratory’s (ARL’s) Human Research and Engineering Directorate (HRED) identified several critical gaps that must be addressed to effectively team the Soldier with ATR for the desired augmented lethality. One of these gaps pertains to the way ATR is displayed, which requires thoroughly understanding and leveraging the cognitive processes that will enable this technology. Additionally, insufficient consideration of perceptual, attentional, and cognitive capabilities increases the risk of burdening the Soldier with excessive, unnecessary, or distracting representations of information, which may impede lethality rather than augment it. HRED’s planned and ongoing research is intended to develop novel mechanisms through which Soldiers teamed with ATR will perform more adaptively and effectively than either the Soldier or the intelligent system could individually. Drawing on HRED’s significant expertise in the cognitive sciences, coupled with familiarity with military-relevant domain spaces, we make the following initial recommendations for ATR information display requirements:
- ATR highlighting should leverage a non-binary display schema to continuously encode threat information (e.g., target class/identity, uncertainty, and prioritization).
- ATR highlighting should be integrated with the target itself instead of functioning as a discrete feature of the display (i.e., highlight the target rather than highlighting a region with the target inside).
- Information about threat certainty or classification confidence (which can also include priority) should be embedded into ATR highlighting.
- Yellow highlights may offer advantages for display.
- Changing information (e.g., target certainty) should be accomplished through formation or modification of highlight gradients rather than sudden changes in the display.
- Human performance evaluations of ATR should consider incorporating changing threat states and contexts into scenarios for more ecologically relevant findings.
- Human performance evaluations of ATR should consider incorporating uncued (nonhighlighted) targets and miscued targets (false identifications; e.g., ATR identifies non-threat as threat) for more relevant findings.
INITIAL CONSIDERATIONS FOR TARGET ACQUISITION SYSTEMS’ ATR DISPLAY
The Army plans to leverage advances in AI through implementation into future dismounted Warfighter systems to augment situational awareness and target acquisition capabilities. The unique constraints of dismounted operations necessitate a cognitive-centric approach in which human capabilities are effectively teamed with intelligent systems. This will provide a total systems performance capability that exceeds what the Soldier or the system can accomplish individually. Successfully teaming the human and AI in this manner will enable the Soldier to allocate his/her limited cognitive resources more effectively, decreasing the time and increasing the accuracy of target identification and engagement decisions while simultaneously enabling greater situational awareness through the target acquisition system.
Design principles developed to support and accelerate, not replace, Soldier decision making are central to the success of ATR and other intelligent systems. Given the constraints of technology, coupled with the dynamics of the battlefield (or any complex, real-world context), it is important to consider ATR implementations that convey real-time information about the status of threats in the environment. Further, because intelligent ATR systems will not perform perfectly (e.g., when classifying threat status across subtly different target categories, or due to obscured sensors and limited training data sets), efficient target detection and engagement decision making will also depend on conveying real-time fluctuations in classification certainty in a manner that is intuitive and reliable.
In addition to algorithm uncertainty, it is conceivable that the probable threat status of an actor on the battlefield, particularly as determined by ATR, will also fluctuate. For example, someone with a weapon may conceal it, and someone else may pull out a weapon that was previously undetected. Conventional approaches to ATR often do not account for fluctuations in target state or system uncertainty, focusing instead on a binary system in which targets are statically highlighted as either threats or non-threats. When consideration is paid to fluctuations in the probability of a given target being a threat, the fluctuation is framed in terms of algorithm confidence in its classification rather than in terms of real-world dynamics that may render the actual target threat state uncertain (e.g., a target with a weapon that is not consistently in view).
AREAS OF CONCERN
Such conceptualizations, if implemented on the real-world battlefield, may result in target highlights that frequently change from one threat category to another (e.g., green to red, or highlighted to unhighlighted). Several potential areas of concern are associated with this, including the following:
- Inefficiency of the ATR display to convey intuitive information: rapidly changing between threat categories negates the recognition component of the ATR and reduces it to automatic target detection (ATD).
- Inefficiency of the ATR display to convey usable information: high-certainty non-threat targets may appear more salient than low-certainty threat targets.
- Inefficient or detrimental allocation of attentional resources:
- Rapid changes in the display can create a high-salience, distracting cue to attention, resulting in unintended attention capture. For example, switching between colors or other means of conveying categorical distinctions may effectively display as a flicker or result in tunnel vision on specific regions of an image or environment at the cost of dispersed attention across other regions where targets may be present.
- Targets initially displayed as non-threats may trigger inhibition of attention to the target location, thus failing to capture attention upon target state change or even the appearance of a threat target near that location.
- Distributed attentional resources across all highlighted targets (e.g., as in ATD) will reduce processing allocated to true threats [1].
- Crowding visual information may reduce the ability to discriminate between targets and nontargets [2]; information displays must consider perceptual limitations, such as the drop-off in visual acuity outside the fovea.
- Ineffective engagement decision making: the human may equally distribute attentional resources across similarly appearing targets without understanding that one target may be a high-certainty threat while another may be a low-certainty threat.
IMPLICATIONS
Significant work is needed to understand the underlying cognitive processes critical to effective target acquisition and engagement decisions and to translate that understanding into novel ways to most effectively display information at the point of need. It is essential that these methods consider, complement, and leverage these cognitive processes as mechanisms for effective human-AI pairing that go beyond simply adding more information to the dismounted Soldier’s already-burdened cognitive load. However, based on a holistic consideration of the battlefield dynamics and system capabilities discussed here, certain implications can already be leveraged. These include the following.
1. Conveying Uncertainty Information to Aid Engagement Decision Making
As described in Geuss et al. [3], future ATR systems are unlikely to perfectly categorize targets as threats or nonthreats due to partially occluded targets, imperfect data for training machine learning algorithms, and an inability to understand or integrate contextual constraints on target relevance. Uncertainty in target classification will also arise from the nature of the dynamic battlefield. Enemy targets will adapt within and across engagements by concealing weapons, altering tactics, and employing deception. ATR systems are therefore likely to either falsely cue targets that are not threats (false alarms) or leave threatening targets unnoticed (misses). However, quantifying the associated uncertainty about target classification and communicating it in an intuitive manner will improve decision making and promote greater trust in the ATR system’s capability.
Several papers have demonstrated that communicating uncertainty information can improve decision making [4–6]. However, the way in which uncertainty information is displayed (e.g., the specific visual encoding method used) can determine whether people ignore it or effectively integrate it into their engagement decisions. For example, people use common schemas to interpret representations of information; when an encoding conflicts with those schemas, the result can be misinterpretation, slower processing, inappropriate generalization, and incorrect decisions.
Another example is the “cone of uncertainty” used to represent the potential path of a hurricane; its growing size is often misinterpreted as depicting the growing size and danger of the hurricane itself rather than decreasing certainty about the storm’s future path [5]. Additional research is needed to identify optimal visual encoding techniques for communicating uncertainty in target classification, based on an understanding of common cognitive heuristics in operational contexts and of how encoding methods could adapt to Soldier state and dynamics. It is clear, however, that such techniques are essential to ensuring that proper engagement decisions are made.
2. Conveying Threat Information Along a Continuum Rather Than in Two Discrete (Binary) Categories
A full treatment of the limitations of conventional computer-aided visual display techniques is beyond the scope of this article. However, Kneusel and Mozer [7] provide a compelling case for “soft highlighting,” in which the boundaries between the target, highlight, and environment are blurred, as opposed to “hard highlighting,” the more typical bounding box (or shape) consisting of an augmented reality (AR) object that is distinct from its content and overlaid onto the scene. The authors describe soft highlighting as a means to reduce the detrimental effect that ATR and similar systems have on detecting uncued targets (those missed by the system). While numerous mechanisms may cause this effect (the subject of future research), the finding is consistent with results from radiology and related literatures.
That literature has shown that computer-aided detection systems, which use traditional hard highlights to assist radiologists in detecting tumors in scans, yield very little net gain in detecting and identifying tumors [8, 9]. The soft highlighting approach leverages opacity to signify target certainty, allowing identification of uncertain detections that do not cross the threshold for target status required for visualization under a binary approach. Additionally, soft highlighting is less likely to restrict attention exclusively to targets or to obscure adjacent portions of an image or environment.
In addition to the benefits of soft highlighting laid out by Kneusel and Mozer [7], its advantages are consistent with findings suggesting that having to selectively attend to individual features of an object representation may come at a cost to active visual working memory maintenance processes [10]. A hard highlight distinct from its content may require the viewer to attend to both the highlight itself and the content within it to derive all required information. The same concern applies to portraying uncertainty as a distinct feature (e.g., a percentage displayed alongside the highlight).
Visual working memory (VWM) has limited capacity. Processing conjunctions of features within an object complicates the object’s representation, thereby taxing VWM resources and possibly resulting in less effective (e.g., slower and/or less accurate) processing (see Schneegans and Bays [11] for a comprehensive review). This is consistent with Treisman’s [12] Feature Integration Theory, which posits that variations along a single feature dimension can be processed in parallel, in contrast to an equal number of distinct features (e.g., three shades of the same hue vs. three different hues). As such, presenting information about targets in a way that supports a strong, cohesive object representation minimizes the additional processing associated with multiple features that must be separately attended and bound to form a percept. This may better support the desired intent of the ATR display.
Additionally, studies using static cuing paradigms indicate a very rapid decay of enhanced processing effects (e.g., Von Grünau et al. [13]). Burra and Kerzel [14] found that attention capture by a salient distractor is inhibited by the predictability of the presented target (i.e., the same or a similar target across all search trials), which is consistent with suppression mechanisms whose efficacy is moderated by the changing (in this case, unchanging) cognitive demands of the task [15]. This may indicate an advantage for cues that are somewhat nonstatic yet predictable/consistent, where attention is allocated efficiently to cued targets within the usable field of view.
Of course, a cuing mechanism that is too dynamic or unpredictable may have other detrimental effects. The sudden onset of novel stimuli can capture attention and distract viewers from their primary task, particularly in cases of similarity between the distractor and the true target [16]. Distraction of attention from a given location can reduce perceptual sensitivity at that location (where attention should be allocated [17]), as well as result in other perceptual effects (e.g., modifications to motion perception [18]). Finally, misallocations of attention to a distractor are associated with delayed attention allocation to the relevant target [19].
Note that several efforts suggest such attention capture is largely under cognitive control (e.g., Theeuwes [20]). However, when inappropriate attention capture is reduced, it is often accomplished through mechanisms of inhibiting processing of (reactive mechanism) and suppressing responses to (proactive mechanism) distractors (see Geng [21]). This is not necessarily an ideal effect to invoke with a system intended to ensure attention can be cued as needed to multiple objects (targets) within the scene. Additionally, the amplitude of the event-related potential (ERP) associated with attention (i.e., the N2pc) is reduced during target processing in the presence of a distractor, even one that failed to elicit that ERP itself. This suggests that even when cognitive control prevents attention from being captured by distracting stimuli, it does not eliminate the negative impact of the distractor’s presence [22].
A soft highlighting technique lends itself well to conveying a continuum of certainty in a nondistracting manner. A low-salience, soft highlight can be applied to all targets (e.g., people) detected within the scene, with changes in a relevant dimension (e.g., opacity, intensity, or size) tied to fluctuations in threat certainty. This design supports the parallel feature processing described by Feature Integration Theory and may strike the much-needed balance between static and dynamic cuing paradigms to optimize attentional allocation. In such an implementation, all targets may softly “glow” in a uniform hue, distinguishing them from the rest of the scene for ease of visual access. As the probability of threat associated with a given target increases, the highlight becomes more salient (e.g., brighter); salience may, in turn, decrease as the threat state or the certainty of that state changes.
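To make this concrete, the minimal sketch below maps a continuous threat certainty onto the opacity of a uniform-hue soft highlight. The function name, hue choice, and alpha bounds are illustrative assumptions, not a fielded ATR specification:

```python
def soft_highlight(threat_certainty: float,
                   base_rgb=(1.0, 1.0, 0.0),  # assumed uniform hue (yellow; see Section 3)
                   min_alpha=0.15,            # faint glow so all detections remain visible
                   max_alpha=0.85):           # never fully opaque, to avoid obscuring the target
    """Map a threat certainty in [0, 1] to an RGBA soft-highlight color."""
    c = min(max(threat_certainty, 0.0), 1.0)         # clamp to [0, 1]
    alpha = min_alpha + c * (max_alpha - min_alpha)  # continuous, nonbinary encoding
    return (*base_rgb, alpha)

# Example: every detection glows faintly; higher-certainty threats glow brighter.
for certainty in (0.1, 0.5, 0.9):
    print(certainty, soft_highlight(certainty))
```

Because certainty modulates only one dimension (opacity) of a single object-bound highlight, this encoding remains consistent with the parallel feature processing discussed above.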
This method of displaying ATR may offer several advantages by minimizing the need to attend to individual features of an object and by supporting and facilitating efficient, feature-based object binding. Derived from consideration of the underlying visual-cognitive processes, this method may distinguish targets from background clutter and provide usable, intuitive information about relative target importance to the Soldier while minimizing the potential negative effects associated with battlefield uncertainty and limited attentional resources.
Furthermore, continuous increments of salience can be implemented gradually to optimize the trade-off between the decay of responses to static/consistent cues and inappropriate attention capture by excessively dynamic cues. This implicitly manipulates representations of target salience to reduce the likelihood of attentional capture due to sudden changes in saliency (see Figure 1). Targets in a visual search task, shown unhighlighted in (A), can be presented by ATR using many different strategies, including hard binary highlights that appear less intrusive than typical bounding boxes (B) or soft highlights that convey nonbinary information by varying the brightness (C) or size (D) of the highlights. Softer highlighting enables higher-dimensional information to be conveyed to the human while simultaneously minimizing the distraction and environmental obscuration induced by the highlight itself.
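One way to realize such gradual increments is sketched below, assuming a simple exponential-smoothing update and a notional 30-Hz display loop (both our own illustrative choices, not a prescribed implementation):

```python
import math

def smooth_salience(displayed: float, target: float,
                    dt: float, time_constant: float = 0.5) -> float:
    """Move the displayed salience toward the ATR-reported target value;
    a larger time_constant produces slower, less capture-prone change."""
    step = 1.0 - math.exp(-dt / time_constant)  # fraction of the gap closed this frame
    return displayed + step * (target - displayed)

# Example: ATR certainty jumps from 0.2 to 0.9; the displayed salience ramps
# up over several frames instead of flickering to the new value.
displayed, target = 0.2, 0.9
for frame in range(10):
    displayed = smooth_salience(displayed, target, dt=1 / 30)  # assumed 30-Hz display
    print(f"frame {frame}: salience = {displayed:.2f}")
```

The time constant would need empirical tuning to balance responsiveness against the attention-capture costs of sudden onsets discussed earlier.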
3. Color of Highlight
Research is needed to ascertain the most appropriate color for highlighting targets via ATR. Logic suggests that preexisting associations exist for colors such as red and green; however, these hues may be nonideal because of potential confusion with a red reticle or green foliage, respectively, and because of perceptual issues for color-blind viewers. Tombu et al. [23] and Reiner et al. [24] demonstrated the utility of yellow highlights in their ATR simulation experiments, which serves as a recommended starting point. It should be noted, however, that these experiments were conducted in indoor simulator environments. The interaction of this color with natural light, time of day, and the type of outdoor environment requires further investigation.
4. Performance Characterization Efforts That Realistically Depict the Fluctuating State of Certainty (System and Human Driven)
Understanding the true impact of conveying uncertainty to Soldiers through ATR or similar systems requires evaluating potential display techniques under circumstances likely to interact with technique effectiveness. In the case of fluctuating battlefield certainty, we recommend incorporating scenarios into evaluations that include changes in certainty associated with naturalistic human behavior in the real world. These can include object-based obscuration of weapon systems (e.g., a threat with a weapon walks through brush that obscures the weapon), intentional obscuration of weapon systems (e.g., a weapon is put away or hidden on the person), and new manifestations of weapon systems on existing actors (e.g., a person takes out a weapon), with the ATR response adjusted accordingly.
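As an illustration of how such a scenario might be scripted for an evaluation testbed, the sketch below encodes certainty-changing events on a timeline; the actors, event timings, and certainty values are invented for illustration only:

```python
# Hypothetical evaluation scenario: scripted events change the ATR-reported
# threat certainty for each actor over time.
scenario_events = [
    # (time_s, actor_id, event_description,                  atr_certainty)
    (0.0,  "actor_1", "visible, weapon exposed",              0.95),
    (12.0, "actor_1", "walks through brush; weapon obscured", 0.55),
    (20.0, "actor_1", "emerges; weapon exposed again",        0.95),
    (5.0,  "actor_2", "visible; no weapon detected",          0.10),
    (30.0, "actor_2", "draws previously hidden weapon",       0.80),
]

def certainty_at(actor_id: str, t: float) -> float:
    """Return the most recently scripted ATR certainty for an actor at time t."""
    history = [(ts, c) for ts, a, _, c in scenario_events if a == actor_id and ts <= t]
    return max(history)[1] if history else 0.0  # 0.0 = not yet detected

print(certainty_at("actor_2", 35.0))  # -> 0.8, after the hidden weapon is drawn
```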
Note that some training will be required to familiarize participants with the construct of continuous threat ATR.
5. General Performance Characterization Considerations
To truly understand the impact of ATR and related features on Soldier engagement performance, ATR successes and failures must be considered in performance evaluations. These include, but are not limited to, constructs from traditional signal detection theory: hits (correctly labeled threat targets), correct rejections (correctly unlabeled nonthreat targets), misses (failures to label threat targets), and false alarms (mislabeled nonthreat targets). Understanding the way in which the ATR display interacts with human visual and cognitive processes in this context is particularly relevant to evaluating Soldier-ATR performance.
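As a worked illustration, the sketch below scores engagement outcomes using these signal detection constructs; the counts are placeholders, and the log-linear correction is one common convention (an assumption, not a requirement of the evaluation):

```python
from statistics import NormalDist

def sdt_metrics(hits: int, misses: int, false_alarms: int, correct_rejections: int):
    """Compute hit rate, false-alarm rate, and sensitivity (d')."""
    # Log-linear correction keeps rates away from 0 and 1, where z-scores diverge.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse standard-normal transform
    return {"hit_rate": round(hit_rate, 3),
            "fa_rate": round(fa_rate, 3),
            "d_prime": round(z(hit_rate) - z(fa_rate), 3)}

# Example: 40 threat targets (38 labeled, 2 missed); 60 nonthreats (3 mislabeled).
print(sdt_metrics(hits=38, misses=2, false_alarms=3, correct_rejections=57))
```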
CONCLUSIONS
The literature reviewed here and the recommendations made introduce several new research questions that will be addressed over the course of the ARL-HRED Human-AI Interactions for Intelligent Squad Weapons program. However, leveraging our understanding of both the problem space and the relevant literature in support of this program’s scientific development already yields a recommendation for depicting target type and uncertainty in a way that accounts for the cognitive implications of ATR display. Further, empirical evaluation scenarios that allow performance to be characterized under real-world certainty state changes will provide a deeper understanding of how uncertainty information can affect target acquisition and engagement decisions. A trade-off is anticipated between optimizing response to targets, optimizing detection of uncued targets, and other critical aspects of performance across the usable field of view. An informed conversation about that trade-off is necessary to influence Army decisions toward Soldier-centric, optimized target acquisition systems.
ACKNOWLEDGMENTS
Chloe Callahan-Flintoft was supported by the U.S. Army Research Laboratory’s Postdoctoral Fellowship Program administered by the Oak Ridge Associated Universities under Cooperative Agreement Number W911NF-16-2-0008. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of CCDC ARL or the U.S. government. The U.S. government is authorized to reproduce and distribute reprints for government purposes notwithstanding any copyright notation.
REFERENCES
- Wolfe, J. M. “Guided Search 2.0: A Revised Model of Visual Search.” Psychonomic Bulletin & Review, vol. 1, no. 2, pp. 202–238, 1994.
- Whitney, D., and D. M. Levi. “Visual Crowding: A Fundamental Limit on Conscious Perception and Object Recognition.” Trends in Cognitive Sciences, vol. 15, no. 4, pp. 160–168, 2011.
- Geuss, M. N., G. Larkin, J. Swoboda, A. Yu, J. Bakdash, T. White, M. Berthiaume, and B. Lance. “Intelligent Squad Weapon: Challenges to Displaying and Interacting With Artificial Intelligence in Small Arms Weapon Systems.” In Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications, vol. 11006, p. 110060V, International Society for Optics and Photonics, May 2019.
- McKenzie, G., M. Hegarty, T. Barrett, and M. Goodchild. “Assessing the Effectiveness of Different Visualizations for Judgments of Positional Uncertainty.” International Journal of Geographical Information Science, vol. 30, no. 2, pp. 221–239, 2016.
- Ruginski, I. T., A. P. Boone, L. M. Padilla, L. Liu, N. Heydari, H. S. Kramer, M. Hegarty, W. B. Thompson, D. H. House, and S. H. Creem-Regehr. “Non-Expert Interpretations of Hurricane Forecast Uncertainty Visualizations.” Spatial Cognition & Computation, vol. 16, no. 2, pp. 154–172, 2016.
- Munzner, T. Visualization Analysis and Design. AK Peters/CRC Press, 2014.
- Kneusel, R. T., and M. C. Mozer. “Improving Human-Machine Cooperative Visual Search With Soft Highlighting.” ACM Transactions on Applied Perception (TAP), vol. 15, no. 1, p. 3, 2017.
- Fenton, J. J., S. H. Taplin, P. A. Carney, L. Abraham, E. A. Sickles, C. D’Orsi, E. A. Berns, G. Cutter, R. E. Hendrick, W. E. Barlow, and J. G. Elmore. “Influence of Computer-Aided Detection on Performance of Screening Mammography.” New England Journal of Medicine, vol. 356, no. 14, pp. 1399–1409, 2007.
- Fenton, J. J., L. Abraham, S. H. Taplin, B. M. Geller, P. A. Carney, C. D’Orsi, J. G. Elmore, W. E. Barlow, and Breast Cancer Surveillance Consortium. “Effectiveness of Computer-Aided Detection in Community Mammography Practice.” Journal of the National Cancer Institute, vol. 103, no. 15, pp. 1152–1161, 2011.
- Park, Y. E., J. L. Sy, S. W. Hong, and F. Tong. “Reprioritization of Features of Multidimensional Objects Stored in Visual Working Memory.” Psychological Science, vol. 28, no. 12, pp. 1773–1785, 2017.
- Schneegans, S., and P. M. Bays. “New Perspectives on Binding in Visual Working Memory.” British Journal of Psychology, vol. 110, no. 2, pp. 207–244, 2019.
- Treisman, A. “Feature Binding, Attention and Object Perception.” Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences, vol. 353, no. 1373, pp. 1295–1306, 1998.
- Von Grünau, M. W., J. Faubert, M. Iordanova, and D. Rajska. “Flicker and the Efficiency of Cues for Capturing Attention.” Vision Research, vol. 39, no. 19, pp. 3241–3252, 1999.
- Burra, N., and D. Kerzel. “Attentional Capture During Visual Search Is Attenuated by Target Predictability: Evidence From the N2pc, Pd, and Topographic Segmentation.” Psychophysiology, vol. 50, no. 5, pp. 422–430, 2013.
- Gaspelin, N., and S. J. Luck. “The Role of Inhibition in Avoiding Distraction by Salient Stimuli.” Trends in Cognitive Sciences, vol. 22, no. 1, pp. 79–92, 2018.
- Folk, C. L., R. W. Remington, and J. C. Johnston. “Involuntary Covert Orienting Is Contingent on Attentional Control Settings.” Journal of Experimental Psychology: Human Perception and Performance, vol. 18, no. 4, pp. 1030–1044, 1992.
- Theeuwes, J., A. F. Kramer, and A. Kingstone. “Attentional Capture Modulates Perceptual Sensitivity.” Psychonomic Bulletin & Review, vol. 11, no. 3, pp. 551–554, 2004.
- Watanabe, K., and S. Shimojo. “Attentional Modulation in Perception of Visual Motion Events.” Perception, vol. 27, no. 9, pp. 1041–1054, 1998.
- Liesefeld, H. R., A. M. Liesefeld, T. Töllner, and H. J. Müller. “Attentional Capture in Visual Search: Capture and Post-Capture Dynamics Revealed by EEG.” NeuroImage, vol. 156, pp. 166–173, 2017.
- Theeuwes, J. “Exogenous and Endogenous Control of Attention: The Effect of Visual Onsets and Offsets.” Perception & Psychophysics, vol. 49, no. 1, pp. 83–90, 1991.
- Geng, J. J. “Attentional Mechanisms of Distractor Suppression.” Current Directions in Psychological Science, vol. 23, no. 2, pp. 147–153, 2014.
- Hilimire, M. R., and P. M. Corballis. “Event-Related Potentials Reveal the Effect of Prior Knowledge on Competition for Representation and Attentional Capture.” Psychophysiology, vol. 51, no. 1, pp. 22–35, 2014.
- Tombu, M., K. Ueno, and M. Lamb. “The Effects of Automatic Target Cueing Reliability on Shooting Performance in a Simulated Military Environment.” DRDC-RDDC-2016-R036, DRDC Scientific Report, Toronto, Canada, 2016.
- Reiner, A. J., J. G. Hollands, and G. A. Jamieson. “Target Detection and Identification Performance Using an Automatic Target Detection System.” Human Factors, vol. 59, no. 2, pp. 242–258, 2017.
BIOGRAPHIES
GABRIELLA BRICK LARKIN is a research psychologist with ARL, where she focuses on visual perception and human-AI interactions for optimized situational awareness and target acquisition. Dr. Brick Larkin holds a doctorate in experimental psychology: cognition, brain, and behavior from the Graduate Center of the City University of New York.
MICHAEL GEUSS is a research psychologist with ARL’s HRED. He has worked as a research scientist at the Max Planck Institute for Biological Cybernetics, where he received funding from the Alexander von Humboldt fellowship. His research interests include investigating methods to visualize uncertain and dynamic information in AR. Dr. Geuss holds a Ph.D. in cognitive psychology, with a focus on space perception, from the University of Utah.
ALFRED YU is a research psychologist with ARL’s HRED. He has managed collaborations between the Army, academic institutions, and industry to enhance visuospatial task performance using neuromodulatory approaches, including contemplative practice and neurostimulation. He uses supercomputing resources to investigate the effects of neurostimulation on neuronal function. Supported by the U.S. Department of Defense’s Science, Mathematics, and Research for Transformation (SMART) Fellowship, Dr. Yu holds a Ph.D. in cognitive psychology from Washington University in St. Louis, with a focus on spatial cognition and perception-action coupling.
JOE REXWINKLE is a biomedical engineer with ARL’s HRED. His primary research focus is human enhancement, with related interests in machine learning, brain-computer interfaces, and human-autonomy teaming. Dr. Rexwinkle holds a B.S. in bioengineering and a Ph.D. in mechanical engineering from the University of Missouri.
CHLOE CALLAHAN-FLINTOFT is an Oak Ridge Associated Universities journeyman fellow with ARL’s HRED. Her research interests include understanding and modeling how the temporal autocorrelation of an object’s features increases the duration of attentional engagement. Her work has been published in Vision Research and the Journal of Experimental Psychology: General. Dr. Callahan-Flintoft holds a Ph.D. in cognitive psychology from Pennsylvania State University, a B.A. in mathematics and psychology from Trinity College Dublin, and an M.S. in statistics from Baruch College, City University of New York.
JONATHAN Z. BAKDASH is a research psychologist with ARL South at the University of Texas at Dallas and an adjunct associate professor at Texas A&M-Commerce. He was previously a postdoctoral fellow at the Patient Safety Center for Inquiry, Veterans Administration Salt Lake City Health Care System. His current research interests are decision making, human-machine interaction, visual perception, applied statistics, and cybersecurity. Dr. Bakdash holds a B.S. in economics and psychology from the University of Minnesota and a Ph.D. in cognitive psychology from the University of Virginia.
JENNIFER SWOBODA is a research psychologist with the Weapons Branch of the Human Systems Integration Division, Data and Analysis Center at Aberdeen Proving Ground, MD. She is currently involved in numerous test and evaluation efforts for Soldier-systems performance with various small arms and optics systems. Her prior work focused primarily on examining the effects of physical impairment on dismounted Soldier performance. Ms. Swoboda holds a B.A. in psychology from Washington College and has pursued graduate-level course work through Towson University and Virginia Tech.
GREGORY LIEBERMAN is a cognitive neuroscientist at ARL’s HRED researching brain-computer interfaces and human-autonomy teaming. He leads the team in developing the Learning Warfighter-Machine Interface. He conducted predoctoral research at the Mass General Institute for Neurodegenerative Disease and postdoctoral research at the University of New Mexico Psychology Clinical Neuroscience Center and jointly at ARL and the University of Pennsylvania Department of Biomedical Engineering. His primary research interests include human-autonomy teaming, cognitive enhancement, learning-related neuroplasticity, and the overlaps between biological and machine learning. Dr. Lieberman holds a B.A. in psychology from the University of Massachusetts Amherst and a Ph.D. in neuroscience from the University of Vermont.
CHOU P. HUNG is a neuroscientist at ARL’s HRED and an adjunct professor at Georgetown University. He was previously a postdoctoral associate at Massachusetts Institute of Technology and assistant professor of neuroscience at National Yang-Ming University (Taiwan) and Georgetown University. He has published over 35 technical articles on brain computations and circuitry underlying visual recognition and surface perception and is interested in the intersection of neuroscience, machine vision, and human autonomy teaming. Dr. Hung holds a B.S. in biology from California Institute of Technology and a Ph.D. in neuroscience from Yale University.
SHANNON MOORE is a research psychologist and postdoctoral fellow at ARL’s HRED, where her primary research focus is the qualities and characteristics that lead to superior team performance. Dr. Moore holds a Ph.D. in social psychology from the University of Utah.
BRENT LANCE is a research scientist at ARL’s HRED, where he works on improving human-AI integration for dismounted Soldiers. He previously worked at the University of Southern California (USC) Institute for Creative Technologies as a postdoctoral researcher. He is a senior member of the Institute of Electrical and Electronics Engineers (IEEE) and has published over 50 technical articles, including a first-author publication on brain-computer interaction in the “100th Anniversary Edition of the Proceedings of the IEEE.” Dr. Lance holds a Ph.D. in computer science from USC.