New research examines trust in AI as first responders train with robotic teammates

KENNESAW, Ga. | Mar 11, 2026

Hansol Rheem
When a mass-casualty event like an earthquake or transportation accident occurs, every second counts. Emergency responders must make life-saving decisions quickly, assessing victims and prioritizing injuries under intense pressure.

Kennesaw State University researcher Hansol Rheem is exploring how virtual reality and robotic teammates could help prepare emergency responders for those moments. A cognitive psychologist and human factors engineer, Rheem focuses his research on improving human–AI teamwork, particularly in high-anxiety, time-pressured scenarios.

Rheem's study examines how people train and perform when paired with a robot teammate in a simulated emergency scenario in which robots help with triage. The aim is to find out whether this technology improves learning, but the research also explores a psychological question: how framing the robot's role as an observer, collaborator, or competitor changes the way people use the robot as a learning companion.

"Mass casualty triage has been at the center of interest in the human factors community because of its unique situation," said Rheem, an assistant professor of psychology in the Norman J. Radow College of Humanities and Social Sciences. "These situations are complex and spontaneous, so effective planning and training are critical."

Training for these high-pressure situations usually takes the form of lecture-based instruction or live simulations staged in large spaces such as gyms, where volunteers pose as injured victims.

"Lecture-based training is efficient, but it's not as interactive or engaging as it should be," Rheem said. "Live simulations are more realistic and effective, but they take a lot of time and money to set up."

Rheem and his team designed a video game that recreates the aftermath of a mass casualty event. Participants navigate the scenario in either a computer-based or a virtual reality version of the game, examining patients' vital signs and making triage decisions. They are partnered with a robot teammate that provides subtle hints during the simulation to help them categorize and prioritize patients.

Study participants were divided into three groups. A control group called the observer group was told the robot was operated remotely by another human. The collaboration group was told the robot was powered by artificial intelligence, capable of thinking independently and working with them as a collaborator. The third group, the competitor group, was told they were competing against the AI-powered robot.

Initially, Rheem had expected that the participants who believed the robot was autonomous, those in the collaborator and competitor groups, would treat it as a true collaborator, pay more attention to its messages, and show the greatest learning gains.

Instead, participants in the observer and collaborator groups attended to the robot's messages more frequently and felt more connected to it than those in the competitor group.

Participants in the observer group, who believed the robot was controlled by a human, showed the greatest learning gains. They also attributed their success to the robot and perceived its intentions to be clearer. The collaborator group was more inclined to blame failures on the robot.

Rheem relates these findings to the psychological concept of trust in AI, which refers to people鈥檚 attitudes about the extent to which an AI agent will help them accomplish their goals.

"When we believe the robot is controlled by a human, we may set lower expectations than we would for an autonomous robot. Expectations for an autonomous robot can sometimes be unrealistic, much like how many people expect near-perfect performance from systems like ChatGPT or Gemini.

"But when those expectations are not met, for example when the AI gives a hint rather than a specific answer, trust in the AI can drop quickly and we may become more inclined to blame the AI and ignore its advice."

"Working on this project allowed me to apply what I've learned in class to real research, from collecting data to analyzing results," said Drey Bailey, a psychology junior who worked alongside Rheem on the study. "It's interesting to see how this study can open the door to other types of VR training designed to improve decision-making in high-anxiety, time-pressured situations."

By understanding how trust and the framing of an AI's role shape collaboration, Rheem hopes the study will help prepare professionals to work confidently alongside intelligent machines.

"As AI becomes more integrated into society, we're going to see more hybrid teams which include both humans and artificial intelligence systems working together," Rheem said. "We're already seeing it with self-driving cars. When the system makes a mistake, the human has to step in. If we can design training that strengthens both expertise and collaboration with AI, that's a significant step forward. It's about preparing them for a future where they're asked to coordinate closely with AI systems. We need to be ready when that day comes."

The team is now expanding the study to test various methods that help learners gradually develop the right level of trust in the AI, so they are more likely to listen to its advice.

– Story by Christin Senior

Photos by Darnell Wilburn

A leader in innovative teaching and learning, Kennesaw State University offers undergraduate, graduate, and doctoral degrees to its more than 51,000 students. Kennesaw State is a member of the University System of Georgia with 11 academic colleges. The university's vibrant campus culture, diverse population, strong global ties, and entrepreneurial spirit draw students from throughout the country and the world. Kennesaw State is a Carnegie-designated doctoral research institution (R2), placing it among an elite group of only 8 percent of U.S. colleges and universities with an R1 or R2 status. For more information, visit kennesaw.edu.