Hi, I am a fifth-year Ph.D. student in the Intelligent Systems Program at the University of Pittsburgh. I work with Prof. Michael Lewis and collaborate closely with Prof. Katia Sycara at Carnegie Mellon University.
My research aims to facilitate collaboration between humans and artificially intelligent (AI) agents. I have been working on transparency, trust, adaptation, communication, and Theory of Mind reasoning in human-agent teams. I am also interested in applying research methods from Psychology to study the behaviours of machines, such as reinforcement learning (RL) and large language model (LLM) agents.
I have worked as a Research Intern at Honda Research Institute USA and Alibaba Group. I received my Master's degree in Information Science here at Pitt. Before coming to Pittsburgh, I obtained my Bachelor's degree in Applied Psychology from Zhejiang University, where I worked with Prof. Zaifeng Gao.
E-mail / CV / Google Scholar
While Large Language Models (LLMs) have demonstrated impressive accomplishments in both reasoning and planning, their abilities in multi-agent collaboration remain largely unexplored. This study evaluates LLM-based agents in a multi-agent cooperative text game with Theory of Mind (ToM) inference tasks, comparing their performance with Multi-Agent Reinforcement Learning (MARL) and planning-based baselines. We observed evidence of emergent collaborative behaviours and higher-order Theory of Mind capabilities among LLM-based agents.
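To make the setup concrete, here is a minimal sketch of an LLM agent that answers a first-order ToM probe about a teammate's beliefs before choosing its own action. `call_llm`, the prompt wording, and the game interface are placeholders I am assuming for illustration, not the study's actual implementation.

```python
# Sketch of an LLM agent that reasons about a teammate's beliefs (a
# first-order ToM probe) before acting in a cooperative text game.

def call_llm(prompt: str) -> str:
    """Stub: replace with any chat-completion backend."""
    raise NotImplementedError

def tom_probe(history: list[str], teammate: str) -> str:
    # First-order ToM question: what does the teammate currently believe?
    prompt = (
        "You are playing a cooperative text game.\n"
        "Dialogue and observations so far:\n" + "\n".join(history) +
        f"\nQuestion: what does {teammate} currently believe about the task?"
    )
    return call_llm(prompt)

def act(history: list[str], teammate: str) -> str:
    belief = tom_probe(history, teammate)  # reason about the teammate first
    prompt = (
        "Observations so far:\n" + "\n".join(history) +
        f"\nYour estimate of {teammate}'s belief: {belief}\n"
        "Choose your next action and, optionally, a message to the team."
    )
    return call_llm(prompt)
```

Conditioning the action prompt on the inferred teammate belief is what lets the ToM probe influence behaviour rather than being a standalone quiz.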
Learning interpretable communication is essential for multi-agent and human-agent teams (HATs). In multi-agent reinforcement learning for partially observable environments, agents may convey information to others via learned communication, allowing the team to complete its task. However, the utility of discrete and sparse communication has not yet been investigated in human-agent team experiments. In this work, we analyze the efficacy of sparse-discrete methods for producing emergent communication that enables high agent-only and human-agent team performance.
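Below is a minimal sketch of one common way to build a sparse, discrete channel: message tokens are discretized with a straight-through Gumbel-softmax, and a learned gate with a sparsity penalty encourages the agent to stay silent when a message adds no value. The module names and sizes are illustrative assumptions, not the architecture used in our experiments.

```python
# Sparse-discrete communication head: discrete tokens via straight-through
# Gumbel-softmax, plus a learned "speak or stay silent" gate.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseDiscreteComm(nn.Module):
    def __init__(self, obs_dim: int, vocab_size: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.msg_head = nn.Linear(hidden, vocab_size)  # token logits
        self.gate_head = nn.Linear(hidden, 1)          # speak / stay silent

    def forward(self, obs: torch.Tensor, tau: float = 1.0):
        h = self.encoder(obs)
        # One-hot token in the forward pass, soft gradients in the backward
        # pass (straight-through estimator).
        token = F.gumbel_softmax(self.msg_head(h), tau=tau, hard=True)
        gate = torch.sigmoid(self.gate_head(h))  # in (0, 1)
        message = gate * token                   # zeroed out when silent
        sparsity_penalty = gate.mean()           # add to the training loss
        return message, sparsity_penalty

comm = SparseDiscreteComm(obs_dim=10, vocab_size=8)
msg, penalty = comm(torch.randn(4, 10))  # batch of 4 observations
```

Keeping the vocabulary small and penalizing the gate is what makes the emergent protocol both discrete and sparse, and hence easier for human teammates to interpret.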
The ability to infer another human's beliefs, desires, intentions, and behaviour from observed actions, and to use those inferences to predict future actions, is known as Theory of Mind (ToM). In collaboration with researchers at CMU, we are working to develop AI agents capable of making such ToM inferences. This research started with modelling a single human's mental state in a search and rescue task and was later extended to modelling the team state (i.e., the shared mental model) of three human rescuers. A toy sketch of this style of inference appears after the publications below.
- Li, H., Fan, Y., Zheng, K., Lewis, M., & Sycara, K. (2023). Personalized Decision Supports based on Theory of Mind Modeling and Explainable Reinforcement Learning. In 2023 IEEE International Conference on Systems, Man, and Cybernetics (SMC 2023). IEEE.
- Li, H., Oguntola, I., Hughes, D., Lewis, M., & Sycara, K. (2022, August). Theory of Mind Modeling in Search and Rescue Teams. In 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) (pp. 483-489). IEEE.
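For intuition, here is a toy sketch of one standard approach to ToM inference, Bayesian inverse planning: maintain a posterior over a human's goal from observed movements, assuming a noisily rational actor. The grid, goal locations, and rationality parameter are hypothetical, not the actual search and rescue model.

```python
# Toy Bayesian ToM inference: update a posterior over which victim a rescuer
# is heading towards, assuming actions follow a softmax over progress-to-goal.
import math

GOALS = {"victim_A": (0, 4), "victim_B": (4, 0)}  # hypothetical locations
BETA = 2.0  # rationality: higher = more deterministic action choice

def action_likelihood(pos, action, goal):
    """P(action | pos, goal) via softmax over negative distance-to-goal."""
    moves = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}
    def q(a):
        nx, ny = pos[0] + moves[a][0], pos[1] + moves[a][1]
        return -math.dist((nx, ny), goal)
    z = sum(math.exp(BETA * q(a)) for a in moves)
    return math.exp(BETA * q(action)) / z

def update_posterior(prior, pos, action):
    post = {g: prior[g] * action_likelihood(pos, action, loc)
            for g, loc in GOALS.items()}
    z = sum(post.values())
    return {g: p / z for g, p in post.items()}

belief = {g: 1 / len(GOALS) for g in GOALS}      # uniform prior over goals
belief = update_posterior(belief, (2, 2), "up")  # moving up favours victim_A
print(belief)
```

The same machinery extends from a single rescuer's goal to a team-level shared mental model by placing the posterior over joint hypotheses rather than individual goals.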
The ability to collaborate with previously unseen human teammates is crucial for artificial agents to be effective in human-agent teams (HATs). However, it is hard to develop a single agent policy that matches all potential teammates. In this research, we study both human-human and human-agent teams in a dyadic cooperative task, Team Space Fortress (TSF). Human-human team results show that team performance is influenced by both players' individual skill levels and their ability to collaborate with different teammates by adopting complementary policies. We propose an adaptive agent that identifies different human policies and assigns a complementary partner policy to optimize team performance. Evaluations in HATs indicate that both human adaptation and agent adaptation contribute to team performance.
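A minimal sketch of the adaptation idea: score each known human policy model by the likelihood of the partner's recent actions, then switch to a pre-trained complementary agent policy. The policy names, action probabilities, and matching table are illustrative assumptions, not the actual TSF policies.

```python
# Identify the human partner's policy by log-likelihood, then look up the
# complementary agent policy to deploy.
import math

# Hypothetical library of human policy models: (state, action) -> probability.
HUMAN_POLICIES = {
    "aggressive": lambda s, a: 0.7 if a == "attack" else 0.1,
    "defensive":  lambda s, a: 0.7 if a == "evade" else 0.1,
}
# Pre-trained complementary partner policy for each identified human type.
COMPLEMENT = {"aggressive": "support_policy", "defensive": "striker_policy"}

def identify(trajectory):
    """Pick the human policy model that best explains the trajectory."""
    scores = {
        name: sum(math.log(pi(s, a)) for s, a in trajectory)
        for name, pi in HUMAN_POLICIES.items()
    }
    return max(scores, key=scores.get)

trajectory = [(None, "attack"), (None, "attack"), (None, "evade")]
human_type = identify(trajectory)
print(human_type, "->", COMPLEMENT[human_type])  # aggressive -> support_policy
```

Running the identification online over a sliding window of recent actions would let the agent re-match when the human changes strategy mid-game.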
As domestic service robots become more common and widespread, they must be programmed to act appropriately, accomplishing tasks efficiently while aligning their actions with relevant norms. We are taking the first step towards equipping domestic robots with normative reasoning competence, i.e., an understanding of the norms that people apply to the behaviour of robots in specific social contexts.
Deep reinforcement learning (DRL) has made remarkable achievements in diverse domains. However, the inner workings of DRL systems remain opaque to both end-users and designers. Our research proposes new methods to increase the transparency of DRL systems so that they can be better accepted and trusted.
The interaction between swarm robots and human operators differs significantly from traditional human-robot interaction due to unique characteristics of the system, such as high cognitive complexity and difficulty in state estimation. This project focused on the human factors involved in this interaction, including trust, level of automation, and system transparency.
Studied driver distraction caused by multimodal interactions with in-car smart devices in simulated driving environments.
Investigated the influence of pictorial realism on the comprehension of safety briefing cards, combining eye-tracking with a mock-cabin manipulation task.
Explored user experience issues in e-commerce websites and mobile apps, such as the layout of category navigation and the information architecture of product detail pages.