Huao Li

Hi, I am a fifth-year Ph.D. student in the Intelligent Systems Program at the University of Pittsburgh. I work with Prof. Michael Lewis and collaborate closely with Prof. Katia Sycara at Carnegie Mellon University.

My research aims to facilitate collaboration between humans and artificial intelligence (AI) agents. I have been working on transparency, trust, adaptation, communication, and Theory of Mind reasoning in human-agent teams. I am also interested in applying research methods from psychology to study machine behaviours, such as those of reinforcement learning (RL) and large language model (LLM) agents.

I have worked as a Research Intern at Honda Research Institute USA and Alibaba Group. I received my Master's degree in Information Science here at Pitt. Before coming to Pittsburgh, I obtained my Bachelor's degree in Applied Psychology at Zhejiang University, where I worked with Prof. Zaifeng Gao.

E-mail / CV / Google Scholar

Current Projects

LLM-Agent Collaboration

While Large Language Models (LLMs) have demonstrated impressive accomplishments in both reasoning and planning, their abilities in multi-agent collaboration remain largely unexplored. This study evaluates LLM-based agents in a multi-agent cooperative text game with Theory of Mind (ToM) inference tasks, comparing their performance with Multi-Agent Reinforcement Learning (MARL) and planning-based baselines. We observed evidence of emergent collaborative behaviours and higher-order Theory of Mind capabilities among LLM-based agents.

- Li, H., Chong, Y. Q., Stepputtis, S., Campbell, J., Hughes, D., Lewis, M., & Sycara, K. (2023). Theory of Mind for Multi-Agent Collaboration via Large Language Models. In EMNLP 2023

Interpretable Communication

Learning interpretable communication is essential for multi-agent and human-agent teams (HATs). In multi-agent reinforcement learning for partially observable environments, agents may convey information to others via learned communication, allowing the team to complete its task. However, the utility of discrete and sparse communication had not yet been investigated in human-agent team experiments. In this work, we analyze the efficacy of sparse-discrete methods for producing emergent communication that enables high agent-only and human-agent team performance.

- Karten, S., Kailas, S., Li, H., & Sycara, K. (2023). On the Role of Emergent Communication for Social Learning in Multi-Agent Reinforcement Learning. In AAMAS 2023
- Karten, S., Tucker, M., Li, H., Kailas, S., Lewis, M., & Sycara, K. (2023). Interpretable learned emergent communication for human-agent teams. IEEE Transactions on Cognitive and Developmental Systems.
- Tucker, M., Li, H., Agrawal, S., Hughes, D., Sycara, K., Lewis, M., & Shah, J. A. (2021). Emergent discrete communication in semantic spaces. Advances in Neural Information Processing Systems (NeurIPS 2021), 34, 10574-10586.

Modelling Theory of Mind

The ability to infer another human's beliefs, desires, intentions, and behaviour from observed actions, and to use those inferences to predict future actions, has been described as Theory of Mind (ToM). In conjunction with researchers at CMU, we are working to develop AI agents capable of making such ToM inferences. This research started with modelling a single human's mental state in a search and rescue task, and was later extended to modelling the team state (i.e., the shared mental model) of three human rescuers.


- Li, H., Fan, Y., Zheng, K., Lewis, M., & Sycara, K. (2023). Personalized Decision Supports based on Theory of Mind Modeling and Explainable Reinforcement Learning. In 2023 IEEE International Conference on Systems, Man, and Cybernetics (SMC 2023)

- Li, H., Oguntola, I., Hughes, D., Lewis, M., & Sycara, K. (2022, August). Theory of Mind Modeling in Search and Rescue Teams. In 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) (pp. 483-489). IEEE.

- Li, H., Le, L., Chis, M., Zheng, K., Hughes, D., Lewis, M., & Sycara, K. (2021). Sequential Theory of Mind Modeling in Team Search and Rescue Tasks. In AAAI Fall Symposium on Computational Theory of Mind for Human-Machine Teams
- Jain, V., Jena, R., Li, H., Gupta, T., Hughes, D., Lewis, M., & Sycara, K. (2020). Predicting Human Strategies in Simulated Search and Rescue Tasks. Artificial Intelligence and Humanitarian Assistance and Disaster Response Workshop at NeurIPS 2020 (AI+HADR'20) [PDF]

Adaptation in Human-Agent Teams

The ability to collaborate with previously unseen human teammates is crucial for artificial agents to be effective in human-agent teams (HATs). However, it is difficult to develop a single agent policy that matches all potential teammates. In this research, we study both human-human and human-agent teams in a dyadic cooperative task, Team Space Fortress (TSF). Human-human team results show that team performance is influenced by both players' individual skill levels and their ability to collaborate with different teammates by adopting complementary policies. We propose an adaptive agent that identifies different human policies and assigns a complementary partner policy to optimize team performance. Evaluations in HATs indicate that both human adaptation and agent adaptation contribute to team performance.

- Li, H., Ni, T., Agrawal, S., Jia, F., Raja, S., Gui, Y., Hughes, D., Lewis, M., & Sycara, K. (2021). Individualized Mutual Adaptation in Human-Agent Teams. In IEEE Transactions on Human-Machine Systems (T-HMS) [PDF]
- Ni, T., Li, H., Agrawal, S., Hughes, D., Lewis, M., & Sycara, K. (2020). Adaptive Agent Architecture for Real-time Human-Agent Teaming. Plan, Activity, and Intent Recognition workshop at AAAI 2021 (AAAI-PAIR 2021) [PDF]
- Li, H., Hughes, D., Lewis, M., & Sycara, K. (2020). Individual adaptation in teamwork. Proceedings of the 42nd Annual Meeting of the Cognitive Science Society (CogSci 2020) [PDF]
- Li, H., Ni, T., Agrawal, S., Hughes, D., Lewis, M., & Sycara, K. (2020). Team Synchronization and Individual Contributions in Coop-Space Fortress. Proceedings of the 64th Annual Meeting of the Human Factors and Ergonomics Society (HFES 2020) [PDF]

Norms of Domestic Robots

As domestic service robots become more common and widespread, they must be programmed to act appropriately by efficiently accomplishing tasks while aligning their actions with relevant norms. We are taking a first step toward equipping domestic robots with normative reasoning competence, that is, an understanding of the norms that people apply to robot behaviour in specific social contexts.

- Li, H., Milani, S., Krishnamoorthy, V., Lewis, M., Sycara, K., (2019, Feb). Perceptions of Domestic Robots' Normative Behavior Across Cultures. Proceedings of AAAI conference on Artificial Intelligence, Ethics, and Society (AIES 2019) [PDF]

Explainable AI

Deep reinforcement learning (DRL) has made remarkable achievements in diverse domains. However, its working process remains opaque to both end users and designers. Our research proposes new methods to increase the transparency of DRL systems so that they can be better accepted and trusted.

- Lewis, M., Li, H., & Sycara, K. (2020). Deep Learning, Transparency and Trust in Human Robot Teamwork (pp. 321-352). Academic Press. [PDF]
- Iyer, R., Li, Y., Li, H., Lewis, M., Sundar, R., & Sycara, K., (2018, Feb). Transparency and Explanation in Deep Reinforcement Learning Neural Networks. Proceedings of AAAI conference on Artificial Intelligence, Ethics, and Society (AIES 2018) , New Orleans, LA (Best Paper Award) [PDF] [Poster] [Slides]

Trust in Human-Swarm Interaction

The interaction between swarm robots and human operators differs significantly from traditional human-robot interaction due to unique characteristics of the system, such as high cognitive complexity and difficulty in state estimation. This project focused on the human factors involved in this process, including trust, level of automation, and system transparency.

- Li, H. (2020). A computational model of human trust in supervisory control of robotic swarms. Master's Thesis, University of Pittsburgh. [PDF]
- Li, H., Lewis, M., & Sycara, K. (2020). A Kalman estimation model of human trust in supervisory control of robotic swarms. Proceedings of the 64th Annual Meeting of the Human Factors and Ergonomics Society (HFES 2020) [PDF]
- Nam, C., Walker, P., Li, H., Lewis, M., & Sycara, K., (2020). Models of Trust in Human Control of Swarms With Varied Levels of Autonomy. IEEE Transactions on Human-Machine Systems. vol. 50, no. 3, pp. 194-204, June 2020, doi: 10.1109/THMS.2019.2896845.
- Li, H.*, Bang, J.*, Nagavalli, S., Nam, C., Lewis, M., & Sycara, K., (2018, Oct). Human Interaction Through an Optimal Sequencer to Control Robotic Swarms. Proceedings of IEEE International Conference on Systems, Man, and Cybernetics (SMC 2018), Miyazaki, Japan [PDF] [Poster]
- Nam, C., Li, H., Li, S., Lewis, M., & Sycara, K., (2018, Oct). Trust of Humans in Supervisory Control of Swarm Robots with Varied Levels of Autonomy. Proceedings of IEEE International Conference on Systems, Man, and Cybernetics (SMC 2018), Miyazaki, Japan [PDF]


Past Projects

Driving Distraction

Studied driving distraction caused by multimodal interactions with in-car smart devices in simulated environments.


Safety Briefing Cards

Investigated the influence of pictorial realism on the comprehension of safety briefing cards, combining eye-tracking techniques with a mock-cabin manipulation task.

Paper (English abstract)

BM Working Memory

Researched the process of retaining biological motion (BM) information in working memory, and found the processing priority of social-interaction BM and its underlying mechanism.

Demo    Poster

E-commerce User Experience

Explored user experience issues in e-commerce websites and mobile apps, for example, the layout of category navigation and the information architecture of product detail pages.