Hi, I am a third-year Ph.D. student in the Intelligent Systems Program at the University of Pittsburgh. I work with Prof. Michael Lewis and collaborate closely with Prof. Katia Sycara at Carnegie Mellon University.
I received my Master's degree in Information Science here at Pitt. Before coming to Pittsburgh, I obtained my Bachelor's degree in Applied Psychology at Zhejiang University, where I worked with Prof. Zaifeng Gao.
I am interested in understanding the mutual interaction between humans and AI from a human-centered perspective. My research covers both directions of human-AI interaction: it helps AI understand human mental states through computational cognitive modeling, and helps humans understand AI's decision-making process through explainable AI. Specifically, I have been working on human-agent communication, computational trust modeling, adaptive strategies in human-agent teaming, Theory of Mind modeling, normative reasoning for social robots, and explainable AI.
Check out my publications on Google Scholar.
Download my current CV here.
Contact me via e-mail.
Modeling Theory of Mind
The ability to infer another person's beliefs, desires, intentions, and behavior from observed actions, and to use those inferences to predict future actions, is known as Theory of Mind (ToM). In collaboration with researchers at CMU, we are working to develop AI agents capable of making such ToM inferences. This research started with modeling a single human's mental state in a search and rescue task and was later extended to modeling the team state (i.e., the shared mental model) of three human rescuers.
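The core idea of inferring mental states from observed actions can be illustrated with a minimal Bayesian goal-inference sketch. This is not the published model; the candidate goals, action likelihoods, and numbers below are assumptions chosen purely for illustration.

```python
# Minimal Bayesian goal inference, in the spirit of ToM "inverse planning":
# maintain a belief over candidate goals and update it from observed
# actions, assuming the actor behaves approximately rationally.
# All goal names and probabilities here are illustrative.

def update_goal_belief(belief, action, likelihood):
    """One Bayesian update: P(goal | action) ∝ P(action | goal) * P(goal)."""
    posterior = {g: p * likelihood[g].get(action, 1e-9)
                 for g, p in belief.items()}
    z = sum(posterior.values())
    return {g: p / z for g, p in posterior.items()}

# Two hypothetical goals for a rescuer: triage victims vs. explore rooms.
belief = {"triage": 0.5, "explore": 0.5}
# Assumed action likelihoods under each goal (hand-picked for illustration).
likelihood = {
    "triage":  {"approach_victim": 0.8, "open_door": 0.2},
    "explore": {"approach_victim": 0.2, "open_door": 0.8},
}

for a in ["approach_victim", "approach_victim"]:
    belief = update_goal_belief(belief, a, likelihood)
# After two victim-directed actions, belief shifts strongly toward "triage".
```

The actual models in the papers below handle sequential observations over a map and, in the team setting, shared mental models across three rescuers; the sketch only shows the underlying update rule.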
- Li, H., Le, L., Chis, M., Zheng, K., Hughes, D., Lewis, M., Sycara, K. (2021). Sequential Theory of Mind Modeling in Team Search and Rescue Tasks. In AAAI Fall Symposium on Computational Theory of Mind for Human-Machine Teams
- Li, H., Zheng, K., Sycara, K., Lewis, M. (2021). Human Theory of Mind Inference in Search and Rescue Tasks. In Proceedings of the 65th Annual Meeting of the Human Factors and Ergonomics Society (HFES 2021) [PDF]
- Jain, V., Jena, R., Li, H., Gupta, T., Hughes, D., Lewis, M., Sycara, K. (2020). Predicting Human Strategies in Simulated Search and Rescue Tasks. Artificial Intelligence for Humanitarian Assistance and Disaster Response Workshop at NeurIPS 2020 (AI+HADR'20) [PDF]
Adaptation in Human-Agent Teams
The ability to collaborate with previously unseen human teammates is crucial for artificial agents to be effective in human-agent teams (HATs). However, it is hard to develop a single agent policy that matches all potential teammates. In this research, we study both human-human and human-agent teams in a dyadic cooperative task, Team Space Fortress (TSF). Results from human-human teams show that team performance is influenced by both players' individual skill levels and their ability to collaborate with different teammates by adopting complementary policies. We propose an adaptive agent that identifies different human policies and assigns a complementary partner policy to optimize team performance. Evaluations in HATs indicate that both human adaptation and agent adaptation contribute to team performance.
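The identify-then-complement loop described above can be sketched in a few lines. Everything here is an assumption for illustration: the policy names, the feature centroids, and the nearest-centroid classifier stand in for whatever identification method the actual TSF agent uses.

```python
# Illustrative sketch of the adaptive-agent idea: classify the human's
# policy online from behavioral features, then switch to the partner
# policy pre-trained to complement it. Names and numbers are made up.

import math

# Feature centroids of known human policy types (e.g., mean position,
# aggressiveness) -- purely illustrative values.
HUMAN_POLICY_CENTROIDS = {"aggressive": (0.8, 0.9), "defensive": (0.2, 0.3)}
# Complementary partner policy assigned to each identified human policy.
COMPLEMENT = {"aggressive": "cover", "defensive": "bait"}

def identify_policy(features):
    """Nearest-centroid identification of the human's policy type."""
    return min(HUMAN_POLICY_CENTROIDS,
               key=lambda p: math.dist(features, HUMAN_POLICY_CENTROIDS[p]))

def choose_partner_policy(features):
    """Map the identified human policy to its complementary agent policy."""
    return COMPLEMENT[identify_policy(features)]
```

For example, `choose_partner_policy((0.7, 0.8))` would classify the human as "aggressive" and select the "cover" partner policy; in the real system the identification runs continuously as behavioral statistics accumulate.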
- Li, H., Ni, T., Agrawal, S., Jia, F., Raja, S., Gui, Y., Hughes, D., Lewis, M., Sycara, K. (2021). Individualized Mutual Adaptation in Human-Agent Teams. IEEE Transactions on Human-Machine Systems (T-HMS) [PDF]
- Ni, T., Li, H., Agrawal, S., Hughes, D., Lewis, M., Sycara, K. (2020). Adaptive Agent Architecture for Real-time Human-Agent Teaming. Plan, Activity, and Intent Recognition Workshop at AAAI 2021 (AAAI-PAIR 2021) [PDF]
- Li, H., Hughes, D., Lewis, M., Sycara, K. (2020). Individual Adaptation in Teamwork. Proceedings of the 42nd Annual Meeting of the Cognitive Science Society (CogSci 2020) [PDF]
- Li, H., Ni, T., Agrawal, S., Hughes, D., Lewis, M., Sycara, K. (2020). Team Synchronization and Individual Contributions in Coop-Space Fortress. Proceedings of the 64th Annual Meeting of the Human Factors and Ergonomics Society (HFES 2020) [PDF]
Norms of Domestic Robots
As domestic service robots become more common and widespread, they must be programmed to act appropriately: accomplishing tasks efficiently while aligning their actions with relevant norms. We are taking the first step toward equipping domestic robots with normative reasoning competence, i.e., an understanding of the norms that people apply to robot behavior in specific social contexts.
- Li, H., Milani, S., Krishnamoorthy, V., Lewis, M., Sycara, K. (2019, Feb). Perceptions of Domestic Robots' Normative Behavior Across Cultures. Proceedings of the AAAI Conference on Artificial Intelligence, Ethics, and Society (AIES 2019) [PDF]
Explainable AI

Deep reinforcement learning (DRL) has made remarkable achievements in diverse domains. However, the inner workings of DRL systems remain opaque to both end users and designers. Our research proposes new methods to increase the transparency of DRL systems so that they can be better accepted and trusted.
- Lewis, M., Li, H., & Sycara, K. (2020). Deep Learning, Transparency and Trust in Human Robot Teamwork. pp. 321-352. Academic Press.[PDF]
- Iyer, R., Li, Y., Li, H., Lewis, M., Sundar, R., Sycara, K. (2018, Feb). Transparency and Explanation in Deep Reinforcement Learning Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, Ethics, and Society (AIES 2018), New Orleans, LA (Best Paper Award) [PDF] [Poster] [Slides]
Trust in Human-Swarm Interaction
The interaction between robot swarms and human operators differs significantly from traditional human-robot interaction due to unique characteristics of these systems, such as high cognitive complexity and difficulty of state estimation. This project focused on the human factors involved in this process, including trust, level of automation, and system transparency.
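The Kalman-estimation view of trust in the publications below can be illustrated with a scalar filter: latent trust is assumed to persist over time and is observed noisily (e.g., via self-reports or intervention behavior). The gains and noise values here are illustrative assumptions, not parameters from the thesis.

```python
# Scalar Kalman-filter sketch of trust estimation. The latent trust
# state x evolves slowly (process noise q) and is observed noisily
# (observation noise r). All numeric values are illustrative.

def kalman_trust_step(x, P, z, q=0.01, r=0.1):
    """One predict/update cycle for scalar trust estimate x with variance P."""
    # Predict: trust assumed to persist, with process noise q.
    P = P + q
    # Update with noisy trust observation z.
    K = P / (P + r)           # Kalman gain
    x = x + K * (z - x)       # corrected trust estimate
    P = (1 - K) * P           # reduced uncertainty
    return x, P

x, P = 0.5, 1.0               # initial trust estimate and its variance
for z in [0.7, 0.8, 0.75]:    # sequence of noisy trust observations
    x, P = kalman_trust_step(x, P, z)
# The estimate moves toward the observations while the variance shrinks.
```

The appeal of this formulation is that trust is treated as a hidden state with explicit uncertainty, so the operator model can weigh new evidence against its current estimate rather than reacting to every noisy observation.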
- Li, H. (2020) A computational model of human trust in supervisory control of robotic swarms. Master's Thesis, University of Pittsburgh. [PDF]
- Li, H., Lewis, M., Sycara, K. (2020). A Kalman estimation model of human trust in supervisory control of robotic swarms. Proceedings of the 64th Annual Meeting of the Human Factors and Ergonomics Society (HFES 2020) [PDF]
- Nam, C., Walker, P., Li, H., Lewis, M., & Sycara, K., (2020). Models of Trust in Human Control of Swarms With Varied Levels of Autonomy. IEEE Transactions on Human-Machine Systems. vol. 50, no. 3, pp. 194-204, June 2020, doi: 10.1109/THMS.2019.2896845.
- Li, H.*, Bang, J.*, Nagavalli, S., Nam, C., Lewis, M., & Sycara, K., (2018, Oct). Human Interaction Through an Optimal Sequencer to Control Robotic Swarms. Proceedings of IEEE International Conference on Systems, Man, and Cybernetics (SMC 2018), Miyazaki, Japan [PDF] [Poster]
- Nam, C., Li, H., Li, S., Lewis, M., & Sycara, K., (2018, Oct). Trust of Humans in Supervisory Control of Swarm Robots with Varied Levels of Autonomy. Proceedings of IEEE International Conference on Systems, Man, and Cybernetics (SMC 2018), Miyazaki, Japan [PDF]