Zizhan Zheng, SSE Department of Computer Science

Modern AI systems increasingly work in teams, with multiple agents collaborating to solve complex tasks. In this project, you will study what happens when one or more AI teammates behave maliciously, whether due to bugs, attacks, or intentional manipulation, and how to design cooperative AI systems that remain reliable and safe in these settings. You will work with large language model (LLM)-based agents (e.g., OpenClaw agents) in multi-agent environments and explore how insider attacks can disrupt coordination, decision-making, and trust. Together with Dr. Zheng and his PhD students, you will implement both attack strategies and defense mechanisms, run experiments on multi-agent tasks, and analyze system behavior under adversarial conditions. This project is supported by a National Science Foundation (NSF) grant through an REU supplement.
What You Will Do
What You Will Gain
By the end of the project, you will have helped build and evaluate attack and defense techniques for cooperative AI systems, contributed to reusable experimental tasks and datasets, and participated in preparing a research paper summarizing the results.
Time, eligibility, and other details
| Item | Details |
| --- | --- |
| Expected workload | 40 hours per month |
| Skills required | Strong programming skills; experience with or a strong interest in AI and machine learning; prior experience with large language models and AI agents is preferred but not required. |
| Who is eligible | Tulane majors in computer science, mathematics, or engineering |
| Core partners | |
| Sponsoring party | Faculty |
| Volunteer, Paid, or Credit-eligible? | Paid ($700 per month) and credit-eligible |
| Forms required | CV and transcripts |
To apply for an opportunity, click here to log in with your Tulane student ID.