Summary
Linköping University is offering a PhD position in the innovative field of AI security, focusing on “Memory Poisoning in LLM Agents: Foundations, Attacks, and Defenses.” This role aims to address critical security challenges in large language model (LLM) agents, which are increasingly utilized across various domains.
PhD Student in AI Security, Linköping University, Sweden
Designation
PhD Student in AI Security
| Criteria | Details |
|---|---|
| Research Area | AI Security, Memory Poisoning, LLM Agents |
| Location | Linköping University, Sweden |
| Eligibility/Qualification | Master’s degree in Computer Science, Electrical Engineering, or Applied Mathematics, comprising at least 240 credits, of which at least 60 are from advanced-level courses. |
| Last Date to Apply | April 20, 2026 |
Description
The selected candidate will engage in groundbreaking research within the Division for Communication Systems at Linköping University. The project will examine memory poisoning in LLM agents, exploring how these vulnerabilities can lead to biased decision-making and other security risks in high-stakes domains such as healthcare and finance. The position will involve collaboration with leading experts at prominent institutions and may include teaching responsibilities.
How to Apply
Interested candidates should apply by clicking the “Apply” button on the official university webpage. Ensure that your application reaches Linköping University by April 20, 2026.
For further information, refer to the official job page or contact the relevant faculty members with any queries.
Join us to contribute to transformative research in AI security!