Summary
Linköping University is inviting applications for a Ph.D. position in AI security, focusing on “Memory Poisoning in LLM Agents: Foundations, Attacks, and Defenses.” The position offers the opportunity to conduct innovative research at the intersection of AI, communication systems, and cybersecurity.
Ph.D. Position in AI Security, Linköping University, Sweden
Designation
Ph.D. Student in AI Security
| Field | Details |
|---|---|
| Research Area | AI Security, Large Language Models |
| Location | Linköping University, Sweden |
| Eligibility/Qualification | Master’s degree in Computer Science, Electrical Engineering, or Applied Mathematics (240 credits minimum, including 60 advanced) |
| Salary & Employment | Locally negotiated salary progression |
Description
This Ph.D. position is funded by the Swedish Research Council (VR) and focuses on understanding and mitigating memory poisoning attacks on large language model (LLM) agents, which are increasingly deployed in sectors such as healthcare, finance, and cybersecurity. The candidate will join a dynamic research team at Linköping University and collaborate with Chalmers University and Recorded Future.
How to Apply
- Click on the “Apply” button on the Linköping University webpage.
- Ensure that your application reaches Linköping University no later than April 20, 2026.
Last Date to Apply
April 20, 2026
For further information on the research topic, qualifications, and application procedure, please refer to the Linköping University website.
Join us in contributing to the cutting-edge research on AI security and making a significant impact in the field!