PhD Position in Mechanistic Interpretability
The University of Amsterdam invites applications for a PhD position focused on mechanistic interpretability of machine learning (ML) models. This is an exciting opportunity for candidates with a strong background in artificial intelligence, computer science, or related fields to conduct independent research in this emerging subfield of ML. The role involves developing and evaluating techniques to understand how modern deep learning architectures operate internally.
Designation
PhD Candidate in Mechanistic Interpretability
| Detail | Information |
|---|---|
| Research Area | Mechanistic Interpretability of ML Models |
| Location | University of Amsterdam, Netherlands |
| Eligibility/Qualification | MSc in AI, Computer Science, Engineering, Mathematics, Physics, or a related discipline; background in machine learning; excellent software engineering skills in Python; fluent in English. |
| Job Description | Conduct independent research, develop novel techniques for understanding deep neural networks, create evaluation frameworks, connect empirical findings to theoretical frameworks of computation, publish findings, and assist in teaching undergraduate and master's students. |
| Salary | Monthly gross salary of €2,872 to €3,670, plus an 8% holiday allowance and an 8.3% year-end allowance. |
| Contract Duration | Temporary contract for 4 years (initially 18 months, extended based on performance). |
| Fringe Benefits | 232 holiday hours per year, an educational program, parental leave, and housing assistance for international candidates. |
| Last Date to Apply | December 31, 2024 |
Research Area
The PhD candidate will focus on mechanistic interpretability, covering a range of AI model types and application domains, including social network analysis and molecular simulation.
Location
This position is based at the University of Amsterdam, which hosts a vibrant academic environment with a diverse community of students and researchers.
Eligibility/Qualification
- Master’s degree in artificial intelligence, computer science, engineering, mathematics, physics, or a related discipline.
- Demonstrable background in machine learning.
- Excellent software engineering skills in Python.
- Proficiency in English (spoken and written).
Job Description
As a PhD candidate, you will:
- Conduct independent research on mechanistic interpretability.
- Develop and evaluate post-hoc interpretability techniques for ML models (a brief illustrative sketch follows this list).
- Create frameworks to assess the accuracy of interpretability methods.
- Analyze model behavior in conjunction with theoretical computation frameworks.
- Publish and present research at international AI conferences (e.g., NeurIPS, ICML, ICLR).
- Assist in teaching undergraduate and master’s courses.
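For applicants less familiar with the terminology, the following is a minimal sketch of one common post-hoc technique: training a linear probe on a model's hidden activations to test whether a given concept is linearly decodable. It is purely illustrative and not part of the official posting; the data are synthetic and all variable names are hypothetical.

```python
# Minimal, illustrative sketch of a post-hoc interpretability technique:
# a linear probe trained on hidden activations to test whether a concept
# is linearly decodable. All data and names here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for activations captured from a trained network (e.g., via
# forward hooks); synthetic here so the sketch is self-contained.
n_samples, hidden_dim = 1000, 64
labels = rng.integers(0, 2, size=n_samples)        # binary concept labels
activations = rng.normal(size=(n_samples, hidden_dim))
activations[:, 0] += 2.0 * labels                  # plant a linear signal

X_train, X_test, y_train, y_test = train_test_split(
    activations, labels, test_size=0.2, random_state=0
)

probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Probe accuracy: {probe.score(X_test, y_test):.2f}")
# Accuracy well above chance suggests the concept is linearly represented
# in these activations; chance-level accuracy suggests it is not.
```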
How to Apply
Interested candidates should submit the following documents:
- Letter of motivation (1 page) describing your research interests and why you are applying for this position.
- List of all Master's-level modules taken, together with an official transcript of grades.
- Writing sample (Master's thesis, term paper, or publication).
- Detailed CV, including dates of education and work experience.
- Relevant project links (GitHub, portfolio) showcasing skills.
All applications must be submitted as a single PDF file via the application portal on the University of Amsterdam’s website.
Last Date to Apply
Applications will be accepted until December 31, 2024.