Intro Seminars
Every semester, we host 8-session programs diving into AI alignment and governance.
Do you want to help make AI beneficial to humanity? Explore AI Safety as a participant or facilitate discussions as a moderator in our comprehensive 8-week program.
Applications to this program are now closed. Sign up to our monthly newsletter to receive updates about the next iteration.
Course Highlights:
AI Governance Track by BlueDot Impact: Syllabus developed by researchers from Harvard, Oxford, and Cambridge.
AI Alignment Track by AI Safety Atlas: Created in collaboration with experts from OpenAI, Cambridge, and CeSIA.
Dates: 4 August - 27 October, 2025 (8 weeks main course + 4 weeks optional project phase)
Format: Available online or in-person
Languages: Sessions in English and additional languages
Costs: Free of charge
Benefits: International networking, career opportunities, and a LinkedIn certificate upon completion
AI Governance Track curriculum (by BlueDot Impact):
Introduction to Foundational Concepts in Artificial Intelligence
Exploring the Positive Effects of AI
Understanding the Potential Perils of AI
Challenges in Controlling AI Systems
Policy Levers for AI Governance
Real-World AI Policy Analysis
Policy Proposal for Governing AI
Contributing to AI Governance
AI Alignment Track curriculum (by AI Safety Atlas):
Capabilities
Risks
Strategies
Governance
Evaluations
Specification
Generalization
Scalable Oversight
Interpretability (optional)
Consider watching this introduction video: Introduction to AI Safety
AI is rapidly advancing with the potential for significant positive impact. However, in a 2022 survey of AI researchers, 48% of respondents put the chance of an extremely bad outcome, such as human extinction, at 10% or higher (Grace et al., 2022).
This concern is shared by leading AI scientists and companies who signed the Statement on AI Risk, equating AI extinction risk with pandemics and nuclear war.
Recently, Geoffrey Hinton, winner of the 2024 Nobel Prize in Physics for his work on neural networks, estimated a 10-20% chance of AI leading to human extinction within the next three decades.
There are fundamental questions to be answered and technical challenges to be overcome to ensure that advanced AI systems are beneficial rather than harmful.
This course is for people from any background who are interested in and relatively new to the field of AI Safety. You can apply regardless of your nationality and professional background. Unfortunately, we are unable to accommodate minors (under 18) in the program.
Since we have limited capacity, there will be a selection process.
While we think it's important to raise awareness of the risks of AI among the general public, if we receive more applications than we can accept, we will give priority to those who are seriously considering taking action in AI Safety (e.g. a career transition or donating). Nevertheless, we encourage you to apply even if you are uncertain about how you could contribute to AI Safety. This is an introductory program, after all - we might be able to give you a few ideas :)
Since this program consists of discussion sessions based on weekly readings, its success depends on active participation. By applying to the course, you commit to dedicating about 5 hours per week for at least the 8 weeks of the main course (there's an optional project phase at the end).
If you are unsure about applying, consider the option of self-studying the course material (it's freely available, find links above). Taking part in the program has the benefits of accountability, exposure to diverse opinions and networking, but as mentioned earlier, it also requires you to put in the effort. If you're still unsure, we encourage you to err on the side of applying!
A keen interest in how we can make AI beneficial to humanity
No technical background required (though helpful for the Alignment Track)
English proficiency is required: the reading materials are in English, though discussions may be held in other languages, depending on participant availability
Open to all international applicants; non-EU applicants should consider time zone differences
Alignment (Technical) Track: Best for those with a CS, Math, or related technical background.
Governance (Policy) Track: Ideal for those in Law, Economics, PR, or technical fields looking to explore regulatory aspects.
Even if your background is in another field, we encourage you to apply - the field of AI Safety needs professionals from a number of different backgrounds.
Duration: 4 August - 27 October, 2025
Weekly Commitment: For the 8 course weeks, about 5 hours per week, including 2 hours for readings, 1 hour for exercises, and 2 hours for discussion sessions. Flexibility is provided during exam periods. The optional 4-week project phase after the main course consists of independent work, and the time commitment will depend on the project you choose to pursue.
Languages: Reading materials are in English (English proficiency required); discussion sessions may be held in other languages, depending on participant availability.
Email the course coordinators at contact@enais.co or info@aishungary.com. We will strive to answer within 48 hours.