Intro Seminars
Every semester, we host programs with live discussion groups diving into AI alignment and governance.
Do you want to help make AI beneficial to humanity? Explore AI Safety as a participant or facilitate discussions as a moderator in our comprehensive 4-week program.
Applications are now open. Sign up here by 14 January!
Course Highlights:
Curriculum by AI Safety Atlas: Created in collaboration with experts from OpenAI, Cambridge, and CeSIA.
Dates: 26 January - 1 March, 2026 (1 week for orientation + 4 weeks of discussion sessions)
Format: Available online or in-person
Languages: Sessions in English and additional languages
Costs: Free of charge
Benefits: International networking, career opportunities, and a LinkedIn certificate upon completion
Consider watching this introduction video: Introduction to AI Safety
AI is rapidly advancing with the potential for significant positive impact. However, in one survey, 48% of AI experts estimated the risk of human extinction from AI to be greater than 10% (Grace et al., 2022).
This concern is shared by leading AI scientists and companies who signed the Statement on AI Risk, equating AI extinction risk with pandemics and nuclear war.
Recently, Geoffrey Hinton, 2024 Nobel Prize winner in Physics for his work in AI, estimated a 10-20% chance of AI leading to human extinction within the next three decades.
There are fundamental questions to be answered and technical challenges to be overcome to ensure that advanced AI systems are beneficial rather than harmful.
This course is for people from any background who are interested in AI Safety and relatively new to the field. You can apply regardless of your nationality or professional background. Unfortunately, we are unable to accommodate minors (under 18) in the program.
Since capacity is limited, there will be a selection process.
While we think it's important to raise awareness of AI risks among the general public, if we receive more applications than we can accept, we will give priority to those who are seriously considering taking action in AI Safety (such as a career transition or donating). Nevertheless, we encourage you to apply even if you are uncertain about how you could contribute to AI Safety. This is an introductory program, after all - we might be able to give you a few ideas :)
Since this program consists of discussion sessions based on weekly readings, its success depends on active participation. By applying, you commit to dedicating about 5 hours per week to the course for at least 4 weeks.
If you are unsure about applying, consider self-studying the course material instead (it's freely available; see the links above). Taking part in the program has the benefits of accountability, exposure to diverse opinions, and networking, but as mentioned earlier, it also requires you to put in the effort. If you're still unsure, we encourage you to err on the side of applying!
A keen interest in how we can make AI beneficial to humanity
No technical background required
English proficiency is required; the reading materials are in English, though discussions may be held in other languages depending on participant availability
Open to all international applicants; non-EU applicants should consider time zone differences
Duration: 26 January - 1 March, 2026
Weekly Commitment: For the 4 course weeks, about 5 hours per week, including 2 hours for readings, 1 hour for exercises, and 2 hours for discussion sessions. Flexibility is provided during exam periods.
Email the course coordinators at contact@enais.co or info@aishungary.com. We will strive to answer within 48 hours.