
Dilemmas and Dangers in AI
How YOU can make a difference
Help tackle the biggest risks from artificial intelligence
A 5-week, donation-based (i.e. free if needed) interdisciplinary online fellowship helping you explore how to steer cutting-edge AI technology toward benefits for humanity.
Dates: Next cohort 30th June - 3rd August 2025. Apply by 1st May!
Who: Smart, curious, and ambitiously altruistic students aged 15-19 in the UK or Ireland who haven’t yet started university
Applications: Apply here and tell us a bit about yourself, your accomplishments, and your aspirations, then take a fun puzzle test
All applicants will receive access to our self-paced online courses, even if they are not selected as Finalists
NEW! Top Finalists (accepted course participants) can apply to participate in a selective August Fellowship with additional custom mentorship, funding, and support!
What you can expect
Facilitated discussion groups with a cohort of smart, curious students
Talks and Q&As with professionals addressing existential risks from AI
Mentorship and up to £1,000 in grant funding for selected Fellows to pursue follow-up projects
Support with UCAS applications and Oxbridge interviews

Fellows rate us 9.0/10
Surveyed at the end of our first cohort, Fellows gave an average of 9.0/10 for how likely they would be to recommend the Fellowship to a friend interested in having a big impact.
Plus, 83% of Fellows feel both more confident in their ability to make a positive impact and more ready to take ambitious actions as a result of participating.
Course overview
Week 1
Why AI might be the next industrial revolution
The path to ChatGPT, and where it leads now
Existing and emerging risks of today’s AI landscape
Example resources:
🎧 Blog readout by Holden Karnofsky: “This Can't Go On”
🌐 Interactive website by Epoch AI: AI Trends
🎙️ Video interview by the Institute of Art and Ideas: “Immortality in the future”
Week 2
Foundations of AI agents
Could AI cause extinction this century? Where the experts stand
The real Black Mirror: AI-enhanced dystopia
Example resources:
🎦 YouTube video by Rational Animations: “Specification Gaming: How AI Can Turn Your Wishes Against You”
📺 Documentary on Netflix: Coded Bias
📚 Book by Brian Christian: The Alignment Problem
Week 3
How and why AI lies to us
Bias and ethics in current AI systems
How we accidentally train AI to overpower us
Example resources:
📝 Writeup by the Center for AI Safety: “An Overview of Catastrophic AI Risks”
🎥 YouTube video by 80,000 Hours: “Could AI wipe out humanity?”
💬 Talk by Max Daniel: “S-risks: why they are the worst existential risks, and how to prevent them”
Week 4
Who wields AI power and how?
Promises of and challenges to alignment interventions
Governance of AI and analogies from history
Example resources:
📝 Blog post by BlueDot Impact: “A Brief Introduction to some Approaches to AI Alignment”
🔍 Cause profile by 80,000 Hours: “We can tackle these risks” in “Preventing an AI-related catastrophe”
💬 Talk by Michael Aird: “AI Governance: overview and careers”
Week 5
Is this a problem you should prioritize?
How to help tackle AI risks without even learning to code
Degrees that set you up to tackle AI risks
Example resources:
⏸️ Blog post by Scott Alexander: “Pause For Thought: The AI Pause Debate”
📝 Article by Probably Good: “The SELF Framework” — a simple tool to help you assess a role’s potential for good
🌐 Tag on the EA Opportunities Board: AI Safety & Policy
Weekly structure
- Work through weekly interactive videos, questions, articles, and other activities to prepare for the week’s exploration sheet and discussion call.
- Weekly discussion with a Leaf facilitator and small-group breakouts to develop your critical thinking in conversation with intelligent, interesting, like-minded teens. We’ll do our best to find a slot that works around your other commitments!
- From predicting trends in AI systems and policies, to jailbreaking AI systems yourself (can you get one to reveal its secret password?), to getting started with coding, you’ll build practical skills for tackling the dangers of AI and form your own views by reflecting on the week’s content before your discussion call.
- Meet professionals working at the forefront of tackling existential risks from artificial intelligence.
- Share your own knowledge and join sessions run by Fellows and alumni! Meet inspiring peers with shared interests, collaborate on projects, and discover exciting new ideas.
- Discord channel, paired 1:1s, and opportunities to get to know peers with different backgrounds but shared passions.
- Weekly competitions for prizes! E.g. technical tasks, creative writing, or more extensive activities like Intelligence Rising, an immersive simulation developed by researchers at Cambridge University’s Centre for the Study of Existential Risk. (Participation is not guaranteed: you may need to sign up or apply internally, and it depends on your time availability.)
2025 Speakers Will Include:
Isaac Dunn, an Open Philanthropy Century Fellow studying global catastrophic risk and AI alignment
Emma Lawsen, Senior Policy and Operations Strategist at the Centre for Long-Term Resilience
Buck Shlegeris, CEO of AI safety and security research organization Redwood Research
Meet some staff & former speakers
Leaf draws on a wide range of experts, facilitators, and alumni to support the growth of our Fellows.
Connor Axiotes
Conjecture, ex-Adam Smith Institute & UK Parliament
Connor is an AI policy and communications expert. Most recently, he worked as Strategic Communications Lead at Conjecture, a startup building a new AI architecture to ensure the controllable, safe development of advanced AI technology. He was previously Director of Communications & Research Lead for Resilience, Risk and State Capacity at the Adam Smith Institute (a UK think tank). He has worked in communications for Members of Parliament and Rishi Sunak’s Prime Ministerial campaign. He has a master’s in Global Politics from Durham University.
Noah Siegel
Google DeepMind
Noah is a Research Engineer at Google DeepMind. After his research at the Allen Institute for AI in Seattle, he worked on machine learning for robotics at DeepMind before switching to focus on language model reasoning and explanations as part of AI Safety and alignment research. Having studied computer science, economics, and maths as an undergraduate, he is now pursuing a PhD in Artificial Intelligence at University College London via the UCL-DeepMind joint PhD program.
Jai Patel
University of Cambridge
Jai researches AI governance at Cambridge University's Centre for the Future of Intelligence, working on policies to help AI scale responsibly, and is soon starting full-time at the UK government’s AI Safety Institute. He also works part-time at EdTech startup Wellio, and previously co-developed CitizenAI, a GPT wrapper that provides quick, independent, expert advice on common issues such as employment or housing concerns. His undergrad was in PPE at LSE.
Yi-Ling Liu
Writer, editor & journalist
Yi-Ling is writing a book for Penguin Press on the Chinese Internet, was previously the China editor of Rest of World, and has written freelance for outlets including The New Yorker, WIRED, and The New York Times Magazine on Chinese tech, society, and politics. She is an affiliate at Concordia AI, which promotes international cooperation on AI safety and governance. Her undergrad was in English and creative writing at Yale University.
Jack Parker
HiddenLayer
Jack is a computer security engineer whose speciality is offensive security. His education at Middlebury College and Duke University focused on math, machine learning, software engineering, and education. He is especially interested in solving technical safety and security problems to improve the reliability and trustworthiness of AI systems.
Jamie Harris
Leaf
After graduating with a first-class degree in history from the University of Oxford, Jamie taught history for several years at a Sixth Form college. He then joined the think tank Sentience Institute where he researched the moral consideration of AI entities, the history of AI rights research, and which psychological factors influence moral concern for AIs. He has co-founded and led multiple nonprofits providing impact-focused advice.
Sam Smith
Course designer and facilitator
Sam is the course designer and facilitator of DDAI. They are a Leaf alum and Non-Trivial winner who has facilitated several courses for both organisations. They are studying maths and philosophy at the University of Bristol and have particular interests in applied mathematics and ethics.
The opportunities don’t end after 5 weeks
Our August Fellows programme supports top Finalists with additional next steps
Up to £1,000 in grant funding plus mentorship for projects
Group accountability calls and an ongoing peer network
Referrals to and application support for partner programmes
Virtual work experiences with a high-impact nonprofit
*These follow-ups are applied for or earned during the June/July Finalist cohort; they are not guaranteed!
Plus, as a Finalist you will keep access to the online learning platform, the Discord channel, and your new friends after the five weeks end.
Alumni from our recent Fellowships are being supported to pursue projects like:
Research projects on topics that fascinated them during the course, focused on various risks from AI and opportunities to tackle them
A more in-depth investigation of the risks from AI through a Leaf-specific cohort of BlueDot Impact’s AI Safety Fundamentals courses
An international youth debate organization focusing on high-impact topics
Alumni have gone on to work with:
Cambridge AI Safety Hub
Center for Youth and AI

Where Leaf alumni are now
Oxford University
Cambridge University
Harvard University
London School of Economics
More questions? See our FAQ page.
The deadline to apply is 1st May, but early applications will be reviewed sooner!