


I'm an AI & technology expert, safety researcher, conditional optimist, and policy expert working to make technology work for everyone.


Irene Solaiman is an AI safety and policy expert. She is Head of Global Policy at Hugging Face, where she conducts social impact research and leads public policy. Irene serves on the Partnership on AI's Policy Steering Committee, the Center for Democracy and Technology's AI Governance Lab Advisory Committee, and Aspen Digital's AI Elections Advisory Council. She advises responsible AI initiatives at the OECD and IEEE. Her research spans AI value alignment, responsible release practices, and combating misuse and malicious use. Irene was named to MIT Technology Review's 35 Innovators Under 35 in 2023 for her research.


Irene formerly initiated and led bias and social impact research at OpenAI, where she also led public policy. Her research on adapting GPT-3 behavior received a spotlight at NeurIPS 2021. She was also recently a Tech Ethics and Policy Mentor at Stanford University and an International Strategy Forum Fellow at Schmidt Futures. She previously built AI policy at Zillow Group and advised policymakers on responsible autonomous decision-making and privacy as a fellow at Harvard's Berkman Klein Center.


Outside of work, Irene enjoys playing her ukulele, making bad puns, and mentoring underrepresented people in tech. Irene holds a B.A. in International Relations from the University of Maryland and a Master in Public Policy from the Harvard Kennedy School.


Catch Me In...

MIT Tech Review 35 Innovators Under 35: Artificial Intelligence – 2023

TechCrunch's Women in AI Making A Difference Profile of Irene Solaiman – 2024

AIM 100: The Most Influential Global Leaders in AI: Irene Solaiman – 2024

My Latest Research and Publications

Solaiman and Talat et al, Evaluating the Social Impact of Generative AI Systems in Systems and Society, preprint, 2023.

Solaiman, The Gradient of Generative AI Release: Methods and Considerations, FAccT, 2023.

Solaiman, Generative AI Systems Aren't Just Open or Closed, Wired Ideas, 2023.

Solaiman, Five Myths About How AI Will Affect 2024 Elections, Tech Policy Press, 2024.

Solaiman and Dennison, Process for Adapting Language Models to Society (PALMS) with Values-Targeted Datasets, NeurIPS (spotlight), 2021.

Solaiman et al, Release Strategies and the Social Impacts of Language Models, OpenAI, 2019.

Selected Media + Panels

World AI Conference Keynote, The Role of Openness in International AI Safety, July 2024.

TechCrunch Disrupt Panel, The increasing support and advantages of “open” AI with Ai2 and Hugging Face, October 2024.

Data and Society Workshop Panel, What's Trust Got To Do With It?, March 2024.

The Gradient Podcast, Irene Solaiman: AI Policy and Social Impact, April 2023.

NPR's Know It All: 1A and WIRED's Guide to A.I., What Is AI And How Will It Shape The Future?, February 2023.

Tech Policy Press, Responsible Release and Accountability for Generative AI Systems, May 2023.

NPR Morning Edition, AI-generated deepfakes are moving fast. Policymakers can't keep up, April 2023.

In The News

Rest of World, What the AI boom is getting wrong (and right), according to Hugging Face’s head of global policy, June 2023.

AI Roundtable, Barron's: AI Is the Real Deal—if You Understand It. Our 5 Roundtable Pros Are Here to Help, August 2023.

Release Gradient and Chatbots, Wired: Khari Johnson on Chatbots Got Big—and Their Ethical Red Flags Got Bigger, February 2023.

PALMS and the Problems of Fairness, Vox: Sigal Samuel on Why it’s so damn hard to make AI fair and unbiased, April 2022.

Speaking Inquiries

Please contact the London Speaker Bureau.