Student Affinity Groups | Stanford HAI

Student Affinity Groups

We provide a space for students to share ideas, develop intellectually, and strengthen the community of future leaders dedicated to building AI that benefits all of humanity.


Student Affinity Groups 2024-2025

Are you a Stanford student interested in creating meaningful interdisciplinary connections within the Stanford community? Do you have ideas on advancing AI to improve the human condition?

HAI Student Affinity Groups are small, interdisciplinary teams of Stanford students (undergraduate and graduate) or postdocs who share an interest in a topic related to the development or study of human-centered AI. Affinity Groups provide a space for students to share ideas, develop intellectually, and strengthen the community of future leaders dedicated to building AI that benefits all of humanity.

Learn more about the 2024-2025 groups below, and feel free to reach out to the student co-leads if you are interested in joining (open to current Stanford students and postdocs).

Our group will explore adapting AI for non-Western cultural contexts, focusing on indigenous and traditional knowledge systems. We aim to tailor AI to respect and enhance diverse worldviews, support cultural preservation, and bridge traditional practices with modern technology. This work will promote inclusive, culturally sensitive AI development that benefits and reflects diverse human experiences.

Name | Role | School | Department
Michael Baiocchi | Faculty Sponsor | School of Medicine | Epidemiology and Population Health
Enkhjin Munkhbayar | Student Co-Lead | School of Humanities & Sciences | Data Science
Erdenetangad Soronzonbold | Student Co-Lead | School of Engineering | Computer Science
Khongorzul Bat-Ireedui | Graduate Student | Graduate School of Education | Learning Design and Technology
Artificial Intelligence (AI) holds the potential to transform education by tackling some of its most pressing challenges, particularly inequitable access to quality education. Our affinity group will focus on using AI to bridge educational gaps for students from underrepresented and socio-economically disadvantaged backgrounds. Throughout the year, our interdisciplinary group will bring together students from various academic backgrounds (engineering, business, policy, and more) to collaboratively design and implement AI-related solutions to the issue. We envision our group as an innovative space for students to share ideas, develop impactful tools, and contribute to the broader mission of ensuring that the benefits of AI in education are accessible to all.

Name | Role | School | Department
Candace Thille | Faculty Sponsor | Graduate School of Education | Education
Yifei Cheng | Student Co-Lead | School of Humanities & Sciences | History
Giada Fang | Student Co-Lead | Graduate School of Business | GSB MSx Program
Yingkai Xu | Graduate Student | Graduate School of Business | GSB MBA Program
Alexis Brown | Graduate Student | Graduate School of Business | GSB MSx Program

Phone calls are an API to the world – and AI takes this to the next level. Voice agents can provide access to human-grade services without the need to pay or “match with” an actual human. This means voice agents can take expensive or inaccessible human services and replace the supplier with AI. Currently, this includes therapy, coaching, and tutoring. In the future, this is likely to encompass a much broader range of experiences built around voice. Voice agents provide an opportunity to engage consumers at a level never seen in software — truly mimicking the human connection. This may manifest in the agent as the product, or voice as a mode of a broader product. Our group is interested in exploring the following topics: AI voice agents for personal growth, ethical design of emotive AI assistants, and societal impacts of AI-driven self-improvement.

Name | Role | School | Department
Andrew Maas | Faculty Sponsor | School of Engineering | Computer Science
Valerie Fanelle | Student Co-Lead | Graduate School of Business | Graduate School of Business
Samidh Mehta | Student Co-Lead | School of Engineering | Electrical Engineering
Sravan Patchala | Graduate Student | Graduate School of Business | Graduate School of Business (MBA)
Yingkai Xu | Graduate Student | Graduate School of Business | Graduate School of Business (MBA)
Miroslav Suzara | Graduate Student | Graduate School of Education | Learning Sciences and Technology Design (PhD) and Masters in Computer Science
Njenga Kariuki | Graduate Student | Graduate School of Business | Graduate School of Business (MSx)
Saketika Chekuri | Graduate Student | School of Engineering | Electrical Engineering (MSEE)

AmpliFLI: Bridging AI & Equity Gaps Through Marginalized Voices will focus on the intersection of AI development and the lived experiences of first-generation, low-income (FLI) students and students from other marginalized backgrounds. Our group will create a space to critically discuss how AI technologies often overlook or misrepresent the voices and needs of underrepresented groups, due in part to the lack of diversity and resources needed to achieve success in those spaces.

We aim to promote human-centered issues in AI highly relevant to the FLI experience, such as the digital divide, lack of diversity in AI, data annotation, criminal justice, job loss due to automation, access to education and healthcare technologies, and the representation of underrepresented groups in traditionally wealthy-dominated fields. We are here to discuss how technology can be used/developed ethically and inclusively to increase access to technology for issues that our communities struggle with, instead of perpetuating biases of cultures, languages, and other identities.

By discussing current AI research, news, and policies, AmpliFLI aims to foster a deeper understanding of how AI can be more ethical, accessible, and equitable. Our group will explore how intersecting identities such as refugee status, immigrant backgrounds, gender, sexuality, ability, and geography contribute to shaping the roles that marginalized individuals can play in AI development, ensuring that AI technologies are truly designed to benefit all of humanity, instead of the powerful few.

Name | Role | School | Department
Ruth Starkman | Faculty Sponsor | School of Humanities & Sciences | Program in Writing and Rhetoric (PWR)
Angela Nguyen | Student Co-Lead | School of Engineering | Computer Science
Esscence Pogue | Student Co-Lead | School of Humanities & Sciences | Sociology and Comparative Studies in Race and Ethnicity
Carla Andazola Villela | Undergraduate Student | School of Engineering | Electrical Engineering
Lindsay Flores | Undergraduate Student | School of Humanities & Sciences | Human Biology
Zhangyang Wu | Undergraduate Student | School of Humanities & Sciences | Math
Gil Silva | Undergraduate Student | School of Humanities & Sciences | Symbolic Systems
Mohamed Musa | Undergraduate Student | School of Humanities & Sciences | Computer Science and Economics
Mathias Becerra Sanchez | Undergraduate Student | School of Humanities & Sciences | Symbolic Systems

Our HAI Affinity Group, “Bridging the Gap,” will explore how AI can be harnessed to make government, social, and community resources more accessible to diverse groups of people—students, the elderly, parents, and low-income workers—who struggle to find the support they need to thrive in the United States. Our research examines the intersection of policy interventions, supportive services, and user experiences with government and social programs. We’ll focus on the efficiency, access, and equity of service delivery, examining gaps and opportunities for improvement on various axes of human welfare: from housing stability and food access, to high-quality education and health care. We’ll analyze the extent to which AI can overcome the “iron triangle” of access, cost, and quality across public services: not just highlighting where the gaps in social services are, but identifying actionable solutions to fill them. We will also investigate incentive structures for social service providers, their ability to scale, funding sources, and the challenges of cross-sectoral data-sharing. Through these efforts, we aim to understand how AI-driven solutions may proactively connect marginalized people to the right resources at the right time, providing them the tools they need to overcome challenges and thrive.

Name | Role | School | Department
Dan Iancu | Faculty Sponsor | Graduate School of Business | Information and Technology
Katie Deal | Student Co-Lead | Graduate School of Business | Graduate School of Business
Syamantak Payra | Student Co-Lead | School of Engineering | Electrical Engineering
Olivia Martin | Graduate Student | School of Law | Law
Longsha Liu | Graduate Student | School of Medicine | Medicine
Emerson Johnston | Graduate Student | School of Humanities & Sciences | International Policy

Our goal is to foster discussion and research in human-inspired cognitive artificial intelligence at Stanford HAI through interdisciplinary collaboration. Cognitive AI represents a crucial frontier in AI research, and we are motivated by human cognitive capacities that outperform AI, and the prospects of unlocking new capabilities in machines to think or learn to solve tasks such as cognitive or logical reasoning, physical understanding, planning, motor control, and others. We invite individuals from diverse backgrounds who are passionate about bridging the gap between human cognition and AI, including but not limited to computational and cognitive sciences, electrical engineering and computer science, neuroscience, psychology, and linguistics. Through our speaker series and reading group, we aim to combine insights from various fields and extend our understanding to tackle questions that drive forward the development of AI systems that can match or exceed human performance across a broader range of cognitive tasks.

Name | Role | School | Department
Nick Haber | Faculty Sponsor | Graduate School of Education | Education, Computer Science
Tasha Kim | Student Co-Lead | School of Engineering | Institute for Computational and Mathematical Engineering (ICME)
Julian Quevedo | Student Co-Lead | School of Engineering | Computer Science
Malik Ismail | Graduate Student | Graduate School of Business | Business / Sustainability

We believe that art plays a fundamental role in society, not only in sharing and affirming human values but in fostering a sense of a collective human experience. Traditionally, responding to art has been a practice of empathy and openness, helping us to find belonging and meaning. The sophistication of generative AI models like DALL-E has opened up the possibility of hyper-specific, personalized artwork, often created from user-generated prompts. Through this group, we want to study how, and if, “personalized” AI art changes how we respond to art. Some of our questions are: Do personalized and traditional art fulfill the same function? Does personalized art allow us to feel a deeper sense of belonging, or does it alienate us from the larger human experience? Does personalized art impact our ability to empathize and contextualize? Can personalized art contribute something that traditional art cannot? We seek to gain a deeper understanding of how personalization can affect our relationship to art through open discussion with faculty experts and other students, play with generative artwork, and psychology experiments.

Name | Role | School | Department
David Eagleman | Faculty Sponsor | School of Humanities & Sciences | Psychology
Judith Fan | Faculty Sponsor | School of Humanities & Sciences | Psychology
Cecilia Lei | Student Co-Lead | School of Humanities & Sciences | English
Varun Agarwal | Student Co-Lead | School of Engineering | Computer Science, Biology

Our team will focus on the future of Embodied AI for Good, with a specific focus on how AI-enabled hardware can be integrated into various aspects of daily life. As embodied AI becomes increasingly critical, it is essential to examine the technical, ethical, and social implications of AI beyond digital interfaces, particularly in human-machine interaction. We aim to build a multidisciplinary community dedicated to advancing the responsible development and deployment of embodied AI, ensuring it enhances the human experience in positive and meaningful ways.

Name | Role | School | Department
Mykel Kochenderfer | Faculty Sponsor | School of Engineering | Computer Science, Aeronautics and Astronautics
Chloe Jin | Student Co-Lead | Graduate School of Education | Graduate School of Education
Changan Li | Student Co-Lead | School of Engineering | Computer Science
Yuanzhe Dong | Graduate Student | School of Engineering | Mechanical Engineering
Andy Tang | Undergraduate Student | School of Engineering | Computer Science
Alan Ma | Undergraduate Student | School of Engineering | Electrical Engineering
Tao Sun | Graduate Student | School of Engineering | Civil and Environmental Engineering

Since the launch of ChatGPT in 2022, rapid advancements in generative AI have intensified discussions on governance, culminating in landmark initiatives by the end of 2023, including President Biden’s AI Executive Order, the UK’s Bletchley Declaration for AI Safety, the EU AI Act, the G7’s Hiroshima Process for an AI Code of Conduct, and the UN’s AI Advisory Board. Many of these initiatives are shaped by underlying geopolitical interests, risking divergence rather than convergence in governance approaches. This group is dedicated to cultivating a network of students and postdocs who can aid the global convergence of AI governance approaches: bridging perspectives across regions and sectors and connecting with key actors in AI governance to foster collaboration and a shared commitment to shaping the future of AI responsibly.

Name | Role | School | Department
Sandy Pentland | Faculty Sponsor | School of Engineering | Stanford Digital Economy Lab
Anka Reuel | Student Co-Lead | School of Engineering | Computer Science
Andreas Haupt | Student Co-Lead | School of Humanities & Sciences | Digital Economy Lab/Institute for Economic Policy Research
Sacha Alanoca | Student Co-Lead | School of Humanities & Sciences | Communication & Media Studies

Our mission is to address the environmental and socioeconomic impacts of artificial intelligence (AI). We explore AI governance, the carbon and water footprint of AI systems, and strategies for data center decarbonization and energy efficiency. By fostering discussions and research projects where academia, policy, and industry converge, we aim to share insights and promote sustainable practices in AI development. Through driving innovation that aligns with global environmental goals, we strive to ensure that AI contributes positively to both society and the planet.

Name | Role | School | Department
Ram Rajagopal | Faculty Sponsor | School of Engineering | Civil and Environmental Engineering
Anshika Agarwal | Student Co-Lead | School of Engineering | Computer Science
Stela Tong | Student Co-Lead | Graduate School of Business | Graduate School of Business
Cassandra Duchan Saucedo | Graduate Student | Graduate School of Business | Graduate School of Business
Estelle Ye | Graduate Student | School of Engineering | Civil and Environmental Engineering
Faith Riensche | Undergraduate Student | School of Sustainability | Earth Systems
Ruying Gao | Graduate Student | Graduate School of Business | Graduate School of Business
Adrian Mak | Graduate Student | School of Law | Law
Sam Xiao | Graduate Student | School of Engineering | Civil and Environmental Engineering

Healthcare is a notoriously difficult market for technological innovation -- Stanford Hospital is still forced to rely on fax to transmit information, despite being on the same campus as the world’s smartest AI researchers! Why is this the case? A core reason is the disconnect between the problems that researchers think matter in healthcare and the problems actually faced day-to-day in delivering care to patients.

To help address this gap, our seminar will focus on highlighting immediate-term opportunities for turning existing biomedical AI technologies into solutions and products that solve real-world healthcare problems. In other words, less “How can we achieve state-of-the-art on a benchmark?” and more “How can we help patients yesterday?”

Our seminar will aim to bring together AI researchers, MBAs, healthcare leaders, designers, and entrepreneurially-minded folk to identify concrete problems where existing AI methods could provide immediate value.

Name | Role | School | Department
Nigam Shah | Faculty Sponsor | School of Medicine | Medicine
Michael Wornow | Student Co-Lead | School of Engineering | Computer Science
Rahul Thapa | Student Co-Lead | School of Medicine | Biomedical Informatics
Akshay Swaminathan | Graduate Student | School of Medicine | Medicine
Eric Pan | Graduate Student | School of Medicine | Biomedical Informatics
Chase Navellier | Graduate Student | Graduate School of Business | MBA
Morty Zadik | Graduate Student | Graduate School of Business | MBA

Our affinity group will focus on exploring the ethical and security challenges posed by AI generally and resulting from the integration of AI in cybersecurity. While AI has the potential to significantly enhance cybersecurity operations by automating threat detection, response, and analysis, it also empowers cybercriminals with sophisticated tools for conducting highly personalized attacks, such as deepfakes and AI-driven phishing schemes. Our group will bring together interdisciplinary expertise from cybersecurity analysts and AI researchers to investigate how AI-driven cyber threats are evolving and to develop frameworks that ensure AI's capabilities are used to bolster security without inadvertently increasing vulnerability.

Name | Role | School | Department
Andrew Grotto | Sponsor | School of Engineering | FSI
Malika Aubakirova | Student Co-Lead | School of Humanities & Sciences | MPP/MBA
Carlton Gossett | Student Co-Lead | Graduate School of Business | MBA
Mobina Riazi | Undergraduate Student | School of Humanities & Sciences | Policy
Omer Doron | Graduate Student | Graduate School of Business | MBA
Kevin Xu | Graduate Student | Graduate School of Business | MBA
Uri Kedem | Graduate Student | Graduate School of Business | MBA
Kyla Guru | Graduate Student | School of Engineering | Computer Science

Steps to Get Started

Step 1: Identify a topic of focus and gather an interdisciplinary group of students who share interest in that topic. If you have a topic idea but are looking for others to join your group, fill out this Interest Form. Responses can be found here.

Step 2: Identify two students from different disciplines to serve as the leads.

Step 3: Identify a faculty mentor; no formal time commitment is required of faculty. If you need support in reaching out to faculty, please contact HAI Research Program Manager Christine Raval.

Step 4: Submit the Application Form detailing the goals for your group and steps you’ll take to achieve those objectives.

Guidelines for Your Group

Applications will reopen in Summer 2025. Student Affinity Groups were announced in late September and will run during the fall, winter, and spring quarters.

  • Each group must have students from two or more disciplines.

  • Each group must have one faculty mentor; no formal time commitment is required of faculty.

  • At the end of the Spring quarter, groups must submit a report summarizing the outcomes. If members elect to continue, they must reapply.

Benefits of the Student Affinity Group

  • Funding of up to $1,000 for the academic year to spend on small quarterly or biweekly gatherings, such as workshopping lunches, student-hosted speakers, book discussions, or discussion series.

  • Space (physical and intellectual) to share knowledge across disciplines and create collaborations.

  • Access to a community of researchers, faculty, and staff committed to promoting human-centered uses of AI, and ensuring that humanity benefits from the technology and that the benefits are broadly shared.

  • Inside scoop on HAI events, research, publications, and volunteer opportunities.

Frequently Asked Questions

How can funds be used?

Possible expenses include food, marketing materials, or other materials needed for the program (books, software, printing, etc.).

Can people join mid-program?

Yes, new members can join at any point during the academic year.

Is more funding available if larger project ideas are developed through the affinity groups?

Promising research ideas that develop through the affinity group could make a great proposal for the HAI seed grant program.

For more information, contact HAI Research Program Manager Christine Raval. 

About

Oftentimes, making technology accessible to people with disabilities is a game of catch-up because tools and techniques are not born accessible. This affinity group will be a space where people with disabilities at Stanford can have intentional conversations and develop strategic plans to ensure that emerging technologies, policies, and procedures around generative AI include the interests of people with disabilities. Specific sub-topics will include advocating for fair disability representation in data, articulating research directions for advancements in AI that are grounded in the experiences of people with disabilities, and exploring how people with disabilities can make an impact on the Stanford community and the AI community at large by navigating careers and advocacy efforts in AI.

Group Members

Name | Role | School | Department
Sean Follmer | Faculty Sponsor | School of Engineering | Mechanical Engineering
Gene Kim | Student Co-Lead | School of Humanities & Sciences | Symbolic Systems
Aya Mouallem | Student Co-Lead | School of Engineering | Electrical Engineering
Trisha Kulkarni | Graduate Student | School of Engineering | Computer Science

About

The "AI for Climate Resilience: Bridging Technology and Humanity" group seeks to harness AI to advance climate resilience. By synergizing expertise from computer science, economics, ethics, design, and policy, our initiative aims to contextualize current challenges to drive climate solutions that resonate across cultural contexts, bolster community resilience, and uphold human-centered governance. We firmly believe that collaborative cross-sector engagement and knowledge exchange are critical to innovating AI-enabled climate solutions grounded in equity, collaboration, and sustainable impact.

Group Members

Name | Role | School | Department
Mykel J. Kochenderfer | Faculty Sponsor | School of Engineering | Aeronautics and Astronautics (Computer Science, by courtesy)
Serena Lee | Student Co-Lead | School of Humanities & Sciences | Data Science and Social Systems
Bhu Kongtaveelert | Student Co-Lead | School of Engineering | Computer Science & Art Practice
Gabrielle Tan | Graduate Student | School of Sustainability | Sustainability Science and Practice MS
Derick Guan | Undergraduate Student | School of Humanities & Sciences | Mathematical and Computational Science
Griffin Clark | Undergraduate Student | School of Engineering | Chemical Engineering

About

At all levels, the US healthcare system is both complex and opaque: a web of intertwined and conflicting incentives, outdated technology, and a lack of transparency. AI has the potential to improve healthcare accessibility and equity while reducing cost and improving outcomes; however, it has been demonstrably difficult to implement in the healthcare system. This affinity group unravels the landscape of healthcare challenges and ideates novel ways to use AI to address the key ones. How can AI augment the capabilities of clinicians to deliver care efficiently? Can models and algorithms help patients navigate the convoluted landscape of health insurance, providers, and employers? We frame our sessions around expert-led discussions, design sprints, and case studies, each of which will focus on a specific area of interest within healthcare.

Group Members

Name | Role | School | Department
Sophia Wang | Faculty Sponsor | School of Medicine | Ophthalmology
Akash Chaurasia | Student Co-Lead | School of Engineering | Computer Science
Priyanka Shrestha | Student Co-Lead | School of Engineering | Computer Science
Isaac Bernstein | Graduate Student | School of Medicine | Medicine
Ank Agarwal | Graduate Student | School of Medicine | Medicine
Mahdi Honarmand | Graduate Student | School of Engineering | Mechanical Engineering
Aditya Narayan | Graduate Student | School of Medicine | Medicine
Shobha Dasari | Graduate Student | School of Engineering | Computer Science

About

We’re interested in interfaces, agents, and tools for audiovisual performance. Recent advances in foundation models have garnered popular interest in applications of AI in artistic domains; with this progress, however, come crucial ambiguities in how humans can and should relate to AI-augmented creative practices. Our group invites students working in and adjacent to music, theater, audio signal processing, computer graphics, virtual reality, generative models, and HCI to study how new forms of computation can shape their work. The organizing goals of our group are (1) to further an understanding of how we as humans ought to relate to machine collaborators and, in turn, how models ought to be designed to learn from behavior and adapt to users’ needs, (2) to promote artists pushing the boundaries of generative tools and shaping the frontiers of human-computer interaction, (3) to explore how new computational tools give new perspective on the nature of intention, identity, and authenticity in artistic practice, and (4) to connect scholars and artists across disciplines to work together on new creative projects.

Group Members

Name | Role | School | Department
Julius Orion Smith | Faculty Sponsor | School of Humanities and Sciences | Music
Nic Becker | Student Co-Lead | School of Engineering | Computer Science
Alex Han | Student Co-Lead | School of Humanities and Sciences | Music
Miranda Li | Student Co-Lead | School of Engineering | Computer Science
Eito Murakami | Student Co-Lead | School of Humanities and Sciences | Music

About

We are interested in understanding how AI can be a gamechanger for employee productivity. Employees globally are plagued by “information overload” at the workplace, caused in part by the SaaS revolution of the 2000s and the proliferation of tools. Evidence shows: (i) the average employee spends 2.5–5 hours daily just on different communication platforms; (ii) 1 out of 2 workers feel navigating across platforms is more annoying than losing weight; and (iii) 68% feel they don’t have uninterrupted focus time. Research conducted by one of the co-leads points to extreme employee quotes such as “digital communication fatigue is the biggest bane of my life and is making me an ineffective leader.” As a result, employees suffer longer working hours, mental exhaustion, and a loss of personal time. This is also a silent killer of business output, with 2 out of 3 business leaders already seeing a slowdown in strategic thinking and innovation among their teams. The issue has become more pressing since the shift to hybrid/remote work. AI promises a new frontier for human productivity. We want to explore how AI can be leveraged to empower employees, cut through the noise and busy work, and “maximize output per unit of human effort.”

Group Members

Name | Role | School | Department
Dan Iancu | Faculty Sponsor | Graduate School of Business | Operations, Information and Technology
Saloni Goel | Student Co-Lead | Graduate School of Business | Graduate School of Business
Siya Goel | Student Co-Lead | School of Engineering | Computer Science
Sanjit Neelam | Graduate Student | School of Engineering | Computational and Mathematical Engineering
Teddy Ganea | Graduate Student | School of Engineering | Math and Symbolic Systems
Roger Liang | Graduate Student | Graduate School of Business | Graduate School of Business
Thai Tong | Graduate Student | Graduate School of Business | Graduate School of Business

About

In this affinity group, we will investigate the human-centered governance of AI. Governance is crucial in shaping the direction of AI research, the manifestation of its beneficial impacts, and mitigation of its harms. While discussions on what ethical and responsible AI entails have become increasingly popular, there is also a pressing need for deliberation on how governance itself should be structured and implemented in order to be effective, proactive, and inclusive.

Specifically, we will study and engage with the different stakeholders involved in AI governance (e.g. international governing leaders, tech entrepreneurs, engineers, ethics nonprofits, users, domain specialists, and educators). We will also seek to understand the parts of a governance toolkit (e.g. private and public regulations, funding, policies, laws, human rights doctrines, economic incentives, technical risk assessment measures, and enterprise software for governance).

Through discussions, speaker events, and outreach, we will merge disciplines such as computer science, management, international relations, and social science. We will understand the technical challenges AI poses for governance, as well as compare and evaluate existing governance frameworks. Valuing diverse perspectives, we aim to conduct panels with speakers across institutions, geographical regions worldwide, and applications of AI. Lastly, we hope to create opportunities for Stanford students and Bay Area residents to explore the intersection of novel innovations in AI governance with their career aspirations and the public sector.

Group Members

Name | Role | School | Department
Taylor Madigan | Faculty Sponsor | School of Humanities and Sciences | Philosophy
Priti Rangnekar | Student Co-Lead | School of Engineering | Computer Science
John Lee | Student Co-Lead | Graduate School of Business | Graduate School of Business
Javokhir Arifov | Undergraduate Student | School of Engineering | Computer Science
Larissa Lauer | Undergraduate Student | School of Humanities and Sciences | Data Science & Social Systems
Ayush Agarwal | Undergraduate Student | School of Humanities and Sciences | Symbolic Systems
Emily Tianshi | Undergraduate Student | School of Humanities and Sciences | International Relations and Data Science
Kenneth Bui | Undergraduate Student | School of Engineering | Computer Science

About

The computational journalism HAI affinity group focuses on cultivating diverse perspectives in understanding how machine learning and artificial intelligence can be used responsibly to produce stories that serve the public. From algorithmic accountability journalism that aims to inspect and hold code itself accountable, to emerging research on computational tools and software produced by journalists to tell better stories with data, we want to use this space to convene conversations on how AI is and can be used in newsrooms across journalists, technologists in media, and other practitioners. Computational journalism spans many fields at Stanford and we hope that this affinity group can cultivate a more diverse space to discuss these issues, spanning technical and non-technical researchers, as all are impacted by the news and should have a say in its production.

Group Members

Name | Role | School | Department
Maneesh Agrawala | Faculty Sponsor | School of Engineering | Computer Science
Dana Chiueh | Student Co-Lead | School of Engineering | Computer Science
Tianyu Fang | Student Co-Lead | School of Humanities & Sciences | Political Science
Elias Aceves | Graduate Student | School of Humanities & Sciences | Latin American Studies
Michelle Cai | Undergraduate Student | School of Humanities & Sciences | History
Chih-Yi Chen | Graduate Student | School of Engineering | Materials Science & Engineering

About

The Ethical and Effective Applications of AI in Education affinity group explores a central question, with a deep focus on education: “Who are we prioritizing in human-centered AI, and which ‘humans’ are included in the loop?” Our group includes student perspectives from computer science, education, law, psychology, and more. Together, we’ll facilitate discussions with guest speakers: Stanford affiliates (current students, faculty, and alumni) who are grappling with current challenges in education and AI and can share case studies from their experience. These discussions will be recorded and shared with the broader Stanford HAI community. Group members will engage with readings and materials recommended by each guest speaker before entering these cross-disciplinary conversations. Through our sessions, we will be:

  • Deepening Awareness: investigating the systemic inequalities present in education systems and questioning who benefits and who might be left behind as AI systems are rapidly integrated.

  • Learning through Collaboration: making space for robust peer-to-peer exchanges to elevate diverse perspectives and synthesize interdisciplinary insights.

  • Engaging in Critical Dialogue: engaging both intellectually and personally, as we bring our unique holistic human experiences into the dialogue.

  • Taking Action: developing clear objectives and possible next steps that each of us can take in our respective disciplines to address issues of inequality in education.

  • Meeting Current Challenges: engaging with and evaluating contemporary case studies, readings, and research.

  • Bridging Academic-Industry Gaps: building bridges between academic research on AI and education and its industry-wide implications for EdTech product development.

Group Members

Name | Role | School | Department
Dora Demszky | Faculty Sponsor | Graduate School of Education | Learning Sciences and Technology Design
Samin Khan | Student Co-Lead | Graduate School of Education | Education Data Science
Regina Ta | Student Co-Lead | School of Engineering | Computer Science
Radhika Kapoor | Graduate Student | Graduate School of Education | Developmental and Psychological Sciences
Khaulat Abdulhakeem | Graduate Student | Graduate School of Education | Education Data Science
Carl Shen | Graduate Student | School of Engineering | Computer Science

About

Our methodology centers on “Bias Limitation and Cultural Knowledge”. We aim to forge pathways for transformative solutions that facilitate the creation of inclusive and equitable AI: systems vigilant against bias, informed by best practices, sensitive to cultural nuances, and dedicated to fair representation and treatment. We explore technical, social, and political considerations to illuminate the necessity of incorporating diverse cultural perspectives, experiences, norms, and knowledge into algorithm design, development, deployment, and analysis. We are committed to amplifying marginalized voices, especially within our Black & African Diaspora communities. We welcome anyone seeking to foster a more inclusive and equitable technological landscape to join us in critical dialogue, discourse, and discovery.

Group Members

Name | Role | School | Department
Mehran Sahami | Faculty Sponsor | School of Engineering | Computer Science
Asha Johnson | Student Co-Lead | School of Engineering | Management Science & Engineering (Master's), Computer Science (Undergraduate)
Saron Samuel | Student Co-Lead | School of Engineering | Computer Science
Justin Hall | Undergraduate Student | School of Engineering | Computer Science
Gabrielle Polite | Undergraduate Student | School of Humanities and Sciences | Symbolic Systems
Andrew Bempong | Undergraduate Student | School of Engineering | Computer Science
Saba Weatherspoon | Undergraduate Student | Global Studies | International Relations
Abel Dagne | Undergraduate Student | School of Engineering | Computer Science
Eban Ebssa | Undergraduate Student | School of Humanities and Sciences | Symbolic Systems

About

The Social NLP affinity group will focus on topics at the intersection of social sciences and AI, with an emphasis on foundation models and NLP. We will tackle (1) Significant innovations in AI methods, models, or design paradigms applied to social problems, and (2) New theories and concepts from social sciences and new ways to study them using AI. Concrete examples of such topics include: simulating human behaviors with foundation models, AI-driven persuasion, or human information seeking in the foundation models era. Our group is inherently interdisciplinary, including students from Computer Science, Psychology, and Linguistics departments who are well-positioned to address these complex issues.

Group Members

Name | Role | School | Department
Diyi Yang | Faculty Sponsor | School of Engineering | Computer Science
Kristina Gligoric | Student Co-Lead | School of Engineering | Computer Science
Maggie Perry | Student Co-Lead | School of Humanities and Sciences | Psychology
Weiyan Shi | Postdoctoral Scholar | School of Engineering | Computer Science
Omar Shaikh | Graduate Student | School of Engineering | Computer Science
Cinoo Lee | Postdoctoral Scholar | School of Humanities and Sciences | Psychology
Myra Cheng | Graduate Student | School of Engineering | Computer Science
Yiwei Luo | Graduate Student | School of Humanities and Sciences | Linguistics
Tiziano Piccardi | Postdoctoral Scholar | School of Engineering | Computer Science

About

In a world where automation is becoming increasingly dominant, it is vital to discuss the future of human-machine interaction as AI moves beyond the screen into physical machines. We will focus on bringing together a community of students from all schools with a common interest in responsibly furthering the human condition with AI-enabled hardware systems. We will host guest speakers from a variety of backgrounds, from robotics researchers to lawyers and practitioners in government and industry. After each event, we will explore the technical, ethical, legal, and moral questions raised by AI-enabled machines, especially as they enter our workplaces and homes. Finally, we hope to publish an artifact of our research into the future of AI-enabled machines and recommend avenues for researchers and practitioners.

Group Members

Name | Role | School | Department
Mark Cutkosky | Faculty Sponsor | School of Engineering | Mechanical Engineering
Julia Di | Student Co-Lead | School of Engineering | Mechanical Engineering
Jeremy Topp | Student Co-Lead | Graduate School of Business | Graduate School of Business
Jorge Andres Quiroga | Graduate Student | Graduate School of Business | Graduate School of Business
Ali Kight | Graduate Student | School of Engineering | Bioengineering
Hojung Choi | Graduate Student | School of Engineering | Mechanical Engineering
Rachel Thomasson | Graduate Student | School of Engineering | Mechanical Engineering
Nikil Ravi | Graduate Student | School of Engineering | Computer Science
Cem Gokmen | Graduate Student | School of Engineering | Computer Science
Claire Chen | Graduate Student | School of Engineering | Computer Science
Marion Lepert | Graduate Student | School of Engineering | Computer Science
Malik Ismail | Graduate Student | School of Business & Doerr School of Sustainability | Graduate School of Business & Emmett Interdisciplinary Program in Environment and Resources
Selena Sun | Undergraduate Student | School of Engineering | Computer Science

About

WellLabeled is an affinity group aimed at addressing the ethical challenges of data annotation in AI development, particularly annotation of toxic and harmful content. The group's goal is to investigate welfare-maximizing annotation approaches by synthesizing ideas from human-centered design, economics, and machine learning. To achieve this, WellLabeled aims to host discussions and speaker seminars. In particular, we aim to focus on how to regulate annotators' exposure to distressing content, establish fair compensation mechanisms based on measured harm, and investigate validation methods through human-subject studies.

Group Members

Name | Role | School | Department
Sanmi Koyejo | Faculty Sponsor | School of Engineering | Computer Science
Mohammadmahdi Honarmand | Student Co-Lead | School of Engineering | Mechanical Engineering
Zachary Robertson | Student Co-Lead | School of Engineering | Computer Science
Jon Qian | Graduate Student | Graduate School of Business | Graduate School of Business
Nava Haghighi | Graduate Student | School of Engineering | Computer Science

About

Our affinity group is focused on employing AI to solve problems linked to climate and environmental issues. Climate change is one of the biggest challenges of the 21st century, and it is a complex issue that requires diverse perspectives. Discussions will cover the science behind greenhouse gases, the disastrous effects of climate change (wildfires, flooding, etc.), humanity’s role in mitigating this issue, and human-centered AI developments that can address climate-related problems. Discussion topics will be moderated by affinity group leaders Wai Tong Chung, a PhD student and HAI Grad Fellow, and Greyson Assa, a Master's student at the Doerr School of Sustainability.

Group Members

Name | Role | School | Department
Wai Tong Chung | Graduate Student Co-Lead | School of Engineering | Mechanical Engineering
Greyson Assa | Graduate Student Co-Lead | School of Sustainability | Sustainability Science and Practice
David Wu | Graduate Student | School of Engineering | Aeronautics and Astronautics
James Hansen | Graduate Student | School of Engineering | Aeronautics and Astronautics
Khaled Younes | Graduate Student | School of Engineering | Mechanical Engineering
Seth Liyanage | Graduate Student | School of Engineering | Mechanical Engineering
Allison Cong | Graduate Student | School of Engineering | Mechanical Engineering
Matthias Ihme | Faculty Sponsor | School of Engineering | Mechanical Engineering/SLAC

About

We are interested in the conditions under which human-AI collaboration leads to better decision-making. Algorithms are increasingly being used in high-stakes settings, such as medical diagnosis and refugee resettlement. However, algorithmic recommendations in human-AI collaboration can have perverse effects. For example, doctors may put in less effort when recommendations from algorithms are readily available. Similarly, the introduction of algorithmic recommendations can cause moral hazard, leading to worse decision-making. Our affinity group would like to explore the conditions and incentives affecting human-AI collaboration, integrating theories from political science, communication, and HCI.

Group Members

Name | Role | School | Department
Eddie Yang | Graduate Student Co-Lead | School of Humanities & Sciences | Center on Democracy, Development and the Rule of Law
Yingdan Lu | Graduate Student Co-Lead | School of Humanities & Sciences | Communication
Matt DeButts | Graduate Student | School of Humanities & Sciences | Communication
Yiqin Fu | Graduate Student | School of Humanities & Sciences | Political Science
Yiqing Xu | Faculty Sponsor | School of Humanities & Sciences | Political Science

About

Recent developments in foundation models like Stable Diffusion and GPT-3 have enabled AI to create in ways that were previously possible only for humans, marking an evolution of AI from a problem-solving machine to a generative machine. Simultaneously, we are seeing these models move from research to industry. The productization of AI for creative purposes (writing, image generation, etc.) is just beginning to emerge, but it will accelerate in the coming years, affecting the media industry and creatives of all kinds (filmmakers, photographers, writers, professional artists, etc.).

While hype around these new tools for creativity is exploding in the media, we have yet to find a student community at Stanford dedicated to exploring the future of creative generative AI. We are interested in understanding the technical capabilities of generative AI models, current product innovations in industry, the impact of generative AI on the future of art creation, and the social and cultural implications of new creative tools. As our team comes from a range of backgrounds (Computer Science, Symbolic Systems, Political Science, and English), our breadth of expertise will enable us to engage in cross-disciplinary conversations.

Group Members

Name | Role | School | Department
Isabelle Levent | Undergraduate Student Co-Lead | School of Humanities & Sciences | Symbolic Systems
Lila Shroff | Undergraduate Student Co-Lead | School of Humanities & Sciences | English
Regina Ta | Graduate Student | School of Humanities & Sciences | Symbolic Systems
Millie Lin | Graduate Student | School of Engineering | Computer Science
Sandra Luksic | Research Assistant | School of Humanities & Sciences | Ethics in Society
Mina Lee | Graduate Student | School of Engineering | Computer Science
Michelle Bao | Graduate Student | School of Engineering | Computer Science
Rob Reich | Faculty Sponsor | School of Humanities & Sciences | Political Science

About

HAI graduate fellows are planning to host a panel with three AI experts from academia, government, and industry, moderated by a comedian, in an effort to lower the barrier to entry into the AI conversation. This event will join HAI’s effort to raise awareness and inform the general public about AI's limitations and how AI can empower human capabilities. Our goals are to solidify the HAI graduate fellow community, connect HAI graduate fellows with the general public and the Stanford community, start a fun and entertaining conversation about AI's limitations, and engage with AI experts in academia, government, and industry in an informal setting.

Group Members

Name | Role | School | Department
Alberto Tono | Graduate Student Co-Lead | School of Engineering | Civil and Environmental Engineering
Martino Banchio | Graduate Student Co-Lead | Graduate School of Business | Graduate School of Business
Yingdan Lu | Graduate Student | School of Humanities & Sciences | Communication
Surin Ahn | Graduate Student | School of Engineering | Electrical Engineering
Betty Xiong | Graduate Student | School of Medicine | Biomedical Informatics
Martin Fischer | Faculty Sponsor | School of Engineering | Civil and Environmental Engineering

About

Our group will develop tools to improve the next generation of foundation models, with research spanning multiple stages of foundation model development. Whereas most foundation model research has focused on text or image data, we will train foundation models and large language models on newer modalities, such as structural biology and joint image-text pairings, and evaluate methods with downstream fine-tuned task performance in the loop via meta-learning.

Group Members

Name | Role | School | Department
Rohan Koodli | Graduate Student Co-Lead | School of Medicine | Biomedical Data Science
Gautam Mittal | Graduate Student Co-Lead | School of Engineering | Computer Science
Rajan Vivek | Graduate Student | School of Engineering | Computer Science
Douwe Kiela | Faculty Sponsor | School of Humanities & Sciences | Symbolic Systems

About

Our affinity group aims to advance theoretical understanding of human interaction and trust with AI-based systems and technologies. For AI to augment human intelligence while keeping humans in charge, we must understand how humans interact with AI technologies and build trust in such systems. Understanding trust in human-AI interaction is critical for developing AI systems that are ethical, safe, authentic, and trustworthy. Despite the importance of this relationship, the literature has devoted little attention to advancing theoretical and practical knowledge of human-computer interaction with AI systems. This gap calls for a multidisciplinary approach, leveraging knowledge across cognitive psychology, computer science, and user design. We believe that the theoretical framework we develop by the end of our affinity group meetings and discussions will help researchers and practitioners across disciplines apply AI systems to real-world human problems.

Group Members

Name | Role | School | Department
Alaa Youssef | Postdoc Co-Lead | School of Medicine | Radiology
Xi Jia Zhou | Graduate Student Co-Lead | Graduate School of Education | Graduate School of Education
Michael Bernstein | Faculty Sponsor | School of Engineering | Computer Science

News

  • Building the Next Generation of AI Scholars (Beth Jensen, Jul 12): A cross-disciplinary group of Stanford students explores fresh approaches to human-centered AI.

  • Exploring the Complex Ethical Challenges of Data Annotation (Beth Jensen, Jul 10): A cross-disciplinary group of Stanford students examines the ethical challenges faced by data workers and the companies that employ them.