Design, Human-Computer Interaction

AI is reshaping HCI by enabling more intuitive, personalized experiences.

Stanford HAI Announces Hoffman-Yee Grants Recipients for 2024
Nikki Goth Itoi
Aug 21, 2024
Announcement
Topics: Design, Human-Computer Interaction; Healthcare; Natural Language Processing; Machine Learning

Six interdisciplinary research teams received a total of $3 million to pursue groundbreaking ideas in the field of AI.

Stories for the Future 2024
Isabelle Levent
Mar 31, 2025
Research | Deep Dive
Topics: Machine Learning; Generative AI; Arts, Humanities; Communications, Media; Design, Human-Computer Interaction; Sciences (Social, Health, Biological, Physical)

We invited 11 sci-fi filmmakers and AI researchers to Stanford for Stories for the Future, a day-and-a-half experiment in fostering new narratives about AI. Researchers shared perspectives on AI, and filmmakers reflected on the challenges of writing AI narratives. Together, researcher-writer pairs transformed a research paper into a written scene. The challenge? Each scene had to include an AI manifestation but could not be about the personhood of AI or AI as a threat. Read the results of this project.

HAI and AIMI Partnership Grant (Closed)

The HAI and AIMI Partnership Grant is designed to fund new and ambitious ideas that reimagine artificial intelligence in healthcare, using real clinical data sets, with near-term clinical applications.

James Landay
Person
Design, Human-Computer Interaction | Oct 05

James Landay: Paving a Path for Human-Centered Computing
James Landay, Katharine Miller
Aug 12, 2024
News
Topics: Design, Human-Computer Interaction

The Stanford HAI co-director has blazed a trail by keeping humans at the center of emerging technologies.

How Culture Shapes What People Want From AI
Chunchen Xu, Xiao Ge, Daigo Misaki, Hazel Markus, Jeanne Tsai
May 11, 2024
Research
Topics: Design, Human-Computer Interaction; Sciences (Social, Health, Biological, Physical)

There is an urgent need to incorporate the perspectives of culturally diverse groups into AI development. We present a novel conceptual framework for research that aims to expand, reimagine, and reground mainstream visions of AI using independent and interdependent cultural models of the self and the environment. Two survey studies support this framework and provide preliminary evidence that people apply their cultural models when imagining their ideal AI. Compared with European American respondents, Chinese respondents viewed it as less important to control AI and more important to connect with AI, and were more likely to prefer AI with capacities to influence. Reflecting both cultural models, findings from African American respondents resembled both European American and Chinese respondents. We discuss study limitations and future directions and highlight the need to develop culturally responsive and relevant AI to serve a broader segment of the world population.

All Work Published on Design, Human-Computer Interaction

How Culture Shapes What People Want from AI
Nikki Goth Itoi
Jul 29, 2024
News
Topics: Design, Human-Computer Interaction

Stanford researchers explore how to build culturally inclusive and equitable AI by offering initial empirical evidence on cultural variations in people’s ideal preferences about AI.
Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising
Michelle Lam, Ayush Pandit, Colin H. Kalicki, Rachit Gupta, Poonam Sahoo, Danaë Metaxa
Oct 04, 2023
Research
Topics: Design, Human-Computer Interaction

Algorithm audits are powerful tools for studying black-box systems without direct knowledge of their inner workings. While very effective in examining technical components, the method stops short of a sociotechnical frame, which would also consider users themselves as an integral and dynamic part of the system. Addressing this limitation, we propose the concept of sociotechnical auditing: auditing methods that evaluate algorithmic systems at the sociotechnical level, focusing on the interplay between algorithms and users as each impacts the other. Just as algorithm audits probe an algorithm with varied inputs and observe outputs, a sociotechnical audit (STA) additionally probes users, exposing them to different algorithmic behavior and measuring their resulting attitudes and behaviors.

As an example of this method, we develop Intervenr, a platform for conducting browser-based, longitudinal sociotechnical audits with consenting, compensated participants. Intervenr investigates the algorithmic content users encounter online, and also coordinates systematic client-side interventions to understand how users change in response. As a case study, we deploy Intervenr in a two-week sociotechnical audit of online advertising (N = 244) to investigate the central premise that personalized ad targeting is more effective on users. In the first week, we observe and collect all browser ads delivered to users, and in the second, we deploy an ablation-style intervention that disrupts normal targeting by randomly pairing participants and swapping all their ads. We collect user-oriented metrics (self-reported ad interest and feeling of representation) and advertiser-oriented metrics (ad views, clicks, and recognition) throughout, along with a total of over 500,000 ads.

Our STA finds that targeted ads indeed perform better with users, but also that users begin to acclimate to different ads in only a week, casting doubt on the primacy of personalized ad targeting given the impact of repeated exposure. In comparison with other evaluation methods that only study technical components, or only experiment on users, sociotechnical audits evaluate sociotechnical systems through the interplay of their technical and human components.
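To make the swap intervention concrete, here is a minimal sketch in Python of the pairing-and-swapping step the abstract describes. All names and data shapes in it are hypothetical and purely illustrative; the sketch is not drawn from the actual Intervenr implementation.

    import random

    # Hypothetical sketch of the ablation-style intervention: randomly pair
    # participants, then serve each person the ads originally targeted at
    # their partner. Function names and data shapes are illustrative only.

    def pair_participants(participant_ids, seed=42):
        """Randomly form disjoint pairs; returns a partner lookup table."""
        rng = random.Random(seed)
        ids = list(participant_ids)
        rng.shuffle(ids)
        pairing = {}
        for a, b in zip(ids[::2], ids[1::2]):  # walk shuffled list two at a time
            pairing[a], pairing[b] = b, a
        return pairing

    def swapped_ads(week1_ads, pairing):
        """Week 2: each participant's feed becomes their partner's week-1 ads."""
        return {pid: week1_ads[partner] for pid, partner in pairing.items()}

    # Toy example with four participants and their observed (week-1) ads.
    week1 = {"p1": ["running shoes"], "p2": ["laptops"],
             "p3": ["flights"], "p4": ["novels"]}
    pairing = pair_participants(week1)
    print(swapped_ads(week1, pairing))

Note the design choice this captures: random pairing keeps the swapped ads "real" (each participant still sees genuinely targeted ads, just targeted at someone else), which is what lets the study isolate the effect of personalization itself rather than comparing against untargeted filler.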
Exploring the Complex Ethical Challenges of Data Annotation
Beth Jensen
Jul 10, 2024
News
Topics: Design, Human-Computer Interaction; Workforce, Labor

A cross-disciplinary group of Stanford students examines the ethical challenges faced by data workers and the companies that employ them.
Tuning Our Algorithmic Amplifiers: Encoding Societal Values into Social Media AIs
Angèle Christin
Oct 20, 2023
News
Topics: Design, Human-Computer Interaction; Machine Learning; Communications, Media

The values built into social media algorithms are highly individualized. Could we reshape our feeds to benefit society?
Spellburst: A Large Language Model–Powered Interactive Canvas for Generative Artists
Shana Lynch
Sep 13, 2023
News
Topics: Arts, Humanities; Design, Human-Computer Interaction

This new creativity support tool helps artists who work in code explore ideas using natural language and iterate with precision.
AI-Detectors Biased Against Non-Native English Writers
Andrew Myers
May 15, 2023
News
Topics: Design, Human-Computer Interaction; Natural Language Processing; Machine Learning

Don’t put faith in detectors that are “unreliable and easily gamed,” says scholar.