Exploring HAI — Experience at Georgia Tech

Xin Lian
7 min read · May 11, 2024


During the Spring 2024 semester at Georgia Tech, I had the opportunity to take CS 8803 HAI (Human-AI Interaction), taught by Chris MacLellan, with whom I’ve had a long-standing collaboration. The course runs seminar-style: students discuss weekly readings on specific topics during in-person classes and complete several assignments along the way. While attendance isn’t mandatory, I personally think active participation is crucial for making your case for a good grade at the semester’s end. Assignments throughout the term include critiquing existing HAI technologies, proposing new ones, and defining HAI itself.

If you enjoy engaging in critical discussions around HAI (or related fields like HCI and interactive ML) and are eager to meet new friends in these areas, I highly recommend enrolling in this course. It has typically run every Spring semester, though its future availability may vary. The course may seem tailored to students with backgrounds in HCI and interactive computing, but in my experience it welcomes all interested individuals, including those like me without a strong HCI background (I was more of a CogSci person then). I personally found the course profoundly insightful because it prompts reflection on how algorithms should be situated in interactive systems with particular agents, and on why and how those algorithms should be “interactive” in the first place.

Favorite Readings

I think my favorite, and probably the favorite of most students, is A Mulching Proposal: Analyzing and Improving an Algorithmic System for Turning the Elderly into High-Nutrient Slurry by Os Keyes, Jevan Hutson, and Meredith Durbin. The title is not a metaphor: the authors really do discuss why and how the elderly should be turned into high-nutrient slurry, and this satirical reading landed at just the right time, one day after April Fool’s Day.

I read this paper as a condemnation of how preset ethical-principle frameworks for designing agents can be hollow, hypocritical, or indifferent. Though the idea of mulching people sounds deeply unsettling and creepy, it is logically feasible within the paper’s framing, since it is consistent with Principle 1.2 of the “newest ACM Code of Ethics.” As I see it, the authors framed this story to make readers reflect on whether these “ethics frameworks” truly enhance ethics or merely provide a facade of adherence to ethical principles. I do think the underlying idea is intuitive to many people, including those within the AI community. So the authors may really be casting a critical eye on people engaged in AI endeavors of questionable ethical nature, who attempt to justify their actions by aligning with frameworks promoting “fairness, transparency, accountability,” and so on, presenting their work as ethically sound while ignoring whether they are actually acting ethically. Moral hypocrisy!

Another one I really like is Critical Race Theory for HCI by Ihudiya Finda Ogbonnaya-Ogburu, Angela D. R. Smith, Alexandra To, and Kentaro Toyama. In a nutshell, I think this one is more of a call to action, highlighting deeper issues in the HCI field that many people are unaware of. The paper is structured into three main parts: an introduction to critical race theory, reflections on personal stories shared by HCI researchers through the lens of that theory, and an exploration of how the theory resonates with HCI and how racial issues should be addressed in HCI studies.

This was actually my first exposure to critical race theory, and though I’m not sure it holds for every social group, it resonates with many of my ideas about racial issues in the US. I’m grateful to have learned about this theory, since it may help ground my future claims about racism. Some of the key tenets that resonated with me include “racism is ordinary and not aberrational,” “those with power rarely concede it without interest convergence,” and “liberalism itself can hinder anti-racist progress.”

Regarding the personal stories included in the paper, I believe many of the ideas resonate with concepts from sociolinguistics. In one story, the author felt uncomfortable being asked to edit participants’ quotations for “grammar” and “readability,” since many used colloquial language. This relates to how varieties of English correlate with social factors like ethnicity, class, and gender, and how language elements carry social meanings. Beyond the explicit markers we all recognize, there are far more covert ones, as in the “Jocks and Burnouts” study by Penelope Eckert (Jocks: higher social status, adherence to school norms, etc.; Burnouts: resistance to school culture, lower social status, etc.). Another sociolinguistic paper, “Monolingual Language Ideologies and the Idealized Speaker” by Chris K. Chang-Bacon, reveals how racism is systemic in society by studying the “new bilingualism,” which appears to improve diversity in society but is designed to disproportionately benefit monolingual English speakers, often from the white middle class.

Bringing this back to the paper, I find the personal stories in it both fair and frustrating. For example, I disagree with the idea that filter bubbles are beneficial in HCI studies; I personally see them as a (really) explicit way of hindering social mobility. (P.S. I’m even more frustrated to speculate that one story took place at NU: it is private, and CMU is not that “white” compared to NU. Though it does make sense, as they have the most prestigious school of journalism and communication, lol. And yes, journalism plays a pivotal role, I suppose, in hindering anti-racist progress in the States.)

Beyond these favorites, I got to learn some really interesting stories (teas?) behind the readings. One is the aftermath of the publication of On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?, which examined multiple risks of very large language models, including how they encode and amplify bias in AI. This publication effectively forced Timnit Gebru, one of the co-authors, out of Google, because she neither withdrew the paper before publication nor removed the names of the Google employees from it. Another co-author, Margaret Mitchell (who used the pseudonym Shmargaret Shmitchell on the paper; I don’t know if that was an ironic provocation, LOL), investigated how Gebru was treated unfairly by Google after she left, reportedly using a script to search through Gebru’s corporate account and download emails that allegedly documented discriminatory incidents involving Gebru. Mitchell was later fired for the “alleged exfiltration of confidential business-sensitive documents and private data of other employees.”


Another story involves Joichi Ito (伊藤穰一), the former Director of the MIT Media Lab and the author of Resisting Reduction: A Manifesto, which argues that we should resist the singularity narrative and advocate human-machine collaboration rather than competition. Unfortunately, his resignation, following the revelation of his relationship with Jeffrey Epstein, left this publication reading as entirely ironic: the controversy hardly aligned with the values his work promoted, namely AI for better human welfare.


What Makes the Course Worth It

I believe Chris fosters a culture of critical thinking and reflection throughout the classes, encouraging students to critique and engage deeply with course materials. This emphasis on intellectual rigor was evident in all class discussions, where classmates offered diverse perspectives and thoughtful analyses. Moreover, the open-ended nature of the course, in which students grapple with defining HAI themselves, cultivates a sense of intellectual autonomy and creativity.

The richness of this course also lies in its community of students, whose diverse backgrounds and perspectives enrich class discussions and foster meaningful connections.

In summary, if you are eager to explore the intersection of human and artificial intelligence, engage in critical discourse, and connect with like-minded individuals passionate about shaping the future of technology for the betterment of society, this course is an invaluable opportunity. Whether you’re a seasoned HCI expert or a newcomer to the field, CS 8803 HAI offers a transformative learning experience grounded in intellectual curiosity and collaborative exploration.

And a huge shoutout to all the interesting souls in this course — including Alec Helbling, Julia Kruk, William Goodall, Rachel Lowy, Christoffer Rokholm, Philipp Hemkemever, and Momin Siddiqui. I’m so glad to have had you all in this course, and you guys are just amazing ;)

The wonky group photo of students in the class + Chris.


Written by Xin Lian

CS PhD Student @NU. Main Interest: All-genre music. Minor Interest: Human-like learning, computational cognitive science, knowledge-based AI.
