I have been working with Mark Alfano and his Digital Trust lab, applying data science and network analysis methods to the study of social epistemology. I am also passionate about empowering philosophers with state-of-the-art data science methods. Examples of work from this collaboration include: (Cheong et al., 2024), (Alfano et al., 2024), (Alfano et al., 2022), (Abedin et al., 2023), (Ojea Quintana et al., 2022).
Collaborators include: Mark Alfano (Macquarie); J Adam Carter (Glasgow); Emily Sullivan (Eindhoven); Colin Klein (ANU); Ignacio Ojea Quintana (University of Munich); Ritsaart Reimann (Macquarie); Annie Chan (Macquarie); Marinus Ferreira (Macquarie); Ehsan Abedin (Flinders).
I am also part of a collaboration with Oliver Curry and his team, contributing digital ‘machine reading morals’ techniques. Specifically, I contribute data-driven analyses supporting Curry’s Morality-as-Cooperation (MAC) theory.
Collaborators include: Oliver Scott Curry (kindness.org/Oxford); Mark Alfano (Macquarie); Rene Weber, Musa Malik, Sungbin Youk, Frederic Hopp (Media Neuroscience Lab, UCSB).
References
2024
- Investigating gender and racial biases in DALL-E Mini Images
Marc Cheong, Ehsan Abedin, Marinus Ferreira, Ritsaart Willem Reimann, Shalom Chalson, Pamela Robinson, Joanne Byrne, Leah Ruppanner, Mark Alfano, and Colin Klein
2024
Generative artificial intelligence systems based on transformers, including both text generators such as GPT-4 and image generators such as DALL-E 3, have recently entered the popular consciousness. These tools, while impressive, are liable to reproduce, exacerbate, and reinforce extant human social biases, such as gender and racial biases. In this article, we systematically review the extent to which DALL-E Mini suffers from this problem. In line with the Model Card published alongside DALL-E Mini by its creators, we find that the images it produces tend to represent dozens of different occupations as populated either solely by men (e.g., pilot, builder, plumber) or solely by women (e.g., hairdresser, receptionist, dietitian). In addition, the images DALL-E Mini produces tend to represent most occupations as populated primarily or solely by White people (e.g., farmer, painter, prison officer, software engineer) and very few by non-White people (e.g., pastor, rapper). These findings suggest that exciting new AI technologies should be critically scrutinized and perhaps regulated before they are unleashed on society.
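As a rough illustration of how such an occupation audit can be tallied, here is a minimal sketch in Python. It assumes a hypothetical hand-annotated file of generated images; the file name and column names are illustrative placeholders, not the study's actual data or pipeline.

```python
# Minimal sketch, assuming a hypothetical annotation file with one row per
# generated image and columns: occupation, perceived_gender, perceived_race.
import pandas as pd

df = pd.read_csv("dalle_mini_annotations.csv")  # hypothetical file name

# Share of images coded as depicting each gender, per occupation prompt.
gender_share = (
    df.groupby("occupation")["perceived_gender"]
      .value_counts(normalize=True)
      .unstack(fill_value=0)
)

# Occupations rendered (almost) exclusively as one gender.
mostly_men = gender_share.index[gender_share["man"] >= 0.95].tolist()
mostly_women = gender_share.index[gender_share["woman"] >= 0.95].tolist()
print("Depicted solely/mostly as men:", mostly_men)
print("Depicted solely/mostly as women:", mostly_women)

# The same pattern applies to the racial breakdown via the perceived_race column.
```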
- Now you see me, now you don’t: an exploration of religious exnomination in DALL-E
Mark Alfano, Ehsan Abedin, Ritsaart Reimann, Marinus Ferreira, and Marc Cheong
2024
Artificial intelligence (AI) systems are increasingly being used not only to classify and analyze but also to generate images and text. As recent work on the content produced by text and image Generative AIs has shown (e.g., Cheong et al., 2024; Acerbi & Stubbersfield, 2023), there is a risk that harms of representation and bias, already documented in prior AI and natural language processing (NLP) algorithms, may also be present in generative models. These harms relate to protected categories such as gender, race, age, and religion. There are several kinds of harms of representation to consider in this context, including stereotyping, lack of recognition, denigration, under-representation, and many others (Crawford in Soundings 41:45–55, 2009; in: Barocas et al., SIGCIS Conference, 2017). Whereas the bulk of researchers’ attention thus far has been given to stereotyping and denigration, in this study we examine ‘exnomination’, as conceived by Roland Barthes (1972), of religious groups. Our case study is DALL-E, a tool that generates images from natural language prompts. Using DALL-E mini, we generate images from generic prompts such as “religious person.” We then examine whether the generated images are recognizably members of a nominated group. Thus, we assess whether the generated images normalize some religions while neglecting others. We hypothesize that Christianity will be recognizably represented more frequently than other religious groups. Our results partially support this hypothesis but introduce further complexities, which we then explore.
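To make the idea of an exnomination tally concrete, here is a toy sketch assuming hand-annotated labels for images generated from a generic prompt such as “religious person”; the labels below are invented placeholders, not the study's data.

```python
# Toy sketch: how often are images from a generic "religious person" prompt
# recognizable as a particular tradition? Labels are invented placeholders.
from collections import Counter

labels = [
    "Christianity", "Christianity", "Islam", "unrecognizable",
    "Christianity", "Buddhism", "unrecognizable", "Christianity",
]

counts = Counter(labels)
for tradition, n in counts.most_common():
    print(f"{tradition}: {n / len(labels):.0%} of generated images")
```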
2023
- Exploring intellectual humility through the lens of artificial intelligence: Top terms, features and a predictive model
Ehsan Abedin, Marinus Ferreira, Ritsaart Reimann, Marc Cheong, Igor Grossmann, and Mark Alfano
2023
Intellectual humility (IH) is often conceived as the recognition of, and appropriate response to, your own intellectual limitations. As far as we are aware, only a handful of studies look at interventions to increase IH – e.g. through journalling – and no study so far explores the extent to which having high or low IH can be predicted. This paper uses machine learning and natural language processing techniques to develop a predictive model for IH and identify top terms and features that indicate degrees of IH. We trained our classifier on the dataset from an existing psychological study on IH, where participants were asked to journal their experiences with handling social conflicts over 30 days. We used Logistic Regression (LR) to train a classifier and the Linguistic Inquiry and Word Count (LIWC) dictionaries for feature selection, picking out a range of word categories relevant to interpersonal relationships. Our results show that people who differ on IH do in fact systematically express themselves in different ways, including through expression of emotions (i.e., positive, negative, and specifically anger, anxiety, sadness, as well as the use of swear words), use of pronouns (i.e., first person, second person, and third person) and time orientation (i.e., past, present, and future tenses). We discuss the importance of these findings for IH and the value of using such techniques for similar psychological studies, as well as some ethical concerns and limitations with the use of such semi-automated classifications.
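For readers curious about the shape of this setup, below is a minimal, hypothetical sketch of a LIWC-features-plus-logistic-regression classifier using scikit-learn; the file name and column names are illustrative stand-ins, not the study's actual code or data.

```python
# Minimal sketch: predict high vs. low intellectual humility from LIWC scores.
# "ih_liwc_features.csv" and its columns are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("ih_liwc_features.csv")   # one row per participant's journal
X = df.drop(columns=["high_ih"])           # LIWC category scores (emotion, pronouns, tense, ...)
y = df["high_ih"]                          # 1 = high IH, 0 = low IH

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))

# Coefficients hint at which LIWC categories are most indicative of high or low IH.
top_terms = pd.Series(clf.coef_[0], index=X.columns).sort_values()
print(top_terms.head(10))   # most indicative of low IH
print(top_terms.tail(10))   # most indicative of high IH
```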
2022
- The Affiliative Use of Emoji and Hashtags in the Black Lives Matter Movement in Twitter
Mark Alfano, Ritsaart Reimann, Ignacio Ojea Quintana, Anastasia Chan, Marc Cheong, and Colin Klein
2022
Protests and counter-protests seek to draw and direct attention and concern with confronting images and slogans. In recent years, as protests and counter-protests have partially migrated to the digital space, such images and slogans have also gone online. Two main ways in which these images and slogans are translated to the online space are through the use of emoji and hashtags. Despite sustained academic interest in online protests, hashtag activism, and the use of emoji across social media platforms, little is known about the specific functional role that emoji and hashtags play in online social movements. In an effort to fill this gap, the current paper studies both hashtags and emoji in the context of the Twitter discourse around the Black Lives Matter movement.
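As a flavour of the raw ingredients involved, here is a small sketch that extracts hashtags and emoji from tweet text; the regular expressions are deliberately crude approximations and the example tweets are invented, so this is not the paper's actual method.

```python
# Crude sketch: extract hashtags and emoji from tweet text for counting.
import re
from collections import Counter

HASHTAG_RE = re.compile(r"#\w+")
# Rough emoji coverage: misc symbols/dingbats plus the main emoji blocks.
EMOJI_RE = re.compile(
    "[\u2600-\u27BF"
    "\U0001F300-\U0001F5FF\U0001F600-\U0001F64F"
    "\U0001F680-\U0001F6FF\U0001F1E6-\U0001F1FF]"
)

tweets = [  # invented examples
    "Justice for all ✊🏿 #BlackLivesMatter",
    "Marching downtown today ✊ #BLM #protest",
]

hashtags = Counter(tag.lower() for t in tweets for tag in HASHTAG_RE.findall(t))
emoji = Counter(e for t in tweets for e in EMOJI_RE.findall(t))
print(hashtags.most_common(5))
print(emoji.most_common(5))
```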
- Polarization and trust in the evolution of vaccine discourse on Twitter during COVID-19
Ignacio Ojea Quintana, Ritsaart Reimann, Marc Cheong, Mark Alfano, and Colin Klein
2022
Trust in vaccination is eroding, and attitudes about vaccination have become more polarized. This is an observational study of Twitter analyzing the impact that COVID-19 had on vaccine discourse. We identify the actors, the language they use, how their language changed, and what can explain this change. First, we find that authors cluster into several large, interpretable groups, and that the discourse was greatly affected by American partisan politics. Over the course of our study, both Republicans and Democrats entered the vaccine conversation in large numbers, forming coalitions with Antivaxxers and public health organizations, respectively. After the pandemic was officially declared, the interactions between these groups increased. Second, we show that the moral and non-moral language used by the various communities converged in interesting and informative ways. Finally, vector autoregression analysis indicates that differential responses to public health measures are likely part of what drove this convergence. Taken together, our results suggest that polarization around vaccination discourse in the context of COVID-19 was ultimately driven by a trust-first dynamic of political engagement.
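For the curious, a hedged sketch of what a vector autoregression over weekly language time series can look like with statsmodels is below; the two series are random placeholders rather than the study's actual measurements.

```python
# Hedged sketch of a VAR over weekly time series; the data are random
# placeholders, not the communities' actual moral-language measurements.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
weeks = pd.date_range("2020-01-05", periods=60, freq="W")
data = pd.DataFrame(
    {
        "moral_language_rate": rng.normal(size=60).cumsum(),
        "public_health_mentions": rng.normal(size=60).cumsum(),
    },
    index=weeks,
)

results = VAR(data).fit(maxlags=4, ic="aic")   # pick lag order by AIC
print(results.summary())

# Granger-style question: do public-health mentions help predict moral language?
print(results.test_causality("moral_language_rate", ["public_health_mentions"]).summary())
```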