Tracking Wang and Kosinski’s AI Gayface Controversy

(Posted Sept. 13, updated Sept. 16.)

This post tracked early responses to Wang and Kosinski’s preprint paper, which claimed that their deep neural network’s ability to identify gay faces was evidence for the prenatal hormone theory of homosexuality, and that it exposed heretofore unknown threats to LGBTQ people in authoritarian regimes. My Sept. 9 blog post critiquing it was “Artificial Intelligence Discovers Gayface. Sigh.” I also posted about Peer review, ethics and LGBT Big Data.

This post has three sections. The first curates the most useful blog posts about the Wang and Kosinski paper. The second collects the most useful responses by Michal Kosinski. The last is an initial journalistic timeline of stories that contributed new information or framings to the story.

If I’ve missed something, or mischaracterized it, please respond in comments.

1.–USEFUL BLOG POSTS–

This Quartz piece from 9/16 is the most useful summary of most of the critiques below.

Calling Bullshit walks us carefully and thoroughly through Wang and Kosinski in their “Case Study on Machine Learning of Sexual Orientation,” updated on Sept. 19th to include conversations with Kosinski.

Carwil Bjork-James publishes a step-by-step way for journalists (and others) to evaluate scientific claims, and also a useful critique: “Bad Science Journalist: Gay Facial Recognition.”

Philip Cohen publishes a useful critique of the study’s methods and unwarranted leaps of inference in their claims, on his blog, Family Inequality: “On Artificially Intelligent Gaydar.”

Andrew Gelman’s blog post also offers useful interpretations of Wang & Kosinski’s data, and suggestions for how future studies might avoid bias: “God, Goons, and Gays: 3 Quick Takes.”

Jay Livingstone demonstrates why “The overall accuracy may be 90%, but when it comes to picking out gays, the machine is wrong far more often than it’s right.”
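Livingstone’s base-rate point can be checked with back-of-the-envelope arithmetic. A minimal sketch (the 90% per-class accuracy and 7% prevalence are illustrative assumptions for this example, not figures taken from the paper):

```python
# Why 90% "accuracy" can still be wrong more often than right when
# picking out the minority class.
# Illustrative assumptions: the classifier labels 90% of gay faces and
# 90% of straight faces correctly, and 7% of the population is gay.
sensitivity = 0.90   # P(flagged as gay | actually gay)
specificity = 0.90   # P(flagged as straight | actually straight)
base_rate = 0.07     # assumed prevalence of gay people

true_positives = sensitivity * base_rate
false_positives = (1 - specificity) * (1 - base_rate)

# Precision: of everyone the classifier flags as gay, what fraction is?
precision = true_positives / (true_positives + false_positives)
print(f"precision = {precision:.1%}")  # ~40%: most people flagged are straight
```

Because straight people vastly outnumber gay people under these assumptions, even a 10% false-positive rate among the majority swamps the true positives, so a flag is wrong roughly 60% of the time.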

Thomas Sutton tweets a useful thread evaluating the sources for Wang and Kosinski’s claim of a “‘widely accepted’ prenatal hormone theory of sexual orientation,” concluding “your ‘I’m doing this to warn you of your privacy risks’ as a post-facto veneer of respectability.”

Jenny Davis publishes “Rendering Bodies out of Rendered Machines” on Cyborgology, a feminist critique of Wang and Kosinski’s essentialist assumptions about sexual orientation and identity, and a reminder that AI is not a “sanitary machine apparatus but a vessel of human values.”

Lisa Vaas, for computer security firm Sophos’ news site Naked Security, publishes a useful exploration of the privacy implications: “Concerns Raised Over Claim That Neural Networks Can Detect Sexuality.”

Sarah Jamie Lewis tweets a useful thread, including the statement “Research Should be Ethical, It Should Be Consensual.”

Tom White, on Twitter, suggests that “vgg-face is certainly not invariant to pose, in fact it’s a swell basis for a ‘pose classifier’”; the theory seems legit.

And Michael Byrne carefully walks us through “How to Navigate the Coming AI Hype Storm,” which bears the subtitle: “Many machine learning studies don’t actually show anything meaningful, but they spread fear, uncertainty, and doubt.”


2.–RESPONSES BY MICHAL KOSINSKI– (between 9/7 and 9/13, but not ordered by hour/time)

9/7: In thread with @ellenbroad, she asks questions about ethical processes and privacy of the data. In response, Kosinski replies “Good points. We included some info it in the article,” by which he apparently means the Authors’ note file.

as of 9/8: lists this study as #1 on his personal website’s list of “selected representative publications,” linking also to the Authors’ notes, edited until 9/10.

9/8: tweets “This was not a paper about diversity but about risky algorithms.”

9/8: in thread with @ellenbroad, tweets “we followed standard procedure at Stanford. No identifiable information. Only public data (i.e. indexed by Google.” links to humansubjects.stanford.edu

9/8: In thread with @N0ffon, Kosinski replies “Are you an expert on AI/methodology? If so, please tell us about your concerns. If not, criticizing the methods makes you look a bit silly.” and later “I am not pursuing a better understanding- I used a widely spread tech. I am just telling you how dangerous it is…”

9/9: tweets “The study has been approved by the IRB.”

9/9: Tweets “Sexuality researchers that have reviewed it, suggested (correctly I believe) that linking [Prenatal hormone theory] with the theory makes it stronger.”

9/9: Comments on Carwil Bjork-James’ blog, borrowing Economist framing that his algorithm is as accurate as “spectroscopy when detecting breast cancer or state-of-the-art diagnostic tools for Parkinson’s disease. We widely use and trust those diagnostic tools.” Includes the cut-and-paste paragraph that nobody is worrying about the privacy implications and that gay rights organizations “are putting at risk the very people that are [sic] supposed to be protecting.”

9/9: Tweets to Carwil Bjork-James “I do not think that we speculative claims in the paper, but I would be keen to learn more.”

9/10: In a thread response to @AnnieTheObscure ‘s request “how do we respond to the dangers your paper exposes?” Kosinski punts “What, in your view, is the best way to address this.” I participate in the thread. Kosinski suggests “we need policies regulating storage and processing facial images; it’s as sensitive as data on sexuality, political views, or religion,” and later, “We need policymakers, lawyers, LGBTQ rights orgs, and technology companies to take this threat seriously.”

9/10: Tweets “If you noticed that a popular tech poses a threat, would you keep it to yourself, or study, peer-review, and sound a warning,” in thread with @XandaSchofield, who says affirming the prenatal hormone theory needs “rock solid” evidence. Thread joined by @shionguha, who asks “I think you study is important but is there a reason why you didn’t apriori collaborate with stakeholders say ethicists or activist groups?” Kosinski responds: “We did, a long time ago…”

9/10: In thread with @jacobkesinger’s statement “not surprised, just disappointed that @stanford IRB approved this study” and later, “as a ML practitioner I realize I have to always keep in mind the actual impact of my work on real people,” Kosinski responds: “as you can clearly see, most people weren’t aware of what ML practitioners can do.”

9/10: In thread started by @mathbabedotorg with “1. Builds Cambridge Analytica’s models 2. Algos to find gays 3. Tells us he’s a really good dude. Data Science needs to do better than this.” Kosinski responds “I am warning you – you hate me.” and “I did not build CA’s models, nor algos to find gays – I warned against them. But people prefer to hate the messenger rather than listen.”

9/10: In response to @tato_tweets that “91% accuracy eh? if one in ten people are gay and you guess they are all straight, you get 90% accuracy. that algorithm though *slow clap*” Kosinski responds, “You do, but how many gay people have you identified accurately? *Claps* for trying, but classification accuracy is more complex than that.” In another exchange with @tato_tweets, who says: “Wow Michal, what would the world do without your groundbreaking research? You are saving so many lives. God bless you.” Response: “The world would survive. Many more LGBTQ people would be hurt though. I care, you don’t have to.” Tato_tweets replies: “Not telling us anything we don’t know. LGBTQ people are at risk. Question why. I think I care more than you. That’s why we do sociology.”

9/11: In another thread with @AnnieTheObscure, he dodges her assertion that “you did yourself no good by rather wild extrapolations of the meaning of the result,” citing Scatterplot. He responds: “we can disagree about that. Can we agree, however, that AI poses real threats to privacy and try to do something about it” She responds: “yes. I’m only questioning your analysis of the meaning of your results, not their danger.” He responds: “The meaning of my results is that the AI poses threat to privacy. So we seem to be agreeing about this too, amazing!”

9/11: In another thread with @mathbabedotorg, who posts a Business Insider story about Faception and notes “Dude’s also on Faceptions board of advisors.” In a separate thread, he replies, “I am not Cathy, they came to ask about ethics of AI, they referred to me as advisor in one of their slides. Startups do that.”

His response, in the first thread: “Do you think that by trying to smear me, you will make the threats go away?” The author of Weapons of Math Destruction responds: “What you call academic research, I call weaponized algorithms. I am not surprised you don’t agree with me, but yes I’d like a chance to talk.” He responds: “I call them weaponized algorithms, too, and I am warning you against their potential. Let’s talk, then, reach out.”

9/12: In response to @Texifire’s discovery that he is the registrant of the domain Applymagicsauce.com, tweets “I have no commercial interest in any predictive technology whatsoever.”


9/13: In a Twitter thread, says “opening a false (or real!) profile to gain access to data is an ethical violation.”


3.–SOME BACKGROUND AND UNFOLDING MEDIA COVERAGE–

9/7: Story dated 9/8 broken by The Economist: “Advances in AI are used to spot signs of sexuality.” First posted to Twitter on 9/7 by Economist technology correspondent Hal Hodson.
9/7: Story picked up by TechCrunch author Devin Coldewey: “AI That Can Determine a Person’s Sexuality From Photos Shows the Dark Side of the Data Age.” Concludes with the statement that tech is autonomous from “us”: “tech won’t save us from ourselves–we might have to save ourselves from tech.”
9/8: HRC and GLAAD issue joint statement “GLAAD and HRC call on Stanford University & responsible media to debunk dangerous & flawed report claiming to identify LGBTQ people through facial recognition technology.”
9/8: Kosinski releases Google doc calling HRC/GLAAD statement a “smear campaign” and accusing them of bullying journalists (note: Kosinski edited the statement several times after it was released).
9/8: Guardian publishes first article with framing of “Wang & Kosinski vs. HRC/GLAAD: “LGBT Groups Denounce ‘Dangerous’ Study That Uses Your Face to Guess Sexuality.”
9/8: Louise Matsakis, for Vice’s Motherboard, publishes first article framed around digital privacy and consent: “A Frightening AI Can Determine Whether a Person Is Gay With 91 Percent Accuracy.” (the headline is incorrect, as the blogs above explain)

9/9: I publish on my blog: “Artificial Intelligence Discovers Gayface. Sigh.”
9/9: Sydney Fussell, filed under “Can’t Keep a Straight Face” on Gizmodo, publishes “Researchers Claim They Can Use Face Recognition to Accurately Identify Someone’s Sexuality.” Raises privacy, validity, and technical concerns.

9/10: Dan Hirschman graciously republishes my post on his blog, Scatterplot.
9/10: Alex Bollinger of LGBTQ Nation publishes “HRC and GLAAD Release A Silly Statement About the ‘Gayface’ Controversy.” Usefully critiques inaccuracies in the statement while accepting the authors’ framing that their study is a useful warning about a dangerous technology.
9/11: Mashable’s Gianluca Mezzofiore: “Everything That’s Wrong With That Study Which Used AI to ‘Identify Sexual Orientation.’”
9/11: General coverage of the Wang and Kosinski vs. HRC/GLAAD controversy appears in The Advocate
9/11: Adrianne Jeffries reports that the journal that had accepted Wang and Kosinski’s paper has sidelined it for ethical review: “That Study On Artificially Intelligent Gaydar is Now Under Ethical Review.”

9/12: Inside Higher Ed publishes “How Good is Your Gaydar? How Good is Your Science?”
9/12: More general coverage framed as Wang and Kosinski vs. HRC/GLAAD appears in The Washington Post

9/16: Slate adds Sonya Katyal’s discussion of civil rights law and AI technology.

