Vanderbilt University had to apologize after sending an email to students in the wake of the mass shooting at Michigan State University last month, because the email revealed it had been written using the ChatGPT chatbot.
It's not that the email was particularly inappropriate or coldhearted; rather, the university seemed not to grasp the optics of using a chatbot to write such sensitive communication.
The message itself read like a fairly standard boilerplate response to a tragedy:
"In the wake of the Michigan shootings, let us come together as a community to reaffirm our commitment to caring for one another and promoting a culture of inclusivity on our campus."
"By doing so, we can honor the victims of this tragedy and work towards a safer, more compassionate future for all."
While this seems like a fairly standard and acceptable message to send in the wake of a tragedy, the email's final line undermined it:
"Paraphrase from OpenAI’s ChatGPT language model, personal communication, February 15, 2023."
People couldn't believe the school had used an AI chatbot to write such important communication.
Others just couldn't believe nobody thought to remove the line saying it was paraphrased from ChatGPT.
The university quickly responded to the criticism with an apology, calling the decision to use ChatGPT for the message "poor judgement."
Nicole Joseph, Assistant Dean for Equity, Diversity, and Inclusion at Vanderbilt, sent a follow-up email to explain how such an obvious mistake was made in the first place.
Laith Kayat, a Vanderbilt student in his senior year whose sister attends Michigan State, called the EDI department's use of the chatbot for such important and sensitive communication "disgusting."
Fellow student Bethany Stauffer agreed, telling Vanderbilt's student newspaper, the Vanderbilt Hustler:
"There is a sick and twisted irony to making a computer write your message about community and togetherness because you can’t be bothered to reflect on it yourself."
Kayat challenged the school's administration to be better.
"Deans, provosts, and the chancellor: Do more. Do anything. And lead us into a better future with genuine, human empathy, not a robot. [Administrators] only care about perception and their institutional politics of saving face."
As ChatGPT and similar tools that use machine learning to predict text become more capable and more widespread, incidents like this are likely to become more common.
It is probably time for a conversation about when it is, and isn't, appropriate to use machine-generated text in communications.