The Privacy Pros and Cons of Anthropomorphized AI

Author: Safia Kazi, CSX-F, CIPT
Date Published: 14 September 2023

The rapid growth of generative artificial intelligence (AI) over the last year has raised numerous privacy and ethical issues, including debate over the ethics and legality of generative AI learning from copyrighted work. For example, a group of authors has filed a lawsuit against OpenAI, the company behind ChatGPT, for breach of copyright.1 There is also concern about the ways in which generative AI might use personal information. For instance, Italy’s data protection authority temporarily banned ChatGPT due to privacy-related concerns.2

Generative AI that creates text and chats with a user poses a unique challenge because it can lead people to feel as though they are interacting with a human. Anthropomorphism refers to attributing human characteristics or personalities to nonhumans.3 People often anthropomorphize AI, especially generative AI, because of the human-like outputs it can create.

This faux relationship can be problematic. The ELIZA effect describes what happens “when a person attributes human-level intelligence to an AI system and falsely attaches meaning, including emotions and a sense of self, to the AI.”4 In one instance, a man shared his concerns about climate change with a chatbot, and it provided him with methods to take his own life. Tragically, he died by suicide, and his wife alleges that he would not have done so had it not been for his perceived relationship with the chatbot.5

Algorithms that produce human-like outputs can be fun to play with, but they pose serious privacy concerns. There are privacy-related pros and cons to anthropomorphized AI, and enterprises that leverage consumer-facing generative AI need to take certain considerations into account.

Privacy Pros

One privacy-related benefit associated with anthropomorphizing AI is that it may help users contextualize excessive data collection. When an enterprise or an app collects information, that information may seem like a large assortment of 1s and 0s that do not hold any significance. But when what feels like a stranger starts telling users private information about themselves, the experience is off-putting, and the consequences of sharing data become concrete and tangible.

Apps may be required to describe in the app store what information they collect, but users often do not read that description because it is complex or because they do not understand exactly what the data collection entails. In contrast, having an AI chatbot put that information to use may be more impactful. Consider Snapchat’s AI chatbot, My AI. Although the Apple App Store informs users that Snapchat can access location data, figure 1 illustrates more concretely what access to location data means. For example, I asked My AI to recommend a good coffee shop near me.

Figure 1—My AI and Location Data

App users may understand that an app has access to location data, but having a conversation with a human-seeming chatbot that names specific neighborhood businesses better exemplifies what it means to share location data with an app. This may help people learn more about privacy matters, leading consumers to be more careful about the information they share and to take steps to protect their privacy.

Privacy Cons

When talking to an AI chatbot that sounds human-like and uses first- or second-person language, users may feel comfortable sharing more information than they ordinarily would. It may feel as though the information provided to the chatbot is being shared with a friendly person rather than with an enterprise that may use those data for a variety of purposes. For example, people may talk to a chatbot for a while and eventually reveal sensitive information (e.g., health issues they are struggling with). Most chatbots will not warn users when they are providing sensitive information.
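As a rough illustration of the kind of warning most chatbots lack, the following Python sketch screens a user’s message for a few sensitive-looking patterns before it would be handed to a chatbot back end. The patterns, the labels and the send_to_chatbot call are hypothetical placeholders, not any vendor’s actual API or a reliable detection method; a production system would need a vetted classifier and legal review. The point is only that a warning can be shown before the data leave the user’s control.

```python
import re

# Illustrative patterns only; a real deployment would need a vetted detection
# approach (e.g., a dedicated PII/PHI classifier), not a handful of regexes.
SENSITIVE_PATTERNS = {
    "possible health information": re.compile(
        r"\b(diagnos(is|ed)|prescription|therapy|depress(ion|ed)|anxiety)\b", re.I
    ),
    "possible government ID number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def sensitive_matches(message: str) -> list[str]:
    """Return labels for any sensitive-looking content found in the message."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items() if pattern.search(message)]


def submit_message(message: str) -> None:
    warnings = sensitive_matches(message)
    if warnings:
        # Warn before the data leave the user's device; the user can still choose to proceed.
        print("Warning: your message appears to contain " + " and ".join(warnings) + ".")
        print("Information shared with this chatbot may be retained or used to improve the service.")
    # send_to_chatbot(message)  # hypothetical call to the chatbot back end


submit_message("I was diagnosed with anxiety last year. What should I do?")
```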

It is possible that an individual’s inputs or an AI platform’s outputs could be used to train future responses. This could mean that sensitive or secret information shared with a chatbot might be shared with others or might influence the outputs others receive. For example, although it has since changed its policy, OpenAI initially trained its models on data provided to ChatGPT.6 And after an AI model has been trained on data, it is hard to untrain it.7

Privacy notices are often challenging to understand, and consumers may be tempted to bypass them in favor of a conversational, easy-to-comprehend response about privacy from an AI chatbot. Users may believe the information the chatbot provides is comprehensive, but someone who does not dig further into a provider’s privacy notice may have an inaccurate idea of what information the provider collects.

For example, I asked Snapchat’s chatbot what information the app collects, and it provided me with incomplete information. Figure 2 shows My AI indicating what data it collects, but that is not a complete list. Figure 1 established that the app also collects location data.

Figure 2—Asking Snapchat What Data It Collects

Takeaways for Practitioners

Privacy professionals who work at enterprises that leverage AI chatbots should scrutinize how the chatbot describes the enterprise’s privacy practices. Responses to questions about what data are collected, how those data are used and consumer rights must be accurate and thorough to avoid misleading data subjects.
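One way to keep such responses accurate, offered here only as a minimal sketch, is to route recognizable privacy questions to text the privacy and legal teams have already approved rather than letting the model improvise. The topics, keywords, answer text and generate_freely fallback below are assumptions made for illustration, not a description of any particular product.

```python
# Minimal sketch: serve vetted, preapproved answers to privacy questions instead of
# letting the model improvise. The topics, keywords and answer text are placeholders;
# real answers would be drawn from and reviewed against the privacy notice.
APPROVED_ANSWERS = {
    "data collection": (
        "We collect account details, usage data and, with your permission, location data. "
        "See our privacy notice for the complete list."
    ),
    "consumer rights": (
        "You can request access to, correction of or deletion of your data at any time. "
        "Our privacy notice explains how to submit a request."
    ),
}

PRIVACY_KEYWORDS = {
    "data collection": ("what data", "what information", "collect about me"),
    "consumer rights": ("delete my data", "my rights", "access my data"),
}


def generate_freely(question: str) -> str:
    """Stand-in for the chatbot back end that handles non-privacy questions."""
    return "[model-generated response]"


def answer(question: str) -> str:
    q = question.lower()
    for topic, keywords in PRIVACY_KEYWORDS.items():
        if any(keyword in q for keyword in keywords):
            return APPROVED_ANSWERS[topic]  # vetted, complete answer
    return generate_freely(question)


print(answer("What information do you collect about me?"))
```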

Enterprises that use AI chatbots must also consider the ages of their users and ensure that minors are protected. For example, 59% of teens say they use Snapchat,8 and only users with a paid Snapchat+ subscription can remove its My AI feature.9 This means that minors, and parents of minors, who are using the free version of Snapchat cannot remove the My AI function. Teens, who may not understand My AI’s limitations or how their information could be used, are turning to it for mental health help, which could be dangerous.10 Enterprises that provide AI services to minors should inform them about the consequences of providing sensitive information and reinforce that outputs may not be accurate.

I asked My AI what to do about a headache, and while it initially declined to provide medical advice, it eventually recommended that I take medication (figure 3). Although the recommendation was for relatively harmless, over-the-counter medication, minors may not know whether those medications could interact with any medications they have been prescribed.

Figure 3—My AI Recommending Medication

Even enterprises that do not leverage generative AI need to explore how it may affect their day-to-day operations. Staff may feel as though asking ChatGPT for help drafting an email is akin to asking a colleague for help, but the two are not comparable: ChatGPT is a third-party service and does not abide by workplace confidentiality norms. It is imperative to have a policy around the use of generative AI, and drafting one requires input from a variety of departments, not just the privacy team. Managers should also establish guidelines about what kind of work can leverage AI tools (e.g., it may be permitted to use ChatGPT to draft a social media post but not to draft a confidential email). Not having a policy around the use of GPTs and other AI tools invites employee misuse, which can lead to privacy issues.
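One way such guidelines can be made concrete is to encode the permitted and prohibited uses in a machine-readable form that tooling or a review checklist can consult. The sketch below is only an illustration of that idea; the task categories and policy decisions are made-up placeholders based on the example above, not a recommended policy.

```python
# Illustrative only: a tiny, machine-readable AI-use policy with made-up categories.
AI_USE_POLICY = {
    "social_media_post": "allowed",
    "marketing_copy": "allowed_with_human_review",
    "confidential_email": "prohibited",
    "client_contract": "prohibited",
}


def check_ai_use(task_category: str) -> str:
    """Return the policy decision for a task category, defaulting to escalation."""
    return AI_USE_POLICY.get(task_category, "needs_privacy_team_approval")


for task in ("social_media_post", "confidential_email", "incident_report"):
    print(f"{task}: {check_ai_use(task)}")
```

Unlisted categories fall through to the privacy team rather than being silently allowed, which mirrors the article’s point that the policy needs input from multiple departments.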

In privacy awareness training, privacy professionals can leverage the anthropomorphic elements of AI to show the impact of improperly disclosed information. Nonprivacy staff may not understand why certain data are considered sensitive or why it matters if information is breached, but examples such as Snapchat’s My AI can effectively illustrate what might happen if information such as location data is improperly shared.

Conclusion

Many enterprises leverage human-like AI tools, and many employees rely on them for their work. From a privacy perspective, the anthropomorphization of AI cuts both ways: it can make the consequences of sharing data more tangible, but it can also lull users into oversharing. Having a policy around AI use and ensuring that consumers understand the implications of sharing data with AI can promote more effective and trustworthy AI tools.

Endnotes

1 Creamer, E.; “Authors File a Lawsuit Against OpenAI for Unlawfully ‘Ingesting’ Their Books,” The Guardian, 5 July 2023
2 Mukherjee, S.; G. Vagnoni; “Italy Restores ChatGPT After OpenAI Responds to Regulator,” Reuters, 28 April 2023
3 Merriam-Webster Dictionary, “Anthropomorphize”
4 Xiang, C.; “'He Would Still Be Here': Man Dies by Suicide After Talking With AI Chatbot, Widow Says,” Vice, 30 March 2023
5 Ibid.
6 OpenAI, “Enterprise Privacy at OpenAI”
7 Claburn, T.; “Funnily Enough, AI Models Must Follow Privacy Law—Including Right to Be Forgotten,” The Register, 13 July 2023
8 Vogels, E.; R. Gelles-Watnick; N. Massarat; “Teens, Social Media and Technology 2022,” Pew Research Center, 10 August 2022
9 Snapchat, “How Do I Unpin or Remove My AI From My Chat Feed?”
10 Rudy, M.; “Teens Are Turning to Snapchat's 'My AI' for Mental Health Support—Which Doctors Warn Against,” Fox News, 5 May 2023

Safia Kazi, CSX-F, CIPT

Is a privacy professional practices principal at ISACA®. In this role, she focuses on the development of ISACA’s privacy-related resources, including books, white papers and review manuals. Kazi has worked at ISACA for 9 years, previously working on the ISACA® Journal and developing the award-winning ISACA Podcast. In 2021, she was a recipient of the AM&P Network’s Emerging Leader award, which recognizes innovative association publishing professionals under the age of 35.