The adoption of AI tools and products is growing at a steady pace. Today, companies are rolling out AI chatbots to serve almost every user need: there are chatbots that can write an essay, role-play as your partner, remind you to brush your teeth, take notes during meetings, and more. Most of these tools are built on large language models (LLMs) such as the ones behind OpenAI’s ChatGPT.
However, the LLMs being developed and deployed everywhere could threaten users’ privacy, as they are trained on vast amounts of data scraped indiscriminately from whatever is available online. Yet many users remain unaware of the privacy and data protection risks that come with LLMs and other generative AI tools.
Over 70 per cent of users interact with AI tools without fully understanding the dangers of sharing personal information, according to a recent survey. It also found that at least 38 per cent of users unknowingly revealed sensitive details to AI tools, putting themselves at risk of identity theft and fraud.
Feeding in the right prompts could also cause LLMs to “regurgitate” personal user data, as they are likely trained on data pulled from every corner of the internet.
Beware of social media trends
Recently, a trend that went viral on social media urged users to ask an AI chatbot to “Describe my personality based on what you know about me”. Users were further encouraged to share sensitive details such as their birth date, hobbies, or workplace. This information can be pieced together and used for identity theft or account-recovery scams.
Risky Prompt: “I was born on December 15th and love cycling—what does that say about me?”
Safer Prompt: “What might a December birthday suggest about someone’s personality?”
Do not share identifiable personal data
According to experts from TRG Datacenters, users should frame their queries or prompts to AI chatbots more broadly to protect their privacy.
Risky Prompt: “I was born on November 15th—what does that say about me?”
Safer Prompt: “What are traits of someone born in late autumn?”
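The same advice can be automated. Below is a minimal sketch, in Python, of a prompt “scrubber” that swaps common identifiable details for neutral placeholders before a query is sent to a chatbot. The `scrub` function, the pattern list, and the placeholder names are all hypothetical and illustrative, not an exhaustive PII filter:

```python
import re

# Months used to spot explicit birth dates such as "December 15th".
MONTHS = ("January|February|March|April|May|June|July|August|"
          "September|October|November|December")

# Illustrative (not exhaustive) PII patterns and their placeholders.
PII_PATTERNS = [
    (re.compile(rf"\b(?:{MONTHS})\s+\d{{1,2}}(?:st|nd|rd|th)?\b"), "[DATE]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+(?:\.[\w-]+)+\b"), "[EMAIL]"),
    (re.compile(r"\$\d[\d,]*(?:\.\d+)?\b"), "[AMOUNT]"),
]

def scrub(prompt: str) -> str:
    """Replace matched identifiable spans with neutral placeholders."""
    for pattern, placeholder in PII_PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(scrub("I was born on December 15th and I save $500 per month."))
# → "I was born on [DATE] and I save [AMOUNT] per month."
```

A real deployment would need far broader coverage (names, addresses, phone numbers, health terms), but even a simple pass like this shows the principle behind the experts’ advice: strip the specifics, keep the question.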
Avoid disclosing sensitive information about your children
Parents can unintentionally share sensitive details such as their child’s name, school, or daily routine while interacting with an AI chatbot. This information can be exploited to target children.
Risky Prompt: “What can I plan for my 8-year-old at XYZ School this weekend?”
Safer Prompt: “What are fun activities for young children on weekends?”
Never share financial details
Over 32 per cent of identity theft cases stem from online data sharing, including financial information, according to a report by the US Federal Trade Commission (FTC).
Risky Prompt: “I save $500 per month. How much should I allocate to a trip?”
Safer Prompt: “What are the best strategies for saving for a vacation?”
Refrain from sharing personal health information
Since health data is frequently exploited in data breaches, avoid sharing personal medical histories or genetic risks with AI chatbots:
Risky Prompt: “My family has a history of [condition]; am I at risk?”
Safer Prompt: “What are common symptoms of [condition]?”