
Privacy at risk: understand what ChatGPT’s ‘granny mode’ is and how it works

Since its creation, ChatGPT has proven to be a valuable ally, capable of assisting with a wide range of activities carried out on the internet.

This Artificial Intelligence has stood out as an innovative technology thanks to its ability to personify: it can virtually assume any desired character. This feature has been widely explored and used in a variety of ways, creating a unique and captivating experience for users.



By impersonating characters, ChatGPT enables engaging and immersive interaction. The AI is able to embody the characteristics, personality, and even typical language of certain characters, such as historical figures, celebrities, characters from movies and books, or even fictional beings.

This creates an interactive environment where users can engage with AI in a more playful and emotionally engaging way.
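In practice, this kind of persona is usually set up through a system-level instruction sent alongside the user's message. The sketch below is illustrative only: it assumes the OpenAI Python SDK (v1.x) and a hypothetical model name, and the persona text is an invented example, not the exact prompt from the article.

```python
# Minimal sketch: setting up a role-play persona for a chat model.
# Assumptions: OpenAI Python SDK v1.x; model name "gpt-4o-mini" is illustrative.

def build_persona_messages(persona: str, user_message: str) -> list[dict]:
    """Build a chat message list that asks the model to stay in character."""
    return [
        # The system message establishes the persona for the whole conversation.
        {"role": "system",
         "content": f"You are role-playing as {persona}. Stay in character."},
        # The user message is answered from within that persona.
        {"role": "user", "content": user_message},
    ]

messages = build_persona_messages(
    "a kind grandmother who tells bedtime stories",
    "Tell me a short bedtime story.",
)

# Sending the request requires an API key in the OPENAI_API_KEY env variable:
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
# print(reply.choices[0].message.content)
```

The separation matters: because the persona lives in the system message, every subsequent user turn is interpreted through it, which is exactly what the jailbreak described below exploits.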

‘Pretend to be my grandmother’

One of the most fun and engaging characters that ChatGPT can take on is that of a grandmother. This capability allows the artificial intelligence to generate comforting messages that evoke the personality and affectionate care associated with this beloved family role.

By posing as a grandmother, ChatGPT can use affectionate language and expressions, such as loving advice and nostalgic stories. This adds a layer of humor and emotional closeness, providing users with a satisfying and comforting experience.

The big concern

While the possibilities offered by ChatGPT's impersonation are fascinating, it is important to note that reports have emerged of confidential information being improperly disclosed through this functionality.

These occurrences raise legitimate concerns and suspicions about the possibility of personal information being leaked by the platform, since user security and privacy are fundamental aspects of any online interaction.

One such example was reported in a tweet. According to the user, he had ChatGPT assume the role of a friendly grandmother and then asked her for Windows 10 activation codes. The response was completely unexpected: the chatbot provided at least five activation keys.

And of course, as you might imagine, these keys are meant to be purchased, not handed out for free by an Artificial Intelligence.

In the screenshots taken from ChatGPT, we can see that the user essentially asks the AI to pretend to be his deceased grandmother, who used to tell him bedtime stories.

What were these stories about? Windows 10 keys. In response, the chatbot even laments the loss of the user's grandmother, then offers a list of keys which, judging by the second screenshot, appear to work perfectly.

The story raises legitimate concerns about the trust users can place in this technology. These concerns are compounded by a recent data leak that exposed chat titles belonging to users of the paid version of the service.

Although the issue was resolved at the time, such incidents fuel an environment of growing distrust regarding the platform’s security.
