The year 2022 ended with OpenAI’s ChatGPT going viral. Every technology enthusiast, including children, was trying to find out whether this new AI-based technology (or “language model AI”, as it calls itself) really had all the capabilities everyone was talking about, such as writing an essay for a school competition!

As a cybersecurity professional, I have always been paranoid about using new technology that may collect personal data. My curiosity won over the paranoia, though, and I decided to chat with ChatGPT itself to find out whether it is a threat to data privacy.

I started by asking if it collects any personal data.

“I am a language model AI, and I cannot collect any personal data. I work based on the input given to me, and I generate responses based on the patterns in the input data I’ve been trained on.”

That sounds relatively harmless; even to create an account, it collects nothing more than a name, email address, and phone number.

Let’s consider a few scenarios:

Scenario 1: An organization uses ChatGPT to build a bot that conducts interviews and recommends whether a candidate should be selected for the next round (a sketch of such a bot follows the scenarios below). This would be considered ‘automated decision-making’ under many regulations, including the GDPR. And that automated decision could very well be biased. After all, the model works based on the inputs given to it.

Scenario 2: A healthcare organization uses it to answer FAQs about common health conditions and medical information.

Scenario 3: A financial services organization uses a ChatGPT-based bot to guide investments based on a person’s age, income details, and bank statements from the last five years.
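To make Scenario 1 concrete, here is a minimal sketch of what such an interview-screening bot might look like, written against OpenAI’s Python library. The model name, the system prompt, and the ADVANCE/REJECT convention are all my own illustrative assumptions, not anyone’s actual implementation:

```python
# Hypothetical sketch of the Scenario 1 interview screener.
# Assumes the openai Python package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def screen_candidate(transcript: str) -> str:
    """Ask the model whether a candidate should advance to the next round."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name, purely illustrative
        messages=[
            {
                "role": "system",
                "content": (
                    "You are an interview screener. Based on the transcript, "
                    "answer ADVANCE or REJECT with a one-line reason."
                ),
            },
            {"role": "user", "content": transcript},
        ],
    )
    # The model's free-text answer becomes the hiring signal directly:
    # this is precisely the 'automated decision-making' regulators worry about.
    return response.choices[0].message.content

print(screen_candidate("Candidate described five years of Python experience..."))
```

Note that nothing in this sketch audits the decision for bias or records a justification a human could review, which is exactly the gap the GDPR’s rules on automated decision-making are meant to close.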

ChatGPT displays a warning about not sharing any sensitive information in your conversations. However, users may still ignore this warning and provide such information in the hope of getting more accurate advice.
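One partial safeguard is to scrub obvious identifiers on the client side before a prompt ever leaves the user’s machine. Below is a minimal sketch of that idea; the regex patterns are illustrative assumptions and would miss many real-world formats:

```python
# Minimal client-side redaction sketch: replace obvious identifiers with
# labelled placeholders before sending text to any model API.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US-style only, as an example
}

def redact(text: str) -> str:
    """Replace each matched pattern with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Reach me at jane.doe@example.com or +1 555 123 4567."))
# -> Reach me at [EMAIL REDACTED] or [PHONE REDACTED].
```

Redaction of this kind is a mitigation, not a guarantee: free text can carry identifying detail that no pattern list will catch, which is why the warning, and user discipline, still matter.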

Where do we go from here?

The potential use cases for a technology such as ChatGPT are virtually limitless. However, people who interact with ChatGPT-based solutions may not be aware of all the potential uses of their data, let alone have consented to them.

In addition, there is a security risk: ChatGPT could enable script kiddies to write malware!

It is too early to grasp all such use cases and to decide who will be responsible and accountable for safeguarding the personal data and rights of data subjects: will it be the organizations using ChatGPT to provide services, or the creators of the technology?

Artificial intelligence-based solutions will help solve many problems. But will regulations catch up with such technologies and address their possible misuse?

Only time will tell! Or, as ChatGPT would say, “Please keep in mind that this is a general overview and may not consider everything related to the subject. Consult someone with techno-legal expertise for an accurate answer.”