What You Need to Know About ChatGPT

OpenAI’s ChatGPT has divided opinion. Some believe it will revolutionize how people live and work, while others are concerned about the disruption it could cause.

Some countries have even temporarily prohibited its use to protect user data. There have been cases where sensitive information was leaked, and employees have landed in hot water after entering confidential company data into the chatbot.

What does ChatGPT do with your data? And how can you use the tool securely?

What data sources does ChatGPT use?

ChatGPT is an artificial intelligence (AI) tool powered by machine learning (ML), meaning it uses an ML algorithm to understand user prompts and respond in a conversational way. It has been “trained” using vast amounts of data scraped from the internet, including some 570GB of articles, books, Wikipedia entries, and other online content.

This amount of data allows it to write essays, debug and create code, solve complex math equations, and even translate languages.

As a natural-language processing tool, it uses probability to answer questions: it predicts the next word of a sentence based on the millions of examples it was trained on. The information it provides may therefore be incomplete or out of date. Since most of its training data was collected before 2021, it cannot provide information about events from the past two years.
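The idea of predicting the next word by probability can be illustrated with a toy example. This is a minimal sketch, not ChatGPT’s actual architecture (which uses a large transformer network): a simple bigram model that counts, in a tiny made-up corpus, which word most often follows each word, and predicts accordingly.

```python
from collections import Counter, defaultdict

# A tiny, made-up training corpus (illustrative only).
corpus = (
    "the cat sat on the mat "
    "the cat sat on the rug "
    "the cat chased the dog"
).split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
print(predict_next("sat"))  # "on" always follows "sat" here
```

Real language models work on the same statistical principle, but predict over sub-word tokens using billions of learned parameters rather than raw counts.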

How safe is ChatGPT?

ChatGPT saves the prompts you enter and uses them as training data for its algorithms. The bot can retain and use that data even if you delete your conversations. This is dangerous if a user enters sensitive information about themselves or their company that malicious parties could exploit.

ChatGPT also stores your IP address, payment information, and device details (although most websites store this information for analytical purposes, so it’s not exclusive to ChatGPT).

Researchers have expressed concern about OpenAI’s data collection methods because the scraped data may contain copyrighted material. In an article for The Conversation, Uri Gal (Professor in Business Information Systems, University of Sydney) called ChatGPT a “privacy nightmare.” He stated, “If anyone has ever posted a product review or blog or commented on a piece of online content, it’s likely this information was consumed.”

ChatGPT and Cybersecurity

ChatGPT is also being used to create malware. Its ability to write code allows attackers to generate malicious software, build dark-web pages, and carry out attacks.

In a recent CSS Hub advisory meeting, members discussed how ChatGPT is being used to create highly sophisticated phishing attacks. Attackers use it to polish their language, since poor grammar and spelling are often telltale signs of a phishing attempt. It was also revealed that malicious actors use ChatGPT to better understand the psychology and motivations of their intended targets, helping them craft more effective phishing attempts.

In March 2023, more than 1,000 AI experts, including OpenAI co-founders Elon Musk and Sam Altman, called for a halt to the development of major generative AI systems for at least six months to give researchers time to better understand and mitigate their risks.

How many data breaches have there been?

OpenAI confirmed that in March 2023, a bug in the chatbot’s source code may have caused a data leak that gave certain users access to parts of another active user’s conversation history. The bug may also have exposed payment information belonging to 1.2 percent of ChatGPT Plus subscribers active during a specific time frame.

OpenAI released a statement stating that they believed the number of users who had their data revealed to be “extremely small,” as it would have required them to have opened a subscription email or clicked on certain functions during a specified timeframe. ChatGPT, however, was taken offline for several hours as the bug was fixed.

Samsung experienced three separate incidents in which confidential company information was entered into the chatbot: Samsung source code, a transcript of an internal company meeting, and a sequence for identifying defective chips. These incidents led to disciplinary proceedings.

As far as we are aware, the data has not been leaked. However, as noted above, all information entered by Samsung employees is stored to train the algorithm. The proprietary information entered is therefore now, theoretically, available to anyone who uses the platform.

What does OpenAI have to say about security?

OpenAI claims to conduct annual testing to identify security vulnerabilities before they can be exploited maliciously. It also runs a bug bounty program, inviting researchers and ethical hackers to probe the system for security vulnerabilities in exchange for cash rewards.
