
How ChatGPT can be used to harm you
ChatGPT is best known for its ability to compose essays, emails, songs, and more. However, this writing ability can be used to create harmful text as well.
Examples of harmful text generation include phishing campaigns, disinformation such as fake news articles, spam, and even impersonation, as outlined by the study.
To test this risk, the study's authors used ChatGPT to create a phishing campaign that notified employees of a fake salary increase, with instructions to open an attached Excel sheet that contained malware. As expected, ChatGPT produced a plausible and believable email.
Malicious code generation
Like ChatGPT's impressive writing abilities, the chatbot's coding abilities have become a handy tool for many. However, the ability to generate code could also be used for harm. ChatGPT can produce working code quickly, allowing attackers to deploy threats faster, even with limited coding knowledge.
In addition, ChatGPT could be used to produce obfuscated code, making it more difficult for security analysts to detect malicious activity and helping malware evade antivirus software, according to the study.
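As a toy illustration of why obfuscation frustrates scanners (this snippet is not from the study), here is the same harmless print statement in plain form and in a base64-encoded form that only materializes at runtime:

```python
import base64

# Plain form: the statement is visible to any reader or signature-based scanner.
print("hello from a plain script")

# Obfuscated form: the same statement is base64-encoded, so its text never
# appears literally in the source, then decoded and executed at runtime.
# (Encoded inline here for readability; truly obfuscated code would ship
# only the encoded string.)
encoded = base64.b64encode(b'print("hello from an obfuscated script")')
exec(base64.b64decode(encoded).decode())
```

A scanner looking for the literal statement would match the first form but not the second, which is the kind of evasion the study warns about.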
In the example, the chatbot refused to generate malicious code outright, but it did agree to generate code that could test a system for a Log4j vulnerability.
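A defensive version of such a test might look like the minimal sketch below, which scans a directory tree for log4j-core jars older than 2.17.1, the release that fully addressed Log4Shell (CVE-2021-44228). The scan root and the filename pattern are illustrative assumptions, not the code the chatbot produced:

```python
import re
from pathlib import Path

# Matches jars named like log4j-core-2.14.1.jar; beta builds and renamed
# jars are out of scope for this sketch.
LOG4J_JAR = re.compile(r"log4j-core-2\.(\d+)\.(\d+)\.jar$")

def find_suspect_jars(root: str) -> list[Path]:
    """Return log4j-core 2.x jars under `root` with versions below 2.17.1."""
    suspects = []
    for jar in Path(root).rglob("log4j-core-*.jar"):
        match = LOG4J_JAR.search(jar.name)
        if match:
            minor, patch = int(match.group(1)), int(match.group(2))
            if (minor, patch) < (17, 1):
                suspects.append(jar)
    return suspects

if __name__ == "__main__":
    for jar in find_suspect_jars("/opt/apps"):  # hypothetical scan root
        print(f"Potentially vulnerable: {jar}")
```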
Producing unethical content
ChatGPT has guardrails in place to prevent the spread of offensive and unethical content. However, if a user is determined enough, there are ways to get ChatGPT to say things that are hurtful and unethical.
For example, the authors in the study were able to bypass the safeguards by placing ChatGPT in "developer mode." There, the chatbot said some negative things about a specific racial group.
Fraudulent services
ChatGPT can be used to assist in the creation of new applications, services, websites, and more. This can be a very positive tool when harnessed for positive outcomes, such as creating your own business or bringing your dream idea to life. However, it can also mean that it is easier than ever to create fraudulent apps and services.
ChatGPT can be exploited by malicious actors to develop programs and platforms that mimic others and provide free access as a means of attracting unsuspecting users. These actors can also use the chatbot to create applications meant to harvest sensitive information or install malware on users' devices.
Private data disclosure
The ChatGPT March 20 outage, which allowed some users to see titles from another user's chat history, is a real-world example of the concerns mentioned above.
ChatGPT has guardrails in place to prevent the sharing of people’s personal information and data. However, the risk of the chatbot inadvertently sharing phone numbers, emails, or other personal details remains a concern, according to the study.

Attackers could also try to extract some portions of the training data using membership inference attacks, according to the study.
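In broad strokes, a membership inference attack scores candidate texts by how confidently the model predicts them: text the model reproduces with unusually low loss is more likely to have appeared in its training data. The sketch below is a conceptual, hypothetical illustration of that loss-thresholding idea; the `sequence_loss` scorer and the cutoff value are assumptions, not details from the study:

```python
from typing import Callable

def likely_training_member(
    text: str,
    sequence_loss: Callable[[str], float],
    threshold: float = 2.0,  # assumed cutoff; real attacks calibrate this
) -> bool:
    """Flag `text` as a probable training-set member when the model's
    average per-token loss on it is suspiciously low."""
    return sequence_loss(text) < threshold

# Usage sketch with a dummy scorer standing in for real model queries.
if __name__ == "__main__":
    dummy_loss = lambda t: 1.2 if "memorized" in t else 4.5
    print(likely_training_member("a memorized passage", dummy_loss))   # True
    print(likely_training_member("freshly written text", dummy_loss))  # False
```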
Another risk of private data disclosure is that ChatGPT can share information about the private lives of public figures, including speculative or harmful content that could damage their reputations.