

Despite all of the allegedly good things you can do with OpenAI’s new chatbot, you also need to be aware of the ways it could be used by people with malicious intent. This last week, concerns about the risks of generative AI reached a new high. OpenAI CEO Sam Altman even testified at a US Senate Judiciary Committee hearing to address risks and the future of AI.
Meanwhile, a new study has identified six different security risks involving the use of ChatGPT.
These risks include the potential for bad actors to use ChatGPT for fraudulent services generation, harmful information gathering, private data disclosure, malicious text generation, malicious computer code generation, and the production of offensive content.
Information gathering
A person acting with malicious intent can gather information from ChatGPT that they can later use for harm. Since the chatbot has been trained on copious amounts of data, it knows a lot of information that could be weaponised if put into the wrong hands.
In the study, ChatGPT is prompted to divulge what IT system a specific bank uses. The chatbot, using publicly available information, rounds up different IT systems that the bank in question uses. This is just one example of a malicious actor using ChatGPT to find information that could enable them to cause harm.
“This could be used to aid in the first step of a cyberattack when the attacker is gathering information about the target to find where and how to attack the most effectively,” said the study.
Malicious text

One of ChatGPT’s most beloved features is its ability to generate text that can be used