10 Things I know about … ChatGPT risks


By Michelle Drolet

Founder & CEO, Towerwall

10. Benefits & risks.

Like most tools, large language models (LLMs) such as ChatGPT and Google Bard can be used for good or ill. On the positive side, they can generate creative content, translate languages, and debug software. On the negative side, they can be used to damage reputations, spread misinformation, write malware, and conduct cyberattacks.

9. Phishing at scale.

LLMs can be used to churn out unlimited phishing campaigns, free of grammatical errors and in any language. That polish can make these messages far more difficult to detect.

8. Misinformation.

LLMs can be used to generate and spread false information, conspiracy theories, and hate speech. Such content can be used to manipulate public opinion, sow discord, and undermine trust in government institutions. LLMs are also prone to hallucinations, confidently inventing things that don’t exist.

7. Amplification of biases.

LLMs learn from the data they are trained on, which can include biased or unbalanced information. As a result, they can generate biased outputs, produce malicious content, and reinforce stereotypes.

6. Unreliability.

LLMs can be unreliable at reporting facts, generate error-prone or inaccurate content, and bypass security measures. Because their responses arrive in fluent, human-like sentences, people are inclined to take them at face value. Always verify facts.

5. Data leaks & privacy.

LLMs process vast amounts of data, including user-generated prompts and inputs. Because the service may retain those inputs, sensitive information entered into a prompt could be leaked or accidentally exposed.

4. Reconnaissance.

LLMs can be used to gather information about a target system, individual, or organization. That information could then be used in a social engineering scam, such as a business email compromise attack targeting the CEO.

3. Vulnerability hunting.

LLMs can be used to hunt for software vulnerabilities, which makes them useful for white-hat hacking. The same findings, however, could be used to exploit organizations.

2. Online harassment.

LLMs can be used to harass, bully, or extort individuals.

1. User beware.

The potential for abuse is a serious concern. Businesses should implement security policies and procedures designed to protect against LLM-related threats.

 

This article was originally posted on the Worcester Business Journal.