10 Ways Cybercriminals Can Abuse Large Language Models

leadership team img1

By Michelle Drolet

Founder & CEO

Ms. Drolet is responsible for all aspects of business for Towerwall. She has more than 24 years of,

Read More

Large language models (LLMs) like ChatGPT and Google Bard have taken the world by storm. While these generative AI programs are incredibly versatile and can be implemented in a wide range of productive business use cases for the good, there is also a potential downside for LLMs to empower threat actors, adversaries and cybercriminals with more widespread and dangerous capabilities.

Here are 10 different ways in which LLMs can be abused.

1. Social Engineering And Phishing

ChatGPT enables threat actors to overcome one of their biggest weaknesses, namely communication skills. Using LLMs, threat actors can craft highly personalized and persuasive phishing campaigns free from grammatical errors, in many languages or dialects.

Attackers can generate highly convincing spear-phishing emails, texts or fraudulent content to trick victims into revealing passwords, downloading malware or visiting a malicious website. Although ChatGPT has some built-in controls to prevent this kind of misuse, experiments have revealed that anyone can circumvent these guardrails easily.

2. Malware Obfuscation

LLMs aid cybercriminals in obfuscating malware code, making it harder for cybersecurity systems to detect malware. For instance, researchers at CyberArk demonstrated that it was possible to mutate malware code repeatedly (creating multiple variations of the same code) simply by querying the chatbot. This allowed them to create a highly evasive polymorphic program difficult to detect by most security systems.

3. Misinformation And Propaganda

Cornell University showed that LLMs can be programmed to generate and spread false information, conspiracy theories and propaganda. Threat actors can exploit LLMs’ text generation capabilities and program it in such a way that it spins out an adversarial chosen sentiment or point of view.

For example, imagine an army of bots on social media that can write authoritatively on a variety of subjects but suddenly produce biased content or hate speech when triggered by flashpoint events, specific keywords or topics.

4. Amplification Of Biases

LLMs learn from the data they are trained on, which can include biased or unbalanced information. So, if threat actors somehow trick LLMs to consume poisoned data, then they may inadvertently produce biased outputs, reinforce stereotypes (on race, gender, ideology or religion) or exhibit discriminatory behavior. If such biases are not detected in time, LLMs can perpetuate or amplify societal biases and inequalities.

5. Malicious Content Generation

Threat actors can leverage LLMs for the creation of large volumes of content for malicious purposes. For instance, LLMs can produce hundreds and thousands of websites that go undetected and spread quickly, and they can generate fake reviews, social media comments, forum posts, fraudulent product listings, advertisements, etc., all at an industrial scale.

6. Prompt Injection And Manipulation

By providing carefully crafted inputs, LLMs can be tricked into generating malicious content, bypassing security measures or providing inaccurate information. For example, using a technique known as “indirect prompt injection,” a security researcher was able to manipulate Microsoft Bing Chat to pose as a Microsoft employee and request credit card information from users.

7. Data Leaks And Data Privacy

LLMs process large amounts of data that include user-generated prompts and inputs. These queries are usually stored by developers and used for advancing the LLM model. If employees upload sensitive data and confidential information into the model, then this data could be hacked, leaked or accidentally exposed by the LLM.

For this very reason, companies such as Samsung, Amazon and JPMorgan have reportedly barred employees from uploading confidential information, including code, to LLMs. Italy was the first EU member to suspend access to ChatGPT on fears of it violating GDPR privacy rules. Not surprisingly, certain countries have banned it.

8. Reconnaissance

Reconnaissance is an initial phase of an attack where cybercriminals gather information about a target system, individual or organization before launching an attack. Using LLMs, adversaries can easily enhance or accelerate their data collection process (an otherwise extremely cumbersome and time-consuming process) by simply asking the program to gather information about their targets—such as the people who work there, the systems they use, the events they take part in and the companies they are associated with.

9. Vulnerability Hunting And Code Deconstruction

Cybercriminals can leverage LLMs to hunt for vulnerabilities in the victim’s software, machine or environment. Instead of going through large batches of code manually, line by line, attackers can simply query to deconstruct code and examine or highlight the weaknesses and vulnerabilities that might be present in it (such as hard-coded credentials and weak password hashing). Case in point, a smart contract auditing firm was able to identify weaknesses in its code simply by querying ChatGPT.

10. Online Harassment And Trolling

Another potential abuse of LLMs is that they can be used to harass, troll, bully or extort individuals, leading to psychological distress, anxiety or depression. Predators can deploy automated bots that interact with unsuspecting users, win their trust and build a relationship with victims (romance scams) before making inappropriate requests or threats.

Europol recently sounded the alarm on the potential exploitation of AI and LLMs by cybercriminals. Called the Artificial Intelligence Act, the EU is working on a legal framework for the regulation of AI. But it is not just the duty of regulators to control the abuse of LLMs.

Businesses must actively evaluate these risks and work to plan and draft security policies, ensuring that a broad security strategy is enforced—one that includes content moderation, user awareness training, threat detection and response and up-to-date security tools to defend against AI-related risks.

 

This article was originally posted on Forbes.com >