ChatGPT and Its Threat to Cybersecurity 

While ChatGPT has been the internet's favorite new toy since its release late last year, it has also worried those who work in cybersecurity. Concerns range from its college essay writing ability to Stack Overflow's recent ban on ChatGPT-generated answers to coding questions. ChatGPT has also demonstrated that it can produce content useful for creating malware and phishing emails. The circumstances in which this works are limited at present, but given that artificial intelligence (AI) chatbots are still in their infancy, there are potentially serious long-term implications.

What is ChatGPT?

ChatGPT is a natural language processing tool driven by AI technology that lets you hold human-like conversations with a chatbot. The language model can answer questions and assist you with tasks such as writing emails, essays, and code. Usage is currently open to the public free of charge while ChatGPT is in its research and feedback-collection phase.

According to OpenAI, the language model was trained using Reinforcement Learning from Human Feedback (RLHF). Human AI trainers provided the model with conversations in which they played both parts: the user and the AI assistant.

Lensa AI, another online program that creates digital images from text, was the previous online AI tool to make a big splash. Many enthusiastically embraced the results; however, members of the digital art community were unhappy that their work, used to train these models without their consent, is now being turned against them.

Malware

OpenAI has included some fairly rigorous safeguards that, in theory, prevent ChatGPT from being used for malicious purposes. This is done by filtering content for phrases that suggest someone is attempting such misuse. These parameters stop it from doing straightforwardly malicious things, like detailing how to build a bomb or writing malicious code. However, multiple researchers have found ways to bypass those protections.
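To make the idea of phrase filtering concrete, here is a deliberately naive sketch of that kind of safeguard. This is purely illustrative: OpenAI's actual safeguards are far more sophisticated (combining model-level training with classifiers), and the blocked-phrase list below is invented for demonstration.

```python
# Illustrative sketch of a naive phrase-based content filter.
# The phrase list is hypothetical, not OpenAI's actual rules.
BLOCKED_PHRASES = [
    "write ransomware",
    "build a bomb",
    "write malicious code",
]

def is_blocked(prompt: str) -> bool:
    """Flag a prompt if it contains any blocked phrase (case-insensitive)."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in BLOCKED_PHRASES)

print(is_blocked("Please write ransomware for macOS"))  # True
print(is_blocked("Explain how file encryption works"))  # False
```

A filter this simple is trivially evaded by rephrasing a request, which helps explain why the prompt-rewording tricks researchers have reported are effective against keyword-style checks.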

Developers have tried various ways to bypass the security protocols and have succeeded in getting the desired output. If a prompt explains the steps of writing the malware in enough detail, rather than asking for it directly, the bot will answer the prompt, effectively constructing malware on demand.

Considering that criminal groups already offer malware-as-a-service, an AI program such as ChatGPT may soon make it quicker and easier for attackers to launch cyberattacks using AI-generated code. ChatGPT gives even less experienced attackers the power to write functional malware code, something that previously only experts could do.

Dr. Suleyman Ozarslan, a security researcher and co-founder of Picus Security, said he was able to get the program to perform a number of offensive and defensive cybersecurity tasks, including the creation of a World Cup-themed email in “perfect English” as well as generate both Sigma detection rules to spot cybersecurity anomalies and evasion code that can bypass detection rules. Ozarslan was also able to trick the program into writing ransomware for Mac operating systems, despite specific terms of use that prohibit the practice.
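Sigma, the rule format Ozarslan had ChatGPT generate, is an open, YAML-based standard for describing log-based detections in a vendor-neutral way: a rule names the fields and values that, when matched in a log event, should raise an alert. The sketch below is a hypothetical simplification (real Sigma rules support richer conditions and are converted into SIEM queries); it shows how a Sigma-style selection with field modifiers might be matched against log events. The rule and events are invented for illustration.

```python
# Hypothetical, simplified Sigma-style rule: every condition in
# "selection" must match a log event for the rule to fire.
# Field names loosely follow Sysmon process-creation events.
rule = {
    "title": "Suspicious PowerShell Download",
    "selection": {
        "Image|endswith": "\\powershell.exe",
        "CommandLine|contains": "DownloadString",
    },
}

def field_matches(event: dict, key: str, expected: str) -> bool:
    """Apply a Sigma-style field modifier (contains/endswith) to one field."""
    field, _, modifier = key.partition("|")
    actual = event.get(field, "")
    if modifier == "contains":
        return expected in actual
    if modifier == "endswith":
        return actual.endswith(expected)
    return actual == expected  # no modifier: exact match

def rule_matches(event: dict, rule: dict) -> bool:
    """An event triggers the rule when every selection condition holds."""
    return all(field_matches(event, k, v) for k, v in rule["selection"].items())

suspicious = {
    "Image": "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe",
    "CommandLine": "powershell -c (New-Object Net.WebClient).DownloadString('http://x')",
}
benign = {"Image": "C:\\Windows\\explorer.exe", "CommandLine": "explorer.exe"}

print(rule_matches(suspicious, rule))  # True
print(rule_matches(benign, rule))      # False
```

The dual-use tension Ozarslan describes is visible even here: the same field-and-value logic that lets a defender spot an attack tells an attacker exactly which command-line strings to avoid, which is why ChatGPT could produce both the detection rule and code to evade it.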

Other cybersecurity experts have pointed out the potential to make it faster and easier for experienced hackers and scammers to carry out cybercrimes, if they could figure out the right questions to ask the bot.

Phishing

Security experts believe that ChatGPT’s ability to write legitimate-sounding phishing emails, the most popular delivery vehicle for ransomware, will see the chatbot widely embraced by cybercriminals, particularly those who are not native English speakers. Chester Wisniewski, a principal research scientist at Sophos, said it’s easy to see ChatGPT being abused for “all sorts of social engineering attacks” where the perpetrators want to appear to write in more convincing American English.

“At a basic level, I have been able to write some great phishing lures with it, and I expect it could be utilized to have more realistic interactive conversations for business email compromise and even attacks over Facebook Messenger, WhatsApp, or other chat apps,” Wisniewski told TechCrunch. A malicious actor can add any number of variations to the prompt, such as “make the email look urgent,” “write an email with a high likelihood of recipients clicking on the link,” or “write a social engineering email requesting a wire transfer.”

Cybersecurity Defense 

Researchers have been able to leverage the program to unlock capabilities that could make life easier for both cybersecurity defenders and malicious hackers. The dual-use nature of the program has spurred comparisons to programs like Cobalt Strike and Metasploit, which function as legitimate penetration-testing and adversary-simulation software while also serving as some of the most popular tools for real cybercriminals and malicious hacking groups to break into victim systems.

Monitoring Remote Sessions

With more employees working from home, companies are seeking ways of monitoring remote sessions. One compelling case can be made for recording remote sessions for later playback and review. Employers are concerned that in the event of a security breach, they won’t be able to see what was happening on users’ desktops when the breach occurred. Another reason for recording remote sessions is to maintain compliance, as required for medical and financial institutions, or to audit adherence to business protocols.

TSFactory’s RecordTS v6 will record Windows remote sessions reliably and securely for RDS, Citrix and VMware systems. Scalable from small offices with one server to enterprise networks with tens of thousands of desktops and servers, RecordTS integrates seamlessly with the native environment.

Click here to learn more about secure remote session recording.

Sources

https://cyberscoop.com/chatgpt-ai-malware/

https://www.darkreading.com/omdia/chatgpt-artificial-intelligence-an-upcoming-cybersecurity-threat-

https://www.forbes.com/sites/bernardmarr/2023/01/25/how-dangerous-are-chatgpt-and-natural-language-technology-for-cybersecurity/?sh=5b82b96f4aa6

https://www.scmagazine.com/analysis/emerging-technology/how-chatgpt-is-changing-the-way-cybersecurity-practitioners-look-at-the-potential-of-ai

https://www.cbc.ca/news/science/chatgpt-cybercriminals-warning-1.6710854

https://www.cpomagazine.com/cyber-security/could-ai-chatbots-become-a-security-risk-chatgpt-demonstrates-ability-to-find-vulnerabilities-in-smart-contracts-write-malicious-code/

https://techcrunch.com/2023/01/11/chatgpt-cybersecurity-threat/

https://www.zdnet.com/article/what-is-chatgpt-and-why-does-it-matter-heres-everything-you-need-to-know/

https://www.tsfactory.com/forums/blogs/7-different-types-of-hackers-black-white-red-beyond/