ChatGPT & How to Secure Your Business Against AI-Supercharged Ransomware
Cyberattacks are on the rise, and they are only likely to increase, in part because cybercriminals have a powerful new weapon in their arsenal. AI chatbots like ChatGPT enable cybercriminals to mount more sophisticated attacks: they no longer need specialist knowledge or programming skills to write the complex code that gets them access to your sensitive data, because the chatbot can do all of that for them.
In this webinar from security experts Acronis, James Slaby, Director of Cyber Protection, and Irina Artioli, Cyber Protection Evangelist, help us understand the threat ChatGPT poses and provide a 12-step checklist for building a better ransomware defence.
ChatGPT – What is it?
Since the release of ChatGPT on 30th November 2022, the threat landscape has seen a sharp rise in cyberattacks, and that rise shows no sign of slowing.
OpenAI’s popular AI chatbot has over 100 million registered users, having reached its first million within just five days of launch.
ChatGPT is a natural language processing tool driven by AI that gives much more human-like responses to the questions put to it. One of the things it does well is help users understand complex topics and produce workable examples, and it can do so in seconds.
This is what makes it such a powerful tool, and why businesses should be concerned.
However, it does get things wrong, and it is limited by biases in the data it was trained on. It is also not open source and, for example, has no direct access to the internet.
Webinar: ChatGPT: Defending Your Business Against AI-Supercharged Ransomware
Hosted by Acronis
Speakers:
- James Slaby, Director of Cyber Protection, Acronis
- Irina Artioli, Cyber Protection Evangelist, Acronis
The webinar consisted of three sections:
- An overview of the current threat landscape
- A closer look at the projected criminal uses of new AI tools
- A 12-step checklist for building a ransomware defence strategy
The session closed with a live Q&A with the audience.
Watch the on-demand replay, ChatGPT: Defending Your Business Against AI-Supercharged Ransomware, below.
Webinar Replay: ChatGPT: Defending Your Business Against AI-Supercharged Ransomware
Overview of the Current Threat Landscape
James opened the webinar with an overview of recent trends in cybersecurity.
The key points James raised included the following:
- Ransomware is growing across every sector worldwide, affecting businesses of all sizes
- According to the US Justice Department, 75% of ransomware attacks target small and medium-sized businesses
- Email phishing remains the number one attack vector (90% of successful attacks are via email)
- The move to remote working exposed vulnerabilities in remote-access systems and collaboration tools
- Software supply chain attacks are making us scrutinise our vendors’ software roll-outs more closely
- Weak or misconfigured cloud services and exposed APIs are leaving businesses vulnerable
- Ransomware extortion is getting more sophisticated: double or triple extortion tactics may be employed, with sensitive data leaked online or passed to customers if a further ransom isn’t paid
The insider threat, where cybercriminal groups bribe or recruit your staff for backdoor access to your systems, has also become more common in recent years.
Ransomware attacks peaked back in 2020, but their numbers have been rising steadily again since then.
How ChatGPT Will Impact the Cyber Threat Landscape
So far, ChatGPT has not produced any significant new attack techniques. However, existing threats will improve in terms of automation, refinement, scale and speed.
Millions of workers could be added to the cybercriminal labour pool because no significant tech skills will be required to get started.
Phishing attempts will be more difficult to spot because chatbot technology can write fluently in many languages. Likewise, malware code will be easier to write for those who are new to launching ransomware attacks, and ChatGPT will also make it easier to probe source code for vulnerabilities.
How Can We Respond to These Threats?
There are steps we can take in the short term as well as longer-term strategies, and both are equally important.
In the short term, we can rework security awareness training, leverage existing AI and machine learning, and improve our business continuity and disaster recovery responses. We have to recognise the increased likelihood of a successful attack and think in terms of ‘not if, but when’.
In the longer term, we should invest in new AI/ML technology and skills, and in new defensive capabilities such as AI-enabled Data Loss Prevention (DLP). We also need to improve routine tasks to reduce vulnerabilities.
A Closer Look at the Projected Criminal Uses of New AI Tools
Irina Artioli led a deeper look at how cybercriminals are already adopting AI and machine learning in their operations.
Smart attack automation with AI/ML allows cybercriminals to scale their operations globally rather than being limited to their own geographical region.
Automating their scripts increases the rate of their attacks and speeds up their responses, in much the same way that legitimate businesses use process automation.
Once they have achieved success, they can automate the follow-ups and repeat the process elsewhere.
What Can Cybercriminals Ask of ChatGPT?
Examples of the requests cybercriminals may be putting to ChatGPT include:
“Please write a powershell script that encrypts all files on a computer when executed.”
“Write a minified JavaScript that detects credit card numbers along with their expiration date, CVV, billing address and forward it to…”
“Show me an example of a phishing email that appears to come from a bank in the United Kingdom.”
There are other ways that cybercriminals can use ChatGPT to their advantage. For example, selling fake premium access to ChatGPT, or to a ChatGPT app (which doesn’t currently exist), lets them harvest your credit card details, and such scams may also involve tricking you into downloading an executable payload that gives them access to your system.
AI Can Also Be Used for Defence
Irina explained how artificial intelligence can be used for defensive purposes too: for example, to help you formulate a cybersecurity incident response plan, especially if your business is short on expertise or resources. It can also help you spot anomalies in phishing emails and create some simple detection rules, along the lines of the sketch below.
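As a purely illustrative example, the following Python sketch shows the kind of simple, rule-based phishing triage a defender might prototype with a chatbot’s help. The indicator lists, scores and sample message are assumptions made for the example, not an Acronis detection rule.

```python
# A minimal, rule-based phishing triage sketch. The indicator lists and
# scores are illustrative assumptions, not a tested detection policy.
import re

URGENCY_PHRASES = ("verify your account", "account suspended", "act now", "unusual activity")
SUSPICIOUS_TLDS = (".zip", ".top", ".xyz")

def phishing_score(sender: str, subject: str, body: str) -> int:
    """Return a crude risk score; the higher it is, the more indicators matched."""
    score = 0
    text = f"{subject} {body}".lower()

    # Urgency language is a classic social-engineering tell
    score += sum(phrase in text for phrase in URGENCY_PHRASES)

    # Sender domain using a TLD rarely seen in legitimate corporate mail
    match = re.search(r"@([\w.-]+)>?\s*$", sender)
    if match and match.group(1).lower().endswith(SUSPICIOUS_TLDS):
        score += 2

    # HTML links whose visible text is itself a URL (often used to disguise the real target)
    if re.search(r'href="https?://[^"]+"[^>]*>\s*https?://', body, re.IGNORECASE):
        score += 2

    return score

print(phishing_score("IT Support <helpdesk@example.xyz>",
                     "Unusual activity on your account",
                     "Please verify your account now."))  # prints 4
```

A rule like this would sit alongside, not replace, the ML-based filtering already built into modern email security tools.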
It is said that it takes an average of 277 days to identify and contain a data breach, and that this can be as much as 33% faster when AI and machine learning are employed.
AI and machine learning can look for common characteristics that fit a known pattern. They can also perform behavioural analysis, watching for access or process requests from unexpected places, and, if required, take the necessary steps to block unwanted access.
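To make the behavioural-analysis idea concrete, here is a minimal sketch using only the Python standard library. The event fields and the definition of ‘unexpected’ are assumptions for illustration; real products use far richer models and telemetry.

```python
# A minimal behavioural-analysis sketch: flag a login that comes from a country,
# or at an hour of day, that this user has never been seen at before.
from collections import Counter
from datetime import datetime

def is_anomalous(event: dict, history: list) -> bool:
    """Return True if the login event deviates from the user's past behaviour."""
    countries = Counter(e["country"] for e in history)
    hours = Counter(datetime.fromisoformat(e["timestamp"]).hour for e in history)

    new_country = countries[event["country"]] == 0
    new_hour = hours[datetime.fromisoformat(event["timestamp"]).hour] == 0
    return new_country or new_hour

# A user who normally logs in from the UK during office hours...
history = [{"country": "GB", "timestamp": f"2023-03-0{day}T09:15:00"} for day in range(1, 8)]

# ...suddenly appears from a new country at 03:40, so the event is flagged
print(is_anomalous({"country": "RU", "timestamp": "2023-03-08T03:40:00"}, history))  # True
```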
Artificial Intelligence and Machine Learning Automation for Defence
When it comes to employing AI/ML automation for defence, it can help in three ways:
- Better Detection: Reduce alert flood, find anomalies and adapt to new attack patterns with a localized self-learning AI
- Better Response: Transfer expert knowledge into the AI model, collect and index more data to find the best mitigation strategy – reacting faster, automatically
- Automation: Reduce risks, errors and complexity, and automatically classify data while reducing manual input
AI excels at processing large volumes of data quickly and executing automation logic on the results.
Though it is not a ‘silver bullet’, AI still offers the best results in spotting and reacting to most cyberattacks.
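To give one concrete flavour of the ‘automatically classify data’ point above, here is a minimal sketch of rule-based file classification feeding a DLP-style policy. The keywords, pattern, labels and directory are assumptions made for the example, not how any particular product classifies data.

```python
# A minimal data-classification sketch for a DLP-style policy. The keywords,
# pattern and labels are illustrative assumptions, not a product's ruleset.
import re
from pathlib import Path

SENSITIVE_KEYWORDS = ("confidential", "passport number", "salary")
# UK National Insurance number, e.g. "QQ 12 34 56 C" (simplified pattern)
NI_PATTERN = re.compile(r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b")

def classify(path: Path) -> str:
    """Return a coarse sensitivity label for a text file."""
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return "unreadable"
    lowered = text.lower()
    if NI_PATTERN.search(text) or any(k in lowered for k in SENSITIVE_KEYWORDS):
        return "restricted"
    return "internal"

# Walk a (hypothetical) shared drive and print each file's label
for file in Path("./shared-drive").rglob("*.txt"):
    print(file, "->", classify(file))
```

In a real deployment the labels would drive policy actions, such as blocking ‘restricted’ files from leaving the network, rather than simply being printed.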
12-Step Plan to Enhance Cyber Protection and Improve Recovery Against Ransomware Attacks
Acronis recommend implementing a 12-step checklist for building a ransomware defence strategy. It is based on good practice and will give peace of mind to your clients, who are entrusting you to keep their systems and data as safe as possible.
- Deploy behavioural anti-malware measures to complement legacy signature-based anti-virus
- Update countermeasures like email security and URL filtering
- Deploy tools that increase your visibility of IT resources and dataflows
- Eliminate external and internal network exposures, including web applications
- Be vigilant in managing passwords and access rights
- Build a security awareness training programme that includes regular updates
- Implement automated, programmatic vulnerability scanning and patch management (a minimal sketch follows this checklist)
- Reduce the number of agents on endpoints and consoles in your operations centre
- Take advantage of security frameworks like NIST or ISO 27001
- Implement a robust data protection regimen
- Consider implementing a disaster recovery and business continuity programme
- Build an incident response plan (and print it out in case of a total system blackout)
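On the automated vulnerability scanning and patch management point above, the sketch below shows a minimal scheduled dependency check. The advisory data is hypothetical and hard-coded purely for illustration; a real check would pull advisories from a vendor feed or a public advisory database.

```python
# A minimal sketch of an automated dependency check, illustrating the
# vulnerability-scanning step in the checklist. The advisory data below is
# hypothetical; a real check would use a vendor or public advisory feed.
from importlib.metadata import distributions

HYPOTHETICAL_ADVISORIES = {
    # package name -> highest affected version (illustrative values only)
    "requests": "2.25.0",
    "pyyaml": "5.3.1",
}

def parse(version: str) -> tuple:
    """Turn '2.25.0' into (2, 25, 0) for a simple numeric comparison."""
    return tuple(int(part) for part in version.split(".") if part.isdigit())

def scan_installed_packages() -> list:
    """Flag installed packages whose version is at or below a known-bad release."""
    findings = []
    for dist in distributions():
        name = (dist.metadata["Name"] or "").lower()
        affected = HYPOTHETICAL_ADVISORIES.get(name)
        if affected and parse(dist.version) <= parse(affected):
            findings.append(f"{name} {dist.version} (advisory covers <= {affected})")
    return findings

if __name__ == "__main__":
    for finding in scan_installed_packages():
        print("PATCH NEEDED:", finding)
```

In practice a job like this would run on a schedule and feed its findings into the patch-management workflow rather than just printing them.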
You can also find this 12-step plan as a whitepaper on the Acronis website.
Live Audience Q&A
The webinar finished with a Q&A session with the live audience.
Q: I understand the dataset that ChatGPT is using is from before 2022. How is this limiting attackers and helping detection?
A: As it’s early days, it’s difficult to assess whether not being up-to-date will have much of an impact on what cybercriminals can do with it going forward.
Q: Do you think that a certificate in cybersecurity fundamentals should be in every office job resume nowadays?
A: It’s always good to ensure your people are security qualified. It’s just as important, though, to have the kind of people who may not be formally qualified but who display the hacker’s problem-solving mindset.
Q: Doesn’t ChatGPT have guardrails that prevent malicious use?
A: Technically that is true, but it has been jailbroken using the DAN (‘do anything now’) prompt. The guardrails may be strengthened in the future by OpenAI and Bing, but criminals will continually strive for workarounds.
Conclusion
When it comes to ChatGPT and AI chatbots in general, it seems that Pandora’s box has already been opened: the technology is out there now for cybercriminals to use. The best we can do is respond to it better, which we can do in two ways:
- With a robust defensive security strategy that takes into account more sophisticated attacks, and
- By using AI and machine learning to provide a smarter, more efficient response to these attacks.
Reaction time and preparation matters, as does having a multi-layered approach to cyber protection.
Are you an MSP that offers enhanced cyber protection, and are you recommending a similar enhancement programme to your clients?
Will ChatGPT keep you up at night? Or do you think it’s an opportunity for you to sell increased protection?
We would love to hear about it in the comments.