How to Help Your MSP Clients Stay Safe When Using AI Tools
In the last post, we looked at how to support your MSP clients with AI tools. We considered some of the main pitfalls to be aware of and what impact they could have on a business.
This time, we’ll explore the data security aspects of artificial intelligence. How can you keep your MSP clients safe, and how do you communicate the importance of security to them?
Offer to Help with the Installation of AI Tools
If you know that your clients are keen to use AI solutions, make sure that they’re using them safely. And you don’t have to be an expert in the tool to help them, either!
In the same way that you’d encourage clients to only click on trusted links or install software from reputable sites, help them apply the same caution to AI tools. If they’re Microsoft users, assist them with getting Microsoft Copilot set up.
And if they’re looking for tools to make content creation quicker and easier, point them towards options like Whisper Transcribe or ChatGPT. If there are payment or licensing requirements, can you manage those for them?
We know that business owners are using AI in all sorts of ways to make life easier, but they may not think through the consequences of signing up for the latest shiny object. As a responsible IT partner, you should help keep them safe.
Communicate Clearly with Your Clients About the Risks
As with cybersecurity and data compliance, we know that people don’t like to be lectured about keeping themselves safe while using tech. But if you’ve already got a good relationship with your clients, you should be able to broach the subject with them.
Perhaps you could make it part of your regular review meeting with them. Ask if they’re considering an AI tool, or if they’re already using one, how it’s working out. They might not want to admit that they’re struggling or concerned about what access the tool has, so make it part of a chat, not a lecture.
And it’s a good idea to ask them to check with their teams, too. While your client may not have introduced AI tools to the business yet, that doesn’t mean a staff member isn’t using one to automate a task and save themselves some time!
Provide AI Security Awareness Training
Many MSPs are looking at ways to offer cybersecurity awareness training, as it becomes more of a priority for their clients. This can include phishing simulations, online videos or in-person training sessions.
Over time, as more people adopt AI tools, it will become more important than ever to offer security awareness training. Companies such as KnowBe4 offer AI-powered training. They say that this “enables organisations to reduce risk faster, better and more efficiently.”
Or, if you’re not in a position to offer the training yourself, look for someone you can partner with. This is where it’s handy to be part of a peer group – ask around for anyone offering MSSP (managed security services provider) services. They may have a good referral for you, or be able to deliver the training themselves.
And if your clients aren’t keen to invest time and money into awareness training, there are plenty of articles on the dangers of AI tools being exploited by cyber criminals that you can share with them.
For example, this article from October 2023, featured in Cyber Magazine, highlights the risks clearly. AI is being used to craft ever-more-convincing phishing emails and to create malware.
More worryingly, criminals are stealing login credentials for tools such as ChatGPT, using them to access older, unprotected devices and compromise data. These passwords are sold on the Dark Web, where they command higher prices than private email credentials due to the sensitive nature of the data they can expose.
Improve Overall Security as Protection Against AI Tool Threats
As an MSP, you’re probably using some kind of remote monitoring and management (RMM) tool on your clients’ networks. A good RMM not only looks at endpoints such as workstations, but also smartphones and network devices.
Because it proactively highlights any issues on a client’s systems and alerts you in real time, you can often fix problems or prevent breaches without the customer even knowing anything happened.
So if you’ve got a client who you think is at greater risk of hacking or data compromise because of AI tools, now is a good time to help them to improve and update their security systems.
And the added advantage is that, while they may not have taken you up on the service in the past, the rise in cyber attacks, particularly those involving compromised AI tools, might be enough to persuade them now.
Make Sure AI Tools Comply with Legislation
In a recent interview, GDPR and compliance expert Robert Baugh of Keepabl discussed the challenges facing business owners who use AI tools. In particular, he highlights the importance of governance – ensuring that you meet your legal, ethical, moral and contractual obligations.
And Robert goes on to explain the importance of a data map. This logs every piece of information you have on past and current employees, as well as anyone you’ve interviewed for a role.
When it comes to AI, he says, companies are in such a rush to adopt it that they’re not giving enough thought to privacy. Depending on how you use a tool, anything you input could be passed on to another party – and that’s a serious breach.
Another problem is that, because of the way some AI tools work, there’s a risk of bias in their outputs. Depending on the data a tool has been fed, this could lead to discrimination or harassment – and put the business in breach of the GDPR.
What about you? How do you talk to your clients about AI security measures? Have you made any changes to the services you provide them to help keep them safe? Let us know in the comments!