How to Better Support Your MSP Clients With AI Tools
Artificial intelligence is here to stay, whether we like it or not. And as managed service providers (MSPs), we’ve probably already anticipated the changes that AI tools will bring to business and life.
However, you may not have given much thought to what AI will actually mean for your clients’ businesses. As their tech partner, your clients look to you as the expert, even if you’re not totally confident in much outside of Microsoft Copilot.
And of course, there are myriad ways to use AI; your clients are most likely already using it without realising. As their adviser, you have a responsibility to make sure any clients wanting to use AI for business, even in a small way, know how to do so responsibly.
Let’s take a look at just a few of the key areas that you need to focus on when it comes to AI tools and apps for your MSP’s clients.
Consider the Cybersecurity Risks of AI Tools
Cybersecurity is a priority for most MSPs, and especially those who are transitioning to be an MSSP (managed security service provider).
And as cybercriminals are able to gain access to networks through emails from the ‘CEO’, legitimate-looking links and more, it’s perhaps unsurprising that AI is a target too.
The UK’s National Cyber Security Centre (NCSC) recently conducted a study looking at the short-term risks of AI and cyber threats. They found that while AI can actually be a positive tool for rapid threat detection, it is also vulnerable.
So in the coming year or so, the greatest risk comes from AI evolving and enhancing existing tactics, techniques and procedures (TTPs). Already, cyber threat actors, even those who aren’t especially skilled, can employ AI from anywhere in the world.
As a tech partner, what do MSPs need to know? Well, as the criminals access high-quality data and improve their skills, they will employ more sophisticated techniques from 2025 onwards.
More concerning for now is that AI offers what the NCSC terms ‘capability uplift’ in both social engineering and reconnaissance. This makes both more efficient and effective, and also harder to detect.
Now would be a good time to start educating your clients on this, and look at how you could offer awareness training. If you don’t have the capacity to do that yourself, perhaps you can partner with another MSP who does?
Using AI for Content Creation Can Lead to Accidental Plagiarism
As a writer, I know first-hand how busy people are and that they struggle to make time for content marketing. The arrival of AI tools like ChatGPT has made it much quicker and easier for rushed business owners to put together a blog post.
Aside from the issues around the quality of the output, and how much it sounds like you, there’s a real risk that your MSP client could be accused of plagiarism. And that could cause reputational damage.
ChatGPT is built on a large language model (LLM), which generates text in response to a prompt. Ask it a question (the prompt) and the tool will write whatever you ask for. As well as blogs, it can produce emails, course outlines or poetry.
However, these tools don’t come up with this output on their own. An LLM is trained on vast amounts of text that is already out there on the internet. If you asked it, “How do I make a Christmas pudding?”, for instance, the answer it presents is pieced together from material that has already been published.
So there is a real risk of reproducing someone else’s content, even if you didn’t mean to. And there are ethical questions being raised too: if a machine-learning tool wrote the text for you, you may not have stolen someone else’s work, but you haven’t written it either.
The good news is that there are plagiarism-detection tools that will scan a piece of text for you. If someone else’s work appears in it, you can remove it, cite it as research, or summarise it in your own words.
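To make the idea concrete, here is a minimal sketch of how a basic duplicate-text check works under the hood. It is not a real plagiarism scanner; it simply measures how many word 5-grams an AI-written draft shares with one known source, and the example texts and the 30% threshold are hypothetical.

```python
def ngrams(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """Lowercase the text, split on whitespace, and return its word n-grams."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(draft: str, source: str, n: int = 5) -> float:
    """Fraction of the draft's n-grams that also appear in the source."""
    draft_grams = ngrams(draft, n)
    if not draft_grams:
        return 0.0
    return len(draft_grams & ngrams(source, n)) / len(draft_grams)

# Hypothetical example: an AI-generated draft vs. an already-published recipe.
draft = "steam the pudding for six hours then leave it to mature in a cool dark place"
source = "steam the pudding for six hours then leave it to mature before serving"

ratio = overlap_ratio(draft, source)
if ratio > 0.3:  # arbitrary threshold for this sketch
    print(f"Possible copied passages: {ratio:.0%} n-gram overlap")
```

Commercial tools compare against billions of indexed pages rather than one source, but the principle of flagging long runs of identical wording is the same.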
Are the AI Tools Giving You Accurate Information?
If your clients are using ChatGPT to save time in their marketing efforts, are they reading the output? Too often we see badly written blogs published on a website. More worrying still, if you don’t check the output before you share it, it could be factually incorrect.
Where did the tool find the information? Can you verify the source document? The tech sector moves fast, and data that was relevant and up to date one day might be meaningless the next.
By relying on the AI tool to do the research and the work for you, you’re handing over quality control. Encourage your MSP clients to read through blog posts to make sure they don’t present information from unreliable or discredited sources as fact. Again, think about the reputational impact.
Likewise, if they’re using AI for automation, has everything been set up correctly? Are there safeguards in place to prevent mistakes? For instance, if a tool produces reports for clients, is the underlying data up to date? A process for regularly reviewing AI tools is a good idea.
Be Aware of Regulated Industry Requirements
Depending on the industry your clients work in, they may need to be selective about their AI tools. Any company dealing with sensitive information, or holding government contracts, will be bound by specific legislation.
As we’ve seen, cyber criminals use AI for nefarious purposes. Your clients may not be aware of the risks, or even realise that they’re not supposed to use a specific tool. Smaller companies are always the most vulnerable, so if your client is an SMB or in a supply chain with one, take preventative measures.
Your clients also need to consider what information they allow the AI tools to access. There are GDPR rules around sharing and storing personal data, for instance, and breaching those can lead to large fines and other sanctions.
Do they hold proprietary information, whether their own or that of their own clients? How can this be safeguarded from AI tools accessing networks? It’s important to be transparent with customers and suppliers if AI is being used somewhere within a business.
And AI tools aren’t infallible! They can make mistakes (‘hallucinations’) and things can go wrong. So far there haven’t been any final court decisions about AI errors, but it’s unlikely that a judge would accept blaming AI for negligence. So check and recheck the work that these types of tools do for you!
What about you? Do your MSP’s clients ask you for help with choosing and implementing AI tools? Do you have a strategy to support them? Let us know in the comments!