AI Tools Are Collecting Your Data. Is That Data Safe?

Your practice has started using AI tools. Maybe it is a transcription service that summarises patient notes, a drafting assistant for client correspondence, or an automated scheduling system. These tools save time, and that is genuinely useful. But behind the scenes, they are collecting, processing, and storing significant volumes of sensitive information - and most businesses have not stopped to ask who else can access it.

A 2023 study found that more than three-quarters of businesses using AI tools had experienced some form of AI-related security incident in the previous year. That figure covers everything from minor data exposures to serious breaches involving confidential client or patient records. For a healthcare practice or law firm, where privacy obligations under the NZ Privacy Act 2020 are significant, that is not a statistic to ignore.

The reason AI tools create new security risks is not complicated. These systems need large amounts of data to function. That data - client records, financial details, correspondence, clinical notes - is valuable, and it flows through infrastructure that many businesses have never properly evaluated. AI models can also be difficult to audit. Unlike a standard database, it is not always clear what data an AI tool is storing, where it is stored, or who has access to it. That opacity is exactly what attackers look for.

There is also the question of how staff are using these tools. Employees often adopt AI assistants quickly and informally, without guidance on what data is safe to put into them. A team member pasting a client's personal details into an AI tool connected to offshore servers may not realise the privacy implications. Good cybersecurity for professional services businesses now has to account for AI tool use as a specific risk category, not just an afterthought.

When AI-related security is handled well, it looks like this: your business has a clear picture of which AI tools are in use, what data each one touches, and what each vendor's security and data residency commitments are. Staff know what is and is not appropriate to put into these tools. Access is controlled so that only the right people can reach sensitive information. And your IT setup is monitored for unusual activity, so if something does go wrong it is caught early, rather than weeks later when the damage is already done. For any practice manager, understanding how AI can cause problems at work is a useful starting point.

If you are not sure whether your current IT setup accounts for AI tool risks, that is a reasonable place to start. A managed IT provider who understands the professional services environment can review what tools your team is using, identify where your data exposure sits, and put sensible controls in place. To see how this works in practice, our professional services case study outlines the kind of outcome a structured review can deliver. If an incident does occur, CERT NZ is the right place to report it, and the Office of the Privacy Commissioner will need to be notified if personal information is involved.

ITstuffed works with professional services businesses across Canterbury on exactly this kind of review. If you want a clear picture of where your AI tool use creates risk, a 15-minute IT Fit Check is a good starting point.