Seven Ways AI Can Cause Problems at Work (And What to Do About Them)
Your receptionist is using ChatGPT to draft patient letters. Your office manager is running meeting notes through an AI tool. Someone on the team found a browser extension that summarises documents automatically. Nobody asked IT. Nobody set any rules. And so far, nothing has gone wrong - that you know of.
That last part is the problem. AI tools are easy to access, often free, and genuinely useful. But when staff start using them without any guidance, the risks tend to be invisible until they are not. Here are seven ways AI use at work can go sideways, and what a well-run practice does differently.
The most common issue is wrong information treated as correct. AI tools generate confident-sounding text even when the underlying information is outdated or simply invented - a failure mode usually called hallucination, and a known limitation rather than a rare glitch. In a healthcare or legal context, acting on that output without checking it can create real problems for clients and for the practice.
The second issue is bias. AI learns from data, and if that data reflects historical biases - in hiring, in clinical assumptions, in language - the AI carries those biases forward.

The third is reliability. These tools have outages, change behaviour when their providers update the underlying models, and sometimes produce genuinely strange output. If your team has started depending on AI for parts of their workflow, that dependency becomes a risk the moment the tool fails or changes.
Privacy is where things get serious. When a staff member pastes client notes, case details, or patient information into a free AI tool, that data may be stored, used for training, or accessible to the tool's provider. Under the NZ Privacy Act 2020, your practice has obligations around how personal information is handled - and "I didn't know the tool stored it" is not a defence. If you are handling sensitive client information, this is worth understanding properly. The Office of the Privacy Commissioner has published guidance at privacy.org.nz.
There are also unresolved questions around who owns content created with AI assistance, and whether AI-generated work meets professional standards in regulated industries. These are live questions in legal and healthcare settings in particular. Understanding the threats catching businesses off guard right now can help put AI risks in a broader context.
Finally, there is the effect on your team. AI can quietly shift work that used to involve collaboration into solo tasks. That is not always a bad thing, but it is worth being intentional about. And some staff will need support to learn how to use these tools well - not everyone will adapt at the same pace.
What good looks like is not banning AI - that ship has sailed. It is having a clear, written policy about which tools staff can use, what information can and cannot be entered into them, and who is responsible for checking outputs before they are acted on. It means keeping humans accountable for decisions, even when AI helped draft the thinking behind them. And it means making sure your IT support understands your cybersecurity risks and knows which tools are actually in use across the practice, so they can flag problems before something goes wrong.
If you are not sure what your team is currently using, that is the right place to start. A quick conversation with an engineer who understands professional services will give you a clearer picture than trying to audit it yourself. Security awareness training is the cyber defence most NZ businesses overlook - and AI policy sits in the same category. ITstuffed works with practices across Canterbury on exactly this kind of thing - if you want a 15-minute IT Fit Check to see where you stand, you can book one here.
