
How to Use ChatGPT at Work Without It Becoming a Problem

Your team has started using ChatGPT. Some are using it to draft emails, others to summarise long documents, a few to brainstorm. Nobody asked permission, and nobody set any rules. That is pretty much how it has gone in most Canterbury businesses over the past year or two - and it is worth getting ahead of before something goes wrong.

The main risk is not that AI is inherently dangerous. It is that without clear boundaries, staff will use it in ways that create real problems. Client information pasted into a public AI tool does not stay private. Outputs used without review can be wrong, biased, or just off-brand. And if your business works in a regulated environment - legal, health, finance - an AI-generated document that nobody properly checked could cause serious harm. The NZ Privacy Act 2020 puts obligations on your business around how client data is handled. Feeding identifying details into a third-party AI platform is not a grey area.

The businesses getting value from ChatGPT treat it like a capable but junior assistant that needs supervision. They have decided which tasks it is approved for - drafting, brainstorming, summarising - and which tasks it is not. Staff know they cannot paste in client names, case details, or anything that would be sensitive in any other context. Every output gets a human check before it goes anywhere. That last point matters more than people think. ChatGPT produces confident-sounding text that is sometimes wrong. It is good at sounding authoritative. It is not good at knowing when it is making things up. A broader look at the ways AI can cause problems at work is worth reading alongside this.

Transparency is also worth thinking about. If you are using AI to help draft communications that go to clients, a simple line in your policies or your email footer covers you. Something like: "We use AI tools to assist with drafting, and we always review and fact-check the output." Most clients are fine with that. What they are not fine with is discovering it later without being told. Handling data carefully also means thinking about whether your cloud storage is actually safe for the information your team works with every day.

Getting this sorted is not a big project. You need a short, practical policy that tells staff what they can and cannot do with AI tools, a conversation about data handling, and some basic monitoring to see whether the tools are actually saving time or just creating more editing work. If you have a managed IT arrangement, this is exactly the kind of thing to work through with your engineer - it sits at the intersection of security, compliance, and productivity. Many of the same staff habits that create AI risk also show up in security awareness gaps that NZ businesses commonly overlook. If you are not sure whether your current setup handles any of this, the IT support page for professional services businesses is a reasonable starting point.

ITstuffed works with professional services businesses across Canterbury on exactly these kinds of practical questions. If you want a quick read on where your IT setup actually stands, book a 15-minute IT Fit Check at itstuffed.co.nz/booking.
