
What the Research Actually Says About AI at Work (And What It Means for Your Business)

It is Monday morning and one of your staff mentions they have been using ChatGPT to draft client emails for the past three months. You did not know. You have no policy on it. And now you are wondering who else is doing the same, with what tools, and with whose data.

This is not a hypothetical. The 2024 Work Trend Index, a joint report from Microsoft and LinkedIn, found that 75% of knowledge workers are already using AI at work. Most are not waiting for permission. They are finding tools that make their day easier and getting on with it. For a business handling sensitive client information, that should prompt a proper conversation rather than a blanket ban.

The same report found that AI power users - people who have genuinely integrated AI into how they work - are saving more than 30 minutes a day. Across a team of eight, that adds up quickly. The gains are real. So is the risk of getting there without any guardrails. Employees using personal AI accounts to process client information, documents uploaded to free tools with unclear data retention policies, outputs that look authoritative but have not been checked - these create problems before anyone realises it.

What good looks like is not complicated. It starts with a clear AI use policy - what tools are approved, what kind of work they can be used for, and what should never go near them. Microsoft 365 already includes AI features through Copilot that sit inside your existing environment, with the same security and compliance boundaries your business already relies on. That is a very different proposition from staff using free external tools with unknown data handling. If you want to understand how Microsoft 365 can support this, managed IT support for professional services is a practical starting point.

The report also flags something worth taking seriously: 55% of business leaders are worried about finding staff with the right AI skills. The answer is not to hire differently - it is to train the team you already have. Identify who on your staff is already getting results with AI and have them share what is working. Build simple templates and processes others can follow. The skills gap closes faster from the inside than it does from job ads. It is also worth knowing which specific AI tools suit a professional services office before deciding what to put in front of your team.

The ethical side matters too. If your business is using AI to draft client-facing communications or process personal information, your clients have a reasonable interest in knowing that. The NZ Privacy Act 2020 already places obligations on how personal information is collected, used, and disclosed - AI does not create a carve-out from those obligations. Transparency with clients and clear internal guidelines are not optional extras. They are part of running a trustworthy practice. If you have not yet set boundaries around how staff should use these tools, putting AI rules in place is a sensible next step.

The practical step is to treat AI the same way you would treat any other tool that touches client data. Get visibility on what your team is already using, decide what is acceptable, put it in writing, and make sure the tools you do approve sit within a secure environment. An IT provider who understands your business can help you work through that without turning it into a six-month project.

ITstuffed works with professional services businesses across Canterbury on exactly this kind of thing. If you want a clear picture of where your setup stands, a 15-minute IT Fit Check is a good place to start.
