Cheddar IT
Book a call1300 757 632
Security

The Hidden Risk of Employees Using AI Tools at Work

Staff are already pasting company data into ChatGPT. Here's how to set a sensible AI policy without killing the productivity wins.

7 min read

Artificial intelligence tools are rapidly becoming integrated into everyday business workflows. Platforms like ChatGPT, Microsoft Copilot, Gemini and Claude are helping employees draft emails, analyse spreadsheets, generate reports, and automate routine tasks. In most of the businesses we support, adoption is happening from the ground up — and IT often only finds out weeks or months later.

That uncontrolled adoption has a name: Shadow AI. And it’s introducing a new class of risk that didn’t exist in most security policies two years ago.

What is “Shadow AI”?

Shadow AI happens when employees use AI tools without approval or oversight from IT. Staff may use these platforms to:

  • Draft emails, proposals or documents
  • Analyse client spreadsheets or financials
  • Summarise meeting recordings
  • Write or review code
  • Generate reports or presentations

The productivity benefit is real. The problem is that employees may unknowingly paste confidential business information — client records, financial data, contract terms, source code — into external platforms where the organisation loses visibility and control.

Why sensitive data exposure is the biggest risk

The most common Shadow AI incident pattern we see is unintentional data leakage. An employee pastes a client document, an internal email thread, a pricing model, or a set of staff names and salaries into an AI tool to “help me summarise this” or “rewrite this more professionally.”

Once submitted to an external AI platform, that information is:

  • Stored on third-party servers outside your control
  • Potentially used to train future model versions (depending on the service’s settings)
  • Possibly reviewed by the provider’s staff for quality and safety purposes
  • Processed across international jurisdictions, complicating privacy compliance

For businesses with obligations under the Australian Privacy Act, APRA CPS 234, or industry-specific regimes like healthcare or legal privilege, this is a material issue — and one your auditor will eventually ask about.

What these tools actually do with your data

Not all AI platforms operate identically, and the defaults matter enormously.

Consumer versions of AI tools (the free or personal-tier offerings) are the riskiest. Many use submitted content for model improvement by default. Some retain prompts for extended periods. Few provide audit logs of what’s been submitted.

Enterprise versions of the same tools — Microsoft 365 Copilot, ChatGPT Enterprise, Google Workspace with Gemini — typically offer:

  • No training on customer data
  • Data residency controls
  • Audit logging and admin visibility
  • Integration with existing identity and access policies

The gap between “Brenda in accounts signed up for ChatGPT with her work email” and “the business has an enterprise Copilot tenant with DLP policies” is enormous from a compliance standpoint.

Security risks beyond data leakage

Data exposure is the headline risk, but not the only one.

Unverified AI applications. Employees sign up to new AI platforms with company email addresses daily. Not all of them are what they appear to be — some exist specifically to harvest credentials or business data from curious staff.

AI-generated code risks. Developers using AI coding assistants may unknowingly introduce security vulnerabilities, licence violations, or hallucinated library dependencies into production code. Reviewed code is fine; unreviewed code is increasingly a supply chain risk.

Prompt injection attacks. Attackers design malicious prompts hidden in emails, documents or web pages that, when processed by an AI tool, cause it to act against the user’s interest — for example, extracting and sending sensitive data.

Phishing and social engineering at scale. The same AI tools used by staff are used by attackers to generate convincing phishing emails in volume, in any language.

Why you need an AI usage policy

As AI adoption continues to accelerate, clear internal guidelines help organisations:

  • Define which AI tools are approved and which are not
  • Prevent sensitive data from being shared with external platforms
  • Establish security guidelines and approval workflows for new tools
  • Educate employees on what’s safe and what isn’t
  • Maintain compliance with privacy and industry regulations

The worst position to be in is the one most businesses are currently in: no policy, widespread adoption, and no visibility into what’s been shared.

Practical steps to manage AI in your workplace

1. Establish an AI usage policy. A short, practical document that names approved tools, prohibits specific data categories (client PII, financial records, source code, intellectual property), and describes acceptable use cases. Keep it under two pages.

2. Educate employees. Security awareness training needs an AI module now. Staff generally don’t realise that “paste into ChatGPT” is functionally the same risk profile as “send to a random third party.”

3. Standardise on an enterprise AI solution. Microsoft 365 Copilot integrates with your existing tenant, honours existing permissions, and doesn’t train on your data. For most of our clients it’s the right default. Providing a sanctioned, capable tool is the best way to prevent Shadow AI.

4. Monitor application usage. IT teams should monitor cloud application access to identify unapproved AI services being used across the business. Microsoft Defender for Cloud Apps and similar CASB tooling surface this quickly.

5. Review regularly. AI capabilities evolve monthly. Review your policy and approved tool list at least quarterly.

The bottom line

AI can meaningfully improve business productivity and innovation. Without governance, it also quietly exposes sensitive information, creates compliance issues, and expands the cybersecurity attack surface.

The businesses doing this well aren’t the ones banning AI outright — that just drives it further underground. They’re the ones providing a sanctioned, capable, enterprise-grade AI tool, backed by a clear policy and regular training. That way staff get the productivity benefit, and the business keeps control of its data.

If you’d like help setting up a Microsoft 365 Copilot rollout, drafting an AI usage policy, or assessing your current Shadow AI exposure, get in touch. We’ve guided a number of Australian businesses through exactly this transition over the past year.