Security Stop-Press: A Third Of Staff Hide AI Usage From Employers

Nearly a third of office staff hide their AI usage at work. Staff who secretly use AI tools could be risking data breaches, compliance failures, and loss of intellectual property.

Ivanti’s latest Technology at Work report is grounded in a comprehensive survey of over 6,000 office workers and 1,200 IT and cybersecurity professionals. It offers a broad perspective on current workplace trends.

Key Findings on Shadow AI Usage

Prevalence of Generative AI (GenAI) Use: The report indicates that 42% of employees are now using GenAI tools at work, a significant increase from 26% in 2024.

Undisclosed AI Usage: Approximately 32% of these employees keep their AI use hidden from employers. Reasons include:

- viewing it as a “secret advantage” (36%)
- fear of job cuts due to AI efficiency (30%)
- experiencing AI-fuelled impostor syndrome (27%).

IT Professionals and Unauthorized Tools: Even among IT professionals, 38% admit to using unauthorized AI tools, highlighting the widespread nature of shadow AI across different organizational levels.

Ivanti’s report is here

Red Flags across the IT industry

This covert use of AI, dubbed ‘shadow AI’, is raising red flags across the industry. As Ivanti’s legal chief Brooke Johnson warns: “Employees adopting this technology without proper guidelines or approval could be fuelling threat actors”.

Whilst Ivanti’s report is US-based, it appears to be a similar story here in the UK. A separate study by Veritas back in 2023 found the UK topped the list of countries most at risk from staff hiding AI usage. Over a third of UK staff surveyed admitted feeding sensitive data into chatbots, often unaware of the potential consequences. The public sector was the worst offender, while IT, tech and telecoms companies and business professional services firms (who perhaps ought to know better) came in sixth and seventh respectively!

Read more on Veritas’s website here

Several major firms, including Apple, Samsung and JP Morgan, have already restricted workplace AI use following accidental leaks, but Ivanti warns that policy alone isn’t enough: businesses must assume shadow AI is already happening and act accordingly.

How to reduce risk

To reduce the risk, companies should enforce clear AI policies, educate staff, and monitor real-world usage. Without visibility and oversight, AI could turn from a very useful productivity tool into a security liability.
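As a starting point for the monitoring step above, one simple approach is to scan outbound proxy logs for traffic to known GenAI services. The sketch below is purely illustrative: the domain list is a small hypothetical sample (not an exhaustive or official blocklist), and the log format is an assumed simplified one.

```python
# Minimal sketch: flag outbound requests to GenAI services in a proxy log.
# The domain list and the "timestamp user domain" log format are
# illustrative assumptions, not a definitive implementation.

GENAI_DOMAINS = {
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
    "copilot.microsoft.com",
}

def flag_genai_requests(log_lines):
    """Return (user, domain) pairs for requests that hit a GenAI domain."""
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip malformed lines
        _, user, domain = parts[:3]
        if domain.lower() in GENAI_DOMAINS:
            hits.append((user, domain))
    return hits

sample = [
    "2025-06-01T09:14:02 alice chatgpt.com",
    "2025-06-01T09:15:10 bob intranet.example.co.uk",
    "2025-06-01T09:16:45 carol claude.ai",
]
print(flag_genai_requests(sample))
# [('alice', 'chatgpt.com'), ('carol', 'claude.ai')]
```

In practice this kind of check would sit alongside, not replace, a clear written policy and staff education, since log monitoring only catches usage on company networks.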

It’s time to tighten up now, before shadow AI usage becomes embedded behaviour.

Talk to us now about how to limit the risk to your sensitive or confidential business data.
