Is ChatGPT Safe to Use in the Private and Third Sector?
Published on: 13/01/2026
Article Authors: The main content of this article was provided by the following authors.
Barry Phillips Chairperson, Legal Island

Barry Phillips BEM founded Legal Island in 1998. He is a qualified barrister, trainer, coach and mediator, and a regular speaker both at home and abroad. He also volunteers as a mentor to aspiring law students on the Migrant Leaders Programme.

Barry has trained hundreds of HR professionals on how to use GenAI in the workplace and is the author of the book “ChatGPT in HR – A Practical Guide for Employers and HR Professionals”.

Barry is an Ironman and lists the Russian language and wild camping among his favourite pastimes.


This week Barry Phillips asks whether the data security concerns around ChatGPT are really justified.

Transcript:

Hello Humans!

And welcome to the podcast that aims to summarise in five minutes or less each week a key AI issue relevant to the world of HR.

Today we're tackling a question that's causing sleepless nights in boardrooms across the country: Is ChatGPT safe to use in the private and third sector?

Picture this: In one corner, we have your CEO, practically vibrating with excitement about AI's potential, imagining productivity gains and competitive advantages. In the other corner, your data protection officer is clutching the GDPR handbook like it's a life raft, mentally calculating the seven-figure fines that could sink the organisation. It's a digital tug of war, and frankly, both sides have a point.

This isn't just happening in your organisation. It's playing out everywhere, particularly in the private sector. Business leaders are eyeing the commercial prize with the enthusiasm of a prospector during the gold rush, while compliance teams are wondering if they're the only adults in the room. And yes, it does feel rather sudden, doesn't it? One minute we're all happily using spellcheck, and the next, artificial intelligence is offering to write our board papers.

So let's talk about what's keeping the data protection people up at night.

First, there's the transparency issue, or rather, the lack of it. We still don't fully know what's under ChatGPT's hood. It's a bit like buying a sports car and being told, "Don't worry about the engine, just enjoy the ride." That doesn't exactly inspire confidence when you're responsible for sensitive data.

Second, processing happens off-premises. Your data leaves the building and travels through the digital ether to OpenAI's servers. The risk? Well, data in transit can potentially end up in the wrong hands. Some say it's the digital equivalent of sending a memo via carrier pigeon and hoping for the best.

Then there's the company age factor. OpenAI is relatively young compared to, say, Microsoft, which has been around since dinosaurs roamed Silicon Valley. Some would argue that means less track record, less proven stability.

And there’s another concern: some versions of ChatGPT don't come with the training function automatically disabled. That means unless your employees know to turn it off, anything they type could potentially be used to train the model. And let's be honest, many staff members still don't grasp why this matters. They're typing in customer names, project details and draft strategy plans without a second thought.

But wait. Before we all retreat to typewriters and filing cabinets, let's hear the counterarguments.

ChatGPT has been live for more than three years now. In that time, there hasn't been one reported major data loss from OpenAI. Not one. That's actually quite impressive when you consider how many organisations have suffered breaches using far more "traditional" systems.

Yes, processing happens off-premises, but your data is encrypted both in transit and at rest to rigorous industry standards. We're talking about encryption so robust that even MI5 would struggle to crack it. Your data isn't just wandering around the internet in a t-shirt and flip-flops.

And about that data loss risk? It drops to negligible, essentially zero, when properly trained staff know to disable the training function and understand never to enter personal data or commercially sensitive information into the prompt bar. It's less about the technology being dangerous and more about ensuring people know how to use it safely.

Here's another interesting point: to date, there hasn't been a lawsuit where a user's data has been reproduced in a ChatGPT answer causing them loss or damage. Given how litigious our world is, that silence speaks volumes.

Finally, OpenAI employs some of the best data and cyber security professionals in the world. These aren't amateurs playing with code in a garage. They know what they're doing.

So where does this leave us?

It's about finding balance. The data protection team would probably prefer if data never moved at all, ideally sealed in amber for all eternity. Meanwhile, the business leader is already mentally spending the profits from AI-driven innovation. Neither extreme is practical.

The sensible middle ground involves robust training, clear policies, proper use of enterprise versions with appropriate safeguards, and ongoing dialogue between compliance and innovation teams. It means treating AI tools like ChatGPT with the respect they deserve: powerful, useful, but requiring responsible handling.

Because here's the truth: in today's competitive landscape, ignoring AI isn't really an option. But neither is being reckless with data. The organisations that will thrive are those that can walk this tightrope, turning that tug of war into a collaborative dance.

Until next time, keep innovating, keep protecting, and for goodness' sake, turn off that training function.

Until next week, bye for now.

Disclaimer: The information in this article is provided as part of Legal Island's Employment Law Hub. We regret we are not able to respond to requests for specific legal or HR queries and recommend that professional advice is obtained before relying on information supplied anywhere within this article. This article is correct as of 13/01/2026.