In the era of generative AI, Shadow AI is spreading everywhere and poses significant risks to businesses. Here is the latest information on the subject.
The term “Shadow AI” refers to the use of ChatGPT or other generative AI tools in a professional context without prior approval from management. In 2023, 30% of employees were already in this situation. Today, the practice concerns 68% of French employees according to a recent study. It creates serious security problems: by using these tools, employees unintentionally transmit potentially sensitive data to third parties.
Key takeaways in 30 seconds
- Shadow AI is the unauthorised use of artificial intelligence tools by employees
- 68% of French employees use AI solutions without informing their management or obtaining prior approval
- Risks include data leaks, GDPR non-compliance and exposure to cyber threats
- 71% of French people are familiar with generative AIs such as OpenAI’s ChatGPT and Microsoft’s Bing
- 44% of respondents use these AIs both personally and professionally
- 50% of US companies say they are updating their internal rules to regulate the use of ChatGPT and put an end to Shadow GPT
- 70% of people surveyed in another Microsoft study say they would gladly delegate repetitive tasks to AI
- 45% of 18–24 year-olds say they use AI, compared to only 18% of people over 35
- 72% of French people feel they do not have sufficient knowledge to use generative AIs
What exactly is Shadow AI?
Shadow AI (or “ghost AI”) is the natural evolution of the Shadow IT we have known for years. In practice, it is the use of artificial intelligence tools and applications by employees without the approval or supervision of the company’s IT department.
This practice takes many forms in organisations. A salesperson using ChatGPT to write commercial proposals, an HR manager using a text generator to draft job offers, or an analyst running sensitive data through unvalidated machine-learning models – all these examples illustrate this growing phenomenon.
The fundamental difference from legitimate AI lies in authorisation and control. While official solutions are integrated into the company’s security and compliance policies, Shadow AI escapes this supervision entirely. This lack of a framework exposes the organisation to considerable risks, particularly in terms of data protection and regulatory compliance.
The statistics speak for themselves: according to the 2024 INRIA-Datacraft study, almost 7 out of 10 employees in France use generative AI tools in their professional environment without informing their hierarchy. A figure that clearly shows the scale of the challenge facing companies.
The hidden risks behind this uncontrolled use
Shadow AI exposes companies to a multitude of dangers that must be taken seriously. The first major risk is exposure to cyber threats. Generative AI models such as GPT or DALL-E can be vulnerable to data poisoning attacks or malicious prompt injections. Cybercriminals exploit these vulnerabilities to manipulate results and compromise system security.
Next come data leaks. When employees send sensitive information through unsecured platforms to train or query AI models, they potentially expose confidential data. We have already mentioned the risks of ChatGPT. But Statoil fell victim as early as 2017 through its use of DeepL (which uses your data to train its algorithms – see below), and in 2023 it was Samsung’s turn with ChatGPT: on three separate occasions, employees transmitted strictly confidential data to OpenAI, including source code for its chips.
The third pitfall is the replication of biases. Generative AI can unintentionally reinforce existing prejudices if its training data is biased. This issue becomes particularly critical in sensitive sectors such as finance, human resources or justice, where automated decisions can have significant consequences for individuals. Once again, nothing new: these training bias issues have long been at the heart of the concerns of developers of recommendation algorithms.
Finally, regulatory non-compliance represents a considerable financial risk. Uncontrolled use of AI tools can lead to breaches of the GDPR or other data protection standards. Fines can reach €20 million or 4% of the company’s global annual turnover, whichever is higher. The bill can be very high!
How to turn this challenge into an opportunity
Rather than playing the policeman, companies have every interest in adopting a constructive approach to regulate Shadow AI. The first step is to establish clear governance frameworks: draw up an ethical charter to guide the use of AI and appoint dedicated officers such as Chief AI Officers (CAIO) to monitor regulatory compliance.
Data security must also be strengthened. CIOs and CISOs must ensure that the platforms used meet the strictest standards for encryption and data storage. The ideal is to favour hybrid-cloud or on-premise solutions and to provide secure AI platforms for automating routine tasks.
Training teams is a crucial investment. A thorough understanding of AI technologies and their risks helps avoid misconfigurations or accidental exposure of sensitive data. Employees need to understand the issues in order to become responsible players in this transformation.
Finally, regular auditing and evaluation of systems is essential. Periodic security checks identify and correct potential vulnerabilities while adapting to emerging threats. This proactive approach avoids many problems.
Despite the positive outlook, the risks have led some companies to ban the use of ChatGPT altogether – starting with Samsung (which is rolling out an internal tool), as well as JPMorgan Chase, Amazon, Verizon and Accenture. What these companies have in common is that they work with large volumes of proprietary data from which they derive a competitive advantage. Using ChatGPT – and more generally any algorithmic tool offering a free version – means transferring user data for training purposes. While the quid pro quo is understandable (a free service in exchange for data), few users are aware of this transfer. In the case of ChatGPT, the lack of consent even led the Italian data protection authority to temporarily ban ChatGPT in Italy. The decision was harsh and sudden, but it had the merit of highlighting OpenAI’s shortcomings.
Statistics and concrete examples of Shadow AI
To better understand the scale of the phenomenon, here are some revealing figures. According to the INRIA-Datacraft study, 68% of French employees use AI tools without informing their management. This proportion rises to 75% among managers and reaches 82% in the technology sector.
The most common uses include content writing (45% of cases), data analysis (32%), automatic translation (28%) and image generation (18%). These figures show that Shadow AI affects all jobs and functions within the company.
On the risk side, incidents are not lacking. In 2024, several companies reported data leaks linked to the uncontrolled use of AI tools. One notable case involved a consulting firm that saw confidential client information exposed after a consultant used a public chatbot to analyse sensitive documents.
In the banking sector, a financial institution discovered that its analysts were using unvalidated machine-learning models to assess credit risk, exposing the institution to potentially biased decisions and regulatory sanctions.
Best practices for mastering Shadow AI
Faced with these challenges, several strategies prove effective. First, adopting a “sandbox” approach allows employees to experiment with AI tools in a secure, controlled environment. This method satisfies their need for innovation while preserving company security.
Creating a catalogue of approved AI tools is also an excellent practice. By offering secure alternatives to public solutions, the company channels usage towards controlled platforms. This naturally goes hand in hand with a clear usage policy defining what is allowed and what is not.
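To make this concrete, here is a minimal sketch of what such a catalogue could look like in code. The tool names, internal domains and data-classification labels are hypothetical examples, not an actual vendor list or real company policy.

```python
# Minimal sketch of an approved-AI-tools catalogue with a simple policy check.
# Tool names, internal domains and data-classification labels are hypothetical.

APPROVED_AI_TOOLS = {
    "internal-assistant": {
        "domain": "assistant.example-corp.internal",
        "data_allowed": {"public", "internal"},
    },
    "translation-gateway": {
        "domain": "translate.example-corp.internal",
        "data_allowed": {"public"},
    },
}

def is_use_allowed(tool: str, data_classification: str) -> bool:
    """Return True if the tool is in the catalogue and may process this class of data."""
    entry = APPROVED_AI_TOOLS.get(tool)
    return entry is not None and data_classification in entry["data_allowed"]

if __name__ == "__main__":
    print(is_use_allowed("internal-assistant", "internal"))   # True
    print(is_use_allowed("public-chatbot", "confidential"))   # False: not in the catalogue
```

In practice, this kind of allowlist usually lives in the company’s proxy, SSO portal or internal developer platform rather than in a standalone script, but the principle is the same: a named list of approved tools, each with the categories of data it may handle.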
Involving business units in AI governance is crucial. Rather than imposing top-down rules, it is better to co-construct policies with end users. This collaborative approach encourages buy-in and understanding of the issues.
Finally, continuous awareness-raising of teams makes all the difference. Regular training sessions, sharing of experience and communication of best practices help create a genuine AI security culture within the organisation.
The future of Shadow AI: towards controlled coexistence
Shadow AI is not a passing fad but a lasting reality in our professional environment. The challenge for companies is therefore to transform this constraint into an opportunity for controlled innovation.
The most advanced organisations are already developing hybrid AI strategies that combine official solutions with supervised experimentation. This approach captures the agility of Shadow AI while maintaining an acceptable level of security.
Organisations that embrace generative AI have also realised that the generational gap needs to be bridged. Adoption of AI depends on age: in 2024, 45% of 18–24 year-olds used it, compared to only 18% of those over 35. It is hardly surprising that 72% of French people feel they do not have sufficient knowledge to use these technologies.
This gap must make companies realise that risks are not evenly distributed among all employees. Some are more “at risk” than others, and awareness efforts will therefore have to be differentiated according to the target audience.
Regulatory developments, particularly the European AI Act, will also structure this field. Companies that anticipate these changes will gain a head start over their competitors.
Ultimately, mastering Shadow AI requires a delicate balance between innovation and security, between freedom of experimentation and risk control. The companies that succeed in this transition will be those that know how to turn their employees into responsible partners in this digital transformation.
Frequently asked questions about Shadow AI
How can I detect Shadow AI in my company?
Several signals can alert you. Monitor unusual billing on company cards, analyse network traffic to public AI platforms, and above all conduct anonymous internal surveys. A simple survey often reveals the extent of hidden usage. Do not hesitate to create a climate of trust so that your employees feel comfortable speaking out without fear of sanctions.
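As an illustration of the network-traffic angle, here is a minimal sketch that counts requests to public AI platforms in a web-proxy log. The CSV log format (with "department" and "destination_host" columns) and the domain list are assumptions made for this example, not a ready-made detection rule.

```python
# Minimal sketch: spot connections to public AI platforms in a web-proxy log.
# The CSV columns ("department", "destination_host") and the domain list are
# illustrative assumptions; adapt them to your own proxy or firewall logs.

import csv
from collections import Counter

PUBLIC_AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com", "claude.ai"}

def shadow_ai_hits(proxy_log_path: str) -> Counter:
    """Count requests to known public AI platforms, aggregated per department."""
    hits = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["destination_host"] in PUBLIC_AI_DOMAINS:
                hits[row["department"]] += 1  # aggregate per team, never name individuals
    return hits

if __name__ == "__main__":
    for department, count in shadow_ai_hits("proxy_log.csv").most_common(5):
        print(department, count)
```

Aggregating at team level rather than per individual keeps this kind of analysis consistent with the climate of trust mentioned above.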
Should I completely ban the use of external AI tools?
A blanket ban risks being counterproductive. Your teams will probably continue to use these tools, but even more discreetly. It is better to adopt a gradual approach: start by identifying uses, assess the risks, then offer secure alternatives. The aim is to support, not punish.
What are the penalties for GDPR non-compliance linked to Shadow AI?
Fines can be particularly heavy! The GDPR provides for sanctions of up to €20 million or 4% of global annual turnover. But beyond the financial aspect, also consider reputational damage. A customer data leak caused by Shadow AI can permanently damage the trust of your partners and customers.
How do I train my teams on the risks of Shadow AI?
Training must be concrete and interactive. Organise practical workshops showing real risks, share secure use cases and create simple guides. The idea is to make people understand that AI security is not a brake but an accelerator of responsible innovation. Also train your managers so that they become effective relays. Experimentation is essential to understand what these tools are good for, and as Wharton professor Ethan Mollick explains, it takes at least 4 hours of work with these tools to begin to understand them.
Are there tools to monitor AI usage in the company?
Absolutely! Several solutions are emerging on the market. You can use network monitoring tools to detect connections to AI platforms, AI-adapted DLP (Data Loss Prevention) solutions, or specialised AI governance platforms. The important thing is to choose tools that respect the privacy of your employees while ensuring the security of the company.
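To give an idea of what an AI-adapted DLP check does, here is a minimal sketch that scans a prompt for sensitive patterns before it leaves the company. The regular expressions are deliberately simplified illustrations, not production-grade detection rules.

```python
# Minimal sketch of a DLP-style check applied to prompts bound for external AI tools.
# The patterns are deliberately simplified illustrations, not production rules.

import re

SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "generic API key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def find_sensitive_data(prompt: str) -> list[str]:
    """Return the names of the sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    prompt = "Summarise this contract for jean.dupont@example.com, IBAN FR7630006000011234567890189"
    findings = find_sensitive_data(prompt)
    if findings:
        print("Blocked before sending:", ", ".join(findings))
```

Commercial DLP and AI-governance platforms do the same thing with far richer detection, but the principle remains identical: inspect what leaves the company before it reaches a third-party model.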