The term “Shadow GPT” refers to using ChatGPT or other generative AI tools in a professional context without prior approval from management. By using such tools, employees unwittingly transmit potentially sensitive data to third parties. Research shows that nearly 30% of employees are in this situation, with the security issues that implies.
Shadow GPT: 7 statistics to remember
- 71% of French people know about generative AIs like OpenAI’s ChatGPT and Microsoft’s Bing.
- 44% of respondents use these AIs in personal and professional settings.
- 68% of employees who use generative AI at work do so without their supervisor’s approval.
- 50% of U.S. companies say they are updating their internal policies to govern the use of ChatGPT and end Shadow GPT.
- 70% of respondents in a separate Microsoft study say they would gladly delegate repetitive tasks to AI.
- 45% of 18-24-year-olds say they use AI, compared to 18% of people over 35.
- 72% of French people believe they need more knowledge to use generative AI.
The term Shadow GPT echoes the term “Shadow IT,” which refers to software used without prior approval from the company’s IT department. With Shadow GPT, employees use generative AI tools like ChatGPT for business purposes without their company’s authorization. Recent research shows that this practice is much more widespread than one might imagine, and that it puts the company’s confidential data at risk.
Shadow GPT: nearly 1/3 of employees currently affected
A study from April 2023 reveals that 71% of French people are familiar with generative AIs such as OpenAI’s ChatGPT and Microsoft’s Bing (whose “creation” mode is based on GPT-4). The study also showed that 44% of respondents use these AIs both personally and professionally. Of these, 68% do so without informing their superiors. In other words, nearly 30% of French employees use generative AI without prior authorization from their managers.
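The “nearly 30%” figure follows directly from combining the two survey results. A quick sketch of the arithmetic:

```python
# Back-of-the-envelope check of the "nearly 30%" figure:
# 44% of respondents use generative AI, and 68% of those
# do so without informing their superiors.
users = 0.44        # share of respondents using generative AI
unreported = 0.68   # share of those users acting without approval

shadow_gpt_share = users * unreported
print(f"{shadow_gpt_share:.1%}")  # → 29.9%
```

Multiplying the two proportions assumes the 68% applies uniformly to all users in the sample, which is how the study presents it.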
In another study conducted by Microsoft, 70% of the people interviewed said they would gladly delegate their repetitive tasks to AI, which could save up to two days of work.
The risks of Shadow GPT
We have already mentioned the risks of ChatGPT. Simply put, these are risks of confidential data leaks, and they are inherent to all free services on the Internet: if the service is free, your data serve as the currency.
Statoil was caught out as early as 2017 through its use of DeepL (which uses your data to train its algorithms), and in 2023 it was Samsung’s turn to fall into the ChatGPT trap. On three separate occasions, its employees used ChatGPT and transmitted strictly confidential data to OpenAI, including source code for its semiconductor chips.
These risks have led some companies to ban the use of ChatGPT: Samsung, which plans to make an internal tool available instead, but also JPMorgan Chase, Amazon, Verizon, and Accenture. What all these companies have in common is that they work with large amounts of proprietary data from which they derive a competitive advantage. Using ChatGPT, and more generally any algorithmic tool offering a free version, implies transferring the user’s data for training purposes. While the exchange is understandable in principle (a free service in return for data), few users are aware of this transfer. In the case of ChatGPT, the lack of consent even led the Italian authorities to ban ChatGPT from the territory. That decision was abrupt and sudden, but it had the merit of highlighting OpenAI’s flaws.
Special attention for younger employees
The research shows a generational gap in AI adoption, with 45% of 18-24-year-olds using these tools, compared to 18% of those over 35. It is only moderately surprising, then, that 72% of French people feel they need more knowledge to use these technologies.
This gap should make companies aware that risks are not evenly distributed among employees. Some are more “at risk” than others, and awareness efforts will therefore have to be tailored to each audience.
Most companies do not measure the risks
Shadow GPT does not start with bad intentions. Employees are curious to experiment with this new tool, and that is to their credit. Experimentation is essential to understanding its usefulness; as Wharton professor Ethan Mollick has explained, it takes at least four hours of work with these tools to start understanding them. And that is where the problem lies: the lack of information and transparency.
Companies, for their part, have been caught off guard. Nearly 50% of U.S. companies surveyed in March 2023 were revising their internal policies to address Shadow GPT. Their employees’ adoption of ChatGPT was so sudden, and the subject so new, that they were unable to act on it early enough.
Beyond the large companies, there are all those for whom ChatGPT will long remain a mystery, until the day their business secrets end up out in the open. Remember that 96% of companies are SMEs and that their level of digital maturity is far from exemplary. There is therefore much work to be done to educate employees about the benefits of AI, but also about its limits.
Tags: artificial intelligence, market research