"It's easier to get forgiveness than permission," says John, a software engineer at a financial services technology company. "Just get on with it. And if you get in trouble later, then clear it up." He's one of the many people who are using their own AI tools at work, without the permission of their IT division (which is why we are not using John's full name).

According to a survey by Software AG, half of all knowledge workers use personal AI tools. The research defines knowledge workers as "those who primarily work at a desk or computer". For some it's because their IT team doesn't offer AI tools, while others said they wanted their own choice of tools.

John's company provides GitHub Copilot for AI-supported software development, but he prefers Cursor. "It's largely a glorified autocomplete, but it is very good," he says. "It completes 15 lines at a time, and then you look over it and say, 'yes, that's what I would've typed'. It frees you up. You feel more fluent."

His unauthorised use isn't a policy violation; it's just easier than risking a lengthy approvals process, he says. "I'm too lazy and well paid to chase up the expenses," he adds.

John recommends that companies stay flexible in their choice of AI tools. "I've been telling people at work not to renew team licences for a year at a time because in three months the whole landscape changes," he says. "Everybody's going to want to do something different and will feel trapped by the sunk cost." The recent release of DeepSeek, a freely available AI model from China, is only likely to expand the AI options.

Peter (not his real name) is a product manager at a data storage company, which offers its people the Google Gemini AI chatbot. External AI tools are banned, but Peter uses ChatGPT through the search tool Kagi. He finds the biggest benefit of AI comes from challenging his thinking when he asks the chatbot to respond to his plans from different customer perspectives.
"The AI is not so much giving you answers, as giving you a sparring partner," he says. "As a product manager, you have a lot of responsibility and don't have a lot of good outlets to discuss strategy openly. These tools allow that in an unfettered and unlimited capacity."

The version of ChatGPT he uses (4o) can analyse video. "You can get summaries of competitors' videos and have a whole conversation [with the AI tool] about the points in the videos and how they overlap with your own products."

In a 10-minute ChatGPT conversation he can review material that would take two or three hours of watching the videos. He estimates that his increased productivity is equivalent to the company getting a third of an additional person working for free.

He's not sure why the company has banned external AI. "I think it's a control thing," he says. "Companies want to have a say in what tools their employees use. It's a new frontier of IT and they just want to be conservative."

The use of unauthorised AI applications is sometimes called 'shadow AI'. It's a more specific version of 'shadow IT', which is when someone uses software or services the IT department hasn't approved.

Harmonic Security helps firms identify shadow AI and prevent corporate data being entered into AI tools inappropriately. It is tracking more than 10,000 AI apps and has seen more than 5,000 of them in use. These include custom versions of ChatGPT and business software with added AI features, such as the communications tool Slack.

However popular it is, shadow AI comes with risks. Modern AI tools are built by digesting huge amounts of information, in a process called training. Around 30% of the applications Harmonic Security has seen in use train on information entered by the user. That means the user's information becomes part of the AI tool and could be output to other users in the future.
Companies may be concerned about their trade secrets being exposed by the AI tool's answers, but Alastair Paterson, CEO and co-founder of Harmonic Security, thinks that's unlikely. "It's pretty hard to get the data straight out of these [AI tools]," he says.

However, firms will be concerned about their data being stored in AI services they have no control over, no awareness of, and which may be vulnerable to data breaches.

It will be hard for companies to fight the use of AI tools, which can be extremely useful, particularly for younger workers. "[AI] allows you to cram five years' experience into 30 seconds of prompt engineering," says Simon Haighton-Williams, CEO of The Adaptavist Group, a UK-based software services group. "It doesn't wholly replace [experience], but it's a good leg up in the same way that having a good encyclopaedia or a calculator lets you do things that you couldn't have done without those tools."

What would he say to companies that discover they have shadow AI use? "Welcome to the club. I think probably everybody does. Be patient and understand what people are using and why, and figure out how you can embrace it and manage it rather than demand it's shut off. You don't want to be left behind as the organisation that hasn't [adopted AI]."

Trimble provides software and hardware for managing data about the built environment. To help its employees use AI safely, the company created Trimble Assistant, an internal AI tool based on the same AI models that are used in ChatGPT. Employees can consult Trimble Assistant for a wide range of applications, including product development, customer support and market research. For software developers, the company provides GitHub Copilot.

Karoliina Torttila is director of AI at Trimble. "I encourage everybody to go and explore all kinds of tools in their personal life, but recognise that their professional life is a different space and there are some safeguards and considerations there," she says.
The company encourages employees to explore new AI models and applications online.

"This brings us to a skill we're all forced to develop: we have to be able to understand what is sensitive data," she says. "There are places where you would not put your medical information, and you have to be able to make those types of judgement calls [for work data, too]."

Employees' experience of using AI at home and for personal projects can shape company policy as AI tools evolve, she believes. There needs to be a "constant dialogue about what tools serve us the best", she says.
Why employees smuggle AI into work