The rising threat of shadow AI

analysis
Feb 28, 2025 | 5 mins
Cloud Security | Data Governance | Generative AI

The uncontrolled and ungoverned AI apps your employees are using are becoming a real threat to cloud deployments, but banning them won't work. Here's what to do.


Employees in a large financial organization began developing AI tools to automate time-consuming tasks such as weekly report generation. They didn't think about what could go wrong. Within a few months, unauthorized applications skyrocketed from just a couple to 65. The kicker is that all these AI tools are training on sensitive corporate data, even personally identifiable information.

One team used a shadow AI solution built on ChatGPT to streamline complex data visualizations. This inadvertently exposed the company's intellectual property to public models. Of course, compliance officers raised alarms about potential data breaches and regulatory violations. (Why is it that these folks never prevent this stuff but always show up after it's happened?)

The company's leadership realized the critical need for centralized AI governance. They conducted a comprehensive audit and established an Office of Responsible AI aimed at mitigating risks while allowing employees to leverage sanctioned AI tools. Perhaps too little too late?

Stay out of the shadows

Cloud security administrators are increasingly grappling with the rise of shadow AI. Employees, driven by demanding workloads and tight deadlines, use AI applications without IT approval or oversight. The security implications are profound: Shadow AI represents a fundamental challenge to our carefully constructed security perimeters. Enterprises have developed somewhat restrictive policies around the use of AI since the emergence of generative AI and ChatGPT. As you may have guessed, the result is a chaotic jumble of applications that can lead to significant security risks.

According to recent findings, more than 12,000 such apps have already been identified, and 50 new applications pop up daily. Disturbingly, many of these tools bypass established security protocols. I suspect that security admins are unaware of most of them and, in many instances, never will be. The security audits I attend often conclude that about 75% of threats are missed. Given that these shadow AI applications often run on or around cloud systems, the problem compounds: Cloud deployments are far more complex, and the exposure extends to the cloud provider or providers as well.

What to do?

These unauthorized applications open up critical risks that even the most educated security admins don't yet understand. First and foremost is the undeniable threat of data breaches. When employees input sensitive company information into unvetted AI applications, they inadvertently expose this data to potential leaks. The AI applications themselves are not always the bad actors; the data is often transmitted to remote servers, in many instances outside of the country.

Another risk is that many shadow AI tools, such as those utilizing OpenAI's ChatGPT or Google's Gemini, default to training on any data provided. This means proprietary or sensitive data could already mingle with public domain models. Moreover, shadow AI apps can lead to compliance violations. It's crucial for organizations to maintain stringent control over where and how their data is used. Regulatory frameworks not only impose strict requirements but also serve to protect sensitive data that could harm an organization's reputation if mishandled.
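To make "control over where and how data is used" concrete, here is a minimal sketch of a pre-flight check an organization might run on any text before it is allowed to leave for an external AI service. The patterns, function names, and the overall approach are illustrative assumptions, not a complete data-loss-prevention solution:

```python
import re

# Illustrative patterns only -- a real DLP system needs far broader
# coverage (names, account numbers, locale-specific formats, etc.).
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_for_pii(text: str) -> list[str]:
    """Return the names of the PII patterns found in the text."""
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(text)]

def safe_to_send(text: str) -> bool:
    """Gate an outbound prompt: block it if any PII pattern matches."""
    return not scan_for_pii(text)
```

A gateway like this only helps if sanctioned AI traffic is actually routed through it; it does nothing about tools employees reach directly, which is exactly the shadow AI problem.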

Cloud computing security admins are aware of these risks. However, the tools available to combat shadow AI are grossly inadequate. Traditional security frameworks are ill-equipped to deal with the rapid and spontaneous nature of unauthorized AI application deployment. The AI applications keep changing, which changes the threat vectors, which means the tools can't get a fix on the variety of threats.

Getting your workforce on board

Creating an Office of Responsible AI can play a vital role in a governance model. This office should include representatives from IT, security, legal, compliance, and human resources to ensure that all facets of the organization have input in decision-making regarding AI tools. This collaborative approach can help mitigate the risks associated with shadow AI applications. You want to ensure that employees have secure and sanctioned tools. Don't forbid AI; teach people how to use it safely. Indeed, the "ban all tools" approach never works; it lowers morale, causes turnover, and may even create legal or HR issues.

The call to action is clear: Cloud security administrators must proactively address the shadow AI challenge. This involves auditing current AI usage within the organization and continuously monitoring network traffic and data flows for any signs of unauthorized tool deployment. Yes, we're creating AI cops. However, don't think they get to run around and point fingers at people, or let your cloud providers point fingers at you. This is one of those problems that can only be solved with a proactive education program aimed at making employees more productive and not afraid of getting fired.
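As a sketch of what that monitoring could look like in practice, the snippet below tallies how often machines on the network reach known generative-AI endpoints, based on proxy or DNS logs. The domain watchlist and the one-hostname-per-line log format are assumptions for illustration; a real deployment would use a maintained feed of AI service domains and parse its own log format:

```python
from collections import Counter

# Illustrative, deliberately incomplete watchlist of AI service domains.
AI_DOMAINS = {"api.openai.com", "chat.openai.com", "gemini.google.com"}

def tally_ai_traffic(log_lines):
    """Count requests to watched AI domains in proxy/DNS log lines.

    Assumes each line looks like 'client_ip destination_host';
    adapt the parsing to your own log format.
    """
    hits = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 2 and parts[1] in AI_DOMAINS:
            hits[parts[1]] += 1
    return hits
```

The point of such a tally is discovery and education, not punishment: a spike in traffic to an unsanctioned tool tells you which teams need a sanctioned alternative and some training.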

Shadow AI is yet another buzzword to track, but it's also undeniably a growing problem for cloud computing security administrators. The lack of adequate defenses against these unauthorized applications is a pressing concern. However, organizations can navigate this new landscape with centralized governance, education, and proactive monitoring while reaping AI technologies' benefits. We need to be smart with this one.

David Linthicum

David S. Linthicum is an internationally recognized industry expert and thought leader. Dave has authored 13 books on computing, the latest of which is An Insider's Guide to Cloud Computing. Dave's industry experience includes tenures as CTO and CEO of several successful software companies, and upper-level management positions in Fortune 100 companies. He keynotes leading technology conferences on cloud computing, SOA, enterprise application integration, and enterprise architecture. Dave writes the Cloud Insider blog for InfoWorld. His views are his own.
