From Shadow IT to Shadow AI

The first comprehensive law for AI has been approved, giving providers of AI applications between six months and three years to adapt to the new rules. Anyone who wants to utilise AI, especially in sensitive areas, will have to strictly control AI data and its quality and create transparency – classic core disciplines of data management.

With the AI Act, the EU has done pioneering work and put a legal framework around what is currently the most dynamic and important branch of the data industry – just as it did with the GDPR in April 2016 and the Digital Operational Resilience Act (DORA), which applies from January 2025. Many of the new obligations arising from the AI Act will be familiar to data protection officers and to every compliance officer already working with the GDPR and DORA.

The law defines AI and establishes four risk levels: minimal, limited, high and unacceptable. AI applications that companies want to use in areas such as healthcare, education and critical infrastructure fall into the highest permitted category, “high risk”. Applications in the “unacceptable” category – for example, those considered a clear threat to people’s safety, livelihoods and rights – are banned outright.
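The tiered logic described above can be sketched in a few lines of code. This is purely illustrative: the four tier names come from the Act, but the example domains and the `classify()` helper are hypothetical assumptions, not an official classification.

```python
# Illustrative sketch only: a simplified mapping of application areas to the
# four AI Act risk tiers. Tier names follow the Act; the example domains and
# this classify() helper are hypothetical.

RISK_TIERS = ["minimal", "limited", "high", "unacceptable"]

# Hypothetical example domains per tier, loosely following the Act's logic
DOMAIN_RISK = {
    "spam_filter": "minimal",
    "chatbot": "limited",              # transparency obligations apply
    "healthcare_triage": "high",       # strict requirements before deployment
    "education_scoring": "high",
    "critical_infrastructure": "high",
    "social_scoring": "unacceptable",  # banned outright
}

def classify(domain: str) -> str:
    """Return the assumed AI Act risk tier for a given application domain."""
    return DOMAIN_RISK.get(domain, "limited")

print(classify("healthcare_triage"))  # high
print(classify("social_scoring"))     # unacceptable
```

In practice, the real assessment is a legal exercise, not a lookup table – but a registry like this can help companies keep an internal inventory of which of their AI use cases fall into which tier.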

Under the Act, AI systems must be trustworthy, secure, transparent, accurate and accountable. Operators must carry out risk assessments, use high-quality data and document their technical and ethical decisions. They must also record how their systems perform and inform users about the nature and purpose of their systems. In addition, AI systems should be supervised by humans to minimise risk and enable intervention. They must be highly robust and achieve a high level of cybersecurity.

The potential of generative AI has also created a genuine gold rush that no one wants to miss out on. This is highlighted in a study conducted by Censuswide on behalf of Cohesity, a global provider of AI-supported data management and security: 86 percent of the 903 companies surveyed are already using generative AI technologies.

Mark Molyneux, EMEA CTO at Cohesity, explains the challenges this development brings with it and why, despite all the enthusiasm, companies should not repeat old mistakes from the early cloud era.

The path to AI is very short for users: entry is gentle, easy and often free. That has big consequences that should be familiar to companies from the early phase of the cloud, which is why it is particularly important to pay attention to the following aspects right now:

Avoid loss of control

In the past, public cloud services sparked a similar gold rush, with employees uploading company data to external services with just a few clicks. IT temporarily lost control of company data and was forced to accept risks in terms of data protection and compliance. This was the birth of shadow IT.

Respondents now expect something similar with AI, as the survey shows. Compliance and data protection risks are cited as the biggest concerns, by 34 and 31 percent of respondents respectively. 30 percent of company representatives fear that AI could also produce inaccurate or false results. After all, most users do not yet know how to interact optimally with AI engines. And last but not least, generative AI solutions are still new, and not all of them are fully mature.

The media regularly report on companies that have learned this the hard way. In April 2023, engineers at Samsung uploaded confidential company data to ChatGPT, turning it into training material for a global AI – a worst case from a compliance and intellectual property perspective.

Since innovation cycles in AI are extremely short, the range of new approaches, concepts and solutions is exploding. Security and IT teams find it extremely difficult to keep up with this pace and to put each offering through its paces. Often they are not even involved because, as with the cloud, a business unit has long since adopted a service. After shadow IT, shadow AI is now emerging – and with it an enormous loss of control.

Make people aware of dangers

At the same time, new forms of possible AI misuse are coming to light. Researchers at Cornell University in the USA and the Technion in Israel have developed Morris II, a computer worm that spreads autonomously through the ecosystem of public AI assistants. The researchers managed to get the worm to bypass the security measures of three prominent AI models – Google’s Gemini Pro, OpenAI’s GPT-4 and LLaVA – and to extract useful data such as names, phone numbers and credit card details.

The researchers shared their results with the providers so that the gaps can be closed and security measures improved. But a new open flank is clearly emerging here on the cyber battlefield, where hackers and vendors have been fighting each other with malware, spam and ransomware for decades.

Speed without being hasty

IT teams will not be able to turn back the clock and keep AI out of corporate networks, so bans are usually not an appropriate approach. At the same time, IT cannot and should not be tempted into rushed decisions; instead, it should regain control over its data and govern AI responsibly.

One option is to run AI in a self-contained environment on the company’s own data. This allows IT teams to assess the risk accurately and rule out unwanted external data sharing, and the AI can be introduced in a controlled manner. IT teams can also be highly selective about which internal systems and data sources the AI modules actively examine, starting with a small cluster and rolling AI out in a tightly controlled way.

AI models that have already been introduced by third parties can be tamed by specifying exactly which data they are allowed to access. This is a decisive advantage in slowing down the uncontrolled dynamics of AI, because data flows can be precisely controlled, sensitive information protected and legal requirements adhered to.
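The allowlist approach described above can be sketched as a small data gateway through which every AI data access is routed. This is a minimal sketch under stated assumptions: the `DataGateway` class, the source names and the audit log are invented for illustration; real deployments would enforce such controls in the data platform or via the model provider’s own access controls.

```python
# Minimal sketch of allowlist-based data governance for an AI model.
# All names here (DataGateway, ALLOWED source names) are hypothetical.

class AccessDenied(Exception):
    """Raised when the AI attempts to read a source outside the allowlist."""
    pass

class DataGateway:
    def __init__(self, allowed_sources):
        # Explicit allowlist: the AI model may only read these sources
        self.allowed_sources = set(allowed_sources)
        # Every access attempt is recorded for compliance and audits
        self.audit_log = []

    def fetch(self, source, query):
        self.audit_log.append((source, query))
        if source not in self.allowed_sources:
            raise AccessDenied(f"AI access to '{source}' is not permitted")
        # Placeholder payload standing in for a real data lookup
        return f"results for {query!r} from {source}"

# Start small: expose only two vetted internal sources to the model
gateway = DataGateway(allowed_sources={"product_docs", "support_tickets"})

print(gateway.fetch("product_docs", "reset procedure"))
try:
    gateway.fetch("hr_records", "salaries")  # blocked and logged
except AccessDenied as e:
    print(e)
```

The design choice worth noting is the default-deny posture: anything not explicitly allowlisted is refused and still logged, which gives compliance teams both control over data flows and a record of what the AI tried to reach.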

Mark Molyneux is CTO for EMEA at Cohesity
