From Shadow IT to Shadow AI

The first law regulating AI has recently been approved and gives manufacturers of AI applications between six months and three years to adapt to the new rules. Anyone who wants to utilise AI, especially in sensitive areas, will have to strictly control AI data and its quality and create transparency – classic core disciplines of data management.

With the AI Act, the EU has done pioneering work and put a legal framework around what is currently the most dynamic and important branch of the data industry, just as it did with the GDPR in April 2016 and the Digital Operational Resilience Act (DORA), which applies from January 2025. Many of the new tasks arising from the AI Act will be familiar to data protection officers and to every compliance officer involved with GDPR and DORA.

The law sets out a definition of AI and defines four risk levels: minimal, limited, high and unacceptable. AI applications that companies want to use in areas such as healthcare, education and critical infrastructure fall into the highest permitted risk category, “high risk”. Those in the “unacceptable” category – for example, applications considered a clear threat to the safety, livelihoods and rights of people – will be banned.
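
For illustration, the sketch below shows one way an organisation might roughly triage its own AI use cases against these four levels. The tiers themselves come from the Act, but the example use cases, the mapping and the triage function are illustrative assumptions, not legal advice.

```python
from enum import Enum

class AIActRiskLevel(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Illustrative mapping only: a real assessment must follow the Act's annexes
# and legal review, not a simple lookup table like this one.
EXAMPLE_USE_CASES = {
    "spam filtering": AIActRiskLevel.MINIMAL,
    "customer-facing chatbot": AIActRiskLevel.LIMITED,          # transparency duties apply
    "patient triage in healthcare": AIActRiskLevel.HIGH,
    "automated exam scoring in education": AIActRiskLevel.HIGH,
    "control of critical infrastructure": AIActRiskLevel.HIGH,
    "social scoring of citizens": AIActRiskLevel.UNACCEPTABLE,  # banned practice
}

def triage(use_case: str) -> AIActRiskLevel:
    """Return the assumed risk tier; unknown use cases default to high until assessed."""
    return EXAMPLE_USE_CASES.get(use_case, AIActRiskLevel.HIGH)

if __name__ == "__main__":
    for case, level in EXAMPLE_USE_CASES.items():
        print(f"{case:40} -> {level.value}")
```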

Under the Act, AI systems must be trustworthy, secure, transparent, accurate and accountable. Operators must carry out risk assessments, use high-quality data and document their technical and ethical decisions. They must also record how their systems perform and inform users about the nature and purpose of those systems. In addition, AI systems should be supervised by humans so that risk is minimised and interventions remain possible, and they must be highly robust and achieve a high level of cybersecurity.
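
What this record-keeping and human oversight can look like in practice is sketched below: every model call is logged with its input, output and a flag for human review. The model_call parameter, the confidence threshold and the log format are illustrative assumptions, not requirements spelled out in the Act or features of any particular product.

```python
import json
import logging
from datetime import datetime, timezone

# Append-only audit trail for AI interactions (one JSON object per line).
audit_log = logging.getLogger("ai_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("ai_audit.jsonl"))

def audited_inference(model_call, prompt: str, *, confidence_threshold: float = 0.8):
    """Run a model call, log the event and flag low-confidence answers for human review."""
    result = model_call(prompt)  # assumed to return a dict like {"answer": ..., "confidence": ...}
    needs_review = result.get("confidence", 0.0) < confidence_threshold
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "answer": result.get("answer"),
        "confidence": result.get("confidence"),
        "escalated_to_human": needs_review,
    }))
    return result, needs_review

if __name__ == "__main__":
    fake_model = lambda p: {"answer": "42", "confidence": 0.55}  # stand-in for a real model
    result, escalated = audited_inference(fake_model, "What is our refund policy?")
    print(result["answer"], "- human review needed:", escalated)
```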

The potential of generative AI has also created a real gold rush that no one wants to miss out on. This is highlighted in a study by Censuswide on behalf of Cohesity, a global provider of AI-supported data management and security: 86 percent of the 903 companies surveyed are already using generative AI technologies.

Mark Molyneux, EMEA CTO at Cohesity, explains the challenges this development brings and why, despite all the enthusiasm, companies should not repeat old mistakes from the early cloud era.

The path to AI is very short for users: entry is gentle, easy and often free, and that has big consequences that should be familiar to companies from the early phase of the cloud. That is why it is particularly important to pay attention to the following aspects right now:

Avoid loss of control

In the past, public cloud services sparked a similar gold rush, with employees uploading company data to external services with just a few clicks. IT temporarily lost control of company data and had to accept risks in terms of protection and compliance – the birth of shadow IT.

Respondents now expect something similar with AI, as the survey shows. Compliance and data protection risks are cited as the biggest concerns, by 34 and 31 percent respectively, and 30 percent of company representatives fear that AI could produce inaccurate or false results. After all, most users do not yet know how to interact optimally with AI engines. And last but not least, generative AI solutions are still new and not all of them are fully mature yet.

The media regularly reports on companies that have learned this the hard way. In April 2023, engineers at Samsung uploaded confidential company information to ChatGPT, turning it into training material for a global AI – the worst case from a compliance and intellectual property perspective.

Since innovation cycles in AI are extremely short, the range of new approaches, concepts and solutions is exploding. Security and IT teams find it extremely difficult to keep up with this pace and to scrutinise each offering properly. Often they are not even involved because, as with the cloud, a business unit has long since started using a service – after shadow IT, shadow AI is now emerging, and with it an enormous loss of control.

Make people aware of dangers

At the same time, new forms of possible misuse of AI are becoming known. Researchers at Cornell University in the USA and the Technion Institute in Israel have developed Morris II, a computer worm that spreads autonomously in the ecosystem of public AI assistants. The researchers managed to get the worm to bypass the security measures of three prominent AI models: Google's Gemini Pro, OpenAI's GPT-4 and LLaVA. The worm also managed to extract useful data such as names, phone numbers and credit card details.

The researchers shared their results with the operators so that the gaps can be closed and security measures improved. But a new open flank is clearly emerging here on the cyber battlefield, where hackers and providers have been fighting each other with malware, spam and ransomware for decades.

Speed without being hasty

IT teams will not be able to turn back the clock and keep AI out of corporate networks, so bans are usually not an appropriate approach. Nor should IT be tempted into rushed decisions; instead, it should regain control over its data and govern AI responsibly.

This allows IT teams to accurately assess the risk and rule out unwanted external data sharing. The AI remains self-contained and can be introduced in a controlled manner. IT teams can also be very selective about which internal systems and data sources the AI modules actively examine, starting with a small cluster and introducing AI in a highly controlled way.

AI models that have already been introduced by third parties can be tamed by specifying exactly which data those models are allowed to access. This is a decisive advantage for reining in the uncontrolled dynamics of AI, because data flows can be precisely controlled, sensitive information protected and legal requirements adhered to.
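
A minimal sketch of what such taming can look like in practice is shown below, assuming a simple allowlist of approved internal sources: anything not on the list is never handed to the model. The source names and helper functions are hypothetical placeholders, not a specific vendor's API.

```python
# Only data from sources that IT has explicitly approved ever reaches the model.
ALLOWED_SOURCES = {"public_product_docs", "approved_knowledge_base"}

def fetch_documents(source: str) -> list[str]:
    # Placeholder: in practice this would query the internal system behind `source`.
    return [f"<document from {source}>"]

def gather_context(requested_sources: list[str]) -> list[str]:
    """Return documents only from allowlisted sources; everything else is dropped and reported."""
    permitted = [s for s in requested_sources if s in ALLOWED_SOURCES]
    blocked = sorted(set(requested_sources) - set(permitted))
    if blocked:
        print(f"Blocked sources (never shared with the model): {blocked}")
    return [doc for source in permitted for doc in fetch_documents(source)]

if __name__ == "__main__":
    context = gather_context(["public_product_docs", "hr_records", "approved_knowledge_base"])
    print(context)
    # send_to_model(context)  # hypothetical call: only allowlisted content crosses the boundary
```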

Mark Molyneux

Mark Molyneux is CTO for EMEA at Cohesity
