In light of the European Union’s AI Act coming into full force in 2026, Squirro’s Co-founder & CEO, Dorian Selz, looks at a critical yet often overlooked challenge – the lack of standardisation in AI regulation across a globalised economy. While much of the conversation focuses on the EU AI Act and national policies, businesses operating across multiple jurisdictions face a deeper issue: a fragmented regulatory landscape that creates compliance uncertainty and risk.
Right now, we’re on the cusp of what is likely to be the most thrilling phase of the rollercoaster ride otherwise known as GenAI adoption. Instead of hit-and-miss experiments, we are beginning to see the kind of serious, large-scale deployments that prove this innovation is more than just hype. AI is about to deliver tangible results that are transformational across multiple industries.
This is immensely exciting for organisations, which now have the chance to revolutionise how they operate, how they solve real-world conundrums and how they bring customers along for the ride, with exceptional, personalised experiences.
If they are to embrace all that AI has to offer, they must not only maintain their deployments at scale but also face up to one of the greatest challenges – operating efficiently in a fragmented regulatory landscape.
A regulatory patchwork
We live in a globalised society, and it is commonplace for a company headquartered on one continent to operate on another, or even across three. But the fact is that when it comes to AI, there is a patchwork of evolving frameworks, with some regions enacting stricter guidelines while others take a more reactive approach. The Wild West comes to mind – plenty of action, but no sheriff in sight.
The EU AI Act comes into full force next year, providing a unified framework for AI regulation across its 27 member states. Outside the EU, however, AI compliance remains a complex challenge. The UK has opted for a more flexible, pro-innovation approach, while the US continues to regulate AI through sector-specific and state-level laws. The result is that the requirements companies face vary by jurisdiction and are constantly evolving.
History offers a lesson in how standardisation is reached. In the 19th century, Britain’s railway system initially lacked standardisation, with different companies using incompatible track gauges. The resulting inefficiency prompted government intervention: a mandated standard gauge that was later widely adopted. AI governance today faces a similar challenge – without a unified framework, businesses must navigate inconsistent compliance requirements across jurisdictions.
Right now, the companies that do comply with regulations tend to meet only local standards, just as the railway companies of the 19th century did. It’s easy for them to believe these are the only rules that apply, since there is little enforcement in place to stop them and guidance around compliance remains disjointed.
Setting guardrails
So, what can they do to future-proof their global AI projects? As AI grows more sophisticated, its capacity to generate impactful – but potentially risky – outputs will escalate. Companies must make a strategic shift and take responsibility for implementing their own AI governance frameworks and guardrails.
Many C-suite executives will be familiar with the concept of AI guardrails, which, to date, have often been pigeonholed as reactive mechanisms, primarily deployed to stop GenAI from producing offensive or discriminatory outputs. That function is critical, of course, but it barely scratches the surface of their strategic potential. The true power of AI guardrails lies in their ability to proactively align AI behaviour with corporate, ethical and legal expectations – acting as a dynamic, real-time compliance layer.
Consider the multi-dimensional nature of AI governance, made concrete in the sketch that follows this list:
· Governance guardrails: These cut risk by ensuring AI systems comply with corporate policies, accepted ethical standards, and legal mandates – a crucial defence against regulatory missteps
· Role-based guardrails: AI systems must adapt to individual roles, tailoring actions to reflect specific user rights and responsibilities. This personalisation ensures AI outputs respect context and hierarchy
· Performance guardrails: Efficiency and quality are non-negotiables. These guardrails maintain AI-driven processes at peak performance, enforcing operational best practices
· Brandkey guardrails: Consistency is king. AI-generated content must adhere to corporate values and brand identity, preventing off-brand messaging from slipping through
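To make these four dimensions concrete, here is a minimal sketch in Python of how a guardrail layer might vet a draft AI response before it is delivered. Everything in it is illustrative: the rule lists, role table, length budget and `vet` function are hypothetical placeholders, not a description of any particular product or API.

```python
from dataclasses import dataclass

# Minimal illustrative sketch of a guardrail layer. All rules below are
# hypothetical placeholders, not real regulatory or brand requirements.

@dataclass
class Draft:
    text: str
    user_role: str   # e.g. "customer", "advisor"

BANNED_CLAIMS = ["guaranteed returns", "risk-free"]        # governance (illustrative)
ROLE_TOPICS = {"customer": {"pricing", "support"},         # role-based (illustrative)
               "advisor": {"pricing", "support", "portfolio"}}
MAX_LENGTH = 1200                                          # performance (illustrative)
OFF_BRAND_PHRASES = ["cheap", "no-brainer"]                # brand (illustrative)

def governance_check(d: Draft) -> list[str]:
    return [f"banned claim: '{c}'" for c in BANNED_CLAIMS if c in d.text.lower()]

def role_check(d: Draft, topic: str) -> list[str]:
    allowed = ROLE_TOPICS.get(d.user_role, set())
    return [] if topic in allowed else [f"topic '{topic}' not allowed for role '{d.user_role}'"]

def performance_check(d: Draft) -> list[str]:
    return [] if len(d.text) <= MAX_LENGTH else ["response exceeds length budget"]

def brand_check(d: Draft) -> list[str]:
    return [f"off-brand phrase: '{p}'" for p in OFF_BRAND_PHRASES if p in d.text.lower()]

def vet(d: Draft, topic: str) -> list[str]:
    """Run all four guardrail dimensions; deliver the draft only if this is empty."""
    return (governance_check(d) + role_check(d, topic)
            + performance_check(d) + brand_check(d))

draft = Draft("Our fund offers guaranteed returns!", user_role="customer")
violations = vet(draft, topic="portfolio")
if violations:
    print("Blocked before delivery:", violations)  # regenerate or escalate instead
```

In practice each check would be backed by something more robust than keyword matching, such as a policy engine or a trained classifier, but the shape of the layer is the same: nothing reaches the customer until every dimension signs off.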
Let’s move from theory to practice. In the US, AI-driven financial tools must comply with stringent SEC and FINRA regulations, ensuring that AI does not mislead consumers or violate fiduciary responsibilities. Without proactive AI guardrails, a simple oversight could trigger regulatory penalties and reputational damage. Implementing compliance-focused AI guardrails, which vet AI-generated responses before customer delivery, creates a seamless layer of protection against inadvertent lawbreaking.
But compliance isn’t just about dodging fines – it’s about intelligent adaptability. AI must grasp the rights and roles of those it interacts with. Picture an online car dealership whose AI chatbot was tricked into lowering the price of a car to one dollar – a viral example of AI naivety. With role-based guardrails in place, AI systems would instantly recognise the customer’s attempt to manipulate the chatbot and counteract it with intelligent, rule-bound responses.
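As a hedged illustration of that scenario, the sketch below shows one way a role-based pricing guardrail might work: the chatbot can "agree" to anything in conversation, but a deterministic check, keyed to the role doing the selling, clamps any proposed price before an offer is confirmed. The role table and `approve_price` function are hypothetical.

```python
# Illustrative sketch of a role-based pricing guardrail. The discount
# limits and function below are hypothetical, not a real dealership policy.

MAX_DISCOUNT = {
    "customer_chatbot": 0.05,  # the public-facing bot may offer at most 5% off
    "sales_manager": 0.15,     # a human manager may authorise up to 15% off
}

def approve_price(role: str, list_price: float, proposed: float) -> float:
    """Clamp a proposed price to the floor permitted for this role."""
    floor = list_price * (1 - MAX_DISCOUNT.get(role, 0.0))
    if proposed < floor:
        # The one-dollar "deal" never leaves the system; the bot falls back
        # to the lowest price its role is actually allowed to offer.
        return floor
    return proposed

print(approve_price("customer_chatbot", list_price=58_000, proposed=1.0))  # 55100.0
```

The design point is that the binding business action sits outside the model: the conversation can wander wherever the user pushes it, but the action that commits the company is validated by code that knows the user's role and the company's rules.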
Trust through ethical AI frameworks
AI guardrails are not a substitute for enforceable AI standards, but they are a vital bridge to responsible AI adoption. While policymakers grapple with aligning global AI regulations, companies can’t afford to wait. Proactively establishing AI guardrails fosters trust – both internally and externally – and mitigates systemic vulnerabilities that could snowball into full-blown crises.
Standardising AI oversight does more than tick a regulatory box; it provides companies with a competitive advantage over those that are still struggling to work out what they can and can’t do. Organisations that integrate AI guardrails now are positioning themselves as leaders in responsible innovation, not just protecting against legal pitfalls but building consumer confidence and market stability in an era of relentless technological progress.
The path forward is clear: AI innovation and AI governance go hand in hand. Businesses must recognise that embracing AI’s potential means equally embracing the responsibility to harness it safely. Without robust AI guardrails, organisations risk more than regulatory backlash; they jeopardise their reputation, customer trust and long-term success.