Introducing Agentic Tool Sovereignty (ATS)
The EU AI Act regulates AI systems through pre-market conformity assessment (for high-risk systems) and role-based obligations: a rather static compliance model that assumes fixed configurations and predetermined relationships.
However, real-life AI systems are rarely fixed or predetermined.
To understand this claim, one must first understand that cloud computing providers – the backbone of most web services, including APIs, databases, and search engines – operate across jurisdictions and borders, leaving “less certainty for the customer in terms of the location of data placed into the Cloud and the legal foundations of any contract with the provider”.
Legal scholarship has long noted that the Internet allows EU law and non-EU legal orders to collide across borders, creating “frequent legal conflicts” in a “fragmented and global” environment. Likewise, EU policy analysis emphasises a loss of control over data and infrastructure, and the dominance of non-EU cloud providers, raising sovereignty concerns.
One must also understand that AI Agents can be defined as “goal-oriented assistants”, designed to act autonomously with minimal input. That is, they are “not mere tools, but actors” that exercise decision-making.
AI agents can invoke third-party tools (which include APIs and web searches), and even other AI systems (workflows) – services provided by the aforementioned cloud computing providers – which may not be known before runtime and may operate under different jurisdictional regimes and in different geographic areas.
Legal scholar Giovanni Sartor has argued that we may attribute legally relevant intentional states to AI agents and recognise their capacity to act on users' behalf.
This directly challenges how such agents fit within the static, predetermined compliance model of the EU AI Act.
I call this challenge “Agentic Tool Sovereignty” (ATS): the (in)ability of states and providers to maintain lawful control over how their AI systems autonomously invoke and use cross-border tools. Where digital sovereignty concerns control over one’s digital infrastructure, data, and technologies, ATS extends this concern to the runtime conduct of AI systems themselves: their capacity to act, choose, and integrate tools beyond any single jurisdiction’s effective reach.
Fifteen months after the Act entered into force, no guidance addresses this gap, while €20 million in GDPR fines for cross-border AI violations (OpenAI €15 m, Replika €5 m) signal how regulators might respond when agents' autonomous tool use inevitably triggers similar breaches. The disjunction between the AI Act’s static compliance model and agents' dynamic tool use creates an accountability vacuum that neither providers nor deployers can navigate.
Consider a hypothetical scenario: an AI recruitment system in Paris autonomously invokes a US psychometric API, a UK verification service, a Singapore skills platform, and a Swiss salary tool, all in under five seconds. Three months later, four regulators issue violations. The deployer lacked visibility into data flows, audit trails proved insufficient, and the agent possessed no geographic routing controls.
Defining ATS’ Dimensions
The question of ATS arises from the tension between agent autonomy on the one hand, and cross-border data flows with digital sovereignty on the other. The legal frameworks that we will consider (the EU AI Act and the GDPR) assume static relationships, predetermined data flows, and unified control: assumptions incompatible with agents' runtime, autonomous, cross-jurisdictional tool invocation.
ATS has technical, legal, and operational dimensions, which can be summarised as follows:
Technically, agents might dynamically select tools from constantly-updating hubs/registries (digital ‘catalogues’ listing the available tools), making the import jurisdiction unknown until runtime (a minimal sketch of this follows the list below).
Legally, when agents autonomously transfer data across borders, jurisdiction becomes ambiguous.
Operationally, responsibility disperses across model providers, system providers, deployers, and tool providers, with no actor possessing complete visibility into, or control over, the agent’s decision tree, data flows, or compliance posture at the moment of tool invocation.
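To make the technical dimension concrete, here is a minimal Python sketch – all names, endpoints, and registry contents are hypothetical – of an agent selecting a tool from a live registry. The point is that the tool’s hosting jurisdiction is only discoverable, if at all, once the descriptor is fetched at runtime, long after any conformity assessment has concluded.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ToolDescriptor:
    """An entry in a (hypothetical) constantly-updating tool registry."""
    name: str
    endpoint: str
    description: str
    jurisdiction: Optional[str] = None  # frequently not declared by the tool provider at all

def fetch_registry() -> list:
    """Stand-in for querying a live tool hub; its contents can change between invocations."""
    return [
        ToolDescriptor("psychometric-scoring", "https://api.example-us.com/score",
                       "candidate psychometric scoring"),               # jurisdiction undeclared
        ToolDescriptor("salary-benchmark", "https://api.example-ch.com/salary",
                       "salary benchmarking", jurisdiction="CH"),
    ]

def keyword_overlap(task: str, description: str) -> int:
    """Crude relevance score, used only for illustration."""
    return len(set(task.lower().split()) & set(description.lower().split()))

def select_tool(task: str) -> ToolDescriptor:
    """The agent picks whichever registered tool best matches the task.
    Nothing in this step consults a conformity assessment or a transfer mechanism."""
    return max(fetch_registry(), key=lambda t: keyword_overlap(task, t.description))

tool = select_tool("score this candidate's psychometric assessment")
print(tool.endpoint, tool.jurisdiction)  # jurisdiction may well be None at this point
```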
Gartner predicts that by 2027, 40% of AI-related data breaches will result from cross-border generative AI misuse. Yet the AI Act provides no mechanism to constrain where agents execute, attest to their runtime behaviour, or maintain accountability as control leaves the original perimeter.
The AI Act’s Structural Failures
Substantial Modification Ambiguity
Article 3(23) defines "substantial modification" as changes "not foreseen or planned in the initial conformity assessment".
But does runtime tool invocation constitute such modification?
Related legal scholarship reveals that these ambiguities are structural rather than transitional. Even when developers make intentional modifications to AI systems using documented approaches, "it is unlikely that upstream developers will be able to predict or address risks stemming from all potential downstream modifications to their model". If predictability fails even for known, planned modifications, it collapses entirely for autonomous runtime tool invocation: providers cannot foresee which tools agents will select from constantly-updating registries, what capabilities those tools possess, or what risks they introduce.
If tools were documented during conformity assessment, responsibility likely remains with the original provider. If tool selection and use were unanticipated or fundamentally alter capabilities, Article 25(1) may be triggered, transforming the deployer into a provider. Yet the “substantial modification” threshold requires determining whether changes were “foreseen or planned”: a determination that becomes structurally impossible when agents autonomously select tools that did not exist at the time of conformity assessment.
Article 96(1)(c) mandates Commission guidance on substantial modification, but agentic systems remain excluded and no guidance from the AI Office has been forthcoming.
Post-Market Monitoring Impossibility
Article 72(2)
Article 72(2) requires post-market monitoring (of high-risk systems) to "include an analysis of the interaction with other AI systems". While this provides the strongest textual basis for monitoring external tool interactions, it still raises further questions, namely:
do "other AI systems" encompass non-AI tools and APIs? Most external tools that agents invoke are conventional APIs, not AI systems; while others might be ‘black boxes’ that are not outwardly interfaced-with as AI systems but internally operate as such;
how can providers monitor third-party services beyond their control? Providers lack access to tool providers' infrastructure, cannot compel disclosure of data processing locations, and have no mechanism to audit tool behaviour; this is especially true if tool providers reside outside of the EU.
Academic analysis of the AI Act's post-market monitoring framework acknowledges this structural challenge by noting that post-market monitoring becomes especially challenging for "AI systems that continue to learn, i.e. update their internal decision-making logic after being deployed at the market". Agentic AI systems with dynamic tool selection capabilities fall under this category.
Furthermore, the Act assumes that monitoring logs "can either be controlled by the user, the provider, or a third party, as per contractual agreements", but this assumption breaks down entirely when agents invoke tools from providers unknown before runtime and with whom no contractual relationship exists, creating visibility gaps that render Article 72(2)'s monitoring obligations impossible to fulfil.
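A short sketch makes this visibility gap tangible: the most a deployer- or provider-side audit log can record is what the unknown, uncontracted tool provider chooses to expose. Everything here is hypothetical (including the self-declared region header); the point is how many fields stay empty.

```python
import json
from datetime import datetime, timezone

def log_tool_invocation(tool_endpoint: str, response_headers: dict) -> dict:
    """Record what a provider can actually observe about a runtime call to an unknown tool.
    Fields that Article 72(2)-style monitoring implicitly assumes are knowable remain empty
    unless the tool provider volunteers them."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "endpoint": tool_endpoint,
        # Hypothetical self-declared header; nothing compels the tool provider to send it:
        "declared_processing_region": response_headers.get("X-Processing-Region"),
        "onward_transfers": None,   # unknowable from outside the tool provider's infrastructure
        "sub_processors": None,     # likewise
        "contractual_basis": None,  # no contract exists with a tool provider unknown before runtime
    }

# A response that declares nothing leaves the record mostly empty.
print(json.dumps(log_tool_invocation("https://api.example-tool.com/v1/verify", {}), indent=2))
```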
Article 25(4)
Article 25(4) requires providers and third-party suppliers (of high-risk systems) to specify "necessary information, capabilities, technical access and other assistance" by written agreement. However, this assumes pre-established relationships that cannot exist when agents select tools at runtime from constantly-updating hubs/registries.
The Many Hands Problem
Responsibility diffuses across the AI value chain. Model providers build foundational capabilities. System providers integrate and configure. Deployers operate in specific contexts. Tool providers (sometimes even unknowingly) supply external capabilities. Each actor possesses partial visibility and control, yet accountability frameworks assume unified responsibility.
The Act provides no mechanism to compel tool providers to disclose data processing locations, implement geographic restrictions, provide audit access, or maintain compatibility with compliance systems. When an agent autonomously selects a tool that transfers personal data to a non-adequate jurisdiction, who determined the transfer? The model provider who enabled tool-use capabilities? The system provider who configured the tool registry? The deployer who authorised autonomous operation? Or the tool provider who processed the data?
This distributed responsibility problem is well-documented in AI governance scholarship. Traditional legal frameworks for ascribing responsibility “treat machines as tools that are controlled by their human operator based on the assumption that humans have a certain degree of control over the machine’s specification” yet “as AI relies largely on ML processes that learn and adapt their own rules, humans are no longer in control and, thus, cannot be expected to always bear responsibility for AI's behaviour”.
When applied to agentic tool invocation, this responsibility gap multiplies: ML systems can "exhibit vastly different behaviours in response to almost identical inputs", making it impossible to predict which tools will be invoked or where data will flow. The Act assumes unified control that no longer exists.
Further questions arise when an AI agent selects as a tool a web service that never envisaged, let alone authorised, being used as part of an AI agent’s operation. Such a service has effectively become a tool without even knowing it.
Furthermore, the Act offers no mechanism to compel cooperation from tool providers selected at runtime (i.e., not known in advance). Recital 88 merely encourages tool suppliers to cooperate, but creates no binding obligation absent contractual arrangements. Article 25(4) does impose written agreements, but only between providers and suppliers in pre-established relationships. Neither provision therefore addresses the issues raised by runtime tool selection from ephemeral sources.
This vagueness is likely not accidental but by design. Lawmakers are "deterred from outlining specific rules and duties for algorithm programmers to allow for future experimentation and modifications to code" but this approach "provides room for programmers to evade responsibility and accountability for the system's resulting behaviour in society".
The AI Act typifies this trade-off: specific rules would constrain innovation, but general rules create accountability vacuums. ATS exists precisely in this vacuum: the space between enabling autonomous tool use and maintaining legal control over that autonomy.
The GDPR Tension
The intersection with GDPR Chapter V creates fundamental tensions. Standard Contractual Clauses under Article 46 require specific importer identification and case-by-case adequacy assessments per Schrems II. These mechanisms, too, assume pre-established relationships and intentional transfer decisions, and are therefore structurally incompatible with dynamic tool invocation.
Turning again to constantly-updating tool hubs/registries: in many cases the specific tool (and indeed its very existence) is unknown until runtime. Agentic decisions occur too rapidly for legal review, and relationships are ephemeral rather than contractual. Article 49's derogations cannot support systematic business operations according to EDPB Guidelines 2/2018.
Academic analysis commissioned by the European Parliament acknowledges this structural tension: GDPR's "traditional data protection principles—purpose limitation, data minimisation, the special treatment of 'sensitive data', the limitation on automated decisions" fundamentally conflict with AI systems' operational realities, involving "the collection of vast quantities of data concerning individuals and their social relations and processing such data for purposes that were not fully determined at the time of collection".
When agents autonomously invoke cross-border tools, they create data flows that satisfy neither the predetermined transfer mechanisms of Chapter V (which require specific importer identification) nor the purpose limitation principles of Chapter II (which assume purposes determined at collection time). The GDPR requires knowing why and where data flows; agentic systems determine both autonomously at runtime.
When agents autonomously select tools that transfer personal data, the established controller-processor relationships break down: the tool provider is not acting under the deployer’s instructions, yet neither is it independently determining the purposes and means of processing.
This suggests, perhaps, some form of joint controllership. The CJEU's Fashion ID decision establishes joint controller responsibility where parties jointly determine purposes and means. But can organisations maintain the required "control" if unaware of the agent's runtime decisions? EDPB Guidelines 05/2021 on the interplay between Article 3 and Chapter V make no comment on autonomous AI agent decisions.
Against this backdrop, providers find themselves caught between Scylla and Charybdis: pre-approve limited tool sets (eliminating agentic flexibility), implement geographic restrictions (the same issue via a different constraint), or operate in non-compliance.
Legal scholarship confirms that “the current patchwork of regulations is inadequate to address the global nature of AI technologies” particularly when "AI systems operate across borders and affect multiple jurisdictions simultaneously" rendering "unilateral regulatory approaches insufficient".
The challenge is conceptual: traditional "data sovereignty" focuses on territorial control over data within jurisdictions, but agentic systems make autonomous cross-border decisions that transcend any single jurisdiction’s authority. The AI Act (a unilateral regional approach) cannot constrain agents that autonomously invoke tools operating under different jurisdictional regimes, to different levels of conformity, in real time.
ATS thus demands a fundamental reconceptualisation: sovereignty must shift from static territorial boundaries to dynamic governance over autonomous actions themselves.
A Call for Runtime Governance
Fifteen months after the AI Act entered into force, the AI Office has published no guidance specifically addressing AI agents, autonomous tool use, or runtime behaviour. In September 2025, MEP Sergey Lagodinsky formally asked the Commission to clarify "how AI agents will be regulated". At the time of writing, no public response has been issued.
The Future Society's June 2025 report confirmed that technical standards under development "will likely fail to fully address risks from agents". This regulatory gap is not merely technical but conceptual: existing law embeds sovereignty in territory and data residency, while agentic systems require embedding sovereignty in runtime behaviour.
Until guidance emerges, providers face ambiguities that are exceedingly difficult to resolve:
whether tool invocation constitutes substantial modification;
how to satisfy Article 72(2)'s monitoring obligations for third-party services;
whether GDPR transfer mechanisms can apply to ephemeral, agent-initiated relationships.
Deployers of agentic, tool-using AI systems must also maintain human oversight (per Article 14) while enabling the system’s autonomous operation, which is – on the face of it – a compliance impossibility.
Recent legal scholarship on AI agents confirms that sufficiently sophisticated systems “could engage in a wide range of behavior that would be illegal if done by a human, with consequences that are no less injurious” yet existing frameworks provide only a “weak safeguard against serious harm” through ex post liability.
The runtime governance gap is thus not merely technical but fundamental: AI agents can autonomously perform complex cross-border actions (including tool invocation that triggers data transfers) that would violate GDPR and the AI Act if done by humans with the same knowledge and intent.
Yet neither framework imposes real-time compliance obligations on the systems themselves. Ex post fines cannot undo millisecond-duration transfers to non-adequate jurisdictions; conformity assessments cannot predict which of thousands of constantly-updating tools an agent will autonomously select. The Act's enforcement model assumes human decision-making timescales, but agentic operations occur too rapidly for human oversight to be anything more than theatrical.
ATS therefore calls for a fundamental rethinking of how we view digital sovereignty: not static jurisdictional boundaries, but dynamic guard-rails on autonomous actions. This might require mechanisms to constrain which tools agents can invoke, attest to where execution occurs, and maintain accountability as control disperses.
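One possible shape for such guard-rails – offered only as a sketch, with every policy value hypothetical – is a pre-invocation gate that checks each runtime-selected tool against a jurisdiction allowlist and writes an attestation record before any data leaves the perimeter:

```python
from dataclasses import dataclass, field
from typing import List, Optional

APPROVED_JURISDICTIONS = {"EU", "EEA", "UK", "CH"}  # hypothetical policy, not legal advice

@dataclass
class Attestation:
    tool: str
    declared_jurisdiction: Optional[str]
    permitted: bool
    reason: str

@dataclass
class RuntimeGuard:
    """Pre-invocation gate: deny by default unless a jurisdiction is declared and approved,
    and write an attestation record either way."""
    audit_log: List[Attestation] = field(default_factory=list)

    def check(self, tool_name: str, declared_jurisdiction: Optional[str]) -> bool:
        if declared_jurisdiction is None:
            permitted, reason = False, "jurisdiction undeclared"
        elif declared_jurisdiction in APPROVED_JURISDICTIONS:
            permitted, reason = True, "jurisdiction on approved list"
        else:
            permitted, reason = False, f"jurisdiction {declared_jurisdiction} not approved"
        self.audit_log.append(Attestation(tool_name, declared_jurisdiction, permitted, reason))
        return permitted

guard = RuntimeGuard()
print(guard.check("salary-benchmark", "CH"))   # True  -> invocation may proceed
print(guard.check("skills-platform", "SG"))    # False -> blocked, but attested
print(guard.check("psychometric-api", None))   # False -> denied by default when undeclared
```

A gate of this kind does not resolve the legal questions above, but it illustrates where runtime constraint, attestation, and accountability could technically sit.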
Without such mechanisms, providers face sanctions for contraventions that the Act’s own architecture renders unavoidable.
Lloyd Jones researches regulatory challenges of agentic AI at the intersection of tech and law. With 15+ years building tech and AI systems, he brings deep technical insight to AI governance. He's a member of SCL and pursuing advanced legal studies.