When the Agent Acts Without Permission

2026-03-23
AI Economics

A widely shared LinkedIn post from March 2026 explains, with commendable clarity, the distinction between Skills and MCP in agentic AI architecture. Skills teach the model how to reason. MCP gives the model access to external systems: CRMs, databases, internal documents, business tools. The post concludes: “Skills tell the model how to work. MCP lets the model use tools and data.”

It is a precise technical description. It is also, from a governance perspective, an incomplete one.

Thesis: Deploying agentic AI architecture without a prior control layer transforms a technical capability into an open execution channel with an indeterminate cost profile and no clear accountability chain.

Before an agent is allowed to act, three conditions must be defined: its expected economic value, its maximum cost exposure, and the boundary conditions under which execution is permitted.
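These three conditions can be made machine-checkable before any execution occurs. A minimal sketch in Python, assuming a hypothetical `AgentPolicy` schema and per-call cost estimates (all names and figures are illustrative, not a reference implementation):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentPolicy:
    """Hypothetical pre-execution policy capturing the three conditions."""
    expected_value_usd: float      # expected economic value of the agent's task
    max_cost_usd: float            # maximum authorized cost exposure
    allowed_tools: frozenset[str]  # boundary: tools the agent may invoke

def authorize(policy: AgentPolicy, tool: str, spent_usd: float, call_cost_usd: float) -> bool:
    """Permit a tool call only inside the declared boundaries."""
    if tool not in policy.allowed_tools:
        return False
    return spent_usd + call_cost_usd <= policy.max_cost_usd

policy = AgentPolicy(expected_value_usd=500.0, max_cost_usd=25.0,
                     allowed_tools=frozenset({"crm.read", "docs.search"}))
print(authorize(policy, "crm.read", spent_usd=24.0, call_cost_usd=0.5))   # within budget and boundary
print(authorize(policy, "crm.delete", spent_usd=0.0, call_cost_usd=0.1))  # tool outside the boundary
```

The point is not the code but its position in the pipeline: the check runs before the agent acts, not in a post-incident review.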

The distinction between Skills and MCP matters, but not for the reasons most architecture discussions suggest. The relevant question is not which component handles reasoning and which handles tool access. It is: who authorized the action, what was the expected cost, and what are the boundaries of permissible behavior before the agent executes?

In environments where an MCP-connected agent can open a browser, pull from a CRM, query internal documents, and push structured data to downstream systems, the execution model is deterministic at the level of individual calls. The API will execute as defined. What remains non-deterministic is the aggregate economic and operational outcome across sequences of such calls, particularly when multiple agents operate concurrently without shared constraints.
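One way to impose a shared constraint on that aggregate is a single cost ceiling enforced atomically across all agents. A sketch, assuming per-call cost estimates are available (the `SharedBudget` class is hypothetical):

```python
import threading

class SharedBudget:
    """Hypothetical shared constraint: one cost ceiling enforced across
    concurrent agents, so the aggregate outcome stays bounded even when
    individual call sequences are not predictable in advance."""
    def __init__(self, ceiling_usd: float) -> None:
        self.ceiling_usd = ceiling_usd
        self.spent_usd = 0.0
        self._lock = threading.Lock()

    def try_charge(self, cost_usd: float) -> bool:
        """Atomically reserve cost for one tool call; refuse past the ceiling."""
        with self._lock:
            if self.spent_usd + cost_usd > self.ceiling_usd:
                return False
            self.spent_usd += cost_usd
            return True

budget = SharedBudget(ceiling_usd=10.0)
# Ten charge attempts (sequential here; a real deployment would be concurrent).
accepted = sum(budget.try_charge(1.5) for _ in range(10))
print(accepted)  # only 6 calls of 1.5 fit under the 10.0 ceiling
```

Each individual call remains deterministic; the budget object is what makes the sequence of calls deterministic in cost.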

This is precisely the gap that current architecture discussions leave unaddressed.


I. Economic Control Precedes Agentic Architecture

Skills and MCP define what an agent can do. They do not define what it should cost, when it should stop, or who is accountable when it does something unexpected. These are not engineering questions. They are governance questions, and they belong to a layer that must exist before architecture decisions are made.

The Control Layer described in “Infrastructure in the Era of Operational AI” addresses this directly. AI Engineering ensures reproducibility and auditability of model behavior. FinOps establishes unit economics and cost transparency per execution. Observability provides the trace required to reconstruct what happened, when, and under what conditions.

Without these three disciplines in place, an MCP-connected agent operating across production systems is not a controlled asset. It is an open execution channel with an indeterminate cost profile and no clear accountability chain.


II. Agentic Architecture Amplifies Existing Governance Gaps

Traditional software systems fail in predictable ways. A misconfigured API call returns an error. A failed database query logs an exception. The failure surface is bounded.

Agentic systems fail differently. A model operating with MCP tool access can complete a sequence of individually correct actions that produce a collectively harmful outcome — not because any single step was wrong, but because no constraint governed the aggregate.

One concrete and increasingly observed mechanism is prompt injection via MCP-connected data sources. A malicious or manipulated instruction embedded in an external document, CRM record, or web page can influence an agent’s subsequent tool calls without any explicit human authorization event. Each individual call may remain technically valid. The aggregate outcome may not.
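A minimal mitigation is to track the provenance of whatever triggered a tool call and gate writes originating from untrusted content behind an explicit human authorization event. A sketch, with an assumed source taxonomy and a hypothetical `ToolRequest` schema (both illustrative):

```python
from dataclasses import dataclass

# Assumption: a taxonomy of content sources the agent cannot vouch for.
UNTRUSTED_SOURCES = {"web", "crm_record", "external_doc"}

@dataclass
class ToolRequest:
    tool: str
    provenance: str  # where the instruction that triggered this call came from
    writes: bool     # does the call mutate a downstream system?

def requires_human_authorization(req: ToolRequest) -> bool:
    """Hypothetical gate: any write triggered by untrusted content
    needs an explicit human authorization event before execution."""
    return req.writes and req.provenance in UNTRUSTED_SOURCES

print(requires_human_authorization(ToolRequest("crm.update", "web", writes=True)))        # gated
print(requires_human_authorization(ToolRequest("docs.search", "operator", writes=False)))  # not gated
```

The gate does not make any single call smarter; it reintroduces the authorization event that prompt injection removes.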

This does not represent a flaw in a single component. It reflects the absence of a governing constraint across the system.

The architectural model enables action. It does not enforce restraint.


III. Governance Must Precede Deployment, Not Follow It

The architectural distinction between Skills and MCP is useful at the design stage. At the governance stage, it is insufficient. The Board-level question is not “which component governs reasoning and which governs tool access.” It is: “What is the authorized cost envelope for this agent? What are the boundaries of permissible action? What is the audit trail if something goes wrong?”

In regulated environments, this accountability does not reside with the model, the protocol, or the engineering team. It resides with the institution’s governance structure, ultimately at Board level.

Frameworks such as the EU Digital Operational Resilience Act (DORA) establish obligations related to operational resilience, incident traceability, and the ability to reconstruct the sequence of events in automated systems. Applied to MCP-connected agents, this implies the need for traceable execution logs at the level of individual tool interactions. An institution that cannot reconstruct what an agent did, in what sequence, and at what cost, may face challenges demonstrating compliance under supervisory review.
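What such a trace might look like in practice: a sketch of an append-only per-interaction log with a hypothetical record schema (sequence number, timestamp, agent, tool, cost), sufficient to replay what an agent did, in what order, and at what cost:

```python
import json
import time

class AuditTrail:
    """Sketch of a per-interaction execution log (hypothetical schema)."""
    def __init__(self) -> None:
        self._records: list[dict] = []

    def record(self, agent_id: str, tool: str, cost_usd: float) -> None:
        """Append one tool interaction; the trail is never mutated in place."""
        self._records.append({
            "seq": len(self._records),  # ordering within the trail
            "ts": time.time(),          # wall-clock timestamp
            "agent": agent_id,
            "tool": tool,
            "cost_usd": cost_usd,
        })

    def reconstruct(self, agent_id: str) -> list[dict]:
        """Replay one agent's actions in execution order."""
        return [r for r in self._records if r["agent"] == agent_id]

trail = AuditTrail()
trail.record("agent-7", "crm.read", 0.02)
trail.record("agent-7", "docs.search", 0.05)
for r in trail.reconstruct("agent-7"):
    print(json.dumps({k: r[k] for k in ("seq", "tool", "cost_usd")}))
```

A production system would persist this to durable, tamper-evident storage; the schema above only illustrates the granularity a supervisory review would expect.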

The architecture community has produced clear thinking on how to build capable agents. The governance community must now produce equally clear thinking on how to control them.


The implication for leadership is direct. Agent capability and agent control are not the same problem, and confusing the two introduces unmanaged fiduciary exposure.

The question is not whether your organization will deploy agentic systems. The question is whether you will control them before they operate, or after something goes wrong.


Three questions for your team

  1. Do we have a defined cost envelope and authorization boundary for each MCP-connected agent operating in production, and is it enforced at runtime, not just documented?
  2. Can we reconstruct the full execution sequence of any agent action taken in the last 30 days, including which tools were called, in what order, and at what cost?
  3. Is there a tested mechanism to halt an agent’s tool access without disrupting the underlying systems it is connected to?

If the answer to any of these is “we haven’t defined that yet,” the architecture is ahead of the governance. That gap requires immediate attention.
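The third question presumes that tool access can be revoked at a choke point rather than by shutting down the systems behind it. A sketch of that design, assuming all agent tool calls pass through a hypothetical gateway (the class and its behavior are illustrative):

```python
import threading

class ToolGateway:
    """Sketch of a halt mechanism: tool access is revoked at the gateway,
    so the underlying systems keep running undisturbed (assumed design)."""
    def __init__(self) -> None:
        self._halted = threading.Event()

    def halt(self) -> None:
        """Flip the switch: all subsequent tool calls are refused."""
        self._halted.set()

    def call(self, tool_fn, *args):
        """Pass a tool call through unless the operator has halted access."""
        if self._halted.is_set():
            raise PermissionError("agent tool access halted by operator")
        return tool_fn(*args)

gateway = ToolGateway()
print(gateway.call(len, "abc"))  # normal pass-through to the tool
gateway.halt()
try:
    gateway.call(len, "abc")
except PermissionError as e:
    print(e)
```

Because the halt lives in the gateway and not in the CRM, database, or document store, revoking the agent's access leaves those systems untouched, which is the property the question asks teams to test, not merely document.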
