Once again, too many standards are vying for dominance in solving a relatively simple problem. The IT industry needs to break this counterproductive pattern.
You know the routine: The IT industry, driven as much by vendor ambition as by necessity, develops many competing standards to solve a simple problem. Today's culprit: agent-to-agent communication in AI.
The recent rise of so-called "standards" for how intelligent agents should communicate echoes past battles over service-oriented architecture, web services, and messaging middleware. The key difference is that now, this confusion could prevent one of the most promising areas in enterprise technology, agentic AI, from ever providing real value.
Let's set the scene. Intelligent agents, whether they are specialized large language models (LLMs), service-brokering bots, Internet of Things digital twins, or workflow managers, need to communicate efficiently, securely, and transparently. This is a typical interoperability issue. A well-established industry could, in theory, create a straightforward, practical protocol and move forward. Instead, we see a flood of emerging standards from too many "expert" voices with an underlying agenda, each accompanied by a white paper, a community call, a sponsored conference, and, of course, an ecosystem. This is the core problem.
The alphabet soup of protocols
Let's look at a cross-section of the technologies that are on offer or in the works:
- OpenAI's Function Calling and OpenAI Agent Protocol (OAP) are promoted as ways to let its models interact more flexibly with APIs, enhancing prompts with context and coordination logic. There's talk of standardizing this into the "OAP Standard," but details remain unclear.
- Microsoft's Semantic Kernel (SK) Extensions are designed to foster agent communication and coordination across various toolkits, including Microsoft's own Copilot and external agents, by using plug-in skills and manifest-driven connectors.
- Meta's Agent Communication Protocol (Meta-ACP) focuses on graph-based intent resolution, message-passing semantics, and decentralized trust. The pitch: make agents modular and composable at internet scale.
- The LangChain Agent Protocol (LCAP) builds on the open source LangChain framework with a focus on interoperability among various agent systems. The protocol emphasizes chained tool invocation and task switching, providing compatibility layers with OpenAI and Anthropic models.
- Stanford's Autogen Protocol supports research-level coordination among AI agents, particularly in collaborative planning and negotiation contexts.
- Anthropic's Claude-Agent Protocol is less of a full-stack protocol and more of a set of message-formatting and invocation best practices aimed at aligning with human intent and maintaining context across multi-agent dialogues.
- The W3C Multi-Agent Protocol Community Group is proposing universal message types, schemas, and agent discovery mechanisms. The group wants to make "agents as discoverable as web pages."
- IBM's AgentSphere focuses on multi-modal agent communication across hybrid cloud environments, with specifications for policy negotiation and session transfer.
This list isn't complete. Dozens more protocols are being touted in Reddit posts, Substack essays, and well-funded stealth startups, each claiming to be the one true answer to multi-agent coordination.
Competition breeds silos
Some will say, "Competition breeds innovation." That's the party line. But for anyone who's run a large IT organization, it means increased integration work, risk, cost, and vendor lock-in, all to achieve what should be the technical equivalent of exchanging a business card.
Let's not forget history. The 1990s saw the rise and fall of CORBA and DCOM, each claiming to be the last word in distributed computing. The 2000s blessed us with WS-* (the asterisk is a wildcard because the number of specs seemed infinite), most of which are now forgotten. REST and JSON (JavaScript Object Notation) finally won, mostly because they didn't try too hard, but not before millions of dollars were wasted on false starts and incompatible ecosystems.
The truth: When vendors promote their own communication protocols, they build silos instead of bridges. Agents trained on one protocol can't interact seamlessly with those speaking another dialect. Businesses end up either locking into one vendor's standard, writing costly translation layers, or waiting for the market to move on from this round of wheel reinvention.
Multiple standards mean no standards
It's a fundamental principle: producing 20 standards for the same need essentially results in no standards. There is no network effect, only confusion. The time spent debating minor protocol differences, lobbying standards organizations, and launching compatibility initiatives is time not spent creating value or solving end-user business issues.
We in IT love to make simple things complicated. The urge to create a universal, infinitely extensible, plug-and-play protocol is irresistible. But the real-world lesson is that 99% of enterprise agent interaction can be handled with a handful of message types: request, response, notify, error. The rest (trust negotiation, context passing, and the inevitable "unknown unknowns") can be managed incrementally, so long as the basic messaging is interoperable.
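To make that concrete, here is a minimal sketch of such a message envelope expressed as TypeScript types. The AgentMessage name, the field names, and the agent URIs are illustrative assumptions, not drawn from any of the protocols above.

```typescript
// A minimal, hypothetical envelope covering the four message kinds named above.
// Field names and agent URIs are illustrative, not taken from any published standard.
type AgentMessage =
  | { kind: "request"; id: string; from: string; to: string; action: string; params?: Record<string, unknown> }
  | { kind: "response"; id: string; from: string; to: string; inReplyTo: string; result?: unknown }
  | { kind: "notify"; id: string; from: string; to: string; topic: string; payload?: unknown }
  | { kind: "error"; id: string; from: string; to: string; inReplyTo?: string; code: string; message: string };

// Example: a request and the response that answers it, both valid AgentMessage values.
const findSlotRequest: AgentMessage = {
  kind: "request",
  id: "msg-001",
  from: "agent://scheduler",
  to: "agent://calendar",
  action: "findSlot",
  params: { durationMinutes: 30 },
};

const findSlotResponse: AgentMessage = {
  kind: "response",
  id: "msg-002",
  from: "agent://calendar",
  to: "agent://scheduler",
  inReplyTo: "msg-001",
  result: { start: "2025-06-01T10:00:00Z" },
};
```

A small discriminated union like this keeps the core deliberately tiny; anything a particular deployment needs beyond the four kinds can ride along in the params, payload, or result fields without breaking interoperability.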
Let's be honest. Most of the churn around standards is more about gaining mindshare and securing business development budgets than about solving architecture issues. Announcing a standard protocol is about seeding an ecosystem, not achieving consensus. Everyone aspires to be the TCP/IP of AI agents, but history shows that protocol dominance is achieved mainly through grassroots adoption rather than white papers or marketing efforts.
Go for the minimum
Here's an unpopular truth: The industry would be best served by collectively deciding on the minimum viable protocol and iterating from there. Something as dead simple as HTTP+JSON with common schemas would meet 80% of use cases, with optional extensions as needs emerge. Instead, today we have a Tower of Babel: overcomplex schemes, edge-case features no one will use, and competing vendor alliances.
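As a sketch of how little plumbing that could require, the snippet below posts a JSON message to an agent over plain HTTP. The endpoint URL, the sendToAgent helper, and the message shape are assumptions for illustration; none of this corresponds to a published standard.

```typescript
// A sketch of "HTTP+JSON with common schemas": one POST endpoint per agent,
// a JSON message in, a JSON reply out. Endpoint and helper names are hypothetical.
async function sendToAgent(endpoint: string, message: object): Promise<unknown> {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(message),
  });
  if (!res.ok) {
    throw new Error(`Agent call failed: ${res.status} ${res.statusText}`);
  }
  return res.json();
}

// Usage: any agent that accepts this shape over HTTP is reachable,
// regardless of which vendor framework sits behind the endpoint.
sendToAgent("https://agents.example.com/calendar", {
  kind: "request",
  id: "msg-003",
  from: "agent://scheduler",
  to: "agent://calendar",
  action: "findSlot",
  params: { durationMinutes: 30 },
}).then((reply) => console.log("calendar agent replied:", reply));
```

Authentication, retries, streaming, and other extensions could layer on top of an exchange like this without touching the basic request and response.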
Business leaders and architects should resist jumping on every protocol bandwagon. Demand interoperability, evaluate whether a "standard" actually solves a real pain point, and when in doubt, build abstraction layers that prevent lock-in.
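One way to build such an abstraction layer is to code against a small transport interface and keep each protocol behind its own adapter. The sketch below is a hypothetical example of that pattern; the interface and class names are invented for illustration.

```typescript
// Code against a small transport interface; keep each vendor protocol behind an adapter.
// All names here are hypothetical.
interface AgentTransport {
  send(to: string, message: object): Promise<unknown>;
}

// Adapter for the plain HTTP+JSON approach sketched earlier.
class HttpJsonTransport implements AgentTransport {
  constructor(private baseUrl: string) {}

  async send(to: string, message: object): Promise<unknown> {
    const res = await fetch(`${this.baseUrl}/${encodeURIComponent(to)}`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(message),
    });
    if (!res.ok) {
      throw new Error(`Agent call failed: ${res.status}`);
    }
    return res.json();
  }
}

// Business logic depends only on the interface, never on a specific protocol.
async function scheduleMeeting(transport: AgentTransport): Promise<unknown> {
  return transport.send("calendar", {
    kind: "request",
    id: "msg-004",
    from: "agent://scheduler",
    to: "agent://calendar",
    action: "findSlot",
    params: { durationMinutes: 30 },
  });
}

// Usage: swap the transport without touching scheduleMeeting.
const transport = new HttpJsonTransport("https://agents.example.com");
scheduleMeeting(transport).then((reply) => console.log("reply:", reply));
```

If one vendor protocol eventually wins out, only the adapter changes; the business logic keeps calling the same interface.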
We urgently need open protocols for AI agent communication, but too many competing standards render them all essentially meaningless. The IT industry has gone through this cycle before. Unless we break free from it, agentic AI will just be another example of wasted time and effort. Let's not allow protocol vanity to get in the way of creating real business value.


