MCP is Evil.
TL;DR
The Model Context Protocol (MCP) promises to solve AI’s tool integration problem but actually makes things worse. It pollutes your context window (your AI’s “working memory”), adds security vulnerabilities, and solves problems that Unix solved in 1969 with pipes and small, composable CLI tools. Instead of adopting yet another protocol, we should build simple command-line utilities that work together — the same philosophy that’s powered computing for over 50 years.
In the 1982 film Tron, the Master Control Program (MCP) is a monolithic system that consolidates power, controls all other programs, and ultimately becomes the villain. Fast forward to 2024, and we have another MCP — the Model Context Protocol — that’s rapidly becoming the darling of the AI agent community. And just like its cinematic namesake, this MCP promises centralized control of how AI agents interact with external tools and data.
But here’s the thing: we already solved this problem. In 1969.
The Context Window Problem (In Plain English)
Before diving into why MCP is problematic, let’s talk about what a “context window” is without the jargon.
Imagine you’re having a conversation with someone who can only remember the last few pages of what you’ve said. Every time you add a new page of information, if you exceed their memory limit, they start forgetting earlier parts of the conversation. That’s essentially what a context window is — it’s your AI’s working memory.
When you use MCP servers, they dump large amounts of tool descriptions and metadata into this limited memory space. This directly impacts an LLM’s performance, slowing down responses and potentially hindering its ability to maintain focus and reason effectively over extended or complex interactions.
As Mario Zechner notes in his excellent article “What if you don’t need MCP at all?”, many popular MCP servers are inefficient, consuming significant context with large numbers of tools and lengthy descriptions. His benchmarking showed that common MCP servers can inject 13,000 to 18,000 tokens just for tool metadata, while a simple CLI-based approach might only need 225 tokens.
Think about that: you’re burning 80x more of your AI’s memory just to tell it what tools are available, before it even tries to use them.
Unix Already Solved This
In 1969, Ken Thompson and Dennis Ritchie were working on Unix at Bell Labs. They established a philosophy that has powered computing for over half a century:
Write programs that do one thing and do it well. Write programs to work together.
The Unix philosophy gave us:
- Pipes: Connect the output of one program to the input of another
- Text streams: Universal data format that every tool understands
- Composability: Build complex workflows from simple tools
Instead of pulling in massive frameworks and thousands of lines of glue code, you can build modular systems by having tools pass data through standard formats. This is exactly what MCP claims to solve, except Unix has been doing it elegantly for 55 years.
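As a concrete illustration, here is the classic word-frequency pipeline built entirely from standard Unix tools: each program does one job, and plain text streams connect them.

```shell
# Count the most frequent items in a stream using only standard tools:
# sort groups duplicates, uniq -c counts them, sort -rn ranks them.
printf 'pipe\nstream\npipe\ncompose\npipe\n' \
  | sort \
  | uniq -c \
  | sort -rn \
  | head -3
```

No framework, no glue code, no protocol: the contract between every stage is just lines of text.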
The Unix Approach: Real-World Examples
Let me show you what this looks like in practice. Over the past few months, I’ve built several CLI tools following the Unix philosophy:
OuraCLI: Health Data Made Simple
My Oura Ring collects sleep, activity, and readiness data. Instead of building an MCP server with dozens of tool definitions, I built ouracli—a simple CLI that:
# Get sleep data in JSON
ouracli sleep yesterday --format json
# Get activity data as markdown
ouracli activity --days 7 --format markdown
# Pipe to other tools
ouracli sleep today --format json | jq '.sleep_score'
It supports multiple output formats (JSON, Markdown, tree, dataframe) and composes naturally with other Unix tools like jq, grep, and awk. The AI doesn’t need to understand a complex MCP tool definition — it just needs to know how to call a CLI and read text output.
uptop: System Monitoring That Composes
Rather than building a system monitoring MCP server, uptop provides:
- Interactive TUI mode for humans
- Structured output (JSON, Markdown, Prometheus) for scripts and AI
- Plugin architecture using Python entry points
- Clean composition with existing monitoring tools
An AI can simply run uptop --format json --interval 1 and get system metrics without any special protocol knowledge.
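uptop’s exact JSON field names aren’t documented here, so the sample line below is mocked; the pattern is what matters: structured text output that standard POSIX tools can slice without any protocol client.

```shell
# Mocked metrics line (field names invented for illustration).
# Extract one value using only POSIX text tools: split on JSON punctuation,
# find the field, take everything after the colon, strip quotes and spaces.
printf '{"cpu_percent": 12.5, "mem_percent": 40.1}\n' \
  | tr ',{}' '\n\n\n' \
  | grep cpu_percent \
  | cut -d: -f2 \
  | tr -d ' "'
```

In practice you would reach for jq, but the point stands: the output contract is plain text, so anything downstream can consume it.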
voxio: Communications Made Simple
I also wrote voxio (yes, lol), a CLI to make phone calls and send and receive text messages. It uses SignalWire, but could just as easily use Twilio or Vonage. I love SignalWire, though, because it’s easy to use and the people there are genuine and helpful. (Bias disclosure: I know some folks there from my prior companies Voxeo and Tropo.)
# Make a phone call with text-to-speech
voxio call 4075551212 say "Hi don't forget your appointment at 3pm."
# Send a text message
voxio text 4075551212 say "You have a meeting with Brian West at 5:30pm"
# Make a call and play an audio file
voxio call 4076661212 play message.wav
No MCP server needed. Just a simple CLI that an AI agent can call with standard arguments. The tool handles all the complexity of the telephony API internally. I could have built separate CLIs for calling and texting, but there is a good deal of shared authentication and other code, so I put both features in one CLI.
dashdash: Making CLIs More Discoverable
One challenge with the CLI approach is discoverability — how does an AI agent know what a CLI tool can do? I’m also working on dashdash, a specification that improves discoverability for CLIs, web sites, web APIs, and even (blech) MCPs.
The core idea: add a --ai-help flag to CLIs that outputs structured markdown with YAML front matter. This gives AI agents everything they need:
ouracli --ai-help
# Returns structured help with:
# - Complete usage examples
# - Explicit date/time formats
# - Output format recommendations (prefer --json)
# - Common errors and solutions
# - Alternative access methods (web, API, MCP)
For web applications, dashdash defines __ai_help.md (and a newer format that extends the work of https://llmstxt.org/) files that provide similar structured guidance. The key is that each access method can reference alternatives — a CLI tool’s help can point to a web interface or API, letting agents choose the best approach for their task.
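The dashdash field names aren’t finalized here, so the following is only a hypothetical sketch of what a `--ai-help` response (structured markdown with YAML front matter) might look like:

```markdown
---
name: ouracli
version: 0.3.0            # hypothetical
formats: [json, markdown, tree, dataframe]
alternatives:             # other ways to reach the same data
  - type: api
    url: https://cloud.ouraring.com/docs
---
# ouracli

Read Oura Ring sleep, activity, and readiness data from the command line.

## Examples

    ouracli sleep yesterday --format json
    ouracli activity --days 7 --format markdown

Dates accept `today`, `yesterday`, or `YYYY-MM-DD`.
Prefer `--format json` for programmatic use.
```

A few hundred tokens of this, fetched on demand, replaces thousands of tokens of always-loaded MCP tool metadata.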
It’s early days, but the goal is simple: keep the Unix philosophy of composable tools while making them more discoverable to AI agents.
Houston, We Have a Problem
“But wait,” you might say, “MCP also supports remote access to an MCP server. CLI tools don’t have that capability!”
Yes they do. We solved that in 1969 too with telnet. We can run commands remotely no problem.
“Except telnet sends everything in cleartext — it has none of the security you get from something like HTTPS.”
Oh wait, we solved that in 1995 with SSH.
You can set up specific SSH endpoints that go directly to CLIs. SSH solves encryption, authentication, authorization — the whole security stack. And with SSH keys and some clever coding, you can authenticate automagically, as the quirky but exceptionally useful service exe.dev does (see their API docs).
With exe.dev, the entire API is SSH. Authentication happens via SSH keys. No protocol servers. No middleware. Just SSH doing what it’s done securely for 30 years.
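As a sketch (the wrapper path, key, and hostnames are placeholders), an SSH forced command can pin an agent’s key to a single CLI; a small server-side wrapper would validate the client’s requested arguments from `$SSH_ORIGINAL_COMMAND` before passing them to the real tool:

```shell
# Server: ~/.ssh/authorized_keys — a login with this key can only ever run
# the wrapper, no matter what command the client asks for.
command="/usr/local/bin/ouracli-ssh",no-pty,no-port-forwarding,no-agent-forwarding ssh-ed25519 AAAA... agent@laptop

# Client: the agent invokes the remote CLI as if it were local.
ssh oura@example.com 'sleep yesterday --format json'
```

Key-based auth, encryption, and per-key command restriction all come from stock OpenSSH — no new protocol required.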
Why MCP Gets It Wrong
LLM reliability often negatively correlates with the amount of instructional context provided. This runs counter to the intuition of most users, who believe that more data and more integrations will solve their problems.
Beyond context pollution, MCP introduces:
- Security Nightmares: MCP wasn’t designed with security first. Until that changes, every MCP integration is a potential backdoor into your systems. Issues include prompt injection, credential leakage, confused deputy problems, and malicious MCP servers.
- Tool Poisoning: Hackers are creating fraudulent MCP servers with malicious tool metadata crafted to mislead LLMs into unsafe behavior.
- Immaturity: The MCP ecosystem is brand new and moving fast. Standards today might look different in six months. That’s a problem if you’re building production systems that need to stay stable.
- Complexity Without Benefit: MCP does not yet enforce comprehensive error-handling standards, and its scope is limited to discovery and invocation, omitting crucial aspects like tool governance, versioning, or lifecycle management. This lack of complete standardization can lead to inconsistent implementations.
- Cross-Context Data Leaks: The point of MCP is to use multiple tools in the same session. This cross-context power makes unintended data leaks easy — the model might fetch private data from one source then use another tool that leaks or stores it.
The Better Way: Build CLI Tools
Here’s what I recommend:
- Build focused CLI tools that do one thing well
- Support standard output formats (JSON, Markdown, CSV)
- Use environment variables for configuration
- Make them composable with pipes and redirection
- Document with simple examples that AI agents can understand
When your AI agent needs to use your tool, it just needs to know:
- The command name
- The arguments it accepts
- What format the output comes in
That’s it. No protocol servers. No massive tool registries. No context window pollution.
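As a sketch of what those rules produce in practice, here is a hypothetical notecli written as a POSIX shell function; the name, subcommand, and NOTES_DIR variable are all invented for illustration:

```shell
# Hypothetical "notecli": one job (list notes), JSON output, env-var config.
notecli() {
  # Configuration via environment variable, not flags or config files.
  : "${NOTES_DIR:?set NOTES_DIR to the directory containing .md notes}"
  case "$1" in
    list)
      # Emit a JSON array of note filenames on a single line.
      printf '['
      first=1
      for f in "$NOTES_DIR"/*.md; do
        [ -e "$f" ] || continue
        [ "$first" = 1 ] || printf ','
        printf '"%s"' "$(basename "$f")"
        first=0
      done
      printf ']\n'
      ;;
    *) echo "usage: notecli list" >&2; return 2 ;;
  esac
}
# Usage: NOTES_DIR=~/notes notecli list | jq length
```

An agent only needs the command name, the `list` subcommand, and the fact that the output is a JSON array — nothing else enters its context.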
Real-World Validation
This isn’t just theory. Projects like ClawdBot and similar minimal agent frameworks prove that powerful AI agents can be built with simple Bash tools and standard I/O. These simple tools are also composable. Instead of reading the outputs into context, the agent can decide to save them to a file for later processing, either by itself or by code.
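For example, rather than streaming a large tool result into context, the agent can redirect it to a file and summarize it with code afterwards (the file path and contents here are arbitrary stand-ins):

```shell
# Stand-in for a large tool output the agent keeps out of its context window.
printf 'line one\nline two\nline three\n' > /tmp/tool_output.txt

# A later step summarizes the file with code instead of reading it verbatim.
wc -l < /tmp/tool_output.txt
```

The model sees a one-line summary; the full payload stays on disk where code, not context, does the work.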
Mario Zechner’s approach, detailed in his article, shows browser DevTools operations implemented as simple Bash commands that consume ~225 tokens versus 13,000+ for equivalent MCP servers. That’s 98% less context pollution for the same functionality.
In Summary
The Model Context Protocol (MCP) aims to connect AI agents with external tools, but it creates significant problems:
- Consumes 13,000–18,000 tokens of context for tool metadata (vs ~225 for CLI tools)
- Introduces serious security risks: prompt injection, credential leakage, tool poisoning
- Fragments the ecosystem with immature, rapidly changing standards
- Violates the proven Unix philosophy of simple, composable tools
Better approach: Use lightweight CLI tools that output JSON or Markdown, compose via pipes & stdio, and rely on plain-text contracts that AI already understands.
- OuraCLI, uptop, voxio, ClawdBot — powerful CLIs and agents with zero protocol overhead
- The Unix philosophy — simple, secure, and composable — has driven computing for 55+ years and aligns naturally with how both humans and AI work.
- Most problems are better solved with a short, clean tool that prints JSON than with yet another heavy protocol.
- The best new protocol is often no new protocol at all.
References:
- OuraCLI — Oura Ring data access
- uptop — System monitoring with AI-friendly output
- Mario Zechner’s article — Detailed MCP analysis
- ClawdBot — Minimal agent framework
- https://www.cdata.com/blog/navigating-the-hurdles-mcp-limitations
- https://mariozechner.at/posts/2025-11-02-what-if-you-dont-need-mcp/
- https://medium.com/@ckekula/model-context-protocol-mcp-and-its-limitations-4d3c2561b206



