AI coding assistants have become indispensable. Claude Code, Cursor, GitHub Copilot, Gemini CLI—these tools can generate entire features, refactor codebases, and debug complex issues in seconds. But there’s a problem nobody talks about.
They don’t follow your rules.
The Instruction File Illusion
Every AI coding tool has some form of instruction file:
- `CLAUDE.md` for Claude Code
- `.cursor/rules/*.mdc` for Cursor
- `.github/copilot-instructions.md` for GitHub Copilot
- `GEMINI.md` for Gemini CLI
You carefully craft these files with your team’s standards. “Use Minitest, not RSpec.” “Never use Devise.” “Always add database constraints.” Then you watch your AI assistant cheerfully ignore them and generate exactly what you said not to.
Why? Because these files are context, not constraints. The LLM receives them as user-provided guidance, weighted against its training data and system prompts. When your instructions conflict with common patterns in its training data, the training wins.
The Gap Between Context and Constraints
Here’s the uncomfortable truth: instruction files are advisory. They’re suggestions. The model will consider them, but it’s under no obligation to follow them.
This creates a frustrating workflow:
- AI generates code
- You review and find it violates your standards
- You ask for corrections
- AI apologizes and fixes some issues
- You find more violations
- Repeat until exhausted
Even worse, the AI might acknowledge your rules and then violate them in the same response. It’s not being malicious—it’s just that “write idiomatic code” (from training) often beats “follow this specific constraint” (from your context).
What Actually Works
After months of experimentation, we’ve identified what does and doesn’t work for enforcing standards with AI coding tools:
What doesn’t work:
- Longer, more detailed instruction files
- Threatening the AI with consequences
- Asking it to “always” or “never” do something
- Repeated reminders in conversation
What does work:
- Git hooks that block commits
- CI pipelines that fail on violations
- Branch protection rules
- Agent hooks (in tools that support them)
The pattern is clear: deterministic enforcement beats probabilistic compliance. If bad code can’t merge, it doesn’t matter whether the AI followed instructions.
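To make the first of these concrete, here is a minimal sketch of a pre-commit hook in Python. The specific checks (string matches for RSpec and Devise) are illustrative stand-ins for whatever rules your team actually enforces, not a real linter:

```python
#!/usr/bin/env python3
"""Pre-commit hook that blocks commits violating team standards.

A minimal sketch: the checks below are illustrative placeholders.
Install by saving as .git/hooks/pre-commit and making it executable.
"""
import subprocess
import sys

# List files staged for this commit (added, copied, or modified).
staged = subprocess.run(
    ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
    capture_output=True,
    text=True,
    check=True,
).stdout.splitlines()

violations = []
for path in staged:
    if not path.endswith(".rb"):
        continue
    with open(path, encoding="utf-8", errors="ignore") as f:
        text = f.read()
    if "RSpec.describe" in text:
        violations.append(f"{path}: uses RSpec (team standard is Minitest)")
    if "Devise" in text:
        violations.append(f"{path}: references Devise (disallowed)")

if violations:
    print("Commit blocked by team standards:")
    for v in violations:
        print(f"  - {v}")
    sys.exit(1)  # a nonzero exit status aborts the commit
```

The same checks can run again in CI so nothing slips past a `--no-verify`. The point is that the gate is deterministic; it doesn't depend on whether the AI remembered the rule.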
Enter Convext
Convext takes a different approach. Instead of hoping LLMs follow your rules, we make your rules available at every stage of development—and provide the infrastructure to actually enforce them.
Central Rule Management
Define your engineering standards once in Convext:
- Rules with severity levels (critical, high, medium, low)
- Language standards with specific tooling recommendations
- Technology hierarchies (languages, frameworks, libraries)
- Rule sets that can be assigned to projects
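As a sketch, a rule defined this way might look something like the following (the field names are illustrative, not Convext's exact schema):

```json
{
  "rule": "Always add database constraints alongside model validations",
  "severity": "high",
  "technologies": ["ruby", "rails", "activerecord"],
  "rule_set": "backend-defaults"
}
```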
MCP Integration
Convext implements the Model Context Protocol (MCP), allowing AI assistants to fetch your organization’s standards in real-time. When you connect your AI tool to Convext via MCP, it receives your rules as structured data—not just a blob of text, but categorized, prioritized, and consistently formatted.
```json
{
  "mcpServers": {
    "convext": {
      "url": "https://convext.app/mcp",
      "transport": "http"
    }
  }
}
```
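What comes back over that connection is structured data rather than prose. A hypothetical response, purely to illustrate the shape (not Convext's actual payload format):

```json
{
  "rules": [
    {
      "id": "no-devise",
      "severity": "critical",
      "summary": "Never use Devise; use the approved auth library",
      "applies_to": ["ruby", "rails"]
    },
    {
      "id": "minitest-only",
      "severity": "high",
      "summary": "Use Minitest, not RSpec",
      "applies_to": ["ruby"]
    }
  ]
}
```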
Multi-Format Export
For tools without MCP support, Convext generates instruction files in every format:
- `CLAUDE.md`
- `.cursor/rules/*.mdc`
- `.github/copilot-instructions.md`
- `.windsurfrules`
- And more
One source of truth, every format your team needs.
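So a single rule like "Use Minitest, not RSpec" might render into the generated `CLAUDE.md` as something like this (illustrative output, not a verbatim export):

```markdown
## Testing

- Use Minitest, not RSpec. (severity: critical)
- Always add database constraints alongside model validations. (severity: high)
```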
Beyond Instructions
But the real power isn’t in better instruction files—it’s in everything else:
Planning Documents: Create requirements, designs, and architecture decision records (ADRs) directly in Convext. AI assistants can access these via MCP, ensuring they understand not just your rules but your architectural decisions.
Task Tracking: Break down plans into tasks. AI assistants can update task status and link completed work to specific commits or PRs.
Telemetry: Track rule compliance over time. See which rules are most frequently violated. Identify patterns and improve your standards.
Getting Started
- Sign up at convext.app
- Connect GitHub to sync your repositories
- Configure MCP in your AI coding tool
- Define rules or import from the marketplace
The MCP integration handles authentication automatically via OAuth. When your AI assistant needs your organization’s standards, it asks Convext, authenticates you via the browser, and receives structured data it can actually use.
The Path Forward
AI coding assistants aren’t going away. They’re getting faster, smarter, and more capable every month. The question isn’t whether to use them—it’s how to use them without sacrificing code quality.
Convext is our answer: give AI assistants the context they need, provide enforcement mechanisms that actually work, and track compliance over time. It’s not about fighting the AI. It’s about giving it the information it needs to help you write better code.
We’re just getting started. MCP is evolving, AI tools are adding new capabilities, and we’re building the infrastructure to keep your engineering standards at the center of it all.
Ready to try it? Get started with Convext →
Have questions? Feedback? Find us on GitHub or reach out at [email protected].