
Automatic Compliance Tracking: Know If Your AI Followed the Rules

Convext Staff

Here’s a workflow problem we kept running into: after completing a task with an AI coding assistant, you’re supposed to verify it followed the rules. Did it write tests first? Did it link the commit to the task? Did it run the linter?

Tracking this manually is tedious. The AI has to remember which files changed, call a reporting tool, include commit SHAs, and mark whether tests passed. It’s error-prone and relies entirely on the AI’s memory—which isn’t great when you’re 47 messages deep in a complex refactor.

So we built something better.

The Old Way: Manual Everything

Before this update, completing a Convext task looked like this:

# AI had to manually track and report:
report_task_complete_tool(
  task_description: "Add user validation",
  files_changed: ["app/models/user.rb", "test/models/user_test.rb"],
  commit_sha: "abc123",
  tests_run: true,
  tests_passed: true
)

Problems with this approach:

  1. Memory burden — The AI has to remember every file it touched
  2. Easy to skip — Nothing enforces calling the reporting tool
  3. Self-reported — The AI claims it ran tests, but did it really?
  4. No verification — No way to confirm the commit includes what the AI claims

The New Way: Git Is the Source of Truth

Now, compliance tracking is automatic. When you mark a task complete with a commit SHA, Convext fetches the actual commit data from GitHub and calculates compliance from what actually happened—not what the AI says happened.

# All you need now:
manage_task_tool(
  action: "update",
  task_id: 501,
  status: "completed",
  commit_sha: "abc123"
)

# Convext automatically:
# 1. Fetches commit from GitHub API
# 2. Extracts files changed
# 3. Checks if tests were included
# 4. Validates commit message has task ID
# 5. Calculates compliance score
# 6. Returns everything in the response
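The commit lookup is ordinary GitHub "get a commit" API data. Here's a minimal Ruby sketch of step 2, extracting the changed files from a response payload. The helper name is hypothetical and the payload below is a hand-written stand-in that mimics the documented response shape (a top-level `files` array with `filename` keys), not real API output:

```ruby
require "json"

# Hypothetical helper: pull the changed file paths out of a GitHub
# "get a commit" response (GET /repos/{owner}/{repo}/commits/{sha}).
def files_changed(commit_payload)
  commit = JSON.parse(commit_payload)
  (commit["files"] || []).map { |f| f["filename"] }
end

# Stand-in payload mimicking the GitHub commit API shape
payload = <<~JSON
  {
    "sha": "abc123",
    "commit": { "message": "feat: Add user validation [convext #501]" },
    "files": [
      { "filename": "app/models/user.rb" },
      { "filename": "test/models/user_test.rb" }
    ]
  }
JSON

puts files_changed(payload).inspect
# => ["app/models/user.rb", "test/models/user_test.rb"]
```

Once the file list and commit message are in hand, every metric below is a pure function of that data.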

The response now includes a full compliance breakdown:

{
  "task": {
    "id": 501,
    "status": "completed",
    "commit_sha": "abc123",
    "commit_files_changed": ["app/models/user.rb", "test/models/user_test.rb"],
    "commit_message": "feat: Add user validation [convext #501]"
  },
  "compliance": {
    "score": 100,
    "metrics": {
      "task_started_before_commit": { "passed": true, "weight": 25, "points": 25 },
      "tests_included": { "passed": true, "weight": 30, "points": 30 },
      "commit_has_task_id": { "passed": true, "weight": 20, "points": 20 },
      "lint_passed": { "passed": true, "weight": 25, "points": 25 }
    }
  }
}

What Gets Checked

The compliance score is calculated from four metrics, each with a weight:

Metric                     | Weight | What It Checks
Task started before commit | 25%    | Did you mark the task in-progress before committing?
Tests included             | 30%    | Does the commit include files in test/ or spec/?
Commit has task ID         | 20%    | Does the commit message include [convext #ID]?
Lint passed                | 25%    | Was the linter run before committing?

A perfect score is 100. Skip writing tests? That’s -30 points. Forget to start the task first? -25 points.
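In code, the score is just a weighted sum over pass/fail results. Here's an illustrative Ruby sketch using the weights from the table above; the method name and data shapes are ours, not Convext's actual implementation:

```ruby
# Illustrative weights, matching the table above (they sum to 100)
WEIGHTS = {
  task_started_before_commit: 25,
  tests_included:             30,
  commit_has_task_id:         20,
  lint_passed:                25,
}.freeze

# Sum the weights of every metric that passed; a perfect run scores 100.
def compliance_score(results)
  WEIGHTS.sum { |metric, weight| results[metric] ? weight : 0 }
end

all_passed = { task_started_before_commit: true, tests_included: true,
               commit_has_task_id: true, lint_passed: true }
skipped_tests = all_passed.merge(tests_included: false)

puts compliance_score(all_passed)    # => 100
puts compliance_score(skipped_tests) # => 70
```

Skipping tests drops the score to exactly 70, which is the partial-compliance response shown later in this post.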

Why This Matters for Teams

1. Accountability Without Micromanagement

You can now see, at a glance, whether your AI coding sessions are following your engineering standards. No need to review every commit manually—the compliance dashboard shows aggregate scores across projects.

2. Objective Metrics

The score isn’t based on AI self-reporting. It’s derived from the actual git commit. The commit either includes test files or it doesn’t. The message either has the task ID or it doesn’t. No interpretation required.
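Both of those checks reduce to one-liners over the commit data. A sketch of what they might look like — the `[convext #ID]` pattern comes from the commit message format shown earlier; the helper names are hypothetical:

```ruby
# Does the commit touch any file under test/ or spec/?
def tests_included?(filenames)
  filenames.any? { |f| f.start_with?("test/", "spec/") }
end

# Does the commit message reference a task, e.g. "[convext #501]"?
def commit_has_task_id?(message)
  message.match?(/\[convext #\d+\]/)
end

puts tests_included?(["app/models/user.rb", "test/models/user_test.rb"]) # => true
puts tests_included?(["app/models/user.rb"])                             # => false
puts commit_has_task_id?("feat: Add user validation [convext #501]")     # => true
```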

3. Trend Tracking

Track compliance over time. Are your AI-assisted commits becoming more compliant or less? Is a particular project slipping? The dashboard makes this visible.

4. Faster Feedback

The AI gets immediate feedback on compliance when completing a task. If tests are missing, it knows right away—not after a failed code review.

What This Looks Like in Practice

Here’s a real workflow with the new system:

# 1. Start the task
manage_task_tool(action: "update", task_id: 501, workflow_state: "in_progress")

# 2. Write tests first (TDD)
# ... AI writes test/models/user_test.rb ...

# 3. Implement
# ... AI writes app/models/user.rb ...

# 4. Lint and test locally
bin/lint --check app/models/user.rb
rails test test/models/user_test.rb

# 5. Commit with task ID
git commit -m "feat: Add user validation [convext #501]"

# 6. Complete the task (compliance auto-calculated)
manage_task_tool(
  action: "update",
  task_id: 501,
  status: "completed",
  commit_sha: "$(git rev-parse HEAD)"
)
# Response includes compliance: { score: 100, ... }

If the AI skipped writing tests:

{
  "compliance": {
    "score": 70,
    "metrics": {
      "tests_included": { "passed": false, "weight": 30, "points": 0 }
    }
  }
}

Instant feedback. No ambiguity.

The Compliance Dashboard

Beyond individual task scores, we built a dashboard that aggregates compliance across your organization.

This gives engineering leads visibility into AI-assisted development quality without reviewing every PR.

What’s Next

This is the foundation for more advanced compliance features.

Try It Out

If you’re already using Convext MCP tools, this is live now. Your next completed task will automatically include compliance scoring.

New to Convext? The automatic compliance tracking is part of our core platform. Get started →


Stop trusting self-reported compliance. Let the commits speak for themselves.