
Schrödinger's AI Part 14.2: ReviewMyCode MCP Server: Core Implementation

First, let’s address the obvious question.

Why am I calling this 14.2 instead of just Part 15?

Well, teaching how to build a production-grade MCP server definitely isn’t going to fit into a single article, is it?

So instead of cramming everything into one massive post, we’re turning this into a 4-part mini-series.

Yes, I know how that sounds.

A series… inside another series?

Umm. Yeah.

Anyway, here’s how this is going to work:

Schrödinger's AI is your invitation to look inside. Right now, AI feels like a mystery: wired like a brain, yet running on pure math.

Each article is a new layer of the box. We start with the first spark of an idea and move all the way to the models reshaping everything we thought we knew.

Explore the entire series Schrodingers-AI

I’d suggest cloning the code from my repository: review-my-code-mcp

It’ll make it easier to follow along with the project as we build it. That said, it’s not strictly required since we’ll be building everything step by step throughout the series.


In Part 14.1, we covered the project structure and organization:

  • How Program.cs bootstraps the MCP server and registers services

  • The MCP request/response flow

  • How dependency injection wires everything together

Now we dive into the core execution engine: the services that actually do the work.

When a client calls review_csharp_code with C# code, two services take action:

  1. ReviewAnalyzer: Executes all rules against the code and collects findings

  2. ReviewScorer: Calculates a 0-10 quality score and per-category scores

This article explains both, with code, flow diagrams, and step-by-step examples so you see exactly how findings flow through the system.

The Pipeline Overview

[Diagram: Review code pipeline]

Let's implement each layer.

The Interfaces: Defining the Contracts

Before implementation, we define interfaces so services are testable and replaceable.

IReviewAnalyzer

Create Services/IReviewAnalyzer.cs:

using McpCodeReviewServer.Models;

namespace McpCodeReviewServer.Services;

public interface IReviewAnalyzer
{
    ReviewAnalysisResult Analyze(string code, int maxIssues);
}
  • code: The raw C# source (string)

  • maxIssues: The maximum number of issues the client wants back (we run every rule, but cap the response)

  • Returns: a ReviewAnalysisResult containing:

    • All findings (used for scoring)

    • Returned findings (capped by maxIssues)

    • Category coverage (how many rules checked/matched per category)

IReviewScorer

Create Services/IReviewScorer.cs:

using McpCodeReviewServer.Models;

namespace McpCodeReviewServer.Services;

public interface IReviewScorer
{
    int CalculateScore(IReadOnlyCollection<ReviewIssue> issues);
    IReadOnlyCollection<CategoryReviewScore> CalculateCategoryScores(
        IReadOnlyCollection<CategoryAnalysis> categoryAnalyses,
        IReadOnlyCollection<ReviewIssue> issues);
}
  • CalculateScore: Takes issues, returns 0-10 score

  • CalculateCategoryScores: Breaks scores down by category with rule coverage metrics

The Data Container: RuleContext

Before analyzing code, we wrap it in a data structure that rules can use.

Create Rules/Abstractions/RuleContext.cs:

namespace McpCodeReviewServer.Rules.Abstractions;

public sealed class RuleContext
{
    public RuleContext(string code, IReadOnlyList<string> lines)
    {
        Code = code;
        Lines = lines;
    }

    public string Code { get; }
    public IReadOnlyList<string> Lines { get; }
}
  • Rules need both the full code (for regex, pattern matching) and line-by-line access (for line numbers)

  • Normalization (performed by the analyzer before the context is built) ensures consistent line breaks across Windows (\r\n) and Unix (\n) systems

  • Immutable design means no rule can accidentally corrupt data
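To see why that normalization matters, here's a minimal standalone sketch that mirrors the analyzer's line-splitting step (the helper name `SplitLines` is my own; the repository's version is `NormalizeLines`, shown in the next section):

```csharp
using System;

// Illustration only: mirrors the analyzer's line-ending normalization.
static string[] SplitLines(string code) =>
    code.Replace("\r\n", "\n")   // Windows → Unix
        .Replace('\r', '\n')     // Old Mac → Unix
        .Split('\n');

var windows = "line1\r\nline2";
var unix = "line1\nline2";

// Both inputs yield the same two lines, so rules can report
// stable line numbers regardless of the client's OS.
Console.WriteLine(SplitLines(windows).Length); // 2
Console.WriteLine(SplitLines(unix).Length);    // 2
```

Without this step, a rule that reports "issue on line 5" could point at different lines depending on where the code was written.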

Implementation 1: ReviewAnalyzer

Now let's implement the analyzer that runs all rules.

Create Services/ReviewAnalyzer.cs:

using McpCodeReviewServer.Models;
using McpCodeReviewServer.Rules.Abstractions;

namespace McpCodeReviewServer.Services;

public sealed class ReviewAnalyzer : IReviewAnalyzer
{
    // All rules collected at startup (from all registered providers)
    private readonly IReadOnlyCollection<RegisteredRule> _rules;

    public ReviewAnalyzer(IEnumerable<IRuleGroupProvider> ruleGroups)
    {
        // Build the master rule list from all providers
        _rules = ruleGroups
            .SelectMany(group => 
                group.BuildRules()  // Each provider returns its rules
                    .Select(rule => new RegisteredRule(group.Category, rule))  // Pair each rule with its category
            )
            .ToArray();
    }

    public ReviewAnalysisResult Analyze(string code, int maxIssues)
    {
        // Ensure maxIssues is at least 1
        var normalizedMax = Math.Max(1, maxIssues);

        // Step 1: Normalize code into lines
        // This handles Windows (\r\n) and Unix (\n) line endings consistently
        var lines = NormalizeLines(code);

        // Step 2: Create context (immutable container for rules to use)
        var context = new RuleContext(code, lines);

        // Step 3: Prepare to collect findings
        var allIssues = new List<ReviewIssue>();
        var categoryCoverage = new Dictionary<string, CategoryCounter>(StringComparer.OrdinalIgnoreCase);

        // Step 4: Run each rule
        foreach (var rule in _rules)
        {
            // Get or create category entry
            var category = string.IsNullOrWhiteSpace(rule.Category) ? "uncategorized" : rule.Category;
            if (!categoryCoverage.TryGetValue(category, out var counter))
            {
                counter = new CategoryCounter();
                categoryCoverage[category] = counter;
            }

            // Mark this rule as checked
            counter.RulesChecked++;

            // Evaluate the rule against the context
            // Rule returns ReviewIssue or null
            var issue = rule.Evaluate(context);

            // If rule matched, record the finding
            if (issue is not null)
            {
                counter.RulesMatched++;
                allIssues.Add(issue);
            }
        }

        // Step 5: Cap returned issues to maxIssues
        // We check all rules, but only return up to maxIssues
        var returnedIssues = allIssues.Take(normalizedMax).ToArray();

        // Step 6: Build category analysis
        var categoryAnalyses = categoryCoverage
            .Select(entry => new CategoryAnalysis(
                entry.Key,                      // Category name
                entry.Value.RulesChecked,       // How many rules in this category
                entry.Value.RulesMatched))      // How many matched
            .OrderBy(entry => entry.Category, StringComparer.OrdinalIgnoreCase)
            .ToArray();

        // Return complete result
        return new ReviewAnalysisResult(allIssues, returnedIssues, categoryAnalyses);
    }

    private static IReadOnlyList<string> NormalizeLines(string code)
    {
        return code
            .Replace("\r\n", "\n", StringComparison.Ordinal)  // Windows → Unix
            .Replace('\r', '\n')                              // Old Mac → Unix
            .Split('\n');
    }

    private sealed class RegisteredRule
    {
        public RegisteredRule(string category, ICodeRule rule)
        {
            Category = category;
            Rule = rule;
        }

        public string Category { get; }
        public ICodeRule Rule { get; }
        public ReviewIssue? Evaluate(RuleContext context) => Rule.Evaluate(context);
    }

    private sealed class CategoryCounter
    {
        public int RulesChecked { get; set; }
        public int RulesMatched { get; set; }
    }
}

Walk-Through

Let's trace through an example. Suppose we call:

var analyzer = /* ReviewAnalyzer instance */;
var result = analyzer.Analyze(
    code: "public async void BadAsync() { }",
    maxIssues: 50
);

This is how it looks in Cursor:

[Screenshot: Review my code]
  1. Normalize lines:

    • Input code has 1 line

    • After normalization: lines = ["public async void BadAsync() { }"]

  2. Create RuleContext:

    context = new RuleContext(
        Code: "public async void BadAsync() { }",
        Lines: ["public async void BadAsync() { }"]
    )
  3. Run each rule (simplified; imagine we have ~127):

    • Rule 1 (AsyncRulesProvider): Detect async void

      • Evaluates context

      • Finds async void in the code

      • Returns ReviewIssue { Severity: "critical", Category: "async correctness", ... }

      • Counter for "async correctness": RulesChecked = 1, RulesMatched = 1

    • Rule 2: Detect .Result

      • Evaluates context

      • No .Result found

      • Returns null

      • Counter for "async correctness": RulesChecked = 2, RulesMatched = 1

    • ... (more rules)

  4. Cap to maxIssues:

    • allIssues has ~8 findings

    • maxIssues = 50

    • returnedIssues = take all 8

  5. Build category analysis:

    categoryAnalyses = [
        new CategoryAnalysis(
            Category: "async correctness",
            RulesChecked: 27,
            RulesMatched: 2
        ),
        new CategoryAnalysis(
            Category: "security",
            RulesChecked: 18,
            RulesMatched: 0
        ),
        // ... other categories
    ]
  6. Return:

    return new ReviewAnalysisResult(
        AllIssues: [8 issues],
        Issues: [8 issues],      // Same as AllIssues (under maxIssues cap)
        CategoryAnalyses: [8 categories with coverage]
    )

The key insight: We run all 127 rules, collect all findings, but only return what was requested.
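That "analyze everything, return a capped slice" pattern can be sketched in isolation (plain strings stand in for ReviewIssue here):

```csharp
using System;
using System.Linq;

// Stand-in for the full findings list the analyzer collects.
var allIssues = Enumerable.Range(1, 8).Select(i => $"issue-{i}").ToList();

int maxIssues = 50;
int normalizedMax = Math.Max(1, maxIssues); // guard against 0 or negative input

// Scoring later uses the full list; the response only carries the cap.
var returned = allIssues.Take(normalizedMax).ToArray();

Console.WriteLine(allIssues.Count);  // 8 — used for the overall score
Console.WriteLine(returned.Length);  // 8 — all fit under the cap of 50
```

If `maxIssues` were 3, `returned` would hold only the first 3 findings while the score still reflected all 8.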

Cursor might give you this:

[Screenshot: Cursor total issues]

Implementation 2: ReviewScorer

Now let's score the findings.

Create Services/ReviewScorer.cs:

using McpCodeReviewServer.Models;

namespace McpCodeReviewServer.Services;

public sealed class ReviewScorer : IReviewScorer
{
    public int CalculateScore(IReadOnlyCollection<ReviewIssue> issues)
    {
        var score = 10;

        foreach (var issue in issues)
        {
            // Deduct points based on severity
            score -= issue.Severity switch
            {
                "critical" => 3,    // Serious problems
                "warning" => 2,     // Important but not critical
                _ => 1              // Everything else (suggestion, info, etc.)
            };
        }

        // Ensure score stays in 0-10 range
        return Math.Clamp(score, 0, 10);
    }


    public IReadOnlyCollection<CategoryReviewScore> CalculateCategoryScores(
        IReadOnlyCollection<CategoryAnalysis> categoryAnalyses,
        IReadOnlyCollection<ReviewIssue> issues)
    {
        // Step 1: Group issues by category for quick lookup
        var issueLookup = issues
            .GroupBy(issue => issue.Category, StringComparer.OrdinalIgnoreCase)
            .ToDictionary(group => group.Key, group => group.ToArray(), StringComparer.OrdinalIgnoreCase);

        // Step 2: Build category scores
        var categoryScores = new List<CategoryReviewScore>(categoryAnalyses.Count);

        foreach (var analysis in categoryAnalyses)
        {
            // Get issues for this category (or empty if none)
            var categoryIssues = issueLookup.TryGetValue(analysis.Category, out var found)
                ? found
                : Array.Empty<ReviewIssue>();

            // Calculate score for just this category's issues
            var score = CalculateScore(categoryIssues);

            // Build category score with metrics
            categoryScores.Add(new CategoryReviewScore(
                analysis.Category,
                score,
                analysis.RulesChecked,
                analysis.RulesMatched,
                categoryIssues.Length
            ));
        }

        return categoryScores;
    }
}

Let's use an example.

Scenario: Code has 3 issues

  • Issue 1: severity = "critical" (async void method)

  • Issue 2: severity = "warning" (using .Result)

  • Issue 3: severity = "suggestion" (naming convention)

Calculation:

score = 10
score -= 3 (for critical)    → score = 7
score -= 2 (for warning)     → score = 5
score -= 1 (for suggestion)  → score = 4
result = Clamp(4, 0, 10)     → score = 4

What this means: Code with these issues gets a 4/10 score.

Another scenario: No issues

score = 10
no deductions
result = 10

Perfect code gets a 10/10.

Extreme scenario: Many critical issues

score = 10
-3 (critical 1) → 7
-3 (critical 2) → 4
-3 (critical 3) → 1
-3 (critical 4) → -2
result = Clamp(-2, 0, 10) → score = 0

Score is clamped to 0, so very bad code gets 0/10.
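All three scenarios can be verified with a standalone reimplementation of the penalty formula (this sketch takes plain severity strings instead of ReviewIssue objects, but the arithmetic matches CalculateScore above):

```csharp
using System;

// Same severity-penalty formula as ReviewScorer.CalculateScore.
static int CalculateScore(string[] severities)
{
    var score = 10;
    foreach (var severity in severities)
    {
        score -= severity switch
        {
            "critical" => 3,
            "warning"  => 2,
            _          => 1   // suggestion, info, etc.
        };
    }
    return Math.Clamp(score, 0, 10);
}

// Mixed severities: 10 - 3 - 2 - 1 = 4
Console.WriteLine(CalculateScore(new[] { "critical", "warning", "suggestion" })); // 4

// No issues: perfect score
Console.WriteLine(CalculateScore(Array.Empty<string>())); // 10

// Four criticals would reach -2, but the score clamps at 0
Console.WriteLine(CalculateScore(new[] { "critical", "critical", "critical", "critical" })); // 0
```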

Per-Category Scoring

For each category, we apply the same scoring formula but only to issues in that category.

Example: Suppose we have:

  • "async correctness" category: 2 issues (1 critical, 1 warning)

    • score = 10 - 3 - 2 = 5

  • "security" category: 0 issues

    • score = 10

  • "performance" category: 3 issues (all suggestions)

    • score = 10 - 1 - 1 - 1 = 7

Response includes:

"categoryScores": [
  {
    "category": "async correctness",
    "score": 5,
    "rulesChecked": 27,
    "rulesMatched": 2,
    "issueCount": 2
  },
  {
    "category": "security",
    "score": 10,
    "rulesChecked": 18,
    "rulesMatched": 0,
    "issueCount": 0
  },
  {
    "category": "performance",
    "score": 7,
    "rulesChecked": 22,
    "rulesMatched": 3,
    "issueCount": 3
  }
]

This gives clients detailed visibility: "Your async code is risky (5/10), but security and performance are reasonable."

This is how it looks in Cursor:

[Screenshot: Review my code summary]

This JSON format helps Cursor form its own summary:

[Screenshots: Cursor output and summary]

How These Fit Into CodeReviewTool

Now let's see how CodeReviewTool uses both analyzer and scorer.

Tools/CodeReviewTool.cs (partial code):

[McpServerTool(Name = "review_csharp_code")]
public string ReviewCSharpCode(
    [Description("Raw C# source code to review.")] string code,
    [Description("Maximum number of issues to return.")] int maxIssues = 50)
{
    try
    {
        var invocationId = Guid.NewGuid().ToString("N");

        // Guard: empty code
        if (string.IsNullOrWhiteSpace(code))
        {
            // Return early with empty result
            var emptyResult = new ReviewResult(...);
            return JsonSerializer.Serialize(emptyResult, JsonOptions);
        }

        // Step 1: Analyze code (run rules, collect findings)
        var analysis = _reviewAnalyzer.Analyze(code, maxIssues);

        // Step 2: Calculate overall score
        var score = _reviewScorer.CalculateScore(analysis.AllIssues);

        // Step 3: Calculate per-category scores
        var categoryScores = _reviewScorer.CalculateCategoryScores(
            analysis.CategoryAnalyses,
            analysis.AllIssues
        );

        // Step 4: Extract metadata
        var checkedCategories = analysis.CategoryAnalyses
            .Select(category => category.Category)
            .OrderBy(category => category, StringComparer.OrdinalIgnoreCase)
            .ToArray();

        var totalRulesChecked = analysis.CategoryAnalyses.Sum(category => category.RulesChecked);
        var totalRulesMatched = analysis.CategoryAnalyses.Sum(category => category.RulesMatched);

        // Step 5: Build suggested changes (normalized fix list)
        var suggestedChanges = analysis.Issues
            .Select(issue => new SuggestedChange(
                issue.Severity,
                issue.Category,
                issue.Line,
                issue.Description,
                issue.Fix
            ))
            .ToArray();

        // Step 6: Build final response
        var result = new ReviewResult(
            summary: BuildSummary(score, analysis.Issues.Count),
            score: score,
            issues: analysis.Issues,
            invocationId: invocationId,
            totalRulesChecked: totalRulesChecked,
            totalRulesMatched: totalRulesMatched,
            checkedCategories: checkedCategories,
            categoryScores: categoryScores,
            suggestedChanges: suggestedChanges
        );

        // Step 7: Serialize to JSON and return
        return JsonSerializer.Serialize(result, JsonOptions);
    }
    catch (Exception ex)
    {
        _logger.LogError(ex, "Error during code review");
        // Return error response
        ...
    }
}

private static string BuildSummary(int score, int issueCount)
{
    return score switch
    {
        >= 9 => "Excellent code quality with minimal issues.",
        >= 7 => "Good code quality. Minor issues found.",
        >= 5 => "Moderate code quality. Several issues need attention.",
        >= 3 => "Significant quality issues. Review and refactor recommended.",
        _ => "Critical quality issues. Immediate action required."
    };
}
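BuildSummary's thresholds are worth exercising at the boundaries. Here's a runnable sketch using the same switch (simplified to drop the unused issueCount parameter):

```csharp
using System;

// Same score-to-summary mapping as BuildSummary above.
static string BuildSummary(int score) => score switch
{
    >= 9 => "Excellent code quality with minimal issues.",
    >= 7 => "Good code quality. Minor issues found.",
    >= 5 => "Moderate code quality. Several issues need attention.",
    >= 3 => "Significant quality issues. Review and refactor recommended.",
    _    => "Critical quality issues. Immediate action required."
};

Console.WriteLine(BuildSummary(10)); // Excellent code quality with minimal issues.
Console.WriteLine(BuildSummary(4));  // the walk-through's 4/10 lands in the "Significant" band
Console.WriteLine(BuildSummary(0));  // Critical quality issues. Immediate action required.
```

Note that relational patterns are evaluated top to bottom, so a score of 9 or 10 matches the first arm and never reaches the later ones.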

The Complete Picture

[Diagram: Code Review Flow]

Summary

In this article, you learned:

  1. The analysis pipeline: How code flows from input to rules to findings

  2. ReviewAnalyzer implementation: How it normalizes code, runs rules, and tracks coverage

  3. ReviewScorer implementation: How it calculates 0-10 scores with a specific penalty formula

  4. RuleContext: Immutable data structure passed to all rules

  5. Integration: How CodeReviewTool orchestrates both services

In Part 14.3, we'll build the rule system itself:

  • ICodeRule interface (what all rules implement)

  • Concrete rule types: ContainsTokenRule, RegexRule, and DelegateRule

  • How to build a rule provider

  • All eight category providers

The cat is neither alive nor dead, and honestly, that's the most exciting place to be. There are a lot more layers to uncover.

Explore the entire series Schrodingers-AI

I’d suggest cloning the code from my repository: review-my-code-mcp

It’ll make it easier to follow along with the project as we build it. That said, it’s not strictly required since we’ll be building everything step by step throughout the series.

Previous: Part 14.1: ReviewMyCode MCP Server: Foundation and Architecture

Next: Part 14.3: ReviewMyCode MCP Server: Rules & Extensibility