
Integrating Machine Learning Models into ASP.NET Core Applications

Machine learning (ML) is no longer just a research topic—it is widely used in real-world applications like recommendation systems, fraud detection, sentiment analysis, predictive analytics, and more. ASP.NET Core, being a modern and high-performance web framework, provides excellent ways to integrate ML models into production-ready applications.

In this article, we will explore how to integrate ML models into ASP.NET Core applications, covering key approaches, best practices, and real-world considerations.

1. Understanding the Integration Approaches

There are multiple ways to integrate machine learning models into ASP.NET Core applications:

1. Direct Integration using ML.NET

ML.NET is Microsoft’s native machine learning framework for .NET. It allows you to train, evaluate, and deploy models directly within your ASP.NET Core application.

Use cases:

  • Predicting user behavior

  • Classification tasks

  • Regression tasks

  • Recommendation engines

2. Using Pre-Trained Models via ONNX

ONNX (Open Neural Network Exchange) allows running pre-trained models from other frameworks such as TensorFlow or PyTorch in .NET applications using Microsoft.ML.OnnxRuntime.

Use cases:

  • Image classification

  • Object detection

  • Text embeddings or sentiment analysis

3. Using External AI/ML APIs

You can also use cloud-based ML APIs such as Azure Cognitive Services, OpenAI, or AWS ML APIs. These services host models and expose REST APIs that your ASP.NET Core app can consume.

Use cases:

  • Text-to-speech

  • Sentiment analysis

  • Language translation

  • Chatbots

2. Project Setup

Step 1: Create a New ASP.NET Core Web API

  
dotnet new webapi -n MLIntegrationApp
cd MLIntegrationApp
  

Step 2: Install Required Packages

For ML.NET:

  
dotnet add package Microsoft.ML
dotnet add package Microsoft.ML.DataView
  

For the ONNX runtime:

  
dotnet add package Microsoft.ML.OnnxRuntime
  

If you plan to consume external ML APIs, install System.Net.Http.Json:

  
dotnet add package System.Net.Http.Json
  

3. Example 1: Using ML.NET for Sentiment Analysis

ML.NET allows you to train models in .NET and consume them directly.

Step 1: Define Data Models

Models/SentimentData.cs:

  
using Microsoft.ML.Data;

public class SentimentData
{
    [LoadColumn(0)]
    public string Text { get; set; } = string.Empty;

    [LoadColumn(1), ColumnName("Label")]
    public bool Label { get; set; }
}

public class SentimentPrediction
{
    [ColumnName("PredictedLabel")]
    public bool Prediction { get; set; }

    public float Probability { get; set; }
    public float Score { get; set; }
}
  

Step 2: Train the Model
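The training step below loads Data/sentiment.csv with a header row. The original article does not show the file, so here is an illustrative example (the rows are made up; ML.NET parses the boolean Label column from true/false values):

```
Text,Label
I love this framework,true
The documentation is confusing,false
This release fixed all my issues,true
Performance is terrible,false
```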

  
using Microsoft.ML;
using Microsoft.ML.Data;

public class SentimentModel
{
    private readonly string _dataPath = "Data/sentiment.csv";
    private readonly string _modelPath = "Models/sentiment_model.zip";
    private readonly MLContext _mlContext;
    private ITransformer? _model;

    public SentimentModel()
    {
        _mlContext = new MLContext(seed: 0);

        // Training is expensive, so load a previously saved model when one exists.
        if (File.Exists(_modelPath))
        {
            _model = _mlContext.Model.Load(_modelPath, out _);
        }
        else
        {
            TrainModel();
        }
    }

    private void TrainModel()
    {
        var data = _mlContext.Data.LoadFromTextFile<SentimentData>(_dataPath, hasHeader: true, separatorChar: ',');
        var trainTestSplit = _mlContext.Data.TrainTestSplit(data, testFraction: 0.2);

        // Featurize the raw text into a numeric vector, then train a binary classifier.
        var pipeline = _mlContext.Transforms.Text.FeaturizeText("Features", nameof(SentimentData.Text))
            .Append(_mlContext.BinaryClassification.Trainers.SdcaLogisticRegression());

        _model = pipeline.Fit(trainTestSplit.TrainSet);

        Directory.CreateDirectory(Path.GetDirectoryName(_modelPath)!);
        _mlContext.Model.Save(_model, trainTestSplit.TrainSet.Schema, _modelPath);
    }

    public SentimentPrediction Predict(string input)
    {
        _model ??= _mlContext.Model.Load(_modelPath, out _);

        // PredictionEngine is not thread-safe; create one per call here, or use
        // PredictionEnginePool (Microsoft.Extensions.ML) under heavy load.
        var engine = _mlContext.Model.CreatePredictionEngine<SentimentData, SentimentPrediction>(_model);
        return engine.Predict(new SentimentData { Text = input });
    }
}
  

Step 3: Create API Controller

  
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public class SentimentController : ControllerBase
{
    private readonly SentimentModel _model;

    // Inject the model from DI (register it as a singleton) so it is
    // trained or loaded once, not rebuilt on every request.
    public SentimentController(SentimentModel model)
    {
        _model = model;
    }

    [HttpPost]
    public IActionResult Predict([FromBody] string text)
    {
        if (string.IsNullOrWhiteSpace(text))
        {
            return BadRequest("Input text is required.");
        }

        var prediction = _model.Predict(text);
        return Ok(prediction);
    }
}
  

Result:
A POST request whose body is the JSON string "I love ASP.NET Core" (the action binds a raw string, not an object) returns a prediction object with Prediction (true/false), Probability, and Score.
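Rather than constructing SentimentModel inside the controller, the class can be registered once at startup and resolved through dependency injection. A minimal Program.cs sketch (assuming the class names above):

```csharp
var builder = WebApplication.CreateBuilder(args);

// Singleton: the model is trained or loaded once at startup and shared by all requests.
builder.Services.AddSingleton<SentimentModel>();
builder.Services.AddControllers();

var app = builder.Build();
app.MapControllers();
app.Run();
```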

4. Example 2: Using ONNX Models

ONNX allows integration of pre-trained models in ASP.NET Core.

Step 1: Install ONNX Runtime

  
dotnet add package Microsoft.ML.OnnxRuntime
  

Step 2: Load ONNX Model

  
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

public class OnnxModelService : IDisposable
{
    private readonly InferenceSession _session;

    public OnnxModelService(string modelPath)
    {
        // Sessions are expensive to create; register this service as a singleton.
        _session = new InferenceSession(modelPath);
    }

    public float[] Predict(float[] inputData)
    {
        // Shape [1, n]: a batch containing a single input vector.
        var inputTensor = new DenseTensor<float>(inputData, new int[] { 1, inputData.Length });

        // "input" must match the input name defined in the ONNX model;
        // inspect _session.InputMetadata if you are unsure.
        var inputs = new List<NamedOnnxValue>
        {
            NamedOnnxValue.CreateFromTensor("input", inputTensor)
        };

        using var results = _session.Run(inputs);
        return results.First().AsEnumerable<float>().ToArray();
    }

    public void Dispose() => _session.Dispose();
}
  

Key Notes:

  • ONNX is excellent for integrating pre-trained models from TensorFlow, PyTorch, or Hugging Face.

  • It is high-performance and can run multiple inference requests concurrently.
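A sketch of wiring OnnxModelService into the DI container so the InferenceSession is created only once (the model path here is an illustrative assumption):

```csharp
var builder = WebApplication.CreateBuilder(args);

// Singleton: one InferenceSession shared across requests; Run() is thread-safe.
builder.Services.AddSingleton(_ => new OnnxModelService("Models/model.onnx"));
builder.Services.AddControllers();

var app = builder.Build();
app.MapControllers();
app.Run();
```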

5. Example 3: Using External ML APIs

Sometimes, calling external AI APIs is more practical than hosting models locally.

Step 1: Create Service

  
using System.Net.Http.Json;

// DTO matching the external API's response shape.
public record PredictionResponse(string? Prediction);

public class AIService
{
    private readonly HttpClient _httpClient;

    public AIService(HttpClient httpClient)
    {
        _httpClient = httpClient;
    }

    public async Task<string> GetPredictionAsync(string input)
    {
        var response = await _httpClient.PostAsJsonAsync("https://api.example.com/predict", new { text = input });
        response.EnsureSuccessStatusCode();

        // Deserialize into a typed DTO; System.Text.Json does not support
        // dynamic member access, so avoid ReadFromJsonAsync<dynamic>().
        var result = await response.Content.ReadFromJsonAsync<PredictionResponse>();
        return result?.Prediction ?? "No result";
    }
}
  

Step 2: Inject Service in Controller

  
[ApiController]
[Route("api/[controller]")]
public class ExternalMLController : ControllerBase
{
    private readonly AIService _aiService;

    public ExternalMLController(AIService aiService)
    {
        _aiService = aiService;
    }

    [HttpPost]
    public async Task<IActionResult> Predict([FromBody] string input)
    {
        var prediction = await _aiService.GetPredictionAsync(input);
        return Ok(new { prediction });
    }
}
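For this injection to resolve, AIService should be registered as a typed HttpClient in Program.cs; a minimal sketch:

```csharp
var builder = WebApplication.CreateBuilder(args);

// Typed client: DI constructs AIService with a pooled, managed HttpClient.
builder.Services.AddHttpClient<AIService>(client =>
{
    client.Timeout = TimeSpan.FromSeconds(10); // fail fast if the upstream API stalls
});
builder.Services.AddControllers();

var app = builder.Build();
app.MapControllers();
app.Run();
```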
  

6. Best Practices for Production

  1. Model Versioning:
    Keep track of model versions to avoid breaking changes. Store model version metadata in the database or configuration.

  2. Separation of Concerns:

    • ML logic in services

    • API endpoints only handle request/response

    • Keep training separate from inference

  3. Performance Optimization:

    • Use batch inference for multiple inputs

    • Use ONNX for CPU/GPU acceleration

    • Cache results for repeated requests

  4. Error Handling:

    • Validate inputs

    • Catch exceptions during inference

    • Return user-friendly errors

  5. Scalability:

    • Use background processing for heavy ML tasks (e.g., Hangfire, Azure Functions)

    • Offload training to separate services or cloud infrastructure

  6. Security:

    • Never expose internal model files

    • Validate external API responses

    • Rate-limit ML API endpoints if necessary

  7. Logging & Monitoring:

    • Track input and output metrics

    • Monitor latency and errors

    • Use Application Insights or Serilog for logging
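As a concrete instance of the caching advice above (point 3), repeated predictions can be memoized with IMemoryCache. A sketch that wraps the SentimentModel service from Example 1 (the wrapper class and expiry window are illustrative):

```csharp
using Microsoft.Extensions.Caching.Memory;

public class CachedSentimentService
{
    private readonly SentimentModel _model;
    private readonly IMemoryCache _cache;

    public CachedSentimentService(SentimentModel model, IMemoryCache cache)
    {
        _model = model;
        _cache = cache;
    }

    public SentimentPrediction Predict(string text)
    {
        // Identical inputs return the cached result instead of re-running inference.
        return _cache.GetOrCreate(text, entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(10);
            return _model.Predict(text);
        })!;
    }
}
```

Register it alongside builder.Services.AddMemoryCache() and inject it where predictions are served.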

7. Optional Enhancements

  • ASP.NET Core gRPC Integration: For faster inference between microservices.

  • Dockerize ML Services: Containerize models and APIs for consistent deployment.

  • Model Retraining Pipelines: Automate retraining with Azure ML pipelines or CI/CD.

  • Front-end Integration: Use an Angular or React frontend to send input and display predictions dynamically.

8. Folder Structure

  
MLIntegrationApp/
  Controllers/
    SentimentController.cs
    ExternalMLController.cs
  Models/
    SentimentData.cs   (contains SentimentData and SentimentPrediction)
    sentiment_model.zip
  Services/
    SentimentModel.cs
    AIService.cs
    OnnxModelService.cs
  Data/
    sentiment.csv
  Program.cs
  appsettings.json
  

9. Conclusion

Integrating machine learning models into ASP.NET Core applications can be achieved in multiple ways:

  1. ML.NET for native .NET model training and prediction

  2. ONNX Runtime for using pre-trained models

  3. External ML APIs for cloud-hosted AI

By following best practices like separation of concerns, performance optimization, logging, versioning, and security, you can build production-ready ML-enabled applications in ASP.NET Core.

These architectures are suitable for:

  • Customer support chatbots

  • Fraud detection

  • Predictive analytics dashboards

  • Recommendation systems

  • NLP and image classification

With this knowledge, developers can confidently add AI and ML capabilities into enterprise-grade .NET applications.