Introduction
Voice technology is no longer limited to smart speakers. From AI assistants to voice-powered dashboards, speech recognition has become one of the most exciting ways to improve user experience.
In this article, we’ll build a Voice-Controlled Angular Dashboard that listens to voice commands and interacts with an ASP.NET Core API to perform tasks such as fetching data, navigating pages, or executing actions — hands-free.
Why Voice Control in Web Dashboards?
Imagine a manager saying:
- “Show me today’s sales report”
- “Open inventory dashboard”
- “Add new purchase order”
— and your web app does it instantly!
Adding voice control makes dashboards:
- More accessible (especially for differently-abled users),
- More efficient (hands-free data retrieval),
- And simply futuristic.
Tech Stack Overview
| Layer | Technology | Purpose |
|---|---|---|
| Frontend | Angular 17+ | Main UI and Voice Recognition |
| Speech Recognition | Web Speech API (Browser API) | Converts speech to text |
| Backend | ASP.NET Core 8 Web API | Business logic and data processing |
| Database | SQL Server | Stores dashboard data |
Architecture Overview
Here’s the high-level flow of how everything works:
```text
🎤 User speaks
      ↓
🧠 Angular (Web Speech API)
      ↓
🧩 Command Processor (Angular service)
      ↓
🌐 ASP.NET Core API
      ↓
🗄️ SQL Server Database
      ↓
📊 Angular Dashboard updates dynamically
```
Step 1: Setting up the Angular Project
```bash
ng new voice-dashboard
cd voice-dashboard
npm install
```
Then create a SpeechService to handle recognition logic.
speech.service.ts
```typescript
import { Injectable, NgZone } from '@angular/core';

@Injectable({ providedIn: 'root' })
export class SpeechService {
  recognition: any;
  isListening = false;
  transcript = '';

  constructor(private zone: NgZone) {
    const SpeechRecognition =
      (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;
    if (!SpeechRecognition) {
      console.warn('Speech recognition is not supported in this browser.');
      return;
    }
    this.recognition = new SpeechRecognition();
    this.recognition.continuous = true;
    this.recognition.lang = 'en-US';
    this.recognition.onresult = (event: any) => {
      const text = event.results[event.results.length - 1][0].transcript.trim();
      // Recognition callbacks fire outside Angular's zone, so re-enter it
      // to make sure change detection sees the new transcript.
      this.zone.run(() => (this.transcript = text));
    };
  }

  startListening() {
    if (!this.recognition || this.isListening) return; // start() throws if already running
    this.isListening = true;
    this.recognition.start();
  }

  stopListening() {
    if (!this.recognition) return;
    this.isListening = false;
    this.recognition.stop();
  }
}
```
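The constructor lookup above is the part most likely to vary across browsers. As a small framework-free sketch (the helper name `resolveSpeechRecognition` is ours, not part of Angular or the Web Speech API), the feature detection can be pulled into a pure function that is trivial to unit-test:

```typescript
// Hypothetical helper: pick the SpeechRecognition constructor off a global
// object, preferring the standard name over the webkit-prefixed one.
// Returns null when the browser does not expose the API at all.
export function resolveSpeechRecognition(g: any): (new () => any) | null {
  return g.SpeechRecognition ?? g.webkitSpeechRecognition ?? null;
}
```

In `SpeechService` this would be called as `resolveSpeechRecognition(window)`, and the service can simply skip wiring handlers when it returns `null`.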
Step 2: Voice-Controlled Dashboard Component
dashboard.component.ts
```typescript
import { Component, NgZone, OnInit } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { SpeechService } from '../speech.service';

@Component({
  selector: 'app-dashboard',
  templateUrl: './dashboard.component.html'
})
export class DashboardComponent implements OnInit {
  message = 'Say something...';
  apiUrl = 'https://localhost:7200/api/dashboard';

  constructor(
    public speech: SpeechService,
    private http: HttpClient,
    private zone: NgZone
  ) {}

  ngOnInit() {
    if (!this.speech.recognition) return; // browser without speech support
    // Take over the onresult handler so commands are routed through this component.
    this.speech.recognition.onresult = (event: any) => {
      const command = event.results[event.results.length - 1][0].transcript.toLowerCase();
      // Re-enter Angular's zone: recognition events fire outside change detection.
      this.zone.run(() => {
        this.message = `Command: ${command}`;
        this.handleCommand(command);
      });
    };
  }

  handleCommand(command: string) {
    if (command.includes('sales report')) {
      this.http.get(`${this.apiUrl}/sales`).subscribe(data => console.log(data));
    } else if (command.includes('inventory')) {
      this.http.get(`${this.apiUrl}/inventory`).subscribe(data => console.log(data));
    } else if (command.includes('stop listening')) {
      this.speech.stopListening();
      this.message = 'Voice recognition stopped.';
    }
  }

  toggleListening() {
    this.speech.isListening ? this.speech.stopListening() : this.speech.startListening();
  }
}
```
dashboard.component.html
```html
<div class="dashboard">
  <h2>{{ message }}</h2>
  <button (click)="toggleListening()">
    {{ speech.isListening ? 'Stop Listening' : 'Start Voice Control' }}
  </button>
</div>
```
Step 3: ASP.NET Core API Setup
DashboardController.cs
```csharp
using Microsoft.AspNetCore.Mvc;

namespace VoiceDashboard.API.Controllers
{
    [ApiController]
    [Route("api/[controller]")]
    public class DashboardController : ControllerBase
    {
        [HttpGet("sales")]
        public IActionResult GetSalesReport()
        {
            var data = new { Total = 150000, Region = "North", Date = DateTime.Now };
            return Ok(data);
        }

        [HttpGet("inventory")]
        public IActionResult GetInventory()
        {
            var data = new[] {
                new { Item = "Laptop", Quantity = 15 },
                new { Item = "Monitor", Quantity = 42 }
            };
            return Ok(data);
        }
    }
}
```
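On the Angular side, these anonymous payloads can be given matching TypeScript shapes. One detail worth knowing: ASP.NET Core's default System.Text.Json settings camel-case C# property names, so `Total` arrives as `total`. The interfaces below are an illustrative sketch of that mapping, not part of the article's code:

```typescript
// Illustrative response shapes for the sample endpoints. ASP.NET Core's
// default JSON serializer camel-cases the C# property names on the wire.
export interface SalesReport {
  total: number;
  region: string;
  date: string; // DateTime serializes to an ISO-8601 string
}

export interface InventoryItem {
  item: string;
  quantity: number;
}

// Typed HttpClient call (sketch):
//   this.http.get<SalesReport>(`${this.apiUrl}/sales`)
```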
Step 4: Make It Smarter (Optional)
You can enhance it further using OpenAI or Gemini APIs for Natural Language Understanding (NLU).
For example:
1. User says: “How was sales this week compared to last week?”
2. Angular sends this text to the OpenAI API.
3. The API returns a structured intent, e.g. “GetWeeklySalesTrend”.
4. ASP.NET Core executes the corresponding logic.
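Once the NLU service has turned free-form speech into an intent name, the intent-to-endpoint step can stay a plain lookup on the client. The route table and intent names below are illustrative assumptions, not a real OpenAI response format:

```typescript
// Sketch: map an intent name returned by an NLU service to the dashboard
// endpoint that fulfils it. Unknown intents fall through to null so the UI
// can show a "didn't understand" message instead of firing a bad request.
const INTENT_ROUTES: Record<string, string> = {
  GetWeeklySalesTrend: '/api/dashboard/sales/weekly-trend',
  GetInventory: '/api/dashboard/inventory',
};

export function routeForIntent(intent: string): string | null {
  return INTENT_ROUTES[intent] ?? null;
}
```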
Step 5: UI Flow and Voice Command Example
Example Flow
| Voice Command | Action Performed |
|---|---|
| “Show sales report” | Angular calls /sales API |
| “Show inventory” | Angular calls /inventory API |
| “Stop listening” | Turns off microphone |
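As this command table grows, the if/else chain in `handleCommand()` becomes unwieldy. One data-driven alternative is a command registry; the names below (`VoiceCommand`, `matchCommand`) are our own sketch, not part of the article's code:

```typescript
// Sketch of a data-driven command registry: each entry pairs a phrase
// fragment with an action key, replacing the if/else chain in handleCommand().
interface VoiceCommand {
  phrase: string; // fragment to look for in the transcript
  action: string; // key the dashboard maps to an API call or method
}

const COMMANDS: VoiceCommand[] = [
  { phrase: 'sales report', action: 'fetchSales' },
  { phrase: 'inventory', action: 'fetchInventory' },
  { phrase: 'stop listening', action: 'stopListening' },
];

// Return the action key for the first matching command, or null if nothing matched.
export function matchCommand(
  transcript: string,
  commands: VoiceCommand[] = COMMANDS
): string | null {
  const text = transcript.toLowerCase();
  const hit = commands.find(c => text.includes(c.phrase));
  return hit ? hit.action : null;
}
```

New commands then become one-line registry entries instead of new branches.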
Flowchart (Voice-to-Action Pipeline)
```text
+---------------------+
|  User Speaks Voice  |
+----------+----------+
           |
           v
+---------------------------+
|  Angular Web Speech API   |
+----------+----------------+
           |
           v
+-----------------------------+
|  Angular Command Processor  |
+----------+------------------+
           |
           v
+-------------------------+
|  ASP.NET Core API Call  |
+----------+--------------+
           |
           v
+---------------------+
|  SQL Server / Data  |
+----------+----------+
           |
           v
+--------------------------+
|  Angular Dashboard View  |
+--------------------------+
```
Step 6: Browser Compatibility Notes
- Works best in Chrome and Edge.
- Safari has partial, prefixed support; Firefox does not currently support SpeechRecognition.
- Use a secure HTTPS context (microphone access won’t work on plain HTTP).
Step 7: Security & Privacy Considerations
When enabling microphone access:
- Always ask for permission clearly.
- Do not send raw voice data to the backend.
- Convert speech to text in the browser, then process only the text.
Real-Life Use Cases
- Manufacturing dashboards: supervisors can check production status hands-free.
- Warehouse apps: workers can log stock updates by speaking.
- Healthcare systems: doctors can view patient data without touching screens.
Conclusion
The combination of Angular, Speech Recognition, and ASP.NET Core APIs opens a new frontier for natural, voice-driven web experiences.
By using simple voice commands, users can control dashboards, fetch live reports, and interact with systems in real time — making your web apps not just smarter, but more human-friendly.