API structure loaded from DB → change APIs without code changes
Enterprises often need fast, low-risk ways to expose, adapt and version APIs. A Metadata-Driven API Engine lets you define routes, request/response shapes, validation rules, transformations, and security policies in a database (or a central configuration store). The engine reads that metadata at runtime (or on deploy) and exposes fully working HTTP endpoints — no code changes required for many common API changes.
This article is a production-ready, beginner-to-expert guide for senior developers. It explains architecture, data models, runtime design, security, validation, versioning, performance, testing, and operational concerns. Example code is in .NET (ASP.NET Core), but the concepts are applicable to other platforms.
Table of contents
Problem statement and goals
High-level benefits and tradeoffs
Architecture overview
Workflow diagram (how metadata → API)
Flowchart (runtime request processing)
Metadata model (tables / JSON schema)
Core engine components (router, controller factory, binder, validator, transformer, policy engine)
Implementation details — ASP.NET Core patterns and snippets
Validation, schema evolution and compatibility
Security (auth, authorization, input sanitation, rate limits)
Performance, caching and scaling strategies
Observability, testing and CI/CD for metadata changes
Governance, audit, and approvals
Real-world use cases and examples
Limitations and when not to use this approach
Roadmap: extending the engine (UI, DSL, GraphQL adapter)
Conclusion and next steps
1. Problem statement and goals
Typical problem:
Product teams need APIs for many business cases. Each change requires code change → PR → QA → deploy. This slows down iteration.
Clients (mobile/web/partner systems) need slightly different shapes or behaviours.
Integration teams want to add “lightweight” endpoints quickly without full engineering time.
Goal for a metadata-driven API engine:
Allow ops or API architects to define endpoints in metadata stored in a DB or config service.
Support CRUD, simple queries, composition endpoints, transformation, and mapping to backend services (DB, microservices).
Provide runtime validation, security policies, rate limits and auditing.
Allow safe, auditable changes with governance (approval flows and versioning).
Keep high performance and strong security.
2. High-level benefits and tradeoffs
Benefits
Faster change turnaround — add or tweak endpoints without code deploys.
Centralised governance — policies, versions and audits in one place.
Reuse — the same metadata engine exposes many shapes for different consumers.
Reduced maintenance for simple endpoints.
Tradeoffs / Risks
Not ideal for complex business logic — the engine is best for CRUD, mappings, simple composition and routing.
Debugging and observability need careful design because logic is in metadata.
Overuse may lead to an ad-hoc "low code" spaghetti if rules and transforms multiply.
Security and performance must be enforced centrally.
3. Architecture overview
Main components
Metadata Store: relational DB (Postgres/SQL Server) or document store containing endpoint definitions, models, transformations, policies and versions.
Admin UI / API: UI for creating/editing endpoint metadata and templates, plus approval workflow.
API Engine: the runtime service that loads metadata, instantiates endpoints dynamically, validates requests, transforms inputs, invokes backend handlers (DB queries, HTTP calls, functions), transforms responses and applies policies.
Backend Connectors: adapters for DB, gRPC, HTTP, message queues, or serverless functions.
Policy Engine: authz rules, rate limiting, request quotas, logging rules.
Audit & Telemetry: logs of metadata changes, requests, approvals, and errors.
Cache / CDN: to speed responses for stable endpoints.
Components communicate securely; metadata is cached and refreshed to avoid DB roundtrips on every request.
4. Workflow diagram (metadata → API)
+-----------------+      +-------------------+      +------------------+
| Admin UI / CI   | ---> | Metadata Store    | ---> | API Engine       |
| (create/update) |      | (endpoints,       |      | (loads metadata  |
+-----------------+      |  models, policies)|      |  and exposes)    |
                         +-------------------+      +------------------+
                                   |                        |
                                   v                        v
                         +-------------------+      +-------------------+
                         | Approval Workflow |      | Backend Connectors|
                         +-------------------+      +-------------------+
5. Flowchart: runtime request processing
Incoming HTTP Request
|
v
Match route to metadata (fast lookup)
|
v
Check API version & status (active? deprecated?)
|
v
Authorize (authN/authZ) using metadata policy
|
v
Bind and parse inputs (JSON/Query/Form) → apply type coercion
|
v
Validate against schema (required, types, regex)
|
v
If validation fails → return 4xx with structured error
|
v
Transform input (mapping, enrichment) if metadata specifies
|
v
Invoke backend connector(s) (DB, HTTP, function)
|
v
Transform backend response to API response schema
|
v
Apply response filters (masking, pagination, projection)
|
v
Apply caching / rate limits / metrics
|
v
Return response & write audit log
6. Metadata model (tables / JSON schema)
A compact, normalized relational model (it can be adapted to a document DB). Keep metadata small and versioned.
Tables (conceptual)
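As a rough illustration, the conceptual tables could be sketched like this (names, types and columns are assumptions, not a prescribed schema):

```sql
-- Illustrative schema only; adapt names, types and constraints to your store.
CREATE TABLE api_endpoint (
    id              TEXT PRIMARY KEY,       -- e.g. 'customers.search.v1'
    path            TEXT NOT NULL,
    method          TEXT NOT NULL,
    version         INT  NOT NULL,
    status          TEXT NOT NULL,          -- draft | active | deprecated
    json_definition JSONB NOT NULL,         -- full endpoint definition (see below)
    UNIQUE (path, method, version)
);

CREATE TABLE api_model (
    name        TEXT NOT NULL,              -- e.g. 'CustomerSearchRequest'
    version     INT  NOT NULL,
    json_schema JSONB NOT NULL,             -- JSON Schema (draft 2020-12)
    PRIMARY KEY (name, version)
);

CREATE TABLE api_policy (
    id         TEXT PRIMARY KEY,
    definition JSONB NOT NULL               -- auth, rate limits, quotas
);

CREATE TABLE metadata_snapshot (
    id           BIGSERIAL PRIMARY KEY,     -- immutable publish id
    published_at TIMESTAMPTZ NOT NULL,
    published_by TEXT NOT NULL
);
```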
Metadata example (JSON stored in ApiEndpoint.jsonDefinition)
{
"id": "customers.search.v1",
"path": "/v1/customers",
"method": "GET",
"status": "active",
"requestModel": "CustomerSearchRequest",
"responseModel": "CustomerListResponse",
"backend": {
"type": "sql",
"connectionId": "orders-db",
"queryTemplate": "SELECT id, name, email FROM customers WHERE name ILIKE @name LIMIT @limit OFFSET @offset"
},
"mappings": [
{ "from": "query.name", "to": "@name", "transform": "trim|sqlLikePattern" },
{ "from": "query.limit", "to": "@limit", "default": 25 }
],
"policies": {
"auth": { "required": true, "roles": ["CustomerViewer"] },
"rateLimit": { "perMinute": 120 }
}
}
Store ApiModel definitions as JSON Schema (draft 2020-12) so you can validate inputs and generate docs automatically.
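A hedged sketch of how the CustomerSearchRequest model from the endpoint above could be stored (the individual constraints are illustrative):

```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "$id": "CustomerSearchRequest.v1",
  "type": "object",
  "additionalProperties": false,
  "properties": {
    "name":   { "type": "string", "maxLength": 100 },
    "limit":  { "type": "integer", "minimum": 1, "maximum": 100, "default": 25 },
    "offset": { "type": "integer", "minimum": 0, "default": 0 }
  }
}
```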
7. Core engine components
Design the engine as modular middleware and runtime factories:
Metadata Loader & Cache
Loads metadata at startup and refreshes via polling or push (webhooks). Keep a fast in-memory index keyed by (path, method, version).
Use an immutable versioned snapshot to avoid partial updates.
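A minimal sketch of such a snapshot index with an atomic swap (type and member names are assumptions for illustration):

```csharp
using System.Collections.Generic;

public sealed record EndpointDefinition(string Path, string Method, string Version);

// Immutable snapshot: built once from loaded metadata, then swapped in atomically.
public sealed class MetadataSnapshot
{
    private readonly Dictionary<(string, string, string), EndpointDefinition> _index;

    public MetadataSnapshot(IEnumerable<EndpointDefinition> endpoints)
    {
        _index = new Dictionary<(string, string, string), EndpointDefinition>();
        foreach (var ep in endpoints)
            _index[(ep.Path, ep.Method, ep.Version)] = ep;
    }

    public EndpointDefinition? Lookup(string path, string method, string version) =>
        _index.TryGetValue((path, method, version), out var ep) ? ep : null;
}

public sealed class MetadataCache
{
    // volatile reference swap: readers always see a complete snapshot,
    // never a half-applied update.
    private volatile MetadataSnapshot _current;

    public MetadataCache(MetadataSnapshot initial) => _current = initial;

    public MetadataSnapshot Current => _current;

    public void Swap(MetadataSnapshot next) => _current = next;
}
```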
Router / Endpoint Factory
Resolves (path, method, version) against the cached index and hands the matching endpoint definition to the dispatcher; with dynamic registration (Approach A in section 8) it can also materialise route handlers at startup.
Request Binder
Bind incoming values (route, query, headers, body, cookies) into a canonical context object.
Coerce types and apply simple transform functions.
Schema Validator
Validates the bound input against the endpoint's JSON Schema request model (required fields, types, formats, regex) and returns structured, field-level errors on failure.
Transform Engine
Allow lightweight transformations specified in metadata. Avoid raw JS execution unless you run in a sandbox (e.g., Jailed V8, Wasm). Prefer a safe expression language (JMESPath, CEL, JsonLogic) or a sandboxed script engine with strict whitelisting.
Backend Connectors
Implement adapter pattern for SQL, NoSQL, HTTP, gRPC, function invocation, or message queue. Use parameterized queries and connection pooling. Secrets and connection strings must be stored encrypted and injected via Key Vault.
Response Mapper & Filter
Map backend results to response model, apply projection, masking, pagination. Support fields= style projections if enabled.
Policy Engine
Enforce authZ (roles/scopes), rate limits, quotas, CORS, and audit rules. Integrate with existing identity provider (OIDC, JWT) and RBAC.
Audit & Metrics
Structured logging for request metadata (apiId, version, userId, latency, outcome). Generate trace IDs and integrate with distributed tracing (OpenTelemetry).
Admin & Approval Workflow
Lets authors draft and edit metadata, routes changes through review and approval, and publishes each approved change as a new immutable snapshot (see section 13).
8. Implementation details — ASP.NET Core patterns and snippets
Two approaches:
Approach A: Dynamic route registration — create route handlers at startup for each active endpoint. This requires host restart or hot rewire capability.
Approach B: Single dispatcher middleware — register a single middleware that inspects incoming request path & method and dispatches to metadata. This is simpler for hot changes.
I recommend Approach B for runtime agility.
8.1 Dispatcher middleware (simplified)
public class MetadataDispatcherMiddleware
{
    private readonly RequestDelegate _next;
    private readonly IMetadataService _metadata;
    private readonly IEngine _engine;

    public MetadataDispatcherMiddleware(RequestDelegate next, IMetadataService metadata, IEngine engine)
    {
        _next = next;
        _metadata = metadata;
        _engine = engine;
    }

    public async Task Invoke(HttpContext context)
    {
        var path = context.Request.Path.Value ?? string.Empty;
        var method = context.Request.Method;
        var ep = _metadata.Lookup(path, method, context.Request.Headers["Accept-Version"]);

        if (ep == null)
        {
            // Not a metadata-defined endpoint; fall through to the rest of the pipeline.
            await _next(context);
            return;
        }

        try
        {
            var result = await _engine.HandleRequestAsync(context, ep);
            context.Response.StatusCode = result.StatusCode;
            context.Response.ContentType = "application/json";
            await context.Response.WriteAsync(JsonSerializer.Serialize(result.Payload));
        }
        catch (ApiException ex)
        {
            context.Response.StatusCode = ex.Status;
            context.Response.ContentType = "application/json";
            await context.Response.WriteAsync(JsonSerializer.Serialize(new { error = ex.Message }));
        }
    }
}
Register the middleware early in the pipeline, but after the authentication middleware so the authenticated principal is available for policy checks.
8.2 Engine.HandleRequestAsync (high level)
Parse & bind inputs.
Validate request JSON against request model.
Apply pre-invoke transforms.
Call backend adapter(s). Support composition: call multiple backends and merge results in memory.
Apply post-invoke transforms and map to response model.
Return.
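Step 4's composition can be sketched as a fan-out/merge helper (names and shapes are illustrative assumptions):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public static class Composition
{
    // Start every backend call concurrently, await all, then merge results
    // keyed by backend name. Production code would add per-call timeouts.
    public static async Task<Dictionary<string, object>> InvokeAllAsync(
        IReadOnlyDictionary<string, Func<Task<object>>> backends)
    {
        var tasks = new Dictionary<string, Task<object>>();
        foreach (var (key, call) in backends)
            tasks[key] = call();                 // fan out: all calls start now

        await Task.WhenAll(tasks.Values);

        var merged = new Dictionary<string, object>();
        foreach (var (key, task) in tasks)
            merged[key] = task.Result;           // merge in memory by key
        return merged;
    }
}
```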
8.3 Safe transformations
Prefer expression languages rather than arbitrary JS. Example: use JsonLogic or CEL for transforms and conditions.
Example transform metadata:
"transform": {
"stage": "post",
"scriptType": "cel",
"script": "response.items.map(i, {'id': i.id, 'fullName': i.firstName + ' ' + i.lastName})"
}
Evaluate in a sandboxed CEL engine with controlled imports.
8.4 Backend SQL example with parameter binding
Use parameterized SQL templates. Do not construct raw SQL via string concatenation.
SELECT id, name, email FROM customers
WHERE (@name IS NULL OR name ILIKE @name)
ORDER BY id
LIMIT @limit OFFSET @offset
The template engine binds @name as a real query parameter (sanitized, never concatenated into the SQL text). For complex queries, prefer stored procedures or prepared statements.
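One way to guarantee that metadata mappings only bind placeholders that actually appear in the template is to derive the allowed set from the template itself. A hedged sketch, assuming @name-style parameter syntax:

```csharp
using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

public static class SqlTemplate
{
    // Discover @param placeholders in the template; anything else is rejected.
    public static ISet<string> ExtractParameters(string template)
    {
        var set = new HashSet<string>(StringComparer.OrdinalIgnoreCase);
        foreach (Match m in Regex.Matches(template, @"@([A-Za-z_][A-Za-z0-9_]*)"))
            set.Add(m.Groups[1].Value);
        return set;
    }

    // Fail fast if a metadata mapping targets an undeclared parameter.
    public static void ValidateBindings(string template, IEnumerable<string> bindingTargets)
    {
        var allowed = ExtractParameters(template);
        foreach (var target in bindingTargets)
        {
            if (!allowed.Contains(target.TrimStart('@')))
                throw new InvalidOperationException($"Unknown SQL parameter: {target}");
        }
    }
}
```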
9. Validation, schema evolution and compatibility
Store models as JSON Schema. Use schema versioning: Customer.v1, Customer.v2.
Support backward compatibility: keep old response shapes for existing clients, deprecate gradually.
Use compatibility checks when changing models: require that new response schema is a superset of previous (or provide adapters).
Provide a staging environment where metadata changes are tested end-to-end before approval.
For breaking changes, require version increment and automated migration scripts or mapping transforms.
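The superset rule above can be approximated mechanically. A naive sketch (real compatibility checking needs full JSON Schema semantics; this only compares top-level property names and types):

```csharp
using System.Text.Json;

public static class SchemaCompat
{
    // Backward-compatible response change: every property of the old schema
    // must still exist in the new one, with the same declared "type".
    public static bool IsSupersetOf(string newSchemaJson, string oldSchemaJson)
    {
        using var newDoc = JsonDocument.Parse(newSchemaJson);
        using var oldDoc = JsonDocument.Parse(oldSchemaJson);

        if (!oldDoc.RootElement.TryGetProperty("properties", out var oldProps))
            return true;  // nothing to preserve
        if (!newDoc.RootElement.TryGetProperty("properties", out var newProps))
            return false;

        foreach (var oldProp in oldProps.EnumerateObject())
        {
            if (!newProps.TryGetProperty(oldProp.Name, out var newProp))
                return false;  // property removed → breaking
            var oldType = oldProp.Value.TryGetProperty("type", out var ot) ? ot.GetString() : null;
            var newType = newProp.TryGetProperty("type", out var nt) ? nt.GetString() : null;
            if (oldType != null && oldType != newType)
                return false;  // type changed → breaking
        }
        return true;
    }
}
```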
10. Security (auth, authorization, input sanitation, rate limits)
Security must be first-class:
Authentication: validate JWT/OIDC tokens early. Map principal claims to metadata auth policies.
Authorization: metadata declares required scopes/roles; engine enforces. Support ABAC if needed.
Input sanitation: validate and coerce types using JSON Schema. Reject unknown fields by setting additionalProperties: false.
SQL injection prevention: always use parameterized queries. Validate that binding names match allowed parameters.
Script sandboxing: if using scripts, run in sandbox (Wasm or jailed V8). Deny network or file access. Limit CPU/memory and execution time.
Rate limiting: per-endpoint and per-client quotas. Use token bucket or Redis backed counters across instances.
Secret management: do not keep plain DB credentials in metadata. Use secret references resolved at runtime from Key Vault.
Audit trail: write immutable logs for metadata changes and for declassification or other privileged admin actions.
CORS and CSP: apply policies in metadata if endpoint is publicly consumed.
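The rate-limiting bullet above can be sketched as an in-process token bucket; it is a single-instance stand-in for the Redis-backed counters a multi-instance deployment needs:

```csharp
using System;

// In-memory token bucket sketch. Time is passed in explicitly so the
// behaviour is deterministic and testable.
public sealed class TokenBucket
{
    private readonly double _capacity;
    private readonly double _refillPerSecond;
    private double _tokens;
    private DateTime _last;
    private readonly object _lock = new();

    public TokenBucket(double capacity, double refillPerSecond, DateTime start)
    {
        _capacity = capacity;
        _refillPerSecond = refillPerSecond;
        _tokens = capacity;   // start full
        _last = start;
    }

    public bool TryConsume(DateTime now)
    {
        lock (_lock)
        {
            // Refill proportionally to elapsed time, capped at capacity.
            _tokens = Math.Min(_capacity, _tokens + (now - _last).TotalSeconds * _refillPerSecond);
            _last = now;
            if (_tokens < 1) return false;  // over the limit → reject (429)
            _tokens -= 1;
            return true;
        }
    }
}
```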
11. Performance, caching and scaling strategies
Metadata caching: don't query DB for each request. Use in-memory snapshots with TTL and invalidation via push (webhooks or message bus) when metadata changes.
Compiled plans: precompile query plans for SQL backends or prepare HTTP request templates.
Connection pooling: backend connectors must use pooled connections.
Parallel composition: call multiple backends in parallel and merge results (with timeouts).
Response caching: allow metadata flags to enable caching at CDN or engine level (Vary header, cache key from parameters). Use Redis for shared cache across instances.
Rate limiting and throttling: distributed counters (e.g., a Redis-backed token or leaky bucket).
Async & background: for heavy transforms or long running composition, return 202 Accepted with a jobId and let clients poll.
Hot path optimisations: inline simple endpoints (CRUD) into compiled handlers for maximum throughput; complex ones use generic flow.
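For response caching, a deterministic cache key per endpoint and parameter set might look like the following sketch (the hashing scheme is an assumption; sorting makes parameter order irrelevant):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

public static class CacheKeys
{
    // Cache key = apiId + hash of sorted, canonicalized parameters, so
    // ?a=1&b=2 and ?b=2&a=1 resolve to the same cache entry.
    public static string Build(string apiId, IReadOnlyDictionary<string, string> parameters)
    {
        var canonical = string.Join("&",
            parameters.OrderBy(p => p.Key, StringComparer.Ordinal)
                      .Select(p => $"{p.Key}={p.Value}"));
        using var sha = SHA256.Create();
        var hash = sha.ComputeHash(Encoding.UTF8.GetBytes(canonical));
        return $"{apiId}:{Convert.ToHexString(hash)}";
    }
}
```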
12. Observability, testing and CI/CD for metadata changes
Automated validation: when metadata is edited, submit it to CI, which runs static validations (JSON Schema correctness, query parameter checks, transform syntax checks) and integration tests in a sandbox.
Staging preview: render live preview endpoints in a staging environment for reviewers.
Audit logs: every metadata change gets a changelog entry and must pass approval for production.
Monitoring: capture metrics per apiId — requests, latency p50/p95/p99, errors, validation failures, cache hit rate, backend latency. Use OpenTelemetry.
Error tracing: include apiId and metadataVersion in traces.
Chaos & load tests: validate the engine under real loads.
Rollback: maintain previous snapshot so you can rollback metadata changes instantly.
13. Governance, audit, and approvals
Role separation: creators can make drafts; approvers publish to production.
Approval flow: multiple approvers for high-impact endpoints. Integrate with existing ticketing (Jira) or CI gates.
Change policy: require tests to be attached for certain endpoint types (financial, PII).
Metadata versioning: immutable snapshots per publish; each request logs used snapshot id.
Immutable audits: store approval and publish events in an append-only store. Use cryptographic signing if required.
14. Real-world use cases and examples
B2B partner APIs: quickly expose partner-specific thin adapters to core services without new code.
Feature flags & A/B: expose feature-flagged endpoints for canary consumers, then change mapping on the fly.
Data products: allow analysts to expose curated datasets via read-only endpoints mapped to SQL views.
Internal admin APIs: create ad-hoc endpoints for internal dashboards without a back-end sprint.
Migration adapter: temporarily map old API shapes to new microservices during migration.
Example: expose GET /v1/active-users mapping to an internal analytics DB query defined in metadata — change the SQL or add filters without code release.
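Such an endpoint might be described with metadata like the following sketch (connection id, table and query are illustrative, following the format from section 6):

```json
{
  "id": "analytics.active-users.v1",
  "path": "/v1/active-users",
  "method": "GET",
  "status": "active",
  "backend": {
    "type": "sql",
    "connectionId": "analytics-db",
    "queryTemplate": "SELECT user_id, last_seen FROM active_users WHERE last_seen > now() - interval '30 days' LIMIT @limit"
  },
  "mappings": [
    { "from": "query.limit", "to": "@limit", "default": 100 }
  ],
  "policies": {
    "auth": { "required": true, "roles": ["AnalyticsViewer"] }
  }
}
```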
15. Limitations and when not to use this approach
Do not use metadata engine for:
Very complex domain logic with branching stateful workflows.
Performance critical inner loops where compiled code and optimisations matter.
Highly secure operations that must be implemented with reviewed code; metadata changes are easier to misuse.
Cases requiring advanced transactions across many services where choreography is complex.
Use engine for:
Query/mapping endpoints, simple composition, CRUD, adapters, and light transforms.
16. Roadmap: extending the engine
Ideas to extend:
Visual Admin UI / DSL: drag-and-drop endpoint creation and test harness.
Policy Marketplace: reusable transforms and connectors in a library.
GraphQL adapter: map metadata to GraphQL schema dynamically.
WASM transform engine: run safe, high-performance transforms.
Schema-driven SDK generation: generate client SDKs from current metadata snapshot.
Audit immutability: store change log hashes in blockchain or WORM storage for compliance.
17. Conclusion and next steps
A Metadata-Driven API Engine lets teams move fast while maintaining governance and observability. The core ideas are:
Keep metadata small, versioned and validated.
Use a modular engine: dispatcher, binder, validator, transform engine, backend connectors, policy engine.
Avoid executing arbitrary code in metadata; prefer safe expression languages or sandboxed runtimes.
Enforce security, audits, and approvals.
Cache metadata and compiled plans for performance.
Provide CI checks and staging previews for every metadata change.