When a SQL Server feels “slow,” it’s usually a handful of statements doing most of the damage. The fastest way to find them is with Dynamic Management Views (DMVs). Below are copy-paste scripts to surface the worst offenders by CPU, IO, duration, and “what’s running right now,” along with notes on reading the results and what to do next.
Requirements: VIEW SERVER STATE permission. Numbers like total_worker_time and total_elapsed_time are in microseconds unless noted.
What “high usage” means (pick the lens)
CPU: how much processor time a query uses.
IO: logical/physical reads and writes (memory and disk pressure).
Duration: how long a query takes end-to-end.
Currently running: live workload that may be blocking others.
You’ll use a different script depending on which lens you want.
Top queries by CPU
-- Top 20 queries by total CPU since the plan was cached
SELECT TOP 20
    DB_NAME(st.dbid) AS database_name,
    qs.execution_count,
    qs.total_worker_time/1000.0 AS total_cpu_ms,
    (qs.total_worker_time/1000.0)/NULLIF(qs.execution_count,0) AS avg_cpu_ms,
    (qs.total_elapsed_time/1000.0)/NULLIF(qs.execution_count,0) AS avg_duration_ms,
    qs.total_logical_reads + qs.total_physical_reads AS total_reads,
    SUBSTRING(st.text, (qs.statement_start_offset/2)+1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset END
          - qs.statement_start_offset)/2) + 1) AS query_text,
    qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.total_worker_time DESC;
Tip: Add WHERE DB_NAME(st.dbid) = 'YourDbName' if you only care about one database.
Top queries by IO (reads/writes)
-- Top 20 by total reads; add writes if you care about heavy DML
SELECT TOP 20
    DB_NAME(st.dbid) AS database_name,
    qs.execution_count,
    (qs.total_logical_reads + qs.total_physical_reads) AS total_reads,
    (qs.total_logical_writes + qs.total_physical_writes) AS total_writes,
    (qs.total_logical_reads + qs.total_physical_reads) / NULLIF(qs.execution_count,0) AS avg_reads,
    (qs.total_logical_writes + qs.total_physical_writes) / NULLIF(qs.execution_count,0) AS avg_writes,
    (qs.total_elapsed_time/1000.0)/NULLIF(qs.execution_count,0) AS avg_duration_ms,
    SUBSTRING(st.text, (qs.statement_start_offset/2)+1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset END
          - qs.statement_start_offset)/2) + 1) AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY total_reads DESC;
Top queries by average duration
-- Queries that are slow per execution (not just popular)
SELECT TOP 20
    DB_NAME(st.dbid) AS database_name,
    qs.execution_count,
    (qs.total_elapsed_time/1000.0) AS total_duration_ms,
    (qs.total_elapsed_time/1000.0)/NULLIF(qs.execution_count,0) AS avg_duration_ms,
    (qs.total_worker_time/1000.0)/NULLIF(qs.execution_count,0) AS avg_cpu_ms,
    (qs.total_logical_reads + qs.total_physical_reads) / NULLIF(qs.execution_count,0) AS avg_reads,
    SUBSTRING(st.text, (qs.statement_start_offset/2)+1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset END
          - qs.statement_start_offset)/2) + 1) AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE qs.execution_count > 0
ORDER BY avg_duration_ms DESC;
Stored procedures that hit the server hardest
Use sys.dm_exec_procedure_stats to get procedure-level rollups (cleaner than trying to stitch statements together).
-- Procedures by total CPU
SELECT TOP 20
    DB_NAME(ps.database_id) AS database_name,
    OBJECT_SCHEMA_NAME(ps.object_id, ps.database_id) AS schema_name,
    OBJECT_NAME(ps.object_id, ps.database_id) AS procedure_name,
    ps.execution_count,
    ps.total_worker_time/1000.0 AS total_cpu_ms,
    (ps.total_worker_time/1000.0)/NULLIF(ps.execution_count,0) AS avg_cpu_ms,
    ps.last_execution_time
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.database_id > 4 -- skip system DBs; remove if you want them
ORDER BY ps.total_worker_time DESC;
-- Procedures by total reads (IO)
SELECT TOP 20
    DB_NAME(ps.database_id) AS database_name,
    OBJECT_SCHEMA_NAME(ps.object_id, ps.database_id) AS schema_name,
    OBJECT_NAME(ps.object_id, ps.database_id) AS procedure_name,
    ps.execution_count,
    (ps.total_logical_reads + ps.total_physical_reads) AS total_reads,
    ((ps.total_logical_reads + ps.total_physical_reads)/NULLIF(ps.execution_count,0)) AS avg_reads,
    ps.last_execution_time
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.database_id > 4
ORDER BY total_reads DESC;
-- Procedures by average duration
SELECT TOP 20
    DB_NAME(ps.database_id) AS database_name,
    OBJECT_SCHEMA_NAME(ps.object_id, ps.database_id) AS schema_name,
    OBJECT_NAME(ps.object_id, ps.database_id) AS procedure_name,
    ps.execution_count,
    (ps.total_elapsed_time/1000.0)/NULLIF(ps.execution_count,0) AS avg_duration_ms,
    ps.last_execution_time
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.database_id > 4 AND ps.execution_count > 0
ORDER BY avg_duration_ms DESC;
What’s heavy right now (live view)
-- Currently executing requests ordered by CPU time
SELECT
    r.session_id,
    r.status,
    DB_NAME(r.database_id) AS database_name,
    r.cpu_time AS cpu_ms,               -- already in ms
    r.total_elapsed_time AS elapsed_ms, -- already in ms
    r.wait_type,
    r.wait_time,
    r.blocking_session_id,
    SUBSTRING(t.text, r.statement_start_offset/2 + 1,
        (CASE WHEN r.statement_end_offset = -1
              THEN DATALENGTH(t.text)
              ELSE r.statement_end_offset END
          - r.statement_start_offset)/2 + 1) AS running_statement
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID
ORDER BY r.cpu_time DESC;
If you see blocking_session_id populated, chase that session and fix the blocker first.
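As a sketch of that chase (the session id 57 below is a placeholder for whatever blocking_session_id you saw), you can pull the blocker’s most recent statement from sys.dm_exec_connections:

-- What did the blocking session last run? (replace 57 with the blocker's id)
SELECT t.text AS most_recent_sql
FROM sys.dm_exec_connections AS c
CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) AS t
WHERE c.session_id = 57;

If the blocker is sitting idle in an open transaction, sys.dm_exec_sessions and sys.dm_tran_session_transactions help confirm that before you kill anything.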
Group similar statements together (query_hash)
Same query text with different literals can appear as separate rows. Aggregate by query_hash to see the true top offenders.
-- Roll up by query_hash to combine similar statements
WITH q AS (
    SELECT
        qs.query_hash,
        qs.execution_count,
        qs.total_worker_time,
        qs.total_elapsed_time,
        qs.total_logical_reads + qs.total_physical_reads AS total_reads
    FROM sys.dm_exec_query_stats AS qs
)
SELECT TOP 20
    query_hash,
    SUM(execution_count) AS executions,
    SUM(total_worker_time)/1000.0 AS total_cpu_ms,
    (SUM(total_worker_time)/1000.0)/NULLIF(SUM(execution_count),0) AS avg_cpu_ms,
    (SUM(total_elapsed_time)/1000.0)/NULLIF(SUM(execution_count),0) AS avg_duration_ms,
    SUM(total_reads) AS total_reads
FROM q
GROUP BY query_hash
ORDER BY total_cpu_ms DESC;
Filters you’ll actually use
Add these lines to any query above as needed:
-- Only one DB
WHERE DB_NAME(st.dbid) = 'YourDbName'
-- Only statements executed in the last day (approx; last_execution_time is server local time)
WHERE qs.last_execution_time >= DATEADD(DAY, -1, SYSDATETIME())
-- Exclude trivial one-off executions
AND qs.execution_count >= 5
Read the numbers the right way
High total + low average: popular query. Optimize for throughput (indexing, cached plan quality).
Low total + very high average: rare but slow. Optimize for latency (rewrite, avoid RBAR/scalar UDFs, better joins).
High duration but modest CPU/IO: usually blocking or waits. Check wait_type, blocking_session_id, and missing indexes that cause scans.
Metrics reset when plans get evicted or the instance restarts. Treat them as a rolling window, not forever history.
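When high duration with modest CPU/IO points at waits, a rough cumulative view since the last restart (or stats clear) comes from sys.dm_os_wait_stats. The exclusion list below is a small illustrative sample of benign waits; extend it for your build:

-- Top waits since restart; extend the NOT IN list with benign waits for your build
SELECT TOP 10
    wait_type,
    wait_time_ms,
    waiting_tasks_count,
    wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP', N'XE_TIMER_EVENT', N'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;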
Quick wins to try after you find a culprit
Add the right index (covering where needed). Look at the actual plan’s missing index hints, then design a lean index yourself (don’t blindly accept 12-column monsters).
Kill implicit conversions (mismatched data types, e.g., NVARCHAR vs INT).
Replace SELECT * with exact columns (cuts reads).
Update statistics if they’re stale; consider WITH RECOMPILE for bad parameter sniffing cases (sparingly).
Avoid scalar UDFs in hot paths; inline logic or use APPLY.
Batch big writes; keep transactions short.
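To illustrate the implicit-conversion point (dbo.Accounts and AccountNumber are hypothetical names; assume AccountNumber is an indexed VARCHAR(20) column):

-- NVARCHAR parameter vs VARCHAR column: the column side gets CONVERT_IMPLICIT, forcing a scan
DECLARE @acct NVARCHAR(20) = N'10042';
SELECT AccountId FROM dbo.Accounts WHERE AccountNumber = @acct;

-- Match the column's type and the index can be used for a seek
DECLARE @acct_fixed VARCHAR(20) = '10042';
SELECT AccountId FROM dbo.Accounts WHERE AccountNumber = @acct_fixed;

Data type precedence means the VARCHAR column, not the NVARCHAR parameter, gets converted, which defeats the index.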
Bonus: store a snapshot for trending
If you want a daily/15-minute snapshot to trend over time:
-- One-time setup
CREATE TABLE dbo.TopQuerySnapshot
(
    captured_at     DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
    database_name   SYSNAME,
    executions      BIGINT,
    total_cpu_ms    BIGINT,
    avg_cpu_ms      DECIMAL(18,2),
    avg_duration_ms DECIMAL(18,2),
    total_reads     BIGINT,
    query_text      NVARCHAR(MAX)
);
-- Collector (schedule as an Agent job)
INSERT INTO dbo.TopQuerySnapshot (database_name, executions, total_cpu_ms, avg_cpu_ms, avg_duration_ms, total_reads, query_text)
SELECT TOP 50
    DB_NAME(st.dbid),
    qs.execution_count,
    qs.total_worker_time/1000,
    (qs.total_worker_time/1000.0)/NULLIF(qs.execution_count,0),
    (qs.total_elapsed_time/1000.0)/NULLIF(qs.execution_count,0),
    (qs.total_logical_reads + qs.total_physical_reads),
    SUBSTRING(st.text, (qs.statement_start_offset/2)+1,
        ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text) ELSE qs.statement_end_offset END - qs.statement_start_offset)/2) + 1)
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
Now you can chart trends and prove improvements.
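For example, a simple trend query over the snapshot table (the LIKE fragment is a placeholder for part of the statement you’re tracking):

-- Trend one statement's CPU over time
SELECT captured_at, executions, total_cpu_ms, avg_cpu_ms
FROM dbo.TopQuerySnapshot
WHERE query_text LIKE N'%YourQueryFragment%'
ORDER BY captured_at;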
Common questions
Does this include plans not in cache?
No. DMVs reflect what’s cached. For long-term history, enable Query Store (SQL Server 2016+) and analyze sys.query_store_runtime_stats.
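If you do enable Query Store, a minimal sketch of a top-CPU rollup looks like this (standard catalog views; avg_cpu_time is in microseconds; run it in the target database):

-- Query Store: top statements by total CPU across the captured window
SELECT TOP 20
    qt.query_sql_text,
    SUM(rs.count_executions) AS executions,
    SUM(rs.avg_cpu_time * rs.count_executions) / 1000.0 AS total_cpu_ms
FROM sys.query_store_runtime_stats AS rs
JOIN sys.query_store_plan AS p ON p.plan_id = rs.plan_id
JOIN sys.query_store_query AS q ON q.query_id = p.query_id
JOIN sys.query_store_query_text AS qt ON qt.query_text_id = q.query_text_id
GROUP BY qt.query_sql_text
ORDER BY total_cpu_ms DESC;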
Azure SQL Database?
Same idea. The scripts work with minor differences; in Azure SQL Database the required permission is VIEW DATABASE STATE rather than VIEW SERVER STATE.
What about currently blocked sessions?
Use the “live view” script; chase the blocking_session_id, then inspect that session’s SQL text and plan.
Conclusion
Start with CPU, then IO, then duration. Use the live view if users are complaining right now. Once you spot a heavy hitter, check its plan, add the right index, fix data types, and re-test. Small, focused changes usually deliver big wins.
If you want, share a sample result and I’ll help interpret which knob to turn first.