Resolving the "100 Percent CPU Usage" Problem in SQL Server

Today, we shall discuss a hot topic: how to handle 100% CPU utilization in SQL Server. DBAs face this situation often, and in a production environment it must be resolved quickly, as it hampers business activity in terms of transactions and money.

The methods we are about to discuss help only if SQL Server is the culprit. You first need to confirm that the SQL Server process is the one consuming all the CPU.

I shall cover two methods. They share the same aim, lowering CPU usage, but they are not the same and apply to different scenarios, so they are not used at the same time.

Before beginning, I want to address a myth: it is commonly assumed that long-running queries are the problem, but that is not always true. They certainly can cause trouble, but short queries with a high total worker time or a high execution count can be just as harmful. For example, a query that completes in under a second is rarely a problem on its own, but the same query executed one lakh (100,000) times concurrently can cause a serious issue.

In my experience, SELECT queries are most often the culprit behind SQL Server consuming 100% of the CPU. You can use Task Manager or Resource Monitor to confirm the CPU usage.

Method 1

This is the traditional approach, mostly used by DBAs. When this situation arises, first check for the most intensive processes running on the server. To do this, execute the procedure sp_who2 repeatedly and watch for a SPID whose CPU usage keeps climbing. Next, identify what that session is running with DBCC INPUTBUFFER(<spid>). If it is a SELECT query, you can kill it; you should not kill transactions or queries that INSERT, UPDATE, or DELETE rows in SQL tables.

  1. sp_who2  
  2. DBCC INPUTBUFFER(<spid>)  
  3. KILL <spid>  
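As an alternative to polling sp_who2 by hand, the same check can be sketched with documented DMVs. This is only a sketch, assuming you have VIEW SERVER STATE permission; it lists the user sessions currently consuming the most CPU along with the statement each one is running:

```sql
-- Find user sessions currently burning the most CPU.
-- Requires VIEW SERVER STATE permission.
SELECT r.session_id,
       r.cpu_time,             -- CPU used by the request so far, in ms
       r.total_elapsed_time,
       r.status,
       t.text AS running_sql   -- the statement batch for this request
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id > 50        -- skip SQL Server's internal sessions
  AND r.session_id <> @@SPID   -- skip this monitoring query itself
ORDER BY r.cpu_time DESC;
```

Unlike sp_who2, this lets you filter and sort in one step instead of eyeballing the output repeatedly.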

Note

Look only at SPIDs greater than 50; SPIDs 50 and below are reserved by SQL Server for its internal processes.

Method 2

This is a more granular approach: optimizing the query itself. Here you need to use a few DMVs. SQL Server increments execution_count each time the same cached plan is reused; if the query's definition changes, a new plan is created and its count starts over. You need to find the queries with the highest execution_count and the highest total_worker_time in sys.dm_exec_query_stats. When you find such a record, copy its plan_handle and pass it to another DMV, sys.dm_exec_query_plan, to retrieve the execution plan.

  1. SELECT * FROM sys.dm_exec_query_stats ORDER BY execution_count DESC  
  2. SELECT * FROM sys.dm_exec_query_stats ORDER BY total_worker_time DESC  
  3. SELECT * FROM sys.dm_exec_query_plan(<plan_handle>)  
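The three steps above can also be combined into a single query using CROSS APPLY, so you get the statistics, the query text, and the plan in one result set. A minimal sketch, assuming VIEW SERVER STATE permission; the TOP count and the ORDER BY column are arbitrary choices you can adjust:

```sql
-- Top 10 cached queries by cumulative CPU, with text and plan inline.
SELECT TOP (10)
       qs.execution_count,
       qs.total_worker_time,                            -- total CPU, in microseconds
       qs.total_worker_time / qs.execution_count
           AS avg_worker_time,                          -- CPU per execution
       st.text AS query_text,
       qp.query_plan                                    -- clickable XML plan in SSMS
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.total_worker_time DESC;
```

Sorting by total_worker_time surfaces the queries whose combined CPU cost is highest, which catches both slow queries and cheap queries executed very frequently.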

The query given above returns the execution plan; from its XML view you can see the statement and its parameters. After identifying the costly queries, apply sargable rules, i.e., rewrite predicates so the optimizer can use indexes, to optimize them.
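As an illustration of a sargable rewrite (the table, column, and index referenced here are hypothetical):

```sql
-- Non-sargable: wrapping the column in a function prevents an index seek,
-- forcing a scan of every row.
SELECT OrderId
FROM dbo.Orders                      -- hypothetical table
WHERE YEAR(OrderDate) = 2015;

-- Sargable rewrite: the column is left bare, so an index on OrderDate
-- can satisfy the predicate with a range seek.
SELECT OrderId
FROM dbo.Orders
WHERE OrderDate >= '20150101'
  AND OrderDate <  '20160101';
```

Both queries return the same rows, but only the second lets the optimizer seek on an OrderDate index instead of scanning the table.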

This is how you can trace costly queries and handle 100% CPU utilization.

Note

Take prior approval from the appropriate authority before killing any session. Also note that sp_who2, unlike sp_who, is not documented.