CPU Usage measures the percentage of available CPU being used by your SQL Database or Elastic Pool. The total CPU available depends on your service tier and the configured number of vCores or DTUs. Blue Matador automatically monitors the cpu_percent metric for both SQL Databases and Elastic Pools to detect anomalies in CPU usage and to detect when CPU usage is near 100%.
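You can also check recent CPU usage from inside the database itself. The query below is a minimal sketch that reads sys.dm_db_resource_stats, which keeps roughly the last hour of resource usage at 15-second intervals; Elastic Pools expose a similar view, sys.elastic_pool_resource_stats, in the master database of the logical server.

```sql
-- Recent resource usage for the current database, in ~15-second intervals
-- covering roughly the last hour (sys.dm_db_resource_stats).
SELECT
    end_time,
    avg_cpu_percent,
    avg_data_io_percent,
    avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
```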
Anomalous CPU usage can indicate that access patterns have changed in your application, that slow queries have been introduced, or that the database is near its capacity limits. Running at consistently high CPU usage, especially near 100%, will hurt the performance of your database and should be addressed as soon as possible. Specific issues may include:
Diagnosing and fixing database performance issues is a complicated and involved process. Microsoft provides a comprehensive troubleshooting guide for SQL Databases here, but the basic techniques below will get you started.
First, determine whether an upstream service is causing increased load on your database. In an emergency, it may be quicker and easier to reduce load from upstream sources than to do database maintenance on the spot.
When looking at high CPU usage on a database, the culprit is very likely to be one of the following:
Azure SQL provides Dynamic Management Views (DMVs) on your databases that let you quickly identify problematic queries. Follow this documentation to inspect the currently running queries and determine a fix for them; a starting point is sketched below.
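As a starting point, queries along these lines show what is running right now and which cached query plans have consumed the most CPU overall. The exact columns you need may vary, and Microsoft's guide linked above goes much deeper.

```sql
-- Currently executing requests, ordered by CPU time consumed so far.
-- Run this while CPU is high to spot the queries responsible.
SELECT
    r.session_id,
    r.status,
    r.cpu_time,            -- milliseconds of CPU used by the request
    r.total_elapsed_time,  -- milliseconds since the request started
    t.text AS query_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID   -- exclude this diagnostic query itself
ORDER BY r.cpu_time DESC;

-- Top CPU consumers from the plan cache since it was last cleared
-- (total_worker_time is reported in microseconds).
SELECT TOP 10
    qs.total_worker_time / 1000 AS total_cpu_ms,
    qs.execution_count,
    t.text AS batch_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS t
ORDER BY qs.total_worker_time DESC;
```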
If your database's usage has grown steadily and no slow queries are to blame, the correct solution may be to upgrade the database. You can increase its DTUs, add vCore capacity, or move to a higher service tier altogether. For read-heavy workloads, consider adding read replica databases to spread read activity over multiple databases.
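If you decide to scale up, the change can be made in the Azure portal, with the Azure CLI, or with T-SQL. The statements below are only a sketch: 'mydb', 'S3', and 'GP_Gen5_4' are placeholders for your own database name and target service objective, and scaling operations can briefly drop active connections when they complete.

```sql
-- Sketch: scale an existing Azure SQL Database with ALTER DATABASE.
-- Database name, edition, and service objective below are placeholders.

-- DTU model: move to the Standard S3 service objective.
ALTER DATABASE [mydb] MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S3');

-- vCore model: move to a 4-vCore General Purpose (Gen5) objective.
ALTER DATABASE [mydb] MODIFY (EDITION = 'GeneralPurpose', SERVICE_OBJECTIVE = 'GP_Gen5_4');
```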