In the age of AI, “Data is the New Gold”, at least according to this Salesforce commercial. That means even more is going to be asked of your databases, whether it’s extracting data into an analytics warehouse, more users querying it directly with SQL, or AI agents generating those queries.
Nobody wants to wait for results, so if your database starts slowing down, you have a couple of choices:
- Throw money at the problem by adding processors, memory, or faster disks
- Tune the database by optimizing indexes
Adding 2 cores to SQL Server Standard will run you $1,418/year for the subscription alone, and that doesn’t include the costs for the processors themselves or the Windows license. With SQL Server Enterprise you are looking at $5,434/year.
Identifying Indexes
To optimize indexes, you can open SSMS, bring up the Performance Dashboard report, and click the Missing Indexes link. This shows the indexes the SQL Server query optimizer flagged as potentially helpful while it was compiling query plans.
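If you prefer a query to clicking through reports, the same information comes from the missing-index DMVs. Here is a minimal sketch; the 50-row limit and the impact-weighting formula are common conventions, not an official SQL Server metric:

```sql
-- Missing index suggestions recorded by the query optimizer,
-- roughly ordered by estimated impact. These counters are cleared
-- when the instance restarts, so check the server has been up a while.
SELECT TOP (50)
    d.statement AS table_name,
    d.equality_columns,
    d.inequality_columns,
    d.included_columns,
    s.user_seeks,
    s.avg_total_user_cost,
    s.avg_user_impact,
    s.user_seeks * s.avg_total_user_cost * (s.avg_user_impact / 100.0)
        AS est_improvement
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g
    ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s
    ON s.group_handle = g.index_group_handle
WHERE d.database_id = DB_ID()          -- current database only
ORDER BY est_improvement DESC;
```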
You should also take a look at the Index Usage Statistics report for the database you are trying to tune. This will show you all of the indexes in the database along with their usage statistics. Indexes are like anything else: you can have too much of a good thing. Unused indexes show up with high numbers of user updates and few or no user seeks and user scans. In other words, SQL Server is doing a lot of work to maintain them but never actually using them to answer queries.
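The report pulls its numbers from sys.dm_db_index_usage_stats, so you can query it directly. A sketch for spotting write-heavy, never-read indexes; the update threshold is illustrative:

```sql
-- Nonclustered indexes with lots of maintenance writes but no reads.
-- Usage counters reset on instance restart, so make sure the uptime
-- covers a representative workload before acting on these.
SELECT
    OBJECT_NAME(i.object_id) AS table_name,
    i.name AS index_name,
    us.user_updates,
    us.user_seeks,
    us.user_scans,
    us.user_lookups
FROM sys.indexes AS i
JOIN sys.dm_db_index_usage_stats AS us
    ON us.object_id = i.object_id
   AND us.index_id  = i.index_id
WHERE us.database_id = DB_ID()
  AND i.type_desc = 'NONCLUSTERED'   -- dropping a clustered index is a different decision
  AND us.user_updates > 1000         -- illustrative threshold
  AND (us.user_seeks + us.user_scans + us.user_lookups) = 0
ORDER BY us.user_updates DESC;
```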
Finally, you should look at queries in the plan cache to see which ones are using the most resources so you can prioritize them. SSMS doesn’t have a report for that, but Brent Ozar Unlimited’s sp_BlitzCache is a great script to use.
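If you would rather not install a third-party script, the raw data lives in sys.dm_exec_query_stats. A rough sketch ranking cached statements by total CPU; sp_BlitzCache does far more, including ranking by reads, duration, or executions:

```sql
-- Top cached statements by total CPU time. Plans age out of the
-- cache, so this reflects recent activity, not all-time history.
SELECT TOP (20)
    qs.total_worker_time / 1000 AS total_cpu_ms,
    qs.execution_count,
    qs.total_logical_reads,
    SUBSTRING(st.text,
              (qs.statement_start_offset / 2) + 1,
              ((CASE qs.statement_end_offset
                    WHEN -1 THEN DATALENGTH(st.text)
                    ELSE qs.statement_end_offset
                END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```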
Once you have all that, you can spend a few hours sifting through the results to find the indexes that will do the most to speed up your database.
Index Compression Options and Fill Factor
SQL Server offers row and page compression as part of its data compression features to reduce storage footprint and improve I/O efficiency. Row compression works by storing fixed‑length data types in variable‑length format, eliminating unnecessary padding (for example, converting CHAR(50) values into just the bytes needed). Page compression goes further by applying prefix and dictionary compression within each 8 KB page, identifying repeating patterns across rows and storing them once. While compression can significantly reduce disk usage and improve buffer cache efficiency, it comes at the cost of additional CPU overhead during reads and writes, so it’s best applied to large, read‑heavy tables or indexes where I/O is the bottleneck.
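Before committing to compression, you can ask SQL Server to estimate the savings and then rebuild with the chosen setting. A sketch, where dbo.Orders and IX_Orders_CustomerId are placeholder names:

```sql
-- Estimate space savings before committing (placeholder table name).
EXEC sp_estimate_data_compression_savings
    @schema_name      = 'dbo',
    @object_name      = 'Orders',
    @index_id         = NULL,     -- NULL = all indexes on the table
    @partition_number = NULL,     -- NULL = all partitions
    @data_compression = 'PAGE';

-- Rebuild a specific index with page compression if the estimate looks good.
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
REBUILD WITH (DATA_COMPRESSION = PAGE);
```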
The fill factor setting, on the other hand, controls how much free space SQL Server leaves on each index page when it is created or rebuilt. A lower fill factor (e.g., 80%) leaves more room for future inserts, reducing page splits and fragmentation in write‑intensive workloads. A higher fill factor (closer to 100%) packs pages more tightly, which is efficient for read‑only or mostly static data but can lead to fragmentation if the table is frequently updated. Together, compression and fill factor are powerful tuning levers: compression optimizes storage and I/O, while fill factor balances read efficiency against write performance. Choosing the right combination depends on workload patterns, hardware resources, and whether your priority is minimizing storage, maximizing throughput, or reducing maintenance overhead.
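Fill factor is applied per index when you create or rebuild it. A short sketch using the same placeholder names and the 80% figure from the example above:

```sql
-- Leave 20% free space on each leaf page to absorb future inserts
-- (placeholder names; 80 matches the example in the text above).
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
REBUILD WITH (FILLFACTOR = 80);

-- Review indexes that override the server default
-- (fill_factor of 0 means the default, i.e. pack pages 100% full).
SELECT OBJECT_NAME(object_id) AS table_name,
       name AS index_name,
       fill_factor
FROM sys.indexes
WHERE fill_factor <> 0;
```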
Index optimization the easy way
If going through this process is not your idea of fun, then try AI SQL Tuner. It will connect to your database, gather all the information described here, and then use the latest AI models such as gpt-5 to analyze it. In just seconds you will have a prioritized set of recommended changes to make to speed up your database.

