A database administrator's core job is to keep databases healthy, make data readily available, and modify it as needed. This is straightforward as long as the database stays healthy and accessible. But when the MDF file, the primary data file of SQL Server, suffers structural corruption, it can throw a wide variety of errors. In this article, we cover some critical checkpoints that administrators should apply to every database to improve its performance and health.
Monitoring and alert automation
A proper management strategy is essential, and it must include a comprehensive system for monitoring and alerting on the key database parameters that most often signal trouble. Keep a close eye on every aspect of your SQL Server environments: instance and server availability, database performance (including cross-server data contention), and job management, including overdue jobs.
As a database administrator, you need to monitor databases across multiple vendors while quickly assessing database status, navigating data and graphs, and finding and correcting performance issues. Alerts and notifications can automatically run scripts that resolve issues before they become real problems.
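As a minimal sketch of this kind of alert automation using SQL Server Agent's documented msdb procedures, the following T-SQL notifies an operator whenever a high-severity error occurs. The operator name and e-mail address are placeholders, not part of the original article:

    -- Create an operator to receive notifications (name and address are examples)
    EXEC msdb.dbo.sp_add_operator
        @name = N'DBA Team',
        @email_address = N'dba-team@example.com';

    -- Fire an alert on any severity 17 (insufficient resources) error
    EXEC msdb.dbo.sp_add_alert
        @name = N'Severity 17 errors',
        @severity = 17;

    -- Send the alert to the operator by e-mail (method 1 = e-mail)
    EXEC msdb.dbo.sp_add_notification
        @alert_name = N'Severity 17 errors',
        @operator_name = N'DBA Team',
        @notification_method = 1;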
Consistency checks
If you have worked with SQL Server, you know there are dozens of documented and undocumented DBCC commands, each serving a different function.
Run DBCC CHECKDB to ensure that you are not backing up a damaged database. SQL Server 2000 introduced improvements such as torn page detection that can flag damaged pages automatically, so strictly speaking you are no longer forced to run DBCC CHECKDB or DBCC CHECKALLOC before every backup. Even so, it pays to be cautious: running DBCC CHECKDB before a backup is still a good way to confirm the overall health of the database. Note that in SQL Server 2000, DBCC CHECKDB takes schema locks.
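A minimal sketch of such a pre-backup check, with the database name as a placeholder:

    -- Full consistency check; NO_INFOMSGS suppresses informational output
    DBCC CHECKDB ('YourDatabase') WITH NO_INFOMSGS;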
Other DBCC commands you may find useful for keeping a database in shape are DBCC SHRINKFILE and DBCC SHRINKDATABASE. Both are handy for reducing the size of database files, and smaller database files are simpler to manage.
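For illustration, here is how the two shrink commands might be invoked; the database and logical file names are placeholders:

    -- Shrink the whole database, leaving 10 percent free space in each file
    DBCC SHRINKDATABASE ('YourDatabase', 10);

    -- Shrink a single data file to a target size of 500 MB
    DBCC SHRINKFILE ('YourDatabase_Data', 500);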
Index maintenance
SQL Server uses indexes to read data from tables efficiently, so it is vital to build appropriate indexes on each table. You should also keep index fragmentation to a minimum, which you can do in the following ways (see the sketch after this list):
- Drop and re-create an index. When multiple indexes on the same table are affected, it is best to drop and re-create the clustered index. Use DBCC DBREINDEX with a table name to re-create one or more indexes on the specified table, or use the CREATE INDEX … WITH DROP_EXISTING syntax to rebuild an index in place.
- Run the DBCC INDEXDEFRAG statement against the affected index. This statement reorders the leaf-level pages of the index to eliminate fragmentation. Unlike a full rebuild, DBCC INDEXDEFRAG is an online operation, so it does not stop users from reading from or writing to the table.
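A minimal sketch of these approaches, with hypothetical table and index names (dbo.Orders, IX_Orders_CustomerID):

    -- Rebuild all indexes on a table, specifying a 90 percent fill factor
    DBCC DBREINDEX ('dbo.Orders', '', 90);

    -- Rebuild a single index in place without dropping it first
    CREATE INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID) WITH DROP_EXISTING;

    -- Defragment an index online, leaving the table available to users
    DBCC INDEXDEFRAG ('YourDatabase', 'dbo.Orders', 'IX_Orders_CustomerID');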
Updating statistics
Keeping index statistics up to date is just as important for maintaining a high level of performance as minimizing index fragmentation.
Index statistics tell SQL Server how key values are distributed within an index. If the statistics are outdated, SQL Server cannot evaluate an index's usefulness accurately and may scan the whole table instead of using the index.
Regularly updating statistics helps ensure that indexes are actually used. A good practice is to schedule a job that executes the UPDATE STATISTICS statement, or you can run the system procedure sp_updatestats. The difference between the two is –
- UPDATE STATISTICS takes a table (or view) or index name and updates statistics on that specified object only.
- sp_updatestats runs a cursor over all user tables in the database and updates statistics on each of them.
Updating statistics on large tables with multiple indexes can take a long time, so experts recommend handling such tables separately. For instance, you might run UPDATE STATISTICS on small tables and on large tables on alternate days.
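A minimal sketch of both options, with dbo.Orders as a placeholder table name:

    -- Update statistics on one table, sampling every row for maximum accuracy
    UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

    -- Or refresh statistics on all user tables in the current database
    EXEC sp_updatestats;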
Adjust the max server memory limit
Recent 64-bit versions of SQL Server have improved memory allocation, but a common problem administrators face is choosing a value for the max server memory setting. Even if the host server is dedicated to SQL Server, other processes still need adequate memory. A good rule of thumb is to set max server memory at least 1 to 2 GB below the total physical memory on the server. How much memory you need to leave depends on what else is running and how much memory it requires to operate properly.
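As a sketch, this is how you might cap SQL Server at 30 GB on a server with 32 GB of RAM (the figures are examples only):

    -- 'max server memory (MB)' is an advanced option, so expose it first
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Leave roughly 2 GB of a 32 GB server for the OS and other applications
    EXEC sp_configure 'max server memory (MB)', 30720;
    RECONFIGURE;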
SQL Server databases grow in volume every year, and senior management keeps asking administrators to deliver higher levels of data availability and performance on flat budgets. The five essential tips discussed above can help you manage your SQL database assets effectively, eliminate redundancies, and reduce the need for additional skills to manage SQL databases.