• Set up a user with read/write permissions on the central storage database, to be used for the linked server credentials.
• Use the new user credentials you have just created to set up a linked server on the SQL Server on which you want to run an automated trace.
• In the usp_startTrace and usp_stopTrace scripts, locate the calls (all noted in the comments) that point to a generic linked server (MYSERVER123) and modify them to reflect the name of your central trace data repository server.
• Run the create scripts on each of the servers that will be performing the traces.

Once these stored procedures are installed, you can begin starting and stopping traces via a query window/query analyzer, or through SQL Agent jobs.

Testing

Here is a quick example that demonstrates how to find the cause of all this grief for our beloved program manager. Create a new SQL Agent job to kick off at 5:30 PM (after business hours), consisting of only one step, as shown in Figure 7.6. In this step, execute the start trace procedure with the parameters needed to gather only the data relevant to the issue at hand.

Figure 7.6: Create a SQL Agent job to run usp_startTrace.

This will produce a trace named TruncateTrace. The trace file will be stored in the root of the C drive, the maximum space the trace file should take is 10 MB, and we will place a filter on the first column (text data), looking for any instances of the word "truncate". The last three parameters are optional and default to 5 (trace file size in MB), 0 (no trace column) and NULL (no filter keyword), respectively. If you do not specify these parameters then a bare-bones trace will be created, with a maximum file size of 5 MB, and it will perform no filtering, so you will get all available data from the trace events.

Next, create another job to run at 6:00 AM, calling usp_stopTrace with the same trace name, as shown in Figure 7.7.

Figure 7.7: Create a SQL Agent job to run usp_stopTrace.

This will stop any trace named TruncateTrace on this particular server and export all of the collected data into the repository table (trace_table or trace2k_table) on the linked data collection server. Any older information will have been archived to the trace_archive (or trace2k_archive) table. All data is marked with the server name, so we can still filter the archive table to look at older data from any server. The trace file is also removed from the traced server, so the filename will be available for future use. This requires that xp_cmdshell is available for use by the SQL Agent service account.

From this point, all we have to do is look through our newly acquired trace_table data for the suspect (a sketch of these calls appears at the end of this section).

I hope that these scripts can make life a little easier for those of you out there who do not run full auditing on all of your SQL servers. The trace scripts can easily be modified to include other columns and other trace events. I am presenting them as a springboard for any DBA out there who needs an automated solution for Profiler tracing. If you do want to add any events or trace columns, http://msdn2.microsoft.com/en-us/library/ms186265.aspx provides a complete list of all trace events and available data columns.
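Purely as an illustration, here is a rough sketch of how the two procedure calls and the final lookup query might look. The parameter names and their order are assumptions based on the description above (trace name, file path, maximum file size, filter column and filter keyword), as are the column names queried from trace_table, so check everything against your copy of the create scripts before using it.

-- Step 1 of the 5:30 PM SQL Agent job: start the trace.
-- Parameter names are assumed; verify them against the usp_startTrace script.
EXEC dbo.usp_startTrace
    @TraceName     = 'TruncateTrace',
    @TraceFilePath = 'C:\',
    @MaxFileSizeMB = 10,          -- optional, defaults to 5
    @FilterColumn  = 1,           -- optional, 1 = text data; defaults to 0
    @FilterKeyword = 'truncate';  -- optional, defaults to NULL
GO

-- Step 1 of the 6:00 AM SQL Agent job: stop the trace and push the collected
-- rows to the central repository across the linked server.
EXEC dbo.usp_stopTrace @TraceName = 'TruncateTrace';
GO

-- Back on the central collection server: hunt for the suspect.
-- Column names (servername, starttime, textdata) are illustrative only.
SELECT servername, starttime, textdata
FROM   dbo.trace_table
WHERE  textdata LIKE '%truncate%'
ORDER BY starttime;

If your copies of the procedures were written with a different signature, only the two EXEC statements need to change; the SQL Agent job steps themselves are just T-SQL steps running these calls.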
In any event, the next time you encounter a red-faced program manager, demanding to know who truncated his tables, much job satisfaction can be gained from being able to respond with something along the lines of: "So <Manager Name>, we have been tracing that server all week and it seems that one of the DTS packages you wrote, and have running each night, is the problem. It is truncating the table in question each morning at 4:00 AM. Don't be too hard on yourself though. We all make mistakes."

Summary

All SQL Server DBAs are tasked with securing the SQL Servers they manage. While it is not the most glamorous of tasks, it is one of the most important aspects of being a DBA, if not the most important. This is especially true in a world where compromised data results in large fines, humiliation and the potential loss of the coveted job that you were hired to do.

Knowing who has access to the data you oversee is the first step. Working to alleviate potential threats, either harmfully innocent or innocently harmful, is essential. The scripts provided here will assist you in isolating and resolving these threats. There is so much more to securing SQL Server, and I have only touched on the obvious first line: user accounts and logins, error logs, DDL triggers, and server-side tracing.

Next and last up is the topic of data corruption, which ranks right up there with security in terms of threats to the integrity of the DBA's precious data. I'll show you how to detect it, how to protect yourself and your databases and, most importantly, what to do when you realize you have a problem … which, statistically speaking, you will eventually. Be afraid; I saved the monster at the end of the book until the end of the book. Don't turn the data page.

CHAPTER 8: FINDING DATA CORRUPTION

I have mentioned the monster at the end of this book a couple of times previously. This being the final chapter, it is time for the monster to be revealed. The monster can be a silent and deceptive job killer. It can strike at once or lie in wait for weeks before launching an attack. No, I am not talking about developers; I am talking about database corruption.

If you have been a DBA for long enough, you will have encountered the data corruption monster in at least one of its many forms. Often, corruption occurs when there is a power failure and the server, rather than shutting down gracefully, simply dies in the middle of processing data. As a result of this, or some other hardware malfunction, data or indexes become corrupt on disk and can no longer be used by SQL Server until repaired.

Fortunately, there are several steps you can take to protect your data and, equally important, your job, in the event of data corruption. First and foremost, it should go without saying that not having a good backup strategy is equivalent to playing solitary Russian Roulette. However, I'll also demonstrate a few other techniques, based around the various DBCC commands, and a script that will make sure corruption issues are discovered and reported as soon as they occur, before they propagate through your data infrastructure. Hopefully, suitably armed, the DBA can limit the damage caused by this much-less-friendly version of the monster at the end of the book.

P.S. If you are unfortunate enough never to have read The Monster at the End of This Book (by Jon Stone, illustrated by Michael Smollin,
Golden Books), starring the lovable Grover Monster from Sesame Street, you have, firstly, my sympathy and, secondly, my apologies, because the previous references will have meant little to you. I can only suggest you buy it immediately, along with The Zombie Survival Guide (by Max Brooks, Three Rivers Press), and add them both to the required reading list for all new DBAs.

Causes of corruption

There are many ways in which a database can become "corrupt". Predominantly, it happens when a hardware malfunction occurs, typically in the disk subsystem that is responsible for ensuring that the data written to disk is the exact same data that SQL Server expected to be written to disk when it passed along this responsibility.
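Detection and repair are covered in the rest of the chapter; purely as orientation, here is a minimal, hand-rolled sketch of the kind of DBCC check those techniques revolve around. It is not the monitoring script mentioned above, and the database name is a placeholder.

-- Run a full integrity check; NO_INFOMSGS suppresses the informational chatter,
-- so any output at all means there is a problem to investigate.
DBCC CHECKDB ('YourDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;

-- On SQL Server 2005 and later, the (undocumented) DBCC DBINFO output includes
-- dbi_dbccLastKnownGood, the date of the last clean CHECKDB for the database.
DBCC DBINFO ('YourDatabase') WITH TABLERESULTS;

Running the CHECKDB call on a schedule, and alerting on any error output, is the simplest way to catch corruption before it propagates through your data infrastructure.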