Wednesday, November 21, 2012

SQL Error in the Post Snapshot File for Transactional Replication

When using replication, it is sometimes useful to run a set of commands on the subscribing database either before or after the initial snapshot has been applied. By using a pre-snapshot file, a DBA could remove any dependencies in the subscribing database that would keep the table from being dropped in the event of a re-initialization. We use post-snapshot files to create indexes on the subscribing database that do not exist on the publisher. To find out whether or not pre/post snapshot files are being used, right click on the publication and choose properties. From there, select snapshot and look under the “Run additional scripts” section.
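For example, a post-snapshot file might contain nothing more than the subscriber-only index creation statements. The table and index names below are hypothetical, for illustration only:

```sql
-- Hypothetical post-snapshot script: create an index that exists
-- only on the subscriber. Names are examples, not from a real publication.
IF NOT EXISTS (SELECT 1 FROM sys.indexes
               WHERE name = 'IX_Orders_CustomerID'
                 AND object_id = OBJECT_ID('dbo.Orders'))
BEGIN
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
        ON dbo.Orders (CustomerID);
END
```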

So when the subscription is initialized, a snapshot is prepared and placed in the default folder. Along with the bulk copy files, any pre- or post-snapshot files you are using are placed in there too. The snapshot is then applied to the subscriber and, in this case, a post-snapshot script is run against the subscribing database. While the snapshot and the post-snapshot script are being run, updates to the replicated tables are queued for replication in the distribution database. These will be replicated to the subscriber once the snapshot has been applied and the post-snapshot script has completed.
But what happens in the event that the post-snapshot script has an error? The distribution agent considers this to be a failure and retries the post-snapshot script over and over again. While this is happening, updates that are queued for distribution are not being applied to the subscriber. How can we fix this?

One option is to open the post-snapshot file at the location under “Run additional scripts”, fix the error, reinitialize the subscription and re-create the snapshot. This might not be the best option depending on the size of your snapshot and the time it takes to generate it.
The second option is to directly open the post-snapshot file that was copied into the folder when the snapshot was created. By editing the post-snapshot file in the folder specified in the “Put files in the default folder” option, we are directly modifying the SQL that the distribution agent is executing and failing on. If time is critical, you may want to remove all SQL from the file and save it. Note: If you do this, you will have to manually apply the SQL to the subscriber. Once the error in the file is either fixed or the file is cleaned out, the distribution agent will open the file and complete successfully. After that, all the queued updates, inserts and deletes can flow from the publisher to the subscriber.
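To see the exact error the distribution agent keeps retrying, you can check the agent's history in the distribution database. This sketch assumes the default distribution database name, distribution:

```sql
-- Most recent distribution agent history entries, newest first.
-- Assumes the distribution database uses the default name "distribution".
SELECT TOP (10) [time], error_id, comments
FROM   distribution.dbo.MSdistribution_history
ORDER BY [time] DESC;
```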

Tuesday, July 17, 2012

SQL Reporting Services (SSRS) 2008 R2 Administration Scripts

Below are a few SQL Reporting Services administration scripts that I've created to help me audit usage of the system.

1. Report Usage Count By User For The Previous 30 Days

SELECT  UserName
       ,COUNT(*) AS TimesExecuted
FROM    ReportServer.dbo.ExecutionLog3
WHERE   ItemPath LIKE '%<ReportName>%'
AND     TimeStart > GETDATE() - 30
GROUP BY UserName
ORDER BY TimesExecuted DESC


2. Report Usage With Parameters, Row Count, Execution Time, Format and User Name

SELECT  TimeStart
       ,UserName
       ,Parameters
       ,[RowCount]
       ,Format
       ,TimeDataRetrieval + TimeProcessing + TimeRendering AS [TimeInMilliseconds]
FROM    dbo.ExecutionLog3
WHERE   ItemPath LIKE '%<ReportName>%'
AND     TimeStart BETWEEN '3/10/12' AND '6/11/12'


3. Report Location with Creation Date, Last Execution Date, and Usage Count

SELECT  Path
       ,CreationDate
       ,E.TimesExecuted
       ,E.LastExecuted
FROM    Catalog
JOIN    (SELECT ItemPath
               ,COUNT(*) AS TimesExecuted
               ,MAX(TimeStart) AS LastExecuted
         FROM   ExecutionLog3
         GROUP BY ItemPath) E
ON      Catalog.Path = E.ItemPath
WHERE   Path LIKE '/Reports%'

Monday, July 16, 2012

Subsystem 'CmdExec' could not be loaded (reason: The specified module could not be found)

After a recent server crash, we had the opportunity to rebuild the SQL Server instance from the previous night's backups. After going through the various steps to restore the Master and MSDB databases, as well as change the location of the tempdb, resource and model databases, we thought everything was set.

Over the weekend, the server in question was set to back up user databases to a file share using SQLCMD. Upon starting the job, the SQL Agent returned the following error message: "Unable to start execution of step 1 (reason: The CMDEXEC subsystem failed to load [see the SQLAGENT.OUT file for details]; The job has been suspended).  The step failed."

After looking through the SQLAGENT.OUT file (which yielded the same error message), we took a look at the MSDB..SYSSUBSYSTEMS table. Looking in the subsystem_dll column, the issue presented itself.

 SELECT subsystem, subsystem_dll
   FROM msdb..syssubsystems
  WHERE subsystem = 'CmdExec'

subsystem  subsystem_dll   
CmdExec    D:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\binn\SQLCMDSS90.DLL   

During the server rebuild, the SQL Server program files were installed under the C:\Program Files directory, whereas on the crashed server installation, all the old binaries were installed under D:\Program Files. Since we restored the MSDB database from an old backup, those values overwrote the values from the new server install.

In order for the binaries to load, the MSDB..SYSSUBSYSTEMS table had to be updated.

UPDATE msdb..syssubsystems
   SET subsystem_dll = REPLACE(subsystem_dll,'D:\','C:\')
 WHERE subsystem = 'CMDEXEC'

Once the subsystem_dll column was updated to point to the C:\Program Files directory, we restarted the SQL Agent service and checked the SQLAGENT.OUT file again. The error message was gone, and the job completed successfully after we restarted it.

Tuesday, May 22, 2012

SQL Reporting Services 2008 Execution Log History

In SQL Reporting Services 2008, you can set the Execution Log retention to keep more than the default 60 days by updating the ConfigurationInfo table inside the ReportServer database. Inside that table is a value labeled ExecutionLogDaysKept, which can be updated to any value up to 2,147,483,647, with 0 meaning that all history is kept indefinitely. You can update the value to keep one year's (365 days) worth of history with the script below.

UPDATE  ConfigurationInfo
   SET  Value = '365'
 WHERE  Name = 'ExecutionLogDaysKept'
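Before making the change, you can check the current retention setting with a quick query:

```sql
-- Check the current execution log retention setting.
SELECT Name, Value
FROM   ReportServer.dbo.ConfigurationInfo
WHERE  Name = 'ExecutionLogDaysKept';
```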

Friday, April 27, 2012

Dynamic Connection Strings in SSIS

If you need the ability to change connection strings in your SSIS packages without opening each one and hard coding a string, then dynamic connection strings are your answer. There are many different ways to do this, including Package Configurations and Parameters (2012), but here I’ll be explaining how to use variables to dynamically change the connection string using information stored in a flat file or database table that can be manipulated without opening the SSIS package. This makes server renames / upgrades / moves much easier since the information only has to be changed in one place.

Starting out, we’ll need the source data. I’ve created a table with 2 columns, ConnectionName and ConnectionString along with some test data.

CREATE TABLE ConnectionStrings
(ConnectionName VARCHAR(256),
ConnectionString VARCHAR(256))

INSERT INTO ConnectionStrings
VALUES  ('<ConnectionName>',
'Data Source=<Server>;Initial Catalog=<Database>;Provider=SQLNCLI.1;Integrated Security=SSPI;Auto Translate=False;')

Once the data has been populated in the table, we can open up a new SSIS project in the SQL Server Data Tools (2012) or BIDS (2008/2005) application and add a new OLE DB Connection. At this point it doesn’t matter what connection string is used since it will be overwritten at execution time, but for testing purposes you will want to point it to your test system.

Next, we will create a new variable scoped to the package called SourceConnectionString with a Data Type of String.

Now, select the Source connection in the Connection Manager and expand Expressions. This is where we’ll assign the SourceConnectionString variable to the ConnectionString of the Source connection via expressions.
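In the Property Expressions Editor, the mapping is a single line; the variable name matches the one created above:

```
Property:   ConnectionString
Expression: @[User::SourceConnectionString]
```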

Finally, we need to populate the SourceConnectionString variable by querying the connection string table we setup in the first step. Once we have the results from the query, we’ll assign the results to the variable, which in turn assigns a connection string to the Source connection. This can be accomplished by doing the following.
Under the Execute SQL Task Editor, the result set should be set to Single Row and the SQL Statement should read “Select ConnectionString From ConnectionStrings”
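If the ConnectionStrings table ever holds more than one row, the statement should filter on the name column so that exactly one string comes back for the Single Row result set, for example:

```sql
-- Return exactly one connection string for the Single Row result set.
SELECT ConnectionString
FROM   ConnectionStrings
WHERE  ConnectionName = '<ConnectionName>';
```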

Next, under Result Set, assign Result Name 0 to the variable that you created for the connection string.

Now that everything is set, the Source connection string value will be filled with the value that you placed in the ConnectionStrings table.

Wednesday, March 21, 2012

Connect Item: Filter Tables and Views in SQL Server Import and Export Wizard

If you've ever used the SQL Server Import and Export Wizard inside SSMS on databases with hundreds of objects, you know that it can be difficult to navigate the source tables and views grid looking for the object or objects you need. So to alleviate this, I've created a Connect item to add the ability to filter objects when selecting the source tables and views. If you've had this problem or think this is a worthwhile suggestion, please vote it as important.

You can view the Connect item here:


Tuesday, January 17, 2012

Extracting a Query Execution Plan from the Procedure Cache

From SQL Server 2005 forward, it has been possible to view the execution plan of a query that has been executed by someone other than yourself. This is extremely useful on procedures that you, as a DBA, would not want to execute in production because of data modification logic, but that are used every day as part of the application. Finding out how the procedure is running and why it is running so poorly without having to execute it also saves a lot of setup time, since many procedures require an extensive set of input parameters that we may or may not know.

First, a little setup. We need to create a test stored procedure that we will be looking for in the procedure cache.

CREATE PROCEDURE KevinsTestProc
AS
  SELECT TOP 1 * FROM sys.objects

After we’ve created the stored procedure, execute it a couple times so that SQL Server will cache it. Once it is there, we can start querying various execution related DMV’s that will eventually show us the execution plan.

EXEC KevinsTestProc

Since there can be an enormous number of plans in the procedure cache, the first thing we need to help narrow down the search is some text that is unique to that procedure, like the procedure name. By querying the sys.dm_exec_cached_plans DMV and using the sys.dm_exec_sql_text DMF, we can get the plan handle that will be used to extract the execution plan.

SELECT plan_handle,usecounts, cacheobjtype, objtype, size_in_bytes, text, query_plan
FROM sys.dm_exec_cached_plans
CROSS APPLY sys.dm_exec_sql_text(plan_handle)
CROSS APPLY sys.dm_exec_query_plan(plan_handle)
WHERE usecounts > 1
AND text LIKE '%KevinsTestProc%'
AND objtype = 'Proc'
ORDER BY usecounts DESC;


We can see from here that we have the plan handle, how many times this stored procedure has been executed, how large the plan is, the text inside the cached plan, and an XML representation of the execution plan generated at compile time.

Now that we have this we can click on the XML hyperlink in the results window to open up the execution plan in SSMS.

Tuesday, January 10, 2012

Finding Open Cursors (And Their Locks) In SQL Server

If you need information regarding open cursors on your SQL Server, you can query the sys.dm_exec_cursors DMV to get information on when they were created and their various properties. By combining that DMV with the sys.dm_exec_sessions DMV and the sys.dm_exec_sql_text DMF, we can also get the SQL that was used to create the cursor.

SELECT  c.session_id, c.name, c.properties, c.creation_time, t.text
FROM    sys.dm_exec_cursors(0) c
JOIN sys.dm_exec_sessions s
ON c.session_id = s.session_id
CROSS APPLY sys.dm_exec_sql_text(c.sql_handle) t

If you need to see if one of those cursors is holding locks on your system, you can use the sys.dm_tran_locks and sys.dm_exec_sessions DMV 's along with the sys.partitions catalog view to see the cursor and the table it has locked.

SELECT  OBJECT_NAME(P.object_id) AS TableName
       ,L.request_session_id
       ,L.request_mode
       ,L.request_status
FROM    sys.dm_tran_locks L
JOIN    sys.partitions P
ON      L.resource_associated_entity_id = P.hobt_id
JOIN    sys.dm_exec_sessions s
ON      L.request_session_id = s.session_id
--JOIN sys.dm_exec_connections c
--ON s.session_id = c.most_recent_session_id
--CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) t
WHERE   L.request_owner_type = 'CURSOR'
ORDER BY L.request_session_id

Wednesday, January 4, 2012

Starting a SQL Agent job with Powershell and Windows Scheduled Tasks

In the past, I’ve needed to start SQL Agent jobs using Windows Scheduled Tasks. Using Powershell, this is now easier than ever.

First, let’s create the Powershell script. It’s very basic and relies on Windows Authentication and SQL Native Client to connect to the SQL Server.

#Vars for Server and JobName
$Server = "<Server>"
$JobName = "<JobName>"

#Create/Open Connection
$sqlConn = new-object System.Data.SqlClient.sqlConnection "server=$Server;database=msdb;Integrated Security=sspi"

#Create Command Obj
$sqlCommand = $sqlConn.CreateCommand()
$sqlCommand.CommandText = "EXEC dbo.sp_start_job N'$JobName'"

#Exec Command
$sqlConn.Open()
$sqlCommand.ExecuteNonQuery() | Out-Null

#Close Connection
$sqlConn.Close()

In a nutshell, this script will need you to set 2 variables, $Server and $JobName. Once those have been set, the script will open up a connection to the server, execute the sp_start_job command and then close the connection.

Next, we want to start the Powershell script via Windows Scheduled Tasks. When setting up the task, choose "Start a program" under Action and type powershell.exe . We specify the Powershell script to execute in the additional arguments. By adding &'\\<filepath>\StartTest_SQL_AgentJob.ps1' Powershell will start up and execute the script.
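As an example, the "Add arguments" box might contain something like the line below. The -ExecutionPolicy switch is an assumption on my part; adjust it to whatever your environment's script execution policy requires:

```
-ExecutionPolicy Bypass -Command "&'\\<filepath>\StartTest_SQL_AgentJob.ps1'"
```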

Note: If your <filepath> has spaces in it, you will need the "&", otherwise it can be omitted.