Category Archives: SQL

exorcising ghosts

ERROR: Throwing Microsoft.ReportingServices.Diagnostics.Utilities.ReportServerStorageException: , An error occurred within the report server database. This may be due to a connection failure, timeout or low disk condition within the database.;

Subscriptions failing. Cryptic messages. “Failure sending mail:Failure sending mail”. Time to check out the logs. I’m running SQL Server 2016.

In SQL Server Reporting Services 2016, the logs live at: C:\Program Files\Microsoft SQL Server\MSRS13.MSSQLSERVER\Reporting Services\LogFiles

Where to start…

Do I look at permissions? Service Accounts? Network conditions? Disk Space/Queue Length?

This was it. This was the error filling our SSRS logs at a rate of over 9,000 entries per second, producing a 32 MB log file every minute, for the last four and a half days: 483 GB in all. The storage volume still had 1.10 TB free of 1.72 TB, so I could breathe a little easier. CPU wasn't spiking, memory allocations looked healthy, there were no authentication issues, and network connectivity was strong. So what was happening?

SQL Server Profiler, you wonderful tool, to the rescue. I began a trace and found the gremlin instantly:

exec msdb.dbo.sp_delete_job @job_name=N'72383D9B-8662-4973-B71F-402DFAC50F1C'
go
exec msdb.dbo.sp_delete_job @job_name=N'72383D9B-8662-4973-B71F-402DFAC50F1C'
go
exec msdb.dbo.sp_delete_job @job_name=N'72383D9B-8662-4973-B71F-402DFAC50F1C'
go
exec msdb.dbo.sp_delete_job @job_name=N'72383D9B-8662-4973-B71F-402DFAC50F1C'
go
exec msdb.dbo.sp_delete_job @job_name=N'72383D9B-8662-4973-B71F-402DFAC50F1C'
go

This is what happens when your server attempts to delete a ghost: a SQL Agent job that has passed and is no longer with us. The resolution was quite simple: bring the job back to life. Create a new job with the same name (the GUID from the trace); its purpose is unimportant. Within moments the job will be deleted for good. We mourn the loss, all except the log files, which take a much needed rest.
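In T-SQL, reviving the ghost is a one-liner (a sketch; the GUID is the job name taken from the trace above, and no steps or schedules are needed):

```sql
USE msdb;
GO
-- Recreate the missing job under the exact name SSRS keeps trying to delete.
EXEC dbo.sp_add_job @job_name = N'72383D9B-8662-4973-B71F-402DFAC50F1C';
GO
-- On its next pass, SSRS finds the job, its sp_delete_job call finally
-- succeeds, and the logging loop ends.
```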

Happy Troubleshooting!

Empty string returned when concatenating a field to a space with REPEAT or REPLICATE

I was creating a query against an InterSystems Caché database where the fields needed to be fixed-length for export to a flat file. For a field containing data of length 4 against a file specification of length 10, I needed to concatenate 6 spaces to the end of the data. On other systems I've always accomplished this with a REPEAT or REPLICATE function, calculating the number of spaces as the flat-file specification length minus the data length. That query would look like this:

SELECT field || REPEAT(' ', 10 - LEN(field)) FROM Table

In Caché, though, what came back was a completely empty string. Knowing NULL values can cause this, I made sure there was data in the field. I tried different concatenation operators, || and {fn CONCAT()}, but got the same result. After pecking around, I found a different function, SPACE(), that does exactly what I needed: it returns a string consisting of a specified number of spaces. I tried this in my query and finally got the expected results:

SELECT field || SPACE(10 - LEN(field)) FROM Table

I’m still not sure why REPEAT and REPLICATE were behaving like this, but I’m glad there is a viable option.
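One caveat worth guarding against (a sketch, assuming some values might exceed the spec width; I have not confirmed how Caché's SPACE() handles a negative count, but NULL or an empty string are both plausible): if LEN(field) is ever greater than 10, the concatenation can collapse again, so a CASE guard is cheap insurance:

```sql
SELECT CASE
           WHEN LEN(field) >= 10 THEN SUBSTRING(field, 1, 10)  -- truncate to spec width
           ELSE field || SPACE(10 - LEN(field))                -- pad to spec width
       END
FROM Table
```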

View Performance / SQL Server Profiler

A couple of days ago I pushed some changes to our custom ePrescription application into production.  The changes had been made a couple of weeks earlier and tested fine.  However, after deploying we began to receive calls from angry doctors about how slow the system was and how long it was taking to complete tasks.

During this same period, we had a major network issue with our anti-virus software (name withheld as we are awaiting an official response from the company).  A routine upgrade of the anti-virus software brought our entire network to a halt.  The upgrade closed all network connections on every server and client machine until a reboot was initiated.  We wound up having to manually connect to every server in our agency (30+ at last count), click a confirmation on a dialog box to finish the installation, and reboot the machines.  A mass email was also sent out for users to perform the same steps.  Eventually the problem was taken care of and network traffic resumed.

Because of this failure, I wasn't sure whether the complaints about the ePrescription software were related to the anti-virus debacle or to the code changes I had made.  The answer became clear when, after the network cleared up, the application's performance was still suffering.  My job then became to analyze the queries running from the application.  For this I turned to SQL Server Profiler.  If you haven't used this awesome tool, I recommend you begin immediately.  With Profiler, you can set up a trace that will listen to and display information about all kinds of events and activities being executed on a SQL Server.  For my purposes, I selected the built-in Tuning trace template and filtered it to TSQL statements against the database used by the software.  I ran the trace for approximately 10 minutes and was able to see immediately that one query was taking anywhere from 8 to 15 seconds to execute, and it was a query that I had put in as part of the upgrades.

I don't know why I didn't catch this in testing.  Our test server is extremely slow by nature.  It has been beaten up to no end, and I may have subconsciously written off the slow execution as just a result of running on the test system.  I won't make that mistake again.

The query causing all the commotion was relatively simple: selecting four columns from a table joined with a view.  I remembered a past project that I had taken over from a departed employee.  There was a function used to delete records from a table, and when run it would often cause the ColdFusion server to time out.  Upon analysis, I saw that this function was calling a view that in turn called another function.  I changed the procedure to call one function only, and the process completed in no more than 2 seconds after that.  I figured the view in my new query was similarly to blame.  I did some research and came across an article that explains the performance hits encountered when using views, especially when joining.  I restructured the query to omit the view, and it now runs in an average of 20 milliseconds.  That's a far better figure than 8 to 15 seconds!  I've just deployed the new query into production and am confident that our doctors will have a much better experience tomorrow.
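For illustration, the kind of restructuring involved looks roughly like this (all table, view, and column names below are invented, not the actual schema):

```sql
-- Before: join through a view that itself wraps a multi-table join.
SELECT rx.PrescriptionID, rx.DrugName, p.PatientName, p.DOB
FROM dbo.Prescriptions AS rx
JOIN dbo.vPatientDetails AS p       -- view hiding several joins
    ON p.PatientID = rx.PatientID;

-- After: join directly to the one base table the query actually needs,
-- so the optimizer no longer has to expand the whole view definition.
SELECT rx.PrescriptionID, rx.DrugName, p.PatientName, p.DOB
FROM dbo.Prescriptions AS rx
JOIN dbo.Patients AS p
    ON p.PatientID = rx.PatientID;
```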

Hope this helps.  Happy profiling / analyzing / viewing!

 

Testing ODBC Data Sources

We do a ton of document imaging where I work.  As a healthcare organization, we have made it a priority to work toward our ultimate goal of becoming a paperless entity.  For now, most printed documents we use are scanned in and saved to a dedicated SQL Server.  The software that handles this is third-party, and the connections that handle the data exchange are built in and permanently set.  The software does allow us to alter export settings for individual documents, as was the case for a client satisfaction survey we are required to implement.  These documents were set up to have their data stored in our primary clinical SQL Server, with an SSIS package run monthly to collect the reportable data and send it to the responsible insurance agency.

As usual, specifications for the surveys were changed quite dramatically, which resulted in a need to redo the surveys, as well as the database tables that hold the data and the package that exports it.  As the DBA, it was my job to set up the tables, the SSIS package, and the ODBC System DSN entries that would allow the data to be exported to our primary SQL Server.  Our report writer and document master created the new surveys, and we sat down to test them out.

All documents scanned into the system are subject to verification of data, a semi-automated process that requires user intervention whenever the software is unable to validate entries on the scanned documents itself.  It was upon verification of our new forms that we ran into some errors related to the back-end VBA script that handles external calls for verification of client data.  The scripts were set up by someone no longer working here, at a time before I began my career here.  While not terribly complicated, the script is unfamiliar to me and I did not have the time to diagnose the problem.

As a programmer, I understand that one simple misplaced character can wreak catastrophic havoc on code, and the offending part may not be easily visible, especially in a language I have not looked at in over six years.  While I was confident that the problem lay with the script, my report writer leaned toward an issue with the new DB tables and the ODBC data source.  Up until now, I had never known how to test the connection to an existing data source.  When you create a new data source there is a button that performs a connection test, and that is all.  I needed a way to test an existing connection, and an easy way to see whether I could retrieve and push data through it.

After a quick search, I came across a site with all kinds of goodies, and from there I downloaded a small but powerful tool called OdbcTest.  This tool allows you to connect to any defined ODBC connection and perform queries against it, including inserts and updates.  This was exactly what I was looking for.  Using this tool I was able to confirm that the data source I had set up was functioning as designed and that data could flow in and out.  Armed with this info, I asked the report writer to generate a new survey using an existing template that verified without problems.  He set the new form up, and it both verified correctly and sent its data successfully through the ODBC connection to the tables.  So if you are looking for a great utility to help analyze your connections, make sure to check out OdbcTest.
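If you would rather smoke-test a System DSN without installing anything extra, you can also query it directly from SQL Server through the MSDASQL provider.  This is only a sketch: 'SurveyDSN' is a made-up DSN name, and 'Ad Hoc Distributed Queries' must be enabled on the instance (which requires elevated permissions and may be restricted by policy):

```sql
-- Enable ad hoc distributed queries (sketch; requires sysadmin).
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1; RECONFIGURE;

-- Round-trip a trivial query through the ODBC DSN.
SELECT * FROM OPENROWSET('MSDASQL', 'DSN=SurveyDSN', 'SELECT 1 AS ping');
```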
 
Hope this helps.   Happy connecting!

TSQL PATINDEX()

If you are not aware, there is a string-manipulation function available called PATINDEX().  The function is similar to CHARINDEX() in that it will let you find the index of a particular character in a string; however, PATINDEX() takes it one step further and allows you to search by a wildcard pattern (the same patterns LIKE accepts).  I have found this ability extremely useful several times when I had to parse strings that were similar in construction yet carried different prefixes and suffixes.  Case in point: I had a list of 500 medications.  Each record held the name of the med along with the dosage information attached to the end.  Some of the meds also had an identifier appended to the front, either @-, X-, or *-, to mark them as belonging to a special category.  My goal was to strip the records of these identifiers, if present, as well as the dosage information on the end, and return a distinct list.  My process was not pretty, and probably not the most efficient solution, but making good use of PATINDEX() and CASE expressions, I was able to come up with the following:
 

SELECT DISTINCT SUBSTRING(somestuff,
    CASE
        WHEN PATINDEX('%X-%', somestuff) > 0 THEN 3
        ELSE PATINDEX('%[A-Z]%', somestuff)
    END,
    CASE
        WHEN PATINDEX('%[0-9]%', somestuff) = 0 THEN LEN(somestuff)
        ELSE PATINDEX('%[0-9]%', somestuff) -
             CASE
                 WHEN PATINDEX('%X-%', somestuff) > 0 THEN 3
                 ELSE PATINDEX('%[A-Z]%', somestuff)
             END
    END)

Note the extra use of PATINDEX('%X-%', somestuff).  This was necessary to remove the prefix identifier "X-", which is not skipped by PATINDEX('%[A-Z]%', somestuff) because X is itself a letter.  The other identifiers (@- and *-) begin with non-letter characters, so the search for the first letter handles them on its own.  There is probably a way to do this more cleanly with a better pattern, but I am definitely an amateur in that arena.
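One tidier possibility (a sketch, behaviorally close but not identical to the original, since plain records start at position 1 rather than at the first letter; the table name is illustrative): LIKE's character-class syntax can match all three prefixes in a single test, removing the need for the separate X- special case.

```sql
-- '[@X*]-%' matches any record starting with @-, X-, or *-.
SELECT DISTINCT SUBSTRING(somestuff,
    CASE WHEN somestuff LIKE '[@X*]-%' THEN 3 ELSE 1 END,
    CASE WHEN PATINDEX('%[0-9]%', somestuff) = 0
         THEN LEN(somestuff)
         ELSE PATINDEX('%[0-9]%', somestuff)
              - CASE WHEN somestuff LIKE '[@X*]-%' THEN 3 ELSE 1 END
    END) AS MedName
FROM Meds;  -- table name invented for illustration
```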

Hope this info helps.  Happy Coding!