Sunday, October 29, 2017

Search all databases for an object using sp_MSforeachdb

I often need to search every database on a server for references to a database object — typically a table or view that is used across databases.

The script below uses sp_MSforeachdb (1):


       
use master
go

declare @sqlstmt varchar(1000);
select @sqlstmt = 
         'use [?]  select  ''[?]'' as [DBName],'
       + ' SCHEMA_NAME(o.schema_id ) as [schema_name], OBJECT_NAME(m.object_id) as [ObjectName]  /* ,m.[definition] */'
       + ' from sys.sql_modules as m ' 
       + ' inner join sys.all_objects as o on m.object_id = o.object_id ' 
       + ' where definition like ''%mydatabaseobject%''; '

select @sqlstmt;

exec sp_MSforeachdb @sqlstmt;

       
 



(1) There is a lot of debate about sp_MSforeachdb, along with some alternative procedures.
You can read about them here:

http://www.sqlservercentral.com/articles/sp_msforeachdb/117654/
https://www.mssqltips.com/sqlservertip/2201/making-a-more-reliable-and-flexible-spmsforeachdb/
http://spaghettidba.com/2011/09/09/a-better-sp_msforeachdb/
http://sqlblog.com/blogs/aaron_bertrand/archive/2010/12/29/a-more-reliable-and-more-flexible-sp-msforeachdb.aspx






Script to auto-create a MERGE statement

With SQL Server, the MERGE statement is one of the most powerful commands available to a developer. With a single statement, you can run a column-by-column comparison and issue updates, inserts, and deletes. The downside: the MERGE syntax is complex, and quite a hassle to assemble. That's where this page comes into play. Below is a build script that will create a standard MERGE statement for two tables.

 INTERSECT properly handles NULLs 


One hassle with doing a row-by-row comparison is how to deal with NULLs. You could turn off ANSI_NULLS, but that option is deprecated. Alternatively, you can use the INTERSECT operator, and as part of the MERGE statement the approach is straightforward. All you need is the clause:

 WHEN MATCHED AND NOT EXISTS(SELECT s.* INTERSECT SELECT t.*)
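Why does this work? INTERSECT treats NULL as equal to NULL, while a plain equality comparison does not. A quick demonstration:

 -- equality: NULL = NULL evaluates to UNKNOWN, so no row is returned
 select 'equal' where null = null;

 -- INTERSECT: NULLs compare as equal, so the row survives
 select 1 as a, null as b
 intersect
 select 1 as a, null as b;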

Reference these for more info on INTERSECT

http://www.made2mentor.com/2013/05/writing-t-sql-merge-statements-the-right-way/
http://sqlblog.com/blogs/paul_white/archive/2011/06/22/undocumented-query-plans-equality-comparisons.aspx

A Declared Primary Key


MERGE requires you to tell it which column, or columns, to use to match the two tables.
The script below expects the target (or destination) table to have a declared primary key. If your table does not have a declared primary key, you'll have to build out the match manually.

SSMS has a limit of 8000 characters in 'results to text'


You'll need to adjust the SSMS settings so 'Results to Text' is set to 8000 characters. Still, if your table has many columns, the total output could exceed 8000.  If it does, the output will be chunked up a bit, but still in the proper order.

Understand MERGE


Before using this script, make sure you understand just what MERGE is doing. 
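
For orientation, here is a hand-written example of the kind of statement the script generates. The table and column names are hypothetical, and the real output lists every column of the destination table:

 MERGE [dbo].[MyTargetTable] as t
 USING [dbo].[MySourceTable] as s
   on (t.[ID] = s.[ID])
 WHEN MATCHED AND NOT EXISTS(SELECT s.* INTERSECT SELECT t.*)
  THEN UPDATE SET t.[SomeValue] = s.[SomeValue]
 WHEN NOT MATCHED BY TARGET
  THEN INSERT ([ID], [SomeValue]) VALUES (s.[ID], s.[SomeValue])
 WHEN NOT MATCHED BY SOURCE THEN
   DELETE;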

       
/* 
-- SET RESULTS TO TEXT || Tools | Options | Results to Text | 8000 
   Uses INTERSECT for the comparison: both INTERSECT and EXCEPT treat NULL as a value,
   which allows a proper comparison (as if ANSI_NULLS were OFF, a deprecated setting).

Reference these articles:

http://sqlblog.com/blogs/paul_white/archive/2011/06/22/undocumented-query-plans-equality-comparisons.aspx
http://www.made2mentor.com/2013/05/writing-t-sql-merge-statements-the-right-way/

*/
set nocount on;
declare  @MySourceTableView   varchar(128)  = '[schema].[MyInventory_table]'  ;
declare  @MyDestinationTable  varchar(128) =  'MyOtherTable' ; 
-- names may be given with or without schema, with or without brackets
-----------------------------------------------------------------
-- standardize object values
-----------------------------------------------------------------
declare    @_source_TableView varchar(128),@_destination_Table varchar(128)
   ,@_source_schema    varchar(128),@_destination_schema varchar(128),@_dest_object_id int ;
 select @_source_TableView = OBJECT_NAME(OBJECT_ID(@MySourceTableView));
 select @_source_schema  = OBJECT_SCHEMA_NAME(OBJECT_ID(@MySourceTableView));
 select @_dest_object_id  = OBJECT_ID(@MyDestinationTable);
 select @_destination_Table  = OBJECT_NAME(OBJECT_ID(@MyDestinationTable));
 select @_destination_schema = OBJECT_SCHEMA_NAME(OBJECT_ID(@MyDestinationTable));

 select @_destination_schema, @_destination_Table,@_source_schema,@_source_TableView,@_dest_object_id;
 -- PK from @_destination_Table;
-----------------------------------------------------------------
-- working variables are varchar(max); output is chunked if it exceeds the 8000-character 'Results to Text' limit
-----------------------------------------------------------------
-- PK format for MERGE statement:  t. = s. , ....
declare  @_PKsyslist table( COLUMN_NAME nvarchar(128), ORDINAL_POSITION int ) ;

declare  @colorder int = 0     ,@maxcol int = 0     
  ,@colname varchar(128) = ''   ,@colxtype int = 0 
  ,@Values varchar(max) = ''   ,@insert varchar(max) = ''   
  ,@update varchar(max) = ''   ,@openquery varchar(max) = '' 
  ,@_PKforMergeList varchar(max) = '' ,@_PKforCIDXList varchar(max) = ''
  ,@MergeSection varchar(max) = '' ,@Merge1Section varchar(max) = '' 
  ,@Merge1aSection varchar(max) = '' ,@Merge2Section varchar(max) = '' 
  ,@SPSection_cTwo varchar(max) = '' ,@_procedure_all varchar(max) = '' 
  ,@counter int = 0     ,@updatecolname varchar(max) = '';
declare  @debugflag        char(1) = 'x' ;  

---------------------------------------------------------------------------------------------------------
-- Get PK for MERGE statement:  t. = s. , - Separate to use ordinal position
---------------------------------------------------------------------------------------------------------
insert into @_PKsyslist (COLUMN_NAME, ORDINAL_POSITION)
select COLUMN_NAME, ORDINAL_POSITION FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE kcu
 INNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc ON tc.TABLE_NAME = kcu.TABLE_NAME AND tc.CONSTRAINT_NAME = kcu.CONSTRAINT_NAME and tc.CONSTRAINT_TYPE = N'PRIMARY KEY'
  where kcu.TABLE_NAME = @_destination_Table and kcu.CONSTRAINT_SCHEMA = @_destination_schema ORDER BY kcu.ORDINAL_POSITION;              
 ---------------------------------------------------------------------------------------------------------
 -- assemble for MERGE statement:  t. = s. , - Separate to use ordinal position
 ---------------------------------------------------------------------------------------------------------
  Select @colorder = 0, @maxcol = 0, @_PKforMergeList = '', @_PKforCIDXList = '';
  Select @colorder = min(ORDINAL_POSITION), @maxcol = max(ORDINAL_POSITION)   from @_PKsyslist;          
  while @colorder <= @maxcol and @colorder > 0 
   begin
    select @colname = ''; 
    select @colname = cast(COLUMN_NAME as varchar(128)) from @_PKsyslist as l where l.ORDINAL_POSITION = @colorder;
    select @colname = '[' + @colname + ']';
    select @_PKforMergeList = @_PKforMergeList + 't.' + @colname + ' = s.' + @colname  ;
    select @_PKforCIDXList = @_PKforCIDXList + ' ' + @colname;
    if @colorder != @maxcol
     begin
     select @_PKforMergeList = @_PKforMergeList + ' and ';
     select @_PKforCIDXList = @_PKforCIDXList + ',';
     select @colorder = min(ORDINAL_POSITION) from @_PKsyslist as l where l.ORDINAL_POSITION > @colorder;
     end
    else 
     select @colorder = -9;
   end; 
   if  ltrim(rtrim(@_PKforMergeList)) = '' 
    select @_PKforMergeList = 'looks like your destination table does not have a declared PK';
    ----------------------------------------------------------------------------------------
    -- get max/min col info for table
    ----------------------------------------------------------------------------------------
    Select @colorder = 0, @maxcol = 0;
    Select @colorder = min(colorder), @maxcol = max(colorder)   from sys.syscolumns where  id = @_dest_object_id;     
    while  @colorder <= @maxcol
     begin

      select @colname = '', @updatecolname = '', @colxtype = 0;
      -----------------------------
      -- get column name, add brackets  [ ]  
      select @colname = '[' + [name] + ']', @colxtype = xtype from sys.syscolumns where  id = @_dest_object_id and colorder = @colorder;

      -- OPEN Query build; @openquery, WITHOUT TRAILING COMMA
      -- (xtype 167 = varchar, 231 = nvarchar; both branches are currently
      --  identical, left in place as a hook for type-specific quoting)
      if @colxtype in ( 167, 231) 
        select @openquery =  @openquery + @colname ; 
      else
        select @openquery =  @openquery + @colname;  

      -- add to column name
      if @colorder != @maxcol
       begin                 
        select @updatecolname = 't.' + @colname + ' = s.' + @colname + ',';
        select @colname   = @colname + ',';
        select @openquery  = @openquery + ',';
       end
      else                  
        select @updatecolname = 't.' + @colname + ' = s.' + @colname + '  -- the end';
                       
      ----------------------------------------------
      -- build out the insert/values
       
      select @insert = @insert + '  ' + @colname;
      select @Values = @Values + 's.' + @colname;
      select @update = @update + @updatecolname ;
      -------------------------------
      -- get next column order
      select @colorder = min(colorder) from sys.syscolumns where  id = @_dest_object_id and colorder > @colorder;
      -- bump counter 
      set @counter = @counter + 1;
     end;      
      
    -------------------------------------------------------------------------------------------------------------------------
    -- MERGE build
    select @MergeSection = 
      '    MERGE [' + @_destination_schema + '].[' + @_destination_Table + '] as t'  + CHAR(13) + CHAR(10) 
     + '    USING [' + @_source_schema      + '].[' + @_source_TableView  + '] as s' + CHAR(13) + CHAR(10) 
     + '    on (' + @_PKforMergeList + ')     -- PK match; format: t.<col> = s.<col>   '   + CHAR(13) + CHAR(10) 
     +  CHAR(13) + CHAR(10) 
     + '    WHEN MATCHED AND NOT EXISTS(SELECT s.* INTERSECT SELECT t.*) '+ CHAR(13) + CHAR(10) 
     + '    THEN UPDATE SET   -- format:  t. = s. , ....'+ CHAR(13) + CHAR(10);

    select @Merge1Section = '    ' + @update + CHAR(13) + CHAR(10)  ;
      
    select @Merge1aSection = 
      '    WHEN NOT MATCHED BY TARGET' + CHAR(13) + CHAR(10)  
     + '    THEN ' + CHAR(13) + CHAR(10) 
     + '     INSERT  (' + @insert + ')' +  CHAR(13) + CHAR(10);

    select @Merge2Section = 
      '     VALUES ( '  +  @values + ')' +  CHAR(13) + CHAR(10)
     + CHAR(13) + CHAR(10) 
     + '   WHEN NOT MATCHED BY SOURCE then ' + CHAR(13) + CHAR(10) 
     + '      DELETE;' + CHAR(13) + CHAR(10) 
     + CHAR(13) + CHAR(10);  

     select @SPSection_cTwo = 
       CHAR(13) + CHAR(10) 
      + ' /* ------------------------------------------------------------------------------------------------------'  +  CHAR(13) + CHAR(10) 
      + '    MERGE'  +  CHAR(13) + CHAR(10) 
      + '    WHEN MATCHED AND NOT EXISTS(SELECT Source.* INTERSECT SELECT Target.*) THEN'  +  CHAR(13) + CHAR(10) 
      + '     (requires 1:1 column match between source and target, target requires PK)'  +  CHAR(13) + CHAR(10) 
      + '    ------------------------------------------------------------------------------------------------------ */'  +  CHAR(13) + CHAR(10) 
      + '   '  +  CHAR(13) + CHAR(10) 
     -- +  CHAR(13) + CHAR(10);
     -- chunked to support debugging
     select @_procedure_all = @_procedure_all + @SPSection_cTwo;
     select @_procedure_all = @_procedure_all + @MergeSection;  
     select @_procedure_all = @_procedure_all + @Merge1Section;  
     select @_procedure_all = @_procedure_all + @Merge1aSection; 
     select @_procedure_all = @_procedure_all + @Merge2Section;  
             
 if len(@_procedure_all)  < 8000
 select @_procedure_all;
 else 
 begin
  select  @SPSection_cTwo; 
  select  @MergeSection  ; 
  select  @Merge1Section ; 
  select @Merge1aSection ; 
  select @Merge2Section  ; 
 end

       
 

Compare Tables and Columns across Two Databases - Metadata T-SQL script

You need a tool to compare the table and column metadata between two databases, and you don't have the budget for one of the great tools already on the market.  The script below will display all the differences between the two databases for the main table and column metadata attributes.

       
----------------------------------------------------------------------------
-- the two databases to be compared
-- user data entered here:
----------------------------------------------------------------------------
declare  @MyLegacyDatabase  nvarchar(128)  = N'[MyLegacyDatabase]' 
        ,@MyProposedDatabase nvarchar(128) = N'[MycompareDatabase]' 
        ,@Objecttype   nchar(1)    = N'U' ; 

----------------------------------------------------------------------------
-- variables and setup
----------------------------------------------------------------------------
declare @LegacyQuery nvarchar(max) = '', @ProposedQuery nvarchar(max) = '';
select  @LegacyQuery = 
  'select  Tablename = o.[name], Columnname =  c.[name], c.[column_id], cast(t.name as varchar(15)) as datatype, c.system_type_id, c.user_type_id, c.max_length, c.[precision], c.scale '
 +   ' from ' + @MyLegacyDatabase + '.sys.all_columns as c '
 +   ' inner join ' + @MyLegacyDatabase + '.sys.all_objects as o on c.object_id = o.object_id and o.type = '''  + @Objecttype + ''' and o.is_ms_shipped = 0 '
 +   ' inner join ' + @MyLegacyDatabase + '.sys.systypes as t  on c.system_type_id = t.xtype; ';
select  @ProposedQuery = 
  'select  Tablename = o.[name], Columnname =  c.[name], c.[column_id], cast(t.name as varchar(15)) as datatype, c.system_type_id, c.user_type_id, c.max_length, c.[precision], c.scale '
 +   ' from ' + @MyProposedDatabase + '.sys.all_columns as c '
 +   ' inner join ' + @MyProposedDatabase + '.sys.all_objects as o on c.object_id = o.object_id and o.type = '''  + @Objecttype + ''' and o.is_ms_shipped = 0 '
 +   ' inner join ' + @MyProposedDatabase + '.sys.systypes as t  on c.system_type_id = t.xtype; ';
select @LegacyQuery;   -- debug: display the generated query

declare @tblLegacycompare TABLE ( [Tablename] [sysname] NULL, [Columnname] [nvarchar](128) NULL, [column_id] int null, [datatype] [varchar](15) NULL, [system_type_id] [tinyint] NULL, [user_type_id] [int] NULL, [max_length] [smallint] NULL, [precision] [tinyint] NULL, [scale] [tinyint] NULL);
declare @tblProposedcompare TABLE ( [Tablename] [sysname] NULL, [Columnname] [nvarchar](128) NULL, [column_id] int null, [datatype] [varchar](15) NULL, [system_type_id] [tinyint] NULL, [user_type_id] [int] NULL, [max_length] [smallint] NULL, [precision] [tinyint] NULL, [scale] [tinyint] NULL);

--select top 1 * from @tblLegacycompare; 
--EXEC sp_executesql @LegacyQuery;
--select @LegacyQuery;   

----------------------------------------------------------------------------
-- populate the tables with data to compare
----------------------------------------------------------------------------
           
insert into @tblLegacycompare (Tablename, Columnname, column_id, datatype, system_type_id, user_type_id, max_length, [precision], scale) 
 EXEC sp_executesql @LegacyQuery;
 -- select * from @tblLegacycompare;

insert into @tblProposedcompare (Tablename, Columnname, column_id, datatype, system_type_id, user_type_id, max_length, [precision], scale) 
 EXEC sp_executesql @ProposedQuery;
 --  select * from @tblProposedcompare;

----------------------------------------------------------------------------
-- compare
----------------------------------------------------------------------------

select TableSort = isnull(n.Tablename ,o.Tablename ), ColumnSort = isnull(n.Columnname, o.Columnname)
,OnLegacyDb  = case when  o.Tablename is not null then 'Yes' else 'No' end 
,OnProposedDb = case when  n.Tablename is not null then 'Yes' else 'No' end 
,ColumnAddDrop = case when o.Columnname = n.Columnname then '-'      when o.Columnname is null and n.Columnname is not null then 'ADD' when o.Columnname is not null and n.Columnname is null then 'DROP' else '???' end
,ColumnIDChg   = case when o.column_id  = n.column_id  then 'no chg' when o.column_id  is null and n.column_id  is not null then 'ADD' when o.column_id  is not null and n.column_id  is null then 'DROP' else 'CHANGE' end


,ColumnTypeChg = case when isnull(o.datatype,'') = 'varchar' and isnull(n.datatype,'') = 'char' and isnull(o.max_length,-1) = isnull(n.max_length,-2) then 'no chg(H)'
      when o.datatype is null then 'add' when n.datatype is null then 'drop' when  o.datatype != n.datatype then 'CHANGE' else 'no chg' end

,ColumnLengthChg = case when o.max_length  is null then 'add' when n.max_length  is null then 'drop' when o.max_length  != n.max_length  then 'CHANGE' else 'no chg' end
,ColumnPrecisionChg = case when o.[precision] is null then 'add' when n.[precision] is null then 'drop' when o.[precision] != n.[precision] then 'CHANGE' else 'no chg' end
,ColumnScaleChg  = case when o.scale    is null then 'add' when n.scale  is null then 'drop' when o.scale       != n.scale  then 'CHANGE' else 'no chg' end
-- ,o.Tablename as TblOld, n.Tablename as TblNew, o.Columnname as ColOld, n.Columnname as ColNew
, o.datatype as TypeOld, n.datatype as TypeNew, o.column_id as ColIDOld, n.column_id as ColIDNew
, o.max_length as LENold, n.max_length as LENnew, o.[precision] as PrecisionOld, n.[precision] as PrecisionNew, o.scale as ScaleOld, n.scale as ScaleNew
, o.system_type_id as SysTypeOld, o.user_type_id as UserTypeOld, n.system_type_id as SysTypeNew, n.user_type_id as UserTypeNew

from @tblLegacycompare as o full outer join @tblProposedcompare as n on o.Tablename = n.Tablename and o.Columnname = n.Columnname

order by isnull(n.Tablename ,o.Tablename ) asc , isnull(n.Columnname, o.Columnname) asc ;
       
 

Wednesday, October 18, 2017

IDENTITY Column - and the steps to remove from your table

Issue:


Your table has a column identified as an IDENTITY (autonumber) column, and you need to

1) keep the column and its contents
2) remove the IDENTITY attribute for the column.

Approach:


As of 2017, SQL Server does not have an ALTER TABLE command that allows you to remove the IDENTITY property. The basic steps to remove an IDENTITY property are as follows:

- add a new column, and copy contents into the new column
- drop any indexes and constraints using the TO BE dropped column
- drop the old column
- rename the new column with the old name
- rebuild indexes and constraints

Here is a sample set of scripts to get the job done:

The first part creates a test table, the second part removes the IDENTITY property.

-------------------------------------------------------
-- setup:

use work;
go

create table work.dbo.testidentity
(  tblid     int identity(1,1) not null
  ,somevalue varchar(20) null
  );

alter table work.dbo.testidentity add constraint pk_tblid  primary key clustered ( tblID);

  insert into  work.dbo.testidentity (somevalue)
  values ( 'one' )
,( 'two' )
,( 'three' )
,( 'four' );

select * from work.dbo.testidentity ;

-------------------------------------------------------
-- the process to remove:

alter table work.dbo.testidentity add temp_tblID int null;
go
select * from work.dbo.testidentity ;
go

update work.dbo.testidentity set temp_tblID = tblid;
go
alter table work.dbo.testidentity alter column temp_tblID int not null;
go
select * from work.dbo.testidentity ;
go
alter table work.dbo.testidentity drop  constraint pk_tblid;
go
alter table work.dbo.testidentity  drop column tblid ;
go
select * from work.dbo.testidentity ;
go
EXEC sp_rename 'dbo.testidentity.temp_tblID' , 'tblID', 'COLUMN';
go
select * from work.dbo.testidentity ;
go

alter table work.dbo.testidentity add constraint pk_tblid  primary key clustered ( tblID);
go

Sunday, October 8, 2017

Fix Windows 10 - 100% Disk Usage in Task Manager

A colleague directed me to the URL below with a list of tips to fix the 100% disk usage problem I was having with Windows 10.  I also disabled all of the Google Chrome Sync "on - custom settings" options.

You can find the site here:  Windows 10 100% disk usage in Task Manager.

I only needed to do a few steps:


  1. disable windows search
  2. disable SuperFetch service  (see registry adjustments below)
  3. disable all Google Chrome:  Sync "on - custom settings"



A failing SAS hard drive on my RAID :


I've set up my primary workstation with RAID 1 using an LSI SAS 3000 series (1068E) controller.  When disk usage started to spike again, the problem was with the hard drive: the RAID controller was struggling to maintain consistency with the failing SAS drive. Once the hard drive was replaced and the RAID rebuilt, disk usage for the RAID partition was back to normal.

Other notes:


I eventually changed the registry settings for memory management:

Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PrefetchParameters\

  • EnablePrefetcher:   change from 3 to 0
  • EnableSuperfetch:  change from 3 to 0


Background Intelligent Transfer Service:  I have it set to 'Manual', and it is typically not running.





Monday, September 18, 2017

CONCAT different Rows into a single attribute value

Problem:


Using T-SQL, you need to concatenate values from different rows into a single attribute value.  The ideal solution would be a function that lets you CONCAT the values from a GROUP BY, much like a SUM() aggregate does.

Solution


Let's say we want a single row that lists out all the distinct Titles in the Person.Person table in the AdventureWorks2012 database. All we need is a string SUM() or CONCAT() for a group.  We could imagine the command might look something like this:

       
with t as 
( select Title = isnull(p.Title,'n/a'), count(*) as TitleCnt
 from [Person].[Person] as p 
 group by p.Title
) 
Select AllTitles = MYROWCONCATFUNCTION(t.Title), sum(TitleCnt) as TitleCount from t
 
 

SQL Server 2017?
If you are using SQL Server 2017 or Azure SQL Database, you can use the STRING_AGG function.
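
Using our Titles example, a STRING_AGG version might look like this (a sketch, assuming the AdventureWorks Person.Person table):

 with t as 
 ( select Title = isnull(p.Title,'n/a'), count(*) as TitleCnt
  from [Person].[Person] as p 
  group by p.Title
 ) 
 Select AllTitles = STRING_AGG(t.Title, ','), sum(t.TitleCnt) as TitleCount
  from t;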

Not using the latest version?
For now, the most straightforward approach is to use FOR XML PATH. One key downside is that, for some, this command is considered a black box that might not be reliable over the longer term. So, if you are not comfortable using FOR XML PATH, go to the ProjectMX write-up on all the ways you can approach row concatenation using T-SQL:  ProjectMX: Concatenating row values in Transact-SQL.

FOR XML PATH

A simple approach to concatenating the values from multiple rows is to set up a subquery using FOR XML PATH.
       
with t as 
( select Title = isnull(p.Title,'n/a'), count(*) as TitleCnt
 from [Person].[Person] as p 
 group by p.Title
) 
Select  
 sum(TitleCnt) as TitleCount
,(SELECT ',' + t.title + ' ' FROM t where (t.Title = t.Title) FOR XML PATH ('')) AS AllTitles
 from t;
 
 

With the output looking like this:

         TitleCount AllTitles
         ----------          ---------
         19972 ,Sr. ,Mrs. ,n/a ,Sra. ,Ms. ,Ms ,Mr.

To remove the leading comma, we can add a SUBSTRING function:

       
with t as 
( select Title = isnull(p.Title,'n/a'), count(*) as TitleCnt
 from [Person].[Person] as p 
 group by p.Title
) 
Select  sum(TitleCnt) as TitleCount
  ,SUBSTRING((SELECT  ',' + t.title + ' ' from t where (t.Title = t.Title)
  FOR XML PATH ('')),2,9999) AS AllTitles
  from t;
 
 

With the output looking like this:

         TitleCount AllTitles
         ----------          ---------
         19972 Sr. ,Mrs. ,n/a ,Sra. ,Ms. ,Ms ,Mr.
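
STUFF is a common alternative to SUBSTRING here: it replaces the single leading comma with an empty string, with no need for a magic 9999 length:

 with t as 
 ( select Title = isnull(p.Title,'n/a'), count(*) as TitleCnt
  from [Person].[Person] as p 
  group by p.Title
 ) 
 Select  sum(TitleCnt) as TitleCount
   ,STUFF((SELECT  ',' + t.title + ' ' from t FOR XML PATH ('')),1,1,'') AS AllTitles
   from t;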


Need more information on FOR XML? See: FOR XML (docs.microsoft.com)

Monday, August 14, 2017

Expert Beginner a.k.a "Sophomore" or the "wise fool"

Erik Dietrich has an excellent write-up of what he calls the "Expert Beginner."  As you read it, I'm sure you'll recognize the situation in a current or former organization. In the broadest terms, the academic phrase "sophomore" is well suited.

You can find it here: How Developers Stop Learning: Rise of the Expert Beginner

Another interesting post:  How to hire a senior developer

Using the definition provided by Wikipedia, we get:

The term is derived from Greek σόφισμα (sophisma), 'acquired skill, clever device, method', altered to resemble a compound of the Greek words σοφός (sophos), 'wise' and μωρός (moros), 'foolish, dull'.



AI and the demise of the DBA

Are DBAs soon to go the way of the buggy whip?

I've long held that the need for a full-time DBA to tune a database is past its prime. In an age where database vendors like Microsoft hire brilliant database theorists, it makes far more sense for the vendor to work out the algorithms to manage the database than for each shop to keep a full-time database mechanic (DBA) on staff to monitor and adjust the system.

With the release of SQL Server 2017, a new feature, called Automatic Tuning (SQL Server 2017) has been introduced.

While SQL Server has made progress in this area, the open source world has made advances as well. OtterTune has made some initial advances in tuning and optimizing two open source database platforms:  MySQL and Postgres.

Here are key links to learn more:

AI topics - OtterTune

A detailed look into OtterTune by Adrian Colyer

Research Paper - Automatic Database Management System Tuning Through Large-scale Machine Learning

Wednesday, August 9, 2017

Forecast vs. Predict - what's the difference?

We get asked this frequently.

Fortunately, the team at PowerPivotPro has provided a nice write-up. So, rather than re-create the wheel, here it is:

The Difference Between Forecasting & Predictive Analytics


Thursday, May 25, 2017

SSRS support for Power BI Desktop - preview download available!

The on-premises SSRS with support for Power BI is now available in preview.  Well, technically, it's called the "Power BI Report Server" preview.  But under the covers, as far as we know, it is an enhanced version of SSRS.  For the SQL Server 2016 release, Microsoft completely rebuilt the SSRS engine, with full support for HTML5, and it now supports Power BI Desktop reports.

About a year ago, we reported that both the SSRS and Power BI teams indicated future Power BI support as part of SSRS. Earlier this year, we did an initial review of the Azure-based VM that included the enhanced SSRS engine. That review found that SSRS did indeed do a nice job with Power BI Desktop reports.

The preview download is available here:  Microsoft Power BI Report Server - May 2017 Preview.

Monday, May 8, 2017

Power BI Desktop to SSRS -

Back on November 10, 2016, we wrote about the coming support for Power BI Desktop reports by SSRS.  See Upload Power BI Desktop .pbix to SSRS 2016.  Well, a few days ago, on May 3, 2017, the SSRS Team Blog posted an announcement that it was almost ready.  Almost.

It's called Power BI Report Server, and as of May 8, 2017, it is still in "...coming soon." status. The sort-of good news is that if you have an Enterprise Edition, per-core license with active Software Assurance (SA), it is included; the same goes if you have Power BI Premium licenses.  For all the rest of us, well, not so much.  If you have looked over our earlier review, it worked seamlessly with SSRS's rebuilt engine. So, we are hoping that this is still the case.

You can review the official Power BI Report Server introduction here: Introducing Power BI Report Server  , along with the SSRS Team Blog here:  Introducing Power BI Report Server for on-premises Power BI report publishing.


Getting to ISO format 'yyyyMMdd' with a single function?

The Goal:


Convert a date in SQL Server into the ISO format 'yyyyMMdd' with a single function, and without having to CAST a character value into an integer.  Alas, we have yet to find such an approach.

Before the FORMAT function arrived, the most expedient approach was to use the CONVERT function with style 112, then CAST the character value to an integer.  For example:


 declare @adate date = '2017-05-01';
 select 
 ISOdate_CONVERT = cast(convert(varchar(12), @adate,112) as int);


FORMAT


Starting with SQL Server 2012, we were introduced to the FORMAT function. It had promise, but as with CONVERT, we still have to issue a second CAST, because the output from FORMAT is nvarchar. Sure, we could skip the explicit CAST and rely on the implicit conversion from nvarchar to integer, but that would be poor form: SQL Server still has to make the conversion, and an implicit conversion is not only sloppy but perhaps a bit slower. So, with FORMAT, we now have two options to convert a date, or datetime, into an ISO-formatted date - yyyyMMdd.

 declare @adate date = '2017-05-01';
 select 
 ISOdate_CONVERT = cast(convert(varchar(12), @adate,112) as int)
,ISOdate_FORMAT  = cast(format(@adate,'yyyyMMdd','en-US')as int); 


Which to use?

Since we have not done any performance testing, either one should suffice. What we would all like is a method of converting a date directly into an ISO format with a single function.
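
If the difference ever matters for your workload, a quick (admittedly unscientific) harness along these lines can settle it; the iteration count and variable names are arbitrary:

 set nocount on;
 declare @adate date = '2017-05-01', @i int, @sink int, @t0 datetime2;

 -- time CONVERT + CAST
 select @i = 0, @t0 = sysdatetime();
 while @i < 100000
 begin
  select @sink = cast(convert(varchar(12), @adate, 112) as int);
  select @i = @i + 1;
 end;
 select CONVERT_ms = datediff(ms, @t0, sysdatetime());

 -- time FORMAT + CAST
 select @i = 0, @t0 = sysdatetime();
 while @i < 100000
 begin
  select @sink = cast(format(@adate, 'yyyyMMdd', 'en-US') as int);
  select @i = @i + 1;
 end;
 select FORMAT_ms = datediff(ms, @t0, sysdatetime());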

Sunday, April 23, 2017

Azure Analysis Services general availability - April 19, 2017

Microsoft announced the general availability of the Azure Analysis Services platform. But it only supports the inferior tabular model. With Azure's scalability, the ideal use of an Azure Analysis Services platform would be to create a multi-dimensional ROLAP model against an Azure SQL data warehouse.  Announcing Azure Analysis Services general availability.

Friday, March 24, 2017

Power BI - The New Matrix

Over at PowerPivotPro, Matt Allington has insight into the preview of the newly released New Matrix for Power BI Desktop. The New Matrix.

Test to see if your email account has been Breached

Use this site to see if your email has been breached:

haveibeenpwned.com

SQL Server Agent added to Linux CTP 1.4

On March 17th, Microsoft announced the addition of SQL Server Agent to the latest CTP of SQL Server for Linux. While the Linux release is not a full function release, it does add the core part of job scheduling.  SQL Server next version CTP 1.4 now available.

You can get the specific details on installing SQL Server Agent on Linux, along with some sample T-SQL to create and manage jobs at:  SQL Server on Linux: Running jobs with SQL Server Agent.

Sunday, February 19, 2017

Row Count of all tables - returned in a single dataset

When you need the row count from a table, the most common way is to use count(*). But what if you need the row count for all the tables in your database?  You could use the sp_MSforeachtable stored procedure:

    EXEC sys.sp_MSforeachtable 'select ''?'',count(*) from ?;';

Nice, but the return format is a bit difficult to manage, since it effectively runs the query once for each table in your database.  The output might look something like this:


               
          -------------------- -----------
          [dbo].[Product_info] 504

                           
          --------------------- -----------
          [dbo].[database_info] 24



Sure, you could package this as an insert into a temp table (see the sketch below), but still, it's a bit awkward. What we want is a single, elegant query: one that returns a single dataset. Fortunately, we can get the information from the DMV sys.dm_db_partition_stats. One of the columns returned from this DMV is row_count. The one caveat is that row_count is approximate.
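
For reference, the temp-table packaging is only a few lines (a sketch; the temp table name is arbitrary):

 create table #rowcounts ( tableName nvarchar(300), row_count bigint );
 EXEC sys.sp_MSforeachtable 'insert into #rowcounts select ''?'', count(*) from ?;';
 select * from #rowcounts order by row_count desc;
 drop table #rowcounts;

The partition-stats query below returns the same list in one pass, with no scratch table.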

Our query will look something like this:


  select p.row_count, tableName = object_name(p.object_id), * 
    from sys.dm_db_partition_stats as p
    inner join sys.sysobjects as o 
       on p.object_id = o.id and o.xtype = N'U'
   where p.index_id in (0, 1) and p.row_count > 0
   order by p.row_count desc;

A table is either a heap, or it has a clustered index. Heaps have an index_id of 0, while clustered indexes have an index_id of 1. They are mutually exclusive.

You can read more about sys.dm_db_partition_stats on MSDN here: https://msdn.microsoft.com/en-us/library/ms187737.aspx.




Thursday, February 16, 2017

A tool to Optimize SQL Server Indexes

SQL Server Management Studio has a nifty tool that allows you to set up daily maintenance plans that run CHECKDB and will also rebuild your indexes. Great. But those index rebuilds called by the maintenance tools have a drawback. A big, log-growing, disk-hogging drawback.

It wants to do full rebuilds of all indexes, and those rebuilds are fully logged. Low fragmentation? No problem - full rebuild. Large table? No problem - full rebuild.  Low on disk space on your server, and you really do not want your log file to grow? Problem!

Solution!

Years ago, in my days as a DBA, I created a process to review all the indexes for fragmentation, and then do targeted updates. But that was years ago, when we all carried flip-phones. Fortunately, Ola Hallengren has released a complete set of index optimization tools that work with current versions of SQL Server.

You can find them here:  https://ola.hallengren.com/

And, if you need some additional suggestions on the best way to adjust all the optional parameters, start with the post at Brent Ozar:  Tweaking the Defaults for Ola Hallengren’s Maintenance Scripts.
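
For a flavor of the parameters involved, an IndexOptimize call patterned on the examples in Ola's documentation looks something like this (the thresholds and actions shown are illustrative, not recommendations):

 EXECUTE dbo.IndexOptimize
   @Databases = 'USER_DATABASES'
  ,@FragmentationLow    = NULL   -- below Level1: leave the index alone
  ,@FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE'
  ,@FragmentationHigh   = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE'
  ,@FragmentationLevel1 = 5      -- percent fragmentation where the medium actions start
  ,@FragmentationLevel2 = 30;    -- percent fragmentation where the high actions start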

And did I consider digging out my old index optimization scripts? After looking over the work Ola Hallengren did - not a chance. 

Logging the spaced used by Databases in SQL Server

I recently had a need to set up a process to periodically capture the size of the database files, along with the used (or unused) space. And not just the data files, but the log files as well. Years ago, when I was doing more DBA work, I had a script to do just that.  Unfortunately, it was for a much older version of SQL Server, and it no longer worked.

No problem, just find the current DMVs that provide the data.  It turns out it's not as straightforward as I would have liked. Sure, there are many procedures, DMVs, and DBCC commands that provide some of the data, but not all of it. And I could have just used the Profiler, and then pulled up the requisite form in SSMS, but that seemed less informative than reading what the community had to offer.


sp_spaceused, sp_databases, DBCC SQLPERF(logspace) Oh my!


Looking over the posts, it seems there were endless ways to get some of the data. But ideally, I wanted a single SELECT that returned just the data I wanted, without all of the wild CTE, inline-query, or other such code.

After reviewing the posts, many, perhaps all of the procedures and commands I found provided valuable information. Just not what I wanted. We have:

       sp_spaceused, but it returns data in two result sets. Not ideal.  Next we have...

       sp_databases, but it just returns a list of databases and their sizes. Not ideal.  Or use...

       DBCC showfilestats. OK, but still incomplete. And, since it is not listed on the main DBCC page at MSDN, it is most likely deprecated.

And speaking of DBCC, there are other commands to consider. But what I was really hoping to find was a simple way to use the sys.database_files DMV. It is very close, but not quite complete.

            select * from sys.database_files;


In time, I found a post by Glenn Berry of SQLskills. His query used a simple SELECT against the sys.database_files DMV, but with a few extra additions: he used FILEPROPERTY.

To fit my needs, all I had to do was add a few extra bits to the query provided by Glenn Berry. I now had the ideal database size monitoring statement: a query that told me all I needed to know to monitor database sizes over time.

       
  SELECT 
   @@SERVERNAME as ServerName 
 , db_Name()  as DatabaseName
 , cast(CURRENT_TIMESTAMP as date) as RunDate
 , cast(CURRENT_TIMESTAMP as time) as Runtime
 , name AS LogicalFileName 
 , physical_name AS PhysicalFileName
 , CAST(size/128 AS DECIMAL(10,2)) AS TotalSizeMB
 , CAST(size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') 
                 AS int)/128.0 AS decimal(10,2))  AS AvailSpaceMB
 , case when (size/128) != 0 then
        CAST(100 - (((size/128.0 
                              - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int
                                      )/128.0 )/(size/128))*100)  AS DECIMAL(10,2))  
  else 0 end  as PercentUsed
 , type, type_desc,state, state_desc,size, max_size ,file_id,  file_guid, data_space_id
 , growth, is_media_read_only,  is_read_only, is_sparse, is_percent_growth
        , is_name_reserved, create_lsn, drop_lsn, read_only_lsn, read_write_lsn
        , differential_base_lsn, differential_base_guid, differential_base_time
 , redo_start_lsn, redo_start_fork_guid, redo_target_lsn, redo_target_fork_guid
        , backup_lsn
   FROM sys.database_files;        
 


There was just one problem: the query was database specific, and I needed the information for all databases.

sp_MSforeachdb to the rescue

All that was required was to repackage the query and run it using sp_MSforeachdb.  You'll notice I push all of this into a table using a procedure. 


       
if exists(select * from sys.objects 
         where object_id = OBJECT_ID('dbo.usp_database_file_info') and type = N'P')
 DROP PROCEDURE dbo.usp_database_file_info;
GO

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE PROCEDURE dbo.usp_database_file_info
AS 
begin 
 declare @myCommand Nvarchar(4000);
 select @myCommand  =  
    ' USE [?]; '   -- brackets + semicolon so odd database names still parse
  + 'insert into test.dbo.database_file_info  ( ServerName, DatabaseName, RunDate, Runtime, LogicalFileName '
  + ', PhysicalFileName, TotalSizeMB, AvailSpaceMB, PercentUsed, type, type_desc, state, state_desc, size, max_size '
  + ', file_id, file_guid, data_space_id, growth, is_media_read_only, is_read_only, is_sparse, is_percent_growth' 
  + ', is_name_reserved, create_lsn, drop_lsn, read_only_lsn, read_write_lsn, differential_base_lsn, differential_base_guid'
  + ', differential_base_time, redo_start_lsn, redo_start_fork_guid, redo_target_lsn, redo_target_fork_guid, backup_lsn ) '
  + ' SELECT @@SERVERNAME as ServerName' 
  + ', db_Name()  as DatabaseName' 
  + ', cast(CURRENT_TIMESTAMP as date) as RunDate'
  + ', cast(CURRENT_TIMESTAMP as time) as Runtime'
  + ',  name AS LogicalFileName' 
  + ', physical_name AS PhysicalFileName'
  + ', CAST(size/128 AS DECIMAL(10,2)) AS TotalSizeMB'
  + ', CAST(size/128.0 - CAST(FILEPROPERTY(name, ''SpaceUsed'') AS int)/128.0 AS decimal(10,2))  AS AvailSpaceMB'
  + ', Case when (size/128) != 0 then'
  + ' CAST(100 - (((size/128.0 - CAST(FILEPROPERTY(name, ''SpaceUsed'') AS int)/128.0 )/(size/128))*100)  AS DECIMAL(10,2))'  
  + '   else 0 end  as PercentUsed'
  + ', type, type_desc,state, state_desc,size, max_size ,file_id,  file_guid, data_space_id'
  + ', growth, is_media_read_only,  is_read_only, is_sparse, is_percent_growth, is_name_reserved'
  + ', create_lsn, drop_lsn, read_only_lsn, read_write_lsn, differential_base_lsn, differential_base_guid, differential_base_time'
  + ', redo_start_lsn, redo_start_fork_guid, redo_target_lsn, redo_target_fork_guid, backup_lsn'
  + ' FROM sys.database_files;';

   --  EXEC (@myCommand);
    EXEC sp_MSforeachdb  @myCommand ;
end

All we need is the table create statement:

       
if exists(  select * from sys.objects where object_id = OBJECT_ID('dbo.database_file_info') )
 DROP TABLE dbo.database_file_info;
GO

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE TABLE [dbo].[database_file_info](
 tblID bigint identity(1,1) not null,
 [ServerName] [nvarchar](128) NULL,
 [DatabaseName] [nvarchar](128) NULL,
 [RunDate] [date] NULL,
 [Runtime] [time](7) NULL,
 [LogicalFileName] [sysname] NOT NULL,
 [PhysicalFileName] [nvarchar](260) NOT NULL,
 [TotalSizeMB] [decimal](10, 2) NULL,
 [AvailSpaceMB] [decimal](10, 2) NULL,
 [PercentUsed] [decimal](10, 2) NULL,
 [type] [tinyint] NOT NULL,
 [type_desc] [nvarchar](60) NULL,
 [state] [tinyint] NULL,
 [state_desc] [nvarchar](60) NULL,
 [size] [int] NOT NULL,
 [max_size] [int] NOT NULL,
 [file_id] [int] NOT NULL,
 [file_guid] [uniqueidentifier] NULL,
 [data_space_id] [int] NOT NULL,
 [growth] [int] NOT NULL,
 [is_media_read_only] [bit] NOT NULL,
 [is_read_only] [bit] NOT NULL,
 [is_sparse] [bit] NOT NULL,
 [is_percent_growth] [bit] NOT NULL,
 [is_name_reserved] [bit] NOT NULL,
 [create_lsn] [numeric](25, 0) NULL,
 [drop_lsn] [numeric](25, 0) NULL,
 [read_only_lsn] [numeric](25, 0) NULL,
 [read_write_lsn] [numeric](25, 0) NULL,
 [differential_base_lsn] [numeric](25, 0) NULL,
 [differential_base_guid] [uniqueidentifier] NULL,
 [differential_base_time] [datetime] NULL,
 [redo_start_lsn] [numeric](25, 0) NULL,
 [redo_start_fork_guid] [uniqueidentifier] NULL,
 [redo_target_lsn] [numeric](25, 0) NULL,
 [redo_target_fork_guid] [uniqueidentifier] NULL,
 [backup_lsn] [numeric](25, 0) NULL
) ON [PRIMARY]
GO

ALTER TABLE [dbo].[database_file_info] ADD  CONSTRAINT 
   [PK_database_file_info] PRIMARY KEY CLUSTERED ( [tblID] ASC )
 WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF
     , IGNORE_DUP_KEY = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON
     , ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
       
 


You can find Glenn Berry's original posted reply on MSDN at:
https://social.msdn.microsoft.com/Forums/sqlserver/en-US/e3f1d036-bfe7-42a3-a43e-7ae7577c3849/dbcc-command-for-data-file-size?forum=sqldatabaseengine.





Cycle the SQL Server and Agent Logs Periodically

Have you ever tried to open the log file for SQL Server, and then waited and waited, realizing that the log is massive since your server has not been restarted for weeks or months?  Well, congratulations! It's great that you have not had to take the server down for the last x weeks or months. But...

 ......about that massive log file.  Well, more good news: all you need to do is create a single SQL Agent job with a few steps.


  1. EXEC sys.sp_cycle_errorlog;
  2. EXEC dbo.sp_cycle_agent_errorlog;

That's it.  Typically, a weekly cycle is a good cadence, but every setup is different.
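
A sketch of that job in T-SQL, assuming a weekly Sunday 1:00 AM schedule (the job and schedule names are placeholders):

 use msdb;
 go
 EXEC dbo.sp_add_job      @job_name = N'Cycle SQL and Agent error logs';
 EXEC dbo.sp_add_jobstep  @job_name = N'Cycle SQL and Agent error logs'
     ,@step_name = N'Cycle both logs'
     ,@subsystem = N'TSQL'
     ,@command   = N'EXEC sys.sp_cycle_errorlog; EXEC msdb.dbo.sp_cycle_agent_errorlog;';
 EXEC dbo.sp_add_schedule @schedule_name = N'Weekly - Sunday 1am'
     ,@freq_type = 8               -- weekly
     ,@freq_interval = 1           -- Sunday
     ,@freq_recurrence_factor = 1  -- every week
     ,@active_start_time = 010000; -- 01:00:00
 EXEC dbo.sp_attach_schedule @job_name = N'Cycle SQL and Agent error logs'
     ,@schedule_name = N'Weekly - Sunday 1am';
 EXEC dbo.sp_add_jobserver   @job_name = N'Cycle SQL and Agent error logs';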

Sunday, February 5, 2017

Replication Setup -FIX- Unable to list directory contents

You're trying to set up SQL Server replication on a server, and it fails. Looking through the error message, you find this:


        An exception occurred while executing a Transact-SQL statement or batch.
        (Microsoft.SQLServer.ConnectionInfo)

        Destination path ............. is not valid. Unable to list directory contents.
        Specify a valid destination path.
        Changed database context to 'master'. (Microsoft SQL Server, Error: 14430)

Unable to list directory contents. Specify a valid destination path.

We were setting up an existing SQL Server 2014 Standard Edition server to run as a combined Distributor / Publisher, and ran into this problem. Looking over the blogs, one thing became apparent: there is quite a bit of confusion about this problem. Below is how we worked out the steps and configuration issues that ultimately led to success.

Unable to list directory contents


Ultimately, our focus was on permissions. In general, the consensus was to run the Configure Replication Wizard as SA. This means that any file/folder permissions fall to the account running SQL Server.  For our example, that would be:  MYDOMAIN\sqlserviceacct

We'll admit that after several failed setup attempts, some of the following may be excessive. But they finally worked.

Preparation

  1. SQL Server service running a domain account:  such as  MYDOMAIN\sqlserviceacct
  2. Replication Snapshot Share
    1. Verify that the account "MYDOMAIN\sqlserviceacct" can both
      1. read the folder
      2. read the UNC share
    2. In addition to the other accounts that will need access to support replication!
  3. Folder(s) to hold the Distribution Database. For example, your distribution database will be placed in the folder E:\SQLsvr\Data\Distribution
    1. Verify that the account "MYDOMAIN\sqlserviceacct" can READ the top level folder
    2. Verify that the account "MYDOMAIN\sqlserviceacct" can READ the next level folder.
    3. Verify that the account "MYDOMAIN\sqlserviceacct" has control over the FINAL folder. In this case, E:\SQLsvr\Data\Distribution. 
  4. EXEC sp_configure 'xp_cmdshell', 1;
    1. With a newly set up SQL Server, you may not have disabled xp_cmdshell. But this server was already in production, and since it did not need xp_cmdshell, it was disabled (a best practice).
    2. Before running the Wizard, we temporarily enabled xp_cmdshell. Once the install was completed, we promptly disabled xp_cmdshell.
  5. Consider running:  sp_removedbreplication.
    1. One thread suggested that you run sp_removedbreplication. It was listed under the premise that you had an earlier replication installation. We did not, but we still ran the procedure to clear out any partial replication objects that may have been left from our earlier failed install. 
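
If you do run it, the call is a one-liner (the database name here is a placeholder):

 EXEC sp_removedbreplication N'MyPublishedDb';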

sp_configure 'xp_cmdshell', 1;

We are sure you know how to do this, but just in case, see: xp_cmdshell Server Configuration Option.
Basically (from MSDN):

       
-- To allow advanced options to be changed.  
EXEC sp_configure 'show advanced options', 1;  
GO  
-- To update the currently configured value for advanced options.  
RECONFIGURE;  
GO  
-- To enable the feature.  
EXEC sp_configure 'xp_cmdshell', 1;  
GO  
-- To update the currently configured value for this feature.  
RECONFIGURE;  
GO         
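
And to disable it again afterwards, as noted above:

-- To disable the feature.  
EXEC sp_configure 'xp_cmdshell', 0;  
GO  
RECONFIGURE;  
GO  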
 

Running the Wizard - Configure Replication

  1. Using SSMS, connect to SQL Server as SA.
    Looking over many of the postings, it was generally agreed to run the wizard as SA. 
  2. SnapShot folder
    1. UNC required, such as \\myserver\myreplicationshare
    2. Verify that your UNC is exactly the one you setup.
  3. Database folder
    1. Verify that your folder is exactly the one you plan to use, with the permissions set as listed above.
  4. Script - yes you want a copy of the script.

That should lead to success.  If not, go back and verify that your SQL Server service account really does have access to the folders. And then make sure you have them spelled correctly.

Wednesday, January 11, 2017

Database Rankings

A listing of database rankings:






Sunday, January 8, 2017

SQL Server connections - so many ways

We found a posting on mssqlfun.com that outlines 21 ways to connect to SQL Server.
You can check out the posting here: Different Ways to Connect to SQL Server Instance

Here is a summary of their findings:

       

1. Connecting using the computer name

2. Connecting using the system IP

3. Connecting using the loopback IP 127.0.0.1

4. Connecting using localhost

5. Connecting using (local)

6. Connecting using just a dot (.)

7. Connecting using \\.\pipe\sql\query

8. Connecting using \\.\pipe\SQLLocal\MSSQLSERVER

9. Connecting using the computer name with the prefix np: 
    (to force the named pipes protocol)

10. Connecting using \\.\pipe\SQLLocal\MSSQLSERVER with the prefix np: 
     (to force the named pipes protocol)

11. Connecting using (local) with the prefix np: 
     (to force the named pipes protocol)

12. Connecting using just a dot (.) with the prefix np: 
     (to force the named pipes protocol)

13. Connecting using \\.\pipe\sql\query with the prefix np: 
     (to force the named pipes protocol)

14. Connecting using localhost with the prefix np: 
     (to force the named pipes protocol)

15. Connecting using the computer name with the prefix tcp: 
     (to force the TCP protocol)

16. Connecting using the system IP with the prefix tcp: (to force the TCP protocol) 
     – also connects over TCP without the prefix

17. Connecting using localhost with the prefix tcp: (to force the TCP protocol)

18. Connecting using (local) with the prefix tcp: (to force the TCP protocol)

19. Connecting using just a dot (.) with the prefix tcp: (to force the TCP protocol)

20. Connecting using the loopback IP 127.0.0.1 with the prefix tcp: (to force the TCP protocol) 
     – also connects over TCP without the prefix

21. Connecting by specifying the port number