
Unable to open SQL Server Configuration Manager

SQL Server Configuration Manager: Cannot connect to WMI Provider - Invalid class [0x80041010]
This error occurs when the .mof (Managed Object Format) files don't get installed and registered correctly.
To resolve this error we have to re-register the *.mof files with the following command (note that the '100' in the path is for SQL Server 2008; use the Shared folder that matches your version):
mofcomp.exe "C:\Program Files (x86)\Microsoft SQL Server\100\Shared\sqlmgmproviderxpsp2up.mof"

SQL Server : How to check the Authentication Mode with tSQL.

Of course you know, we've got two different Authentication Modes - Windows Authentication and Mixed-Mode Authentication, where Mixed-Mode is both Windows Authentication and SQL Server Authentication.
Windows Authentication is the most secure, but we often have to use SQL Server Authentication for the non-Windows platforms, such as Linux.  Hence, the Mixed-Mode approach.
Here are a few different ways, aside from SSMS, that you can use to check your authentication mode:
1.  xp_instance_regread - This procedure allows us to read the registry, where SQL Server stores '1' for Windows Authentication, and '2' for SQL Server / Mixed-Mode Authentication.

DECLARE @AuthenticationMode INT

EXEC master.dbo.xp_instance_regread N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer',
     N'LoginMode', @AuthenticationMode OUTPUT

SELECT CASE @AuthenticationMode
         WHEN 1 THEN 'Windows Authentication'
         WHEN 2 THEN 'Windows and SQL Server Authentication'
         ELSE 'Unknown'
       END AS [Authentication Mode]

2.  Server Property - The SERVERPROPERTY function returns '1' for Windows Authentication, and '0' for SQL Server/Mixed-Mode Authentication. 

SELECT CASE SERVERPROPERTY('IsIntegratedSecurityOnly')
         WHEN 1 THEN 'Windows Authentication'
         WHEN 0 THEN 'Windows and SQL Server Authentication'
       END AS [Authentication Mode]

NOTE:  It is not the same value as what is stored in the registry, and returned from the use of xp_instance_regread.

3.  xp_loginconfig - Returns a value of 'Windows NT Authentication' for Windows Authentication and 'Mixed' for SQL Server/Mixed-Mode Authentication.

EXEC master.sys.xp_loginconfig 'login mode'





SQL Server : Retrieve memory details from the OS


select 
total_physical_memory_kb,
available_physical_memory_kb,
system_cache_kb,
system_high_memory_signal_state, 
system_low_memory_signal_state,
system_memory_state_desc as system_memory_state 
from 
sys.dm_os_sys_memory


Check out BOL (Books Online); there are other columns available as well, but this is the query I use to get a quick look at the overall state of the system memory.
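
If you want the DMV to flag trouble for you, a quick derived check works as well.  A minimal sketch -- the 10% warning threshold is my own assumption, so tune it to your environment:

select 
total_physical_memory_kb / 1024 as total_physical_memory_mb,
available_physical_memory_kb / 1024 as available_physical_memory_mb,
cast(100.0 * available_physical_memory_kb / total_physical_memory_kb as decimal(5,2)) as pct_available,
case 
when system_low_memory_signal_state = 1 then 'LOW MEMORY'  -- the OS has signaled memory pressure
when 100.0 * available_physical_memory_kb / total_physical_memory_kb < 10 then 'WARNING'  -- assumed threshold
else 'OK' 
end as memory_state_flag
from 
sys.dm_os_sys_memory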

SQL Server : Table counts

' SELECT COUNT(*) ' on those big tables can be a little invasive.  Not only is it fairly time-consuming, but the resources used can be extreme.  Here are a few different methods for using the COUNT function.  Understand, these methods are just selective counts, per table.  If you need to retrieve counts across multiple tables, possibly for comparison, you could try something like this:  Counts Across Multiple Tables
Or even this, to compare counts across databases:  Compare Two Databases

-- Just one table:
1. Return count of all records in the table
SELECT COUNT(*) FROM dbo.TABLE

2. Return count of all values for the given COLUMN, excluding NULL values:
SELECT COUNT(COLUMN) FROM dbo.TABLE

3. Return the count of distinct values for the given COLUMN:
SELECT COUNT(DISTINCT(COLUMN)) FROM dbo.TABLE

(Does not work with MSAccess.)
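
One more note for the truly large tables: COUNT(*) returns an INT, which overflows somewhere past 2.1 billion rows.  COUNT_BIG returns a BIGINT:

SELECT COUNT_BIG(*) FROM dbo.TABLE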


-- All tables:
1.  Return a count for all tables.
CREATE TABLE dbo.TABLECOUNTS
 (
  TABLENAME VARCHAR(255) NOT NULL,
  RECORDCOUNT INT NOT NULL
  )
GO


EXEC sp_msforeachtable
  "INSERT INTO TABLECOUNTS SELECT '?' AS TABLENAME, COUNT(*) FROM ?"
SELECT * FROM dbo.TABLECOUNTS

-- Of course, you'll see the fully qualified table names, like this:  [dbo].[tablename].  Use the following updates, if desired, to get rid of the '[dbo].', and the ending right bracket, for each TABLENAME:

UPDATE dbo.TABLECOUNTS
SET TABLENAME = REPLACE(TABLENAME,'[dbo].[','')


UPDATE dbo.TABLECOUNTS
SET TABLENAME = REPLACE(TABLENAME,']','')


SELECT * FROM TABLECOUNTS


DROP TABLE TABLECOUNTS


2. Notably faster, but remember -- it is only a close estimate.
   The sysindexes table is not updated in real time, so your numbers may be a little less than accurate.

SELECT SO.Name, SI.rows
FROM sysindexes SI, SysObjects SO
WHERE SI.id = SO.ID
AND SI.indid < 2
AND SO.Type = 'U'
ORDER BY so.name

3. Similar to #2, but statistics are updated first with DBCC UPDATEUSAGE.

DECLARE @SQL NVARCHAR(255)
SET @SQL = 'DBCC UPDATEUSAGE (' + DB_NAME() + ')'
EXEC sp_executeSQL @SQL


SELECT
Schema_name(t.schema_id) AS SchemaName,
t.name AS TableName,
i.rows AS [Rows]
FROM 
sys.tables AS t INNER JOIN sys.sysindexes AS i
ON t.object_id = i.id
AND i.indid < 2
ORDER BY
t.name
If you've got v2005 or v2008, you should steer clear of querying sysindexes.  It will still work, but it is deprecated and slated for removal in a future version.  Probably best to use the DMVs instead, like this:

SELECT 
o.name, 
ddps.row_count
FROM 
sys.indexes i INNER JOIN sys.objects o
   ON i.OBJECT_ID = o.OBJECT_ID
INNER JOIN sys.dm_db_partition_stats ddps
   ON i.OBJECT_ID = ddps.OBJECT_ID
   AND i.index_id = ddps.index_id
WHERE 
i.index_id < 2
AND o.is_ms_shipped = 0
ORDER BY 
o.NAME

SQL Server : Last Record Written

In many situations you'll find yourself looking at date-specific data.  For example,
in the electronic trading industry, you may find yourself looking at 'trade date', or the datetime at which a particular trade was performed.  In the HFT (high frequency trading) world, it is all about when (and what)... the datetime attributes for the trading activity are very important for positions, trending and analysis, and many other things.

When was the last time a trade was submitted by a particular trading desk?


DECLARE @date DATETIME

SELECT @date = MAX(TradeDate)
FROM dbo.TradeTable
WHERE TradingDesk = 5   -- Trading desks are numerically identified.

SELECT
      TradeDate,TradingDesk,TraderName,Exchange,Symbol,Price,Quantity,OrderID
FROM
      dbo.TradeTable
WHERE
      TradingDesk = 5
      AND TradeDate = @date
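
A variant worth knowing: TOP (1) WITH TIES returns every row sharing the latest TradeDate in a single statement, no variable required:

SELECT TOP (1) WITH TIES
      TradeDate,TradingDesk,TraderName,Exchange,Symbol,Price,Quantity,OrderID
FROM
      dbo.TradeTable
WHERE
      TradingDesk = 5
ORDER BY
      TradeDate DESC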


Ok, let's get a little more clever, and create a stored procedure with input parameters:


   IF OBJECT_ID('dbo.usp_LastTradeDetails','P') IS NOT NULL
   DROP PROC dbo.usp_LastTradeDetails
   GO
   CREATE PROC dbo.usp_LastTradeDetails (
       @desk INT
   )
   AS 
   SET NOCOUNT ON;
   /*
   Allows retrieval of the last trade written, for the given @desk.
   EXEC dbo.usp_LastTradeDetails @desk = 5
   */

   DECLARE @date DATETIME   -- holds the desk's most recent trade date

   SELECT @date = MAX(TradeDate)
   FROM dbo.TradeTable
   WHERE TradingDesk = @desk

   SELECT
      TradeDate,TradingDesk,TraderName,Exchange,Symbol,Price,Quantity,OrderID
   FROM
      dbo.TradeTable
   WHERE
      TradingDesk = @desk
      AND TradeDate = @date

   SET NOCOUNT OFF;
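
And a quick smoke test, straight from the header comment:

   EXEC dbo.usp_LastTradeDetails @desk = 5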

Which indexes are missing?

Yes, yes, you can actually answer that question now... with the DMVs.  Short story: one of the wonderful new features of SQL Server, introduced in v2005, is the set of Dynamic Management Views.  There are two types, server-scope and database-scope, and they provide extensive information about the current state of the SQL Server and database(s).  This data can be very helpful to monitor, diagnose and administer your servers, reactively AND proactively.  

The DMVs are replacements for the system tables used within SQL Server 2000. 

This statement joins the missing index DMVs, in an effort to identify the most beneficial missing indexes, AND provide the statements necessary to create them.  


SET NOCOUNT ON;

SELECT 
avg_user_impact AS average_improvement_percentage, 
avg_total_user_cost AS average_cost_of_query_without_missing_index, 
'CREATE INDEX idx_' + [statement] +  
ISNULL(equality_columns, '_') +
ISNULL(inequality_columns, '_') + ' ON ' + [statement] + ' (' + ISNULL(equality_columns, ' ') + 
ISNULL(inequality_columns, ' ') + ')' + ISNULL(' INCLUDE (' + included_columns + ')', '') AS     create_missing_index_command
FROM 
sys.dm_db_missing_index_details a INNER JOIN sys.dm_db_missing_index_groups b
        ON a.index_handle = b.index_handle INNER JOIN sys.dm_db_missing_index_group_stats c
             ON b.index_group_handle = c.group_handle
WHERE 
avg_user_impact >= 40


SET NOCOUNT OFF;


NOTE:   In the CREATE INDEX statement, 'idx' is my standard index naming convention for non-clustered indexes... change as you see fit.

SQL Server Virtual Log Files

Each SQL Server database transaction log is composed of one or more physical files. Internal to each of these physical files are structures known as Virtual Log Files, or VLFs. Having too many or too few VLFs will impact the performance of your databases.

In this post I am just giving you a quick method to return the VLF count for each of your databases.  I will come back in the near future to help you determine whether you have too many or too few VLFs, and to provide you a method for correcting them.


/*
VLF count retrieval - 
Each transaction log file is divided logically into smaller segments called virtual log files.
The number of VLFs in each database will grow based on the autogrowth settings for the log file,
and how often transactions write to disk.  Too many VLFs will slow your log backups, and can
even slow down database recovery.

VLFs are normal and expected in every database.  Very high VLF counts, however, are an
impediment, and should be cleaned up.

This query returns the VLF count per database.
*/


USE master;

-- Variables
DECLARE 
@query VARCHAR(100),
@dbname SYSNAME,
@filecount INT
  
-- Table variable 
  DECLARE @databases Table (dbname SYSNAME)  
  INSERT @databases  (DBNAME)
  SELECT name
  FROM sys.databases 
  WHERE state = 0  --  << only online databases
  
-- Table variable for results  
  DECLARE @Results Table  (DBNAME SYSNAME, VLFCount INT)
 
  DECLARE @MajorVersion INT
  SET @MajorVersion = 
LEFT(CAST(SERVERPROPERTY('ProductVersion') AS NVARCHAR(max)),CHARINDEX('.',CAST(SERVERPROPERTY('ProductVersion') AS NVARCHAR(max)))-1)

-- Table variable to capture DBCC loginfo output 
  IF @MajorVersion < 11 -- below SQL Server 2012
  BEGIN
      DECLARE @dbccloginfoA table  
      (  
        FileID tinyint,  
        FileSize bigint,
        StartOffset bigint,  
        FSeqno int,
        [Status] tinyint,  
        Parity tinyint,
        CreateLSN numeric(25,0)
      )  

     WHILE EXISTS(SELECT TOP 1 DBNAME FROM @databases)  
     BEGIN
        SET @dbname = (SELECT TOP 1 DBNAME FROM @databases)
        SET @query = 'DBCC LOGINFO (' + '''' + @dbname + ''') '  

        INSERT @dbccloginfoA (FileID,FileSize,StartOffset,FSeqno,Status,Parity,CreateLSN)
        EXEC (@query)  

        SET @filecount = @@rowcount  

        INSERT @Results (DBNAME,VLFCount)
        VALUES (@DBNAME, @filecount)

        DELETE @databases 
        WHERE dbname = @dbname  

      END
  END
  ELSE 
  BEGIN 
      DECLARE @dbccloginfoB TABLE 
      (  
          RecoveryUnitID int,
          FileID tinyint,  
          FileSize bigint,  
          StartOffset bigint,
          FSeqno int,  
          [Status] tinyint,
          Parity tinyint,  
          CreateLSN numeric(25,0)  
      ) 
 
    WHILE EXISTS(SELECT TOP 1 DBNAME FROM @databases)
    BEGIN
        SET @dbname = (SELECT TOP 1 DBNAME FROM @databases)
        SET @query = 'DBCC LOGINFO (' + '''' + @dbname + ''') '  

        INSERT @dbccloginfoB (RecoveryUnitId,FileID,FileSize,StartOffset,FSeqno,Status,Parity,CreateLSN)
        EXEC (@query)

        SET @filecount = @@rowcount

        INSERT @Results
        VALUES (@dbname, @filecount)  

        DELETE @databases WHERE dbname = @dbname 
    END
  END 
  
-- output results
SELECT
DBNAME [Database], 
VLFCount [VLF Count]
FROM 
@Results
ORDER BY
dbname
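
If you're on SQL Server 2016 SP2 or later, you can skip the DBCC LOGINFO loop entirely.  The sys.dm_db_log_info DMF returns one row per VLF, so the count falls out of a simple aggregate.  A minimal sketch:

SELECT
d.name [Database],
COUNT(*) [VLF Count]
FROM 
sys.databases d CROSS APPLY sys.dm_db_log_info(d.database_id) li
WHERE 
d.state = 0  --  << only online databases, as above
GROUP BY 
d.name
ORDER BY
d.name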


Take a look at each of these links for much more information regarding VLFs:
     Transaction Log Physical Architecture
     High Virtual Log File (VLF) Count
     Transaction Log VLFs – too many or too few?

How to find the model name or serial number of a Windows server

Ok, ok...  Not necessarily tSQL, but definitely quite useful!  Today, for example, I was evaluating a customer's server, attempting to answer the question -- is the server adequate for the firm's enterprise reporting solution?  As-is, not even close.  But, I wanted to know the model of the machine, so that I could determine whether we could add more disk, and what our other resource limitations were.

Easy peasy.  Open a command prompt on your server and type the following to return your computer model:

         wmic csproduct get name
Or, type in this at the command prompt to return the serial number for the server:
       
         wmic bios get serialnumber
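If you'd rather not leave SSMS, you can wrap those same commands in xp_cmdshell.  Just remember that xp_cmdshell is disabled by default, and enabling it is a security decision to make deliberately:

         EXEC master.sys.xp_cmdshell 'wmic csproduct get name';
         EXEC master.sys.xp_cmdshell 'wmic bios get serialnumber';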
Why is this helpful to you, as a DBA?  Well, with that information, you can get online and look up the server specs.  In this case, it's an HP Proliant SL390s G7, which DOES allow for additional drives, based on the model, be it 1U, 2U or 4U.  For example, if it is a 4U, you can have up to eight 2.5" hot plug hard drive bays.



In my book, as a DBA you need to know your hardware layer just as much as you do the software layer.  This is a little tip that will help you do exactly that.

What are SQL Server Statistics?

Statistics store information about the distribution of data in your tables.  The optimizer uses this metadata to estimate the selectivity of a given statement, and create the optimal query plan.  To manage SQL Server efficiently, you really need a good understanding of the statistics.  In this post I will provide some information that may help you to better understand, and manage, your statistics.

Statistics are automatically created on each of the index columns, when an index is created, and they are created automatically by the optimizer, when AUTO_CREATE_STATISTICS is enabled. Auto-created stats names include the column name and object ID in hexadecimal format, like this:     _WA_Sys_00000002_79FD19BE
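
You can see exactly which statistics the optimizer created on its own, because sys.stats carries an auto_created flag:

     SELECT 
          OBJECT_NAME(object_id) [Table],
          name [Statistic]
     FROM 
          sys.stats
     WHERE 
          auto_created = 1
     ORDER BY 
          [Table]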


Statistics can also be created manually, via the CREATE STATISTICS statement... though I, personally, have never had the need.  Generally speaking, the optimizer does quite fine on its own. 
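
For completeness, this is what a manual creation looks like -- a hypothetical example, borrowing the TradeTable from the earlier post, on a column with no index of its own:

     CREATE STATISTICS st_TradeTable_Symbol
     ON dbo.TradeTable (Symbol)
     WITH FULLSCAN;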

That's how they are created, but where are they stored?  The statistics metadata is surfaced through the legacy sysindexes view, and we can also find additional information within the sys.stats and sys.indexes metadata views.  

By default, the statistics are updated automatically.  This occurs once roughly 20% of the data in a table has changed (the classic threshold is 500 rows plus 20% of the table's row count).  It's important to know that this threshold can sometimes be a little high on the larger tables.  For this reason, it's not a bad idea to create a custom routine specific to the larger tables alone.  Or, even to just update the stats manually on those larger tables, on a periodic basis, as sketched below.  
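
Something like this, run on a schedule for the big tables alone, covers that gap.  Again borrowing the hypothetical TradeTable:

      UPDATE STATISTICS dbo.TradeTable WITH FULLSCAN;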

How do you know if your statistics are being updated?  Run this statement to see if AUTO UPDATE is enabled:

       EXEC sp_helpdb YourDBName

The status line in the result-set should include 'IsAutoUpdateStatistics'.  If it doesn't, your statistics are not being updated.... ouch.  You'll want to correct that with this next statement, but remember -- if stats haven't been updated for a while, this first run can be pretty time-consuming, and invasive.  Be sure to do this after hours:

      USE YourDBName;
      -- Enable Auto Update of Statistics
      ALTER DATABASE YourDBName
      SET AUTO_UPDATE_STATISTICS ON;
      GO
      -- Update Statistics for whole database 
      EXEC sp_updatestats
      GO
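
Afterwards, you can confirm the setting programmatically.  DATABASEPROPERTYEX returns 1 when the option is enabled:

      SELECT DATABASEPROPERTYEX('YourDBName','IsAutoUpdateStatistics') [IsAutoUpdateStatistics];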

So, how does SQL Server actually use the statistics?  The optimizer uses this metadata to estimate how many rows each operator in the query plan will operate on.  That estimation is used to calculate the cost of each operator, and identify the cheapest query plan for the statement you are running.  Outdated, inaccurate or missing statistics can dramatically impact this process.  If the optimizer cannot accurately estimate this cost, you are going to see substantial performance degradation.

Performance not quite what it should be?  Well... when were your statistics last updated? Real easy to check:  

     SELECT 
          OBJECT_NAME(object_id) [Table],
          STATS_DATE(object_id, stats_id) [LastUpdated],
          name [Statistic]
     FROM 
         sys.stats
    WHERE 
         OBJECT_NAME(object_id) NOT LIKE 'sys%'
    ORDER BY 
         STATS_DATE(object_id, stats_id) DESC

    TIP:   Keep that one in your quick list. Somebody says 'why is it so slow?'.... start by 
             checking the stats.

Alright, so how do you read your stats?  I've listed several different methods below.  Run them on your system, take a look at your output.  This should help to better familiarize yourself with the statistics overall.  

     -- using sys.partitions and sys.indexes
     SELECT
          OBJECT_NAME(p.object_id) [Table],
          p.index_id [IndexID],
          i.name [Index],
          p.rows [#Rows]
     FROM
          sys.partitions p INNER JOIN sys.indexes i 
             ON p.object_id = i.object_id
             AND p.index_id = i.index_id
     WHERE   
          p.object_id = OBJECT_ID(N'Production.WorkOrder');

     -- using sys.dm_db_index_physical_stats
     SELECT  
          OBJECT_NAME(ips.object_id) [Table],
          ips.index_id [IndexID],
          i.name [Index],
          ips.record_count [#Rows]
     FROM
          sys.dm_db_index_physical_stats(DB_ID(N'AdventureWorks2012'),OBJECT_ID(N'Production.WorkOrder'),NULL,NULL,'DETAILED') ips 
          JOIN sys.indexes i 
             ON ips.object_id = i.object_id
             AND ips.index_id = i.index_id
     WHERE
          ips.index_level = 0;


     -- using sysindexes, sysobjects, sysusers, and information_schema.tables
     SELECT
         schemas.name [Schema],
         tbls.name [Table],
         i.name [Index],
         i.id [TableID],
         i.indid [IndexID],
         i.GroupID,
         i.rowmodctr [ModifiedRows],
          (SELECT MAX(rowcnt) 
           FROM sysindexes i2 
           WHERE i.id = i2.id
           AND i2.indid < 2) [RowCnt],
          CONVERT(DECIMAL(18,8),CONVERT(DECIMAL(18,8),i.rowmodctr) / 
             CONVERT(DECIMAL(18,8),
             (SELECT MAX(rowcnt) FROM sysindexes i2 WHERE i.id = i2.id AND i2.indid < 2)))
           [ModifiedPct],
         stats_date( i.id, i.indid ) [LastStatsUpdate],
    'False' [Processed]
     FROM
          sysindexes i INNER JOIN sysobjects tbls
             ON i.id = tbls.id INNER JOIN sysusers schemas 
                 ON tbls.uid = schemas.uid INNER JOIN information_schema.tables tl
                    ON tbls.name = tl.table_name
                    AND schemas.name = tl.table_schema
                    AND tl.table_type = 'BASE TABLE'
     WHERE
          0 < i.indid 
         AND i.indid < 255
         AND table_schema <> 'sys'
         AND i.rowmodctr <> 0
AND (SELECT MAX(rowcnt) FROM sysindexes i2 WHERE i.id = i2.id AND i2.indid < 2) > 0


     -- using sys.stats and sys.dm_db_stats_properties
     SELECT 
          st.object_id [TableID],
          OBJECT_NAME(st.object_id) [Table],
          st.name [Index],
          STATS_DATE(st.object_id, st.stats_id) [LastUpdated],
          modification_counter [RowsModified]
     FROM
          sys.stats st CROSS APPLY sys.dm_db_stats_properties(st.object_id, st.stats_id) sp 
    WHERE
          STATS_DATE(st.object_id, st.stats_id) <= DATEADD(DAY,-1,GETDATE()) 
         AND modification_counter > 0
         AND OBJECTPROPERTY(st.object_id, 'IsUserTable') = 1



That's it for now.  It's pretty simple -- if your statistics are out of date, or missing, the optimizer is not able to make optimal query plan choices.  This will impede query performance, in some cases very dramatically.  I have given you a lot of choices for viewing your statistics, and I have even shown an actual statistics update.  I encourage you to take a look at Ola Hallengren's database maintenance solution, which includes a good piece for maintaining SQL Server statistics.   See here:   

       https://ola.hallengren.com/sql-server-index-and-statistics-maintenance.html
Another good reference is at Brent Ozar's site, the sp_Blitz script.  Among many other things, this procedure will tell you where your auto stats updates are disabled.  I encourage you to take a look at this page, but -- don't stop with the statistics.  You can use sp_Blitz to perform a very fast, yet hugely informative health check on your systems. Check it out.

      http://www.brentozar.com/blitz/auto-update-stats-disabled/


Lastly, take a look here for more in depth information regarding the statistics:
 
      http://msdn.microsoft.com/en-us/library/ms190397.aspx

Monitoring SQL Server database transaction log usage

In this post I will provide a method for monitoring your database transaction logs using DBCC SQLPERF(logspace), which can be used to return transaction log space usage statistics for your databases.  We know there are many different causes for the transaction logs to expand.  It could be a very large or long-running transaction like index maintenance, or even a data archival routine (with lots of deletes).  You may also be using a FULL Recovery Model, but you aren't performing transaction log backups... or you're not performing them as frequently as you should.  

There are many causes for log growth, and you should be aware of that expansion so that you can act before a problem occurs.  This tip is a good way to keep an eye on that transaction log growth and utilization.  

First we need a table to store the output of DBCC SQLPERF(logspace).  We will use this table to store log use statistics over time.  This will enable you to analyze historically, which will be helpful for trending and diagnosing database and log growth.   

     -- create table to store output of DBCC SQLPERF(logspace)
     CREATE TABLE dbo.MonitorLogs (
        LogID INT PRIMARY KEY IDENTITY(1,1),
        LogDate DATETIME NOT NULL DEFAULT(GETDATE()),
        DBName VARCHAR(100) NOT NULL,
        LogSizeMB DECIMAL(18, 4) NOT NULL,
        LogSpaceUsed DECIMAL(18, 4) NOT NULL,
        LogStatus INT NOT NULL
      )

Now this is how we will execute the DBCC statement, directing the output to our new table:

      INSERT dbo.MonitorLogs (DBName,LogSizeMB,LogSpaceUsed,LogStatus)
      EXEC ('DBCC SQLPERF(logspace)')

Now we need to 'create' another day of data so that we can test the process completely.  To do this, I am just going to copy the data just inserted to MonitorLogs, using DATEADD to modify the LogDate to yesterday:

     INSERT dbo.MonitorLogs (LogDate,DBName,LogSizeMB,LogSpaceUsed,LogStatus)
     SELECT DATEADD(d,-1,LogDate),DBName,LogSizeMB,LogSpaceUsed,LogStatus
     FROM dbo.MonitorLogs

Check the data, and you will now see statistics from yesterday as well.  You will use this next statement to compare yesterday's data with today's.  The principle is simply monitoring your transaction log space over time.

     DECLARE @Yesterday DATE = CONVERT(DATE, GETDATE()-1);
     DECLARE @Today DATE = CONVERT(DATE, GETDATE());

     SELECT 
     COALESCE(b.DBName,a.DBName) [Database],
     a.LogSizeMB [TodayLogSizeMB],
     b.LogSizeMB [YestLogSizeMB],
     a.LogSizeMB - b.LogSizeMB [SizeMBDiff],
     a.LogSpaceUsed [TodayLSpaceUsed],
     b.LogSpaceUsed [YestLSpaceUsed],
     a.LogSpaceUsed - b.LogSpaceUsed [UsedDiff]
    FROM
     dbo.MonitorLogs a FULL OUTER JOIN dbo.MonitorLogs b 
        ON a.DBName = b.DBName
    WHERE 
    CONVERT(DATE, a.LogDate) = @Today 
    AND CONVERT(DATE, b.LogDate) = @Yesterday
    ORDER BY 
    TodayLogSizeMB DESC
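
Schedule the INSERT ... EXEC collection step as a daily SQL Agent job and the history builds itself.  Eventually you'll also want a retention step; a minimal sketch, assuming 90 days of history is enough:

     DELETE dbo.MonitorLogs 
     WHERE LogDate < DATEADD(DAY,-90,GETDATE());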


That's pretty much it.  Your result set will give you one row per database, with today's and yesterday's log size and space used side by side.

This is just one way to get the job done.  In the newer releases you should take a look at both of these DMVs:    

sys.dm_tran_database_transactions   http://msdn.microsoft.com/en-us/library/ms186957.aspx
sys.dm_tran_active_transactions        http://msdn.microsoft.com/en-us/library/ms174302.aspx