Overview of some often used SQL Server TSQL code (2005/2008/2012/2014/2016)

Version : 4.6
Date : 22/06/2014
By : Albert van der Sel

Remark : Please refresh this page to see any updates. It might take a few seconds to fully load this page.

This document lists some often used TSQL code and statements. It's, of course, not much compared
to extensive documentation like, for example, "Books Online" or "BOL".
But maybe it's convenient to have some often used code "close together" in an easy to browse document.
Mostly, it shows commands only, with only minimal comment.

Note: In TSQL code, anything behind "--", or between "/* */", is comment.





Changes with respect to the base version 4.6, from 22/06/2014:

- 29/05/2017 Added section 4, on Performance analyzing.


Main Contents:

Section 1. General Code and Examples:

1.0. CHECK ON THE PROGRESS OF A BACKUP, OR RESTORE, WITH TSQL.
1.1. GET A LISTING OF FILE AND FILEGROUP INFORMATION
1.2. CREATING A DATABASE USING A TSQL SCRIPT, AND ADDING/RENAMING OF FILES.
1.3. CREATING A SQL SERVER LOGIN AND A WINDOWS LOGIN (meaning: creating accounts)
1.4. SOME DICTIONARY QUERIES FOR VIEWING SESSIONS, LOCKS, BLOCKING SESSIONS, OBJECT ACCESS
1.5. SOME DICTIONARY QUERIES FOR VIEWING WAITS
1.6. SOME EXAMPLES OF USING xp_cmdshell
1.7. QUICKLY CREATE A COPY TABLE FROM AN EXISTING TABLE (WITH ALL DATA)
1.8. A FEW NOTES ABOUT "SCRIPTS", AND STORED PROCEDURES AND FUNCTIONS
1.9. CREATING A "LOOP" IN A SCRIPT, OR PROCEDURE, USING A CURSOR
1.10. HOW TO KILL A SESSION
1.11. HOW TO FIND AN OBSTRUCTING, OR BLOCKING, SESSION
1.12. SOME OFTEN USED STANDARD FUNCTIONS
1.13. REMOVING "UNWANTED" CHARACTERS FROM A FIELD OF A TABLE, OR STRING
1.14. REMOVING "UNWANTED" QUOTES FROM A FIELD OF A TABLE, OR STRING
1.15. SIMPLE OVERVIEW MOST IMPORTANT SQL SERVER DATATYPES
1.16. THE CAST AND CONVERT CONVERSION FUNCTIONS
1.17. A FEW WORDS ON GRANTING ROLES AND PERMISSIONS TO LOGINS (users) USING TSQL
1.18. A FEW WORDS ON SHOWING PERMISSIONS USING TSQL
1.19. GET A LISTING OF ALL COLUMNNAMES WITH ALL TABLES, AND DATATYPES, USING TSQL
1.20. A FEW NOTES ABOUT THE "@@" FUNCTIONS
1.21. HOW TO SCRIPT YOUR DATABASE
1.22. HOW TO GENERATE THE INSERT STATEMENTS FOR A TABLE WITH DATA
1.23. OVERVIEW SQL SERVER VERSIONS AND BUILDS v 7 - 2012
1.24. LOGGING FROM STORED PROCEDURES AND TSQL
1.25. USING EXEC IN TSQL CODE
1.26. HOW TO VIEW WHETHER PROCESSES (SPIDS) WERE NTLM OR KERBEROS AUTHENTICATED
1.27. A FEW NOTES ON USER DEFINED FUNCTIONS
1.28. HOW TO GET A LIST OF PK AND FK CONSTRAINTS
1.29. HOW TO IMPLEMENT PARTITIONING
1.30. A FEW NOTES ON DYNAMIC SQL
1.31. A FEW NOTES ON BACKUP AND RESTORE USING TSQL STATEMENTS
1.32. COMPARING TABLES: FIND ROWS WHICH ARE IN ONE TABLE, BUT WHICH ARE NOT IN ANOTHER SIMILAR TABLE.
1.33. A FEW NOTES ON REVIVING SQL ACCOUNTS AFTER MOVE (OR RESTORE) OF A DATABASE
1.34. A FEW NOTES ON INDEX ANALYSIS
1.35. USING PROMPT TOOLS (OR GRAPHICAL TOOLS LIKE SSMS) TO CONNECT TO A NAMED INSTANCE, ON SOME PORT
1.36. SOME REMARKS ON THE SCOM ACCOUNT IN SQL SERVER.
1.37. SOME SYSTEMVIEW QUERIES ON 2005/2008 DATABASE MIRRORING
1.38. SOME SYSTEMVIEW QUERIES ON 2012/2014 "ALWAYSON" CLUSTER
1.39. "ALWAYSON" CLUSTER AND TSQL
1.40. CHANGE A USER THROUGHOUT A SET OF DATABASES
1.41. CREATE A SQL SERVER AGENT PROXY ACCOUNT.
1.42. SOME DBCC MEMORY statements.
1.43. CREATE A SERVER-SIDE TRACE.
1.44. SHOW ACCESSED OBJECTS.
1.45. BCP TABLE DATA EXPORTS OR IMPORTS.
1.46. SQL SERVER AND SSL.
1.47. OPENQUERY() AND OPENROWSET().
1.48. SOME QUERIES TO SEE THE STATE OF THE TRANSACTION LOG.
1.49. THE COLUMNSTORE INDEX (2012/2014/2016).

Section 2. Some further exercises:

2.1. Exercise 1: SIMPLE EXAMPLE OF CODE TO GENERATE THE BASIC "CREATE TABLE" STATEMENTS
2.2. Exercise 2: SIMPLE EXAMPLE OF AN AUTOMATED TSQL BACKUP SCRIPT

Section 3. Old Stuff:

3.1. OLDER txt FILE LISTING DBA SCRIPTS AND STATEMENTS

Section 4. Performance analysis.

4.1. SOME VERY TRIVIAL STUFF YOU MIGHT WANT TO CHECK FIRST (or to keep in mind...).
4.2. SOME SQL SERVER POINTERS TO INVESTIGATE PERFORMANCE.

  --Listing of changed database objects (meaning DDL, as of a certain date).
  --Listing of Read/Write IO, and "stalls", on the database files.
  --Identifying the top "wait" events.
  --Listing of Tables and Indexes, types of index, ordered by the number of rows.
  --Identifying re-compilations.
  --Some DBCC commands related to plans, memory, caches, and how to remove plans.
  --Locks and Latches.



Section 1. General code plus examples.

1.0. QUICK CHECK ON THE PROGRESS OF YOUR BACKUP, OR RESTORE, USING TSQL:


If you run a backup job, or if you use the 'BACKUP DATABASE' or 'BACKUP LOG' TSQL commands, were you never curious
about the progress of the backup, like "%complete" and "estimated time left"?
Well, now you can see it. The following query should work on your system. Try this one:

SELECT session_id, status, blocking_session_id, wait_type,
percent_complete, total_elapsed_time, command, estimated_completion_time
FROM sys.dm_exec_requests
WHERE command like '%BACKUP%'

You can use the same statement to see the progress of a RESTORE, or ALTER command, like for example:

SELECT session_id, status, blocking_session_id, wait_type,
percent_complete, total_elapsed_time, command, estimated_completion_time
FROM sys.dm_exec_requests
WHERE command like '%RESTORE%'
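
Note: percent_complete is a percentage, and estimated_completion_time is expressed in milliseconds.
So, as a minimal sketch, you could let SQL Server compute the estimated minutes left directly:

SELECT session_id, command, percent_complete,
estimated_completion_time/60000 AS [Minutes Left (estimated)]
FROM sys.dm_exec_requests
WHERE command like '%BACKUP%' OR command like '%RESTORE%'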



1.1. GET A LISTING OF FILE AND FILEGROUP INFORMATION:


Often you need a listing of your database files and characteristics, like paths, sizes etc..
Here are a few queries that will show you that information.


=> 1.1.1 View filenames and sizes of the current database:

-- use Yourdatabase_Name
-- go

-- Overall listing, and names, and free/used space:

SELECT name AS [File Name] , physical_name AS [Physical Name], size/128 AS [Total Size in MB],
size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS [Available Space In MB]
FROM sys.database_files;

-- Selected info from sysfiles:

SELECT sysfiles.fileid, sysfiles.groupid, sysfiles.size,
(sysfiles.size * 8 / 1024) AS "SIZE_IN_MB",
substring(sysfiles.name, 1, 30) AS NAME,
substring(sysfiles.filename, 1, 100) AS FILENAME,
substring(sysfilegroups.groupname, 1, 40) AS GROUPNAME
FROM sysfiles, sysfilegroups
WHERE sysfiles.groupid=sysfilegroups.groupid


=> 1.1.2 View names and sizes from all databases:

SELECT db_name(database_id), Name [Database], Physical_Name [Physical file], size*8/1024 [Size_MB] FROM sys.master_files

exec sp_databases


=> 1.1.3 Very Nice One: View names, sizes, and many other properties from all databases:

SELECT
substring(d.name,1,40) [DBNAME], d.recovery_model_desc,
substring(d.collation_name,1,35) [Collation], substring(d.page_verify_option_desc,1,20) [CheckPage],
substring(m.Physical_Name,1,90) [Physical file], m.size*8/1024 [Size_MB], substring(d.state_desc,1,10) [Status]
FROM sys.databases d, sys.master_files m
WHERE m.database_id=d.database_id
ORDER BY d.name



1.2. CREATING A DATABASE USING A TSQL SCRIPT, AND ADDING/RENAMING FILES:


1.2.1 Example of creating a Database (called "SALES").

CREATE DATABASE SALES
ON
PRIMARY
(NAME=salesPrimary,
FILENAME='C:\mssql\data\sales\sales.mdf',
SIZE=100MB,
MAXSIZE=200,
FILEGROWTH=20),
FILEGROUP salesData
(NAME=salesData01,
FILENAME='D:\mssql\data\sales\salesdata01.ndf',
SIZE=100MB,
MAXSIZE=200,
FILEGROWTH=10),
FILEGROUP salesIndex
(NAME=salesIndex01,
FILENAME='E:\mssql\data\sales\salesindex01.ndf',
SIZE=100MB,
MAXSIZE=200,
FILEGROWTH=10)
LOG ON
(NAME=salesLog01,
FILENAME='F:\mssql\data\sales\saleslog01.ldf',
SIZE=100MB,
MAXSIZE=200,
FILEGROWTH=10)
GO

You can adjust such a script to your own needs.

Notes: for performance and availability reasons:

  • The FILEGROUP lines mark the starts of the "filegroup" definitions. The primary filegroup, (with the .mdf file), is supposed
    to be reserved for the database dictionary (comparable to the system tablespace in Oracle).

  • Try to create separate filegroups to place the tables and indexes on: don't put them into the "primary" (.mdf) file.
    So, the .mdf file is supposed to be reserved for the dictionary, the .ldf file(s) are for the transaction log, and
    the ordinary (regular) datafiles (for tables and indexes) use the .ndf extension.

  • Always try to separate the transaction log from the datafiles, on different drives/filesystems.

  • Try to create separate filegroup(s) (containing .ndf files) for holding tables, and indexes.



1.2.2 Example of adding a "FileGroup" and file to a Database.

Suppose, for example, that we did not specify an INDEX filegroup in the CREATE statement of the
SALES database, as shown above.
Now, we can add a separate filegroup (and file) with the intention to hold the indexes of the database.

ALTER DATABASE SALES
ADD FILEGROUP [INDEX]
GO

ALTER DATABASE SALES
ADD FILE
(
NAME = Sales_Index,
FILENAME = 'E:\MSSQL\DATA\SALES\sales_index.ndf',
SIZE = 1200MB,
MAXSIZE = 1500MB,
FILEGROWTH = 50MB
)
TO FILEGROUP [INDEX]
GO

Note: You can create, or add, multiple filegroups to a database, each designated for some purpose.
For example, the "CREATE object ... ON FILEGROUP filegroup_name" clause for a TABLE, or INDEX, determines on
which filegroup this object will be placed.
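
For example, a minimal sketch of placing a new table on the "salesData" filegroup of the SALES database
created above, and one of its indexes on the "salesIndex" filegroup (the CUSTOMERS table is just an illustration):

CREATE TABLE CUSTOMERS
(
cust_id INT NOT NULL,
cust_name VARCHAR(50) NOT NULL
)
ON salesData
GO

CREATE INDEX indx_cust_name ON CUSTOMERS(cust_name)
ON salesIndex
GO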

=> Some other examples:

-> Altering the size of a Database file:

ALTER DATABASE SALES
MODIFY FILE
(
NAME = Sales_Index,
size=360000MB
);
GO

ALTER DATABASE msdb
MODIFY FILE
(NAME = MSDBLog,
SIZE = 100MB);
GO

-> Adding a database file to TEMPDB:

TempDB is a "special" Database, in use for sort operations at query runtime, temporary tables, index rebuilds etc..
It does not use exactly the same construction of "filegroups" as with user databases.
It only has the default filegroup, and transaction log files.

ALTER DATABASE TEMPDB
ADD FILE
(
NAME = TempDB_5,
FILENAME = 'T:\SQL_TempDB\TempDB_5.ndf',
SIZE = 8192MB,
MAXSIZE = 16384MB,
FILEGROWTH = 64MB
)
TO FILEGROUP [default]
GO
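
To check the result, you can list the files of TempDB, for example with:

SELECT name, physical_name, size*8/1024 AS [Size_MB]
FROM tempdb.sys.database_files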

1.2.3 Renaming Logical and Physical files.

1. Rename logical file names of Database files:

Let's create a test database.

CREATE DATABASE TEST1
ON
PRIMARY
(NAME=Test1Primary,
FILENAME='R:\SQLData\test1.mdf',
SIZE=100MB,
MAXSIZE=200,
FILEGROWTH=20),
FILEGROUP Test1Data
(NAME=Test1Data,
FILENAME='R:\SQLData\test1data.ndf',
SIZE=100MB,
MAXSIZE=200, FILEGROWTH=10)
LOG ON
(NAME=Test1Log,
FILENAME='S:\SQLlog\test1log.ldf',
SIZE=100MB,
MAXSIZE=200,
FILEGROWTH=10)
GO

Now, let's see what the "logical" and "physical" names are:

SELECT substring(name,1,20) AS [Logical FileName] , substring(physical_name,1,30) AS [PhysicalName], size/128 AS [Total Size],
size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS [Available Space In MB]
FROM sys.database_files;

Logical FileName.....Physical Name..................Total Size.......Available Space In MB
Test1Primary.........R:\SQLData\test1.mdf...........100..............98.625000
Test1Log.............S:\SQLlog\test1log.ldf.........100..............99.570313
Test1Data............R:\SQLData\test1data.ndf.......100..............99.937500

Ok, let's rename, for example, two Logical filenames:

ALTER DATABASE [TEST1]
MODIFY FILE (NAME=N'Test1Primary', NEWNAME=N'Primary')
GO


ALTER DATABASE [TEST1]
MODIFY FILE (NAME=N'Test1Log', NEWNAME=N'TLog')
GO

If we run the same SELECT again, we now see:

Logical FileName.....Physical Name..................Total Size.......Available Space In MB
Primary..............R:\SQLData\test1.mdf...........100..............98.625000
TLog.................S:\SQLlog\test1log.ldf.........100..............99.570313
Test1Data............R:\SQLData\test1data.ndf.......100..............99.937500


2. Rename the physical file names of Database files:

First, put the database in single user mode.
Then detach the database. After that, using Explorer or the command prompt etc.., rename the physical files.

ALTER DATABASE TEST1 SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO

USE [master]
GO
EXEC master.dbo.sp_detach_db @dbname = N'TEST1'
GO

Now, use whatever tool to rename or move the physical files. In this example, the .mdf and .ndf files
were renamed to "test1test.mdf" and "test1datatest.ndf".

Attach the database again:

USE [master]
GO
CREATE DATABASE Test1 ON
( FILENAME = N'R:\SQLDATA\test1test.mdf' ),
( FILENAME = N'R:\SQLDATA\test1datatest.ndf' ),
( FILENAME = N'S:\SQLLog\test1log.ldf' )
FOR ATTACH
GO

Put database in multi user mode again:

ALTER DATABASE TEST1 SET MULTI_USER
GO



1.2.4 Some related Systemviews / Dictionary views:
=> For viewing database characteristics:

select * from sys.databases

select name, database_id, create_date, user_access_desc, state_desc from sys.databases

=> For viewing Filegroup and file information:

-- Go to the database you are interested in:
-- USE DATABASENAME -- e.g.: use sales
-- GO

Get all file information:

SELECT * FROM sys.sysfiles

SELECT * FROM sys.database_files

Overall listing, and names and free/used space:

SELECT name AS [File Name] , physical_name AS [Physical Name], size/128 AS [Total Size in MB],
size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS [Available Space In MB]
FROM sys.database_files;

Selected info from sysfiles:

SELECT sysfiles.fileid, sysfiles.groupid, sysfiles.size,
(sysfiles.size * 8 / 1024) AS "SIZE_IN_MB",
substring(sysfiles.name, 1, 30) AS NAME,
substring(sysfiles.filename, 1, 100) AS FILENAME,
substring(sysfilegroups.groupname, 1, 40) AS GROUPNAME
FROM sysfiles, sysfilegroups
WHERE sysfiles.groupid=sysfilegroups.groupid



1.3. CREATE A "SQL SERVER" LOGIN AND A "WINDOWS" LOGIN:


There are two main types of logins (accounts):

- A "Windows" login (authenticated by Windows, either local, or by AD)
- A (internal) SQL login (defined in-, and authenticated by SQL Server)

(and indeed a few other special types of accounts are possible.)

1.3.1 Create an (internal) SQL Server account (also called "login"):

This is an internal SQL Server account, independent from Windows authentication.
Anyone who knows this account and password, can logon to SQL server itself,
and it does not matter on what type of machine or Domain (if applicable) the user is working from,
as long as the user has the client software and all necessary networking software.
So, the account that the real user used to logon to the OS, has nothing to do with the "SQL Server account".

CREATE LOGIN login_name WITH PASSWORD='password';

Example:

CREATE LOGIN webuser WITH PASSWORD='beerisok!'

Most important options/clauses (there are more):

MUST_CHANGE (must change password at first logon)
DEFAULT_DATABASE = database
DEFAULT_LANGUAGE = language
CHECK_EXPIRATION = { ON | OFF}
CHECK_POLICY = { ON | OFF}

Example:

CREATE LOGIN webuser WITH PASSWORD='welcome' MUST_CHANGE, CHECK_EXPIRATION=on

Example:

CREATE LOGIN mary WITH PASSWORD='welcome01', DEFAULT_DATABASE=TEST

(Some options need to be used in combination with others.)

Example: Creating a SQL Login, and grant access to the TEST database, and grant the login some common roles:

CREATE LOGIN mary WITH PASSWORD='welcome01', DEFAULT_DATABASE=TEST
GO
USE TEST
GO
EXEC sp_grantdbaccess 'mary', 'mary'
GO
EXEC sp_addrolemember 'db_datareader', 'mary'
GO
EXEC sp_addrolemember 'db_datawriter', 'mary'
GO


1.3.2 Create a Windows login:

This time, a user which has been authenticated by Windows (OS logon or Domain logon),
needs only be sort of "defined" in SQL Server. The authentication was already done by Windows, so
in creating the account, you do not specify a password.
In much literature, you see terms like "Windows authentication Mode", or "Trusted connection".
In most cases, we are talking about Domain accounts, like "Domain_name\User_name".
But also local accounts (of a Server, or other machine) can be added in the same way.

CREATE LOGIN [Domain\user | Localmachine\user] FROM WINDOWS;

Examples:

CREATE LOGIN [ABCCORP\john] FROM WINDOWS;

CREATE LOGIN [SERVER1\mary] FROM WINDOWS;

Creating logins does not mean those accounts have any permissions in SQL Server, unless you explicitly grant roles or permissions.

- Usually, you grant "roles" to accounts, by which the user accounts inherit certain permissions.
- Only in special cases, you grant permissions to individual logins directly.

Roles in SQL Server have quite the same functionality as "roles" in other database systems like Oracle.

Accounts and roles are also sometimes called "principals".

Note:

About granting "roles" and other permissions to logins: please see Section 1.17.


1.3.3 Using some special clauses:

In the CREATE LOGIN statement (new account), or the ALTER LOGIN statement (existing account), the following options can be used:

CHECK_POLICY = { ON | OFF }
CHECK_EXPIRATION = { ON | OFF }

Using the graphical SQL Server Management Studio (SSMS) makes it easy to create both types of logins.

- Note that with a Windows Login, the "enforce password policy" and "enforce password expiration" options are grayed out.
- With a SQL Login, you can check or uncheck those options.

So, per default, Windows Logins inherit the password policy as is defined in the security mechanism of the Operating System,
thus either the Local Machine, or AD.

For a SQL account, you can set or unset those options. If you set them, the login inherits the password policy- and expiration settings
as defined in the OS.

When using TSQL, you can achieve the same effect as in the graphical SSMS utility.
In the CREATE LOGIN statement (new account), or the ALTER LOGIN statement (existing account), just use the following options:

CHECK_EXPIRATION = { ON | OFF }

Applies only to SQL Server logins.
It specifies whether password expiration policy should be enforced on this login.

CHECK_POLICY = { ON | OFF }

Applies only to SQL Server logins.
It specifies that the Windows password policies of the computer on which SQL Server is running should be enforced on this login.
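
For example, a minimal sketch of applying both options to the login "mary" from the earlier examples
(note that CHECK_EXPIRATION can only be ON if CHECK_POLICY is ON as well):

ALTER LOGIN mary WITH CHECK_POLICY=ON, CHECK_EXPIRATION=ON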



1.4. SOME DICTIONARY QUERIES FOR VIEWING SESSIONS, LOCKS, BLOCKING SESSIONS, OBJECT ACCESS:


1.4.1 VIEWING SESSIONS:

SELECT session_id,
substring(convert(varchar(20),login_time),1,17) AS LOGINTIME,
substring(host_name,1,10) AS HOSTNAME,
substring(program_name,1,30) AS PROGRAM,
substring(login_name,1,10) AS LOGINNAME,
substring(nt_user_name,1,10) AS NT_USER,
substring(status,1,10) AS STATUS,
lock_timeout,
row_count
FROM sys.dm_exec_sessions

SELECT spid, cpu, physical_io, blocked, cmd, waittime,
substring(convert(varchar(20),last_batch), 1, 20) as "LASTBATCH",
substring(nt_username, 1, 15) AS "USERNAME",
substring(loginame, 1, 20) AS "LOGINNAME",
substring(hostname, 1, 15) AS "HOSTNAME",
DB_NAME(dbid) AS "DB",
substring(program_name, 1, 40) AS "PROGRAM"
FROM master.dbo.sysprocesses

Note the field "blocked" in the above query, which also allows you to easily identify blocked sessions.


SELECT spid, cpu, physical_io, blocked, cmd, waittime,
substring(convert(varchar(20),last_batch), 1, 20) as "LASTBATCH",
substring(nt_username, 1, 15) AS "USERNAME",
substring(loginame, 1, 20) AS "LOGINNAME",
substring(hostname, 1, 15) AS "HOSTNAME",
DB_NAME(dbid) AS "DB",
substring(program_name, 1, 40) AS "PROGRAM"
FROM master.dbo.sysprocesses
WHERE cmd not like 'AWAIT%' -- show only active statements.
-- AND loginame <> 'sa'

SELECT distinct
SUBSTRING(DB_NAME(dbid),1,30) as "DATABASE",
substring(hostname, 1, 15) AS "HOSTNAME",
substring(loginame, 1, 20) AS "LOGINNAME",
substring(program_name, 1, 40) AS "PROGRAM"
FROM master.dbo.sysprocesses
where loginame not in ('sa')
and DB_NAME(dbid) not in ('master','msdb')
order by 1


1.4.2 VIEWING LOCKS AND BLOCKING SESSIONS:

exec sp_lock

SELECT
s.login_name, s.nt_user_name,
r.session_id AS BLOCKED_SESSION_ID,
r.blocking_session_id AS BLOCKING_SESSION_ID,
s.program_name,
r.start_time,r.status,r.command,database_id,
r.wait_type,r.open_transaction_count,r.percent_complete,r.cpu_time,r.reads,r.writes,r.deadlock_priority
from sys.dm_exec_sessions s, sys.dm_exec_requests r
where s.session_id=r.session_id AND blocking_session_id > 0

SELECT a.session_id AS blocked_session_id, b.session_id AS blocking_session_id, c.text AS blocking_text,
e.wait_type AS blocking_resource,
e.wait_duration_ms, d.text AS blocked_text FROM sys.dm_exec_requests a INNER JOIN sys.dm_exec_requests b
ON a.blocking_session_id = b.session_id CROSS APPLY sys.dm_exec_sql_text(b.sql_handle) c CROSS APPLY sys.dm_exec_sql_text(a.sql_handle) d
INNER JOIN sys.dm_os_waiting_tasks e ON e.session_id = b.session_id

SELECT
L.request_session_id AS SPID,
DB_NAME(L.resource_database_id) AS DatabaseName,
O.Name AS LockedObjectName,
P.object_id AS LockedObjectId,
L.resource_type AS LockedResource,
L.request_mode AS LockType,
ST.text AS SqlStatementText,
ES.login_name AS LoginName,
ES.host_name AS HostName,
TST.is_user_transaction as IsUserTransaction,
AT.name as TransactionName,
CN.auth_scheme as AuthenticationMethod
FROM sys.dm_tran_locks L
JOIN sys.partitions P ON P.hobt_id = L.resource_associated_entity_id
JOIN sys.objects O ON O.object_id = P.object_id
JOIN sys.dm_exec_sessions ES ON ES.session_id = L.request_session_id
JOIN sys.dm_tran_session_transactions TST ON ES.session_id = TST.session_id
JOIN sys.dm_tran_active_transactions AT ON TST.transaction_id = AT.transaction_id
JOIN sys.dm_exec_connections CN ON CN.session_id = ES.session_id
CROSS APPLY sys.dm_exec_sql_text(CN.most_recent_sql_handle) AS ST
WHERE resource_database_id = db_id()
ORDER BY L.request_session_id


1.4.3 COUNTING LOCKS IN A DATABASE, SHOWING TYPE OF LOCKS:


SELECT request_session_id, request_type, request_mode FROM sys.dm_tran_locks where resource_database_id=5 -- fill in the dbid

or (same type of query)

SELECT request_session_id, request_type, request_mode FROM sys.dm_tran_locks where DB_NAME(resource_database_id)='name' -- fill the database name

-- list the "harder" locks (exclusive row etc..):

SELECT COUNT(*) FROM sys.dm_tran_locks where resource_database_id=5 -- fill in dbid
and request_mode in ('IX','SIX','X','Sch-M','U','RangeX-X','IU','SIU')

SELECT COUNT(*) FROM sys.dm_tran_locks where db_name(resource_database_id)='name' -- fill in name
and request_mode in ('IX','SIX','X','Sch-M','U','RangeX-X','IU','SIU')

SELECT distinct s.login_name, s.nt_user_name, r.request_session_id, r.request_type, r.request_mode FROM sys.dm_tran_locks r, sys.dm_exec_sessions s
WHERE s.session_id=r.request_session_id
and r.request_mode in ('IX','SIX','X','Sch-M','U','RangeX-X','IU','SIU')

select db_name(resource_database_id), request_type, request_mode, request_session_id from sys.dm_tran_locks
where request_mode not in ('S')
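
If you want a count per type of lock, a simple GROUP BY does the job, as in this sketch:

SELECT request_mode, COUNT(*) AS [number_of_locks]
FROM sys.dm_tran_locks
WHERE db_name(resource_database_id)='name' -- fill in the database name
GROUP BY request_mode
ORDER BY COUNT(*) DESC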


1.4.4 ATTEMPTING TO REINDEX IN STANDARD EDITION (no ONLINE rebuild available, so check for locks first):

-- Just a try:

set nocount on
DECLARE @TableName varchar(255)
DECLARE @AllowPLock INT
DECLARE @LOCKS INT

DECLARE MyCursor CURSOR FOR
SELECT DISTINCT t.table_name, s.allow_page_locks FROM information_schema.tables t, sys.indexes s
WHERE t.table_type = 'base table' AND t.TABLE_NAME=OBJECT_NAME(s.object_id) AND s.allow_page_locks=1
AND t.table_name not like 'ROS_SHI%'
AND t.table_name not like 'C%'

SELECT @LOCKS=(SELECT COUNT(*) FROM sys.dm_tran_locks where resource_database_id=38 --Choose your DBID
and request_mode in ('IX','SIX','X','Sch-M','U','RangeX-X','IU','SIU'))

IF @LOCKS=0
BEGIN

OPEN MyCursor

FETCH NEXT FROM MyCursor INTO @TableName, @AllowPLock
WHILE @@FETCH_STATUS = 0

BEGIN

SELECT @LOCKS=(SELECT COUNT(*) FROM sys.dm_tran_locks where resource_database_id=38 --Choose your DBID
and request_mode in ('IX','SIX','X','Sch-M','U','RangeX-X','IU','SIU'))

IF @LOCKS=0
BEGIN
EXEC ('DBCC DBREINDEX('+@TableName+','+''''',80)')
END
FETCH NEXT FROM MYCursor INTO @TableName, @AllowPLock
END

CLOSE MyCursor

DEALLOCATE MyCursor

END


1.4.5 VIEWING TABLE ACCESS WITH LAST UPDATE OF TABLE, LAST SCAN:

Fill in the variable @TABLENAME, with your tablename, and execute the script below:

DECLARE @TABLENAME VARCHAR(128)
SET @TABLENAME='fill in your table name'

SELECT DB_NAME(ius.[database_id]) AS [Database],
OBJECT_NAME(ius.[object_id]) AS [TableName],
MAX(ius.[last_user_scan]) AS [last_user_scan_or_last_access],
MAX(ius.[last_user_seek]) AS [last_user_seek],
MAX(ius.[last_user_update]) AS [last_user_update]
FROM sys.dm_db_index_usage_stats AS ius
WHERE ius.[database_id] = DB_ID()
AND ius.[object_id] = OBJECT_ID(@TABLENAME)
GROUP BY ius.[database_id], ius.[object_id];

SELECT [TableName] = name, create_date, modify_date
FROM sys.tables
WHERE name = @TABLENAME


1.5. SOME DICTIONARY QUERIES FOR VIEWING WAITS:


1.5.1 View IO statistics on all database files (note: (dbid, fileid)=(null,null)):

SELECT * FROM fn_virtualfilestats(null, null)


1.5.2 View Pending IO (disk & network)

SELECT SUM(pending_disk_io_count) AS [Number of pending I/Os] FROM sys.dm_os_schedulers

SELECT * FROM sys.dm_io_pending_io_requests


1.5.3 View IO statistics on all database files of a specific database (dbid, fileid)=(dbid,null):

-- e.g. like dbid=5:
SELECT * FROM fn_virtualfilestats(5, null)


1.5.4 View the top 10 wait statistics:

-- All

select top 10 *
from sys.dm_os_wait_stats
order by wait_time_ms desc

1.5.5 View the top 10 wait statistics, except for well known wait_types due to system processes

select top 10 *
from sys.dm_os_wait_stats
where wait_type not in
(
'KSOURCE_WAKEUP', 'SLEEP_BPOOL_FLUSH', 'BROKER_TASK_STOP',
'XE_TIMER_EVENT', 'XE_DISPATCHER_WAIT', 'FT_IFTS_SCHEDULER_IDLE_WAIT',
'SQLTRACE_BUFFER_FLUSH', 'CLR_AUTO_EVENT', 'BROKER_EVENTHANDLER',
'LAZYWRITER_SLEEP', 'BAD_PAGE_PROCESS', 'BROKER_TRANSMITTER',
'CHECKPOINT_QUEUE', 'DBMIRROR_EVENTS_QUEUE', 'LAZYWRITER_SLEEP',
'ONDEMAND_TASK_QUEUE', 'REQUEST_FOR_DEADLOCK_SEARCH', 'LOGMGR_QUEUE',
'SLEEP_TASK', 'SQLTRACE_BUFFER_FLUSH', 'CLR_MANUAL_EVENT',
'BROKER_RECEIVE_WAITFOR', 'PREEMPTIVE_OS_GETPROCADDRESS',
'PREEMPTIVE_OS_AUTHENTICATIONOPS', 'BROKER_TO_FLUSH'
)
order by wait_time_ms desc

A much better statement can be found at Paul Randal's blog, together with a nice explanation of wait types.


1.5.6 View Signal Waits (cpu) and Resource Waits (other waits like IO, locks etc..):

SELECT signal_wait_time_ms=sum(signal_wait_time_ms),'%signal (cpu) waits' = cast(100.0 * sum(signal_wait_time_ms) / sum (wait_time_ms) as numeric(20,2)),
resource_wait_time_ms=sum(wait_time_ms - signal_wait_time_ms),'%resource waits'= cast(100.0 * sum(wait_time_ms - signal_wait_time_ms) / sum (wait_time_ms) as numeric(20,2))
FROM sys.dm_os_wait_stats


1.5.7 Most cpu consuming statements:

select top 10
object_name(st.objectid), db_name(st.dbid), total_worker_time/execution_count AS AverageCPUTime,
CASE statement_end_offset
WHEN -1 THEN st.text
ELSE SUBSTRING(st.text, statement_start_offset/2 + 1, (statement_end_offset - statement_start_offset)/2 + 1)
END AS StatementText
from
sys.dm_exec_query_stats qs CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY AverageCPUTime DESC


1.5.8 sessions and resources (1):

select session_id, host_name, login_name, cpu_time, memory_usage, reads, writes
from sys.dm_exec_sessions order by reads asc


1.5.9 sessions and resources (2):

select e.session_id, e.host_name, e.login_name, e.cpu_time, e.memory_usage, e.reads, e.writes,
db_name(r.database_id), r.wait_type, r.wait_time, r.open_transaction_count
from sys.dm_exec_sessions e, sys.dm_exec_requests r
where e.session_id=r.session_id order by db_name(r.database_id)


1.5.10 Top Queries and stored procedures that have been recompiled (plan_generation_num):

select top 20
sql_text.text, sql_handle, plan_generation_num, execution_count,
db_name(dbid), object_name(objectid)
from
sys.dm_exec_query_stats a
cross apply sys.dm_exec_sql_text(sql_handle) as sql_text
where
plan_generation_num >1
order by plan_generation_num desc


1.5.11 Get an idea of available memory in SQL Server:

If you do this:

select name, type_desc
from sys.all_objects where name like '%memory%'

You will find all DM views which can give you information on memory usage of SQL Server.

If you do this:

select name, type_desc
from sys.all_objects
where name like '%_os_%' and type_desc='VIEW'

You get a lot of views back which you can query on general performance information, but not only of SQL, but about
the OS and the Host as well.

In any case, the following query is quite interesting for obtaining a general impression of the amount of available memory:

select total_physical_memory_kb, available_physical_memory_kb,
system_memory_state_desc
from sys.dm_os_sys_memory

In the last column, you may find values like:

"Available physical memory is high"
"Available physical memory is low"

and a few other possible values, which speak for themselves.
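
Similarly, to get an impression of how much memory the SQL Server process itself is using (2008 and later),
you might query "sys.dm_os_process_memory", for example like so:

SELECT physical_memory_in_use_kb/1024 AS [Memory In Use (MB)],
memory_utilization_percentage
FROM sys.dm_os_process_memory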

By the way, in section 1.42, you will find some information on a few "DBCC MEMORY" statements.



1.6. SOME EXAMPLES USING xp_cmdshell:


With the (external) stored procedure "xp_cmdshell", you can call DOS (cmd) commands and execute OS shell scripts,
from your SQL code.

To see if it works on your system, try the following from a Query Window:

exec xp_cmdshell 'dir c:\'

This should provide a listing of files and directories in the root of C:\

If instead you see this returned:

Msg 15281, Level 16, State 1, Procedure xp_cmdshell, Line 1
SQL Server blocked access to procedure 'sys.xp_cmdshell' of component 'xp_cmdshell' because
this component is turned off as part of the security configuration for this server.
A system administrator can enable the use of 'xp_cmdshell' by using sp_configure.
For more information about enabling 'xp_cmdshell', see "Surface Area Configuration" in SQL Server Books Online.

Then you must 'activate' it, using either "sp_configure" or using the graphical "Surface Area Configuration" utility.

Warning: many people working in security-related fields advise not to enable this procedure,
due to the possible risks involved.


1.6.1 Enabling "xp_cmdshell" using "sp_configure":

Copy and paste the following lines of code into a Query Window. Then highlight and execute it.

EXEC master.dbo.sp_configure 'show advanced options', 1

RECONFIGURE

EXEC master.dbo.sp_configure 'xp_cmdshell', 1

RECONFIGURE

To find out if it works now, try again the following from a Query Window:

exec xp_cmdshell 'dir c:\'


1.6.2 Some examples using "xp_cmdshell":


Example 1:

EXEC master.dbo.xp_cmdshell 'c:\test\mybatch.cmd'


Example 2:

-- (fragment from a larger script: the variables @LOGSTRING, @log_cmd, @ART_NR,
-- @ACTION and @ACTION_DATE are assumed to be declared earlier)

SELECT @LOGSTRING=@ART_NR+' :'+@ACTION+' '+CONVERT(VARCHAR(64),@ACTION_DATE)
SELECT @log_cmd='echo'+' '+@LOGSTRING+' >> C:\TEMP\LOAD_FILE.LOG'

EXEC master.dbo.xp_cmdshell @log_cmd


Example 3:

-- (fragment from a larger script: @totalcommand, @RESULT, @importpath and @MESSAGE
-- are assumed to be declared earlier, and the ##BCP_LOAD table to exist)

SELECT @totalcommand='bcp ##BCP_LOAD in '+@importpath+' -c -F2 -T'

EXEC @RESULT = master.dbo.xp_cmdshell @totalcommand
IF (@RESULT <> 0)
BEGIN
SET @MESSAGE='Error loading data in temporary table. Possibly wrong path or file not found.'
GOTO error_section
END


Example 4:

DECLARE @cmd sysname, @var sysname
SET @var = 'Hello world'
SET @cmd = 'echo ' + @var + ' > var_out.txt'
EXEC master..xp_cmdshell @cmd

DECLARE @cmd sysname, @var sysname
SET @var = 'dir/p'
SET @cmd = @var + ' > dir_out.txt'
EXEC master..xp_cmdshell @cmd


Example 5:

DECLARE @FileName varchar(50),
@bcpCommand varchar(256)

SET @FileName = REPLACE('c:\authors_'+CONVERT(char(8),GETDATE(),1)+'.txt','/','-')

SET @bcpCommand = 'bcp "SELECT * FROM pubs..authors ORDER BY au_lname" queryout "'
SET @bcpCommand = @bcpCommand + @FileName + '" -U albert -P password -c'

EXEC master..xp_cmdshell @bcpCommand



1.7. QUICKLY CREATE A COPY TABLE FROM AN EXISTING TABLE (WITH ALL DATA):


Suppose you have a table X with datarows.
If you want to create a copy table, with the same columns, datatypes, and all datarows, you can issue
a statement like the example below. Note that the copy table Y should NOT exist beforehand.
Table Y will be created "on the fly".

Example:

select * into Y
from X

This statement will create table Y with the same columns, datatypes, and the same rows, from table X.
But table Y will not have similar "constraints" (like Primary Key, Foreign Keys) like table X has.
You will only have a duplicate table Y, with the same rows, as table X has.

Note:
The above SQL statement is the SQL Server equivalent of the famous Oracle CTAS statement (Create Table As Select).

Note: as an alternative:
You can easily script table "X", for example, using SSMS. Then when you have the script, change the name of the table to
another appropriate name, like "Y" (in this example we ignore any constraints in the script like a possible primary key name).

Next, create table Y (execute the script).

Next, use the SQL statement:

INSERT INTO Y
SELECT * FROM X
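
Note: if you only need the structure of table X, without any rows, a common trick is to add
a condition that can never be true:

SELECT * INTO Y
FROM X
WHERE 1=0

This creates an empty table Y, which you can fill later with "INSERT INTO Y SELECT ..." statements.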



1.8. A FEW NOTES ABOUT "SCRIPTS", AND STORED PROCEDURES AND FUNCTIONS:


If you want to run TSQL code, there are several ways to interact to SQL Server, for example:

- Starting a graphical "query window" from SSMS (the graphical SQL Server Management Studio).
- From an Operating System prompt, starting a client like "sqlcmd" or "osql".
- Other tools/clients using OLEDB or ODBC etc.. to send TSQL and receive resultsets.

Using the graphical "Query Window" is probably the most "popular" method.
If you are not so familiar with the query window, then just start SSMS and logon to SQL Server.
It should not be too difficult to create a "query window", by browsing the various buttons and menus:
In the upper left corner of SSMS, you should find a "New Query" control.

Globally speaking, you can create two "types" of plain TSQL code:

  1. You just type in TSQL code in the Query Window, and when done, execute it. Of course, you can also write that code in any editor (like Notepad, or Notepad++ etc..), save it as a file anywhere on the filesystem, and when needed, you can copy/paste it into the Query Window and run the script. In this method, you are just building scripts (sometimes also called TSQL batches).


  2. You can also write "stored procedures" and "functions". These have a sort of defined format, but the body of the code can be anything you want that procedure or function to do. This time, you always start out with "CREATE PROCEDURE procedure_name AS..." and then follows the body of your code. The big difference with plain scripts is that procedures and functions are stored in the database, and you can grant permissions on them, as well as apply other levels of control. Also, objects like procedures are pre-compiled and parsed, and generally speaking, execute fast.

Whether you write scripts, or build procedures, you have the full power of TSQL available, like:
IF .. ELSE .. statements, WHILE loops, CASE statements, and all the other fabulous logic that's built into TSQL.

1.8.1 A few examples of Scripts:

Example:

Suppose you have a database for just playing around.

Suppose you have (or create) these PRODUCTS and DAYORDERS tables:

CREATE TABLE [dbo].[Products]
(
Product_id INT NOT NULL,
Product_Name VARCHAR(20) NOT NULL,
Unit_price DECIMAL(7,2) NOT NULL,
No_In_Stock INT NOT NULL,
Must_Order CHAR(1) NOT NULL -- we use it as a boolean field: y or n
)
GO

CREATE TABLE [dbo].[DayOrders]
(
Order_id int NOT NULL,
Product_id int NOT NULL,
Quantity_Sold int NOT NULL
)
GO

Note: often, the database owner (dbo) is used as the schema that owns the tables.

Let's insert a few records for demonstration purposes:

insert into products values (1,'bicycle',200.00,5,'N')
insert into products values (2,'motoroil',5.75,7,'N')
insert into products values (3,'hammer',10.00,4,'N')
insert into products values (4,'roadmap',3.00,10,'N')

insert into DayOrders values (177,1,4)
insert into DayOrders values (178,4,20)
insert into DayOrders values (179,3,12)

Here we have only a few records per table, but let's imagine that here we are dealing with thousands of records...

The DayOrders table collects what is sold today. But, is the inventory in the products table still sufficient,
or should we order new products? Let's say that when (Quantity_Sold - No_In_Stock) > 0, then we must order
new items for that particular product. Let's create a script to determine the status:

DECLARE @ProductId INT
DECLARE @MaxProductId INT
DECLARE @PRODUCT_NAME VARCHAR(20)
DECLARE @NO_IN_STOCK INT
DECLARE @QUANTITY_SOLD INT

SET @ProductId=1
SET @MaxProductId=(SELECT MAX(Product_id) from PRODUCTS)

WHILE @ProductId <= @MaxProductId
BEGIN
  IF exists (select Product_Id from DayOrders where Product_id=@ProductId)
  BEGIN -- the IF EXISTS governs everything up to the matching END

    SET @NO_IN_STOCK=(SELECT NO_IN_STOCK FROM PRODUCTS WHERE Product_id=@ProductId)
    SET @QUANTITY_SOLD=(SELECT QUANTITY_SOLD FROM DAYORDERS WHERE Product_id=@ProductId)
    SET @PRODUCT_NAME=(SELECT PRODUCT_NAME FROM PRODUCTS WHERE Product_id=@ProductId)

    IF (@QUANTITY_SOLD-@NO_IN_STOCK) > 0
    BEGIN
      PRINT 'FOR PRODUCT '+@PRODUCT_NAME+' WE MUST ORDER '+convert(varchar(10),@QUANTITY_SOLD-@NO_IN_STOCK)+' ITEMS'
    END
  END

  SELECT @ProductId=@ProductId+1

END

The result the script produces is:

FOR PRODUCT hammer WE MUST ORDER 8 ITEMS
FOR PRODUCT roadmap WE MUST ORDER 10 ITEMS

By the way, once you have typed in, or loaded (for example via copy/paste), a script, you can "highlight" the code
with your mouse, and after that execute it using the "Execute" button in the SSMS menu.
Or you can use the Ctrl-E key combination.

Typically, a script starts with a variable declaration like "DECLARE @ProductId INT", where a variable name
starts with the "@" character. Also, you must tell SQL Server what the datatype of that variable is.

Next, in this script, we assign two variables a certain value, like "SET @ProductId=1", which sets the begin marker
of the loop we will define a few lines down.
The end marker of the loop is the maximum of the product_id, so that we can be sure to parse through all product_id's.
Next, we actually start the loop, using the "WHILE expression=true" construct.
In this case, we want the loop to cycle using "WHILE @ProductId <= @MaxProductId".

The main purpose is just to show you how such a "block of TSQL code" usually looks.

Please take notice of the logic like:

WHILE (some condition is true)
BEGIN
  Some statements
END

And also take notice of logic like:

IF (some condition is true)
BEGIN
  Some statements
END

In most programming "languages", the IF syntax is like "IF condition THEN some_statements ELSE other_statements".
But in TSQL, we do not have the "THEN". What we can use is this (the ELSE branch is optional; use it if necessary):

IF (some condition is true)
  BEGIN
    Some statements
  END
ELSE
  BEGIN
    Some other statements
  END

Also the "BEGIN" and "END" enclosures, are often just optional. You may use them to make code easier to read, but
in many constructs (like the IF), it's totally optional.

A few other examples of blocks of TSQL code:

Here are a few other examples to browse through:

-- Take a look at this block of TSQL code:

DECLARE @I INT

SET @I=1

WHILE @I<5
BEGIN
  PRINT 'My boss bumped his head '+convert(varchar(8),@I)+' times, hehe!'
SELECT @I=@I+1 -- needed to increment the "counter" of the loop
END

Output:

My boss bumped his head 1 times, hehe!
My boss bumped his head 2 times, hehe!
My boss bumped his head 3 times, hehe!
My boss bumped his head 4 times, hehe!

Question: Why do we see 4 records in the result, and not 5?


-- Take a look at this block of TSQL code:

DECLARE @EMP_NAME VARCHAR(64)

SELECT @EMP_NAME=(SELECT emp_name FROM EMPLOYEES WHERE emp_id=2)

IF @EMP_NAME='HARRY'
  PRINT 'WE FOUND HARRY'
ELSE
  PRINT 'IT IS NOT HARRY'


So depending on the emp_name where emp_id=2, we can find 'HARRY' or possibly another name.

Note:
By the way, hopefully it's clear that after you have "declared" a variable, and assigned a value
to it, then if you "select" that variable again, the value will be printed (or shown).
For example:

DECLARE @x VARCHAR(32) -- declaration
SELECT @x='abc123' -- assignment of a value
SELECT @x -- show the value

Then the 'second' SELECT will show you the value of @x. You may also use the PRINT statement to show the value.


1.8.2 A few simple examples of creating Stored Procedures:

Contrary to plain TSQL scripts (or blocks of TSQL code), a Stored Procedure is really stored in the Database!
This makes "control", in various ways, a lot better, and it can be used by any (remote) client if needed.
Because it's "inside" the database, any user or program (if permitted) can use it.

You should create a stored procedure, in general terms, like so:

CREATE PROCEDURE procedure_name [optional parameter list]
AS
YOUR STATEMENTS

A few simple examples to get the general idea:

-- Take a look at this code:

A simple procedure that gets a record from a table, based on a parameter "@cust_id"

CREATE PROCEDURE stp_GetCustomerInfo @cust_id int

AS
SELECT Cust_name, city, country
FROM CUSTOMERS
WHERE Cust_id=@cust_id

-- Take a look at this code:

CREATE Procedure dbo.stp_insertEmployee
@emp_id INT, -- a number of input variables, matching the fields of the target table
@emp_name VARCHAR(20),
@salary DECIMAL(7,2),
@hiredate DATETIME,
@gender CHAR(1)

AS
BEGIN
  INSERT INTO EMPLOYEE
  VALUES
  (@emp_id,@emp_name,@salary,@hiredate,@gender)
END

Note: If you created a stored procedure, and if you want to run it from a query tool,
use the syntax:
exec procedure_name

So, in the above examples, you would use:

exec stp_GetCustomerInfo 233

where we used for example 233 as a cust_id.

exec stp_insertEmployee 1,'harry',2000,'2011/1/1','M'

Where we want to insert a record defined with the values "1,'harry',2000,'2011/1/1','M'"

If you want to execute a stored procedure which does not have any parameter, just use:

exec procedure_name


More on Parameters:

Suppose we create these two simple procedures, which will just print their input variables.

CREATE procedure usp_proc1 @input1 varchar(32)
AS

BEGIN
PRINT @input1
END
GO

CREATE procedure usp_proc2 @input2 varchar(32)
AS

BEGIN
PRINT @input2
END
GO

Now let's create a Main routine, that calls both sp's:

CREATE procedure usp_Main
@inp1 varchar(32),
@inp2 varchar(32)
AS

BEGIN

exec usp_proc1 @inp1
exec usp_proc2 @inp2

END
GO

Now, you would for example execute usp_main like so:

exec usp_Main 'Hi', 'Reader'

which would return:

Hi
Reader

You could also create the procedure usp_Main with two input variables, which, if they are not set,
will revert to "default" values. In the example below, the defaults are @inp1='Hi' and @inp2='Reader'
(drop the first version with "DROP PROCEDURE usp_Main" before re-creating it).

CREATE procedure usp_Main
@inp1 varchar(32)='Hi',
@inp2 varchar(32)='Reader'
AS

BEGIN

exec usp_proc1 @inp1
exec usp_proc2 @inp2

END
GO

In this case, you could execute the procedure as simply as:

exec usp_Main

Of course, you can also call the procedure with the two input variables as well, thereby not
using the defaults.
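
You can also pass the parameters "by name", which is convenient if you only want to override
one of the defaults, for example:

exec usp_Main @inp2='World'

which would return:

Hi
World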


1.8.3 Table variables:

Many programming/scripting languages use the concept of arrays, or records (or whatever they are called),
as "placeholders" for multiple values.
In TSQL you can declare a variable of the Table datatype. This is really great, since this variable
looks, feels, and behaves like a real table.

Here is a simple example:

DECLARE @TABMETA TABLE
(
OWNER VARCHAR(128),
TABLE_NAME VARCHAR(128)
)

INSERT INTO @TABMETA
SELECT TABLE_SCHEMA, TABLE_NAME from INFORMATION_SCHEMA.TABLES

SELECT * FROM @TABMETA

This of course has countless applications.

This section has not even scratched "the surface" of creating TSQL code. But if you read the other
sections below, the topic will be extended somewhat.



1.9. CREATING A SIMPLE LOOP IN A SCRIPT, OR STORED PROCEDURE, USING A CURSOR:


There are several ways to create a "loop" in your scripts (or stored procedures), using TSQL constructs.
One of them is using a so-called "CURSOR".

You may view a cursor as some "virtual table", which you use in your loop, starting at the "top" of the "table",
and while cycling the loop, all the way down to the "bottom" of the virtual table, where the loop finishes.

As will be explained below, generally, a cursor is not the optimal construct if you will have large datasets.
But for "administrative-like" code, usually it's OK.

Example 1:

Take a look at the following example. Suppose I want to rebuild the indexes of all tables in a database.
As such, I want to "dynamically" generate all necessary DBCC DBREINDEX statements for all tables.
In SQL Server 2005/2008, the dictionary view "information_schema.tables", contains all table names,
as well as other information.
So, if I define (or declare) a cursor (the virtual table in memory) as:

DECLARE MyCursor CURSOR FOR
SELECT table_name FROM information_schema.tables

Then, after "opening" the cursor (declaring only, is not enough), I will have that virtual table which holds
all tablenames. This I can use for my loop.
Getting the first tablename from the cursor is done using "FETCH NEXT..".
When you start grabbing from the cursor, the "FETCH NEXT.." will actually return the first value.
Then you start the "WHILE" loop, which will only end if the @@FETCH_STATUS is not equal to "0".
So, "WHILE @@FETCH_STATUS = 0" means: we keep looping as long as the @@FETCH_STATUS remains 0.
Then, at last, when "@@FETCH_STATUS <> 0", we have taken the last value from the cursor, and the loop must stop.

Take a close look at this code:

set nocount on
DECLARE @TableName varchar(255)

DECLARE MyCursor CURSOR FOR
  SELECT table_name FROM information_schema.tables
  WHERE table_type = 'base table'

OPEN MyCursor

FETCH NEXT FROM MyCursor INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
  SELECT 'DBCC DBREINDEX('+@TableName+','+''''',90)'
  FETCH NEXT FROM MYCursor INTO @TableName
END

CLOSE MyCursor

DEALLOCATE MyCursor

You should always CLOSE and DEALLOCATE your cursor, in order to "clear" it from memory.

Example 2:

Suppose you have an EMPLOYEE table, containing for all employees, the emp_id, emp_name and emp_salary.
You need to fill a new table "EMP_BONUS", and store in that table the emp_id, and emp_bonus.
The emp_bonus is calculated as "emp_salary * 0.1".

Actually, this can easily be done using a relatively simple SQL statement, but I just want to demonstrate
how to solve this using a cursor.

So, you could use the following script:

DECLARE @EMPID INT
DECLARE @SALARY DECIMAL(7,2)
DECLARE @BONUS DECIMAL(7,2)

DECLARE MyCursor CURSOR FOR
  SELECT emp_id, emp_salary FROM EMPLOYEES

OPEN MyCursor

FETCH NEXT FROM MyCursor INTO @EMPID, @SALARY
WHILE @@FETCH_STATUS = 0

BEGIN

  SELECT @BONUS=(@SALARY * 0.1)

  INSERT INTO EMP_BONUS
  VALUES
  (@EMPID, @BONUS)

 FETCH NEXT FROM MyCursor INTO @EMPID, @SALARY

END

CLOSE MyCursor

DEALLOCATE MyCursor
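
As noted above, the same result can also be obtained with one "set-based" statement, which is generally
preferred for large tables. A minimal sketch:

INSERT INTO EMP_BONUS
SELECT emp_id, emp_salary * 0.1
FROM EMPLOYEES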

As we have seen, with the use of a cursor, you can create pretty neat loops in your code.
The cursor itself is NOT the loop. It acts like a virtual table which defines the "range" that the WHILE loop
operates on. So, if you want to pinpoint the actual loop in the code, it's of course the WHILE loop.

Notes:

(1):
There are more "types", or "variations", of cursors, than is shown in this simple document.
You can easily see that from this basic TSQL declare syntax:

DECLARE cursor_name CURSOR [ LOCAL | GLOBAL ] [ FORWARD_ONLY | SCROLL ] [ STATIC | KEYSET |
DYNAMIC | FAST_FORWARD ] [ READ_ONLY | SCROLL_LOCKS | OPTIMISTIC ] [ TYPE_WARNING ]
FOR select_statement [ FOR UPDATE [ OF column_name [ ,...n ] ] ]

See for example BOOKS ONLINE for more information.

(2):
Using a cursor to plough through really large tables might not be a good idea.
Everybody "has different ideas" of what is actually "large", but be careful.
A cursor is great for tables with a number of rows < 500000 (arbitrarily chosen),
but using tables with many millions of rows might prove to be problematic.
First, such a cursor is always very memory consuming, and if you declare a cursor using a large table
and update that same table in the script, serious locking issues may arise if users
are modifying that table as well.

Notwithstanding these warnings, a cursor is great for use in loops, if you have relatively smaller
datasets, just as in example 1, for administrative scripts for the DBA.
 


1.10. HOW TO KILL A SESSION (SPID):


This is easily done from a query tool, like the Query Window in SSMS.
If you have found the "spid", that is, the "SQL Server Process ID" (or Session ID as it is nowadays called),
that you need to terminate, then issue the command:

kill spid

where "spid" is the session id, like for example 178.
So, in this case, the command would be:

kill 178
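
Note: to see the session id of your own current session (so that you do not accidentally kill it),
you can use the @@SPID function:

SELECT @@SPID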



1.11. HOW TO FIND AN OBSTRUCTING, OR BLOCKING, SESSION:


Normally, transactions go so fast that other users will not notice the locking features of SQL Server.
Locks (for example a rowlock, or a pagelock) are just a normal mechanism for ensuring integrity in multi-user access.

But sometimes a user (or the program the user is using) will hold a lock too long, and thus might
block other users from accessing the resource.

Often "killing" the obstructing session might be the only way to let the other users continue their work.
How to kill a session was shown in section 1.10.
Finding blocking sessions was already shown in section 1.4.

But let's emulate a typical case here.

Example:

First, in your test database, create a test table "X" like:

CREATE TABLE X
(
id INT,
name VARCHAR(20))

Having just one test record in X is enough for our demo, so let's insert this one:

INSERT INTO X VALUES (1,'hammer')

Now let's create a (standard) SQL login, with permissions in your TEST database:

CREATE LOGIN mary WITH PASSWORD='welcome01', DEFAULT_DATABASE=TEST
GO
USE TEST
GO
EXEC sp_grantdbaccess 'mary', 'mary'
GO
EXEC sp_addrolemember 'db_datareader', 'mary'
GO
EXEC sp_addrolemember 'db_datawriter', 'mary'
GO

Since mary is granted the db_datareader and db_datawriter roles, she may modify table X.

Let's start a session as mary. Go to the OS prompt, and start a "sqlcmd" session:

C:\> sqlcmd -U mary -P welcome01
1> use TEST
2> GO
Changed database context to 'TEST'.
1> BEGIN TRANSACTION
2> UPDATE X
3> SET NAME='bicycle' WHERE ID=1
4> GO

(1 rows affected)

Note that mary started a TRANSACTION, but has not committed it yet. This means she holds an exclusive rowlock
on that specific record in table X.

Now you start a separate, independent Query Window in SSMS (or also use a sqlcmd session).
Try to select the table X.

SELECT * FROM x

Your query will execute, but no resultset will show. Your session is blocked by the session of mary.

Let's view the blocking lock: (query is usable in 2000/2005/2008)

SELECT spid, cpu, physical_io, blocked, cmd, waittime,
substring(convert(varchar(20),last_batch), 1, 20) as "LASTBATCH",
substring(nt_username, 1, 15) AS "USERNAME",
substring(loginame, 1, 20) AS "LOGINNAME",
substring(hostname, 1, 15) AS "HOSTNAME",
substring(program_name, 1, 40) AS "PROGRAM"
FROM master.dbo.sysprocesses

This should clearly show that your session is blocked by mary's session.

Also, let's try the following query (query is usable in 2005/2008)

SELECT
s.login_name, s.nt_user_name,
r.session_id AS BLOCKED_SESSION_ID,
r.blocking_session_id AS BLOCKING_SESSION_ID,
s.program_name,
r.start_time,r.status,r.command,database_id,
r.wait_type,r.open_transaction_count,r.percent_complete,r.cpu_time,r.reads,r.writes,r.deadlock_priority
from sys.dm_exec_sessions s, sys.dm_exec_requests r
where s.session_id=r.session_id AND blocking_session_id > 0

That query will show your blocked session id, and the offending session id.

To end the lock, you might consider killing that blocking session id, with:

kill blocking_session_id

Like for example:

kill 52



1.12. SOME OFTEN USED STANDARD FUNCTIONS:


In this section, it's handy to have a small table with a few demo records,
so, let's make the following demo table in a test database:

CREATE TABLE B
(
id INT,
name VARCHAR(100))

Now, let's insert these few test records:

INSERT INTO B VALUES (1,'123:BB:Johnson') -- note that there is a sort of "code" before the name Johnson
INSERT INTO B VALUES (2,'  Carlson') -- note that there is white space before the name Carlson
INSERT INTO B VALUES (3,'I want a lot of gold in my bathroom')

SELECT * FROM B

Resultset:

id  name
1   123:BB:Johnson
2     Carlson
3   I want a lot of gold in my bathroom

For any of the following functions, it's true that they can operate on fields of a table in a SELECT query, and
they can equally well operate on a variable.

For example:

SELECT substring(name, 1, 10) from EMPLOYEE

Here, we used the substring() function to retrieve just a part of the name field of the EMPLOYEE table in a SELECT query.

SELECT @smallvar=substring(@bigvar,10,5)

In the above statement, we assigned the "@smallvar" variable, a value that's just a part of the "@bigvar" variable.


1. The LTRIM() and RTRIM() functions:

Suppose somebody mistakenly inserted the (2,'  Carlson') row into table B. The mistake then is the white space
on the left of the name 'Carlson'. If you want to left trim, or right trim, spaces from character fields (or variables), you
can use the LTRIM and RTRIM functions.
The LTRIM() function removes leading spaces.
The RTRIM() function removes trailing spaces.

Example 1:

Just compare the resultsets from the two queries below:

SELECT name FROM B

123:BB:Johnson
  Carlson
I want a lot of gold in my bathroom

SELECT LTRIM(name) FROM B

123:BB:Johnson
Carlson
I want a lot of gold in my bathroom

Example 2:

Now let's demonstrate LTRIM and RTRIM using a piece of TSQL code:

DECLARE @varwithspaces varchar(100)
DECLARE @varwithoutspaces varchar(100)

SELECT @varwithspaces=' the_text '
SELECT @varwithoutspaces=LTRIM(RTRIM(@varwithspaces))

-- Now show both variables:

SELECT @varwithspaces
SELECT @varwithoutspaces

If you execute the above code, you should see this:

  the_text
(1 row(s) affected)

the_text
(1 row(s) affected)


2. The substring() function:

The substring() function has the following syntax:

SUBSTRING(string, startposition, length)

So you can use it if you want to "clip" a field or variable, that is, you want only a piece of the text,
starting from position "startposition" with a length of "length".

Example 1.

Take a look at the following piece of TSQL:

DECLARE @bigvar varchar(100)
DECLARE @smallvar varchar(100)

SELECT @bigvar='12345678910abcdefghi'
SELECT @smallvar=substring(@bigvar,10,5)

-- Now show both variables:

SELECT @bigvar
SELECT @smallvar

If you would execute the above code fragment, you should see this for output:

12345678910abcdefghi

(1 row(s) affected)

10abc

(1 row(s) affected)

Note how the "10abc" piece of text, is exactly the original string as of position 10 with a length of 5 characters.

Example 2.

The example below should speak for itself.


CREATE TABLE TABLONG
(
mycolumn VARCHAR(128)
)

CREATE TABLE TABSHORT
(
mycolumn VARCHAR(128)
)

INSERT INTO TABLONG VALUES ('00123-CODE:09876')
INSERT INTO TABLONG VALUES ('00123-CODE:09877')

SELECT * FROM TABLONG

mycolumn
------------------------
00123-CODE:09876
00123-CODE:09877

(2 row(s) affected)

INSERT INTO TABSHORT
SELECT SUBSTRING(mycolumn,12,5)
FROM TABLONG

Notice that the rows in table TABSHORT are the clipped data from table TABLONG.

Note: in case you did not know, it's indeed possible to INSERT rows into a table, using a SELECT from another table.

SELECT * FROM TABSHORT

mycolumn
-------------------------------
09876
09877

(2 row(s) affected)


3. The LEN() function:

This one is very easy. It returns the length of a string, or field of a table.

Example 1:

If you had created table "B" as was shown in the beginning of this section, then take a look at this:

SELECT LEN(NAME) FROM B

-----------
14
10
35

(3 row(s) affected)

Note how LEN() just returns how many characters a field or variable has. Note that leading spaces
are counted too (trailing spaces, however, are ignored by LEN()).
So, LEN() is very useful for determining the length of a field or variable.
In other sections, we will find very good use of the LEN() function, I assure you!

Example 2:

Take a look at this TSQL:

DECLARE @MyVar varchar(100)
SELECT @MyVar='This is some text'

SELECT @MyVar
SELECT LEN(@MyVar)

If you run that code, you should see:

This is some text

(1 row(s) affected)

17

(1 row(s) affected)


4. The REPLACE() function:

This is a great function to replace existing text in a field of a table (or variable)
with some other text.

Syntax:

REPLACE(field, 'string_to_be_replaced', 'replacement')

Example 1:

DECLARE @MyString VARCHAR(128)
DECLARE @MyOtherString VARCHAR(128)

SET @MyString='The great Mr. A'
SET @MyOtherString=REPLACE(@MyString,'Mr','Mrs')
SELECT @MyOtherString

------------------
The great Mrs. A

(1 row(s) affected)

Example 2:

Suppose you have this data in the INVENTORY table:

id   Article   Manufacturer
5   bicycle   ABC Corp
6   Hammer   XYZ Corp
7   MotorOil   XYZ Corp
8   Roadmap   EFG Corp
etc..

Now, suppose we need to replace the XYZ in "XYZ Corp" data, with DEVO:

UPDATE INVENTORY
SET manufacturer=REPLACE(manufacturer, 'XYZ','DEVO')

Then the manufacturer field, only where the data contained 'XYZ', will be changed to contain 'DEVO'.

id   Article   Manufacturer
5   bicycle   ABC Corp
6   Hammer   DEVO Corp
7   MotorOil   DEVO Corp
8   Roadmap   EFG Corp
etc..



5. The PATINDEX() function:

In the next section (section 1.13), we will see a great example of the usability of this function.

Syntax:
PATINDEX('%pattern%', expression)

Essentially, the PATINDEX() function works like this:
if you want to find the starting position where a certain "pattern" occurs in a column of a table,
or in any other character expression, you can use the PATINDEX('%pattern%', column) function.

An example will illustrate this.

Example 1:

In the beginning of this section, we created table "B", and here is its content:

SELECT * FROM B

Resultset:

id  name
1   123:BB:Johnson
2     Carlson
3   I want a lot of gold in my bathroom

Now suppose we want to know at which position the "pattern" "gold" starts in record no 3.
We can use this:

SELECT PATINDEX('%gold%',name)
FROM B WHERE id=3

If you run that statement, you will find that the pattern %gold% starts at position 17.
So, the PATINDEX() function returns an integer.
See the next section for a usable example.


6. The CHARINDEX() function:

This function returns the starting position of the "expression1" in a character string (expression2).
So, it works much the same as the PATINDEX() function. There are a few minor differences though:
PATINDEX() can use wildcard characters, but CHARINDEX() cannot.

Syntax: CHARINDEX ( expression1 , expression2 [ , start_location ] )

If "expression1" in "expression2" is not found, the function returns "0".

Here are a few simple examples:

SELECT CHARINDEX('/','abcd/efgh')

Will return "5".

SELECT CHARINDEX('John','This morning John went to work, but he really disliked it')

Will return "14".



1.13. REMOVING "UNWANTED" CHARACTERS FROM A FIELD OR STRING:


Suppose you have a variable containing "unwanted" characters like "\" or "%" etc.,
and you need to "clean" it, so that only ordinary characters like a-z, A-Z, and 0-9 remain.
Then the following script might be handy.
To appreciate what the script does, copy/paste it into a Query Window and execute it.

DECLARE @s varchar(100)
DECLARE @i INT

SELECT @s = 'asd i/.,<>as>[{}]vnbv' -- assign a value to variable @s, so before cleaning
SELECT @s -- show @s as it is now

SELECT @i = patindex('%[^a-zA-Z0-9 ]%', @s)
  while @i > 0
  BEGIN
    SELECT @s = replace(@s, substring(@s, @i, 1), '')
    SELECT @i = patindex('%[^a-zA-Z0-9 ]%', @s)
  END

SELECT @s -- show @s after "cleaning"


Then these would be the values of @s before and after the cleaning:

asd i/.,<>as>[{}]vnbv
asd iasvnbv

Of course, we can apply that to fields of a table as well.
Just take a look at this example:


CREATE TABLE Z
(
id INT,
name VARCHAR(20))

Having just one test record in Z is quite enough for our demo, so let's insert this one:

INSERT INTO Z VALUES (1,'abc/vv>()&%aaa')

SELECT * FROM Z

Resultset:

id name
1 abc/vv>()&%aaa

Now let's run the following script:

DECLARE @s varchar(100)
DECLARE @i INT

SELECT @s = (SELECT NAME FROM Z WHERE ID=1) -- assign a value to variable @s
SELECT @s -- show @s as it is now

SELECT @i = patindex('%[^a-zA-Z0-9 ]%', @s)
  while @i > 0
  BEGIN
    SELECT @s = replace(@s, substring(@s, @i, 1), '')
    SELECT @i = patindex('%[^a-zA-Z0-9 ]%', @s)
  END

SELECT @s -- show @s after "cleaning"

The two "SELECT @s" in the above script, will produce:

abc/vv>()&%aaa -- the first select @s
abcvvaaa -- the second select @s

So, we succeeded in removing the unwanted characters. The main mechanism that makes this work is the
PATINDEX() function, which we saw (among other functions) in the previous section.



1.14. REMOVING "UNWANTED" QUOTES FROM A FIELD OR STRING:


Suppose you have a table with rows like:

name city
'Harry' 'Boston'
'Miriam' 'Seattle'
etc..

where all fields are enclosed by ''.
To create a new table without these quotes around the data, you can use the example below.
Maybe you can create a more "elegant" or more "efficient" piece of TSQL code.
Actually, the script below is not really robust enough to run on any table of value.
It mainly demonstrates the use of the LEN() function, as discussed in the previous section.

Example:

Let's create the table (TAB1) first and insert some rows with quotes around the data.
Then we create a table (TAB2) with the same structure as TAB1, which will hold the "cleaned" rows after the script has run.


CREATE TABLE TAB1 -- name the table TAB1 if you want to follow below script
(
name varchar(32),
city varchar(32)
)


CREATE TABLE TAB2 -- name the table TAB2 if you want to follow below script
(
name varchar(32),
city varchar(32)
)

INSERT INTO TAB1 VALUES ('''Harry''', '''Boston''')
INSERT INTO TAB1 VALUES ('''Miriam''', '''Seattle''')

Note the use of '''text''' in order to insert text data surrounded by quotes into the table.

Let's see what's in table TAB1:

name   city
'Harry'   'Boston'
'Miriam'   'Seattle'

Now, let's fill table "tab2" using the following script.

DECLARE @name varchar(32)
DECLARE @city varchar(32)
DECLARE @length_name INT
DECLARE @length_city INT
DECLARE @name2 varchar(32)
DECLARE @city2 varchar(32)

DECLARE cur1 CURSOR FOR
SELECT name, city FROM tab1

OPEN cur1
FETCH NEXT FROM cur1 INTO @name, @city

WHILE (@@fetch_status<>-1)
BEGIN

SELECT @length_name =LEN(@name)
SELECT @length_city =LEN(@city)

SELECT @name2 =substring(@name,2,@length_name-2)
SELECT @city2 =substring(@city,2,@length_city-2)

INSERT INTO TAB2
values
(@name2,@city2)

FETCH NEXT FROM cur1 INTO @name, @city

END

CLOSE cur1
DEALLOCATE cur1
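
Let's see what's now in TAB2. With the two rows inserted above, you should see:

SELECT * FROM TAB2

name   city
Harry   Boston
Miriam   Seattle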

Note:

Since we are busy with quotes anyway: suppose you need to assign a value to a (string) variable,
where the value itself includes a quote. You can do it like so:

DECLARE @VAR1 VARCHAR(64)
SET @VAR1='Appie''s'
SELECT @var1 -- this statement shows the value of @VAR1

You should see the result:

Appie's



1.15. SIMPLE OVERVIEW MOST IMPORTANT SQL SERVER DATATYPES:


SQL Server uses "strong typing". In this case it means that suppose a column of a table is of datatype "datetime",
then you cannot insert just any number into that field. Likewise, if a column is of datatype "integer", you cannot
just put a number with decimals into that field.
Sometimes you must be able to "convert" a variable or field of a certain datatype, into another datatype.

SQL Server allows you to use a large number of different datatypes.
We all know the "administrative like" datatypes like integer, decimal(n,m), datetime, varchar(n) and char(n),
which are often used in many administrative databases (administration, bookkeeping, orderentry, logistics etc..).
But there are lots more, like the XML datatype, which is native to store XML, or varbinary which you can use
to store any binary (like docs etc..)

Here are the most often used datatypes, with a short description:

________________________________________________________________________________________________
character based data:
________________________________________________________________________________________________
char(length): A fixed length character datatype. For example, "char(4)" will always reserve 4 bytes of space.
________________________________________________________________________________________________
nchar(length): just like char, but this time UNICODE
________________________________________________________________________________________________
varchar(length): A variable length character datatype. It will only store what's needed, up to the specified length.
________________________________________________________________________________________________
nvarchar(length): just like varchar, but this time UNICODE
________________________________________________________________________________________________
text: if you need to store a lot of text. Max 2^31-1 (about 2G) characters.
________________________________________________________________________________________________
ntext: just like text, but this time UNICODE. Max 2^30-1 (about 1G) characters.
________________________________________________________________________________________________
numeric based data:
________________________________________________________________________________________________
decimal(p,s): decimal notation/type, with a precision of p digits in total, of which s digits after the "," or "." separator.
________________________________________________________________________________________________
numeric(p,s): same as decimal(p,s)
________________________________________________________________________________________________
float(n): can be used for large floating point numbers
________________________________________________________________________________________________
bigint: whole numbers=integers: -2^63 to 2^63-1; uses 8 bytes storage
________________________________________________________________________________________________
int: whole numbers=integers: -2^31 to 2^31-1; uses 4 bytes storage
________________________________________________________________________________________________
smallint: whole number=integers: -32768 to 32767; uses 2 bytes storage
________________________________________________________________________________________________
tinyint: whole numbers=integers 0 to 255; uses 1 byte storage
________________________________________________________________________________________________
date/time datatype:
________________________________________________________________________________________________
datetime: stores date + time (accuracy 0.00333 s); 8 bytes; 01/01/1753 - 31/12/9999
________________________________________________________________________________________________
datetime2: stores date + time (accuracy 100 ns); 6 to 8 bytes; 01/01/0001 - 31/12/9999
________________________________________________________________________________________________
smalldatetime: like above, but a "smaller" variant; 4 bytes; 01/01/1900 - 06/06/2079
________________________________________________________________________________________________
date: stores just plain dates like "12/12/2010"; 3 bytes
________________________________________________________________________________________________
time: stores plain time, with accuracy of 100 ns; 3 to 5 bytes
________________________________________________________________________________________________
binary datatypes:
________________________________________________________________________________________________
image: e.g.: to store pdf, word files, images, or other binary data; up to 2G
________________________________________________________________________________________________
varbinary(max): e.g.: to store pdf, word files, images, or other binary data; up to 2G
________________________________________________________________________________________________
binary: Fixed-length binary data with a length of n bytes, where n from 1 through 8,000. storage is n bytes.
________________________________________________________________________________________________
XML datatype:
________________________________________________________________________________________________
XML: "native" datatype to store XML documents or fragments
________________________________________________________________________________________________
Spatial Datatype:
________________________________________________________________________________________________
geometry: supports geometric data (points, lines, and polygons)
________________________________________________________________________________________________
geography: a round-earth coordinate system. This type represents geographic objects on an area on the Earth's surface
________________________________________________________________________________________________
uniqueidentifier:
________________________________________________________________________________________________
uniqueidentifier: a Globally Unique Identifier (GUID) of 16 bytes, used to uniquely identify a row across tables, databases, and servers.
________________________________________________________________________________________________


Choose your datatype carefully:

If you choose "too wide" datatypes, you fill a database block "too quickly". This means that a smaller number of rows
fits in a table page.
This cost performance, because the database needs to read more pages, to get the same information.
For example, suppose some table has a "COMMENT" column. Think of the consequence if some developer choose a
a datatype of "char(2000)", meaning a fixed column lenght of 2000 bytes.
Now if the comment is at most just a few words, then that's a true waste.

As another example: If you are sure you can use a smallint, then don't take a bigint.
Carefully choosing datatypes, not only saves space, but will also lower computing power as well.
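
To see the difference for yourself, compare a char(2000) and a varchar(2000) holding the same short text.
The DATALENGTH() function returns the number of bytes actually stored:

DECLARE @fixed char(2000)
DECLARE @variable varchar(2000)

SET @fixed='just a few words'
SET @variable='just a few words'

SELECT DATALENGTH(@fixed) -- returns 2000: char() always occupies the full declared length
SELECT DATALENGTH(@variable) -- returns 16: varchar() only stores what is needed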



1.16. THE CAST AND CONVERT CONVERSION FUNCTIONS:


Sometimes you must be able to "convert" a variable, or field, of a certain datatype into another datatype.
Of course, some datatypes are very similar (like char and varchar). For those, SQL Server will use "implicit conversion"
at certain events.
For example, if you have a table with char columns, and you insert those rows to another (similar) table with varchar columns,
then that will work (provided that the varchar columns are not too small to hold the char values).
But at other times, you really need to "convert" values from one type to another.

For this, the CAST() and CONVERT() conversion functions can be used. CAST() and CONVERT() are almost equal in functionality.
You can say that CONVERT() offers somewhat more options to pick from.
We are going to illustrate the use of CONVERT and CAST with a couple of examples.

The syntax for CAST() is:
CAST ( expression AS data_type [ (length ) ])

The syntax for CONVERT is:
CONVERT ( data_type [ ( length ) ] , expression [ , style ] )

The syntax might look "difficult", which it is not.
The "expression" is the variable, or field, that you want to convert.
So let's take a look at a few examples:

Example 1

In this example, we will use one of SQL Server's "internal" functions. The function GETDATE() simply returns the current date/time,
and we can use it to assign a value to a variable of datatype "datetime".


DECLARE @orderdate DATETIME
DECLARE @orderdate_as_char VARCHAR(30)

SELECT @orderdate=GETDATE() -- now @orderdate will be assigned a value

SELECT @orderdate_as_char=CAST(@orderdate AS varchar(30))

-- Let's print the values of the variables:

SELECT @orderdate
SELECT @orderdate_as_char

If you run the above code, you will see something like the following output.
Your specific output may differ somewhat, depending on the language settings of your session.

2011-05-07 16:28:37.630

(1 row(s) affected)

May 7 2011 4:28PM

(1 row(s) affected)

Exercise: run the same code, but this time use "CAST(@orderdate AS varchar(10))", so now we convert
to a varchar of length 10. What do you see?

Notice that the syntax of CAST (as shown above), actually is no more than:
CAST ( variable_or_field AS new_data_type [ (length ) ])

Example 2

DECLARE @l_rowcount INT
SET @l_rowcount=30 -- SET can also be used to assign a value to a variable

PRINT 'NUMBER OF ROWS PROCESSED: '+@l_rowcount

If we run that code, we get:

Msg 245, Level 16, State 1, Line 4
Conversion failed when converting the varchar value 'NUMBER OF ROWS PROCESSED: ' to data type int.

So, the PRINT statement, which prints to your screen, expects the parts of a concatenation (using "+") to be of the same type.
That is,

PRINT 'some string'+@string_variable

is OK.
So let's change the third line in above code into:

PRINT 'NUMBER OF ROWS PROCESSED: '+CAST(@l_rowcount AS varchar)

and now all is well!

Example 3

The CONVERT() function works much the same as CAST(). But, as the two syntax definitions above show,
CONVERT() additionally accepts an optional "style" argument, which determines the formatting.
Let's take a look at an example.

DECLARE @orderdate DATETIME
DECLARE @orderdate_as_char VARCHAR(30)

SELECT @orderdate=GETDATE() -- now @orderdate will be assigned a value, which is the current date/time

-- Now let's try different "styles" and see what we get

SELECT CONVERT(varchar(30), @orderdate) -- no style selected
SELECT CONVERT(varchar(30), @orderdate, 113)
SELECT CONVERT(varchar(30), @orderdate, 111)
SELECT CONVERT(varchar(30), @orderdate, 102)
SELECT CONVERT(varchar(30), @orderdate, 20)
SELECT CONVERT(varchar(30), @orderdate, 10)
SELECT CONVERT(varchar(10), @orderdate, 20)

Try to run the above code. You should see various representations.
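
For instance, for the date/time of Example 1 (May 7 2011, 16:28:37.630), the styles above should produce roughly
the following (the exact output of the style-less CONVERT and of style 113 also depends on your language settings):

May  7 2011  4:28PM -- no style (default)
07 May 2011 16:28:37:630 -- style 113
2011/05/07 -- style 111
2011.05.07 -- style 102
2011-05-07 16:28:37 -- style 20
05-07-11 -- style 10
2011-05-07 -- style 20, of which the varchar(10) keeps only the date part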



1.17. A FEW WORDS ON GRANTING ROLES AND PERMISSIONS TO LOGINS:


This is not a section where you get a formal presentation on how to deal with security and permissions in SQL Server.
SQL Server permissions can be granted on server, database, schema, and database object level.

Instead, here only a quick practical intro is presented on how to work with roles
and how you can grant permissions to users and roles. Also we will show how to script the memberlist of roles.

There are occasions where working with the graphical SSMS is really easy, like granting a role to a login.
But suppose you want to grant a certain role, or a permission (e.g. on a table), to a lot of logins.
Then scripting is the way to go (this is not to say that the graphical studio does not provide you with lots of tools as well).

There are two "main" ways here:

  1. You can grant a "role" to a login. The role is comparable to a group, with a predefined set of permissions. Using TSQL, you will use the specialized system stored procedures to add a user to a role.
  2. You can grant a certain "permission" to a login (like select, or execute), on a certain object (like a table, stored procedure). Using TSQL, you will use the GRANT statement.


In SQL Server documentation, the "talk" is like this:

You GRANT a role or permission TO a "principal". That's just a fancy name for anything
that can access the Server. So, the most common principal we can think of is a login.

You GRANT a role or permission ON a "securable", which is an object
like a Table or View.

Only one thing is very important here:

- You grant a permission (like SELECT), TO a login, ON an object (like a Table), using the GRANT statement.

So, if you are already at "home" with for eaxmple Oracle, GRANTING a permission works exactly the same
in SQL Server, like for example:

GRANT SELECT ON hr.EMPLOYEES TO harry

- Granting a user a role (or adding a user to a role) has to be done using a system stored procedure.

But, in most databases, like Oracle, you use the GRANT statement as well when you grant a role.
In SQL Server it is a bit different: here you need to use a few specific system stored procedures,
like for example:

EXEC sp_addrolemember 'db_datareader', 'mary'


Let's take a look at roles first.
1.17.1 Roles:

There are actually three types of roles:

- database roles: every database has a number of roles with predefined permissions.
- server roles: every server/instance has one set of server roles with predefined permissions.
- application roles: enables an application to run with its own, user-like privileges.

- Database roles:

Every database under your Instance, has the same set of database roles.
Each role has a certain predefined set of permissions.
For example, the "db_datareader" role has the permission to SELECT any table of the database.
You probably agree that's not too hard to derive from the "name" of the role, what it's purpose is.
The following database roles are present in any database:

________________________________________________________________________________________________
db_datareader: members can select from any user table
________________________________________________________________________________________________
db_datawriter: members can insert, update, and delete data in any user table
________________________________________________________________________________________________
db_accessadmin: members can control access to the database
________________________________________________________________________________________________
db_backupoperator: members can backup the database
________________________________________________________________________________________________
db_ddladmin: members can issue ddl (create and modify objects)
________________________________________________________________________________________________
db_denydatareader: members cannot read any data in the user tables within a database
________________________________________________________________________________________________
db_denydatawriter: members cannot modify any data in the user tables within a database
________________________________________________________________________________________________
db_owner: members have all permissions
________________________________________________________________________________________________
db_securityadmin: members control permissions on securables
________________________________________________________________________________________________
public: role which all users are a member of. Should have the least possible permissions
________________________________________________________________________________________________

- To assign a database role to a user (grant a role), use the "sp_addrolemember" stored procedure, like for example:

USE Your_database_name
GO

EXEC sp_addrolemember 'db_datareader', 'mary'
GO

- To remove a user from a database role, use:

USE Your_database_name
GO

EXEC sp_droprolemember 'db_datareader', 'mary'

It is possible to create a database role, if you feel that the existing roles do not conform to your
security policies. You can create a database role, then grant the necessary permissions to it, and
grant the role to users.
You can create a database role like so:

USE Your_database_name
GO

CREATE ROLE role_name
GO

- Server roles:

These are "server wide" roles. So there is only one set of "server roles" per Instance.
Regular database users should not be member of these roles. They are usually meant for users
with dba and admin duties.

The following Server roles are present for your Server:

________________________________________________________________________________________________
bulkadmin: Granted: ADMINISTER BULK OPERATIONS
________________________________________________________________________________________________
dbcreator: Granted: CREATE DATABASE
________________________________________________________________________________________________
diskadmin: Granted: ALTER RESOURCES
________________________________________________________________________________________________
processadmin: Granted: ALTER ANY CONNECTION, ALTER SERVER STATE
________________________________________________________________________________________________
securityadmin: Granted: ALTER ANY LOGIN
________________________________________________________________________________________________
serveradmin: Granted: ALTER ANY ENDPOINT, ALTER RESOURCES, ALTER SERVER STATE, ALTER SETTINGS, SHUTDOWN, VIEW SERVER STATE
________________________________________________________________________________________________
setupadmin: Granted: ALTER ANY LINKED SERVER
________________________________________________________________________________________________
sysadmin: (has full power) Granted with GRANT option: CONTROL SERVER
________________________________________________________________________________________________

- To assign a Server role to a user, use the "sp_addsrvrolemember" stored procedure, like for example:

EXEC sp_addsrvrolemember 'mary', 'sysadmin'
GO

- To remove a user from a Server role, use the "sp_dropsrvrolemember" stored procedure, like for example:

EXEC sp_dropsrvrolemember 'mary', 'sysadmin'


1.17.2 Granting Permissions:

The database roles 'db_datareader" and "db_datawriter", might have a too wide range of permissions.

Instead, you can GRANT permissions to a user on a selection of tables and other objects.
What is even better, is to create a role, grant the necessary permissions to that role, and as
the last step, grant the role to users (a complete sketch follows after the GRANT examples below).
This is better, since you then do not grant permissions to a (possibly large) number of individual users.
You only need to add and remove users to and from that role, as necessary.

Anyway, you need to know how to grant individual permissions on objects (securables) to principals (users, roles), using the GRANT statement.

There are very many classes on which the GRANT statement can operate.
The most important ones are:

DML Data Manipulation, or "Object Level Rights": granting select, insert, update, delete, and execute permissions.
DDL Data Definition: granting CREATE object and DROP object permissions.

Here are a few representative examples:

GRANT SELECT on CONTACTS to harry

GRANT SELECT, INSERT on EMPLOYEE to harry

GRANT SELECT, INSERT, UPDATE on ORDERS to SALESUSERS -- a database role you created

GRANT EXECUTE on stp_InsertEmployee to SALESUSERS
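
Putting it together: the role-based approach described above then looks like this (SALESUSERS, the ORDERS table,
and user 'mary' are just example names):

USE Your_database_name
GO

CREATE ROLE SALESUSERS
GO

GRANT SELECT, INSERT, UPDATE ON ORDERS TO SALESUSERS
GO

EXEC sp_addrolemember 'SALESUSERS', 'mary'
GO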

For stored procedures, the following additional remarks are important:

USE SALES;
GRANT EXECUTE ON OBJECT::HumanResources.uspUpdateEmployeeHireInfo
TO username;
GO

GRANT EXECUTE ON dbo.procname TO username;

GRANT VIEW SERVER STATE TO username;

IMPORTANT:

For a monitoring account (like the SCOM account, see section 1.36), server-wide grants like these are typically used:

grant view server state to [Domain\opsmgr-sql]
go
grant view any definition to [Domain\opsmgr-sql]
go

GRANT VIEW DEFINITION ON [dbo].[usp_GetUserDetails] TO [username]

To test the permissions of another user, you can temporarily impersonate that user:

EXECUTE AS user = 'special_user'

EXECUTE YourProcedure

REVERT

-- Use the following if the sp creates objects....

CREATE PROCEDURE dbo.usp_Demo2
WITH EXECUTE AS OWNER
AS
IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].table_2') AND type in (N'U'))
CREATE TABLE table_2 (id int, data nchar(10))
INSERT INTO table_2
SELECT top 5 * from dbo.table_1;
GO

GRANT EXEC ON dbo.usp_Demo2 TO test;
GO

This works!

END IMPORTANT:

DENY statement is possible too, like in:

DENY INSERT,UPDATE,DELETE,TAKE OWNERSHIP ON EMPLOYEE To harry

DENY INSERT,UPDATE,DELETE,TAKE OWNERSHIP ON ORDERS To harry



1.18. A FEW WORDS ON SHOWING PERMISSIONS USING TSQL:


1.18.1 Getting a list of Server role members:

Most critical for viewing permissions is to see which accounts are members of which Server roles.
For example, the "sysadmin" Server role has no limitations whatsoever, so you must be extra careful
about who is a member of it.

SELECT
rolename = rolep.name,
membername = memp.name
FROM
sys.server_role_members rm
JOIN sys.server_principals rolep ON rm.role_principal_id = rolep.principal_id
JOIN sys.server_principals memp ON rm.member_principal_id = memp.principal_id


1.18.2 Getting a list of Domain Group members:

If you want to see which members exist in a Domain Group which is added as a "login" in SQL Server,
then the "xp_logininfo" stored procedure can be used. It takes a number of parameters, which you can easily look up.
This can be handy if you do not have AD access, while you still want to see which members exist in a certain AD group
that was added in SQL Server.

For example:

exec xp_logininfo 'MyDomain\MyGroup', 'members'


1.18.3 Getting a list of Database role members:

For a specific database:
Per database, you can use the "sp_helpuser" stored procedure to see which database users
are members of which database role(s).

e.g.:

USE database_name
GO

exec sp_helpuser

You can also use:

USE database_name
GO

select user_name(role_principal_id) AS ROLE, user_name(member_principal_id) AS USERNAME
from sys.database_role_members

For all databases in one run:
The following script generates statements, which you can run to view all database role members
over all databases.

DECLARE @dbname VARCHAR(64)

DECLARE cur1 CURSOR FOR
SELECT name FROM sys.databases
where name not in ('master', 'msdb','model','tempdb')

OPEN cur1
FETCH NEXT FROM cur1 INTO @dbname

WHILE (@@fetch_status<>-1)
BEGIN
PRINT '--FOR DATABASE: '+@dbname
PRINT 'USE ['+@dbname+']'
PRINT 'GO'
PRINT 'PRINT '+''''+@dbname+''''
PRINT 'GO'
PRINT 'exec (''sp_helpuser'')'
PRINT 'GO'
PRINT ' '

FETCH NEXT FROM cur1 INTO @dbname
END

CLOSE cur1
DEALLOCATE cur1


1.18.4 Getting other listings:

Let's see if we can build scripts that will create such listings.

Before we do so, I'd like to draw your attention to a specific system view, namely "sys.all_objects".
This view indeed has information on almost all objects, so on other "system views" as well!
The columns in this view which are of most interest are "name" and "type_desc".
So, if you want to know which dictionary system view has information on "principals", or "tables" etc..,
you might try queries like:

select name, type_desc from sys.all_objects where name like '%princip%'

select name, type_desc from sys.all_objects where name like '%role%'

select name, type_desc from sys.all_objects where name like '%permis%'

select name, type_desc from sys.all_objects where name like '%protect%'

select name, type_desc from sys.all_objects where name like '%user%'
etc..

script 1: Viewing permissions on Tables in a database. Usable for 2005/2008/2012

USE database_name
GO

select * from INFORMATION_SCHEMA.TABLE_PRIVILEGES

script 2: Viewing permissions. usable for 2005/2008/2012

USE database_name
GO

select sys.schemas.name 'Schema'
, sys.objects.name Object
, sys.database_principals.name username
, sys.database_permissions.type permissions_type
, sys.database_permissions.permission_name
, sys.database_permissions.state permission_state
, sys.database_permissions.state_desc
, state_desc + ' ' + permission_name + ' on ['+ sys.schemas.name + '].[' + sys.objects.name + '] to [' + sys.database_principals.name + ']' COLLATE LATIN1_General_CI_AS
from sys.database_permissions
join sys.objects on sys.database_permissions.major_id = sys.objects.object_id
join sys.schemas on sys.objects.schema_id = sys.schemas.schema_id
join sys.database_principals on sys.database_permissions.grantee_principal_id = sys.database_principals.principal_id
order by 1, 2, 3, 5

script 3: Viewing permissions. usable for 2005/2008/2012 (sometimes this fails)

SELECT CASE WHEN P.state_desc = 'GRANT_WITH_GRANT_OPTION' THEN 'GRANT' ELSE P.state_desc END AS cmd_state,
P.permission_name,
'ON '+ CASE P.class_desc
WHEN 'DATABASE' THEN 'DATABASE::'+QUOTENAME(DB_NAME())
WHEN 'SCHEMA' THEN 'SCHEMA::'+QUOTENAME(S.name)
WHEN 'OBJECT_OR_COLUMN' THEN 'OBJECT::'+QUOTENAME(OS.name)+'.'+QUOTENAME(O.name)+
CASE WHEN P.minor_id <> 0 THEN '('+QUOTENAME(C.name)+')' ELSE '' END
WHEN 'DATABASE_PRINCIPAL' THEN
CASE PR.type_desc
WHEN 'SQL_USER' THEN 'USER'
WHEN 'DATABASE_ROLE' THEN 'ROLE'
WHEN 'APPLICATION_ROLE' THEN 'APPLICATION ROLE'
END +'::'+QUOTENAME(PR.name)
WHEN 'ASSEMBLY' THEN 'ASSEMBLY::'+QUOTENAME(A.name)
WHEN 'TYPE' THEN 'TYPE::'+QUOTENAME(TS.name)+'.'+QUOTENAME(T.name)
WHEN 'XML_SCHEMA_COLLECTION' THEN 'XML SCHEMA COLLECTION::'+QUOTENAME(XSS.name)+'.'+QUOTENAME(XSC.name)
WHEN 'SERVICE_CONTRACT' THEN 'CONTRACT::'+QUOTENAME(SC.name)
WHEN 'MESSAGE_TYPE' THEN 'MESSAGE TYPE::'+QUOTENAME(SMT.name)
WHEN 'REMOTE_SERVICE_BINDING' THEN 'REMOTE SERVICE BINDING::'+QUOTENAME(RSB.name)
WHEN 'ROUTE' THEN 'ROUTE::'+QUOTENAME(R.name)
WHEN 'SERVICE' THEN 'SERVICE::'+QUOTENAME(SBS.name)
WHEN 'FULLTEXT_CATALOG' THEN 'FULLTEXT CATALOG::'+QUOTENAME(FC.name)
WHEN 'FULLTEXT_STOPLIST' THEN 'FULLTEXT STOPLIST::'+QUOTENAME(FS.name)
WHEN 'SEARCH_PROPERTY_LIST' THEN 'SEARCH PROPERTY LIST::'+QUOTENAME(RSPL.name)
WHEN 'SYMMETRIC_KEYS' THEN 'SYMMETRIC KEY::'+QUOTENAME(SK.name)
WHEN 'CERTIFICATE' THEN 'CERTIFICATE::'+QUOTENAME(CER.name)
WHEN 'ASYMMETRIC_KEY' THEN 'ASYMMETRIC KEY::'+QUOTENAME(AK.name)
END COLLATE Latin1_General_100_BIN AS securable,
'TO '+QUOTENAME(DP.name) AS grantee,
CASE WHEN P.state_desc = 'GRANT_WITH_GRANT_OPTION' THEN 'WITH GRANT OPTION' ELSE '' END AS grant_option,
'AS '+QUOTENAME(G.name) AS grantor
FROM sys.database_permissions AS P
LEFT JOIN sys.schemas AS S
ON P.major_id = S.schema_id
LEFT JOIN sys.all_objects AS O
JOIN sys.schemas AS OS
ON O.schema_id = OS.schema_id
ON P.major_id = O.object_id
LEFT JOIN sys.types AS T
JOIN sys.schemas AS TS
ON T.schema_id = TS.schema_id
ON P.major_id = T.user_type_id
LEFT JOIN sys.xml_schema_collections AS XSC
JOIN sys.schemas AS XSS
ON XSC.schema_id = XSS.schema_id
ON P.major_id = XSC.xml_collection_id
LEFT JOIN sys.columns AS C
ON O.object_id = C.object_id
AND P.minor_id = C.column_id
LEFT JOIN sys.database_principals AS PR
ON P.major_id = PR.principal_id
LEFT JOIN sys.assemblies AS A
ON P.major_id = A.assembly_id
LEFT JOIN sys.service_contracts AS SC
ON P.major_id = SC.service_contract_id
LEFT JOIN sys.service_message_types AS SMT
ON P.major_id = SMT.message_type_id
LEFT JOIN sys.remote_service_bindings AS RSB
ON P.major_id = RSB.remote_service_binding_id
LEFT JOIN sys.services AS SBS
ON P.major_id = SBS.service_id
LEFT JOIN sys.routes AS R
ON P.major_id = R.route_id
LEFT JOIN sys.fulltext_catalogs AS FC
ON P.major_id = FC.fulltext_catalog_id
LEFT JOIN sys.fulltext_stoplists AS FS
ON P.major_id = FS.stoplist_id
LEFT JOIN sys.registered_search_property_lists AS RSPL
ON P.major_id = RSPL.property_list_id
LEFT JOIN sys.asymmetric_keys AS AK
ON P.major_id = AK.asymmetric_key_id
LEFT JOIN sys.certificates AS CER
ON P.major_id = CER.certificate_id
LEFT JOIN sys.symmetric_keys AS SK
ON P.major_id = SK.symmetric_key_id
JOIN sys.database_principals AS DP
ON P.grantee_principal_id = DP.principal_id
JOIN sys.database_principals AS G
ON P.grantor_principal_id = G.principal_id
WHERE P.grantee_principal_id IN (USER_ID('TestUser1')); -- or add more users in "IN" clause.

Please change "TestUser1' in the statement above, in the actual database user. You can add more users as well.
The above statement will generate "GRANT" statements for the particular user.

script 4: usable for 2000/2005/2008/2012

SELECT
[UserName] = CASE princ.[type]
WHEN 'S' THEN princ.[name]
WHEN 'U' THEN ulogin.[name] COLLATE Latin1_General_CI_AI
END,
[UserType] = CASE princ.[type]
WHEN 'S' THEN 'SQL User'
WHEN 'U' THEN 'Windows User'
END,
[DatabaseUserName] = princ.[name],
[Role] = null,
[PermissionType] = perm.[permission_name],
[PermissionState] = perm.[state_desc],
[ObjectType] = obj.type_desc,--perm.[class_desc],
[ObjectName] = OBJECT_NAME(perm.major_id),
[ColumnName] = col.[name]
FROM
--database user
sys.database_principals princ
LEFT JOIN
--Login accounts
sys.login_token ulogin on princ.[sid] = ulogin.[sid]
LEFT JOIN
--Permissions
sys.database_permissions perm ON perm.[grantee_principal_id] = princ.[principal_id]
LEFT JOIN
--Table columns
sys.columns col ON col.[object_id] = perm.major_id
AND col.[column_id] = perm.[minor_id]
LEFT JOIN
sys.objects obj ON perm.[major_id] = obj.[object_id]
WHERE
princ.[type] in ('S','U')
UNION
--List all access provisioned to a sql user or windows user/group through a database or application role
SELECT
[UserName] = CASE memberprinc.[type]
WHEN 'S' THEN memberprinc.[name]
WHEN 'U' THEN ulogin.[name] COLLATE Latin1_General_CI_AI
END,
[UserType] = CASE memberprinc.[type]
WHEN 'S' THEN 'SQL User'
WHEN 'U' THEN 'Windows User'
END,
[DatabaseUserName] = memberprinc.[name],
[Role] = roleprinc.[name],
[PermissionType] = perm.[permission_name],
[PermissionState] = perm.[state_desc],
[ObjectType] = obj.type_desc,--perm.[class_desc],
[ObjectName] = OBJECT_NAME(perm.major_id),
[ColumnName] = col.[name]
FROM
--Role/member associations
sys.database_role_members members
JOIN
--Roles
sys.database_principals roleprinc ON roleprinc.[principal_id] = members.[role_principal_id]
JOIN
--Role members (database users)
sys.database_principals memberprinc ON memberprinc.[principal_id] = members.[member_principal_id]
LEFT JOIN
--Login accounts
sys.login_token ulogin on memberprinc.[sid] = ulogin.[sid]
LEFT JOIN
--Permissions
sys.database_permissions perm ON perm.[grantee_principal_id] = roleprinc.[principal_id]
LEFT JOIN
--Table columns
sys.columns col on col.[object_id] = perm.major_id
AND col.[column_id] = perm.[minor_id]
LEFT JOIN
sys.objects obj ON perm.[major_id] = obj.[object_id]
UNION
--List all access provisioned to the public role, which everyone gets by default
SELECT
[UserName] = '{All Users}',
[UserType] = '{All Users}',
[DatabaseUserName] = '{All Users}',
[Role] = roleprinc.[name],
[PermissionType] = perm.[permission_name],
[PermissionState] = perm.[state_desc],
[ObjectType] = obj.type_desc,--perm.[class_desc],
[ObjectName] = OBJECT_NAME(perm.major_id),
[ColumnName] = col.[name]
FROM
--Roles
sys.database_principals roleprinc
LEFT JOIN
--Role permissions
sys.database_permissions perm ON perm.[grantee_principal_id] = roleprinc.[principal_id]
LEFT JOIN
--Table columns
sys.columns col on col.[object_id] = perm.major_id
AND col.[column_id] = perm.[minor_id]
JOIN
--All objects
sys.objects obj ON obj.[object_id] = perm.[major_id]
WHERE
--Only roles
roleprinc.[type] = 'R' AND
--Only public role
roleprinc.[name] = 'public' AND
--Only objects of ours, not the MS objects
obj.is_ms_shipped = 0
ORDER BY
[UserName],
[ObjectName],
[ColumnName],
[PermissionType],
[PermissionState],
[ObjectType]


1.18.5 Some special functions:

The fn_my_permissions() function

The function below applies to SQL Server 2005/2008/2012.

-- Server Level:
select * from fn_my_permissions(NULL, NULL)

-- Database Level:
select * from fn_my_permissions(NULL, 'database')

-- Object level, like the object dbo.EMPLOYEE:
select * from fn_my_permissions('dbo.EMPLOYEE', 'object')

Not only can a list of possible permissions be shown, but you can also let the function apply to a specific user:

execute ('select * from fn_my_permissions(NULL, ''database'')') AS USER = 'harry'

The above statement should show you what Database Level permissions harry has.

execute ('select * from fn_my_permissions(''dbo.EMPLOYEE'', ''object'')') AS USER = 'harry'

The above statement should show you permissions harry has on the table EMPLOYEE.



1.19. GET A LIST OF ALL COLUMNNAMES WITH ALL TABLENAMES, AND DATATYPES, USING TSQL:


Sometimes it is just handy to have an ordered list of all Columnnames, with their datatypes
and corresponding Tablenames, of all tables in your database.

Here are a few queries that will produce such a list.

-- Script 1: for all SQL Server versions

SELECT substring(c.name, 1, 30) as "ColumName",
c.xtype,
substring(object_name(c.id),1,30) as "TableName",
substring(t.name,1,30) as "DataType"
FROM syscolumns c, systypes t
WHERE c.xtype=t.xtype
ORDER By object_name(c.id)

Optionally you can add an AND clause (after the WHERE clause), like for example "AND object_name(c.id)='Orders'".


-- Script 2: for SQL Server 2005/2008

SELECT * FROM INFORMATION_SCHEMA.COLUMNS -- get all info

SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE, IS_NULLABLE FROM INFORMATION_SCHEMA.COLUMNS -- get selected info



1.20. A FEW NOTES ABOUT THE "@@" FUNCTIONS:


We have already seen a couple of functions in this note, like for example the string function "REPLACE()", or
the system functions (typically used by the DBA) like "fn_virtualfilestats()" etc..

With a function, you would typically expect parentheses "(" and ")", where in between you can place the inputvalue(s),
and or expressions, like "myfunction(5)" or "substring(cust_code,1,8)".

In SQL Server, there exists a rather special class of system functions.
Usually, they don't take input values from us, although for some of these functions a SET statement is allowed.
They have names like "@@functionname", so their names always start with "@@".
They behave somewhat like global variables, and you can retrieve a value by selecting the functionname.
In a moment, you will see some examples.
The "context" in which they apply is very diverse: some functions only have meaning on the Instance Level (like @@cpu_busy),
while others can perfectly be used in your scripts (like @@error).

We already have seen a few of those functions, like @@SPID, which is a "Session ID number" associated with every session.

Example 1: Some system statistical @@ functions

Try the following statements in a Query Window:

select @@CPU_BUSY
select @@IO_BUSY
select @@IDLE
select @@Total_Errors

These are just a few examples of those system statistical @@ functions.
For example, @@CPU_BUSY returns the time that SQL Server has spent working since it was last started.
As another example, @@Total_Errors represents the number of disk write errors encountered by SQL Server since it was last started.

Of course, it might not be immediately obvious how you could benefit from these types of functions,
but maybe it's nice to know they exist.

Example 2: "@@version"

This function returns a resultset, telling you the current version of SQL Server.
So, if we just do this:

select @@version

we find the current version of this SQL Server installation.

Example 3: "@@rowcount" and "@@error"

These are quite usable in your scripts. In section 1.13, we created a script that removed "unwanted" characters
from a variable, or a field. Now suppose we want to clean a certain field in all rows of a table.
In other words, we want to walk through all rows of that table. You might think of using a cursor, or some
other form of WHILE loop, with some sort of counter that you need to construct.
Instead, take a look at the following example, with a quite interesting loop:

-- let's first create a test table:

create table #a
(
s varchar(100)
)

-- now insert a couple of testrecords, with good text and noise text:

INSERT #a (s) SELECT 'asd i/.,<>as>[{}]vnbv'
INSERT #a (s) SELECT 'aaa'
INSERT #a (s) SELECT '123 ''h 9)'

-- what's in the table now?

SELECT * FROM #a

asd i/.,<>as>[{}]vnbv
aaa
123 'h 9)

-- now let's clean the table:

while @@rowcount > 0
update #a
set s = replace(s, substring(s, patindex('%[^a-zA-Z0-9 ]%', s), 1), '')
WHERE patindex('%[^a-zA-Z0-9 ]%', s) <> 0

-- So, what's in the table now?

SELECT * FROM #a

asd iasvnbv
aaa
123 h 9

So, our script worked. But the main thing is: did you notice how we used the @@rowcount function
as the "while true" criterion for the while loop? So, sometimes you can really simplify your WHILE loop,
because @@rowcount returns the number of rows affected by the last statement, or the number of rows associated
with the current TSQL block.

But you can use it with ordinary SQL statements as well, if you want to know how many rows were processed.
For example, if you want to know how many rows were affected by an UPDATE statement, you can use @@rowcount as
in this example:

UPDATE EMPLOYEES
SET DEPARTMENT = 'Amsterdam' WHERE city = 'The Hague'
SELECT @@rowcount AS 'RowsChanged'
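
@@error works in a similar way: it returns the error number of the last executed statement (0 if it succeeded).
Note that both @@error and @@rowcount are reset by every following statement, so capture them immediately.
A minimal sketch:

DECLARE @err INT
DECLARE @rows INT

UPDATE EMPLOYEES
SET DEPARTMENT = 'Amsterdam' WHERE city = 'The Hague'

SELECT @err=@@error, @rows=@@rowcount -- capture both at once, before the next statement resets them

IF @err <> 0
  PRINT 'Update failed with error '+CAST(@err AS varchar)
ELSE
  PRINT CAST(@rows AS varchar)+' row(s) updated'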




1.21. HOW TO SCRIPT YOUR DATABASE:


It's easy to script a Database, including all objects (or a selection), using the graphical Management Studio.
To do that, right-click your database, choose "Tasks", and next choose "Generate Scripts".

But how to do it from the commandline, so that you can automate the process?
For this, you can use the "sqlpubwiz" utility.
To download it, you might use the following link:

download sqlpubwiz utility from Microsoft (english)

The utility has quite a few prerequisites, but that's all described in the link above.

Using it from the commandline, to script the SALES database, you would use a command
similar to the example below:

C:\> sqlpubwiz script -d SALES -S MYSERVER -U sa -P password C:\sources\sales.sql



1.22. HOW TO GENERATE THE CORRESPONDING INSERT STATEMENTS FOR A TABLE WITH DATA:


Maybe you have a certain table with N rows, and you want to generate the corresponding N INSERT statements,
just as if you had an empty table and could then use those INSERT statements to fill it up.

Method 1:

Actually, the "sqlpubwiz" utility from section 1.21, can be used to achieve this.
You can script a complete database, either with structure and all data, or just only the structure of the objects.
So, for just one, or all tables, this tool might be of help.
To get further instruction, use the "help" function of the utility, like in

C:\> sqlpubwiz help script

By the way, if you do not need to use sqlpubwiz in scripts (in automated solutions), you might as well
use the graphical mode, by just entering "sqlpubwiz" at the prompt, in the directory where the utility was installed.


Method 2:

You can create a stored procedure which will do the job: generate the insert statements for a given table.
You can find many free examples on the internet.
The code below can be used for tables having administrative datatypes like char(), varchar(), decimal() etc..
When a table uses more "complicated" datatypes, the result may not be what you want.

Create the procedure listed below in your Database. If you then want to create the INSERT statements for,
say, the CUSTOMERS table, use the following command:

exec generate_inserts CUSTOMERS

Source of the procedure:

create proc generate_inserts @table varchar(20)
--Generate inserts for table @table
AS
declare @cols varchar(1000)
declare @col varchar(50)

set @cols=''

declare colcur
cursor for
select column_name
from information_schema.columns
where table_name=@table

open colcur

fetch next from colcur into @col

while @@fetch_status=0
begin
select @cols = @cols + ', ' + @col

fetch next from colcur into @col
end

close colcur
deallocate colcur

select @cols = substring(@cols, 3, datalength(@cols))

--select @cols

declare @sql varchar(4000)
declare @colname varchar(100),
@coltype varchar(30)

select @sql = 'select replace(''insert ' + @table + ' (' + @cols + ') '

select @sql = @sql + 'values ('''

declare ccur
cursor for
select column_name, data_type
from information_schema.columns
where table_name=@table

open ccur

fetch from ccur into @colname, @coltype

while @@fetch_status=0
begin
if @coltype in ('varchar', 'char', 'datetime')
select @sql=@sql + ''''''

select @sql=@sql + ' + coalesce(convert(varchar, ' + @colname + '), ''null'') + '

if @coltype in ('varchar', 'char', 'datetime')
select @sql=@sql + ''''''
select @sql = @sql + ''', '''

fetch from ccur into @colname, @coltype
end

close ccur
deallocate ccur

select @sql=substring(@sql, 1, datalength(@sql)-3)

select @sql=@sql + ')'', ''''''null'''''', ''null'') from ' + @table

exec (@sql)



1.23. OVERVIEW SQL SERVER VERSIONS AND BUILDS version 7 - 2012:


Sometimes it can be handy to have a simple listing of all SQL Server versions and builds.
You can find that list using the following link:

List of SQL Server versions (html file)



1.24. LOGGING FROM STORED PROCEDURES AND TSQL:


Often, you want to run a stored procedure, or just a TSQL block, and afterwards you want to view
some logging of this procedure, in order to see whether that code ran successfully or not.

There are many ways to get errors or actions logged:
for example, to a file, to the screen (when you run the code interactively), to a table, or to the Windows application log etc...
Personally, I think that logging to a sort of Admin table is very effective.
And it's easy: it's just an INSERT statement.

Let's view a couple of examples.

Suppose we have a stored procedure which uses an input variable. Suppose this procedure takes as input
the path to a file (like E:\UPLOAD\DATA), and then performs some sort of processing, using that file.

So, the start of the sp code could be something like this:

CREATE PROCEDURE usp_processfile
@FILE_PATH VARCHAR(128) -- the input var

AS
BEGIN
-- the rest of the code

Now, before you do any processing at all, you first want to check the "validity" of this
input variable. So, you want to know that it's not NULL, and that it at least contains a "\" character somewhere,
which makes it likely that the input variable (which a user typed in, or which the procedure received from elsewhere)
is indeed a "path". So, we might put this test at the beginning of our sp:

if @FILE_PATH IS NULL or (charindex('\', @FILE_PATH) = 0)
BEGIN
  SET @ERR_MESSAGE='Incorrect file path.'
  GOTO error_section
END

Now let's take a look at a few examples of how we can log that error message.

Logging to a table:

error_section:
  INSERT INTO ADMINTABLE
  VALUES
  (getdate(),'usp_processfile',@ERR_MESSAGE)

  PRINT @ERR_MESSAGE -- show to the screen as well
RETURN -- terminate procedure
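
The INSERT above assumes that such an admin table exists. A minimal sketch of a matching table
(the name and columns are just an example):

CREATE TABLE ADMINTABLE
(
LOGDATE DATETIME, -- when it happened
PROCNAME VARCHAR(128), -- which procedure logged it
MESSAGE VARCHAR(512) -- the message itself
)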

Raising an error to a calling application:

error_section:
RAISERROR (@ERR_MESSAGE, 10, 1)
RETURN -- terminate procedure

Raising an error to a calling application, and writing an event in the SQL Server log and Windows eventlog:

error_section:
RAISERROR (@ERR_MESSAGE, 10, 1) with log
RETURN -- terminate procedure

Logging to a file:

error_section:
DECLARE @log_cmd varchar(128)
SELECT @log_cmd='echo'+' '+@ERR_MESSAGE+' >> C:\MANAGEMENT\LOAD_FILE.LOG'
EXEC master.dbo.xp_cmdshell @log_cmd

RETURN -- terminate procedure
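
Note: from SQL Server 2005 onwards, you can also trap errors with a TRY...CATCH block, and do the logging
in the CATCH block. A minimal sketch, reusing the admin table from above:

BEGIN TRY
  -- the actual processing goes here, for example:
  EXEC usp_processfile 'E:\UPLOAD\DATA'
END TRY
BEGIN CATCH
  INSERT INTO ADMINTABLE
  VALUES (getdate(), 'usp_processfile', ERROR_MESSAGE()) -- ERROR_MESSAGE() returns the message of the trapped error
END CATCH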



1.25. USING "EXEC" IN TSQL AND STORED PROCEDURES:


Important: if you use "exec" (or sp_executesql) to dynamically create SQL from strings and variables,
and if one or more variables comes from user input, then there exists the potential danger of "SQL injection".
This could mean that the resulting code could be something that you most definitely do not want to execute,
and it could even be dangerous.
This is especially true for code that's exposed through public sites, like in a backend database for a website.
So always check the variables very carefully.
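
One way to reduce that risk is to pass user input as a parameter to sp_executesql, instead of concatenating it
into the statement itself. A minimal sketch (the CUSTOMERS table and NAME column are just example names):

DECLARE @userinput varchar(64)
SET @userinput='Johnson' -- imagine this value came from a user

EXEC sp_executesql
  N'SELECT * FROM CUSTOMERS WHERE NAME = @name', -- @name remains a parameter; its value is never parsed as SQL
  N'@name varchar(64)',
  @name = @userinput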

Usage 1:

In TSQL code, like TSQL batches and stored procedures, you might encounter the "exec" keyword.

You probably have used it many times already, since if you want to execute a stored procedure from a Query Window,
you would use:

exec StoredProcedureName

So, if you want to execute existing code in, for example, stored procedures or extended stored procedures
(which could be stored in a module or .dll), you simply call them using "exec".

This is very easy indeed. Here are a few other examples:

exec StoredProcedureName -- calling a regular sp from a query tool

exec xp_ExtendedStoredProcedureName -- calling an extended sp from a query tool

Now let's try this:

CREATE procedure usp_proc1 @input1 varchar(32)
AS

BEGIN
PRINT 'Hi from usp_proc1'
PRINT @input1
END
GO

CREATE procedure usp_proc2 @input2 varchar(32)
AS

BEGIN
PRINT 'Hi from usp_proc2'
PRINT @input2
END
GO

CREATE procedure usp_proc3 @input3 varchar(32)
AS

BEGIN
PRINT 'Hi from usp_proc3'
PRINT @input3
END
GO

Now let's create a Main routine that calls the 3 sp's above:

CREATE procedure usp_Main -- Create a sp which calls the other sp's
@inp1 varchar(32),
@inp2 varchar(32),
@inp3 varchar(32)
AS

BEGIN

  exec usp_proc1 @inp1

  exec usp_proc2 @inp2

  exec usp_proc3 @inp3

END
GO

If you would run usp_Main, like this:

exec usp_Main 'Hi', 'Dear', 'Reader'

you would get this:

Hi from usp_proc1
Hi
Hi from usp_proc2
Dear
Hi from usp_proc3
Reader

Usage 2:

There is a second important application of "exec", namely to execute code which has been dynamically created
from character strings and variables.
Usually, it takes a form like:

exec ('a list of strings and/or character-variables which indeed linked together form a command')

Or, if a certain variable, for example @varcmd, is already a string containing a command, then this works too:

exec (@varcmd)

Note the parentheses: without them, "exec @varcmd" would be interpreted as a call to a stored procedure whose
name is stored in @varcmd, instead of executing the string as a batch.

In either case, the concatenated total string will be parsed, and if it indeed forms a command, it's executed.
It's really great! Here are a few examples to illustrate this:

Example 1:

DECLARE @dbname VARCHAR(64)

DECLARE cur1 CURSOR FOR
SELECT name FROM master.dbo.sysdatabases

OPEN cur1
FETCH NEXT FROM cur1 INTO @dbname

WHILE (@@fetch_status<>-1)
BEGIN
PRINT 'DATABASE FILES FOR: ' +@dbname
PRINT ' '
EXEC ('SELECT
fileid,
(size * 8 /1024) AS SIZE_IN_MB,
substring(name, 1, 32) AS NAME,
substring(filename, 1, 90) AS FILENAME
FROM '+@dbname+'.dbo.sysfiles')

FETCH NEXT FROM cur1 INTO @dbname
END

CLOSE cur1
DEALLOCATE cur1


Example 2:

SELECT @totalcommand='bcp CUSTOMERS in '+@importpath+' -c -F2 -T'
EXEC xp_cmdshell @totalcommand


Notice how we concatenated several strings into @totalcommand, which we then feed to "exec".
The above code is often rewritten into the code below.


SELECT @totalcommand='bcp ##BCP_LOAD in '+@importpath+' -c -F2 -T'
EXEC @RESULT = xp_cmdshell @totalcommand

Since the execution of "xp_cmdshell @totalcommand" will return a "returncode" (like 0, 1 etc..),
you can assign that to a variable like @RESULT or @RETVAL or something similar.
One advantage is that you can then use that returncode for further tests, as shown below.

EXEC @RESULT = xp_cmdshell @totalcommand
IF (@RESULT <> 0)
BEGIN
  SET @MESSAGE='Error loading data in temporary table. Possibly wrong path or file not found.'
  GOTO error_section
END



1.26. HOW TO VIEW WHETHER PROCESSES WERE SQL Server, OR NTLM, OR KERBEROS AUTHENTICATED:


In SQL Server 2005 and 2008, you can use the following query. It will show all spids, with their
associated loginname, hostname, program, and authentication method.

SELECT s.session_id, s.auth_scheme, e.host_name, e.program_name, e.nt_user_name
FROM sys.dm_exec_connections s, sys.dm_exec_sessions e
WHERE s.session_id=e.session_id



1.27. A FEW NOTES ON USER DEFINED FUNCTIONS:


There are plenty of standard functions in SQL Server. In section 1.12, we saw a couple of important
"string" functions, but there are lots of mathematical, statistical, and other functions as well.

You can write your own functions too. These are then called "User Defined Functions" (udf).

But we can already write our own stored procedures, so why do we need udf's?

A stored procedure can indeed do all sorts of jobs. Maybe it does not even "return" anything
to the calling user or program. Maybe it just updates a table, and that could be all.

A function resembles a stored procedure, but generally, a function should be smart, compact code,
and it always returns "something", as we are used to from the standard functions in SQL Server.
It can return a "scalar" (like a number or string), or a "table-like" resultset, called a "table".

They are great. You know that you always "exec" a stored procedure, but your udf's can be used like
the common standard functions. So, you can use them in all sorts of SELECT statements.

A few simple examples will illustrate how to create and use them.

Example 1:

CREATE FUNCTION udfEmpByCity(@city varchar(20))
RETURNS TABLE
AS
RETURN (
  SELECT EmployeeID, LastName, FirstName
  FROM Employees
  WHERE (City=@City)
)

To access the resultset of this function you can use the next example:

SELECT * FROM udfEmpByCity('Seattle')
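
Example 2:

Since Example 1 returned a table, here is a minimal sketch of a scalar udf (the function and its computation are made up):

CREATE FUNCTION udfNetPrice(@price decimal(10,2), @vat_percentage decimal(5,2))
RETURNS decimal(10,2)
AS
BEGIN
  RETURN @price * (1 + @vat_percentage / 100)
END

Note that a scalar udf must be called with its schema prefix, like:

SELECT dbo.udfNetPrice(100.00, 21.00) -- returns 121.00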

User defined functions thus can accept parameters, and return either a scalar value or a table-like dataset.



1.28. HOW TO GET LISTS OF PK AND FK CONSTRAINTS:


Although the INFORMATION_SCHEMA views (see the system views container of your database) are intended
to provide you with all sorts of information, including PK and FK relations, the "older" system views (like sysreferences)
are still great. But you are certainly encouraged to take a look at the INFORMATION_SCHEMA views.
Let's try two old-fashioned queries, and one modern one:

Older queries (but should work):

-- Get a list of Referencing (FK) and Referred (PK) Tables:

SELECT substring(object_name(constid), 1, 40) AS FK,
  substring(object_name(fkeyid), 1, 40) AS "Referencing Table",
  substring(object_name(rkeyid), 1, 40) AS "Referenced Table"
FROM sysreferences
ORDER BY object_name(rkeyid)

-- Get a list of Tables and their Primary Keys:

SELECT substring(name,1,30) AS "PrimaryKey",
id, xtype, object_name(parent_obj) AS "Parent_table"
FROM sysobjects
WHERE xtype='PK'
ORDER BY object_name(parent_obj)

Of course, if you want to be a bit more "modern", and better, just query views like INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS,
INFORMATION_SCHEMA.COLUMNS, INFORMATION_SCHEMA.TABLE_CONSTRAINTS etc..,
and find out what you can get from them.

Here is a nice example, that should work on all modern SQL versions:

Newer query:

SELECT
U.CONSTRAINT_NAME,
U.TABLE_SCHEMA,
U.TABLE_NAME,
U.COLUMN_NAME,
FK.update_referential_action,
FK.delete_referential_action,
FK.is_not_trusted,
U.ORDINAL_POSITION
FROM
INFORMATION_SCHEMA.KEY_COLUMN_USAGE U
INNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS C ON U.CONSTRAINT_NAME = C.CONSTRAINT_NAME
INNER JOIN sys.foreign_keys FK ON U.CONSTRAINT_NAME = FK.name
WHERE
C.CONSTRAINT_TYPE = 'FOREIGN KEY'
order by U.TABLE_NAME,U.COLUMN_NAME



1.29. HOW TO IMPLEMENT PARTITIONING:


Tables might get large. Or even very large, like for example having 8000 Million rows.
As a consequence, the associated indexes are large too.

We already know that we can "partition", or separate, (whole) tables and indexes on there own "filegroups".
In this chapter, we go one step further, by "partitioning" distinguisable subsets of one table on their own filegroups.

There are two main reasons to partition a Table or Index:
  • You might enhance performance because of parallel IO on multiple FileGroups, and
  • You might have faster and smaller backups, because new and active records are in a certain partition,
    while historical records live in other partitions (thus filegroups), which you do not need to back up that often.

In this section, we show the "partitioning" of subsets of a table, on their own filegroups.

Ideally, you would have a certain column in your table that lends itself easily to being "partitioned". In other words,
that particular column would have values that are easily divided into subsets, like for example "years" or "months".
For example, you might have a table with some date column, and you could group the records belonging
to the years "2000", "2001", "2002" etc..

But even if it doesn't go in such a "natural" way, you can always force some form of subsetting. For example, you have
a numeric column, and you just decide to distinguish the following subsets:

1 - 10000000
10000001 - 20000000
20000001 - 30000000
etc..

Creating a partitioned Table, or Index, is a 3 step process.

Don't forget that the whole idea behind partitioning a table is this: create Ranges of values that the rows of the table
will fall into, and make sure that you can store those different record subsets (the different Ranges) on separate Filegroups.

- We start by defining a Partition Function. This is a function that defines the boundaries, or Partition Ranges, that the subsets of rows will use.
- Secondly, we create a Partitioning Scheme, that defines the mappings of the Partition Ranges to FileGroups (individual storage structures).
- Thirdly, we create a Table, using the definitions above.

That's all.

So, a simple example will illustrate this.

1. Suppose we have a certain database, which uses the filegroups FG1, FG2, FG3 and FG4.

Suppose we have a table PARTSAMPLE that we want to partition. It uses the columns ID (datatype INT) and NAME (varchar(20)).
The values that ID can take, are from 1 upwards (for example 1, 1000, 1500, 7000, 20000, 30000, 89000 etc..).

2. Now let's define the "Partition Ranges", or boundaries, that the subsets of rows can take.
We do that by creating a Partition Function, which we are later going to "bind" to the table.

CREATE PARTITION FUNCTION samplefunction (INT)
AS
RANGE LEFT FOR VALUES (10000, 20000, 30000)
GO

This function is an independent object in the database. Later we will use it in the PARTSAMPLE table definition.

The "LEFT" or "RIGHT" in the function definition determines to which partition each boundary value itself belongs.
With the boundary values above, the second partition would hold:
10001 - 20000 (RANGE LEFT: the boundary value 10000 belongs to the partition on its left, the first one)
10000 - 19999 (RANGE RIGHT: the boundary value 10000 belongs to the partition on its right, the second one)

3. Next, we will define the "Partition Scheme", that will relate the Ranges to the Filegroups:

CREATE PARTITION SCHEME samplescheme
AS
PARTITION samplefunction TO
([FG1], [FG2], [FG3], [FG4])
GO

You see that in the Partition Scheme definition, we relate the FileGroups to the "samplefunction" function, thereby
relating the FileGroups to the Ranges (or boundaries).

4. As the last step, we will define our PARTSAMPLE table, using the Partition Scheme defined above.

CREATE TABLE PARTSAMPLE
( ID INT NOT NULL,
NAME VARCHAR(20) NOT NULL
)
ON samplescheme(ID)
GO

So, if a record with an ID of 15000 would be inserted, it would be stored on FileGroup "FG2".
Likewise, if a record with an ID of 25000 would be inserted, it would be stored on FileGroup "FG3".
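
You can verify this with the "$PARTITION" function, which returns the partition number that a value maps to.
A small sketch, using the function and table from above:

-- In which partition number would the value 15000 fall? (should return 2):
SELECT $PARTITION.samplefunction(15000) AS PartitionNumber

-- Count the rows per partition of the PARTSAMPLE table:
SELECT $PARTITION.samplefunction(ID) AS PartitionNumber, COUNT(*) AS NumRows
FROM PARTSAMPLE
GROUP BY $PARTITION.samplefunction(ID)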

For an existing table, you could create an empty partitioned table with another name.
Next, INSERT... SELECT... the rows from the original non partitioned table to the new partitioned table.
When done, you can drop the original, and rename the new table.
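
A minimal sketch of that process (here "ORDERS" is a hypothetical existing table, and we assume the
partition scheme "samplescheme" from above fits its ID column):

CREATE TABLE ORDERS_PART
( ID INT NOT NULL,
NAME VARCHAR(20) NOT NULL
)
ON samplescheme(ID)
GO

INSERT INTO ORDERS_PART (ID, NAME)
SELECT ID, NAME FROM ORDERS
GO

DROP TABLE ORDERS
GO

EXEC sp_rename 'ORDERS_PART', 'ORDERS'
GO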

Often, it's not that easy, for various reasons:

- you must take into account the PK-FK relations with other tables.
- it could take a long time, and cost performance, if the table is really large.
- the original table should not have any other transactions while inserting into the new table.
- you must take into account rebuilding and/or partitioning the indexes as well.

So, partitioning a really large table, could be considered to be something like a "project"...



1.30. A FEW NOTES ON DYNAMIC SQL:


(1) The simple form:

In this simple case, we just want the SQL client to "create" or "print" a series of statements,
which we otherwise would need to type one by one.

Here is an example. Suppose you want a number of databases to be put in READ ONLY mode, and you
are not very eager to type all individual statements.
Here is what you can do in a loop:

set nocount on

DECLARE @DBName varchar(255)

DECLARE MyCursor CURSOR FOR
SELECT name FROM sys.databases
WHERE name like 'SP20%'

OPEN MyCursor

FETCH NEXT FROM MyCursor INTO @DBName

WHILE @@FETCH_STATUS = 0
BEGIN
PRINT 'ALTER DATABASE '+@DBNAME+' SET READ_ONLY'
FETCH NEXT FROM MYCursor INTO @DBNAME
END

CLOSE MyCursor
DEALLOCATE MyCursor

Note that the above loop just prints a number of "ALTER DATABASE" statements,
for those databases with a name starting with "SP20".
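
If you would rather have the loop execute the statements directly, instead of printing them, you could
(for example) replace the PRINT line with an EXEC, like:

EXEC ('ALTER DATABASE ['+@DBName+'] SET READ_ONLY')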

(2) Using "exec" or "sp_executesql":

Important: if you use "exec" (or sp_executesql) to dynamically create SQL from strings and variables,
and if one or more variables come from user input, then there is the potential danger of "SQL injection".
This could mean that the resulting code is something that you most definitely do not want to execute,
and it could even be dangerous.
This is especially true for code that's exposed through public sites, like in a backend database for a website.

Suppose you create TSQL code in which strings and variables are "concatenated" in such a way
that a valid SQL statement is formed. Then you can really execute it.

We have already done that on many occasions. For example:

Example 1:

DECLARE @DBLIST TABLE ( -- variable of type "table"
ID INT IDENTITY(1,1),
DBNAME VARCHAR(32)
)

DECLARE @DBNAME VARCHAR(32)
DECLARE @I INT
DECLARE @DBLIST_MAX INT

INSERT INTO @DBLIST -- we fill the table variable with a list of our databases
(DBNAME)
SELECT name from sys.databases

SET @DBLIST_MAX=(SELECT MAX(ID) FROM @DBLIST)
SET @I=1

WHILE @I <= @DBLIST_MAX
BEGIN
  SET @DBNAME=(SELECT DBNAME FROM @DBLIST WHERE ID=@I)
  PRINT 'SOME INFO PER DATABASE USING sp_helpdb: '+@dbname
  PRINT ' '
  EXEC ('sp_helpdb '+@dbname)
  SET @I=@I+1 -- increase the counter by "1"
END

Yes, we already illustrated the use of "exec" in section 1.25, but maybe it was a good idea to list a couple
more examples.
Up to now, we have always used the "exec" keyword to execute a command that was composed of strings alone,
or strings and variables. There is much more to "exec" than what is shown here; for example, you might take a look
in BOL (Books Online) and search for the keyword "exec" for the complete syntax.

sp_executesql:

As another way of executing dynamically created sql, we can take a look at the very versatile
"sp_executesql" stored procedure. The use of "exec" and "sp_executesql" is very similar at first sight.
Both will let your TSQL block of code execute any dynamically built DML or DDL SQL statement.

But there are important differences as well:

  1. Both can execute a command composed of strings and variables.
  2. On many occasions, sp_executesql will reuse an execution plan, because to the optimizer it "looks"
    just like another stored procedure having input parameters. This is an advantage over just using "exec",
    especially if your code will be used heavily.
  3. sp_executesql can be parameterized.
  4. Generally, sp_executesql seems to be regarded as "more secure" than just using "exec".
Let's illustrate sp_executesql by showing a few examples.

Example 1:

exec sp_executesql N'CREATE TABLE ABC1 (
ID INT NULL,
CUSTNAME VARCHAR(20))'

So, what's the big deal here? Using just exec, we can do the same thing:

exec ('CREATE TABLE ABC2 (
ID INT NULL,
CUSTNAME VARCHAR(20))')

Yes, it works the same. But in the next example, we use code that accepts a parameter,
which can easily be done using sp_executesql.

Example 2:

CREATE TABLE T1 (
cust_id int,
cust_name varchar(20))

insert into T1
values
(1,'Harry')

Now let's use sp_executesql using a parameter:

DECLARE @varcmd nvarchar(128)
DECLARE @ID INT

SET @varcmd='SELECT * FROM T1 WHERE CUST_ID=@ID'

exec sp_executesql @varcmd, N'@ID INT', @ID=1
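
sp_executesql can also return a value to the calling code, via an OUTPUT parameter. A small sketch,
using the T1 table from above:

DECLARE @varcmd2 nvarchar(128)
DECLARE @cnt INT

SET @varcmd2=N'SELECT @cnt = COUNT(*) FROM T1'

exec sp_executesql @varcmd2, N'@cnt INT OUTPUT', @cnt=@cnt OUTPUT

PRINT @cnt   -- @cnt now holds the rowcount of T1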

Example 3:

As another example of how convenient sp_executesql can be, consider the following.
Suppose you have some fancy backup command constructed, and you have "put" that into the variable @cmd.
This variable @cmd expects a parameter, let's say @backup_start_date.
Then you might have constructed a command like below, to let it all work:

exec sp_executesql @cmd,@params=N'@backup_start_date datetime', @backup_start_date = @start_date

Example 4:

This example can be quite useful at times. Suppose you declare a "very wide" character variable, say @SQL,
of datatype VARCHAR(1000), or better yet VARCHAR(2000), or better still, NVARCHAR(MAX).
As the following example will show, you can put all sorts of "SQL" into this variable (it's just a string),
and when done, you can PRINT (@SQL), and even EXEC (@SQL):

DECLARE @SQL NVARCHAR(MAX)

SET @SQL='
CREATE TABLE A (id int)
CREATE TABLE B (id int)
'     -- note the starting quote (above), and this closing quote

PRINT (@SQL)

EXEC (@SQL) -- this will execute all sql statements in @SQL

The statement EXEC (@SQL) will thus create the tables A and B.
Of course, if you need "to do other stuff" before you PRINT or EXEC the @SQL variable, you can append
strings to @SQL later on, like for example:

DECLARE @SQL NVARCHAR(MAX)

SET @SQL='
CREATE TABLE A (id int)
CREATE TABLE B (id int)
'

-- now do all sorts of other code

SET @SQL=@SQL +'
CREATE TABLE C (id int) -- or other strings
'




1.31 A FEW NOTES ON BACKUP AND RESTORE USING TSQL STATEMENTS:


If you want to see some basic Backup and Restore methods, using TSQL Statements, you might want
to take a look at this document:

Quick Backup & Restore SQL Server (html)




1.32 COMPARING TABLES: FIND ROWS WHICH ARE IN ONE TABLE, BUT WHICH ARE NOT IN ANOTHER SIMILAR TABLE:



1. Using the "EXCEPT" clause:

Actually, this is probably the best method.

If we have two similar tables, and we want to retrieve data from table1, which is not present in table2,
we can use EXCEPT, like in this example:

SELECT [t1].[ID]
,[t1].[Col1]
,[t1].[Col2]
FROM [table1] [t1]
EXCEPT
SELECT [t2].[ID]
,[t2].[Col1]
,[t2].[Col2]
FROM [table2] [t2];

2. Using the "NOT IN (SELECT ..)" clause:

Actually, this is not the best method if there is a chance of NULL values in the selected column.

This will only work fine if the column you select on does NOT have NULL values.
If you are sure that NULLs are not present, for example because the column used in the compare
is defined as NOT NULL, then the method presented below is good and simple enough.

Otherwise, in case of NULLs, use the "WHERE NOT EXISTS (SELECT..)" method.

Here is a very simple example to illustrate NOT IN.
First, let's create two test tables with the same layout, and insert some records:

create table CustomerA
(
cust_id int not null,
cust_name varchar(64) not null,
cust_city varchar(64) null
)

create table CustomerB
(
cust_id int not null,
cust_name varchar(64) not null,
cust_city varchar(64) null
)

insert into CustomerA values (1,'klm','amsterdam')
insert into CustomerA values (2,'abc','new york')
insert into CustomerA values (3,'def','moskou')
insert into CustomerA values (4,'ghi','alkmaar')
insert into CustomerA values (5,'jkl','djakarta')

insert into CustomerB values (1,'klm','amsterdam')
insert into CustomerB values (2,'abc','new york')
insert into CustomerB values (3,'def','moskou')
insert into CustomerB values (4,'ghi','alkmaar')
insert into CustomerB values (6,'qrs','tel aviv')
insert into CustomerB values (7,'tuv','basra')


Notice that the "cust_id" column is defined to be NOT NULL.
Now, to get the differences:

SELECT * FROM customerA
WHERE cust_id NOT IN (select cust_id from customerB)

cust_id..cust_name....cust_city
5........jkl..........djakarta

SELECT * FROM customerB
WHERE cust_id NOT IN (select cust_id from customerA)

cust_id..cust_name....cust_city
6........qrs..........tel aviv
7........tuv..........basra


3. Using "EXISTS" or "NOT EXISTS":

This is preferable over using NOT IN.

Having NULL values in the "key" columns you use to compare tables (like cust_id in the example above),
might make it difficult to retrieve correct results, or the query may not return anything at all.
In this case, use the "WHERE EXISTS (SELECT..)" or the "WHERE NOT EXISTS (SELECT ..)" clause.
For example:

SELECT * FROM TABLE1 as m WHERE NOT EXISTS
(SELECT * FROM TABLE2 as r where r.id=m.id)

Generally, "NOT EXISTS ()" queries have a better performance over "NOT IN ()".
Although they "look" very similar, be aware that "NOT IN" is not equivalent to "NOT EXISTS" when it comes to null values.

Exercise:
use the same tables and rows as were used in (2), and this time use the NOT EXISTS method.
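
If you want to check your answer: a possible solution for finding the rows of CustomerA that are not
in CustomerB would be (and similar, with the tables swapped, for the other direction):

SELECT * FROM CustomerA a
WHERE NOT EXISTS (SELECT 1 FROM CustomerB b WHERE b.cust_id = a.cust_id)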



1.33 A FEW NOTES ON REVIVING SQL ACCOUNTS AFTER MOVE (OR RESTORE) OF A DATABASE:


If you move, or restore a Database from one SQL Server on machine A, to another SQL Server installation
on machine B, the SQL database accounts are not "in sync" with the SQL logins (on B) anymore.

The following short note shows how to deal with this, using the special stored procedures
"sp_change_users_login" and "sp_help_revlogin".

Short note on reviving SQL accounts, after moving or restore of a Database on another machine.

=> Scripting logins:

Anyway, to script the logins on an Instance, in the master database use:

exec sp_help_revlogin

This produces a list of SQL- and Windows "CREATE LOGIN..." statements, which you can use on another Server
to import the accounts.

However, here is a script too, where you can decide to print statements for the Windows accounts and/or the SQL accounts:

DECLARE @usr varchar(255)
DECLARE MyCursor CURSOR FOR
SELECT name from sys.sysusers where isntuser=1 or issqluser=1

OPEN MyCursor
FETCH NEXT FROM MyCursor INTO @usr

WHILE @@FETCH_STATUS = 0
BEGIN
PRINT 'sp_help_revlogin '+''''+@usr+''''
FETCH NEXT FROM MYCursor INTO @usr
END

CLOSE MyCursor
DEALLOCATE MyCursor


=> Sync SQL database accounts with logins:

If you have restored a database from InstanceA to InstanceB, you might need to "sync" the SQL accounts, meaning
that the SIDs as registered in the database are synced to the SIDs as registered in the master database.
This could solve login problems.

Use this script in the database to get all needed statements:

DECLARE @usr varchar(255)

DECLARE MyCursor CURSOR FOR
SELECT name from sys.sysusers
where isntuser=0
and issqlrole=0
and name not like '%\%'
and name not in ('dbo','sys','guest','INFORMATION_SCHEMA', 'sa')

OPEN MyCursor

FETCH NEXT FROM MyCursor INTO @usr

WHILE @@FETCH_STATUS = 0
BEGIN
PRINT 'exec sp_change_users_login '+'''Update_One'''+','+''''+@usr+''''+','+''''+@usr+''''
FETCH NEXT FROM MYCursor INTO @usr
END

CLOSE MyCursor
DEALLOCATE MyCursor
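
Note: "sp_change_users_login" is deprecated in the more recent versions of SQL Server. Since SQL 2005 SP2,
you can also use "ALTER USER ... WITH LOGIN", so the PRINT line above could just as well generate
statements of this form:

ALTER USER [someuser] WITH LOGIN = [someuser]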



1.34 A FEW NOTES ON INDEX ANALYSIS:


This section deals with the traditional clustered and nonclustered indexes, as they are
used in SQL Server up to, and including, version 2008.

Since SQL 2012, a new type of index is available, called the "COLUMNSTORE INDEX", with a new
storage model and architecture.
For those indexes, please see section 1.49.

In this section, we will discuss the traditional (row-stored) clustered and nonclustered indexes.

⇒ 1. Using the legacy "sysobjects" and "sysindexes" views:

One of the best queries for index analysis has been available since SQL Server 7. It shows you
the tables, and all associated indexes defined on those tables, as well as other info like the number of rows.
In the Query Window of SSMS, you should choose the database of interest first, before you run this query.

SELECT
substring(sysobjects.name,1,50) AS TABLENAME,
substring(sysindexes.name,1,50) AS INDEXNAME,
sysobjects.id, sysindexes.indid, sysindexes.groupid,sysindexes.rows
FROM sysobjects, sysindexes
WHERE sysobjects.id=sysindexes.id
ORDER BY sysindexes.rows desc

⇒ 2. Using the "sys.dm_db_index_usage_stats" DMV:

This DMV provides us with data on "user_seeks", "user_scans", "user_updates" and much more, on all indexes in all databases.
So, it might help to "determine" the effectiveness of indexes.

But there might be catches with this one.

  • It's probably tempting to think that, if the "updates" on an index are much higher than the "seeks"+"scans", the index
    is not much used, and is probably not very effective. Maybe it's better to get rid of it.
    As you may have expected, low values for "seeks" indicate that an index does not work well.
  • Contrary, if the "seeks"+"scans" are much higher than the "updates", it's an indication that the index is properly used,
    and it's likely that we deal with an effective index here.

Although the info from querying "sys.dm_db_index_usage_stats" is very useful, it's certainly better not to let it be your single
source of information.

As always, see it as just one of the "observables" that helps in establishing the grand picture.

Warning:

Never drop an index "just like that", even if it seems to be a useless one, unless you are confident it can't hurt.
For example, maybe some important "batch", running only once a week at night, refers to that particular index, and if that index is gone,
the batch run may give you unpleasant results.

So, see it as a change which generally should be tested first.

To "see what's in it" try the following statement:

SELECT * FROM sys.dm_db_index_usage_stats

To make the output more friendly, we need to join this DMV with other system views like sys.indexes.

Here is a usable query for obtaining statistics on seeks, scans, and updates, for all your indexes:

SELECT object_schema_name(indexes.object_id) + '.' + object_name(indexes.object_id) as objectName,
indexes.name, indexes.type_desc,
us.user_seeks, us.user_scans, us.user_lookups, us.user_updates
FROM sys.indexes left outer join sys.dm_db_index_usage_stats us
on indexes.object_id = us.object_id
and indexes.index_id = us.index_id
and us.database_id = db_id()
order by us.user_seeks + us.user_scans + us.user_lookups desc

⇒ 3. Using the "sys.dm_db_index_physical_stats" function:

This function (DMF) returns fragmentation information on objects. Usually, a function takes one or more arguments,
like in function_name(arg1, arg2,..). This one can take 5 arguments. However, some, or all of them, might be "NULL".
The more arguments you specify as "not null" (so you put in a real value), the more you zoom in to a set of objects, or ultimately, one index.

The function with all arguments looks like this:

sys.dm_db_index_physical_stats(database_id, object_id, index_id, partition_number, mode)

But you can leave any or all arguments as "NULL". So, for example, if partitioning is not used, leave partition_number at "NULL".

What "fragmentation" actually means, and how to solve it, is the subject of section 13.3.

You are probably often interested in the fragmentation level of all indexes in a certain database, or you may want
to investigate a specific index.

In these cases, the queries below show how you could use the "sys.dm_db_index_physical_stats" function.
If you know the "database id" (dbid), you can use that number. Otherwise just fill in the database name of your choice,
and the function "db_id(database_name)" will find the id for you.

SELECT object_name(object_id), index_id, index_type_desc, index_depth, avg_fragmentation_in_percent, avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats (DB_ID('YOUR_DATABASE_NAME'), null,null,null,'DETAILED')
WHERE index_id<>0

SELECT object_name(object_id), index_id, avg_fragmentation_in_percent, avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats (DB_ID('YOUR_DATABASE_NAME'),OBJECT_ID('YOUR_TABLE_NAME'),YOUR_INDEX_ID,null,'DETAILED') -- fill in the index id, or NULL for all indexes of the table

The first query, where you want to see the state of all indexes in a certain database, can take quite some time and resources
in a larger and active database. So, try it first in a test environment.

⇒ 4. General Advice:

Many tables will use a clustered index. You can have only one clustered index on a table. Basically, a clustered index
"sorts" (or orders) the table rows, on the column(s) that were chosen for the index.
At the "leaf page level", the index and the table "are the same", which also makes clear why you can have only one clustered index.

You can have more than one nonclustered index on a table. A nonclustered index is an object external to the table,
and uses its own private leaf pages.

If a large table has a clustered index and multiple nonclustered indexes, then a good policy is to place the table (= clustered index)
on its own filegroup, and the nonclustered indexes on another filegroup. This is quite effective for performance if those filegroups
are defined on different sets of disk spindles.

⇒ 5. Examples of creating an index:

-Non-clustered:
CREATE NONCLUSTERED INDEX indx_employee_empname ON EMPLOYEE(EMP_NAME)

CREATE NONCLUSTERED INDEX indx_employee_emp_cp ON EMPLOYEE(CITY, POSTALCODE)

-Clustered:
CREATE CLUSTERED INDEX indx_employee_empid ON EMPLOYEE(EMPID)

-Strongly Simplified General syntax:
CREATE [CLUSTERED | NONCLUSTERED] INDEX index_name ON TABLENAME(column1, column2,...) [ON filegroup_name]
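
Since SQL 2005, a nonclustered index can also "include" extra non-key columns at the leaf level, which may help
an index to fully "cover" a query. A small sketch, using the EMPLOYEE table from above:

CREATE NONCLUSTERED INDEX indx_employee_emp_city ON EMPLOYEE(CITY)
INCLUDE (EMP_NAME, POSTALCODE)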

⇒ 6. Reorganizing and Rebuilding an index:

What a "large table" is (and thus having large indexes too), just depends on what you are used to. Some find 50 million rows large,
but actually that's quite small for many (or most) other DBA's.
Quite a few "real-life" databases, have tables with over 5000 million rows, and even considerable larger.
And... even real monsters exists too (>100G rows).

Anyway: Reorganizing or Rebuilding a truly small table (say 100000 rows) does not take much time. Maybe a few seconds or so.
However, rebuilding indexes on a large table may take a very long time. The larger the number of rows, the more difficult it gets.
For very large tables, you need partitioning, and handle partitions one at a time.

To get an idea:

- on a "reasonable" Win2K8/SQL2008 machine, with reasonable IO and other resources, then rebuilding a 50 million row table
with, say, 20 columns with standard datatypes, a clustered index and 5 nonclustered indexes, should certainly not take more than 20 minutes.

- on a "reasonable" Win2K8/SQL2008 machine, with reasonable IO and other resources, then rebuilding a (non-partitioned) 5000 million row table
might take very long, or, more likely, is a nightmare.

=> Index Rebuild: Too bad that with the Standard Edition, a Rebuild cannot be done "online". This means that a rebuild can only
succeed if no spid (user access) is accessing a row at the same time the rebuild does. So, generally, there is a potentially high chance of "locks".
If you are sure that some table is not accessed by some user, the rebuild will go fine.
The Enterprise Edition provides the "ONLINE" clause, which will rebuild the index while table access by users is going on.
However, with large tables and high user concurrency, I have seen many "locks" all the time.

=> Index Reorganize: A Reorganize can always be done online.

REORGANIZE:
ALTER INDEX index_name ON table_name REORGANIZE -- just reorganize one index on a certain table
ALTER INDEX ALL ON table_name REORGANIZE -- reorganize all indexes for table_name

REBUILD:
ALTER INDEX index_name ON table_name REBUILD -- just rebuild one index on a certain table
ALTER INDEX ALL ON table_name REBUILD -- rebuild all indexes for table_name

ALTER INDEX ALL ON table_name
REBUILD WITH (FILLFACTOR = 80, SORT_IN_TEMPDB = ON)

Both the reorganize and rebuild statements can use a lot of additional "clauses" like FILLFACTOR and many more.
In essence, a Reorganize uses the existing Index, and logically reorders the pages and refills them again.
In essence, a Rebuild is like dropping the index and completely rebuilding it again.
A rebuild is very effective, if you can afford it.
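
A common rule of thumb (also found in BOL) is: REORGANIZE for fragmentation between roughly 5% and 30%,
and REBUILD above 30%. As a sketch, the following generates the appropriate statement per index in the
current database (the thresholds are just that rule of thumb, and the dbo schema is assumed):

SELECT 'ALTER INDEX ['+i.name+'] ON ['+object_name(ps.object_id)+'] '+
  CASE WHEN ps.avg_fragmentation_in_percent > 30 THEN 'REBUILD' ELSE 'REORGANIZE' END
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, 'LIMITED') ps
JOIN sys.indexes i ON ps.object_id=i.object_id AND ps.index_id=i.index_id
WHERE ps.avg_fragmentation_in_percent > 5
AND ps.index_id > 0 -- skip heaps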


Dynamic statements to generate a script for all indexes for all tables:

Here are two simple loops to generate the statements for a rebuild of all indexes for all tables in a Database:
One script uses the modern "ALTER INDEX.." statement, while the other uses the older "DBCC DBREINDEX()" statement.
So, the end results are listings of statements.

-- script 1:

set nocount on
DECLARE @TableName varchar(255)

DECLARE MyCursor CURSOR FOR
  SELECT table_name FROM information_schema.tables
  WHERE table_type = 'base table'

OPEN MyCursor

FETCH NEXT FROM MyCursor INTO @TableName

WHILE @@FETCH_STATUS = 0
BEGIN
  PRINT 'ALTER INDEX ALL ON '+@TableName+' REBUILD WITH (FILLFACTOR = 80, SORT_IN_TEMPDB = ON)'
  FETCH NEXT FROM MYCursor INTO @TableName
END

CLOSE MyCursor
DEALLOCATE MyCursor

-- script 2:

set nocount on
DECLARE @TableName varchar(255)

DECLARE MyCursor CURSOR FOR
  SELECT table_name FROM information_schema.tables
  WHERE table_type = 'base table'

OPEN MyCursor

FETCH NEXT FROM MyCursor INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
  PRINT 'DBCC DBREINDEX('+@TableName+','+''''',90)'
  FETCH NEXT FROM MYCursor INTO @TableName
END

CLOSE MyCursor
DEALLOCATE MyCursor

Note: if the indexes are in some "schema" (owner) other than dbo, you can simply use notepad on the generated statements,
and replace text like 'ALL ON ' with 'ALL ON owner.'.
Of course, it's much smarter to use a second variable in such a script, and get the owner as well from the "information_schema.tables" view.
This might be a nice exercise.

For more information on Performance tuning, you might take a look at this note.



1.35 USING PROMPT TOOLS (OR GRAPHICAL TOOLS LIKE SSMS) TO CONNECT TO A NAMED INSTANCE, ON SOME PORT:


There are a number of prompt (query) tools like "sqlcmd" or "osql" and the like. Connecting to a default Instance on some
standalone SQL Server machine is simple.
However, you often need to use such tools to connect to a "named" Instance, listening on another port than the default port 1433.

Just take a look at the following examples of how to do that.
Here we focus on how to connect, but many "switches" (or flags) are available, like for example "-E" for integrated security, or "-U -P" for using a SQL account.
For illustration, we will use the "sqlcmd" and "osql" utilities, but it's no different if you would use "bcp" or "SSMS" etc..

-- Connecting to the Default Instance on the machine called SQL50:

C:\> sqlcmd -S SQL50
1> quit

-- Connecting to the Named Instance "INST2" on a Cluster, on "Virtual Server" (resource) SQL60 (as registered in DNS), listening on port 1435:
Note the use of the "\" in the command below.

C:\> sqlcmd -S SQL60\INST2,1435
1> quit

-- Connecting to the Named Instance "INST2" on a Cluster in resource group SQL60, listening on port 1435, but now using raw TCPIP:

C:\> sqlcmd -S tcp:172.20.40.55,1435
1> quit

-- Same examples, but this time using "osql":

C:\> osql -E -S SQL50
1> quit

C:\> osql -E -S SQL60\INST2,1435
1> quit

While the "IP part" and the "port part" in other apps are often seperated by a colon ":", in SQL tools you must use a comma ",".



1.36 SOME REMARKS ON THE SCOM (System Center Operations Manager) ACCOUNT IN SQL SERVER:


SCOM uses its own "OperationsManager" SQL Server database, as well as at least one "Management Server".
Next, you want to "discover" and "monitor" all other SQL Server Instances.

Monitoring requires that an agent be installed on the discovered computer.
An Operations Manager agent is a service that is installed on a computer that will be monitored.
The agent collects data, evaluates it against values in a "Management Pack", and creates alerts.

A management server receives data from all those agents, and sometimes distributes configurations to agents on the monitored computers.

The agent is the "MonitoringHost.exe" process, which runs under the "action account" (local system), or a specific "Run As" account.

Sometimes, a "Management Pack (MP)" is uploaded to the SQL Server Instance.

The agent runs under the "Local System" account. In former SQL versions, NT AUTHORITY\SYSTEM (Local System) had sysadmin privileges.
Since SQL 2012, this account does not get the sysadmin role anymore.

So, often a specific "Run As" account is used, which is a Domain User account. Suppose this account is "Domain\opsmgr-sql".

Windows Server actions:

The following must be realized on all monitored Servers:
- The account must be a member of the local Users group, or "Domain Users" must be a member of the local Users group.
- Place "Domain\opsmgr-sql" in the local "Performance Monitor Users", "Performance Log Users", and "Event Log Readers" groups.
- Assign "Log On Locally" to "Domain\opsmgr-sql".

SQL Server Instance actions:

-> Bad solution:
Just use NT AUTHORITY\SYSTEM (Local System) again:
assign sysadmin rights to the "Local System" account again. Of course, most SQL Admins will NOT like that.

-> Slightly better solution:
-- Create a Domain User credential that will function as the "Run As" account. Perform the Server actions as listed above.
-- Add the "Domain\opsmgr-sql" login in SQL Server.
-- The "Domain\opsmgr-sql" login ("Run As" account), will be granted sysadmin privileges in SQL.
-- Associate this "Run As" account with the SCOM Run as profiles.

If it is treated like a service account, with a very strong password (say 14+ characters: letters, numbers, a few special characters),
and nobody is able to use it, then maybe it is ok.

-> Better solution:
-- Create a Domain User credential that will function as the "Run As" account. Perform the Server actions as listed above.
-- Add the "Domain\opsmgr-sql" login in SQL Server.
create login [Domain\opsmgr-sql] from windows
go
grant view server state to [Domain\opsmgr-sql]
go
grant view any definition to [Domain\opsmgr-sql]
go
grant select on sys.database_mirroring_witnesses to [Domain\opsmgr-sql]
go

Execute the statements generated by:

SELECT 'use ' + name + ' ;'
+ char(13) + char(10)
+ 'create user [Domain\opsmgr-sql] FROM login [Domain\opsmgr-sql];'
+ char(13) + char(10) + char(13) + char(10)
FROM sys.databases WHERE database_id = 1 OR database_id >= 3

use msdb; exec sp_addrolemember @rolename='SQLAgentReaderRole', @membername='Domain\opsmgr-sql'

-- Associate the Domain\opsmgr-sql, or the "Run As" account, with the SCOM Run as profiles.



1.37 SOME SYSTEMVIEW QUERIES ON 2005/2008 DATABASE MIRRORING:


System views:

Database "Mirroring" was sometimes used in SQL 2005/2008. In SQL 2012, it is ofcourse superseded by the AlwaysOn HA/DR implementation.
However, Database "Mirroring" is still supported in SQL 2012.

While traditional Clustering (using Shared storage) and 2012/2014 AlwaysOn are true "cluster solutions", plain Database Mirroring is not.
Just like the name implies, we have a Primary Database on one Instance, which is mirrored to a Secondary database on another Instance.
In this process, transaction log records are used, which are sent from the Primary to the Secondary.
It can be seen as a "lightweight" HA implementation, since if a problem arises at the Primary, the Secondary can take over,
thereby switching the roles between the Primary and Secondary. However, only one type of setup makes "automatic" failover possible.

In Database "Mirroring", there are actually three possible setups:

"High Availability Mode", with a synchroneous commit over Primary and Secondary, and a third Witness Server.
"High Performance Mode", with a asynchroneous commit over Primary and Secondary, no Witness Server.
"High Protection Mode", with a synchroneous commit over Primary and Secondary, no Witness Server.

Only when using the "High Availability" mode, with an additional "Witness" machine (for quorum),
can an automatic failover be implemented.

When using the "High Performance" setup or "High Protection" setup, only a manual failover is possible.
The "high performance" in "High Performance Mode" actually refers to an application using the Primary, in that it does not
have to wait for a commit on both the Primary and Secondary database. Only a commit on the Primary is already considered a
complete transaction.

Thus, in all three cases, a form of HA can be implemented, although the "High Availability" setup achieves the best level.
Of course, Database Mirroring is of far lesser power compared to SQL 2012 "AlwaysOn".

Here are some systemview queries to view the state and health of database mirroring:

select substring(state_desc,1,20) as "STATE_MIRR_USER", substring(login_state_desc,1,30) as "MIRR_STATUS",
connect_time,login_time,substring(authentication_method,1,20) as "AUTH_METHOD",
substring(principal_name,1,30) as "PRINCIPLE_NAME" ,
substring(remote_user_name,1,30) as "REMOTE_USER",last_activity_time,
receives_posted,sends_posted,total_bytes_sent,total_bytes_received
from sys.dm_db_mirroring_connections

select substring(name,1,25) as "NAME", endpoint_id, substring(protocol_desc,1,20) as "PROTOCOL",
substring(type_desc,1,20) as "TYPE", state_desc
from sys.database_mirroring_endpoints

select substring(db_name(database_id),1,30) as "DBname", substring(mirroring_state_desc,1,30) as "STATE",
substring(mirroring_role_desc,1,20) as "ROLE", substring(mirroring_partner_name,1,35) AS "PARTNER_HOST",
substring(mirroring_partner_instance ,1,35) AS "PARTNER_INSTANCE" from sys.database_mirroring
where mirroring_state_desc is not NULL

Here is the full set of systemviews of SQL 2012:

select * from sys.database_mirroring
select * from sys.dm_db_mirroring_auto_page_repair
select * from sys.dm_db_mirroring_past_actions
select * from sys.database_mirroring_endpoints
select * from sys.database_mirroring_witnesses
select * from sys.dm_db_mirroring_connections

On SQL 2005, you should at least have:

sys.database_mirroring
sys.database_mirroring_endpoints
sys.database_mirroring_witnesses
sys.dm_db_mirroring_connections

Failover commands:

=> Scenario 1: High Availability Mode. (Safety FULL with a witness).
In normal operation (witness is online), you do NOT need to issue a failover command.
This scenario provides the high safety with automatic failover. In the event of the failure of the principal,
the mirror forms a quorum with the witness. Automatic failover will take place, thereby minimizing the database downtime.
For example, prior to the failure, Server_A and Server_B acted as principal and mirror, respectively.
Server_A fails. Following the automatic failover, Server_B takes on the role of the principal. However, since there is no mirror
after the failover, the mirroring state is DISCONNECTED and the principal is "alone" without HA.
Once the database on Server_A becomes operational, it automatically assumes the role of the mirror.

Only in case the witness was offline, you need to issue the command:

ALTER DATABASE [database_name] SET PARTNER FAILOVER

=> Scenario 2: High Protection Mode. (Safety FULL without a witness).
This scenario provides high safety, but automatic failover is not allowed. In the event of failure of the principal,
the database service becomes unavailable. You need manual intervention to make the database service available.
You must break the mirroring session and then recover the mirror database.
For example, prior to the failure, Server_A and Server_B acted as principal and mirror respectively. Server_A fails.
You need to execute the following on Server_B to make the database service available:

ALTER DATABASE [database name] SET PARTNER OFF
RESTORE DATABASE [database name] WITH RECOVERY

Once Server_A becomes available, you need to re-establish the mirroring session.

=> Scenario 3: High Performance Mode. (SAFETY OFF).
In the event of failure of the principal, the database service becomes unavailable.
You can perform a force service to make the database service available on the mirror.
However, since the safety level is OFF, it is possible that there were transactions that didn’t make it to the mirror
at the time of the failure of the principal. These transactions will be lost. Therefore, manual failover with safety OFF
involves acknowledging the possibility of data loss.
For example, prior to the failure, Server_A and Server_B acted as principal and mirror respectively. Server_A fails.
You need to execute the following on Server_B to make the database service available:

ALTER DATABASE [database_name] SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS



1.38 SOME SYSTEMVIEW QUERIES ON A SQL SERVER 2012/2014 "ALWAYSON" CLUSTER:


The following is about querying the metadata, or systemviews, of a SQL 2012/2014 AlwaysOn Cluster.

For example, take a look at:

select * from sys.dm_hadr_availability_group_states
select * from sys.dm_hadr_availability_replica_states
select * from sys.dm_hadr_database_replica_states
select * from sys.dm_hadr_database_replica_cluster_states
select * from sys.dm_hadr_availability_replica_cluster_nodes
select * from sys.dm_hadr_instance_node_map
select * from sys.dm_hadr_name_id_map
select * from sys.dm_tcp_listener_states
select * from sys.availability_groups
select * from sys.availability_groups_cluster
select * from sys.availability_databases_cluster
select * from sys.dm_hadr_instance_node_map
select * from sys.availability_replicas
select * from sys.availability_group_listener_ip_addresses
select * from sys.availability_read_only_routing_lists
select * from sys.availability_group_listeners

Of course, quite a few of them can also be used for monitoring, for example to view the current state of the AG's, the synchronization states,
and other "health" information.
Just try the queries, then use the queries with selections of some columns of interest. Then you might try some joins, and see
what can be useful in your situation.

Here are a few of my own queries and joins:


(1). Which databases reside under which AG:

select substring(g.name,1,20) as "AG_Name", substring(d.database_name,1,20) as "Database"
from sys.availability_groups_cluster g, sys.availability_databases_cluster d
where g.group_id=d.group_id

(2). Which databases reside under which AG under which Instance:

select substring(g.name,1,20) as "AG_Name", substring(d.database_name,1,20) as "Database", n.instance_name
from sys.availability_groups_cluster g, sys.availability_databases_cluster d, sys.dm_hadr_instance_node_map n
where g.group_id=d.group_id AND g.resource_id=n.ag_resource_id
order by n.instance_name

(3). Checking the Health of the Replica's on an Instance:

select replica_id, substring(db_name(database_id),1,30) as "DB", is_local, synchronization_state_desc, synchronization_health_desc,
log_send_rate, log_send_queue_size
from sys.dm_hadr_database_replica_states

(4). Showing Availability Groups, and their listeners and VIP's:

select substring(g.name,1,20) as "AG name" , l.listener_id,
substring(l.dns_name,1,30) as "Listener name", l.port,
substring(l.ip_configuration_string_from_cluster,1,35) AS "VIP"
from sys.availability_groups g, sys.availability_group_listeners l
where g.group_id=l.group_id

(5). Checking if the Secondary Replica's on INST3 are "good" to perform a Fail Over. See also query (3).

select * from sys.dm_hadr_database_replica_cluster_states
where replica_id=(select replica_id from sys.availability_replicas
where replica_server_name ='INST3')

(6). Finding the Primary AG's on an Instance:

IF SERVERPROPERTY ('IsHadrEnabled') = 1
BEGIN
SELECT
AGC.name -- Availability Group
, RCS.replica_server_name
, ARS.role_desc
, AGL.dns_name
FROM
sys.availability_groups_cluster AS AGC
INNER JOIN sys.dm_hadr_availability_replica_cluster_states AS RCS
ON
RCS.group_id = AGC.group_id
INNER JOIN sys.dm_hadr_availability_replica_states AS ARS
ON
ARS.replica_id = RCS.replica_id
INNER JOIN sys.availability_group_listeners AS AGL
ON
AGL.group_id = ARS.group_id
WHERE
ARS.role_desc = 'PRIMARY'
END

-- Other queries:

select replica_id, database_name,is_failover_ready
from sys.dm_hadr_database_replica_cluster_states

select substring(group_name,1,20) as "AG_Name",
substring(replica_server_name,1,35) as "Instance_Name",
substring(node_name,1,30) as "Node_Name"
from sys.dm_hadr_availability_replica_cluster_nodes

select substring(dns_name,1,30) as "DNS_LIstener_Name", port,
ip_configuration_string_from_cluster from sys.availability_group_listeners

select replica_id, substring(replica_server_name, 1,30) as "REPLICA_Server_Name",
substring(endpoint_url,1,30) as "Endpoint", availability_mode_desc
from sys.availability_replicas

If you want more information on Availability Groups and AlwaysOn,
you might take a look at this note.



1.39 ALWAYSON CLUSTER AND TSQL:


Here you will find some main TSQL statement examples, further illustrating managing AlwaysOn.

In the following examples, the Availability Group called "SALES_AG", will serve as our main example Availability Group.

FAILOVERS:

=> Performing a manual "Planned" failover of an Availability Group:

In this case, there is no error condition. You only want to perform a failover, because of,
for example, performance reasons, or planned maintenance of the current Primary Instance (or Node).

Connect to an Instance that has a Secondary Replica of the availability group that you want to Fail over:

ALTER AVAILABILITY GROUP SALES_AG FAILOVER;


=> Performing a "Forced" failover of an Availability Group:

There is a large difference between a "planned" failover, and the situation where you need to perform
a "forced" failover. Clearly, in the latter case, some sort of serious error condition has occurred.

For an important Production environment, you should be very careful using a forced failover to a Secondary.
Always check all the logs involved.
Always check all the logs involved.

In case you have decided to perform a forced failover:
Connect to an Instance that has a Secondary Replica of the availability group that needs to fail over
(or to achieve a "role switch"), and use the statement:

ALTER AVAILABILITY GROUP SALES_AG FORCE_FAILOVER_ALLOW_DATA_LOSS;

Some further comments on possible "data loss":

If you use the "synchronous commit" mode, you know that the state of the Primary and Secondary must be the same.
A transaction can only succeed if the Primary, and the Secondary, have both committed that transaction.
It's also a requirement for "automatic failover". If transactions cannot succeed on the Secondary, then the system
should stall.
Anyway, you should be confident that the databases are equal, on both sides.

However, the "asynchronous commit" mode allows for a small lag at the Secondary. The Primary and Secondary,
in normal conditions, are usually "very close" in their states, but possibly a small lag exists at the Secondary.
But the lag can be relevant to your business, and if so, then using the clause FORCE_FAILOVER_ALLOW_DATA_LOSS
to make a Secondary the Primary might be problematic.
If the Primary AG is still good and available, then you might consider re-creating the Secondary AG again.
If the Primary AG is "damaged", then you might consider FORCE_FAILOVER_ALLOW_DATA_LOSS anyway, to force a "role" switch,
and re-create a Secondary again.

But this scenario is actually a bit "theoretical". Although in principle it's all true, actual data loss is quite unlikely.

Also, it's very unlikely that you are so unlucky to have a "truly" unsolvable situation. But IT is IT, and Murphy's law holds.
Although AlwaysOn is really "HA", at all times you should have a schedule of full backups, in combination with differential backups
and/or transaction log backups.
Only that will guarantee you can "go back" to a recent consistent state.


A FEW EXAMPLES OF MANAGING AG's:

=> Adding a Database to an AG:

From the Instance that hosts the Primary Replica:

ALTER AVAILABILITY GROUP SALES_AG ADD DATABASE HR;

=> Removing a Database from an AG:

From the Instance that hosts the Primary Replica:

ALTER AVAILABILITY GROUP SALES_AG REMOVE DATABASE HR;

=> Creating an AG:

Suppose you have the databases "HR", "Sales", and "Inventory".
Next, suppose you want them to fail over as a "unit", so an Availability Group is the right answer to that.

Suppose the Primary Instance is SQL03N1\INST1,
and the Secondary Instance is SQL03N2\INST2.

Then the following might be a TSQL statement to create the Availability Group "SALES_AG".
However, more work needs to be done to get fully functioning Replica's, for example creating a listener, and joining
the Secondary Replica from the Secondary Instance.

Perform the following on the Primary Instance:

CREATE AVAILABILITY GROUP SALES_AG
FOR
DATABASE HR, Sales, Inventory
REPLICA ON
'SQL03N1\INST1' WITH
(
ENDPOINT_URL = 'TCP://sql03n1.antapex.org:5022',
AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
FAILOVER_MODE = MANUAL
),
'SQL03N2\INST2' WITH
(
ENDPOINT_URL = 'TCP://sql03n2.antapex.org:5022',
AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
FAILOVER_MODE = MANUAL
);

Perform the following on the Secondary Instance:

ALTER AVAILABILITY GROUP SALES_AG JOIN;

=> RESUMING DATA MOVEMENT (SYNCHRONIZATION):

Suppose on a secondary replica, the synchronization has stopped.
To resume the sync again, connect to that secondary Instance, and use:

ALTER DATABASE SALES SET HADR RESUME;
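
The counterpart, to (temporarily) suspend the data movement on a database, is:

ALTER DATABASE SALES SET HADR SUSPEND;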

=> REMOVING A NODE:

ALTER AVAILABILITY GROUP [AGNAME]
REMOVE REPLICA ON N'OldServer';

If needed, repeat for other AG's.

Then, using the Cluster Administrator, or this PowerShell command:

PS C:\> Remove-ClusterNode OldServer

=> REMOVING AN AVAILABILITY GROUP:

DROP AVAILABILITY GROUP group_name

Remove-SqlAvailabilityGroup `
-Path SQLSERVER:\Sql\Computer\Instance\AvailabilityGroups\MyAg


=> REMOVE A SECONDARY DATABASE FROM AN AG :

ALTER DATABASE SALES SET HADR OFF;

=> CHANGE SYNCHRONISATION STATUS OF AN AG :

Go to the primary node of an AG, and change the availability mode of a secondary replica:

ALTER AVAILABILITY GROUP [AG_SALES]
MODIFY REPLICA ON N'SQLN3' WITH (AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT)



1.40 REPLACE A USER ACCOUNT WITH ANOTHER ACCOUNT THROUGHOUT A SET OF DATABASES:


Suppose a fixed SQL user account, or Windows account, has a certain set of permissions throughout
a large set of databases. Suppose you want that account to be replaced by another account.
The change can be done manually, by deleting the first account, adding the new account,
and assigning the right permissions to that new account, for all databases in question. But that is a lot of work!
We can automate that using the script below. The script will generate the right set of statements
to replace the account throughout all databases you want it done for.

After the list of statements is generated, you can execute them all in one run. Or just a part of it
(for example only the statements for one database, to watch the effect).

If you would "exec" instead of "print" in the script below, you would bypass the step of generating the statements.
However, just having the statements, and pasting them in a new query window, gives you the freedom to read
them first, and then execute them all, or just a part of them, in order to evaluate the effects.

Below is the script. In this example, the new user must be member of the "db_datareader" and "db_datawriter" roles,
and must be granted the permission to execute all stored procedures.

It's easy to adjust the script, in order to make it suitable for your requirements.

-- REPLACE A USER BY ANOTHER USER IN A SET OF DATABASES.
-- THIS SCRIPT DOES NOT ACTUALLY DO THE REPLACE DIRECTLY, BUT IT GENERATES THE STATEMENTS THAT WILL DO THE JOB.

-- For example:
-- drop user harold from a set of databases
-- add user mary to that set of databases
-- add user mary to certain roles e.g. db_datareader, db_datawriter, grant execute etc..

DECLARE @usr1 VARCHAR(64) -- the user account that must be replaced
DECLARE @usr2 VARCHAR(64) -- the new user
DECLARE @role1 VARCHAR(64)
DECLARE @role2 VARCHAR(64) -- if needed add additional role variable(s) like @role3
DECLARE @dbname VARCHAR(64)

-- now change the real names accordingly

SELECT @usr1='MyCorp\ams-srv'
SELECT @usr2='MyCorp\cmsservices'
SELECT @role1='db_datareader'
SELECT @role2='db_datawriter' -- if needed assign further variables with actual roles, like @role3='exec_app'

DECLARE cur1 CURSOR FOR
SELECT name
FROM master.dbo.sysdatabases
where name like 'sales%' -- this determines the set of databases, namely all whose names start with "sales".

OPEN cur1
FETCH NEXT FROM cur1 INTO @dbname

WHILE (@@fetch_status<>-1)

BEGIN

  PRINT 'USE ' +@dbname
  PRINT 'GO '
  PRINT ' '
  PRINT 'DROP USER '+'"'+@usr1+'"'
  PRINT 'GO'
  PRINT ' '
  PRINT 'EXEC sp_grantdbaccess '+''''+@usr2+''''+','+''''+@usr2+''''
  PRINT 'GO'
  PRINT 'EXEC sp_addrolemember '+''''+@role1+''''+','+''''+@usr2+''''
  PRINT 'GO'
  PRINT 'EXEC sp_addrolemember '+''''+@role2+''''+','+''''+@usr2+''''
  PRINT 'GO'
  PRINT ' '
  PRINT 'Grant EXECUTE to '+'"'+@usr2+'"'
  PRINT 'GO'
  PRINT ' '

FETCH NEXT FROM cur1 INTO @dbname

END

CLOSE cur1
DEALLOCATE cur1



1.41 CREATE A SQL SERVER AGENT PROXY ACCOUNT.


A SQL Server Agent proxy defines the security context for a SQL Agent job, or job step.
A proxy (just another account with the proper permissions) will provide the SQL Server Agent
with the proper permissions for a specific task.
It might be that the credentials that the SQL Agent runs under do not have the proper permissions
for the task (like accessing a share, or doing a cmd job), but by using the proxy account, it will.
The Agent sort of "impersonates" the proxy account.

The proxy account is linked to a Windows account (e.g. a Domain User), with sufficient rights.
But it has a separate "credential" within SQL Server, in order to use the account from the SQL environment.

Quite a few SQL Agent tasks involve remote machines, or accessing shares, or starting/stopping services.
You could give the SQL Agent service lots of privileges, but that's certainly the wrong way,
since the SQL Server services would then run with (too) many privileges, "always", which cannot be good.
So, we can solve it by creating a "credential", which is used only when needed.

That's why you create a "credential" in SQL Server first (can be found in the security folder in SSMS),
and link it with the Domain User account (which has exactly the right permissions to perform the job, or job-step).
Once that's done, then further on you only need to refer to the "credential name".

It's easy to create a credential from SQL Server Management Studio. That's shown in the figure below.

Fig 2. (Creating a credential in SSMS; the screenshot is not included in this text version.)


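
It can also be done in TSQL, with the "CREATE CREDENTIAL" statement. A small sketch (the credential name,
account name, and password are just made-up examples):

CREATE CREDENTIAL DBMaintenanceCred
WITH IDENTITY = N'Domain\svc-dbmaint',   -- the Windows account with the right permissions
SECRET = N'StrongPasswordHere'           -- its password
GO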

For the various types of tasks in the SQL Agent job (steps), a sort of category is in place, like
"cmd" tasks, "SSIS" tasks etc. These categories are called "subsystems".

Currently, (sql 2005/2008/2012), we have the following subsystems:

(1 TSQL)
2 Microsoft ActiveX Script
3 Operating System (CmdExec)
4 Replication Snapshot Agent
5 Replication Log Reader Agent
6 Replication Distribution Agent
7 Replication Merge Agent
8 Replication Queue Reader Agent
9 Analysis Services Query
10 Analysis Services Command
11 SSIS package execution
12 PowerShell Script


Now that we have a "credential" in place (linked to a suitable Domain account), let's create a "proxy".
It's very easy to do so using the graphical SQL Server Management Studio.
But let's use TSQL instead. Suppose our desired proxy name is "DBMaintenance", then the correct TSQL would be:

USE [msdb]
GO

EXEC msdb.dbo.sp_add_proxy @proxy_name=N'DBMaintenance', @credential_name=N'DBMaintenanceCred', @enabled=1 -- use the name of your credential here
GO

EXEC msdb.dbo.sp_grant_proxy_to_subsystem @proxy_name=N'DBMaintenance', @subsystem_id=3
GO

EXEC msdb.dbo.sp_grant_proxy_to_subsystem @proxy_name=N'DBMaintenance', @subsystem_id=11
GO

Summary:

1. Create (or select) the Domain Account (if necessary).
2. Create the Credential (and link the credential to the Domain account).
3. Run "sp_add_proxy" to create a new proxy, linked to the credential (and thus to the Domain account).
4. Run "sp_grant_proxy_to_subsystem" (1 or more times) to grant the proxy access to SQL subsystems.



1.42 SOME DBCC MEMORY statements.


The following note contains some simple examples of DBCC memory related statements.
If you like to see it, click here.

Further notes:

How to get an idea of available memory in SQL Server:

If you do this:

select name, type_desc
from sys.all_objects where name like '%memory%'

You will find all DM views which can give you information on memory usage of SQL Server.

If you want to view the memory usage (database buffers) per database:

SELECT DB_NAME(database_id) AS [Database Name], COUNT(*) * 8/1024.0 AS [Cached Size (MB)]
FROM sys.dm_os_buffer_descriptors
WHERE database_id > 4
AND database_id <32767
GROUP BY DB_NAME(database_id)
ORDER BY [Cached Size (MB)] DESC;

If you do this:

select name, type_desc
from sys.all_objects
where name like '%_os_%' and type_desc='VIEW'

You get a lot of views back, which you can query for general performance information, not only about SQL, but about
the OS and the Host as well.

At least, the following query is quite interesting for obtaining a general impression of the amount of available memory:

select total_physical_memory_kb, available_physical_memory_kb,
system_memory_state_desc
from sys.dm_os_sys_memory

In the last column, you may find values like:

"Available physical memory is high"
"Available physical memory is low"

and a few other possible values, which speak for themselves.



1.43 CREATE A SERVER-SIDE TRACE.


Using "SQL Server Profiler", you can define a trace in a "graphical" manner, and define
on which events you want to trace, in which file to save the results, and some other choices you can make.
Then you can start the trace, let it run for some time, and (possibly scheduled), stop the trace.

It can all also be setup, by solely using statements as well, not using "SSMS/SQL Server Profiler" at all.

So, you could create a .cmd batchfile, which calls sqlcmd, (or isql, or osql).
Then, sqlcmd then uses an .sql "inputfile" in which the trace is defined (the "events"), and ultimately gets started.

The main statements you can use are the following:

-- define a trace event and use it in the trace (@on):
exec sp_trace_setevent @TraceID, 17, 6, @on

-- start the trace with traceId=@traceid (@status=1):
exec sp_trace_setstatus @traceid = 2 , @status = 1

-- stop the trace with traceId=@traceid (@status=0):
exec sp_trace_setstatus @traceid = 2 , @status = 0

-- get properties of all running traces (like traceid's, where files are placed etc..):
SELECT * FROM ::fn_trace_getinfo(NULL)

So, essentially, you could do something like this:

Create a batchfile "start.cmd":

-- start.cmd:

osql -E -i "MyTrace.sql" -o "MyTrace.log"

-- Start of "MyTrace.sql":

declare @maxfilesize bigint -- note: sp_trace_create expects a bigint here
set @maxfilesize = 1024 -- max size of an individual trace file, here it's 1 GB.

declare @filecount int
set @filecount = 10 -- max number of trace files, here it's 10.

declare @tracefile nvarchar(256)
set @tracefile = N'z:\sqltrace\MyTrace.trc' -- set the names and location of tracefiles.

DECLARE @stoptime DATETIME = DATEADD(minute,90,GETDATE()) -- set the length of the trace, here it's 90 minutes.

declare @rc int
declare @TraceID int -- will receive the unique trace id.

-- create the trace (it is created in a stopped state):

exec @rc = sp_trace_create @TraceID output, 2, @tracefile, @maxfilesize, @stoptime, @filecount
if (@rc != 0) goto error

-- Set required events
declare @on bit
set @on = 1
exec sp_trace_setevent @TraceID, 14, 1, @on
exec sp_trace_setevent @TraceID, 14, 9, @on
exec sp_trace_setevent @TraceID, 14, 6, @on

..
(repeat that for all events you want to trace... typically 20, or 50 or more statements)
..

-- Set the Filters
declare @intfilter int
declare @bigintfilter bigint

exec sp_trace_setfilter @TraceID, 10, 0, 7, N'SQL Server Profiler - 23235ghf-0761-4g35-b2c5-33609784f91t'

exec sp_trace_setstatus @TraceID, 1 -- start the trace (status=1).

-- display the trace id for you to note; you can use that id to stop the trace later on.
select TraceID=@TraceID
goto finish

error:
select ErrorCode=@rc

finish:
go
-- end of "MyTrace.sql"
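
Once the trace has run, the .trc file(s) can be loaded into a table-like resultset using the built-in
"fn_trace_gettable" function, joined here with "sys.trace_events" to translate the event numbers to names:

SELECT TE.name AS EventName, T.TextData, T.SPID, T.StartTime
FROM ::fn_trace_gettable('z:\sqltrace\MyTrace.trc', default) T
LEFT JOIN sys.trace_events TE ON T.EventClass = TE.trace_event_id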

For more info on which events you can "set" using "sp_trace_setevent",
you might want to visit:

https://msdn.microsoft.com/en-us/library/ms186265.aspx
https://technet.microsoft.com/en-us/library/aa260314(v=sql.80).aspx



1.44 SHOW ACCESSED OBJECTS (like tables).


Above, we have already seen some queries which show you accessed objects and locks.

Maybe it's a good idea to show a few of them here in a separate section.

USE DatabaseName -- go to the database where you want to see locked and accessed objects.
GO

SELECT
L.request_session_id AS SPID,
ES.login_name AS LoginName,
DB_NAME(L.resource_database_id) AS DatabaseName,
O.Name AS LockedObjectName,
P.object_id AS LockedObjectId,
AT.transaction_begin_time,
L.resource_type AS LockedResource,
L.request_mode AS LockType,
ST.text AS SqlStatementText,
ES.host_name AS HostName,
CN.Local_net_address,
TST.is_user_transaction as IsUserTransaction,
AT.name as TransactionName,
CN.auth_scheme as AuthenticationMethod
FROM sys.dm_tran_locks L
JOIN sys.partitions P ON P.hobt_id = L.resource_associated_entity_id
JOIN sys.objects O ON O.object_id = P.object_id
JOIN sys.dm_exec_sessions ES ON ES.session_id = L.request_session_id
JOIN sys.dm_tran_session_transactions TST ON ES.session_id = TST.session_id
JOIN sys.dm_tran_active_transactions AT ON TST.transaction_id = AT.transaction_id
JOIN sys.dm_exec_connections CN ON CN.session_id = ES.session_id
CROSS APPLY sys.dm_exec_sql_text(CN.most_recent_sql_handle) AS ST
WHERE resource_database_id = db_id()
ORDER BY L.request_session_id

-- in the following, you still need to put the database name in the DB_ID() function:

SELECT [TableName] = OBJECT_NAME(object_id),
last_user_update, last_user_seek, last_user_scan, last_user_lookup
FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID('DatabaseName')
order by last_user_lookup desc

=> Some other handy stuff (I hope):

SELECT spid, cpu, physical_io, blocked, cmd, waittime,
substring(convert(varchar(20),last_batch), 1, 20) as "LASTBATCH",
substring(nt_username, 1, 15) AS "USERNAME",
substring(loginame, 1, 20) AS "LOGINNAME",
substring(hostname, 1, 15) AS "HOSTNAME",
DB_NAME(dbid) AS "DB",
substring(program_name, 1, 40) AS "PROGRAM"
FROM master.dbo.sysprocesses
-- WHERE cmd not like 'AWAIT%' -- show only active statements.
-- AND loginame <> 'sa'

Using "WHERE cmd not like 'AWAIT%'" shows active connections for that moment.
Using "AND loginame <> 'sa'" shows the non-sa connections (not the sys processes).
Also note the "blocked" column, which shows you which spids block each other.

=> Showing Blocked sessions:

SELECT a.session_id AS blocked_session_id, b.session_id AS blocking_session_id, c.text AS blocking_text,
e.wait_type AS blocking_resource,
e.wait_duration_ms, d.text AS blocked_text FROM sys.dm_exec_requests a INNER JOIN sys.dm_exec_requests b
ON a.blocking_session_id = b.session_id CROSS APPLY sys.dm_exec_sql_text(b.sql_handle) c CROSS APPLY sys.dm_exec_sql_text(a.sql_handle) d
INNER JOIN sys.dm_os_waiting_tasks e ON e.session_id = b.session_id

=> Showing some important locks (types) in some specific database:

SELECT distinct s.login_name, s.nt_user_name, r.request_session_id, r.request_type, r.request_mode FROM sys.dm_tran_locks r, sys.dm_exec_sessions s
WHERE s.session_id=r.request_session_id
and r.request_mode in ('IX','SIX','X','Sch-M','U','RangeX-X','IU','SIU')



1.45 BCP TABLE DATA IMPORTS OR EXPORTS TO OR FROM FLAT ASCII FILES.


There are multiple ways to export table data to a flat file (.txt, or .csv), or to another format.
Likewise, there are multiple ways to import data from a flat file (or Excel sheet, or whatever source)
to a table.

We can use:
  1. SSIS,
  2. the Import/Export Wizard (in SSMS, rightclick database -> Tasks -> Import Data/Export Data),
  3. the old fashioned "bcp" prompt utility.
Let's see some examples on bcp:

Example 1: import of data in flat file "contacts.txt", to the table CONTACTS:

C:\TEST> bcp SALES.dbo.contacts in contacts.txt -c -T -t,

The bcp utility can take a number of parameters.

SALES.dbo.contacts.........specifies which table in which database (here it is table contacts in database SALES)
in|out.....................specifies the "direction" that bcp works: in = from file to SQL Server table; out = from SQL Server table to file
-T.........................specifies that Windows Authentication is used (Trusted connection), or use:
-U username -P password....specifies that a SQL Server login is used. Then the credentials have to be supplied.
-c.........................specifies that a normal character-based file is used.
-t.........................per default, bcp assumes a tab delimited file, meaning that the fields are separated by a "tab" character.
...........................But suppose you have a "," comma delimited file, you can specify that with "-t,"
-F2........................does the flat file use a header row, or is the first row already normal data?
...........................If the file uses a header row, then with "-F2" you can tell bcp to start copying from the second row.
-S.........................Servername, or named instance like SERVER\Instance
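
Putting a few of these together: a hedged example of importing a comma delimited file with a header row,
into a named instance (server and file names are just examples):

C:\TEST> bcp SALES.dbo.contacts in contacts.csv -c -t, -F2 -T -S ServerB\INSTB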

Whenever you do an import to a table from a file, you might need to delete the existing rows from that table first.
Unless, of course, these are new rows, and you just want to append data.

Example 2: export of Table data to a flat file:

D:\TEST> bcp SALES.dbo.customers out customers.txt -c -T -t;

If you would connect not to local Instance, but for example to a named instance INSTB on ServerB:

D:\TEST> bcp SALES.dbo.customers out customers.txt -S ServerB\INSTB -c -T -t;

Example 3: export of all tables to flat files:

DECLARE @tablename varchar(100)
DECLARE @sqlstr1 varchar(100)
DECLARE @sqlstr2 varchar(100)

DECLARE cur1 CURSOR FOR
SELECT table_name FROM information_schema.tables

OPEN cur1

FETCH NEXT FROM cur1 INTO @tablename

set @sqlstr1='bcp mydatabase.dbo.['
set @sqlstr2='.txt -S SQLCLUS10\SQLCLUS10A -c -T -t;'

WHILE (@@fetch_status<>-1)

BEGIN
PRINT @sqlstr1+@tablename+'] out '+@tablename+@sqlstr2
FETCH NEXT FROM cur1 INTO @tablename
END

CLOSE cur1
DEALLOCATE cur1

This script will generate the statements, which you then can copy in a DOS bat or cmd file.



1.46 SQL Server and SSL


Keep in mind that:

(1). We can use the "database encryption" feature, which encrypts table data for certain columns. This uses the service master key,
and database master key, from which you create a certificate and a symmetric key, which you can use to encrypt table data.

(2). You can also use SSL (or IPSec) on the TCP/IP stack of the Operating System. Then, configure SQL Server using the "SQL Server Configuration Manager",
and force SQL Server to use SSL. Here, we will only consider SSL and not IPSec.

For more information about (1), you can view section 13 of this note.

It's important to know that with (1), (selected) table data is always encrypted, even if you would browse it locally.
So, it's independent of any sort of network connection.
Using (2), traffic over the network is encrypted.

In this section, we will briefly spend a few words on option 2.

In effect, the Net-Library on the client and the server works with SSL to encrypt data.
You need to configure the SQL Server engine this way:

-Install a certificate in the OS of the server computer.

-On the Server, start "SQL Server Configuration Manager".

-Right-click the protocol for the server you want, and then click Properties.


-On the Certificate tab, configure the Database Engine to use the certificate you had installed earlier.

-On the Flags tab, view or specify the protocol encryption option. The login packet will always be encrypted.

When the ForceEncryption option for the Database Engine is set to Yes,
all client/server communication is encrypted and clients that cannot support encryption are denied access.

When the ForceEncryption option for the Database Engine is set to No, encryption can be requested
by the client application but is not required.

If you applied changes, then you need to restart the SQL server service.


On clients, or App servers, a connect string might then contain, as part of the string:

... Encrypt=True;TrustServerCertificate=True ...

Other client configuration items are possible too, depending on the client.

Also, it turns out that when Encrypt=True and TrustServerCertificate=False, the driver will attempt
to validate the SQL Server SSL certificate. By setting the property TrustServerCertificate=True,
the driver will not validate the SQL Server SSL certificate.
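
For example, a complete ADO.NET-style connect string might look like this (server and database names are just examples):

Server=ServerB\INSTB;Database=SALES;Integrated Security=SSPI;Encrypt=True;TrustServerCertificate=True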



1.47 OPENQUERY() AND OPENROWSET()


OPENQUERY():

1. Oracle example:

The OPENQUERY() function is used to retrieve data via a "linked server".

If we take a look at just a normal SELECT statement, it simply looks like this: SELECT * FROM TABLE_NAME

The OPENQUERY function is used like this:

SELECT * FROM OPENQUERY(LinkedServerName, 'Query')

It executes the specified pass-through query on the specified linked server. This function works in all cases where
you can create a linked server, like Oracle, DB2, Access, Excel etc..
Suppose we have created, within SQL Server, a Linked Server object called "OraConn", which connects to an
Oracle Instance (with database 'TESTDB').

Then the OPENQUERY function could be used like:

SELECT *
FROM OPENQUERY (OraConn, 'SELECT * FROM JOHN.INVENTORY')

Where the Query 'SELECT * FROM JOHN.INVENTORY' operates in the Oracle database TESTDB, on John's table called INVENTORY.

2. Excel example:

As another example, let's try this with an Excel sheet:

Suppose we have the Excel sheet "D:\TEST_IMPORTS\sales.xls". In this sheet, we must have defined a "Range".
In this example, the Range spans the area A1:B5, and we have named it "SalesOfEmployees".

Suppose we still have no Linked Server to Excel. Now, let's try to make the linked server:

EXEC sp_addlinkedserver EXCEL, 'Jet 4.0', 'Microsoft.Jet.OLEDB.4.0', 'D:\TEST_IMPORTS\sales.xls', NULL,'Excel 5.0;'

Here, the name "EXCEL" is just the name we gave to our new Linked Server. It could have been anything (as long as it
complies to naming rules of SQL Server).
Once the Linked Server is there, we can create the following query in the Query Window:

SELECT * FROM OPENQUERY(Excel, 'SELECT * FROM SalesOfEmployees')

OPENROWSET():

This is a very handy function too. It can be used on many occasions.
Often it is used when you do not have a Linked Server, although you could have created one.
You just want to use a function that retrieves data, and which also takes all linked server "like"
parameters inside the function itself.

Let's try to access John's INVENTORY table in the Oracle database "TESTDB", like we did above.
However, this time the OPENROWSET function also needs all connection parameters.

Example: getting data from an Oracle database.

SELECT * FROM OPENROWSET('MSDAORA', 'testdb';'john';'secret','SELECT * FROM JOHN.INVENTORY')

Example: Inserting a BLOB into a table.

There are many functions in SQL Server, for import/export of data. The OPENROWSET() function can also
be used to load a blob into a table. So let's try that. Suppose in C:\TEMP, we have the photo "mary.jpg".

TSQL statement for adding a BLOB using OPENROWSET():

UPDATE EMPLOYEE
SET EMPPHOTO =
(SELECT * FROM OPENROWSET (BULK 'C:\TEMP\mary.jpg', SINGLE_BLOB) a)
WHERE EMPID=3

SELECT * FROM EMPLOYEE

EMPID...EMPNAME...EMPSALARY..EMPPHOTO
1.......Harry.....2000.50....NULL
2.......Nadia.....3000.00....NULL
3.......Mary......5000.00....0xFFD8FFE000104A4649...



1.48 SOME QUERIES TO SEE THE STATE OF THE TRANSACTION LOG


In the sections above, we have occasionally already seen some queries which might help
in viewing the state of the Transaction Log.
For example, sections 1.4 and 1.44 give some info on processes, locks, and resource usage.

Furthermore, the following trivial remarks are very important:

(1):

The so-called "Recovery Model" of a database has huge consequences for how SQL Server handles the Transaction Log.
It's very important to have a good idea on that subject, so if needed, make sure you understand the differences
between the "Full", "Bulk-logged" and "Simple" models.

(2):

If you need to do repairs on the transaction log (e.g. it's corrupt, or missing), you might want to see the following
note (e.g. in section 9): Small note to Keep SQL Server Up and Running.

Below you will only find some statements that return some information on your transaction log.

1. Overall listing, and names of database files, and free/used space:

USE YOUR_DATABASE_NAME
GO

SELECT name AS [File Name] , physical_name AS [Physical Name], size/128 AS [Total Size in MB],
size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS [Available Space In MB]
FROM sys.database_files;

You can repeat the query, of course, during a batch, or other actions that take place in your database,
to see, for example, how fast files fill up.

You can get similar information, about the size and %usage of the logfiles, by using:

DBCC SQLPERF(logspace)

2. Find an open transaction that might be long-running:

DBCC OPENTRAN()

3. View the size and status of the Virtual Logfiles (VLF's): DBCC LOGINFO()

USE YOUR_DATABASE_NAME
GO

DBCC LOGINFO()

(or pass the database name as a parameter, for example DBCC LOGINFO('SALES');)

The output shows several fields of information. Most useful is, I think, the number of VLFs, and their status.
If you see a very large number of VLFs, then that can be optimized by trying to "rebuild/recreate" the Tlog
at some convenient moment.
If you see a very limited number of VLFs, then that can be optimized too.
It's not so easy to say which number fits your databases best. You need more research for that.
The increments by which the log file is allowed to grow also affect the number of VLFs.

A status of '0' means that the VLF can be reused again. A status of '2' means it's still in use.
If you see a status of '2' for most VLFs for an extended period of time, your Tlog (and application code)
can probably be optimized for better results.
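
Rebuilding the log usually comes down to shrinking the log file, and re-growing it in one large step.
A minimal sketch, assuming a database SALES with a log file named SALES_log (run it at a quiet moment, after a log backup):

USE SALES
GO
DBCC SHRINKFILE (SALES_log, 1) -- shrink the log file as far as possible.
GO
ALTER DATABASE SALES
MODIFY FILE (NAME = SALES_log, SIZE = 4096MB) -- regrow in one step, which creates a limited number of VLFs.
GO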

4. The fn_dblog() function:

In some respect, this function replaces former DBCC commands. However, some say that its "status"
is "undocumented", which I doubt a bit. Nevertheless, use it on test systems first.
It's a very powerful command: it allows you to view the transaction log records in the active part
of the transaction log file for the current database, or even the entire log.

The following page from Paul Randal is particularly "cool":

Using fn_dblog, fn_dump_dblog, and restoring with STOPBEFOREMARK to an LSN

The page above gives you exceptional info on fn_dblog() and related functions, even for viewing transaction log backups!


Here is the simplest example of "fn_dblog()" usage:


USE YOUR_DATABASE_NAME
GO

Select * FROM sys.fn_dblog(NULL,NULL)
GO

The output is certainly extensive and quite hard to interpret. You must take note of which fields you like to see,
and certainly also use the "WHERE" clause to filter records of interest.
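
For example, a minimal sketch that only looks at insert operations, and just a few of the columns:

USE YOUR_DATABASE_NAME
GO

SELECT [Current LSN], Operation, Context, [Transaction ID], AllocUnitName
FROM sys.fn_dblog(NULL, NULL)
WHERE Operation = 'LOP_INSERT_ROWS'
GO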

sys.fn_dblog is the modernised successor of the DBCC LOG and DBCC LOGINFO commands.

5. The DBCC LOG(database_name, output_level) command:

The DBCC LOG(database_name, output_level) command shows the transaction log for the database specified between the brackets.
The output level can be 0, 1, 2, 3 or 4.
The output strongly resembles that of sys.fn_dblog(). Nowadays it's recommended to focus on the latter function,
and study how to use it properly.

Example:

DBCC LOG('sales', 3)

6. The "Profiler" or "trace" utility:

If there is one utility you should investigate, then it's the standard "Profiler" utility.
Using this tool, you can create traces which show a wealth of process information, resource usage,
and too much other stuff to mention.
Indeed, there are many events you can trace that are related to the Transaction Log.



1.49 THE COLUMNSTORE INDEX (2012/2014/2016)


The clustered and non-clustered indexes from section 1.34 are the "traditional" indexes used in SQL Server.

They always look "table-like" and thus the storage is row-oriented. Of course, differences exist between
clustered and non-clustered indexes, as we know from section 1.34.

The COLUMNSTORE INDEX is new. It is completely different from the traditional indexes, since the storage is column-oriented.
Rows can be very inhomogeneous, while columns are very homogeneous (same datatype, usually the same length).
This means that storage efficiency and compression can be very high, compared with row-oriented structures such as tables and "traditional" indexes.

While with tables and traditional indexes, pages are filled up with rows, with a COLUMNSTORE INDEX the pages are
filled with the values of one column only, until all columns (defined in the index) are done.

Especially with DWH and other large tables, this type of index might result in a performance gain.
However, depending on the SQL version, the "update/modification" options vary largely.

Overall, Columnstore indexes work well for read-only queries on large data sets.
Columnstore indexes give high performance gains for queries that use full table scans, but are not well-suited
for queries that seek through the data, searching for a particular value (as often is the case in transactional databases).

Simple example (2012/2014/2016):

CREATE NONCLUSTERED COLUMNSTORE INDEX csi_FactTable
ON FactTable
(CustName, Orderkey, OrderDate)
WITH (DROP_EXISTING = ON,
MAXDOP = 2)
ON "default"
GO

So, as it is now, we have B-tree based indexes (traditional clustered and non-clustered indexes),
and "columnar" indexes.


Simple example (2014/2016):

A clustered COLUMNSTORE index can only be created as of SQL 2014.

CREATE CLUSTERED COLUMNSTORE INDEX csi_MyTable
ON MyTable WITH (DROP_EXISTING = OFF)
Go

You must first create the rowstore table as a normal heap or traditional clustered index,
and then use the CREATE CLUSTERED COLUMNSTORE INDEX (Transact-SQL) statement to convert
the WHOLE table to a clustered columnstore index.
If you want the clustered columnstore index to have the same name as the clustered index, use the DROP_EXISTING option.

The following limitations exist:

-SQL 2012: Only a non-clustered COLUMNSTORE INDEX can be defined.
The index is not updatable. In effect, it is read-only.

-SQL 2014: Here, updateable clustered COLUMNSTORE indexes were introduced.
Nonclustered columnstore indexes are still read-only.

-SQL 2016: Architecture change. The limitations above, are lifted.

However, some datatypes are still not supported, so in some cases, some table columns must be excluded
from the CREATE INDEX statement. See BOL or MSDN.




Section 2. Some Further Exercises:

2.1. Exercise: CODE TO GET THE BASIC CREATE TABLE STATEMENTS:


Surely, much better scripts exist than what you will find below. That's not the point.
The script below will ONLY produce the basic "CREATE TABLE" statements, without the
PK and FK constraints, check constraints etc... So, it is only meant as an exercise.

It's just straightforward programming with variable declarations, loops etc..

It might be instructive to walk through the code. It contains many elements of what was discussed above.

DECLARE @TABNAME VARCHAR(128)
DECLARE @COLNAME VARCHAR(128)
DECLARE @COLTYPE VARCHAR(128)
DECLARE @COLLENGTH INT
DECLARE @COLSTART INT
DECLARE @COLEND INT
DECLARE @COLCOUNT INT
DECLARE @IS_NULLABLE VARCHAR(3)
DECLARE @PRINTNULL VARCHAR(10)
DECLARE @PRECISION INT
DECLARE @SCALE INT
DECLARE @COLSTRING VARCHAR(128)

SET NOCOUNT ON

DECLARE @TABMETA TABLE
(
I INT IDENTITY(1,1),
COLUMN_NAME VARCHAR(128),
DATA_TYPE VARCHAR(128),
CHARACTER_MAXIMUM_LENGTH VARCHAR(128),
IS_NULLABLE VARCHAR(3),
PRECISION INT,
SCALE INT
)

DECLARE cur1 CURSOR FOR
SELECT DISTINCT TABLE_NAME FROM INFORMATION_SCHEMA.COLUMNS --registers all columns of all tables

OPEN cur1
FETCH NEXT FROM cur1 INTO @TABNAME -- get the first tablename

WHILE (@@fetch_status<>-1)
BEGIN

DELETE FROM @TABMETA

-- get all column information of the table (in this loop), and put it into the tablevariable @TABMETA

INSERT INTO @TABMETA
(COLUMN_NAME,DATA_TYPE,CHARACTER_MAXIMUM_LENGTH,IS_NULLABLE,PRECISION,SCALE)
SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH,IS_NULLABLE, NUMERIC_PRECISION,NUMERIC_SCALE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME=@TABNAME ORDER BY ORDINAL_POSITION ASC

SELECT @COLCOUNT=(SELECT MIN(I) FROM @TABMETA)
SELECT @COLEND=(SELECT MAX(I) FROM @TABMETA)


PRINT 'CREATE TABLE '+@TABNAME+' ('
WHILE @COLCOUNT<=@COLEND
BEGIN
  SELECT @COLNAME=(SELECT LTRIM(RTRIM(COLUMN_NAME)) FROM @TABMETA WHERE I=@COLCOUNT)
  SELECT @COLTYPE=(SELECT LTRIM(RTRIM(DATA_TYPE)) FROM @TABMETA WHERE I=@COLCOUNT)
  SELECT @COLLENGTH =(SELECT CHARACTER_MAXIMUM_LENGTH FROM @TABMETA WHERE I=@COLCOUNT)
  SELECT @IS_NULLABLE=(SELECT IS_NULLABLE FROM @TABMETA WHERE I=@COLCOUNT)
  SELECT @PRECISION=(SELECT PRECISION FROM @TABMETA WHERE I=@COLCOUNT)
  SELECT @SCALE=(SELECT SCALE FROM @TABMETA WHERE I=@COLCOUNT)

  IF @IS_NULLABLE='NO'
    SELECT @PRINTNULL='NOT NULL'
  ELSE
    SELECT @PRINTNULL='NULL'

  IF @COLTYPE LIKE '%char%'
   SELECT @COLSTRING=@COLNAME+' '+@COLTYPE+'('+CAST(@COLLENGTH as VARCHAR)+')'+' '+@PRINTNULL
  ELSE
      IF @COLTYPE IN ('decimal','numeric')
      SELECT @COLSTRING=@COLNAME+' '+@COLTYPE+'('+CAST(@PRECISION as VARCHAR)+','+CAST(@SCALE as VARCHAR)+')'+' '+@PRINTNULL
      ELSE
      SELECT @COLSTRING=@COLNAME+' '+@COLTYPE+' '+@PRINTNULL

  IF @COLCOUNT<@COLEND
  SELECT @COLSTRING=@COLSTRING+','

  PRINT @COLSTRING

  SELECT @COLCOUNT=@COLCOUNT+1
END
PRINT ')'

-- get a newline between the create table statements
PRINT char(13)

FETCH NEXT FROM cur1 INTO @TABNAME
END

CLOSE cur1
DEALLOCATE cur1


2.2. Exercise: SIMPLE EXAMPLE OF AN AUTOMATED TSQL BACKUP SCRIPT:


Read the following example carefully.

DECLARE @dbname VARCHAR(100)
DECLARE @DATE DATETIME
DECLARE @BACKUP_DATE VARCHAR(128)
DECLARE @FILE VARCHAR(128)
DECLARE @FULLFILE VARCHAR(128)
DECLARE @EXTENSION VARCHAR(128)

SELECT @DATE=GETDATE()
SELECT @BACKUP_DATE=CONVERT(VARCHAR(10),@DATE,20)
SELECT @FILE='\\backupsrv\sqlbackup$\sqlclus8a\' -- Attn: place here the right location like F:, or \\server\share
SELECT @EXTENSION='.bak'

DECLARE cur1 CURSOR FOR
SELECT name
FROM master.dbo.sysdatabases where name not in ('tempdb')

OPEN cur1
FETCH NEXT FROM cur1 INTO @dbname

WHILE (@@fetch_status<>-1)
BEGIN
SELECT @FULLFILE=@FILE+@dbname+@BACKUP_DATE+@EXTENSION

-- If you only want to PRINT the backup statements, then uncomment the PRINT statement
-- If you want the script to execute the backup statements, then uncomment the EXEC statement

-- EXEC ('BACKUP DATABASE '+'['+@dbname+']'+' TO DISK='''+@FULLFILE+''' WITH INIT ')
-- PRINT 'BACKUP DATABASE '+'['+@dbname+']'+' TO DISK='''+@FULLFILE+''' WITH INIT '

FETCH NEXT FROM cur1 INTO @dbname
END

CLOSE cur1
DEALLOCATE cur1

-- End of Script

Important:
- If the database names use a hyphen or underscore, you had better enclose the database names in square brackets.
This was already taken care of in the script above.
- Using this script, the backup location will fill up with backup dumps, so some maintenance is still necessary.
- If another backup routine (say, routine 2) will also run, and if you only need additional backups that
do not break the chain of subsequent backups of routine 2 (full and transaction log backups),
then use the "WITH COPY_ONLY" clause as well.
So, in the code above, it then will be written as "WITH INIT, COPY_ONLY".
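
A generated statement would then look like this (the file name is just an example):

BACKUP DATABASE [SALES] TO DISK='\\backupsrv\sqlbackup$\sqlclus8a\SALES2014-06-22.bak' WITH INIT, COPY_ONLY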



Section 3. Old stuff:

3.1. OLDER FILE LISTING QUERIES AND COMMANDS:


Some time ago, when SQL 7 and SQL 2000 were the current versions, I created, as time went by, a txt file
listing DBA-like queries and all sorts of example statements.
It's pretty old, of course. Still, a lot is usable for 2005 and 2008. For example, the use of many 'string functions' has not changed,
so you might still find some useful stuff in here.

If you want to view it, use this link:

Listing of SQL Server 2000 query examples (txt file)



4. A few words on Performance analysis:

This is often one of the toughest things to do.

I can't believe how often I was wrong in my first conclusions, which later on turned out to be "rather far" from the truth.

First of all, very often (and I mean really often), ill-written SQL, or program code, is the main source of performance problems.
And true, something like a missing index (which used to be there) can spoil the party too.

Index reorganize- or rebuilds can be very useful too. It can be very important to perform such maintenance on a regular basis.

However, in my experience, it's quite often not the main cause of (rather sudden) large performance drops.
As we will see later on, the app code itself is often to blame, or too many SQL (stored procedure) re-compiles,
or too many threads at once (CXPACKET), or too many threads using up their timeslice before performing useful work (literals, SOS_SCHEDULER_YIELD),
or index(es) not being used, or wrong execution plans etc..


4.1. SOME VERY TRIVIAL STUFF YOU MIGHT WANT TO CHECK FIRST (or to keep in mind...):


The following is very trivial, and likely not to be the direct cause of your problem,
but it cannot hurt to check the following items:
  1. There are really some minimum requirements on RAM memory and number of cores (or processing capability).
    For example, unless you have a play-, test-, or development database, it's usually hard to take something
    as small as 4 GB of memory seriously.

  2. If there was a rather "sudden" drop in performance, then check if a new app release (or patch, or fix) was installed.
    Of course, such a drop in performance should ideally already have been observed in preceding Test or UAT upgrades.

  3. If there was a rather "sudden" drop in performance, and if using VMs, you might check if the database VM
    (or App Server VMs) was moved to another busy, or overcommitted, Host.

  4. If there was a rather "sudden" drop in performance, are you sure that an AV (antivirus) version change or upgrade
    has not removed the exclusions for scanning database files? (I have only seen such issues quite some time ago.)

  5. If using SANs as the underlying storage system, then the following item is probably not so relevant.
    However, you might use local SSDs or other sorts of local storage (e.g. if using AlwaysOn clustering).

    Optional: Check that the datafiles, and the transaction logs, use different drives or filesystems (different spindles).
    However, if using SANs, it's not always easy to truly separate them, because often they are all stored on
    some Volume(s) on some LUN(s)... somewhere.
    Thus notice that this does not have to be an issue at all. But you might want to investigate it anyway.

  6. Check the guidelines for the "scratch workspace" TempDB.
    There are some guidelines on the number of files, size etc.. Also, try to place it on a different drive/filesystem too.
    However, the same SAN considerations as above play a role here too.


Indeed, the items above are very trivial. Now let's go to more SQL Server related issues.
However, none of the points below represents an "absolute truth", and usually many different facets
of your system need to be investigated.


4.2. SOME SQL SERVER POINTERS TO INVESTIGATE PERFORMANCE:


As said before, if performance is low, the best bet is to tune/investigate the SQL code, stored procedures, triggers etc..,
and the execution plans, which are in use on your system.

However, there are some other ways to investigate the degradation of performance.
Here is a listing of a selection of some of those pointers.

1. Listing of changed database objects (meaning DDL, as of a certain date)

-- To find a listing of changed objects, go to the database and try the following query.
-- It might help to relate a sudden performance drop since a certain time, to changed objects.
-- However, it will not show you a complete listing.

select name, type_desc, create_date, modify_date
from sys.all_objects

select name, type_desc, create_date, modify_date
from sys.all_objects where modify_date > '2017-05-05' -- example date

2. Listing of Read/Write IO, and "stalls", on the database files.

-- Maybe your database consists of 10 files or so. Some databases may have just a few files,
-- while others may have many dozens of files.
-- It might be useful to find out if a certain file (or files) really spikes out in reads and/or writes.
-- If so, you might consider moving a table or index to another filegroup.
-- The following function may shed some light on this:

SELECT * FROM fn_virtualfilestats(9, null)

Here, I used an example DB_ID of "9". Every database on your instance has its own unique DB_ID.

To see all sorts of database properties (like the DB_ID), you might use "select * from sys.databases".
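
As a side note: on SQL 2005 and later, the DMV equivalent is sys.dm_io_virtual_file_stats().
A minimal sketch (the database name is just an example):

SELECT DB_NAME(database_id) AS [Database], file_id,
num_of_reads, num_of_writes,
io_stall_read_ms, io_stall_write_ms -- the "stalls": time spent waiting on IO.
FROM sys.dm_io_virtual_file_stats(DB_ID('SALES'), NULL)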

3. Listing of Tables and Indexes, ordered by the number of rows.

-- One of the best queries for index analysis is found below.
-- It shows you the tables, and all associated indexes defined on the tables, as well as the number of rows.

-- From the listing, you might for example identify very large "logging" tables, which, if you empty them,
-- might improve performance.
-- You might also identify large tables, having too many, or too little indexes defined.
-- You might also find out that large, frequently accessed tables have no clustered index (indid=1 means a clustered index; indid=0 means a heap).

SELECT
substring(sysobjects.name,1,50) AS TABLENAME,
substring(sysindexes.name,1,50) AS INDEXNAME,
sysobjects.id, sysindexes.indid, sysindexes.groupid,sysindexes.rows
FROM sysobjects, sysindexes
WHERE sysobjects.id=sysindexes.id
ORDER BY sysindexes.rows desc

4. Identifying the top "wait" events.

-- There are quite a few types of "wait" events (or causes) that might slow down processing.
-- The query below will not directly lead to a crisp conclusion about why performance is low.
-- But it will show you the main classifications of waits, and an indication of how often they occur.

select top 10 *
from sys.dm_os_wait_stats
where wait_type not in
(
'KSOURCE_WAKEUP', 'SLEEP_BPOOL_FLUSH', 'BROKER_TASK_STOP',
'XE_TIMER_EVENT', 'XE_DISPATCHER_WAIT', 'FT_IFTS_SCHEDULER_IDLE_WAIT',
'SQLTRACE_BUFFER_FLUSH', 'CLR_AUTO_EVENT', 'BROKER_EVENTHANDLER',
'LAZYWRITER_SLEEP', 'BAD_PAGE_PROCESS', 'BROKER_TRANSMITTER',
'CHECKPOINT_QUEUE', 'DBMIRROR_EVENTS_QUEUE', 'ONDEMAND_TASK_QUEUE',
'REQUEST_FOR_DEADLOCK_SEARCH', 'LOGMGR_QUEUE', 'SLEEP_TASK',
'CLR_MANUAL_EVENT', 'BROKER_RECEIVE_WAITFOR', 'PREEMPTIVE_OS_GETPROCADDRESS',
'PREEMPTIVE_OS_AUTHENTICATIONOPS', 'BROKER_TO_FLUSH'
)
order by wait_time_ms desc

Much better statements can be found at Paul Randal's blog, together with a very nice explanation of the most important wait types.
You can find that here.
I surely recommend reading that blog.

Some important wait types are not hard to understand:

-PageIOLatch_: is associated with retrieving pages from/to memory from/to disk.

-WriteLog: Is associated with writes to the Transaction Log.

-LCK_M_U and LCK_M_X: are associated with processes trying to acquire locks (before modifying data).

Some wait types are a little bit more difficult to place (at first sight).

-CXPACKET: Might be an indication that processes (or threads) are waiting for others to complete.

It's a typical wait type, seen with parallel queries.
It might be worth it to investigate altering the MAXDOP value at the Instance level.
In SSMS, if you rightclick the Instance name, choose "Properties" and next select "Advanced",
you will see the "Max Degree Of Parallelism". A value of "0" means that no limit is in effect.
But you could, for example, limit it to "4" if you have 6 cores, meaning that at most 4 threads may run in parallel
for a single query, and that might give rise to a better throughput.
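
The same setting can be changed with TSQL, for example to limit it to 4:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure 'max degree of parallelism', 4
RECONFIGURE
GO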

-SOS_SCHEDULER_YIELD: Might be an indication that processes (or threads) are running out of their timeslices, before doing useful work.

Often, this wait type is regarded as not being a problem. It's indeed often difficult to evaluate this wait type correctly.

When this wait_type is in the upper region of the listing, it might point to bad execution plans.
In some cases, I have seen that setting the Database property "Parameterization" from "Simple" to "Forced"
can make quite a difference.
Whether it indeed helps, just depends on the SQL that the app is firing towards the database.

It's related to the fact that SQL Server tries to create a sort of "uniform" plan for very similar SQL statements that differ
only in the "literal", like in "WHERE CUST_ID=022356".
It will then store the same re-usable plan for any statement of the form "WHERE CUST_ID=@i".
With "Simple" parameterization, only relatively simple statements are parameterized this way. With "Forced" parameterization,
SQL Server replaces the literals in (nearly) all statements, so that many more statements can share one plan, which *might* help performance.
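
Changing this database property with TSQL (the database name is just an example):

ALTER DATABASE SALES SET PARAMETERIZATION FORCED
GO
-- and to revert:
ALTER DATABASE SALES SET PARAMETERIZATION SIMPLE
GO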

5. Identifying re-compilations.

-- SQL code like stored procedures will be pre-compiled in such a way, that the execution plan is calculated
-- and stored with the code.
-- It's not good for performance if you notice excessive re-compilations.
-- One reason for the re-compilations, might be the use of DDL in the code, or using TEMP tables.
-- In such case, SQL might "think" that the environment has changed and a recompilation should be done.
-- I must not exaggerate here, but generally it seems best to try to avoid the use of TEMP tables in code.
-- How can you find excessive re-compilations?
-- It can be done using a "trace", or by using the OS utility "perfmon", or simply by using Queries.
-- A very simple note on "tracing" by using the "SQL Profiler", can be found in (7) below.

It may help to see if indeed many temp tables are used. Temporary tables start their name with "#",
and are stored in TempDB.

USE TEMPDB
GO

SELECT substring(name, 1, 40), type_desc, create_date FROM sys.all_objects
WHERE name like '#%' order by create_date desc

A listing of a few objects says nothing at all. However, if you see a rather large number from
a recent time, it might be worth looking further.
In such a case, I often script the whole database (including procedures) to an ascii file,
and next simply search that file (e.g. in notepad) for the string "CREATE TABLE #".

The following query might help in identifying recompiles, by finding large "plan_generation_num" counts:

select top 30
sql_text.text,
sql_handle,
plan_generation_num,
execution_count,
db_name(dbid) AS "Database", creation_time, last_execution_time
from
sys.dm_exec_query_stats a
cross apply sys.dm_exec_sql_text(sql_handle) as sql_text
where
plan_generation_num >1
order by plan_generation_num desc

6. Some DBCC commands related to memory.

Some DBCC commands may help solve performance problems.
There are always pros and cons in using them.
However, they have proved helpful in some situations that I have seen.

6.1. DBCC FREEPROCCACHE()

The statement can be used to clear the plan cache of the entire Instance, or to clear a specific plan (as of SQL 2008).
The "plan cache" consists of the pre-calculated access methods to execute queries and stored procedures.
If you clear the whole cache, SQL Server will calculate those access methods again whenever, for example, a stored procedure
is called again. This might impact performance a little bit, but generally speaking it should not hurt too much.

Why would you use it? In some cases, really badly programmed applications fill up your cache with ineffective plans.
In very rare cases, it seems better "to start all over again", while you cannot just restart SQL Server.

In other cases, a too large part of the cache might be used for "single use" plans, for code that was once used, but then
was hardly called anymore. There are algorithms in place to remove old entries, but you might need to speed things up.

Or, as another example, one stored procedure really behaves "crazy", and you want SQL Server to recalculate its plan.

Example 1: clear all plans from the Instance:

DBCC FREEPROCCACHE()

Example 2: clear a specific plan (as of SQL2008):

Here is a good example from Microsoft's Technet.

USE AdventureWorks2008R2;
GO
SELECT * FROM Person.Address;
GO
SELECT plan_handle, st.text
FROM sys.dm_exec_cached_plans
CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS st
WHERE text LIKE N'SELECT * FROM Person.Address%';
GO
result set:

plan_handle.........................................text
-------------------------------------------------- -----------------------------
0x060006001ECA270EC0215D05000000000000000000000000 SELECT * FROM Person.Address;

-- Remove that specific plan from the cache:

DBCC FREEPROCCACHE (0x060006001ECA270EC0215D05000000000000000000000000);

GO

Full syntax:

DBCC FREEPROCCACHE [ ( { plan_handle | sql_handle | pool_name } ) ] [ WITH NO_INFOMSGS ]

So, as you can see, 4 "regions" (so to speak) can be cleared: (1) the entire cache, (2) just one plan handle, (3) a sql handle
can be a parameter too (and it works similar to a plan handle), and (4) you can clear a certain "pool" using its "pool name",
and more on pools in section 2.

You can also clear the "plan cache" associated with a specific database, using:

DBCC FLUSHPROCINDB(db_id)

Where the "db_id" is the "database_id". You can simply see all database names, and their id's, using
"SELECT name, database_id FROM sys.databases".
Of course, you can use the function "db_id('databasename')" as well.

6.2. DBCC MEMORYSTATUS()

This statement delivers a wealth of detailed memory status information.
It's certainly not easy to interpret all listed items here, but if you take time to study the output,
some parts should be fairly recognizable.

The output starts with some general memory listings. Next, listings of "MEMORY_CLERKs" follow.
Then, all sorts of listings appear with regard to "pools" and "caches". Etc.. etc..
Actually, it is so much info that it's useless to highlight just some points from the output.
This is typically such a thing where it holds that "you must see it for yourself..."

Example:

DBCC MEMORYSTATUS()

6.3. DBCC FREESYSTEMCACHE()

You might think that this "looks" a lot like "DBCC FREEPROCCACHE()", but that one deals with cached "plans",
while here we have caches of objects related to a certain database, or the whole instance (not the buffer cache).
For example, we have a "connection pool", or an "object metadata cache", and lots of other system caches.
Also, memory is reserved for "result sets" (from queries and the like) as well.

You can clear the memory of system caches associated with a certain database, or even Instance wide.

You might run such statements when you find in the SQL Server log that you are extremely low on memory.

Examples:

DBCC FREESYSTEMCACHE ('ALL')
DBCC FREESYSTEMCACHE ('sales') -- sales database
DBCC FREESYSTEMCACHE ('tempdb')
DBCC FREESYSTEMCACHE ('SQL plans')

6.4. DBCC DROPCLEANBUFFERS()

The checkpoint process writes "dirty" pages (modified pages) to the database files.
In a way, after that, you may call them "clean".

I myself have not seen much result from the command, but others do report some benefits.

So, in case of severe memory pressure, you might take the following actions:

CHECKPOINT
GO
DBCC DROPCLEANBUFFERS()
GO

It's interesting to see how Microsoft formulates the use of this command (from technet):

Use DBCC DROPCLEANBUFFERS to test queries with a cold buffer cache without shutting down and restarting the server.
To drop clean buffers from the buffer pool, first use CHECKPOINT to produce a cold buffer cache.
This forces all dirty pages for the current database to be written to disk and cleans the buffers.
After you do this, you can issue DBCC DROPCLEANBUFFERS command to remove all buffers from the buffer pool.


7. Locks and Latches.

Locks operate on a logical level. They are related to database DML operations like INSERT, UPDATE, DELETE, but even to SELECTs too.
If needed, the RDBMS engine acquires Locks on row(s), page(s) and objects, as required by the DML operation.
They are used for synchronization, implementing orderly use of, for example, the rows of a table, so that modifications
on a certain row can only be done by one process at a time.

Latches are low-level synchronization objects, used by the SQL Server Storage Engine.
They are especially associated with memory pages. When multiple processes (or a multi-threaded app) want to manipulate pages,
a sort of "flag" must be used in order to let pages be manipulated by one process at a time.
Here we talk about more low-level operations, like the IO of a page from or to a storage system.

In operating system language, such flags might be called "handles", or "semaphores", or "critical sections", depending
on the sort of flag. Such flags are synchronization objects, used for orderly access to resources.
To make that concrete: multiple processes may read a shared data structure, like a page in memory,
but writing to that one shared data structure must be performed by one process (or a single thread) at a time.

In SQL Server they are called "latches".

Locks and latches have a somewhat different scope, but it is true that a lock, too, must have
an underlying (more low-level) "physical" implementation, albeit as a structure in memory describing
what it is meant for, for the interpreting application.

Locks:

We have already seen several queries showing blocked sessions (due to locking), or showing locks, and the types of locks.

For example:

SELECT distinct s.login_name, s.nt_user_name, r.request_session_id, r.request_type, r.request_mode FROM sys.dm_tran_locks r, sys.dm_exec_sessions s
WHERE s.session_id=r.request_session_id
and r.request_mode in ('IX','SIX','X','Sch-M','U','RangeX-X','IU','SIU')

Please see section 1.4 (1.4. SOME DICTIONARY QUERIES FOR VIEWING SESSIONS, LOCKS, BLOCKING SESSIONS, OBJECT ACCESS) for more information.

In this particular query, the type of lock, or request mode, in the WHERE clause, is restricted to be of the type
('IX','SIX','X','Sch-M','U','RangeX-X','IU','SIU')

Indeed, there are different types of Locks. Usually the duration of a lock is very short.

-Suppose a user does an update of some field in some row; then the lock should exist only for a fraction of a second.
-However, some silly batch might hold nasty locks for the whole duration of its run.

Some types of locks are very restrictive, like the "exclusive lock X", acquired when an INSERT, UPDATE, or DELETE
statement is performed.
Other locks are more harmless (so to speak), like the "S" lockmode, used to signify that a process reads some resources,
like the rows of a table.

A partial listing of classes of locks (thus generalizations) follows:

Shared (S): Used for read operations that do not change or update data, such as a SELECT statement.
Update (U): Used for Update operations.
Exclusive (X): Used for data-modification operations, such as INSERT, UPDATE, or DELETE.
Intent (I): Used to establish a lock hierarchy.
Schema (sch): Used when DDL operations take place (e.g. dropping a column etc..).
Bulk Update (BU): Used with Bulk Insert type of statements, or TABLOCK and similar clauses.
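
A minimal sketch to see some of these lock modes for yourself (the database and table names are just examples).
In one session, start a transaction and keep it open for a moment; from a second session, query sys.dm_tran_locks:

-- Session 1:
USE SALES
GO
BEGIN TRAN
UPDATE dbo.CUSTOMERS SET CUSTNAME='test' WHERE CUST_ID=1 -- takes an X lock on the row, and IX locks on page/table.

-- Session 2:
SELECT request_session_id, resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID('SALES')

-- Session 1 again:
ROLLBACK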

So, in some of the queries listed before, you might thus see the "X" locktype, or "IX" locktype, or "IU" locktype etc..

Among the large amount of webdocs on the Net, a rather nice and easy explanation of locks can be found here:

All about locking in SQL Server (sqlshack.com)

Just repeating a few queries here from section 1.4:

-- note the "blocked" sessions:

SELECT spid, cpu, physical_io, blocked, cmd, waittime,
substring(convert(varchar(20),last_batch), 1, 20) as "LASTBATCH",
substring(nt_username, 1, 15) AS "USERNAME",
substring(loginame, 1, 20) AS "LOGINNAME",
substring(hostname, 1, 15) AS "HOSTNAME",
DB_NAME(dbid) AS "DB",
substring(program_name, 1, 40) AS "PROGRAM"
FROM master.dbo.sysprocesses
WHERE cmd not like 'AWAIT%' -- show only active statements.
AND loginame <> 'sa'

-- quick overviews:

exec sp_lock

exec sp_who2

-- note the blocking- and blocked SPIDs, as well as the statements, in the query below:

SELECT a.session_id AS blocked_session_id, b.session_id AS blocking_session_id, c.text AS blocking_text,
e.wait_type AS blocking_resource,
e.wait_duration_ms, d.text AS blocked_text FROM sys.dm_exec_requests a INNER JOIN sys.dm_exec_requests b
ON a.blocking_session_id = b.session_id CROSS APPLY sys.dm_exec_sql_text(b.sql_handle) c CROSS APPLY sys.dm_exec_sql_text(a.sql_handle) d
INNER JOIN sys.dm_os_waiting_tasks e ON e.session_id = b.session_id

-- just one general statement:

SELECT
L.request_session_id AS SPID,
DB_NAME(L.resource_database_id) AS DatabaseName,
O.Name AS LockedObjectName,
P.object_id AS LockedObjectId,
L.resource_type AS LockedResource,
L.request_mode AS LockType,
ST.text AS SqlStatementText,
ES.login_name AS LoginName,
ES.host_name AS HostName,
TST.is_user_transaction as IsUserTransaction,
AT.name as TransactionName,
CN.auth_scheme as AuthenticationMethod
FROM sys.dm_tran_locks L
JOIN sys.partitions P ON P.hobt_id = L.resource_associated_entity_id
JOIN sys.objects O ON O.object_id = P.object_id
JOIN sys.dm_exec_sessions ES ON ES.session_id = L.request_session_id
JOIN sys.dm_tran_session_transactions TST ON ES.session_id = TST.session_id
JOIN sys.dm_tran_active_transactions AT ON TST.transaction_id = AT.transaction_id
JOIN sys.dm_exec_connections CN ON CN.session_id = ES.session_id
CROSS APPLY sys.dm_exec_sql_text(CN.most_recent_sql_handle) AS ST
WHERE resource_database_id = db_id()
ORDER BY L.request_session_id

-- just another general statement, selecting on certain types of locks:

SELECT distinct s.login_name, s.nt_user_name, r.request_session_id, r.request_type, r.request_mode FROM sys.dm_tran_locks r, sys.dm_exec_sessions s
WHERE s.session_id=r.request_session_id
and r.request_mode in ('IX','SIX','X','Sch-M','U','RangeX-X','IU','SIU')

-- just another general statement, but extremely simple:

SELECT * FROM sys.dm_exec_requests
WHERE DB_NAME(database_id) = 'TheDatabaseNameOfInterest'
AND blocking_session_id <> 0

Latches:
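
To get a first impression of latch waits on the Instance, you might query sys.dm_os_latch_stats.
(Buffer page latch waits, like PAGEIOLATCH_*, are also visible as wait types in sys.dm_os_wait_stats.)

SELECT TOP 10 latch_class, waiting_requests_count, wait_time_ms, max_wait_time_ms
FROM sys.dm_os_latch_stats
ORDER BY wait_time_ms DESC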






EOF