Caveman's Blog

My commitment to learning.

Posts Tagged ‘Sql Server’

SQL Server: Table Partitions as an archiving solution


Problem Statement

How can you design an archiving solution on a large table without deleting any data, while also improving the performance of CRUD operations on that same table? The assumption in this scenario is that the most recent data is accessed more often than the older data.


Archiving solutions can be of two types: 1) passive and 2) active. A passive solution is one where the historic data is archived to another table in another database, making that data unavailable to the application. An active solution is one where the historic data is archived yet remains available for access, without much impact on application performance. A large table typically contains millions of rows and probably has a size that runs into several gigabytes. The sheer size of such a table makes CRUD operations very expensive and indexes difficult to maintain.

Table Partitioning in SQL Server 2005 and up lets a table's data and indexes be stored in several smaller partitions. This feature is very useful in providing a way to easily maintain and perform database operations on that table. Each partition is stored in a filegroup, which can contain one or more files; data is distributed evenly between the files in a filegroup. A column of an allowed data type can be used as the partition key, which is the criterion for partitioning, and you define the boundaries of each partition by setting limits on the partition key. This division should be based on how you access the rows of the table. In addition to identifying the partition key, we will also want to include that key in the index and partition the index on the same boundaries. Partitioning the index this way is called index alignment: the index pages are stored alongside the data rows of the corresponding partition.
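The steps above can be sketched in T-SQL. This is a minimal illustration, not the post's actual schema: the function, scheme, table, and index names are all hypothetical, and for brevity every partition is mapped to the PRIMARY filegroup (in a real archiving setup you would map each year to its own filegroup on the appropriate drive).

```sql
-- Partition function: yearly boundaries on the partition key (OrderDate)
CREATE PARTITION FUNCTION pfOrderYear (datetime)
AS RANGE RIGHT FOR VALUES ('2011-01-01', '2012-01-01', '2013-01-01');

-- Partition scheme: maps each range to a filegroup
-- (ALL TO ([PRIMARY]) for brevity; use one filegroup per year in practice)
CREATE PARTITION SCHEME psOrderYear
AS PARTITION pfOrderYear ALL TO ([PRIMARY]);

-- The table is created ON the scheme, partitioned by OrderDate
CREATE TABLE dbo.Orders (
    OrderId   int      NOT NULL,
    OrderDate datetime NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderId, OrderDate)
) ON psOrderYear (OrderDate);

-- An aligned non-clustered index: includes the partition key and
-- is created on the same partition scheme
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
ON dbo.Orders (OrderDate)
ON psOrderYear (OrderDate);
```

Because the index is aligned, each partition can later be switched out or merged independently without rebuilding the whole index.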

Dividing a table into several files gives us the flexibility of storing those files on separate drives. We can store the files that contain data from the recent years on faster drives and the older data on slower drives. Going with the assumption in the problem statement that the most recent data is accessed more often than the older data, this improves performance on the table, because the frequently accessed rows get faster response times thanks to the faster drives.

Note: All data types that are valid for use as index columns can be used as a partitioning column, except timestamp. The ntext, text, image, xml, varchar(max), nvarchar(max), or varbinary(max) data types cannot be specified. [1]

1. Partitioned Tables and Indexes
2. Create Partitioned Tables and Indexes

Written by cavemansblog

July 2, 2013 at 9:32 pm

SQL Server – Clean Buffers


Use DBCC DROPCLEANBUFFERS to test queries with a cold buffer cache without shutting down and restarting the server. To drop clean buffers from the buffer pool, first use CHECKPOINT to produce a cold buffer cache. This forces all dirty pages for the current database to be written to disk and cleans the buffers. After you do this, you can issue the DBCC DROPCLEANBUFFERS command to remove all buffers from the buffer pool. [1]
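The two-step sequence described above looks like this (run it only on a test server, for the reasons noted below):

```sql
-- Step 1: flush all dirty pages for the current database to disk
CHECKPOINT;
GO
-- Step 2: remove all (now clean) buffers from the buffer pool,
-- giving subsequent queries a cold cache
DBCC DROPCLEANBUFFERS;
GO
```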


These commands should not be executed in a production environment, where SQL Server caching helps in gaining performance. Running them could adversely impact server performance.


Written by cavemansblog

December 10, 2012 at 11:34 pm



Sql Server: Temp table vs table variable


Temp table vs. table variable: one question that lingers in my mind is how to decide between using one over the other. Here are some points I have gathered that could aid you in picking one over the other:

1. Reseeding the identity of a table variable is not allowed.

2. Indexes cannot be explicitly created on a table variable.

3. Altering the schema of a table variable is not possible.

4. Table variables do not have to be dropped after use; they are cleaned up automatically when they go out of scope.

5. Transactions involving table variables last only for the duration of an update on the table variable. Therefore, table variables require less locking and logging resources. They are not affected by transaction rollbacks.

6. Table variables contribute to more efficient Query Processing.

7. CHECK constraints, DEFAULT values and computed columns in the table type declaration cannot call user-defined functions.

8. The table declaration includes column definitions, names, data types, and constraints. The only constraint types allowed are PRIMARY KEY, UNIQUE, NULL, and CHECK.

Table variables offer the best performance when the data size is small. The next time you have to make a choice, choose wisely between the two to derive optimal performance from your queries.
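The points above can be seen side by side in a short sketch; the table and index names here are made up for illustration:

```sql
-- Table variable: indexes only via PRIMARY KEY/UNIQUE constraints,
-- no ALTER, no reseed, cleaned up automatically at end of scope
DECLARE @Recent TABLE (
    Id        int PRIMARY KEY,
    CreatedOn datetime
);

-- Temp table: supports explicit indexes, schema changes, and reseeding
CREATE TABLE #Recent (
    Id        int IDENTITY(1,1),
    CreatedOn datetime
);
CREATE NONCLUSTERED INDEX IX_Recent_CreatedOn ON #Recent (CreatedOn);
DBCC CHECKIDENT ('#Recent', RESEED, 100);  -- not possible on a table variable
DROP TABLE #Recent;                        -- table variables need no explicit drop
```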


Written by cavemansblog

October 24, 2012 at 12:03 am

SQL Server: Case sensitive string comparison


Here is a useful tidbit. We can use one of the following two methods to perform a case-sensitive string/field comparison when a SQL Server database collation is not case sensitive.

declare @a as nvarchar(20) = 'Test'
declare @b as nvarchar(20) = 'TEST'

--Method 1: compare the binary representations
--(an explicit varbinary length is used; the default of 30 bytes
-- could silently truncate a longer nvarchar value)
if(convert(varbinary(40),@a) = convert(varbinary(40),@b))
   select 'identical'
else
   select 'non-identical'

--Method 2: force a case-sensitive collation
if(@a = @b COLLATE Latin1_General_CS_AS)
   select 'identical'
else
   select 'non-identical'


Written by cavemansblog

August 15, 2012 at 10:26 pm

SQL Server: Restore a database from a .mdf file.


In this blog post I will show you how to restore a database from a .mdf file alone. I am working with the AdventureWorks database in this demonstration. Download the .mdf file for the AdventureWorks database from CodePlex.

1. Open SQL Server Management Studio (SSMS).
2. Right-click the Databases folder and select Attach from the context menu.
3. Click Add and select the appropriate .mdf file. At this point SSMS will flag an error, because it cannot find the corresponding .ldf file.
4. Select the .ldf file entry, click Remove, and then click OK.
5. You have successfully restored a database from the .mdf file.
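The same attach can be done in T-SQL with FOR ATTACH_REBUILD_LOG, which rebuilds the missing log file; the file path below is an assumption and should be replaced with wherever you saved the .mdf:

```sql
-- Attach from the .mdf alone; SQL Server rebuilds a new .ldf
CREATE DATABASE AdventureWorks
ON (FILENAME = 'C:\Data\AdventureWorks_Data.mdf')
FOR ATTACH_REBUILD_LOG;
```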


Start/Stop SQL Server Replication Agent using TSQL


On a recent client engagement we ran into an issue with a nightly SQL Server job: the job was unable to generate data for a certain warehouse database. As it turns out, through a process of elimination I was able to figure out that some SQL distribution agents were interfering with the nightly job and causing the issue. I stopped the distribution agents, ran the job manually to successfully populate the data, and then restarted the agents. The stopping and starting was done using SQL Server Management Studio.

Now that I knew how to populate the data, I had to automate stopping and starting the distribution agents before and after the nightly SQL job. Here are the stored procedures that can be used to achieve this:

--STOP the Distribution Agent:
sp_MSstopdistribution_agent @publisher, @publisher_db, @publication, @subscriber, @subscriber_db

--START the Distribution Agent:
sp_MSstartdistribution_agent @publisher, @publisher_db, @publication, @subscriber, @subscriber_db
  • @publisher is the name of the Server that is publishing
  • @publisher_db is the database of the publisher server
  • @publication is the name of the publication (the published object)
  • @subscriber is the name of the subscriber server
  • @subscriber_db is the name of the subscriber database

The above commands have to be executed in the publisher server against the ‘distribution’ database. Here is an example:

exec distribution.dbo.sp_MSstartdistribution_agent @publisher  = 'PUBLISHERSERVER',
@publisher_db   = 'PUBLISHERDB',
@publication    = 'TABLE_EMPLOYEE',
@subscriber     = 'SUBSCRIBERSERVER',
@subscriber_db  = 'WAREHOUSEDB'
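The matching call to stop that same agent before the nightly job runs (using the same placeholder server, database, and publication names) would be:

```sql
exec distribution.dbo.sp_MSstopdistribution_agent @publisher = 'PUBLISHERSERVER',
@publisher_db  = 'PUBLISHERDB',
@publication   = 'TABLE_EMPLOYEE',
@subscriber    = 'SUBSCRIBERSERVER',
@subscriber_db = 'WAREHOUSEDB'
```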

Written by cavemansblog

March 12, 2012 at 11:21 pm

SQL Server: Index defragmentation


DBCC INDEXDEFRAG: Index defragmentation is the process that reduces the amount of index fragmentation. This process does not hold any table locks long-term while defragmenting an index, and hence does not block running queries or updates. This is unlike the index building or re-indexing process, where a table lock is enforced. The underlying table cannot be modified, truncated, or dropped while an online index operation is in progress. To make sure that the index operation can be rolled back, the transaction log cannot be truncated until the index operation has completed; however, the log can be backed up during the index operation. It is not recommended for very fragmented indexes. Here is an example from MSDN of what happens when an index is defragmented:

Figure: Index defragmentation in action [1].

DBCC DBREINDEX: Faster than dropping and re-creating the index, but while rebuilding a clustered index, an exclusive table lock is placed on the table, preventing any table access by users; while rebuilding a non-clustered index, a shared table lock is placed on the table, preventing all but SELECT operations.

ALTER INDEX REBUILD: Best performance, but it places an exclusive table lock on the table during a clustered index rebuild, preventing any table access by users, and a shared table lock during a non-clustered index rebuild, preventing all but SELECT operations.

Note: According to Microsoft best practices, index defragmentation is most effective when an index has at least 8 pages. DBCC INDEXDEFRAG is one of the deprecated commands; the equivalent contemporary command is ALTER INDEX REORGANIZE.
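For reference, the contemporary equivalents of both deprecated commands look like this; the index and table names here are hypothetical:

```sql
-- Modern equivalent of DBCC INDEXDEFRAG: online, lightweight reorganize
ALTER INDEX IX_Employee_LastName ON dbo.Employee REORGANIZE;

-- Modern equivalent of DBCC DBREINDEX: full rebuild
ALTER INDEX IX_Employee_LastName ON dbo.Employee REBUILD;
```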

Here is an award-winning solution for SQL Server index and statistics maintenance. You can download the IndexOptimize procedure and use it as a comprehensive solution for this purpose. The SQL Server Maintenance Solution website seems to me like a must-have for all DBAs. Following is the syntax for rebuilding or reorganizing fragmented indexes on all user databases:

EXECUTE dbo.IndexOptimize @Databases = 'USER_DATABASES',
 @FragmentationLow = NULL,
 @FragmentationLevel1 = 5,
 @FragmentationLevel2 = 30

1. Microsoft SQL Server 2000 Index Defragmentation Best Practices
2. SQL Server Maintenance Solution