Caveman's Blog

My commitment to learning.

SQL Server: Table Partitions as an archiving solution


Problem Statement

How can you design an archiving solution on a large table without deleting any data, while also improving the performance of CRUD operations on that table? In this scenario the presumption is that the most recent data is accessed more often than the older data.


Archiving solutions can be passive or active. A passive solution is one where the historic data is moved to another table in another database, making the data unavailable to the application. An active solution is one where the historic data is archived yet remains available for access without much impact on application performance. A large table typically contains millions of rows and probably has a size that runs into several gigabytes. The sheer size of the table makes it very expensive to perform CRUD operations and maintain indexes.

SQL Server provides a feature called “Table Partitioning”[1] that lets a table's data and indexes be stored in several smaller partitions, providing a way to easily maintain and perform operations on that table. Each partition can be stored in a different file, which in turn belongs to a filegroup. Dividing a table into several files gives us the flexibility of storing those files on separate drives. This division is based on how you access the rows of the table.

We can store the files that contain data from recent years on faster drives, while storing the older data on slower drives. Going with the presumption in the problem statement that the most recent data is accessed more often than the older data, performance on this table improves because the frequently accessed partitions enjoy the faster response times of the faster drives.
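As a minimal sketch of the setup described above (the filegroup, table, and column names here are illustrative assumptions, not from the original post), a table partitioned by year might be defined like this:

```sql
-- Partition function: rows are routed by year boundaries
-- (RANGE RIGHT means each boundary value starts a new partition).
CREATE PARTITION FUNCTION pfOrderYear (datetime)
AS RANGE RIGHT FOR VALUES ('2011-01-01', '2012-01-01');

-- Partition scheme: maps each partition to a filegroup. The older
-- filegroups can live on slower drives, the current one on fast storage.
CREATE PARTITION SCHEME psOrderYear
AS PARTITION pfOrderYear TO (fgArchiveOld, fgArchive2011, fgCurrent);

-- The table is created on the scheme instead of a single filegroup.
CREATE TABLE dbo.Orders
(
    OrderId   int IDENTITY(1,1) NOT NULL,
    OrderDate datetime NOT NULL,
    Amount    money NOT NULL
) ON psOrderYear (OrderDate);
```

Queries that filter on OrderDate touch only the relevant partitions, which is what keeps CRUD operations on the recent data cheap even as the table grows.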

1. Partitioned Tables and Indexes
2. Create Partitioned Tables and Indexes

Written by cavemansblog

December 28, 2012 at 9:32 pm

File upload – attachment size validation


Restricting the size of a file upload is an important validation that needs to be performed by an online application, so as to not risk filling up the server's disk space through malicious intent. ASP.Net provides an upload control that only performs server-side validation. By the time the control validates the size of the uploaded file, the physical file has already been copied to the server, which is too late.

Client-side technologies come to the rescue in this scenario, where the validation of an attachment's size can be implemented using a browser runtime like Flash, Silverlight, ActiveX, HTML5, etc. This way, attempts to upload files of unsupported sizes can be thwarted without any impact on your web server. Following are two free tools that can be employed for this purpose:

  • SWFUpload is a Flash-based tool.
  • PLUpload is a versatile plugin that supports several runtimes. This plugin slices a large file into small chunks and sends them to the server one by one. You can then safely collect them on the server and combine them into the original file. The size of the chunks and the acceptable file formats can be defined in the plugin's UI definition.

We have implemented PLUpload with good success. This plugin also supports multiple file uploads. Visit the plugin homepage to see the other rich features that are supported. The online forum is a treasure trove where you can find various implementations and code snippets, and participate in contributing to the community.


1. SWFUpload
2. PLUpload
3. PLUpload Forums

Written by cavemansblog

October 20, 2012 at 10:02 am

SQL Server – Clean Buffers


Use DBCC DROPCLEANBUFFERS to test queries with a cold buffer cache without shutting down and restarting the server. To drop clean buffers from the buffer pool, first use CHECKPOINT to produce a cold buffer cache. This forces all dirty pages for the current database to be written to disk and cleans the buffers. After you do this, you can issue the DBCC DROPCLEANBUFFERS command to remove all buffers from the buffer pool. [1]
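The sequence described above looks like this (run it in a test environment, not production):

```sql
-- Flush all dirty pages for the current database to disk first...
CHECKPOINT;

-- ...then remove all clean buffers from the buffer pool, so the
-- next query you test runs against a cold cache.
DBCC DROPCLEANBUFFERS;
```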


It is recommended that these commands not be executed in a production environment, where SQL Server's caching helps performance. Running these commands could adversely impact server performance.


Written by cavemansblog

September 10, 2012 at 11:34 pm


SQL Server: Case sensitive string comparison


Here is a useful tidbit. We can use one of the following two methods to perform a case-sensitive string/field comparison when a SQL Server database's collation is not case sensitive.

declare @a as nvarchar(20) = 'Test'
declare @b as nvarchar(20) = 'TEST'

--Method 1: compare the binary representations of the two strings.
--A length of 40 covers all 20 nvarchar characters (2 bytes each);
--the default varbinary length of 30 would silently truncate.
if(convert(varbinary(40),@a) = convert(varbinary(40),@b))
   select 'identical'
else
   select 'non-identical'

--Method 2: force a case-sensitive collation for the comparison
if(@a = @b COLLATE Latin1_General_CS_AS)
   select 'identical'
else
   select 'non-identical'


Written by cavemansblog

August 15, 2012 at 10:26 pm

SQL Server: Data transfer for Excel using Linked Server



I will demonstrate yet another way to transfer data to and from an Excel file using a Linked Server in SQL Server 2008. In this demo I will show you how to:

  • Create a Linked Server using Excel as the data server
  • Import data to a SQL Server table from a spreadsheet
  • Export data to a spreadsheet from a SQL Server table

Create a Data Source

As the first step in this process, I have created an Excel file with a spreadsheet named “Employee”, then defined the headers and inserted 5 records as shown in the following illustration:

Create a Linked Server

Now that we have a data source, let us create a linked server using the Excel file created in the previous step. Open SQL Server Management Studio (SSMS) and expand “Server Objects” under the intended SQL Server to find the Linked Servers item. Right-click on Linked Servers and click on “New Linked Server” to see the following dialog box. EXCEL_LINKED_SERVER is the name that I have chosen for this linked server. Then we need to populate the Provider, Product Name, Data Source, and Provider String as in this example and click the OK button.
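If you prefer a script to the dialog box, the same linked server can be created with sp_addlinkedserver. This is only a sketch: the file path is a placeholder, and the provider and provider string (here the ACE OLE DB provider for .xlsx files) depend on what is installed on your server.

```sql
-- T-SQL equivalent of the New Linked Server dialog settings.
-- The data source path below is an illustrative assumption.
EXEC sp_addlinkedserver
    @server     = N'EXCEL_LINKED_SERVER',
    @srvproduct = N'Excel',
    @provider   = N'Microsoft.ACE.OLEDB.12.0',
    @datasrc    = N'C:\Data\Employee.xlsx',
    @provstr    = N'Excel 12.0';
```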

A linked server should have been created successfully at this point. Right-click on the “Linked Servers” tree view item and click on Refresh to see the newly created linked server. Expand the linked server to see the Employee$ spreadsheet under the list of tables.

Import Data

At this point you should be able to access the Excel spreadsheet just like any other SQL Server database table. Let us import data into a table named DataFeed in the Demo database. It is important to note that the spreadsheet name in the SQL query has to be accessed by preceding the spreadsheet name with three dots. One dot is used to represent the current database, two dots represent another database in the SQL Server instance, and three dots represent a Linked Server. When the following SQL query is run, a table named “DataFeed” is created and the 5 records from the spreadsheet are displayed in the results pane.
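The query itself was shown as a screenshot in the original post; a sketch of what it likely looked like (the Demo database, DataFeed table, and Employee$ sheet names come from the post) is:

```sql
USE Demo;

-- Three dots: the linked server name, then straight to the "table"
-- (the Employee$ sheet); the catalog and schema parts are omitted.
SELECT *
INTO   dbo.DataFeed
FROM   EXCEL_LINKED_SERVER...Employee$;

-- Verify the import.
SELECT * FROM dbo.DataFeed;
```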

Export Data

Let us insert a couple of records into the DataFeed table that was created in the above step, followed by exporting those two rows to the spreadsheet. When the following query is run, you will observe that the spreadsheet has 7 records.
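This query was also a screenshot originally; the following is a hedged sketch of the idea, with the column names (EmpId, EmpName) invented for illustration since the actual sheet headers are not shown in the text:

```sql
-- Add two new rows locally (column names are illustrative assumptions).
INSERT INTO dbo.DataFeed (EmpId, EmpName) VALUES (6, 'Alice');
INSERT INTO dbo.DataFeed (EmpId, EmpName) VALUES (7, 'Bob');

-- Push the new rows back to the Employee$ sheet through the linked server.
INSERT INTO EXCEL_LINKED_SERVER...Employee$ (EmpId, EmpName)
SELECT EmpId, EmpName
FROM   dbo.DataFeed
WHERE  EmpId > 5;
```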

Now Let us look at the new data in the spreadsheet:

Written by cavemansblog

July 28, 2012 at 5:47 pm

Sample Code: Xml validation using XSD


Starting today I will post useful code snippets and samples for performing simple coding tasks. First in this series is a code sample on how to validate an XML file using an XSD file.

Import the following namespaces

using System;
using System.IO;
using System.Xml;
using System.Xml.Schema;
using System.Xml.XPath;

Validate a XML File

public static void ValidateXmlFile(string XMLfilename, string XSDfilename)
{
    //make sure the file exists
    if (!File.Exists(XMLfilename))
    {
        Console.WriteLine("Error: XML Data File '{0}' not found.", XMLfilename);
        return;
    }

    //Standard well-formedness check first: loading throws if the XML is malformed
    XmlDocument document = new XmlDocument();
    try
    {
        document.Load(XMLfilename);
    }
    catch (XmlException xmle)
    {
        Console.WriteLine("!!!Error!!!\r\nXML Validation Failure: {0}\r\n\r\n", xmle.Message);
        return;
    }

    //Then validate against the schema; the first argument is the target
    //namespace (null lets the schema supply it), the second is the XSD file
    document.Schemas.Add(null, XSDfilename);
    document.Validate(new ValidationEventHandler(SchemaValidationHandler));
}

Define a Schema Validation Handler

private static void SchemaValidationHandler(object sender, ValidationEventArgs e)
{
    switch (e.Severity)
    {
        case XmlSeverityType.Error:
            Console.WriteLine("Schema Validation Error: {0}", e.Message);
            break;
        case XmlSeverityType.Warning:
            Console.WriteLine("Schema Validation Warning: {0}", e.Message);
            break;
    }
}

Written by cavemansblog

July 6, 2012 at 11:17 am

SQL Server: Incorrect SET options on a stored procedure error


We had to put out another fire at work when a stored procedure that had not been modified in ages started to fail. Following is the error that was caught by the application.

INSERT failed because the following SET options have incorrect settings: ‘ANSI_NULLS, QUOTED_IDENTIFIER’. Verify that SET options are correct for use with indexed views and/or indexes on computed columns and/or filtered indexes and/or query notifications and/or XML data type methods and/or spatial index operations.

As specified in the error above, there was something wrong with the SET options. After a little research I figured out that this error can occur when the SET options are not correctly defined. In particular, when a filtered index is added to a table, SQL Server requires statements that modify the table to run with SET QUOTED_IDENTIFIER ON. Take a look at the following blog post to recreate this error.

A first attempt at fixing the error by setting the correct options on the stored procedure did not help the cause:
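The attempted fix was shown as a screenshot in the original post; it was presumably along these lines (the procedure, table, and column names here are placeholders):

```sql
-- Recreate the procedure with the options the error message asks for.
-- These two settings are captured at CREATE/ALTER time for a procedure.
SET ANSI_NULLS ON;
GO
SET QUOTED_IDENTIFIER ON;
GO
ALTER PROCEDURE dbo.usp_InsertRow
    @Value int
AS
BEGIN
    INSERT INTO dbo.SomeTable (SomeColumn) VALUES (@Value);
END
GO
```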




Solution: Apparently a new filtered index that had been added to the table was causing the issue; it was interfering with row inserts on that table. Disabling the filtered index fixed the issue. To me this seems like a temporary solution; we still have to figure out how to make the filtered index work for this table.
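The workaround amounts to something like the following (the index and table names are placeholders, since the original post does not name them):

```sql
-- Disable the filtered index so inserts no longer have to maintain it.
-- The index metadata is kept; ALTER INDEX ... REBUILD re-enables it later.
ALTER INDEX IX_SomeTable_Filtered ON dbo.SomeTable DISABLE;
```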

