Caveman's Blog

My commitment to learning.


Dependency Injection: Unity Application Block



In this post I will show you how to use the Microsoft Unity Application Block to achieve Dependency Injection. We will look at two methods of configuring the IoC container: first the ".config" way and then the "inline" method. Let us take a look at the following code snippet and see how it can be improved.

class Program
{
    static void Main(string[] args)
    {
        Service svc = new Service();
        svc.Print();
    }
}

public class Service
{
    public void Print()
    {
        Employee empl = new Employee();
        empl.PrintEmployeeTitle("Test Employee");
    }
}

public class Employee
{
    public void PrintEmployeeTitle(string name)
    {
        Console.WriteLine("Employee Name: {0}, Title:{1}", name, "Some Title");
    }
}

What does this code do?

The above code snippet is from a console application, where we create an instance of the Service class and call its Print method. The Print method in turn instantiates the Employee class and calls its PrintEmployeeTitle method, which writes the employee name and title to the console.

What is wrong?

Nothing, strictly speaking. While there is nothing wrong with this code, if we observe closely we can notice that the Employee instance cannot exist without an instance of the Service class. The two classes are tightly coupled, meaning we can only ever have one implementation of the Employee class consumed by the Service class at any given time.

What if we have a scenario where we want to test more than one implementation of the Employee class, or where the Employee class implementation is a work in progress? This is where the Dependency Injection design pattern comes to our rescue. I hope I have set some context before explaining DI and its implementation.

Solving the problem

Decoupling the Employee class life cycle management from the Service class is the primary objective. The advantages of decoupling the Employee class are that 1) we will be in a position to provide multiple implementations of the Employee class, 2) we can select the implementation that is suitable for our purpose, and 3) we can manage the life cycle of the Employee class. We define an interface IEmployee with one method, PrintEmployeeTitle, and define two implementations for this demo. The first implementation is what we already had above and the second is a MockEmployee class.

public class Employee : IEmployee
{
    public void PrintEmployeeTitle(string name)
    {
        Console.WriteLine("Employee Name: {0}, Title:{1}", name, "Some Title");
    }
}

public class MockEmployee : IEmployee
{
    public void PrintEmployeeTitle(string name)
    {
        Console.WriteLine("Employee Name: {0}, Title:{1}", name, "Some MOCK Title");
    }
}

public interface IEmployee
{
    void PrintEmployeeTitle(string name);
}

Let us see how we can delegate the Employee class instantiation to the client (Class: Program; Method: Main) and then inject the Employee object into the Service object.

Dependency Injection

The Service class has a dependency on the Employee class, and our objective here is to inject this dependency into the Service class from the client. Dependency injection is a software design pattern that allows a choice of component to be made at run time rather than compile time [2]. One way to achieve this is by passing the Employee object reference to the Service class constructor, as in the code below:

class Program
{
    static void Main(string[] args)
    {
        Employee empl = new Employee();
        Service svc = new Service(empl);
        svc.Print();
    }
}

public class Service
{
    private IEmployee empl;
    public Service(IEmployee empl)
    {
        this.empl = empl;
    }

    public void Print()
    {
        empl.PrintEmployeeTitle("Test Employee");
    }
}

Another way of injecting the Employee reference into the Service object is by setting the Employee instance on an IEmployee property of the Service class.

class Program
{
    static void Main(string[] args)
    {
        Employee empl = new Employee();
        Service svc = new Service();
        svc.empl = empl;
        svc.Print();
    }
}

public class Service
{
    IEmployee _empl;
    public IEmployee empl
    {
        set
        {
            this._empl = value;
        }
    }
    public void Print()
    {
        _empl.PrintEmployeeTitle("Test Employee");
    }
}

We have so far been able to decouple the Service class and the Employee class; however, we still have to create an instance of the Employee class to implement dependency injection. Any change to the Employee class creation mechanism will require a code change and a recompile. This is where an IoC framework comes in handy, automating the creation and injection of the dependency through configuration alone.

Inversion of control

In software engineering, Inversion of Control (IoC) is an object-oriented programming practice where the object coupling is bound at run time by an assembler object and is typically not known at compile time using static analysis [1]. As discussed earlier, we are going to transfer the control of creating the Employee object to the IoC framework rather than keeping it with the client; in other words, we are performing an "Inversion of Control".

The configured entities are loaded into an IoC container at run time and are injected into the appropriate classes. .NET Dependency Injection can be implemented with any of several available IoC containers; in this post we will use the Microsoft Unity IoC container.

Microsoft Unity IoC Container

Now let us look at the two ways of configuring and implementing DI using the Microsoft Unity container. Before we can use the Unity container, you have to download and install the Microsoft Unity Application Block from the Microsoft Patterns and Practices website. The DLLs necessary for this implementation can be found under the "Drive:/Program Files/Microsoft Unity Application Block x.0/Bin/" folder and should be added as references to your project.
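With the assembly references in place, the client code will also need the appropriate using directives. A minimal sketch, assuming Unity 2.x with its separate configuration assembly:

// Core container types: UnityContainer, IUnityContainer, RegisterType, Resolve
using Microsoft.Practices.Unity;
// The LoadConfiguration() extension method and UnityConfigurationSection
using Microsoft.Practices.Unity.Configuration;
// Required when the container is populated from the application config file
using System.Configuration;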

Please accept my apologies for a very crude representation of the client (Class: Program, Method: Main) being able to select one of the implementations of the Employee class using Dependency Injection via an IoC container.

Application Configuration File

To implement DI using the application config file, we have to define the Unity configuration section, define a container, and register the interface-to-class mapping (with its namespace) for the type that is getting injected.

<configSections>
  <section name="unity" type="Microsoft.Practices.Unity.Configuration.UnityConfigurationSection, Microsoft.Practices.Unity.Configuration" />
</configSections>

<unity xmlns="http://schemas.microsoft.com/practices/2010/unity">
  <alias alias="singleton" type="Microsoft.Practices.Unity.ContainerControlledLifetimeManager, Microsoft.Practices.Unity" />
  <container name="TestService">
    <register type="UnityFrameworkDemo.IEmployee, UnityFrameworkDemo"
              mapTo="UnityFrameworkDemo.Employee, UnityFrameworkDemo">
      <lifetime type="singleton" />
    </register>
  </container>
</unity>

I also had to add a reference to the System.Configuration assembly to the project. Once we have this configuration set, we have to update the client to 1) load the container that we defined in the configuration and 2) resolve the Service class based on that configuration:

class Program
{
    static void Main(string[] args)
    {
        try
        {
            // Create the container and load the "TestService" container
            // defined in the <unity> section of App.config
            IUnityContainer container = new UnityContainer();
            container.LoadConfiguration("TestService");
            Service svc = container.Resolve<Service>();
            if (svc != null)
                svc.Print();
            else
                Console.WriteLine("Could not load the Service Class");
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }
    }
}

public class Service
{
    IEmployee _empl;
    public Service(IEmployee empl)
    {
        this._empl = empl;
    }

    public void Print()
    {
        _empl.PrintEmployeeTitle("Test Employee");
    }
}

You will also have noticed that I added a constructor that accepts a parameter of type IEmployee to the Service class. The IoC framework uses this constructor to generate an instance of the Service class and passes in a reference to the Employee implementation. Following is the output of using the default Employee implementation:
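The original output screenshot is not reproduced here; going by the Console.WriteLine format string in the Employee class, the console output would be:

Employee Name: Test Employee, Title:Some Title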

Now, switching from the default implementation to the MockEmployee implementation is as simple as updating the register element of the configuration with the MockEmployee class name, after which we get the following output:
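Again, going by the format string, this time in the MockEmployee class, the output would be:

Employee Name: Test Employee, Title:Some MOCK Title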


Inline

Lastly, let us look at how we can configure an implementation inside the client instead of in the application config file. You register the interface and the implementation with the container in the client, and your client is ready to call the Service instance methods, as shown in the code below:

IUnityContainer container = new UnityContainer();
container.RegisterType<IEmployee, Employee>(); // or MockEmployee
Service svc = container.Resolve<Service>();
if (svc != null)
    svc.Print();

The Unity framework provides a clean approach to decoupling application-layer code. You can define several containers, register several classes with a container, and enjoy the flexibility that comes with implementing Dependency Injection through the Unity IoC container.
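As an illustration of that flexibility, several implementations can even coexist in a single container under named registrations and be resolved by name; a minimal sketch, with the registration names "real" and "mock" chosen purely for this example:

IUnityContainer container = new UnityContainer();
// Register both implementations of IEmployee under different names
container.RegisterType<IEmployee, Employee>("real");
container.RegisterType<IEmployee, MockEmployee>("mock");

// Pick an implementation by name at run time
IEmployee employee = container.Resolve<IEmployee>("mock");
employee.PrintEmployeeTitle("Test Employee");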

Happy Fourth of July !

References:
1. Inversion of Control – IOC – Wikipedia
2. Dependency Injection – Wikipedia

 

EF4: Searching Japanese text won’t work



Problem: EF4 does not return any data when searching for Japanese names from a user table.

Setup: The web application layers use the following technologies

ASP.Net <—-> Business Layer <—-> EF 4 <—-> SQL Server 2008

Stored procedures are being employed for data retrieval operations. All the stored procedure parameters are of nvarchar type and the data is stored in the tables as nvarchar type as well.
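For reference, the search procedure's signature would look roughly like the sketch below; only the procedure name and the nvarchar parameter types come from the actual setup, while the parameter lengths, table and column names are illustrative:

CREATE PROCEDURE sp_search_user
    @first_name nvarchar(100),
    @last_name  nvarchar(100)
AS
BEGIN
    -- Hypothetical user table; the real schema is not shown in this post
    SELECT u.first_name, u.last_name
    FROM dbo.users u
    WHERE u.first_name = @first_name
      AND u.last_name = @last_name;
END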

Troubleshooting:

1. Stored procedures work fine in SSMS.

exec sp_search_user N'Japanese text', N'Japanese Text'

Here is what I got from SQL Profiler: exec sp_search_user ‘??’, ‘??’

The Japanese text got replaced with ??

2. Adding RequestEncoding=”utf-8″ ResponseEncoding=”utf-8″ attributes to the Page directive had no impact on the outcome.

3. The data reached the data access layer intact and here is the code that makes a call to the function import:

// Note: the generic type arguments were stripped in the original post;
// "User" below stands in for the actual entity type returned by the function import.
public IQueryable<User> SearchUsers(string first_name, string last_name)
{
    // db is the database (object) context
    ObjectResult<User> SearchResult = db.SearchUsers(first_name, last_name);
    IQueryable<User> users = from tmp in SearchResult.AsQueryable() select tmp;
    return users;
}

4. I have also verified that the East Asian Language pack was indeed installed on the application server.

Solution: I found out (with help from TinMgAye) that the edmx file had not updated the stored procedure's parameter data types from varchar to nvarchar.

There could be a mismatch between nvarchar and varchar inside the edmx definition. To verify, first make sure your stored procedure in SQL accepts its input parameters as nvarchar. Then try removing the function definition and stored procedure from the edmx and updating the edmx again to include them. Alternatively, if you know what you are doing: right-click the edmx >> Open With >> choose XML editor, then look for the stored procedure name in the Function tag and check the parameter type there.

<Function Name="Your SP Name"><Parameter Name="Para Name" Type="nvarchar" Mode="In" /> </Function>

Cheers !

Written by cavemansblog

June 22, 2012 at 8:56 pm

SQL Server: Get only the date and/or only the time part



In this post I want to highlight some very useful Microsoft SQL Server system functions that can be used to fetch the full date and time, only the date part, or only the time part.

Fetch various dates and times


SELECT SYSDATETIME() [DATE TIME]
,SYSDATETIMEOFFSET() [DATE TIME OFFSET]
,SYSUTCDATETIME() [UTC DATE TIME]
,CURRENT_TIMESTAMP [CURRENT_TIMESTAMP]
,GETDATE() [DATE TIME]
,GETUTCDATE() [UTC DATE TIME];

Fetch only the date part


SELECT CONVERT (date, SYSDATETIME()) [DATE]
,CONVERT (date, SYSDATETIMEOFFSET()) [DATE OFFSET]
,CONVERT (date, SYSUTCDATETIME()) [UTC DATE]
,CONVERT (date, CURRENT_TIMESTAMP) [CURRENT DATESTAMP]
,CONVERT (date, GETDATE()) [DATE]
,CONVERT (date, GETUTCDATE()) [UTC DATE];

Fetch only the time part


SELECT CONVERT (time, SYSDATETIME()) [SYS TIME]
,CONVERT (time, SYSDATETIMEOFFSET()) [ TIME OFFSET]
,CONVERT (time, SYSUTCDATETIME()) [UTC TIME]
,CONVERT (time, CURRENT_TIMESTAMP) [CURRENT_TIMESTAMP]
,CONVERT (time, GETDATE()) [TIME]
,CONVERT (time, GETUTCDATE()) [UTC TIME];


	

Written by cavemansblog

June 21, 2012 at 8:28 pm

Microsoft Surface Tablet – Impressive Ad



I hope that this tablet is as good as the ad. Productivity would be the key to this tablet's success.

Go Microsoft !

Written by cavemansblog

June 18, 2012 at 9:06 pm

SQL Server: Restore a database from a .mdf file.



In this blog post I will show you how to restore a database from a .mdf file alone. I am working with the AdventureWorks database in this demonstration. Download the .mdf file for the AdventureWorks database from CodePlex.

1. Open SQL Server Management Studio (SSMS).
2. Right-click the Databases folder and select Attach from the context menu.
3. Click Add and select the appropriate .mdf file. Click OK, and then click OK again. You will get an error at this point because SSMS cannot find the corresponding .ldf file.
4. Select the .ldf file entry, click Remove, and then click OK.
5. You have successfully restored a database from the .mdf file.
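The same result can be achieved in T-SQL with CREATE DATABASE ... FOR ATTACH_REBUILD_LOG, which attaches the .mdf and rebuilds a new log file. A minimal sketch; the file path below is illustrative and should point at your actual .mdf location:

-- Attach the database from the .mdf alone; SQL Server creates a new log file
CREATE DATABASE AdventureWorks
ON (FILENAME = N'C:\Data\AdventureWorks_Data.mdf')
FOR ATTACH_REBUILD_LOG;
GO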


SQL Server: Database deployment best practices



Here are some of the SQL Server database deployment tips that have been very useful to me.

1. Store all your database object scripts as individual files under source control. This provides an excellent way to keep the database code organized and will also be very useful when auditing code changes. Another advantage is that these scripts can be used to create an instance of an application database with a minimal amount of data.

2. Maintain a senior resource as a single point of contact to handle all SQL changes in non-DEV environments. This resource could also be responsible for code reviews and helping with code optimization. This provides an excellent way to enforce organizational best practices and also makes it simple to keep track of changes in a given database environment.

3. Compare the database that is being deployed with the last database that was updated in the software factory line. Comparing the schema and/or data will give you a current view of the database state, helping you prepare your deployment scripts diligently. You can use tools like Redgate SQL Compare, dbcomparer, etc. for this purpose.

4. Make sure that the SQL scripts always successfully pass through the development, QA, and stage environments before reaching the production/live environment. On a side note, production data should be replicated (after scrubbing, if necessary) to all the non-production environments as frequently as possible to achieve the best level of code compatibility.

5. Always take a backup of the database and turn off replication agents and all SQL jobs before deploying to the production/staging environment. This might add some overhead, but making it a habit will help with an error-free, faster, and easy-to-restore deployment.

6. This may seem trivial, but it is good practice to end every SQL script with "GO". SQL Server utilities interpret GO as a signal that they should send the current batch of Transact-SQL statements to an instance of SQL Server. The current batch of statements is composed of all statements entered since the last GO, or since the start of the ad-hoc session or script if this is the first GO [1].

7. Maintain the deployment scripts with a naming convention. This helps in organizing the scripts so they can be executed in a predetermined sequence and gives you a good idea of what each script is intended to do. I usually name my files as follows: numbering the scripts in the format below establishes the order of execution and also gives you an opportunity to add new script(s) in between any two existing scripts.

Convention: Script_[Number]_[Number]_[ObjectName]_[Operation].sql

  • Script_100_001_table_employee_add.sql
  • Script_200_001_spGetAllEmployees_update.sql
  • Script_200_002_viewEmployeeDetails_update.sql

8. It is good practice to sandwich your SQL script between a transaction and a rollback/commit block (see the sketch after this list). One exception to this rule would be a script that performs a huge number of data manipulation operations; wrapping it in a single transaction can sometimes cause the transaction log to grow very large and the server to run out of space.

9. Create data validation scripts for the deployment changes. This way you do not have to wait for application testing to surface any errors.

10. I have used script consolidation for some of the deployments I was involved with. File consolidation helps in saving time when you have to execute a ton of script files. Here is the code for the SQL Merge tool.

11. Prepare a deployment document to establish and organize the deployment process with the following information; this will also be useful for maintaining compliance with the Sarbanes-Oxley Act [2].

  • Deployment Schedule
  • Activity ownership
  • Pre-deployment activities
  • Deployment activities
  • Post-deployment activities
  • Rollback plan
  • Contact list
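Here is the sketch referenced in tip 8: a minimal template for a deployment script wrapped in a transaction with an explicit commit/rollback path, ending the batch with GO as recommended in tip 6. The object names are placeholders.

-- Example deployment script; the table and column names are placeholders
BEGIN TRANSACTION;
BEGIN TRY
    ALTER TABLE dbo.employee ADD title varchar(100) NULL;

    COMMIT TRANSACTION;
    PRINT 'Deployment change committed.';
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION;
    PRINT 'Deployment change rolled back: ' + ERROR_MESSAGE();
END CATCH;
GO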

References:
1. MSDN Online
2. Sarbanes-Oxley Act

SQL Server: Index defragmentation



DBCC INDEXDEFRAG: Index defragmentation is the process that reduces the amount of index fragmentation. This process does not hold any table locks long term while defragmenting an index, and hence does not block running queries or updates, unlike the index building or re-indexing process, where a table lock is enforced. The underlying table cannot be modified, truncated, or dropped while an online index operation is in progress. To make sure that the index operation can be rolled back, the transaction log cannot be truncated until the index operation has completed; however, the log can be backed up during the index operation. DBCC INDEXDEFRAG is not recommended for very fragmented indexes. Here is an example from MSDN of what happens when an index is defragmented:

Figure: Index defragmentation in action [1].

DBCC DBREINDEX: Faster than dropping and re-creating the index, but while a clustered index is being rebuilt an exclusive table lock is placed on the table, preventing any table access by users; while a non-clustered index is being rebuilt a shared table lock is placed on the table, preventing all but SELECT operations.

REBUILD INDEX: Best performance, but places an exclusive table lock on the table (preventing any table access by users) when rebuilding a clustered index, and a shared table lock (preventing all but SELECT operations) when rebuilding a non-clustered index.

Note: According to Microsoft best practices, index defragmentation is most effective when an index has at least 8 pages. DBCC INDEXDEFRAG is one of the deprecated commands; the equivalent contemporary command is ALTER INDEX ... REORGANIZE.
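For example, reorganizing or rebuilding a single index with the contemporary syntax looks like this (the table and index names are placeholders):

-- Equivalent of DBCC INDEXDEFRAG: an online, low-impact reorganize
ALTER INDEX IX_Employee_LastName ON dbo.Employee REORGANIZE;

-- Full rebuild; ONLINE = ON keeps the table available but requires Enterprise Edition
ALTER INDEX IX_Employee_LastName ON dbo.Employee REBUILD WITH (ONLINE = ON);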

Here is an award-winning solution for SQL Server index and statistics maintenance. You can download the IndexOptimize procedure and use it as a comprehensive solution for this purpose. The SQL Server Maintenance Solution website seems to me like a must-have for all DBAs. Following is the syntax for rebuilding or reorganizing indexes with fragmentation on all user databases:


EXECUTE dbo.IndexOptimize @Databases = 'USER_DATABASES',
 @FragmentationLow = NULL,
 @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
 @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
 @FragmentationLevel1 = 5,
 @FragmentationLevel2 = 30

References:
1. Microsoft SQL Server 2000 Index Defragmentation Best Practices
2. SQL Server Maintenance Solution