.Net Notes


Table of Contents:
1.Polymorphism
2.Virtual Keyword
3.Override Keyword
4.Static class
5.Calling one constructor from another & constructor overloading
6.Calling base class constructor from child class
7.Using Keyword
8.Reference & Out parameter
9.Abstract & Interface Class
10.Design Pattern - Singleton & MVC
11.Generics
12.Delegates
13.Asp.Net Authorization
14.Asp.Net Authentication
15.Setting SSL in website
16.More than one Web.config
17.Dot Net Versions Difference
18.Simplifying Deployment and Solving DLL Hell








1.Polymorphism

What is Polymorphism?
Polymorphism means the same operation may behave differently on different classes.
Example of Compile Time Polymorphism: Method Overloading
Example of Run Time Polymorphism: Method Overriding

Example of Compile Time Polymorphism
Method Overloading - Defining methods with the same name but different arguments is called method overloading.
- Method Overloading forms compile-time polymorphism.
- Example of Method Overloading:
class A1
{
    void hello()
    { Console.WriteLine("Hello"); }

    void hello(string s)
    { Console.WriteLine("Hello {0}", s); }
}


Example of Run Time Polymorphism
Method Overriding - Method overriding occurs when a child class declares a method with the same name and arguments as a method declared in one of its superclasses.
- Method overriding forms Run-time polymorphism.
- Note: By default, methods are not virtual in C#, so you need to mark them "virtual" explicitly. In Java, by contrast, every method is virtual by default.
- Example of Method Overriding:
class Parent
{
    public virtual void hello()
    { Console.WriteLine("Hello from Parent"); }
}

class Child : Parent
{
    public override void hello()
    { Console.WriteLine("Hello from Child"); }
}

static void Main()
{
    Parent objParent = new Child();
    objParent.hello();
}

//Output
Hello from Child.




2.Virtual, Override & New Keywords
 Virtual
The virtual keyword is used to modify a method, property, indexer or event declaration, and allow it to be overridden in a derived class. For example, this method can be overridden by any class that inherits it:
public virtual double Area() 
{
    return x * y;
}
The implementation of a virtual member can be changed by an overriding member in a derived class.

When a virtual method is invoked, the run-time type of the object is checked for an overriding member. The overriding member in the most derived class is called, which might be the original member, if no derived class has overridden the member.
By default, methods are non-virtual. You cannot override a non-virtual method.
You cannot use the virtual modifier with the static, abstract, private, or override modifiers.
Virtual properties behave like abstract methods, except for the differences in declaration and invocation syntax.

It is an error to use the virtual modifier on a static property.
A virtual inherited property can be overridden in a derived class by including a property declaration that uses the override modifier.
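For example, a minimal sketch of overriding a virtual property (the class and property names here are illustrative, not from the snippet above):

class Shape
{
    protected double x, y;

    public virtual double Area
    {
        get { return x * y; }          // base implementation
    }
}

class Triangle : Shape
{
    public override double Area
    {
        get { return x * y / 2; }      // overriding property in the derived class
    }
}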


3.Override
Method overriding, in object-oriented programming, is a language feature that allows a subclass or child class to provide a specific implementation of a method that is already provided by one of its superclasses or parent classes. Use the override modifier to modify a method, a property, an indexer, or an event. An override method provides a new implementation of a member inherited from a base class. You cannot override a non-virtual or static method; the overridden base method must be virtual, abstract, or override.
An override declaration cannot change the accessibility of the virtual method: both the override method and the virtual method must have the same access level modifier. You cannot use the following modifiers to modify an override method: new, static, virtual, abstract.

Example

In this example, there is a base class, Square, and a derived class, Cube. Because the surface area of a cube is the sum of the areas of six squares, it is possible to calculate it by calling the Area() method on the base class.
// cs_override_keyword.cs
// Calling overridden methods from the base class
using System;
class TestClass 
{
   public class Square 
   {
      public double x;

      // Constructor:
      public Square(double x) 
      {
         this.x = x;
      }

      public virtual double Area() 
      {
         return x*x; 
      }
   }

   class Cube: Square 
   {
      // Constructor:
      public Cube(double x): base(x) 
      {
      }

      // Calling the Area base method:
      public override double Area() 
      {
         return (6*(base.Area())); 
      }
   }

   public static void Main()
   {
      double x = 5.2;
      Square s = new Square(x);
      Square c = new Cube(x);
      Console.WriteLine("Area of Square = {0:F2}", s.Area());
      Console.WriteLine("Area of Cube = {0:F2}", c.Area());
   }
}

Output

Area of Square = 27.04
Area of Cube = 162.24

New
In C#, the new keyword can be used as an operator, a modifier, or a constraint.
new Operator
Used to create objects and invoke constructors. example:
Class1 obj  = new Class1();
It is also used to create instances of anonymous types:
var query = from cust in customers
            select new {Name = cust.Name, Address = cust.PrimaryAddress};
The new operator is also used to invoke the default constructor for value types. For example:
int i = new int();
In the preceding statement, i is initialized to 0, which is the default value for the type int. The statement has the same effect as the following:
int i = 0;
new Modifier
Used to hide an inherited member from a base class member. To hide an inherited member, declare it in the derived class by using the same name, and modify it with the new modifier. For example:
public class BaseC
{
    public int x;
    public void Invoke() { }
}
public class DerivedC : BaseC
{
    new public void Invoke() { }
}
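A quick usage sketch (the calling code below is illustrative): because Invoke is hidden rather than overridden, the method that runs depends on the compile-time type of the reference:

BaseC b = new DerivedC();
b.Invoke();                    // calls BaseC.Invoke (hiding, not overriding)

DerivedC d = new DerivedC();
d.Invoke();                    // calls DerivedC.Invoke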
new Constraint
Used to restrict types that might be used as arguments for a type parameter in a generic declaration.
The new constraint specifies that any type argument in a generic class declaration must have a public parameterless constructor. To use the new constraint, the type cannot be abstract.
Apply the new constraint to a type parameter when your generic class creates new instances of the type, as shown in the following example:
class ItemFactory<T> where T : new()
{
    public T GetNewItem()
    {
        return new T();
    }
}
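For example (a usage sketch; StringBuilder is just one type that happens to have a public parameterless constructor):

ItemFactory<System.Text.StringBuilder> factory = new ItemFactory<System.Text.StringBuilder>();
System.Text.StringBuilder item = factory.GetNewItem();   // created via new T()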






Access Modifiers (C# Reference)

Access modifiers are keywords used to specify the declared accessibility of a member or a type. This section introduces the four access modifiers:
The following five accessibility levels can be specified using the access modifiers:
public: Access is not restricted.
protected: Access is limited to the containing class or types derived from the containing class.
internal: Access is limited to the current assembly.
protected internal: Access is limited to the current assembly or types derived from the containing class.
private: Access is limited to the containing type.
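A minimal sketch showing the modifiers on fields (the class and member names are made up for illustration):

public class Account
{
    public string Owner;            // accessible from any code
    protected decimal balance;      // accessible in Account and classes derived from it
    internal int branchCode;        // accessible anywhere in the same assembly
    private string pin;             // accessible only inside Account
}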




4.Static Class
A static class is basically the same as a non-static class, but there is one difference: a static class cannot be instantiated. In other words, you cannot use the new keyword to create a variable of the class type. Because there is no instance variable, you access the members of a static class by using the class name itself. For example, if you have a static class that is named UtilityClass that has a public method named MethodA, you call the method as shown in the following example:
UtilityClass.MethodA();
A static class can be used as a convenient container for sets of methods that just operate on input parameters and do not have to get or set any internal instance fields. For example, in the .NET Framework Class Library, the static System.Math class contains methods that perform mathematical operations, without any requirement to store or retrieve data that is unique to a particular instance of the Math class. That is, you apply the members of the class by specifying the class name and the method name, as shown in the following example.
double dub = -3.14;
Console.WriteLine(Math.Abs(dub));
Console.WriteLine(Math.Floor(dub));
Console.WriteLine(Math.Round(Math.Abs(dub)));

// Output:
// 3.14
// -4
// 3
As is the case with all class types, the type information for a static class is loaded by the .NET Framework common language runtime (CLR) when the program that references the class is loaded. The program cannot specify exactly when the class is loaded. However, it is guaranteed to be loaded and to have its fields initialized and its static constructor called before the class is referenced for the first time in your program. A static constructor is only called one time, and a static class remains in memory for the lifetime of the application domain in which your program resides. 

  • The following list provides the main features of a static class:
    • Contains only static members.
    • Cannot be instantiated.
    • Is sealed.
    • Cannot contain Instance Constructors.

Here is an example of a static class that contains two methods that convert temperature from Celsius to Fahrenheit and from Fahrenheit to Celsius:

public static class TemperatureConverter
{
    public static double CelsiusToFahrenheit(string temperatureCelsius)
    {
        // Convert argument to double for calculations.
        double celsius = Double.Parse(temperatureCelsius);

        // Convert Celsius to Fahrenheit.
        double fahrenheit = (celsius * 9 / 5) + 32;

        return fahrenheit;
    }

    public static double FahrenheitToCelsius(string temperatureFahrenheit)
    {
        // Convert argument to double for calculations.
        double fahrenheit = Double.Parse(temperatureFahrenheit);

        // Convert Fahrenheit to Celsius.
        double celsius = (fahrenheit - 32) * 5 / 9;

        return celsius;
    }
}

To call a method: TemperatureConverter.CelsiusToFahrenheit("10")  // we don't need to create an object of the class
http://msdn.microsoft.com/en-us/library/79b3xss3(v=VS.90).aspx

Static Constructors

A static constructor is used to initialize any static data, or to perform a particular action that needs to be performed only once. It is called automatically before the first instance is created or any static members are referenced.
class SimpleClass
{
    // Static constructor
    static SimpleClass()
    {
        //...
    }
}

Static constructors have the following properties:
  • A static constructor does not take access modifiers or have parameters.
  • A static constructor is called automatically to initialize the class before the first instance is created or any static members are referenced.
  • A static constructor cannot be called directly.
  • The user has no control over when the static constructor is executed in the program.
  • A typical use of static constructors is when the class is using a log file and the constructor is used to write entries to this file.
  • Static constructors are also useful when creating wrapper classes for unmanaged code, when the constructor can call the LoadLibrary method.
In this example, the class Bus has a static constructor and one static member, Drive(). When Drive() is called, the static constructor is invoked to initialize the class.
public class Bus
{
    // Static constructor:
    static Bus()
    {
        System.Console.WriteLine("The static constructor invoked.");
    }

    public static void Drive()
    {
        System.Console.WriteLine("The Drive method invoked.");
    }
}

class TestBus
{
    static void Main()
    {
        Bus.Drive();
    }
}

The static constructor invoked.
The Drive method invoked.


Constructors

Whenever a class or struct is created, its constructor is called. A class or struct may have multiple constructors that take different arguments. Constructors enable the programmer to set default values, limit instantiation, and write code that is flexible and easy to read.

If you do not provide a constructor for your object, C# will create one by default that instantiates the object and sets member variables to the default values listed in the Default Values Table (C# Reference). Static classes and structs can also have constructors.

TYPES :
1. Instance Constructors (C# Programming Guide)
Instance constructors are used to create and initialize any instance member variables when you use the new expression to create an object of a class.
e.g
class CoOrds
{
    public int x, y;

    // constructor
    public CoOrds()
    {
        x = 0;
        y = 0;
    }
}
This instance constructor is called whenever an object based on the CoOrds class is created. A constructor like this one, which takes no arguments, is called a default constructor. However, it is often useful to provide additional constructors. For example, we can add a constructor to the CoOrds class that allows us to specify the initial values for the data members:
// A constructor with two arguments:
public CoOrds(int x, int y)
{
    this.x = x;
    this.y = y;
}
This allows CoOrds objects to be created with default or specific initial values, like this:
CoOrds p1 = new CoOrds();
CoOrds p2 = new CoOrds(5, 3);
Instance constructors can also be used to call the instance constructors of base classes. The class constructor can invoke the constructor of the base class through the initializer, as follows:
class Circle : Shape
{
    public Circle(double radius)
        : base(radius, 0)
    {
    }
}


2. Private Constructors (C# Programming Guide)
A private constructor is a special instance constructor. It is generally used in classes that contain static members only. If a class has one or more private constructors and no public constructors, other classes (except nested classes) cannot create instances of this class. For example:
class NLog
{
    // Private Constructor:
    private NLog() { }

    public static double e = Math.E;  //2.71828...
}
The declaration of the empty constructor prevents the automatic generation of a default constructor. Note that if you do not use an access modifier with the constructor it will still be private by default. However, the private modifier is usually used explicitly to make it clear that the class cannot be instantiated. Private constructors are used to prevent creating instances of a class when there are no instance fields or methods, such as the Math class, or when a method is called to obtain an instance of a class. If all the methods in the class are static, consider making the complete class static.
3. Static Constructors (C# Programming Guide)
A static constructor is used to initialize any static data, or to perform a particular action that needs to be performed only once. It is called automatically before the first instance is created or any static members are referenced.
class SimpleClass
{
    // Static variable that must be initialized at run time.
    static readonly long baseline;

    // Static constructor is called at most one time, before any
    // instance constructor is invoked or member is accessed.
    static SimpleClass()
    {
        baseline = DateTime.Now.Ticks;
    }
}
Static constructors have the same properties listed under Static Constructors above. In addition:
  • If a static constructor throws an exception, the runtime will not invoke it a second time, and the type will remain uninitialized for the lifetime of the application domain in which your program is running.



5.Constructor Overloading
Broadly speaking, a constructor is a method in the class which gets executed when its object is created. Usually, we put the initialization code in the constructor. Constructor names are always the same as the class name. Defining more than one constructor with different signatures for a single class is called constructor overloading.

Example :

To demonstrate the use of overloaded constructors we will create a new class to represent rectangular shapes. The class will allow the generation of a Rectangle object with specified Height and Width properties. We will then add a second constructor that requires only a single parameter for square shapes with matching height and width.
To begin, create a new console application and add a new class file named "Rectangle". Add the following code to the new class to create the properties:
class Rectangle
{
    private int _height;
    private int _width;

    public int Height
    {
        get { return _height; }
    }

    public int Width
    {
        get { return _width; }
    }
}
NB: In this code, the Height and Width properties have been made read-only. This is to simplify the code in this article.

Adding the Constructors

We can now add the first constructor to the class. This constructor accepts two parameters containing the height and width of the rectangle. Validation code ensures that they are both positive values before storing them in the property variables. By adding this constructor, the default constructor will be removed. Add the following constructor to the class:
public Rectangle(int height, int width)
{
    if (height <= 0) throw new ArgumentException("height");
    if (width <= 0) throw new ArgumentException("width");

    _height = height;
    _width = width;

    Console.WriteLine("Rectangle Constructor Called");
}
To add a second constructor, we simply declare another variation with a different signature. Add the following code that permits the creation of Rectangle objects that represent squares. It requires only a single parameter that can be stored in both the width and height property variables.
public Rectangle(int size)
{
    if (size <= 0) throw new ArgumentException("size");

    _height = _width = size;

    Console.WriteLine("Square Constructor Called");
}

Using Overloaded Constructors

Overloaded constructors can be used to instantiate objects in exactly the same manner as for classes with a single constructor. During compilation of the code, the compiler compares the signature used for the new object to those available in the class. If there is a perfect match, the corresponding constructor is used. Where there is no signature with the correct parameters, the compiler will look for a constructor that can be used with implicit casting. If no such constructor exists, a compiler error occurs.
We can now test the example code by modifying the program's main method as follows:
static void Main(string[] args)
{
    Rectangle rect = new Rectangle(4, 6);
    Console.WriteLine("Height: {0}", rect.Height);
    Console.WriteLine("Width: {0}", rect.Width);

    Rectangle square = new Rectangle(5);
    Console.WriteLine("Height: {0}", square.Height);
    Console.WriteLine("Width: {0}", square.Width);
}

/* OUTPUT

Rectangle Constructor Called
Height: 4
Width: 6
Square Constructor Called
Height: 5
Width: 5

*/

Constructor Interaction

Constructors can be very complex, performing many initialisation and validation functions for new objects. This can easily lead to large constructors with functionality that is repeated in each overloaded version. This, in turn, can lead to maintenance problems with the code when changes to construction logic are required. This can be minimised by having the constructor call methods within the class to perform common tasks. Another option is to allow code reuse by having the constructors call each other during object instantiation.


create a constructor that calls an existing constructor

Constructor Calling Syntax

To create a constructor that calls an existing constructor, a special syntax is used. The constructor is declared as usual and then a colon character (:) is appended. After the colon the this keyword and the parameter list of the called constructor is provided. Each parameter specified in the call must match one of those in the new constructor or be a literal value.
public Constructor(parameters-1) : this(parameters-2)
{
}
When the new constructor is utilised, the constructor indicated in the this command is executed first, then the code within the new constructor's code block is run. It is possible that the code block is empty where only a transformation of signature is required. In this case, only the original constructor is executed with the specified parameters.
This reuse of code can be demonstrated by modifying the constructor that creates a square Rectangle object as follows:
public Rectangle(int size) : this(size, size)
{
    Console.WriteLine("Square Constructor Called");
}
Running the program now shows the order of execution of constructors. The results are as follows:
Rectangle Constructor Called
Height: 4
Width: 6
Rectangle Constructor Called
Square Constructor Called
Height: 5
Width: 5



6.Call base class constructor from derived class :

Because classes cannot inherit constructors, a derived class must implement its own constructor and can only make use of the constructor of its base class by calling it explicitly. If the base class has an accessible default constructor, the derived constructor is not required to invoke the base constructor explicitly; instead, the default constructor is called implicitly as the object is constructed. However, if the base class does not have a default constructor, every derived constructor must explicitly invoke one of the base class constructors using the base keyword. The keyword base identifies the base class for the current object.
If you do not declare a constructor of any kind, the compiler creates a default constructor for you. Whether you write it yourself or you use the one provided by the compiler, a default constructor is one that takes no parameters. Note, however, that once you do create a constructor of any kind (with or without parameters), the compiler does not create a default constructor for you.

Example: The new class ListBox derives from Window and has its own constructor, which takes three parameters. The ListBox constructor invokes the constructor of its parent by placing a colon (:) after the parameter list and then invoking the base class constructor with the keyword base:
public ListBox(int theTop, int theLeft, string theContents)
    : base(theTop, theLeft)   // call base constructor
{
    // ...
}
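A fuller sketch, assuming a simple Window base class (the field names here are illustrative, filled in to make the example compile):

public class Window
{
    protected int top;
    protected int left;

    public Window(int top, int left)
    {
        this.top = top;
        this.left = left;
    }
}

public class ListBox : Window
{
    private string contents;

    public ListBox(int theTop, int theLeft, string theContents)
        : base(theTop, theLeft)   // base constructor runs first
    {
        contents = theContents;
    }
}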




7.Using Keyword

using Directive

The using directive has two uses:

  • 1.To allow the use of types in a namespace so that you do not have to qualify the use of a type in that namespace:
    using System.Text;
    
  • 2.To create an alias for a namespace or a type. This is called a using alias directive.
    using Project = PC.MyCompany.Project;
    
The using keyword is also used to create using statements, which help ensure that IDisposable objects such as files and fonts are handled correctly. See using Statement for more information.

A using alias directive cannot have an open generic type on the right hand side. For example, you cannot create a using alias for a List<T>, but you can create one for a List<int>.
Example :
namespace PC
{
    // Define an alias for the nested namespace.
    using Project = PC.MyCompany.Project;
    class A
    {
        void M()
        {
            // Use the alias
            Project.MyClass mc = new Project.MyClass();
        }
    }
    namespace MyCompany
    {
        namespace Project
        {
            public class MyClass { }
        }
    }
}


using Statement
Provides a convenient syntax that ensures the correct use of IDisposable objects.
The following example shows how to use the using statement.
using (Font font1 = new Font("Arial", 10.0f)) 
{
    byte charset = font1.GdiCharSet;
}
File and Font are examples of managed types that access unmanaged resources (in this case file handles and device contexts). There are many other kinds of unmanaged resources and class library types that encapsulate them. All such types must implement the IDisposable interface.
As a rule, when you use an IDisposable object, you should declare and instantiate it in a using statement. The using statement calls the Dispose method on the object in the correct way, and (when you use it as shown earlier) it also causes the object itself to go out of scope as soon as Dispose is called. Within the using block, the object is read-only and cannot be modified or reassigned.
The using statement ensures that Dispose is called even if an exception occurs while you are calling methods on the object. You can achieve the same result by putting the object inside a try block and then calling Dispose in a finally block; in fact, this is how the using statement is translated by the compiler.
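Roughly, the Font example above expands to something like this sketch (simplified; the compiler also limits the variable's scope to the block):

Font font1 = new Font("Arial", 10.0f);
try
{
    byte charset = font1.GdiCharSet;
}
finally
{
    if (font1 != null)
        ((IDisposable)font1).Dispose();
}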



8.Reference & Out Parameters

Ref

The ref keyword causes arguments to be passed by reference. The effect is that any changes to the parameter in the method will be reflected in that variable when control passes back to the calling method.
Note:
Do not confuse the concept of passing by reference with the concept of reference types. The two concepts are not the same. A method parameter can be modified by ref regardless of whether it is a value type or a reference type. There is no boxing of a value type when it is passed by reference.
To use a ref parameter, both the method definition and the calling method must explicitly use the ref keyword. For example:
class RefExample
    {
        static void Method(ref int i)
        {
            // Rest the mouse pointer over i to verify that it is an int.
            // The following statement would cause a compiler error if i
            // were boxed as an object.
            i = i + 44;
        }

        static void Main()
        {
            int val = 1;
            Method(ref val);
            Console.WriteLine(val);

            // Output: 45
        }
    }



An argument passed to a ref parameter must first be initialized. This differs from out, whose arguments do not have to be explicitly initialized before they are passed. For more information, see out.
Although the ref and out keywords cause different run-time behavior, they are not considered part of the method signature at compile time. Therefore, methods cannot be overloaded if the only difference is that one method takes a ref argument and the other takes an out argument. The following code, for example, will not compile:
class CS0663_Example
{
    // Compiler error CS0663: "Cannot define overloaded 
    // methods that differ only on ref and out".
    public void SampleMethod(out int i) { }
    public void SampleMethod(ref int i) { }
}


Overloading can be done, however, if one method takes a ref or out argument and the other uses neither as in the following example:
class RefOverloadExample
    {
        public void SampleMethod(int i) { }
        public void SampleMethod(ref int i) { }
    }



Properties are not variables. They are actually methods, and therefore cannot be passed as ref parameters.
For information about how to pass arrays, see Passing Arrays Using ref and out (C# Programming Guide).

Passing value types by reference, as demonstrated earlier in this topic, is useful, but ref is also useful for passing reference types. This allows called methods to modify the object to which the reference refers because the reference itself is being passed by reference. The following sample shows that when a reference type is passed as a ref parameter, the object itself can be changed. For more information, see Passing Reference-Type Parameters (C# Programming Guide).
class RefExample2
{
    static void Method(ref string s)
    {
        s = "changed";
    }
    static void Main()
    {
        string str = "original";
        Method(ref str);
        Console.WriteLine(str);
    }
}
// Output: changed



Out 
The out keyword causes arguments to be passed by reference. This is like the ref keyword, except that ref requires that the variable be initialized before it is passed. To use an out parameter, both the method definition and the calling method must explicitly use the out keyword. For example:
class OutExample
{
    static void Method(out int i)
    {
        i = 44;
    }
    static void Main()
    {
        int value;
        Method(out value);
        // value is now 44
    }
}


Although variables passed as out arguments do not have to be initialized before being passed, the called method is required to assign a value before the method returns.
As noted above for ref, the ref and out keywords are not considered part of the method signature at compile time, so methods cannot be overloaded if the only difference is that one method takes a ref argument and the other takes an out argument (compiler error CS0663).


Overloading can be done, however, if one method takes a ref or out argument and the other uses neither, like this:
class OutOverloadExample
{
    public void SampleMethod(int i) { }
    public void SampleMethod(out int i) { i = 5; }
}


Properties are not variables and therefore cannot be passed as out parameters.
For information about passing arrays, see Passing Arrays Using ref and out (C# Programming Guide).
Declaring an out method is useful when you want a method to return multiple values. The following example uses out to return three variables with a single method call. Note that the third argument is assigned to null. This enables methods to return values optionally.
class OutReturnExample
    {
        static void Method(out int i, out string s1, out string s2)
        {
            i = 44;
            s1 = "I've been returned";
            s2 = null;
        }
        static void Main()
        {
            int value;
            string str1, str2;
            Method(out value, out str1, out str2);
            // value is now 44
            // str1 is now "I've been returned"
            // str2 is (still) null;
        }
    }




9.Abstract & Interface class


What is an Abstract Class?

An abstract class is a special kind of class that cannot be instantiated. So the question is: why do we need a class that cannot be instantiated? An abstract class is only to be sub-classed (inherited from). In other words, it only allows other classes to inherit from it but cannot be instantiated. The advantage is that it enforces certain hierarchies for all the subclasses. In simple words, it is a kind of contract that forces all the subclasses to carry on the same hierarchies or standards.

What is an Interface?

An interface is not a class. It is an entity that is defined by the word Interface. An interface has no implementation; it only has the signature or, in other words, just the definition of the methods without the body. As one of the similarities to an abstract class, it is a contract that is used to define hierarchies for all subclasses, or it defines a specific set of methods and their arguments. The main difference between them is that a class can implement more than one interface but can only inherit from one abstract class. Since C# doesn't support multiple inheritance, interfaces are used to implement multiple inheritance.

Both Together

When we create an interface, we are basically creating a set of methods without any implementation that must be overridden by the implementing classes. The advantage is that it provides a way for a class to be part of two hierarchies: one from its inheritance chain and one from the interface.
When we create an abstract class, we are creating a base class that might have one or more completed methods, but at least one or more methods are left uncompleted and declared abstract. If all the methods of an abstract class are uncompleted then it is the same as an interface. The purpose of an abstract class is to provide a base class definition for how a set of derived classes will work and then allow the programmers to fill in the implementation in the derived classes.
There are some similarities and differences between an interface and an abstract class, arranged below for easier comparison:
Multiple inheritance
  • Interface: A class may implement several interfaces.
  • Abstract class: A class may inherit only one abstract class.

Default implementation
  • Interface: An interface cannot provide any code, just the signature.
  • Abstract class: An abstract class can provide complete, default code and/or just the details that have to be overridden.

Access Modifiers
  • Interface: An interface cannot have access modifiers for the subs, functions, properties, etc.; everything is assumed to be public.
  • Abstract class: An abstract class can contain access modifiers for its subs, functions, and properties.

Core vs Peripheral
  • Interface: Interfaces are used to define the peripheral abilities of a class. In other words, both Human and Vehicle can implement an IMovable interface.
  • Abstract class: An abstract class defines the core identity of a class, and there it is used for objects of the same type.

Homogeneity
  • Interface: If various implementations only share method signatures, then it is better to use interfaces.
  • Abstract class: If various implementations are of the same kind and use common behaviour or status, then an abstract class is better to use.

Speed
  • Interface: Requires more time to find the actual method in the corresponding classes.
  • Abstract class: Fast.

Adding functionality (versioning)
  • Interface: If we add a new method to an interface, we have to track down all the implementations of the interface and define an implementation for the new method.
  • Abstract class: If we add a new method to an abstract class, we have the option of providing a default implementation, and therefore all the existing code might work properly.

Fields and Constants
  • Interface: No fields can be defined in interfaces.
  • Abstract class: An abstract class can have fields and constants defined.


Example :
INTERFACE

interface ISampleInterface
{
    void SampleMethod();
}

class ImplementationClass : ISampleInterface
{
    // Explicit interface member implementation: 
    void ISampleInterface.SampleMethod()
    {
        // Method implementation.
    }

    static void Main()
    {
        // Declare an interface instance.
        ISampleInterface obj = new ImplementationClass();

        // Call the member.
        obj.SampleMethod();
    }
}


ABSTRACT

abstract class BaseClass   // Abstract class
    {
        protected int _x = 100;
        protected int _y = 150;
        public abstract void AbstractMethod();   // Abstract method
        public abstract int X    { get; }
        public abstract int Y    { get; }
    }

    class DerivedClass : BaseClass
    {
        public override void AbstractMethod()
        {
            _x++;
            _y++;
        }

        public override int X   // overriding property
        {
            get
            {
                return _x + 10;
            }
        }

        public override int Y   // overriding property
        {
            get
            {
                return _y + 10;
            }
        }

        static void Main()
        {
            DerivedClass o = new DerivedClass();
            o.AbstractMethod();
            Console.WriteLine("x = {0}, y = {1}", o.X, o.Y);
        }
    }
    // Output: x = 111, y = 161





10.Design Pattern

In software engineering, a design pattern is a general reusable solution to a commonly occurring problem in software design. A design pattern is not a finished design that can be transformed directly into code. It is a description or template for how to solve a problem that can be used in many different situations. Object-oriented design patterns typically show relationships and interactions between classes or objects, without specifying the final application classes or objects that are involved. Many patterns imply object-orientation or more generally mutable state, and so may not be as applicable in functional programming languages, in which data is immutable or treated as such.

Design patterns reside in the domain of modules and interconnections. At a higher level there are architectural patterns that are larger in scope, usually describing an overall pattern followed by an entire system.
Design patterns allow you to break down applications into common tasks and to keep a library of code with the most efficient implementation of those tasks. 
There is no generally accepted definition of a design pattern. A design pattern describes a commonly occurring problem and then describes the solution to that problem. The solution presented by the design pattern is a general, repeatable solution, so you can use it a million times over without ever doing it the same way twice.

TYPES : 
Creational Patterns
  • Abstract Factory - Creates an instance of several families of classes
  • Builder - Separates object construction from its representation
  • Factory Method - Creates an instance of several derived classes
  • Prototype - A fully initialized instance to be copied or cloned
  • Singleton - A class of which only a single instance can exist

Structural Patterns
  • Adapter - Match interfaces of different classes
  • Bridge - Separates an object's interface from its implementation
  • Composite - A tree structure of simple and composite objects
  • Decorator - Add responsibilities to objects dynamically
  • Facade - A single class that represents an entire subsystem
  • Flyweight - A fine-grained instance used for efficient sharing
  • Proxy - An object representing another object

Behavioral Patterns
  • Chain of Responsibility - A way of passing a request between a chain of objects
  • Command - Encapsulate a command request as an object
  • Interpreter - A way to include language elements in a program
  • Iterator - Sequentially access the elements of a collection
  • Mediator - Defines simplified communication between classes
  • Memento - Capture and restore an object's internal state
  • Observer - A way of notifying change to a number of classes
  • State - Alter an object's behavior when its state changes
  • Strategy - Encapsulates an algorithm inside a class
  • Template Method - Defer the exact steps of an algorithm to a subclass
  • Visitor - Defines a new operation to a class without change



Singleton Pattern 

It lives in a family of creational patterns.
Creational patterns dictate how and when objects get created. Many instances require special behavior that can only be solved through creational techniques, rather than trying to force a desired behavior after an instance is created. One of the best examples of this type of behavioral requirement is contained in the Singleton pattern.
The Singleton pattern was formally defined in the classic reference, Design Patterns: Elements of Reusable Object-Oriented Software by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides (also known as the Gang of Four, or GoF).
This pattern is one of the least complicated, as well as most popular, in Design Patterns.

Logical Model
The model for a singleton is very straightforward. There is (usually) only one singleton instance. Clients access the singleton instance through one well-known access point. The client in this case is an object that needs access to a sole instance of a singleton. Figure 1 shows this relationship graphically.

                   Client ----- uses----- Singleton       
Figure 1. Singleton pattern logical model

Physical Model
The physical model for the Singleton pattern is also very simple. However, there are several slightly different ways that singletons have been implemented over time. Let's look at the original GoF singleton implementation. Figure 2 shows a UML model of the original Singleton pattern as defined in Design Patterns.

                                                 Singleton 
                                -static uniqueInstance : Singleton
                                -singletonData 
                                +static Instance() : Singleton   // returns uniqueInstance
                                +SingletonOperation()
                                +GetSingletonData() 


Figure 2. Singleton pattern physical model from design patterns

What we see is a simple class diagram showing that there is a private static property of a singleton object as well as public method Instance() that returns this same property. This is really the core of what makes a singleton. The other properties and methods are there to show additional operations that may be allowed on the class. For the purpose of this discussion, let's focus on the instance property and method. 

Clients access any instance of a singleton only through the Instance method. How the instance gets created is not defined here. What we also want to be able to do is control how and when an instance will get created. In OO development, special object creation behavior is generally best handled in the constructor for a class. This case is no different. What we can do is define when and how we construct a class instance and then keep any client from calling the constructor directly. This is the approach always used for singleton construction. Let's look at the original example from Design Patterns. The C++ Singleton Sample Implementation Code example shown below is generally considered the default implementation for a singleton. This sample has been ported to many other programming languages and generally exists everywhere in very near this same form.


C++ Singleton Sample Implementation Code 

// Declaration
class Singleton {
public:
    static Singleton* Instance();
protected:
    Singleton();
private:
    static Singleton* _instance;
};

// Implementation
Singleton* Singleton::_instance = 0;

Singleton* Singleton::Instance() {
    if (_instance == 0) {
        _instance = new Singleton;
    }
    return _instance;
}


Let’s examine this code for a moment. This simple class has one member variable and that is a pointer to itself. Notice that the constructor is protected and that the only public method is the Instance method. In the implementation of the Instance method, there is a control block (if) that checks to see if the member variable has been initialized, and if not creates a new instance. This lazy initialization in the control block means that the Singleton instance is initialized, or created, only on the first call to the Instance() method. For many applications, this approach works just fine. But, for multithreaded applications, this approach proves to have a potentially hazardous side effect. If two threads manage to enter the control block at the same time, two instances of the member variable could be created. To solve this, you might be tempted to merely place a critical section around the control block in order to guarantee thread safety. If you do this, then all calls to the Instance method would be serialized and could have a very negative impact on performance, depending on the application. It is for this reason that another version of this pattern was created that uses something called a double-check mechanism. The next code sample shows an example of a double-check lock using Java syntax.
 
Double-Check Lock Singleton Code Using Java Syntax
// C++ port to Java
class Singleton
{
    public static Singleton Instance() {
        if (_instance == null) {
            synchronized (Class.forName("Singleton")) {
                if (_instance == null) {
                    _instance = new Singleton();
                }
            }
        }
        return _instance;
    }

    protected Singleton() {}

    private static Singleton _instance = null;
}


In the Double-Check Lock Singleton Code Using Java Syntax sample, we perform a direct port of the C++ code to Java code in order to take advantage of the Java critical section block (synchronized). The major differences are that there are no longer separate declaration and implementation sections, there are no pointer data types, and a new double-check mechanism is in place. The double check occurs at the first IF block. If the member variable is null, then the execution enters a critical section block where the member variable is double checked again. Only after passing this last test is the member variable instantiated. The general thinking is that there is no way that two threads can create two instances of the class using this technique. Also, since there is no thread blocking at the first check, most calls to this method would not get the performance hit of having to enter the lock. Currently, this technique is widely used in many Java applications when implementing a Singleton pattern. This technique is subtle but flawed. Some optimizing compilers can optimize out or reorder the lazy initialization code and reintroduce the thread safety problem. For a more in-depth explanation, see "
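Since these are C# notes, here is a minimal thread-safe C# sketch of the same pattern (one common idiom, not the only one): a static readonly field lets the CLR perform the lazy, thread-safe initialization, so no hand-written double-check lock is needed.

public sealed class Singleton
{
    // The CLR initializes this field in a thread-safe way before first use of the type.
    private static readonly Singleton _instance = new Singleton();

    // Private constructor prevents clients from calling new Singleton().
    private Singleton() { }

    public static Singleton Instance
    {
        get { return _instance; }
    }
}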


MVC Design Pattern


The Model-View-Controller (MVC) pattern separates the modeling of the domain, the presentation, and the actions based on user input into three separate classes [Burbeck92]:

Model. The model manages the behavior and data of the application domain, responds to requests for information about its state (usually from the view), and responds to instructions to change state (usually from the controller).

View. The view manages the display of information.

Controller. The controller interprets the mouse and keyboard inputs from the user, informing the model and/or the view to change as appropriate.

Figure 1 depicts the structural relationship between the three objects.


Figure 1: MVC class structure


It is important to note that both the view and the controller depend on the model. However, the model depends on neither the view nor the controller. This is one of the key benefits of the separation. This separation allows the model to be built and tested independent of the visual presentation. The separation between view and controller is secondary in many rich-client applications, and, in fact, many user interface frameworks implement the roles as one object. In Web applications, on the other hand, the separation between view (the browser) and controller (the server-side components handling the HTTP request) is very well defined.


Model-View-Controller is a fundamental design pattern for the separation of user interface logic from business logic. Unfortunately, the popularity of the pattern has resulted in a number of faulty descriptions. In particular, the term "controller" has been used to mean different things in different contexts. Fortunately, the advent of Web applications has helped resolve some of the ambiguity because the separation between the view and the controller is so apparent.
 Variations



In Application Programming in Smalltalk-80: How to use Model-View-Controller (MVC) [Burbeck92], Steve Burbeck describes two variations of MVC: a passive model and an active model.


The passive model is employed when one controller manipulates the model exclusively. The controller modifies the model and then informs the view that the model has changed and should be refreshed (see Figure 2). The model in this scenario is completely independent of the view and the controller, which means that there is no means for the model to report changes in its state. The HTTP protocol is an example of this. There is no simple way in the browser to get asynchronous updates from the server. The browser displays the view and responds to user input, but it does not detect changes in the data on the server. Only when the user explicitly requests a refresh is the server interrogated for changes.


 Figure 2: Behavior of the passive model


The active model is used when the model changes state without the controller's involvement. This can happen when other sources are changing the data and the changes must be reflected in the views. Consider a stock-ticker display. You receive stock data from an external source and want to update the views (for example, a ticker band and an alert window) when the stock data changes. Because only the model detects changes to its internal state when they occur, the model must notify the views to refresh the display.


However, one of the motivations of using the MVC pattern is to make the model independent of the views. If the model had to notify the views of changes, you would reintroduce the dependency you were looking to avoid. Fortunately, the Observer pattern [Gamma95] provides a mechanism to alert other objects of state changes without introducing dependencies on them. The individual views implement the Observer interface and register with the model. The model tracks the list of all observers that subscribe to changes. When a model changes, the model iterates through all registered observers and notifies them of the change. This approach is often called "publish-subscribe." The model never requires specific information about any views. In fact, in scenarios where the controller needs to be informed of model changes (for example, to enable or disable menu options), all the controller has to do is implement the Observer interface and subscribe to the model changes. In situations where there are many views, it makes sense to define multiple subjects, each of which describes a specific type of model change. Each view can then subscribe only to types of changes that are relevant to the view.
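A bare-bones C# sketch of this publish-subscribe idea (the interface and member names are illustrative, not from the article):

using System.Collections.Generic;

public interface IObserver
{
    void ModelChanged();   // called by the model when its state changes
}

public class StockModel
{
    private readonly List<IObserver> _observers = new List<IObserver>();
    private decimal _price;

    public void Subscribe(IObserver observer) { _observers.Add(observer); }

    public void SetPrice(decimal price)
    {
        _price = price;
        // Notify every registered view without knowing its concrete type.
        foreach (IObserver o in _observers)
            o.ModelChanged();
    }
}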


Figure 3 shows the structure of the active MVC using Observer and how the observer isolates the model from referencing views directly.


 Figure 3: Using Observer to decouple the model from the view in the active model


Figure 4 illustrates how the Observer notifies the views when the model changes. Unfortunately, there is no good way to demonstrate the separation of model and view in a Unified Modeling Language (UML) sequence diagram, because the diagram represents instances of objects rather than classes and interfaces.


 Figure 4: Behavior of the active model



11.Generics
Generics are the most powerful feature of C# 2.0. Generics allow you to define type-safe data structures, without committing to actual data types. This results in a significant performance boost and higher quality code, because you get to reuse data processing algorithms without duplicating type-specific code. In concept, generics are similar to C++ templates, but are drastically different in implementation and capabilities. This article discusses the problem space generics address, how they are implemented, the benefits of the programming model, and unique innovations, such as constraints, generic methods and delegates, and generic inheritance. You will also see how generics are utilized in other areas of the .NET Framework such as reflection, arrays, collections, serialization, and remoting, and how to improve on the basic offering.
Generics were added to version 2.0 of the C# language and the common language runtime (CLR). Generics introduce to the .NET Framework the concept of type parameters, which make it possible to design classes and methods that defer the specification of one or more types until the class or method is declared and instantiated by client code. For example, by using a generic type parameter T you can write a single class that other client code can use without incurring the cost or risk of runtime casts or boxing operations, as shown here:


Example1: 

    public class Col<T>
    {
        T t;
        public T Val { get { return t; } set { t = value; } }
    }

protected void Page_Load(object sender, EventArgs e)
    {
        Col<int> test=new Col<int>();
        test.Val=1;
        Response.Write(test.Val);

        Col<string> test2 = new Col<string>();
        test2.Val = "one";
        Response.Write(test2.Val);
    }



Example2: 

// Declare the generic class.
public class GenericList<T>
{
    void Add(T input) { }
}

class TestGenericList
{
    private class ExampleClass { }

    static void Main()
    {
        // Declare a list of type int.
        GenericList<int> list1 = new GenericList<int>();

        // Declare a list of type string.
        GenericList<string> list2 = new GenericList<string>();

        // Declare a list of type ExampleClass.
        GenericList<ExampleClass> list3 = new GenericList<ExampleClass>();
    }
}

Generics Implementation
In .NET 2.0, generics have native support in IL (intermediate language) and the CLR itself. When you compile generic C# server-side code, the compiler compiles it into IL, just like any other type. However, the IL only contains parameters or place holders for the actual specific types. In addition, the metadata of the generic server contains generic information.

The client-side compiler uses that generic metadata to support type safety. When the client provides a specific type instead of a generic type parameter, the client's compiler substitutes the generic type parameter in the server metadata with the specified type argument. This provides the client's compiler with type-specific definition of the server, as if generics were never involved. This way the client compiler can enforce correct method parameters, type-safety checks, and even type-specific IntelliSense.

The interesting question is how does .NET compile the generic IL of the server to machine code. It turns out that the actual machine code produced depends on whether the specified types are value or reference type. If the client specifies a value type, then the JIT compiler replaces the generic type parameters in the IL with the specific value type, and compiles it to native code. However, the JIT compiler keeps track of type-specific server code it already generated. If the JIT compiler is asked to compile the generic server with a value type it has already compiled to machine code, it simply returns a reference to that server code. Because the JIT compiler uses the same value-type-specific server code in all further encounters, there is no code bloating.

If the client specifies a reference type, then the JIT compiler replaces the generic parameters in the server IL with Object, and compiles it into native code. That code will be used in any further request for a reference type instead of a generic type parameter. Note that this way the JIT compiler only reuses actual code. Instances are still allocated according to their size off the managed heap, and there is no casting.

Generics Benefits
Generics in .NET let you reuse code and the effort you put into implementing it. The types and internal data can change without causing code bloat, regardless of whether you are using value or reference types. You can develop, test, and deploy your code once, reuse it with any type, including future types, all with full compiler support and type safety. Because the generic code does not force the boxing and unboxing of value types, or the down casting of reference types, performance is greatly improved. With value types there is typically a 200 percent performance gain, and with reference types you can expect up to a 100 percent performance gain in accessing the type (of course, the application as a whole may or may not experience any performance improvements). The source code available with this article includes a micro-benchmark application, which executes a stack in a tight loop. The application lets you experiment with value and reference types on an Object-based stack and a generic stack, as well as changing the number of loop iterations to see the effect generics have on performance.



What Generics Cannot Do
Under .NET 2.0, you cannot define generic Web services. That is, Web methods that use generic type parameters. The reason is that none of the Web service standards support generic services.

You also cannot use generic types on a serviced component. The reason is that generics do not meet COM visibility requirements, which are required for serviced components (just like you could not use C++ templates in COM or COM+).


Generic Constraints
With C# generics, the compiler compiles the generic code into IL independent of any type arguments that the clients will use. As a result, the generic code could try to use methods, properties, or members of the generic type parameters that are incompatible with the specific type arguments the client uses. This is unacceptable because it amounts to lack of type safety. In C# you need to instruct the compiler which constraints the client-specified types must obey in order for them to be used instead of the generic type parameters. There are three types of constraints. A derivation constraint indicates to the compiler that the generic type parameter derives from a base type such as an interface or a particular base class. A default constructor constraint indicates to the compiler that the generic type parameter exposes a default public constructor (a public constructor with no parameters). A reference/value type constraint constrains the generic type parameter to be a reference or a value type. A generic type can employ multiple constraints, and you even get IntelliSense reflecting the constraints when using the generic type parameter, such as suggesting methods or members from the base type.

It is important to note that although constraints are optional, they are often essential when developing a generic type. Without them, the compiler takes the more conservative, type-safe approach and only allows access to Object-level functionality in your generic type parameters. Constraints are part of the generic type metadata so that the client-side compiler can take advantage of them as well. The client-side compiler only allows the client developer to use types that comply with the constraints, thus enforcing type safety.
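A short sketch combining the three kinds of constraints described above (the class and type parameter names are made up for illustration):

// K gets a derivation constraint; T gets a reference-type constraint
// plus a default constructor constraint.
public class Repository<K, T>
    where K : IComparable<K>
    where T : class, new()
{
    public T CreateItem()
    {
        return new T();   // allowed only because of the new() constraint
    }
}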

An example will go a long way to explain the need and use of constraints. Suppose you would like to add indexing ability or searching by key to the linked list of Code block 3:
public class LinkedList<K,T>
{
    T Find(K key)
    {...}

    public T this[K key]
    {
        get { return Find(key); }
    }
}

This allows the client to write the following code:
LinkedList<int,string> list = new LinkedList<int,string>();

list.AddHead(123,"AAA");
list.AddHead(456,"BBB");
string item = list[456];
Debug.Assert(item == "BBB");
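A hedged sketch of how a constraint would be applied to this class: if Find needs to compare keys, a derivation constraint can require the key type K to implement IComparable<K> (the exact constraint in the full article may differ, and the list internals are omitted here):

public class LinkedList<K,T> where K : System.IComparable<K>
{
    // Node storage and AddHead are omitted; only the constrained lookup is sketched.
    T Find(K key)
    {
        // The constraint guarantees CompareTo is available on K, for example:
        // if (current.Key.CompareTo(key) == 0) return current.Item;
        return default(T);
    }

    public T this[K key]
    {
        get { return Find(key); }
    }
}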


http://msdn.microsoft.com/en-us/library/ms379564(v=vs.80).aspx





12.Delegates
A delegate is a type that references a method. Once a delegate is assigned a method, it behaves exactly like that method. The delegate method can be used like any other method, with parameters and a return value, as in this example:

public delegate void Del(string message);
// Create a method for a delegate.
public static void DelegateMethod(string message)
{
System.Console.WriteLine(message);
}
// Instantiate the delegate.
Del handler = DelegateMethod;

// Call the delegate.
handler("Hello World");


Any method that matches the delegate's signature, which consists of the return type and parameters, can be assigned to the delegate. This makes it possible to programmatically change method calls, and also to plug new code into existing classes. As long as you know the delegate's signature, you can assign your own delegated method.

This ability to refer to a method as a parameter makes delegates ideal for defining callback methods. For example, a sort algorithm could be passed a reference to the method that compares two objects. Separating the comparison code allows the algorithm to be written in a more general way.
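As a hedged sketch of that callback idea, the framework's List<T>.Sort overload accepts a Comparison<T> delegate, so the comparison logic is supplied separately from the sort itself (the method and variable names below are illustrative):

using System;
using System.Collections.Generic;

class SortCallbackDemo
{
    // A method matching the Comparison<string> delegate signature.
    static int CompareByLength(string x, string y)
    {
        return x.Length.CompareTo(y.Length);
    }

    static void Main()
    {
        List<string> names = new List<string> { "Christopher", "Al", "Maria" };

        // The sort algorithm calls back into CompareByLength for each comparison.
        names.Sort(CompareByLength);

        foreach (string name in names)
            Console.WriteLine(name);   // Al, Maria, Christopher
    }
}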


Delegates have the following properties:
  • Delegates are similar to C++ function pointers, but are type safe.
  • Delegates allow methods to be passed as parameters.
  • Delegates can be used to define callback methods.
  • Delegates can be chained together; for example, multiple methods can be called on a single event.
  • Methods don't need to match the delegate signature exactly. For more information, see Covariance and Contravariance.
  • C# version 2.0 introduces the concept of Anonymous Methods, which permit code blocks to be passed as parameters in place of a separately defined method. 

 Multicast Delegates
Multicast delegates provide functionality to execute more than one method.

Internally, a linked list of delegates (called Invocation List) is stored, and when the multicast delegate is invoked, the list of delegates will be executed in sequence.



The following example demonstrates how to compose multicast delegates. A useful property of delegate objects is that multiple delegates can be combined into a single delegate instance (multicast) by using the + operator. A composed delegate calls the two delegates it was composed from. Only delegates of the same type can be composed.

The - operator can be used to remove a component delegate from a composed delegate.

delegate void Del(string s);

class TestClass
{
    static void Hello(string s)
    {
        System.Console.WriteLine(" Hello, {0}!", s);
    }

    static void Goodbye(string s)
    {
        System.Console.WriteLine(" Goodbye, {0}!", s);
    }

    static void Main()
    {
        Del a, b, c, d;

        // Create the delegate object a that references
        // the method Hello:
        a = Hello;

        // Create the delegate object b that references
        // the method Goodbye:
        b = Goodbye;

        // The two delegates, a and b, are composed to form c:
        c = a + b;

        // Remove a from the composed delegate, leaving d,
        // which calls only the method Goodbye:
        d = c - a;

        System.Console.WriteLine("Invoking delegate a:");
        a("A");
        System.Console.WriteLine("Invoking delegate b:");
        b("B");
        System.Console.WriteLine("Invoking delegate c:");
        c("C");
        System.Console.WriteLine("Invoking delegate d:");
        d("D");
    }
}




asynchronous callback
Delegate types are derived from the Delegate class in the .NET Framework. Delegate types are sealed (they cannot be derived from), and it is not possible to derive custom classes from Delegate. Because the instantiated delegate is an object, it can be passed as a parameter, or assigned to a property. This allows a method to accept a delegate as a parameter, and call the delegate at some later time. This is known as an asynchronous callback, and is a common method of notifying a caller when a long process has completed.

Example :
public void MethodWithCallback(int param1, int param2, Del callback)
{
    callback("The number is: " + (param1 + param2).ToString());
}

We can then pass the delegate created above to that method: 
MethodWithCallback(1, 2, handler); 

result : The number is: 3  





Output (of the multicast delegate example above):
Invoking delegate a:
Hello, A!
Invoking delegate b:
Goodbye, B!
Invoking delegate c:
Hello, C!
Goodbye, C!
Invoking delegate d:
Goodbye, D! 



When to Use Delegates Instead of Interfaces (C# Programming Guide)
Both delegates and interfaces allow a class designer to separate type declarations and implementation. A given interface can be inherited and implemented by any class or struct; a delegate can be created for a method on any class, as long as the method fits the method signature for the delegate. An interface reference or a delegate can be used by an object with no knowledge of the class that implements the interface or delegate method. Given these similarities, when should a class designer use a delegate and when should they use an interface?

         Use a delegate when: 
  • An eventing design pattern is used.
  • It is desirable to encapsulate a static method.
  • The caller has no need to access other properties, methods, or interfaces on the object implementing the method.
  • Easy composition is desired.
  • A class may need more than one implementation of the method.

    Use an interface when:
  •  There are a group of related methods that may be called.
  • A class only needs one implementation of the method.
  • The class using the interface will want to cast that interface to other interface or class types.
  • The method being implemented is linked to the type or identity of the class: for example, comparison methods.


Event + Delegate :
An event in C# is a way for a class to provide notifications to clients of that class when some interesting thing happens to an object. The most familiar use for events is in graphical user interfaces; typically, the classes that represent controls in the interface have events that are notified when the user does something to the control (for example, click a button).

Events, however, need not be used only for graphical interfaces. Events provide a generally useful way for objects to signal state changes that may be useful to clients of that object. Events are an important building block for creating classes that can be reused in a large number of different programs.

Events are declared using delegates. If you have not yet studied the Delegates Tutorial, you should do so before continuing. Recall that a delegate object encapsulates a method so that it can be called anonymously. An event is a way for a class to allow clients to give it delegates to methods that should be called when the event occurs. When the event occurs, the delegate(s) given to it by its clients are invoked.
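A minimal hedged sketch of the event-plus-delegate pattern described above (the type and member names are illustrative, not taken from any framework class):

using System;

// The delegate type defines the signature that event handlers must match.
public delegate void TemperatureChangedHandler(int newTemperature);

public class Thermostat
{
    // Clients give the class delegates by subscribing to this event.
    public event TemperatureChangedHandler TemperatureChanged;

    public void SetTemperature(int value)
    {
        // When the interesting thing happens, the client-supplied delegates are invoked.
        if (TemperatureChanged != null)
            TemperatureChanged(value);
    }
}

class EventDemo
{
    static void Main()
    {
        Thermostat t = new Thermostat();
        t.TemperatureChanged += delegate(int temp)
        {
            Console.WriteLine("Temperature is now {0}", temp);
        };
        t.SetTemperature(21);   // prints: Temperature is now 21
    }
}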


Named Methods + Anonymous Methods :
A delegate can be associated with a named method. When you instantiate a delegate using a named method, the method is passed as a parameter.
This is called using a named method. Delegates constructed with a named method can encapsulate either a static method or an instanced method.
Using named methods is the only way to instantiate a delegate in previous versions of C#.
However, in a situation where creating a new method is undesirable overhead, C# 2.0 allows you to instantiate a delegate and specify a code block immediately that the delegate will process when called. These are called Anonymous Methods.

Example: 

// Declare a delegate
delegate void Printer(string s);

class TestClass
{
    static void Main()
    {
        // Instantiate the delegate type using an anonymous method:
        Printer p = delegate(string j)
        {
            System.Console.WriteLine(j);
        };

        // Results from the anonymous delegate call:
        p("The delegate using the anonymous method is called.");

        // The delegate instantiation using a named method "DoWork":
        p = new Printer(TestClass.DoWork);

        // Results from the old style delegate call:
        p("The delegate using the named method is called.");
    }

    // The method associated with the named delegate:
    static void DoWork(string k)
    {
        System.Console.WriteLine(k);
    }
}

Output

The delegate using the anonymous method is called.

The delegate using the named method is called. 


NOTES:The scope of the parameters of an anonymous method is the anonymous-method-block.
It is an error to have a jump statement, such as goto, break, or continue, inside the anonymous method block whose target is outside the block. It is also an error to have a jump statement, such as goto, break, or continue, outside the anonymous method block whose target is inside the block. The local variables and parameters whose scope contains an anonymous method declaration are called outer or captured variables of the anonymous method.

For example, in the following code segment, n is an outer variable:
int n = 0;
Del d = delegate() { System.Console.WriteLine("Copy #:{0}", ++n); };


Unlike local variables, the lifetime of the outer variable extends until the delegates that reference the anonymous methods are eligible for garbage collection. A reference to n is captured at the time the delegate is created.
An anonymous method cannot access the ref or out parameters of an outer scope.
No unsafe code can be accessed within the anonymous-method-block.


Covariance and Contravariance in Delegates
Covariance and contravariance provide a degree of flexibility when matching method signatures with delegate types. Covariance permits a method to have a more derived return type than what is defined in the delegate. Contravariance permits a method with parameter types that are less derived than in the delegate type.
Example 1 (Covariance)
class Mammals
{
}

class Dogs : Mammals
{
}

class Program
{
// Define the delegate.
public delegate Mammals HandlerMethod();

public static Mammals FirstHandler()
{
return null;
}

public static Dogs SecondHandler()
{
return null;
}

static void Main()
{
HandlerMethod handler1 = FirstHandler;

// Covariance allows this delegate.
HandlerMethod handler2 = SecondHandler;
}
}



Example 2 (Contravariance)
System.DateTime lastActivity;
public Form1()
{
InitializeComponent();

lastActivity = new System.DateTime();
this.textBox1.KeyDown += this.MultiHandler; //works with KeyEventArgs
this.button1.MouseClick += this.MultiHandler; //works with MouseEventArgs

}

// Event handler for any event with an EventArgs or
// derived class in the second parameter
private void MultiHandler(object sender, System.EventArgs e)
{
lastActivity = System.DateTime.Now;
}



Generic Delegates
A delegate can define its own type parameters. Code that references the generic delegate can specify the type argument to create a closed constructed type, just like when instantiating a generic class or calling a generic method, as shown in the following example:

public delegate void Del<T>(T item);
public static void Notify(int i) { }

Del<int> m1 = new Del<int>(Notify);


C# version 2.0 has a new feature called method group conversion, which applies to concrete as well as generic delegate types, and enables you to write the previous line with this simplified syntax:

Del<int> m2 = Notify;

Generic delegates are especially useful in defining events based on the typical design pattern because the sender argument can be strongly typed and no longer has to be cast to and from Object.
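A hedged sketch of that idea: a generic event delegate whose sender parameter is strongly typed, so handlers do not cast from Object (the delegate and class names below are illustrative):

using System;

// A generic event delegate; TSender keeps the sender strongly typed.
public delegate void ChangedEventHandler<TSender, TData>(TSender sender, TData data);

public class Ticker
{
    // Handlers receive a Ticker, not an Object, so no casting is required.
    public event ChangedEventHandler<Ticker, decimal> PriceChanged;

    public void Update(decimal price)
    {
        if (PriceChanged != null)
            PriceChanged(this, price);
    }
}

class GenericEventDemo
{
    static void Main()
    {
        Ticker t = new Ticker();
        t.PriceChanged += delegate(Ticker sender, decimal price)
        {
            Console.WriteLine("{0} changed to {1}", sender.GetType().Name, price);
        };
        t.Update(10.5m);
    }
}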


13.ASP.NET Authorization

Authorization determines whether an identity should be granted access to a specific resource. In ASP.NET, there are two ways to authorize access to a given resource:
  • File authorization   File authorization is performed by the FileAuthorizationModule. It checks the access control list (ACL) of the .aspx or .asmx handler file to determine whether a user should have access to the file. ACL permissions are verified for the user's Windows identity (if Windows authentication is enabled) or for the Windows identity of the ASP.NET process. For more information, see ASP.NET Impersonation.
  • URL authorization   URL authorization is performed by the UrlAuthorizationModule, which maps users and roles to URLs in ASP.NET applications. This module can be used to selectively allow or deny access to arbitrary parts of an application (typically directories) for specific users or roles.
With URL authorization, you explicitly allow or deny access to a particular directory by user name or role. To do so, you create an authorization section in the configuration file for that directory. To enable URL authorization, you specify a list of users or roles in the allow or deny elements of the authorization section of a configuration file. The permissions established for a directory also apply to its subdirectories, unless configuration files in a subdirectory override them.
The following shows the syntax for the authorization section:
<authorization>
  <[allow|deny] users roles verbs />
</authorization>
The allow or deny element is required. You must specify either the users or the roles attribute. Both can be included, but both are not required. The verbs attribute is optional.
The allow and deny elements grant and revoke access, respectively. Each element supports the attributes shown in the following table:
Attribute | Description
users     | Identifies the targeted identities (user accounts) for this element. Anonymous users are identified using a question mark (?). You can specify all authenticated users using an asterisk (*).
roles     | Identifies a role (a RolePrincipal object) for the current request that is allowed or denied access to the resource. For more information, see Managing Authorization Using Roles.
verbs     | Defines the HTTP verbs to which the action applies, such as GET, HEAD, and POST. The default is "*", which specifies all verbs.
The following example grants access to the Kim identity and members of the Admins role, and denies access to the John identity (unless the John identity is included in the Admins role) and to all anonymous users:
<authorization> <allow users="Kim"/> <allow roles="Admins"/> <deny users="John"/> <deny users="?"/> </authorization>
The following authorization section shows how to allow access to the John identity and deny access to all other users:
<authorization> <allow users="John"/> <deny users="*"/> </authorization>
You can specify multiple entities for both the users and roles attributes by using a comma-separated list, as shown in the following example:
<allow users="John, Kim, contoso\Jane"/>
Note that if you specify a domain account name, the name must include both the domain and user name (contoso\Jane).
The following example allows all users to perform an HTTP GET for a resource, but allows only the Kim identity to perform a POST operation:
<authorization> <allow verbs="GET" users="*"/> <allow verbs="POST" users="Kim"/> <deny verbs="POST" users="*"/> </authorization>
Rules are applied as follows:
  • Rules contained in application-level configuration files take precedence over inherited rules. The system determines which rule takes precedence by constructing a merged list of all rules for a URL, with the most recent rules (those nearest in the hierarchy) at the head of the list.
  • Given a set of merged rules for an application, ASP.NET starts at the head of the list and checks rules until the first match is found. The default configuration for ASP.NET contains an <allow users="*"> element, which authorizes all users. (By default, this rule is applied last.) If no other authorization rules match, the request is allowed. If a match is found and the match is a deny element, the request is returned with the 401 HTTP status code. If an allow element matches, the module allows the request to be processed further.
In a configuration file, you can also create a location element to specify a particular file or directory to which the settings in that location element should apply.
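Beyond these declarative rules, role membership can also be checked in page code; a minimal hedged sketch (assuming role management is enabled for the application):

using System;
using System.Web.UI;

public partial class AdminPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // User is the IPrincipal that ASP.NET established for the current request.
        if (User.IsInRole("Admins"))
            Response.Write("Welcome, administrator.");
        else if (!User.Identity.IsAuthenticated)
            Response.Write("You are browsing anonymously.");
    }
}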




14.ASP.NET Authentication

Authentication is the process of obtaining identification credentials such as name and password from a user and validating those credentials against some authority. If the credentials are valid, the entity that submitted the credentials is considered an authenticated identity. Once an identity has been authenticated, the authorization process determines whether that identity has access to a given resource.
ASP.NET implements authentication through authentication providers, the code modules that contain the code necessary to authenticate the requestor's credentials. The topics in this section describe the authentication providers built into ASP.NET.
Term | Definition
Windows authentication | Provides information on how to use Windows authentication in conjunction with Microsoft Internet Information Services (IIS) authentication to secure ASP.NET applications.
Forms authentication | Provides information on how to create an application-specific login form and perform authentication using your own code. A convenient way to work with forms authentication is to use ASP.NET membership and ASP.NET login controls, which together provide a way to collect user credentials, authenticate them, and manage them, using little or no code. For more information, see Managing Users by Using Membership and ASP.NET Login Controls Overview.
You might also consider using Windows Live ID for user authentication. For information about how to use Windows Live ID to authenticate users for your website, see Windows Live ID SDK.
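As a hedged illustration of forms authentication used with ASP.NET membership, a login button handler might validate credentials and issue the authentication ticket like this (the control names txtUser, txtPassword, and lblMessage are illustrative and would be defined in the .aspx page):

using System;
using System.Web.Security;

public partial class LoginPage : System.Web.UI.Page
{
    protected void btnLogin_Click(object sender, EventArgs e)
    {
        // Validate the submitted credentials against the configured membership provider.
        if (Membership.ValidateUser(txtUser.Text, txtPassword.Text))
        {
            // Issue the forms authentication cookie and return to the originally requested page.
            FormsAuthentication.RedirectFromLoginPage(txtUser.Text, false);
        }
        else
        {
            lblMessage.Text = "Invalid user name or password.";
        }
    }
}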



15.How to set up SSL by using IIS 5.0 and Certificate Server 2.0


  1. First, the Web server must make a certificate request. To do this, follow these steps:
    1. Start the Internet Service Manager (ISM), which loads the Internet Information Server snap-in for the Microsoft Management Console (MMC).
    2. Right-click the Web site on which you want to enable SSL, and click Properties.
    3. Click the Directory Security tab, and then click Server Certificate to start the Web Server Certificate Wizard.
    4. Click Next to start the wizard, and select Create a new certificate.
    5. Click Next, and select Prepare the request now, but send it later.
    6. Click Next, and give your certificate a name. You may want to match it with the name of the Web site. Now, select a bit length; the higher the bit length, the stronger the certificate encryption. Select Server Gated Cryptography if your users may be coming from countries with encryption restrictions.
    7. Click Next, and type your Organization and Organizational Unit. These values do not need to match any Active Directory entries.
    8. Click Next, and enter the common name. The common name must match the fully qualified domain name of the server as listed in DNS. For example, if the URL is https://www.mydomain.com/securedir, then the common name must be www.mydomain.com.
    9. Click Next, and type your country, state, and city or locality. Type the full name of your state here; do not abbreviate.
    10. Click Next, and select a location and file name to save your request to.
    11. Click Next twice, and then click Finish to close the wizard.
  2. Process your request through Certificate Server. To do this, follow these steps:
    1. Browse to http://CAServerName/CertSrv, and select Request a certificate.

      Note Do not use "localhost" as the server name. If you browse from the Certificate Server computer, use the computer name instead.
    2. Click Next and select Advanced request.
    3. Click Next and select Submit a certificate request using a base64 encoded PKCS #10 file or a renewal request using a base64 encoded PKCS #7 file. Click Next, and open the request file that you saved from the Web Certificate Wizard in Notepad. Paste the entire text of the file, including the BEGIN and END lines, into the Base64 Encoded Certificate Request text box.

      Note Depending on the configuration of the Certificate Server service, you may see radio buttons on this page instead of Additional Attributes. If the "Submit a Certificate Request or Renewal Request" page includes these radio buttons, select the Web server option. The default setting, Admin, will cause the SSL Web service to fail.
    4. Click Submit. You may be presented with a Certificate Pending dialog box. If you are prompted for download, skip to step 2i.
    5. Close your browser. On the Certificate Server computer, open the Certification Authority MMC.
    6. Expand the tree underneath the server name, and select the Pending Requests folder. Right-click the certificate that you just submitted (scroll to the right for more information to determine which certificate is yours if there are several pending), click All Tasks, and then click Issue. You may now close the Certification Authority MMC.
    7. Open a new browser window, and browse to the URL that is listed in step a. Select Check on a pending certificate.
    8. Click Next, and select the request that you made earlier.
    9. Click Next, select DER encoded, and then click the Download CA certificate link. Save the certificate file to your Web server's local drive, and close your Web browser.
  3. Now, finish processing the request within IIS to install the certificate to the server, and enable SSL.
    1. Open the Internet Information Services MMC, right-click the Web site on which you want to enable SSL, and click Properties.
    2. Click the Directory Security tab, then click Server Certificate.
    3. Click Next, and select Process the pending request and install the certificate.
    4. Click Next, and enter the path and file name of the certificate that you saved in the last section.
    5. Click Next twice, and then click Finish to complete the wizard.
    6. Click the Web Site tab, and make sure that the SSL Port text box is populated with the port you would like SSL to run on. The default (and recommended) port is 443.
    7. Click OK to close the Web site Properties dialog box.
You can now use SSL on your server. Test the setup by connecting to the Web site's home page by using https instead of http. You have a valid connection if the page comes up and a small lock appears in the status bar in the lower right-hand corner of the browser.



16.Working with more than one Web.config file


Introduction

I would like to share what I have understood about working with more than one Web.config file from my latest ASP.NET application. We planned to have different Web.config files for sub-folders in the application root folder. It helps us to have small and easily maintainable configuration files.

Hierarchy of Web.config Files

System-wide configuration settings are defined in the Machine.config file for the .NET Framework. The Machine.config file is located in the C:\WINDOWS\Microsoft.NET\Framework\v1.1.4322\CONFIG folder. The settings defined in this file are applicable to all ASP.NET applications in that system.
We can override these default settings by including a Web.config file in the application's root folder.
By including Web.config files in sub-folders, we can override the settings defined in the Web.config file in the application's root folder.
The following are sample section declarations from a Machine.config file:
<section name="processModel" 
  type="System.Web.Configuration.ProcessModelConfigurationHandler, 
        System.Web, Version=1.0.5000.0, Culture=neutral, 
        PublicKeyToken=b03f5f7f11d50a3a" 
  allowDefinition="MachineOnly"/>
 
<section name="sessionState" 
  type="System.Web.SessionState.SessionStateSectionHandler, 
        System.Web, Version=1.0.5000.0, Culture=neutral, 
        PublicKeyToken=b03f5f7f11d50a3a" 
  allowDefinition="MachineToApplication"/>
 
<section name="appSettings" 
  type="System.Configuration.NameValueFileSectionHandler, System,
        Version=1.0.5000.0, Culture=neutral, 
        PublicKeyToken=b77a5c561934e089"/>
There is an attribute allowDefinition specified in the first two section declarations with the values MachineOnly and MachineToApplication.

What does it mean?

If allowDefinition="MachineOnly", then we can not override this section either in application level or in folder level. The only section declared in the Machine.config file with this settings is processModel.
If allowDefinition="MachineToApplication", then we can override these sections by the root directoryWeb.config. Sections with this setting in Machine.config are authenticationmachineKeysessionState,trust, and securityPolicy.
If allowDefinition attribute is omitted in a section declaration of the Machine.config file, we can override that section at any level.
We can override the section appSettings at any level and can access it by usingConfigurationSettings.AppSettings easily.

What is there in the sample project?

The sample source code is a simple ASP.NET web application with three Web Forms and three Web.config files. The root folder has a sub folder SubFolder_L1, which in turn has SubFolder_L2; each folder has one Web Form and one Web.config file.
The Web.config files have different and overridden keys. The Web.config file in the root folder has the following appSettings declarations:
<appSettings>
    <add key="root" value="Root folder's configuration file."/>
    <add key="color" value="Blue"/>
</appSettings>
Web.config file in Subfolder_L1 has the following appSettings declarations:
<appSettings>
    <add key="subfolder_l1" value="Subfolder_L1\web.config file."/>
    <add key="color" value="Green"/>
</appSettings>
The color setting is overridden by the subfolder configuration file. We can read the root key from Subfolder_L1 or Subfolder_L2 with the following code:
lblConfig.Text = ConfigurationSettings.AppSettings["root"];
But we cannot read configuration settings defined in Subfolder_L1's Web.config file from the root folder.
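For example, a hedged sketch of reading the overridden color key: the nearest Web.config wins, so the same line returns different values depending on where the page lives.

// In a Web Form located under Subfolder_L1 (or Subfolder_L2):
lblConfig.Text = ConfigurationSettings.AppSettings["color"];   // "Green"

// The same line in a Web Form in the root folder returns "Blue".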






17.Dot Net Versions Difference(5:57 PM 12/01/2013)
Overview of .NET Framework release history
Generation | Version number  | Release date     | Development tool        | Distributed with
1.0        | 1.0.3705.0      | 13 February 2002 | Visual Studio .NET      | N/A
1.1        | 1.1.4322.573    | 24 April 2003    | Visual Studio .NET 2003 | Windows Server 2003
2.0        | 2.0.50727.42    | 7 November 2005  | Visual Studio 2005      | Windows Server 2003 R2
3.0        | 3.0.4506.30     | 6 November 2006  | Expression Blend        | Windows Vista, Windows Server 2008
3.5        | 3.5.21022.8     | 19 November 2007 | Visual Studio 2008      | Windows 7, Windows Server 2008 R2
4.0        | 4.0.30319.1     | 12 April 2010    | Visual Studio 2010      | N/A
4.5        | 4.5.50709.17929 | 15 August 2012   | Visual Studio 2012      | Windows 8, Windows Server 2012



.NET Framework 1.1
Built-in support for mobile ASP.NET controls. Previously available as an add-on for .NET Framework, now part of the framework.
Security changes – enable Windows Forms assemblies to execute in a semi-trusted manner from the Internet, and enable Code Access Security in ASP.NET applications.
Built-in support for ODBC and Oracle databases. Previously available as an add-on for .NET Framework 1.0, now part of the framework.
.NET Compact Framework – a version of the .NET Framework for small devices.
Internet Protocol version 6 (IPv6) support.
Numerous API changes.

.NET Framework 2.0
Generics
Language support for generics built directly into the .NET CLR.
Full 64-bit support for both the x64 and the IA-64 hardware platforms.
Numerous API changes.
SQL Server integration – .NET 2.0, VS 2005, and SQL Server 2005 are all tied together. This means that instead of using T-SQL, one can build stored procedures and triggers in any of the .NET-compatible languages.
A new hosting API for native applications wishing to host an instance of the .NET runtime. The new API gives a fine grain control on the behavior of the runtime with regards to multithreading, memory allocation, assembly loading and more (detailed reference). It was initially developed to efficiently host the runtime in Microsoft SQL Server, which implements its own scheduler and memory manager.
Many additional and improved ASP.NET web controls.
New data controls with declarative data binding.
New personalization features for ASP.NET, such as support for themes, skins, master pages and webparts.
.NET Micro Framework – a version of the .NET Framework related to the Smart Personal Objects Technology initiative.
Membership provider
Partial classes
Nullable types
Anonymous methods
Iterators
Data tables


.NET Framework 3.0

.NET Framework 3.0 consists of four major new components:
Windows Presentation Foundation (WPF), formerly code-named Avalon; a new user interface subsystem and API based on XML and vector graphics, which uses 3D computer graphics hardware and Direct3D technologies. See WPF SDK for developer articles and documentation on WPF.
Windows Communication Foundation (WCF), formerly code-named Indigo; a service-oriented messaging system which allows programs to interoperate locally or remotely similar to web services.
Windows Workflow Foundation (WF) allows for building of task automation and integrated transactions using workflows.
Windows CardSpace, formerly code-named InfoCard; a software component which securely stores a person's digital identities and provides a unified interface for choosing the identity for a particular transaction, such as logging in to a website.


.NET Framework 3.5
1. Added new features such as AJAX-enabled Web sites and LINQ (a brief LINQ sketch follows this list)
2. Source code of Base Class Library (BCL) has been partially released
3. New .NET Compact Framework 3.5 released
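A brief hedged sketch of the LINQ feature mentioned in item 1, using LINQ to Objects over an in-memory array:

using System;
using System.Linq;

class LinqDemo
{
    static void Main()
    {
        int[] numbers = { 5, 2, 8, 1, 9 };

        // Query syntax: filter and order the array without writing explicit loops.
        var small = from n in numbers
                    where n < 6
                    orderby n
                    select n;

        foreach (int n in small)
            Console.WriteLine(n);   // 1, 2, 5
    }
}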

.NET Framework 4.0
Key focuses for this release are:
Parallel Extensions to improve support for parallel computing, which target multi-core or distributed systems.[13] To this end, technologies like PLINQ (Parallel LINQ),[14] a parallel implementation of the LINQ engine, and the Task Parallel Library, which exposes parallel constructs via method calls,[15] are included (see the sketch after this list).
New Visual Basic .NET and C# language features, such as implicit line continuations, dynamic dispatch, named parameters, and optional parameters.
Support for Code Contracts.
Inclusion of new types to work with arbitrary-precision arithmetic (System.Numerics.BigInteger) and complex numbers (System.Numerics.Complex).
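A minimal hedged sketch of the Parallel Extensions mentioned above, combining Parallel.For from the Task Parallel Library with a PLINQ query:

using System;
using System.Linq;
using System.Threading.Tasks;

class ParallelDemo
{
    static void Main()
    {
        // Task Parallel Library: run loop iterations across the available cores.
        Parallel.For(0, 5, i => Console.WriteLine("Iteration {0}", i));

        // PLINQ: a parallel implementation of the LINQ engine.
        long sumOfSquares = Enumerable.Range(1, 1000)
                                      .AsParallel()
                                      .Select(n => (long)n * n)
                                      .Sum();
        Console.WriteLine(sumOfSquares);
    }
}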


.NET Framework 4.5
Core Features
Ability to limit how long the regular expression engine will attempt to resolve a regular expression before it times out.
Ability to define the culture for an application domain.
Console support for Unicode (UTF-16) encoding.
Support for versioning of cultural string ordering and comparison data.
Better performance when retrieving resources.
Zip compression improvements to reduce the size of a compressed file.
Ability to customize a reflection context to override default reflection behavior through the CustomReflectionContext class.
Asynchronous operations
In the .NET Framework 4.5, new asynchronous features were added to the C# and Visual Basic languages. These features add a task-based model for performing asynchronous operations.
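A small hedged sketch of the task-based model using the async and await keywords (the URL is only a placeholder):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class AsyncDemo
{
    // The async modifier and await operator express an asynchronous operation
    // without manually wiring up callbacks.
    static async Task<int> GetPageLengthAsync(string url)
    {
        using (HttpClient client = new HttpClient())
        {
            string content = await client.GetStringAsync(url);
            return content.Length;
        }
    }

    static void Main()
    {
        int length = GetPageLengthAsync("http://example.com/").Result;
        Console.WriteLine("Downloaded {0} characters.", length);
    }
}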
ASP.NET
Support for new HTML5 form types.
Support for model binders in Web Forms. These let you bind data controls directly to data-access methods, and automatically convert user input to and from .NET Framework data types.
Support for unobtrusive JavaScript in client-side validation scripts.
Improved handling of client script through bundling and minification for improved page performance.
Integrated encoding routines from the AntiXSS library (previously an external library) to protect from cross-site scripting attacks.
Support for WebSocket protocol.
Support for reading and writing HTTP requests and responses asynchronously.
Support for asynchronous modules and handlers.
Support for content distribution network (CDN) fallback in the ScriptManager control.




18.Simplifying Deployment and Solving DLL Hell

Introduction

The Microsoft® .NET Framework introduces several new features aimed at simplifying application deployment and solving DLL Hell. Both end users and developers are familiar with the versioning and deployment issues that can arise with today's component-based systems. For example, virtually every end user has installed a new application on their machine, only to find that an existing application mysteriously stops working. Most developers have also spent time with Regedit, trying to keep all the necessary registry entries consistent in order to activate a COM class.
The design guidelines and implementation techniques used in the .NET Framework to solve DLL Hell are built on the work done in Microsoft Windows® 2000, as described by Rick Anderson in The End of DLL Hell, and by David D'Souza, BJ Whalen, and Peter Wilson in Implementing Side-by-Side Component Sharing in Applications (Expanded). The .NET Framework extends this previous work by providing features including application isolation and side-by-side components for applications built with managed code on the .NET platform. Also, note that Windows XP provides the same isolation and versioning features for unmanaged code, including COM classes and Win32 DLLs (see How To Build and Service Isolated Applications and Side-by-Side Assemblies for Windows XP for details).
This article introduces the concept of an assembly and describes how .NET uses assemblies to solve versioning and deployment problems. In particular, we'll discuss how assemblies are structured, how they are named, and how compilers and the Common Language Runtime (CLR) use assemblies to record and enforce version dependencies between pieces of an application. We'll also discuss how applications and administrators can customize versioning behavior through what we call version policies.
After assemblies are introduced and described, several deployment scenarios will be presented, providing a sampling of the various packaging and distribution options available in the .NET Framework.

Problem Statement

Versioning

From a customer perspective, the most common versioning problem is what we call DLL Hell. Simply stated, DLL Hell refers to the set of problems caused when multiple applications attempt to share a common component like a dynamic-link library (DLL) or a Component Object Model (COM) class. In the most typical case, one application will install a new version of the shared component that is not backward compatible with the version already on the machine. Although the application that has just been installed works fine, existing applications that depended on a previous version of the shared component might no longer work. In some cases, the cause of the problem is even more subtle. For example, consider the scenario where a user downloads a Microsoft ActiveX® control as a side effect of visiting some Web site. When the control is downloaded it will replace any existing versions of the control that were present on the machine. If an application that has been installed on the machine happens to use this control, it too might potentially stop working.
In many cases there is a significant delay before a user discovers that an application has stopped working. As a result, it is often difficult to remember when a change was made to the machine that could have affected the app. A user may remember installing something a week ago, but there is no obvious correlation between that installation and the behavior they are now seeing. To make matters worse, there are few diagnostic tools available today to help the user (or the support person who is helping them) determine what is wrong.
The reason for these issues is that version information about the different components of an application isn't recorded or enforced by the system. Also, changes made to the system on behalf of one application will typically affect all applications on the machine—building an application today that is completely isolated from changes is not easy.
One reason why it's hard to build an isolated application is that the current run-time environment typically allows the installation of only a single version of a component or an application. This restriction means that component authors must write their code in a way that remains backward compatible, otherwise they risk breaking existing applications when they install a new component. In practice, writing code that is forever backward compatible is extremely difficult, if not impossible. In .NET, the notion of side by side is core to the versioning story. Side by side is the ability to install and run multiple versions of the same component on the machine at the same time. With components that support side-by-side, authors aren't necessarily tied to maintaining strict backward compatibility because different applications are free to use different versions of a shared component.

Deployment and Installation

Installing an application today is a multi-step process. Typically, installing an application involves copying a number of software components to the disk and making a series of registry entries that describe those components to the system.
The separation between the entries in the registry and the files on disk makes it very difficult to replicate applications and to uninstall them. Also, the relationship between the various entries required to fully describe a COM class in the registry is very loose. These entries often include entries for coclasses, interfaces, typelibs, and DCOM app IDs, not to mention any entries made to register document extensions or component categories. Oftentimes you end up keeping these in sync manually.
Finally, this registry footprint is required to activate any COM class. This drastically complicates the process of deploying distributed applications because each client machine must be touched to make the appropriate registry entries.
These problems are primarily caused by the description of a component being kept separate from the component itself. In other words, applications are neither self-describing nor self-contained.

Characteristics of the Solution

The .NET Framework must provide the following basic capabilities to solve the problems just described:
  • Applications must be self-describing. Applications that are self-describing remove the dependency on the registry, enabling zero-impact installation and simplifying uninstall and replication.
  • Version information must be recorded and enforced. Versioning support must be built into the platform to ensure that the proper version of a dependency gets loaded at run time.
  • Must remember "last known good." When an application successfully runs, the platform must remember the set of components—including their versions—that worked together. In addition, tools must be provided that allow administrators to easily revert applications to this "last known good" state.
  • Support for side-by-side components. Allowing multiple versions of a component to be installed and running on the machine simultaneously allows callers to specify which version they'd like to load instead of having a version "forced" on them unknowingly. The .NET Framework takes side by side a step farther by allowing multiple versions of the framework itself to coexist on a single machine. This dramatically simplifies the upgrade story, because an administrator can choose to run different applications on different versions of the .NET Framework if required.
  • Application isolation. The .NET Framework must make it easy, and in fact the default, to write applications that cannot be affected by changes made to the machine on behalf of other applications.

Assemblies: The Building Blocks

Assemblies are the building blocks used by the .NET Framework to solve the versioning and deployment issues just described. Assemblies are the deployment unit for types and resources. In many ways an assembly equates to a DLL in today's world; in essence, an assembly is a "logical DLL."
Assemblies are self-describing through metadata called a manifest. Just as .NET uses metadata to describe types, it also uses metadata to describe the assemblies that contain the types.
Assemblies are about much more than deployment. For example, versioning in .NET is done at the assembly level—nothing smaller, like a module or a type, is versioned. Also, assemblies are used to share code between applications. The assembly that a type is contained in is part of the identity of the type.
The code access security system uses assemblies at the core of its permissions model. The author of an assembly records in the manifest the set of permissions required to run the code, and the administrator grants permissions to code based on the assembly in which the code is contained.
Finally, assemblies are also core to the type system and the run-time system in that they establish a visibility boundary for types and serve as a run-time scope for resolving references to types.

Assembly Manifests

Specifically, a manifest includes the following data about the assembly:
  • Identity. An assembly's identity consists of four parts: a simple text name, a version number, an optional culture, and an optional public key if the assembly was built for sharing (see section on Shared Assemblies below).
  • File list. A manifest includes a list of all files that make up the assembly. For each file, the manifest records its name and a cryptographic hash of its contents at the time the manifest was built. This hash is verified at run time to ensure that the deployment unit is consistent.
  • Referenced assemblies. Dependencies between assemblies are stored in the calling assembly's manifest. The dependency information includes a version number, which is used at run time to ensure that the correct version of the dependency is loaded.
  • Exported types and resources. The visibility options available to types and resources include "visible only within my assembly" and "visible to callers outside my assembly."
  • Permission requests. The permission requests for an assembly are grouped into three sets: 1) those required for the assembly to run, 2) those that are desired but the assembly will still have some functionality even if they aren't granted, and 3) those that the author never wants the assembly to be granted.
The IL Disassembler (Ildasm) SDK tool is useful for looking at the code and metadata in an assembly. Figure 1 is an example manifest as displayed by Ildasm. The .assembly directive identifies the assembly and the .assembly extern directives contain the information about other assemblies on which this one depends.
Figure 1. Example manifest as displayed by the IL Disassembler

Assembly Structure

So far, assemblies have been described primarily as a logical concept. This section helps make assemblies more concrete by describing how they are represented physically.
In general, assemblies consist of four elements: the assembly metadata (manifest), metadata describing the types, the intermediate language (IL) code that implements the types, and a set of resources. Not all of these are present in each assembly. Only the manifest is strictly required, but either types or resources are needed to give the assembly any meaningful functionality.
There are several options for how these four elements can be "packaged." For example, Figure 2 shows a single DLL that contains the entire assembly: the manifest, the type metadata, IL code, and resources.
Figure 2. DLL containing all assembly elements
Alternatively, the contents of an assembly may be spread across multiple files. In Figure 3, the author has chosen to separate some utility code into a different DLL and to keep a large resource file (in this case a JPEG) in its original file. One reason this might be done is to optimize code download. The .NET Framework will download a file only when it is referenced, so if the assembly contains code or resources that are accessed infrequently, breaking them out into individual files will increase download efficiency. Another common scenario in which multiple files are used is to build an assembly that consists of code from more than one language. In this case, you’d build each file (module) separately, then group them into an assembly using the Assembly Linker tool provided in the .NET Framework SDK (al.exe).
Figure 3. Assembly elements spread across multiple files

Versioning and Sharing

One of the primary causes of DLL Hell is the sharing model currently used in component-based systems. By default, individual software components are shared by multiple applications on the machine. For example, every time an installation program copies a DLL to the system directory or registers a class in the COM registry, that code will potentially have an effect on other applications running on the machine. In particular, if an existing application used a previous version of that shared component, that application will automatically start using the new version. If the shared component is strictly backward compatible this may be okay, but in many cases maintaining backward compatibility is difficult, if not impossible. If backward compatibility is not maintained, or cannot be maintained, this often results in applications that are broken as a side effect of other applications being installed.
A principal design guideline in .NET is that of isolated components (or assemblies). Isolating an assembly means that an assembly can only be accessed by one application—it is not shared by multiple applications on the machine and cannot be affected by changes made to the system by other applications. Isolation gives a developer absolute control over the code that is used by his application. Isolated, or application-private, assemblies are the default in .NET applications. The trend toward isolated components started in Microsoft Windows 2000 with the introduction of the .local file. This file was used to cause both the OS Loader and COM to look in the application directory first when trying to locate the requested component. (See the related article in the MSDN Library, Implementing Side-by-Side Component Sharing in Applications.)
However, there are cases where sharing an assembly between applications is necessary. It clearly wouldn't make sense for every application to carry its own copy of System.Windows.Forms, System.Web, or a common Web Forms control.
In .NET, sharing code between applications is an explicit decision. Assemblies that are shared have some additional requirements. Specifically, shared assemblies should support side by side so multiple versions of the same assembly can be installed and run on the same machine, or even within the same process, at the same time. In addition, shared assemblies have stricter naming requirements. For example, an assembly that is shared must have a name that is globally unique.
The need for both isolation and sharing leads us to think of two "kinds" of assemblies. This is a rather loose categorization in that there are no real structural differences between the two, but rather the difference is in how they will be used: whether private to one application or shared among many.

Application-Private Assemblies

An application-private assembly is an assembly that is only visible to one application. We expect this to be the most common case in .NET. The naming requirements for private assemblies are simple: The assembly names must only be unique within the application. There is no need for a globally unique name. Keeping the names unique isn't a problem because the application developer has complete control over which assemblies are isolated to the application.
Application-private assemblies are deployed within the directory structure of the application in which they are used. Private assemblies can be placed directly in the application directory, or in a subdirectory thereof. The CLR finds these assemblies through a process called probing. Probing is simply a mapping of the assembly name to the name of the file that contains the manifest.
Specifically, the CLR takes the name of the assembly recorded in the assembly reference, appends ".dll" and looks for that file in the application directory. There are a few variants on this scheme where the Runtime will look in subdirectories named by the assembly or in subdirectories named by the culture of the assembly. For example, a developer may choose to deploy the assembly containing resources localized to German in a subdirectory called "de" and to Spanish in a directory called "es." (See the .NET Framework SDK Guide for details.)
As just described, each assembly manifest includes version information about its dependencies. This version information is not enforced for private assemblies because the developer has complete control over the assemblies that are deployed to the application directory.

Shared Assemblies

The .NET Framework also supports the concept of a shared assembly. A shared assembly is one that is used by multiple applications on the machine. With .NET, sharing code between applications is an explicit decision. Shared assemblies have some additional requirements aimed at avoiding the sharing problems we experience today. In addition to the support for side by side described earlier, shared assemblies have much more stringent naming requirements. For example, a shared assembly must have a name that is globally unique. Also, the system must provide for "protection of the name"—that is, preventing someone from reusing another's assembly name. For example, say you're a vendor of a grid control and you've released version 1 of your assembly. As an author you need assurance that no one else can release an assembly claiming to be version 2 of your grid control. The .NET Framework supports these naming requirements through a technique called strong names (described in detail in the next section).
Typically, an application author does not have the same degree of control over the shared assemblies used by the application. As a result, version information is checked on every reference to a shared assembly. In addition, the .NET Framework allows applications and administrators to override the version of an assembly that is used by the application by specifying version policies.
Shared assemblies are not necessarily deployed privately to one application, although that approach is still viable, especially if xcopy deployment is a requirement. In addition to a private application directory, a shared assembly may also be deployed to the Global Assembly Cache or to any URL as long as a codebase describing the location of the assembly is supplied in the application's configuration file. The global assembly cache is a machine-wide store for assemblies that are used by more than one application.
As described, deploying to the cache is not a requirement, but there are some advantages to doing so. For example, side-by-side storage of multiple versions of an assembly is provided automatically. Also, administrators can use the store to deploy bug fixes or security patches that they want every application on the machine to use. Finally, there are a few performance improvements associated with deploying to the global assembly cache. The first involves the verification of strong name signatures as described in the Strong Name section below. The second performance improvement involves working set. If several applications are using the same assembly simultaneously, loading that assembly from the same location on disk leverages the code sharing behavior provided by the OS. In contrast, loading the same assembly from multiple different locations (application directories) will result in many copies of the same code being loaded.
Adding an assembly to the cache on an end user's machine is typically accomplished using a setup program based on the Windows Installer or some other install technology. Assemblies never end up in the cache as a side effect of running some application or browsing to a Web page. Instead, installing an assembly to the cache requires an explicit action on the part of the user. Windows Installer 2.0, which ships with Windows XP and Visual Studio .NET, has been enhanced to fully understand the concept of assemblies, the assembly cache and isolated applications. This means you will be able to use all of the Windows Installer features, such as on-demand install and application repair, with your .NET applications.
It’s often not practical to build an install package every time you want to add an assembly to the cache on development and test machines. As a result, the .NET SDK includes some tools for working with the assembly cache. The first is a tool called gacutil that allows you to add assemblies to the cache and remove them later. Use the /i switch to add an assembly to the cache:
gacutil /i myassembly.dll
See the .NET Framework SDK documentation for a full description of the options supported by gacutil.
The other tools are a Windows Shell Extension that allows you to manipulate the cache using the Windows Explorer and the .NET Framework Configuration Tool. The Shell Extension can be accessed by navigating to the "assembly" subdirectory under your Windows directory. The .NET Framework Configuration Tool can be found in the Administrative Tools section of the Control Panel.
Figure 4 shows a view of the global assembly cache using the Shell Extension.
Figure 4. Global assembly cache

Strong names

Strong names are used to enable the stricter naming requirements associated with shared assemblies. Strong names have three goals:
  • Name uniqueness. Shared assemblies must have names that are globally unique.
  • Prevent name spoofing. Developers don't want someone else releasing a subsequent version of one of their assemblies and falsely claiming it came from them, either by accident or intentionally.
  • Provide identity on reference. When resolving a reference to an assembly, strong names are used to guarantee the assembly that is loaded came from the expected publisher.
Strong names are implemented using standard public key cryptography. In general, the process works as follows: The author of an assembly generates a key pair (or uses an existing one), signs the file containing the manifest with the private key, and makes the public key available to callers. When references are made to the assembly, the caller records the public key corresponding to the private key used to generate the strong name. Figure 5 outlines how this process works at development time, including how keys are stored in the metadata and how the signature is generated.
The scenario is an assembly called "Main," which references an assembly called "MyLib." MyLib has a shared name. The important steps are described as follows.
Figure 5. Process for implementing shared names
  1. The developer invokes a compiler passing in a key pair and the set of source files for the assembly. The key pair is generated with an SDK tool called SN. For example, the following command generates a new key pair and saves it to a file:
    sn -k MyKey.snk
    The key pair is passed to the compiler using the custom attribute System.Reflection.AssemblyKeyFileAttribute as follows:
    <assembly:AssemblyKeyFileAttribute("MyKey.snk")>
  2. When the compiler emits the assembly, the public key is recorded in the manifest as part of the assembly's identity. Including the public key as part of identity is what gives the assembly a globally unique name.
  3. After the assembly has been emitted, the file containing the manifest is signed with the private key. The resulting signature is stored in the file.
  4. When Main is generated by the compiler, MyLib's public key is stored in Main's manifest as part of the reference to MyLib.
At run time, there are two steps the .NET Framework takes to ensure that strong names are giving the developer the required benefits. First, MyLib's strong name signature is verified only when the assembly is installed into the global assembly cache—the signature is not verified again when the file is loaded by an application. If the shared assembly is not deployed to the global assembly cache, the signature is verified every time the file is loaded. Verifying the signature ensures that the contents of MyLib have not been altered since the assembly was built. The second step is to verify that the public key stored as part of Main's reference to MyLib matches the public key that is part of MyLib's identity. If these keys are identical, the author of Main can be sure the version of MyLib that was loaded came from the same publisher that authored the version of MyLib with which Main was built. This key equivalence check is done at run time, when the reference from Main to MyLib is resolved.
The term "signing" often brings Microsoft Authenticode® to mind. It is important to understand that strong names and Authenticode are not related in any way. The two techniques have different goals. In particular, Authenticode implies a level of trust associated with a publisher, while strong names does not. There are no certificates or third-party signing authorities associated with strong names. Also, strong name signing is often done by the compiler itself as part of the build process.
Another consideration worth noting is the "delay signing" process. It is often the case that the author of an assembly doesn't have access to the private key needed to do the full signing. Most companies keep these keys in well-protected stores that can only be accessed by a few people. As a result, the .NET Framework provides a technique called "delay signing" that allows a developer to build an assembly with only the public key. In this mode, the file isn't actually signed, because the private key isn't supplied; instead, the file is signed later using the SN utility. See Delay Signing an Assembly in the .NET Framework SDK for details on how to use delay signing.
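As a rough sketch of what delay signing looks like in practice (the file names MyKey.snk, MyPublicKey.snk and MyAssembly.dll are illustrative assumptions), the assembly is built with only the public key and the signature is completed later with the SN utility:

    // In the assembly's source (e.g. AssemblyInfo.cs): build with only the public key
    [assembly: System.Reflection.AssemblyDelaySign(true)]
    [assembly: System.Reflection.AssemblyKeyFile("MyPublicKey.snk")]

    Typical SN commands around this:
    sn -p MyKey.snk MyPublicKey.snk     (extract the public key from the full key pair)
    sn -Vr MyAssembly.dll               (skip signature verification on this machine during development)
    sn -R MyAssembly.dll MyKey.snk      (complete the signature with the private key before shipping)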

Version Policy

As just described, each assembly manifest records information about the version of each dependency it was built against. However, there are some scenarios in which the application author or administrator may wish to run with a different version of a dependency at run time. For example, administrators should be able to deploy bug fix releases without requiring that every application be recompiled in order to pick up the fix. Also, administrators must be able to specify that a particular version of an assembly never be used if a security hole or other severe bug is found. The .NET Framework enables this flexibility in version binding through version policies.

Assembly Version Numbers

Each assembly has a four-part version number as part of its identity (that is, version 1.0.0.0 of some assembly and version 2.1.0.2 are completely different identities as far as the class loader is concerned). Including the version as part of the identity is essential for distinguishing different versions of an assembly for the purposes of side-by-side execution.
The parts of the version number are major, minor, build and revision. There are no semantics applied to the parts of the version number. That is, the CLR does not infer compatibility or any other characteristic of an assembly based on how the version number is assigned. As a developer you are free to change any portion of this number as you see fit. Even though there are no semantics applied to the format of the version number, individual organizations will likely find it useful to establish conventions around how the version number is changed. This helps maintain consistency throughout an organization and makes it easier to determine things like which build a particular assembly came from. One typical convention is as follows:
Major or minor. Changes to the major or minor portion of the version number indicate an incompatible change. Under this convention, version 2.0.0.0 would be considered incompatible with version 1.0.0.0. Examples of an incompatible change would be a change to the types of some method parameters or the removal of a type or method altogether.
Build. The Build number is typically used to distinguish between daily builds or smaller compatible releases.
Revision. Changes to the revision number are typically reserved for an incremental build needed to fix a particular bug. You'll sometimes hear this referred to as the "emergency bug fix" number in that the revision is what is often changed when a fix to a specific bug is shipped to a customer.
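In code, the version is usually set with the AssemblyVersion attribute; the number below is just an illustration of the major.minor.build.revision layout described above:

    // In the assembly's source (e.g. AssemblyInfo.cs)
    [assembly: System.Reflection.AssemblyVersion("2.0.1.0")]   // major.minor.build.revision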

Default Version Policy

When the CLR comes across a reference to a shared assembly in code, it must determine which version of that dependency to load. The default version policy in .NET is extremely straightforward: when resolving a reference, the CLR takes the version from the calling assembly's manifest and loads the version of the dependency with the exact same version number. In this way, the caller gets the exact version it was built and tested against. This default policy protects applications from the scenario where a different application installs a new version of a shared assembly that an existing application depends on. Recall that before .NET, existing applications would start to use the new shared component by default; in .NET, the installation of a new version of a shared assembly does not affect existing applications.
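As a small illustrative sketch (not from the original text), the dependency versions recorded in an assembly's manifest can be listed with reflection; under the default policy, these are exactly the versions the CLR will try to load:

    using System;
    using System.Reflection;

    class ListReferences
    {
        static void Main()
        {
            // Each AssemblyName carries the exact version recorded at compile time.
            foreach (AssemblyName dep in Assembly.GetExecutingAssembly().GetReferencedAssemblies())
            {
                Console.WriteLine("{0}, Version={1}", dep.Name, dep.Version);
            }
        }
    }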

Custom Version Policy

There may be times when binding to the exact version the application was shipped with isn’t what you want. For example, an administrator may deploy a critical bug fix to a shared assembly and want all applications to use this new version regardless of which version they were built with. Also, the vendor of a shared assembly may have shipped a service release to an existing assembly and would like all applications to begin using the service release instead of the original version. These scenarios and others are supported in the .NET Framework through version policies.
Version policies are stated in XML files and are simply a request to load one version of an assembly instead of another. For example, the following version policy directs the CLR to load version 5.0.0.1 instead of version 5.0.0.0 of an assembly called MarineCtrl:
<assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
  <dependentAssembly>
    <assemblyIdentity name="MarineCtrl" publicKeyToken="9335a2124541cfb9" />
    <bindingRedirect oldVersion="5.0.0.0" newVersion="5.0.0.1" />
  </dependentAssembly>
</assemblyBinding>


In addition to redirecting from a specific version number to another, you can also redirect from a range of versions to another version. For example, the following policy redirects all versions from 0.0.0.0 through 5.0.0.0 of MarineCtrl to version 5.0.0.1:
<assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
  <dependentAssembly>
    <assemblyIdentity name="MarineCtrl" publicKeyToken="9335a2124541cfb9" />
    <bindingRedirect oldVersion="0.0.0.0-5.0.0.0" newVersion="5.0.0.1" />
  </dependentAssembly>
</assemblyBinding>
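For completeness, the assemblyBinding element sits inside the runtime section of the application configuration file. An illustrative myapp.exe.config carrying the redirect above would look roughly like this:

    <configuration>
      <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
          <dependentAssembly>
            <assemblyIdentity name="MarineCtrl" publicKeyToken="9335a2124541cfb9" />
            <bindingRedirect oldVersion="0.0.0.0-5.0.0.0" newVersion="5.0.0.1" />
          </dependentAssembly>
        </assemblyBinding>
      </runtime>
    </configuration>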

Version Policy Levels

There are three levels at which version policy can be applied in .NET: application-specific policy, publisher policy and machine-wide policy.
Application-specific Policy. Each application has an optional configuration file that can specify the application’s desire to bind to a different version of a dependent assembly. The name of the configuration file varies based on the application type. For executable files, the name of the configuration file is the name of the executable + a ".config" extension. For example, the configuration file for "myapp.exe" would be "myapp.exe.config". Configuration files for ASP.NET applications are always "web.config".
Publisher Policy. While application-specific policy is set either by the application developer or administrator, publisher policy is set by the vendor of the shared assembly. Publisher policy is the vendor's statement of compatibility regarding different versions of its assembly. For example, say the vendor of a shared Windows Forms control ships a service release that contains a number of bug fixes to the control. The original control was version 2.0.0.0 and the version of the service release is 2.0.0.1. Because the new release contains only bug fixes (no breaking API changes), the control vendor would likely issue publisher policy with the new release that causes existing applications that used 2.0.0.0 to start using 2.0.0.1. Publisher policy is expressed in XML just as application and machine-wide policy are, but unlike those other policy levels, publisher policy is distributed as an assembly itself. The primary reason for this is to ensure that the organization releasing the policy for a particular assembly is the same organization that released the assembly itself; this is accomplished by requiring that both the original assembly and the policy assembly are signed with the same key pair (a short sketch of how such a policy assembly is built follows the Machine-wide Policy paragraph below).
Machine-wide Policy. The final policy level is machine-wide policy (sometimes referred to as Administrator policy). Machine-wide policy is stored in machine.config, which is located in the "config" subdirectory under the .NET Framework install directory. The install directory is %windir%\microsoft.net\framework\%runtimeversion%. Policy statements made in machine.config affect all applications running on the machine. Machine-wide policy is used by administrators to force all applications on a given machine to use a particular version of an assembly. The most common scenario in which this is used is when a security or other critical bug fix has been deployed to the global assembly cache. After deploying the fixed assembly, the administrator would use machine-wide version policy to ensure that applications don't use the old, broken version of the assembly.
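As a rough sketch of how a publisher policy assembly is produced (the file names MarineCtrlPolicy.xml and MyKey.snk are assumptions), the vendor links the policy XML into a policy assembly with the Assembly Linker (al.exe), signing it with the same key pair used for the original assembly; by convention the policy assembly is named policy.<major>.<minor>.<assemblyName>.dll:

    al /link:MarineCtrlPolicy.xml /out:policy.5.0.MarineCtrl.dll /keyfile:MyKey.snk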

Policy Evaluation

The first thing the CLR does when binding to a strongly named assembly is determine which version of the assembly to bind to. The process starts by reading the version number of the desired assembly that was recorded in the manifest of the assembly making the reference. Policy is then evaluated to determine if any of the policy levels contain a redirection to a different version. The policy levels are evaluated in order starting with application policy, followed by publisher and finally administrator.
A redirection found at any level overrides any statement made by a previous level. For example, say that assembly A references assembly B. The reference to B in A's manifest is to version 1.0.0.0. Furthermore, the publisher policy shipped with assembly B redirects the reference from 1.0.0.0 to 2.0.0.0. In addition, there is version policy in the machine-wide configuration file that directs the reference to version 3.0.0.0. In this case, the statement made at the machine level overrides the statement made at the publisher level.

Bypassing Publisher Policy

Because of the order in which the three types of policy are applied, the publisher's version redirect (publisher policy) can override both the version recorded in the calling assembly's metadata and any application-specific policy that was set. However, forcing an application to always accept a publisher's recommendation about versioning can lead back to DLL Hell. After all, DLL Hell is primarily caused by the difficulty of maintaining backwards compatibility in shared components. To further avoid the scenario where an application is broken by the installation of a new version of a shared component, the version policy system in .NET allows an individual application to bypass publisher policy. In other words, an application can refuse to take the publisher's recommendation about which version to use. An application can bypass publisher policy using the "publisherPolicy" element in the application configuration file:
<assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
<publisherPolicy apply="no"/>
</assemblyBinding>

Setting Version Policies with the .NET Configuration Tool

Fortunately, the .NET Framework ships with a graphical admin tool so you don't have to worry about hand editing XML policy files. The tool supports both application and machine-wide version policy. You’ll find the tool in the Administrative Tools section of Control Panel. The initial screen of the admin tool looks like Figure 6:
Figure 6. The Admin tool
The following steps describe how to set application-specific policy:
  1. Add your application to the Applications node in the tree view. Right-click the Applications node and click Add. The Add dialog box shows a list of .NET applications to pick from. If your application isn’t in the list, you can add it by clicking Other.
  2. Choose the assembly you'd like to set policy for. Right-click the Configured Assemblies node and click Add. One of the options is to pick an assembly from the list of assemblies that the application references; choosing it displays the dialog shown in Figure 7. Pick an assembly and click Select.
    Figure 7. Choosing an assembly
  3. In the Properties dialog box, enter the version policy information. Click the Binding Policy tab and enter the desired version numbers in the table as shown in Figure 8.
    Figure 8. Binding Policy tab

Deployment

Deployment involves at least two different aspects: packaging the code, and distributing the packages to the various clients and servers on which the application will run. A primary goal of the .NET Framework is to simplify deployment (especially the distribution aspect) by making zero-impact install and xcopy deployment feasible. The self-describing nature of assemblies allows us to remove our dependency on the registry, thereby making install, uninstall, and replication much simpler. However, there are scenarios where xcopy is not sufficient or appropriate as a distribution mechanism. For these cases, the .NET Framework provides extensive code download services and integration with the Windows Installer.

Packaging

There are three packaging options available in the first release of the .NET Framework:
  • As-built (DLLs and EXEs). In many scenarios, no special packaging is required. An application can be deployed in the format produced by the development tool. That is, a collection of DLLs and EXEs.
  • Cab files. Cab files can be used to compress your application for more efficient downloads.
  • Windows Installer packages. Microsoft Visual Studio .NET and other installation tools allow you to build Windows Installer packages (.msi files). The Windows Installer allows you to take advantage of application repair, on-demand install, and other Microsoft Windows application-management features.

Distribution Scenarios

.NET applications can be distributed in a variety of ways, including xcopy, code download, and through the Windows Installer.
For many applications, including Web applications and Web Services, deployment is as simple as copying a set of files to disk and running them. Uninstall and replication are just as easy—just delete the files or copy them.
The .NET Framework provides extensive code download support using a Web browser. Several improvements have been made in this area, including:
  • Zero-impact. No registry entries are made on the machine.
  • Incremental download. Pieces of an assembly are downloaded only as they are referenced.
  • Download isolated to the application. Code downloaded on behalf of one application cannot affect others on the machine. A primary goal of our code download support is to prevent the scenario where a user downloads a new version of some shared component as a side effect of browsing to a particular Web site and having that new version adversely affect other applications.
  • No Authenticode dialogs. The code access security system is used to allow mobile code to run with a partial level of trust. Users will never be presented with dialog boxes asking them to make a decision about whether they trust the code.
Finally, .NET is fully integrated with the Windows Installer and the application management features of Windows.

Summary

The .NET Framework enables zero-impact install and addresses DLL Hell. Assemblies are the self-describing, versionable deployment units used to enable these features.
The ability to create isolated applications is crucial because it allows applications to be built that won't be affected by changes made to the system by other applications. The .NET Framework encourages this type of application through application-private assemblies that are deployed within the directory structure of the application.
Side by side is a core part of the sharing and versioning story in .NET. Side by side allows multiple versions of an assembly to be installed and running on the machine simultaneously, and allows each application to request a specific version of that assembly.
The CLR records version information between pieces of an application and uses that information at run time to ensure that the proper version of a dependency is loaded. Version policies can be used by both application developers and administrators to provide some flexibility in choosing which version of a given assembly is loaded.
There are several packaging and distribution options provided by the .NET Framework, including the Windows Installer, code download, and simple xcopy.









by gadadhar tiwary : to be continued ...