
Thursday, November 18, 2010

Ten Must-Have Tools Every Developer Should Download Now

This article discusses:

NUnit to write unit tests
NDoc to create code documentation
NAnt to build your solutions
CodeSmith to generate code
FxCop to police your code
Snippet Compiler to compile small bits of code
Two different switcher tools, the ASP.NET Version Switcher and the Visual Studio .NET Project Converter
Regulator to build regular expressions
.NET Reflector to examine assemblies
This article uses the following technologies:
.NET, C# or Visual Basic .NET, Visual Studio .NET



Contents
Snippet Compiler
Regulator
CodeSmith
Building a Custom Template
NUnit
Writing an NUnit Test
FxCop
Lutz Roeder's .NET Reflector
NDoc
NAnt
NAnt in Action
Switch Tools
Conclusion


You cannot expect to build a first-class application unless you use the best available tools. Besides well-known tools such as Visual Studio® .NET, there are a multitude of small, lesser-known tools available from the .NET community. In this article, I'm going to introduce you to some of the best free tools available today that target .NET development. I'll walk you through a quick tutorial of how to use each of them, some of which will save you a minute here and there, while others may completely change the way that you write code. Because I am squeezing so many different tools into this single article, I will not be able to cover each of them extensively, but you should learn enough about each to decide which tools are useful for your projects.



Snippet Compiler
The Snippet Compiler is a small Windows®-based application that allows you to write, compile, and run code. This tool is useful if you have small pieces of code for which you don't want to create an entire Visual Studio .NET project (along with all the files that come with it).
As an example, let's say that I wanted to show you how to launch another application from the Microsoft® .NET Framework. In the Snippet Compiler I would start by creating a new file, which creates a small console application. The snippet can be created inside the Main method of the console application, which is what I will do here. The following code snippet demonstrates how to create an instance of Notepad from the .NET Framework:

System.Diagnostics.Process proc = new System.Diagnostics.Process();
proc.StartInfo.FileName = "notepad.exe";
proc.Start();
proc.WaitForExit();

Of course this snippet would not compile by itself, but that is where Snippet Compiler comes into play. Figure 1 shows this code sample in Snippet Compiler.


Figure 1 Snippet Compiler

To test this snippet, just press the play button (green triangle), and it will run in debug mode. The snippet will generate a console application popup, and Notepad will appear. When you close Notepad, the console application will close as well.
Personally, I have found Snippet Compiler to be invaluable when trying to create a small example for someone who has asked me for help, when normally I would have to create a project, make sure everything compiles, send them the code snippet, and then delete the project. Snippet Compiler makes this process much easier and much more pleasant.
Snippet Compiler was written by Jeff Key and can be downloaded from http://www.sliver.com/dotnet/SnippetCompiler.



Regulator
Regulator is the most recent addition to my top tools list. It is a full-featured tool that makes it easy to build and test regular expressions. There is a renewed interest in regular expressions because of the excellent support for them in the .NET Framework. Regular expressions are used to define patterns in strings based on characters, frequency, and character order. They are most commonly used as a means to validate user input or as a way to find a string of characters inside a larger string—for instance, when looking for a URL or e-mail address on a Web page.
Regulator allows you to enter a regular expression and some input against which you would be running this expression. This way you can see how the regular expression will act and what kind of matches it will return before implementing it in your application. Figure 2 shows Regulator with a simple regular expression.


Figure 2 Regulator


The document contains the regular expression; in this example it is [0-9]*, which matches any run of digits. The box in the bottom-right contains the input for this regular expression, and the box on the bottom-left shows the matches that this regular expression finds in the input. The ability to write and test regular expressions in a separate application like this is much easier than trying to work with them in your app.
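Once a pattern has been verified in Regulator, it can be dropped straight into .NET code. Here is a minimal sketch of using the same pattern with the Regex class (the input string is made up for illustration):

using System;
using System.Text.RegularExpressions;

class RegexExample
{
    static void Main()
    {
        // [0-9]+ finds each run of digits; the * quantifier used above
        // would also produce empty matches, so + is more practical here
        foreach (Match m in Regex.Matches("Order 42 shipped on 11/18/2010", "[0-9]+"))
        {
            Console.WriteLine(m.Value); // prints 42, 11, 18, 2010
        }
    }
}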
One of the best features in Regulator is the ability to search the online regular expressions library at regexlib.com. For example, if you enter the string "phone" in the search box, you will find more than 20 different regular expressions that will match various phone numbers, including expressions for UK, Australian, and many other phone numbers.
Regulator was written by Roy Osherove and can be downloaded at http://osherove.com/tools.



CodeSmith
CodeSmith is a template-based code-generation tool that uses a syntax similar to ASP.NET to generate any type of code or text. Unlike many other code-generation tools, CodeSmith does not require you to subscribe to a particular application design or architecture. Using CodeSmith, you can generate anything from a simple, strongly typed collection to an entire application.
When you are building an application, you will often find yourself repeating certain tasks, whether it's writing data access code or building custom collections. CodeSmith is particularly useful at such times because you can write templates to automate those tasks and not only improve your productivity but also automate the tasks that are the most tedious to perform.
CodeSmith ships with a number of templates, including ones for all the .NET collection types as well as ones to generate stored procedures, but the real power of this tool comes from being able to create custom templates. To get you started, I'll provide a quick introduction to building a custom template.



Building a Custom Template
CodeSmith templates are simply text files which you can create in any text editor. Their only requirement is that they be saved with the .cst file extension. The sample template that I'm going to build will accept a string and then build a class based on that string. The first step in creating a template is to add the template header, which declares the language of the template, the target language, and a brief description of the template:

<%@ CodeTemplate Language="C#" TargetLanguage="C#" Description="Car Template" %>

The next part of the template is the property declarations, where you declare the properties that will be specified each time the template is run. With this template, the single property that I'm going to use is just a string, so the property declaration looks like this:

<%@ Property Name="ClassName" Type="String" Category="Context" Description="Class Name" %>

[Editor's Update - 6/16/2004: The code in Figure 3 has been updated to be safe for multithreaded operations.]

Figure 3 Class Generation

Custom Template

public sealed class <%= ClassName %>
{
    private static volatile <%= ClassName %> _instance;
    private <%= ClassName %>() {}
    private static readonly object _syncRoot = new object();

    public static <%= ClassName %> Value
    {
        get
        {
            if (_instance == null)
            {
                lock (_syncRoot)
                {
                    if (_instance == null)
                    {
                        _instance = new <%= ClassName %>();
                    }
                }
            }
            return _instance;
        }
    }
}

SingletonClass

public sealed class SingletonClass
{
    private static volatile SingletonClass _instance;
    private SingletonClass() {}
    private static readonly object _syncRoot = new object();

    public static SingletonClass Value
    {
        get
        {
            if (_instance == null)
            {
                lock (_syncRoot)
                {
                    if (_instance == null)
                    {
                        _instance = new SingletonClass();
                    }
                }
            }
            return _instance;
        }
    }
}
As you can see, the template will take the string input and generate a singleton class using that class name. In the template body, the same opening and closing tags are used as in ASP.NET. In this template, I am simply inserting the property value, but you can also use any type of .NET code inside these tags. Once the template is complete, you load it into CodeSmith by either double-clicking or opening it from the CodeSmith application. Figure 4 shows this template loaded into CodeSmith.


Figure 4 CodeSmith Template


You can see that the property on the left is the one I declared in the template. If I enter "SingletonClass" as the class name and click the Generate button, the class shown in the bottom part of Figure 3 will be generated.
CodeSmith is relatively easy to use and can produce some incredible results if applied correctly. One of the most common sections of an application that is targeted for code generation is the data access layer. CodeSmith includes a special assembly called the SchemaExplorer which can be used to generate templates from tables, stored procedures, or almost any other SQL Server™ object.
CodeSmith was written by Eric J. Smith and is available for download at http://www.ericjsmith.net/codesmith.



NUnit
NUnit is an open source unit testing framework built for the .NET Framework. NUnit allows you to write tests in the language of your choice to test a specific function of your application. Unit tests are an excellent way to test the functionality of your code when you first write it, and also to provide a method for regression testing of your application. The NUnit application provides a framework for writing unit tests, as well as a graphical interface to run these tests and view the results.
Writing an NUnit Test
As an example, I'm going to test the functionality of the Hashtable class in the .NET Framework to determine if two objects can be added and then retrieved. My first step will be to add a reference to the NUnit.Framework assembly, which will give me access to the attributes and methods of the NUnit framework. Next I'll create a class and mark it with the TestFixture attribute. This attribute lets NUnit know that this class contains NUnit tests:

using System;
using System.Collections;
using NUnit.Framework;

namespace NUnitExample
{
    [TestFixture]
    public class HashtableTest
    {
        public HashtableTest()
        {
        }
    }
}

Next I'll create a method and mark it with the [Test] attribute so that NUnit knows that this method is a test. Then I'll set up a Hashtable and add two values to it, then use the Assert.AreEqual method to see if I can retrieve the same values that I added to the Hashtable, as shown in the following:

[Test]
public void HashtableAddTest()
{
    Hashtable ht = new Hashtable();
    ht.Add("Key1", "Value1");
    ht.Add("Key2", "Value2");
    Assert.AreEqual("Value1", ht["Key1"], "Wrong object returned!");
    Assert.AreEqual("Value2", ht["Key2"], "Wrong object returned!");
}

This will confirm that I can add and then retrieve values from the Hashtable—a simple test, but one that showcases the capabilities of NUnit. There are a number of test types, as well as various Assert methods, that can be used to test every part of your code.
To run this test, I'll need to build the project, open the generated assembly in the NUnit application, and then click the Run button. Figure 5 shows the results. I get a warm and fuzzy feeling when I see that big green bar because it lets me know that the test passed. This simple example shows how easy and powerful NUnit and unit testing can be. Being able to write a unit test that can be saved and rerun whenever you change code not only makes it easier for you to detect defects in your code, but the result is that you can deliver better applications.


Figure 5 NUnit

NUnit is an open-source project that is available for download from http://www.nunit.org. There is also an excellent NUnit Visual Studio .NET add-in which allows you to run unit tests directly from Visual Studio. This can be found at http://sourceforge.net/projects/nunitaddin. For more information on NUnit and its place in test-driven development, see the article "Test-Driven C#: Improve the Design and Flexibility of Your Project with Extreme Programming Techniques" in the April 2004 issue of MSDN® Magazine.



FxCop
The .NET Framework is very powerful, which means there is great potential to create excellent applications, but there is equal opportunity to create poor programs. FxCop is one of the tools that can be used to help create better applications by enabling you to examine an assembly and check it for compliance with a number of different rules. FxCop comes with a fixed set of rules created by Microsoft, but you can also create and include your own rules. For instance, if you decided that all classes should have a default constructor that takes no arguments, you could write a rule that checks for a constructor on each class of an assembly. This way, no matter who writes the code, you will have a certain level of consistency. If you want more information on creating custom rules, see John Robbins' Bugslayer column on the subject in the June 2004 issue of MSDN Magazine.
So let's take a look at FxCop in action and see what it finds wrong with the NUnitExample assembly that I have been working with. When you open FxCop you first need to create an FxCop project and then add to it the assembly that you want to test. Once the assembly is added to the project, you can press Analyze, and FxCop will examine the assembly. The errors and warnings found in this assembly are shown in Figure 6.


Figure 6 Errors and Warnings Found by FxCop


FxCop found a couple of problems with my assembly. You can double-click on an error to see the details, including a description of the rule and where you can find more information. (Something you can do for fun is run FxCop on the Framework assemblies and see what turns up.)
FxCop can help you create better, more consistent code, but it cannot make up for poor application design or just plain poor programming. FxCop is also not a replacement for peer code review, but because it can catch a lot of errors before code review, more time can be spent on serious issues rather than having to worry about naming conventions.
FxCop was developed by Microsoft and is available for download from http://www.gotdotnet.com/team/fxcop.



Lutz Roeder's .NET Reflector
The next essential tool is called .NET Reflector, which is a class browser and decompiler that can examine an assembly and show you just about all of its secrets. The .NET Framework introduced reflection, which can be used to examine any .NET-based code, whether it is a single class or an entire assembly. Reflection can also be used to retrieve information about the various classes, methods, and properties included in a particular assembly. Using .NET Reflector, you can browse the classes and methods of an assembly, you can examine the Microsoft intermediate language (MSIL) generated by these classes and methods, and you can decompile the classes and methods and see the equivalent in C# or Visual Basic® .NET.
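To make the idea concrete, here is a minimal sketch of what reflection itself looks like in code: it lists the types and methods of an assembly (the assembly file name is just an example):

using System;
using System.Reflection;

class ReflectionDemo
{
    static void Main()
    {
        // load an assembly from disk and walk its types and methods
        Assembly asm = Assembly.LoadFrom("NUnitExample.dll"); // example path
        foreach (Type type in asm.GetTypes())
        {
            Console.WriteLine(type.FullName);
            foreach (MethodInfo method in type.GetMethods())
            {
                Console.WriteLine("  " + method.Name);
            }
        }
    }
}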
To demonstrate the workings of .NET Reflector, I am going to load and examine the NUnitExample assembly already shown. Figure 7 shows this assembly loaded in .NET Reflector.


Figure 7 NUnitExample Assembly

Inside .NET Reflector there are various tools that you can use to examine this assembly further. To view the MSIL that makes up a method, click on the method and select Disassembler from the menu.
In addition to being able to view the MSIL, you can also view the method as C# by selecting Decompiler under the Tools menu. You could also view this method decompiled to Visual Basic .NET or Delphi by changing your selection under the Languages menu. Here is the code that .NET Reflector generated:

public void HashtableAddTest()
{
    Hashtable hashtable1;
    hashtable1 = new Hashtable();
    hashtable1.Add("Key1", "Value1");
    hashtable1.Add("Key2", "Value2");
    Assert.AreEqual("Value1", hashtable1["Key1"], "Wrong object returned!");
    Assert.AreEqual("Value2", hashtable1["Key2"], "Wrong object returned!");
}

The previous code looks very much like the code I actually wrote for this method. Here is the actual code from this assembly:

public void HashtableAddTest()
{
    Hashtable ht = new Hashtable();
    ht.Add("Key1", "Value1");
    ht.Add("Key2", "Value2");
    Assert.AreEqual("Value1", ht["Key1"], "Wrong object returned!");
    Assert.AreEqual("Value2", ht["Key2"], "Wrong object returned!");
}

Although there are some minor differences in the code, the two are functionally identical.
While this example was a good way to show actual code versus decompiled code, it does not represent what I consider to be the best use of .NET Reflector, which is to examine .NET Framework assemblies and methods. The .NET Framework offers many different ways to perform similar operations. For example, if you need to read a set of data from XML, there are a variety of different ways to do this using XmlDocument, XPathNavigator, or XmlReader. By using .NET Reflector, you can see what Microsoft used when writing the ReadXml method of the DataSet, or what they did when reading data from the configuration files. .NET Reflector is also an excellent way to see the best practices for creating objects like HttpHandlers or configuration handlers because you get to see how the team at Microsoft actually built those objects in the Framework.
.NET Reflector was written by Lutz Roeder and can be downloaded from http://www.aisto.com/roeder/dotnet.

NDoc
Code documentation is almost always a dreaded task. I am not talking about the early design documents, or even the more detailed design documents; I am talking about documenting individual methods and properties on classes. The NDoc tool will automatically generate documentation for your code using reflection to examine the assembly and using the XML generated from your C# XML comments. XML comments are only available for C#, but there is a Visual Studio .NET Power Toy called VBCommenter which will do something similar for Visual Basic .NET. In addition, the next release of Visual Studio will support XML comments for more languages.
With NDoc you are technically still documenting your code, but you are documenting as you write it (in the XML comments), which is much easier to swallow. The first step when using NDoc is to turn on XML comments generation for your assembly. Right-click the project and select Properties | Configuration Properties | Build, then enter a path in which to save the XML file in the XML Documentation File option. When the project is built, an XML file will be created with all of the XML comments included. Here is a look at a method from the NUnit example documented with XML comments:

/// <summary>
/// This test adds a number of values to the Hashtable collection
/// and then retrieves those values and checks if they match.
/// </summary>
[Test]
public void HashtableAddTest()
{
    //Method Body Here
}

The XML documentation on this method will be extracted and saved in the XML file, shown here:

<member name="M:NUnitExample.HashtableTest.HashtableAddTest">
    <summary>
    This test adds a number of values to the Hashtable collection
    and then retrieves those values and checks if they match.
    </summary>
</member>

NDoc uses reflection to look at your assembly, then reads the XML in this document, and matches them up. NDoc uses this data to create any number of different documentation formats, including HTML Help files (CHMs). After generating the XML file, the next step is to load the assembly and the XML file into NDoc so they can be processed. This is done simply by opening NDoc and clicking the Add button.
Once the assembly and XML file are loaded into NDoc and after you customize the output using the range of properties available, clicking on the Generate button will start the process of generating the documentation. Using the default properties, NDoc generates some very attractive and functional .html and .chm files, thereby automating in a quick and efficient manner what would otherwise be a tedious task.
NDoc is an open source project and can be downloaded from http://ndoc.sourceforge.net.
NAnt
NAnt is a .NET-based build tool that, unlike the current version of Visual Studio .NET, makes it easy to create a build process for your project. When you have a large number of developers working on a single project, you can't rely on the build from a single user's box. You also do not want to have to build the project manually on a regular basis. Instead, you create an automated build process that runs every night. NAnt allows you to build your solution, copy files, run NUnit tests, send e-mail, and much more. Unfortunately, NAnt lacks a nice-looking graphical interface, but it does have a console application and uses XML files that specify which tasks should be completed during the build process. Note that MSBuild, the new build platform that's part of Visual Studio 2005, provides for very robust build scenarios and is similarly driven by XML-based project files.
NAnt in Action
In this example I am going to create an NAnt build file for the NUnitExample solution that I created earlier. First I need to create an XML file with the .build extension, place it in the root of my project, and then add an XML declaration to the top of the file. The first tag I need to add to the file is the project tag:

<?xml version="1.0"?>
<project name="NUnitExample" default="build" basedir=".">
    <description>The NUnit Example Project</description>
</project>

The project tag is used to set the name of the project, the default target, and the base directory. The description tag is used to set a brief description of this project.
Next, I'll add the property tag, which can be used to store a setting in a single location that can then be accessed from anywhere in the file. In this case, I am going to create a property called debug, which I can then set to true or false, reflecting whether or not I want the project to be compiled in the debug configuration. (In the end, this particular property does not actually affect how the project will be built; it is simply a variable that you set and which will be read from later when you are actually determining how to build the project.)
Next, I need to create a target tag. A project can contain multiple targets which can be specified when NAnt is run. If no target is specified, the default is used, which I set in the project element. In this example, the default target is build. Let's take a look at the target element, which will contain the majority of the build info:

<target name="build" description="compiles the source code">
</target>

Inside the target element, I am going to set the name of the target to build and create a description of what this target will do. I'll also create a csc element, which is used to specify what should be passed to the csc C# compiler. Let's take a look at the csc element:

<csc target="library" output=".\bin\NUnitExample.dll" debug="${debug}">
    <references>
        <includes name="NUnit.Framework.dll" />
    </references>
    <sources>
        <includes name="HashtableTest.cs" />
    </sources>
</csc>

First, I have to set the target of the csc element. In this case I will be creating a .dll file, so I set the target to library. Next, I have to set the output of the csc element, which is where the .dll file will be created. Finally, I need to set the debug attribute, which determines whether the project will be compiled in debug mode. Since I created a property earlier to store this value, I can use the following string to access the value of that property: ${debug}. The csc element also contains a number of sub-elements. I need to create two: the references element will tell NAnt which assemblies I need to reference for this project, and the sources element will tell NAnt which files to include in the build. In this example, I reference the NUnit.Framework.dll assembly and include the HashtableTest.cs file. The complete build file is shown in Figure 8. (You would normally also create a clean target that would be used to delete the generated files, but I have omitted it for the sake of brevity.)

Figure 8 NAnt Build File
<?xml version="1.0"?>
<project name="NUnitExample" default="build" basedir=".">
    <description>The NUnit Example Project</description>
    <property name="debug" value="true"/>
    <target name="build" description="compiles the source code">
        <csc target="library" output=".\bin\NUnitExample.dll" debug="${debug}">
            <references>
                <includes name="NUnit.Framework.dll" />
            </references>
            <sources>
                <includes name="HashtableTest.cs" />
            </sources>
        </csc>
    </target>
</project>
To build this file I need to go to the root directory of my project, where the build file is located, and execute nant.exe from that location. If the build is successful, you can find the .dll and .pdb files in the bin directory of this application. While using NAnt is definitely not as easy as clicking Build in Visual Studio, it is a very powerful tool for developing a build process that runs on an automated schedule. NAnt also includes helpful features such as the ability to run unit tests or copy additional files (features that are not supported by the current Visual Studio build process).
NAnt is an open source project and can be downloaded from http://sourceforge.net/projects/nant.

Switch Tools
I have lumped together two separate tools under the heading Switch Tools. These two tools are rather simple, but can be extremely useful. The first is the ASP.NET Version Switcher, which can be used to switch the version of ASP.NET that a virtual directory is running under. The second is the Visual Studio .NET Project Converter, which can be used to switch a project file between Visual Studio .NET 2002 and Visual Studio .NET 2003.
When IIS handles a request, it looks at the extension of the file that is being requested, and then, based on the extension mappings for that Web site or virtual directory, it either delegates the request to an ISAPI extension or handles it itself. This is how ASP.NET works: extension mappings are registered for all of the ASP.NET extensions, directing requests to aspnet_isapi.dll. This works flawlessly until you install ASP.NET 1.1, which upgrades the extension mappings to the new version of aspnet_isapi.dll. This causes errors when an application built on ASP.NET 1.0 tries to run with version 1.1. To fix this, you can switch all of the extension mappings back to the 1.0 version of aspnet_isapi.dll, but with 18 extension mappings it is not a lot of fun to do this by hand. This is where the ASP.NET Version Switcher becomes useful. This small utility can be used to switch the version of the .NET Framework that any single ASP.NET application is using.


Figure 9 ASP.NET Version Switcher

Figure 9 shows the ASP.NET Version Switcher in action. Using it is as simple as selecting the application and then selecting the version of the .NET Framework that you would like the application to use. The tool then uses the aspnet_regiis.exe command-line tool to switch the application to the selected version of the Framework. This tool will become even more useful as future versions of ASP.NET and the .NET Framework are released.
ASP.NET Version Switcher was written by Denis Bauer and is available for download from http://www.denisbauer.com/NETTools/ASPNETVersionSwitcher.aspx.
The Visual Studio .NET Project Converter (see Figure 10) is very similar to the ASP.NET Version Switcher, except that it is used to switch the version of a Visual Studio project file. Even though there is only a small difference between versions 1.0 and 1.1 of the .NET Framework, once a project file from Visual Studio .NET 2002 is converted to Visual Studio .NET 2003, it cannot be converted back. While this might not be an issue most of the time (since there are few breaking changes between the .NET Framework versions 1.0 and 1.1), at some point you may need to switch a project back. This converter can convert any solution or project file from Visual Studio 7.1 (Visual Studio .NET 2003) to Visual Studio 7.0 (Visual Studio .NET 2002), and back if necessary.


Figure 10 Visual Studio .NET Project Converter

The Visual Studio .NET Project Converter was written by Dacris Software. It is available for download from http://www.codeproject.com/macro/vsconvert.asp.
Conclusion
This has been a bit of a whirlwind tour of these tools, but I have tried to present you with at least enough information to whet your appetite. I trust that this article has provided you with some insight into a few free tools that you can start using right away to build better applications. I also urge you to make sure you have all the other right tools available to you, whether it is the newest version of Visual Studio, a powerful computer, or a free utility. Having the right tools can make all the difference.


Source :- http://msdn.microsoft.com/en-us/magazine/cc300497.aspx

10 Tips for Writing High-Performance Web Applications

Contents
Performance on the Data Tier
Tip 1—Return Multiple Resultsets
Tip 2—Paged Data Access
Tip 3—Connection Pooling
Tip 4—ASP.NET Cache API
Tip 5—Per-Request Caching
Tip 6—Background Processing
Tip 7—Page Output Caching and Proxy Servers
Tip 8—Run IIS 6.0 (If Only for Kernel Caching)
Tip 9—Use Gzip Compression
Tip 10—Server Control View State
Conclusion

Writing a Web application with ASP.NET is unbelievably easy. So easy, many developers don't take the time to structure their applications for great performance. In this article, I'm going to present 10 tips for writing high-performance Web apps. I'm not limiting my comments to ASP.NET applications because they are just one subset of Web applications. This article won't be the definitive guide for performance-tuning Web applications—an entire book could easily be devoted to that. Instead, think of this as a good place to start.
Before becoming a workaholic, I used to do a lot of rock climbing. Prior to any big climb, I'd review the route in the guidebook and read the recommendations made by people who had visited the site before. But, no matter how good the guidebook, you need actual rock climbing experience before attempting a particularly challenging climb. Similarly, you can only learn how to write high-performance Web applications when you're faced with either fixing performance problems or running a high-throughput site.
My personal experience comes from having been an infrastructure Program Manager on the ASP.NET team at Microsoft, running and managing www.asp.net, and helping architect Community Server, which is the next version of several well-known ASP.NET applications (ASP.NET Forums, .Text, and nGallery combined into one platform). I'm sure that some of the tips that have helped me will help you as well.
You should think about the separation of your application into logical tiers. You might have heard of the term 3-tier (or n-tier) physical architecture. These are usually prescribed architecture patterns that physically divide functionality across processes and/or hardware. As the system needs to scale, more hardware can easily be added. There is, however, a performance hit associated with process and machine hopping, thus it should be avoided. So, whenever possible, run the ASP.NET pages and their associated components together in the same application.
Because of the separation of code and the boundaries between tiers, using Web services or remoting will decrease performance by 20 percent or more.
The data tier is a bit of a different beast since it is usually better to have dedicated hardware for your database. However, the cost of process hopping to the database is still high, thus performance on the data tier is the first place to look when optimizing your code.
Before diving in to fix performance problems in your applications, make sure you profile your applications to see exactly where the problems lie. Key performance counters (such as the one that indicates the percentage of time spent performing garbage collections) are also very useful for finding out where applications are spending the majority of their time. Yet the places where time is spent are often quite unintuitive.
There are two types of performance improvements described in this article: large optimizations, such as using the ASP.NET Cache, and tiny optimizations that repeat themselves. These tiny optimizations are sometimes the most interesting. You make a small change to code that gets called thousands and thousands of times. With a big optimization, you might see overall performance take a large jump. With a small one, you might shave a few milliseconds on a given request, but when compounded across the total requests per day, it can result in an enormous improvement.

Performance on the Data Tier

When it comes to performance-tuning an application, there is a single litmus test you can use to prioritize work: does the code access the database? If so, how often? Note that the same test could be applied for code that uses Web services or remoting, too, but I'm not covering those in this article.
If you have a database request required in a particular code path and you see other areas such as string manipulations that you want to optimize first, stop and perform your litmus test. Unless you have an egregious performance problem, your time would be better utilized trying to optimize the time spent in and connected to the database, the amount of data returned, and how often you make round-trips to and from the database.
With that general information established, let's look at ten tips that can help your application perform better. I'll begin with the changes that can make the biggest difference.

Tip 1—Return Multiple Resultsets
Review your database code to see if you have request paths that go to the database more than once. Each of those round-trips decreases the number of requests per second your application can serve. By returning multiple resultsets in a single database request, you can cut the total time spent communicating with the database. You'll be making your system more scalable, too, as you'll cut down on the work the database server is doing managing requests.
While you can return multiple resultsets using dynamic SQL, I prefer to use stored procedures. It's arguable whether business logic should reside in a stored procedure, but I think that if logic in a stored procedure can constrain the data returned (reduce the size of the dataset, time spent on the network, and not having to filter the data in the logic tier), it's a good thing.
Using a SqlCommand instance and its ExecuteReader method to populate strongly typed business classes, you can move the resultset pointer forward by calling NextResult. Figure 1 shows a sample conversation populating several ArrayLists with typed classes. Returning only the data you need from the database will additionally decrease memory allocations on your server.

Figure 1 Extracting Multiple Resultsets from a DataReader
// read the first resultset
reader = command.ExecuteReader();

// read the data from that resultset
while (reader.Read())
{
    suppliers.Add(PopulateSupplierFromIDataReader(reader));
}

// read the next resultset
reader.NextResult();

// read the data from that second resultset
while (reader.Read())
{
    products.Add(PopulateProductFromIDataReader(reader));
}

Tip 2—Paged Data Access
The ASP.NET DataGrid exposes a wonderful capability: data paging support. When paging is enabled in the DataGrid, a fixed number of records is shown at a time. Additionally, paging UI is also shown at the bottom of the DataGrid for navigating through the records. The paging UI allows you to navigate backwards and forwards through displayed data, displaying a fixed number of records at a time.
There's one slight wrinkle. Paging with the DataGrid requires all of the data to be bound to the grid. For example, your data layer will need to return all of the data and then the DataGrid will filter all the displayed records based on the current page. If 100,000 records are returned when you're paging through the DataGrid, 99,975 records would be discarded on each request (assuming a page size of 25). As the number of records grows, the performance of the application will suffer as more and more data must be sent on each request.
One good approach to writing better paging code is to use stored procedures. Figure 2 shows a sample stored procedure that pages through the Orders table in the Northwind database. In a nutshell, all you're doing here is passing in the page index and the page size. The appropriate resultset is calculated and then returned.

Figure 2 Paging Through the Orders Table
CREATE PROCEDURE northwind_OrdersPaged
(
    @PageIndex int,
    @PageSize int
)
AS
BEGIN
    DECLARE @PageLowerBound int
    DECLARE @PageUpperBound int
    DECLARE @RowsToReturn int

    -- First set the rowcount
    SET @RowsToReturn = @PageSize * (@PageIndex + 1)
    SET ROWCOUNT @RowsToReturn

    -- Set the page bounds
    SET @PageLowerBound = @PageSize * @PageIndex
    SET @PageUpperBound = @PageLowerBound + @PageSize + 1

    -- Create a temp table to store the select results
    CREATE TABLE #PageIndex
    (
        IndexId int IDENTITY (1, 1) NOT NULL,
        OrderID int
    )

    -- Insert into the temp table
    INSERT INTO #PageIndex (OrderID)
    SELECT OrderID
    FROM Orders
    ORDER BY OrderID DESC

    -- Return total count
    SELECT COUNT(OrderID) FROM Orders

    -- Return paged results
    SELECT O.*
    FROM Orders O, #PageIndex PageIndex
    WHERE O.OrderID = PageIndex.OrderID
        AND PageIndex.IndexID > @PageLowerBound
        AND PageIndex.IndexID < @PageUpperBound
    ORDER BY PageIndex.IndexID
END

In Community Server, we wrote a paging server control to do all the data paging. You'll see that I am using the ideas discussed in Tip 1, returning two resultsets from one stored procedure: the total number of records and the requested data. The total number of records returned can vary depending on the query being executed. For example, a WHERE clause can be used to constrain the data returned. The total number of records to be returned must be known in order to calculate the total pages to be displayed in the paging UI. For example, if there are 1,000,000 total records and a WHERE clause is used that filters this to 1,000 records, the paging logic needs to be aware of the total number of records to properly render the paging UI.


Tip 3—Connection Pooling
Setting up the TCP connection between your Web application and SQL Server™ can be an expensive operation. Developers at Microsoft have been able to take advantage of connection pooling for some time now, allowing them to reuse connections to the database. Rather than setting up a new TCP connection on each request, a new connection is set up only when one is not available in the connection pool. When the connection is closed, it is returned to the pool where it remains connected to the database, as opposed to completely tearing down that TCP connection.

Of course you need to watch out for leaking connections. Always close your connections when you're finished with them. I repeat: no matter what anyone says about garbage collection within the Microsoft® .NET Framework, always call Close or Dispose explicitly on your connection when you are finished with it. Do not trust the common language runtime (CLR) to clean up and close your connection for you at a predetermined time. The CLR will eventually destroy the class and force the connection closed, but you have no guarantee when the garbage collection on the object will actually happen.

To use connection pooling optimally, there are a couple of rules to live by. First, open the connection, do the work, and then close the connection (see the sketch at the end of this tip). It's okay to open and close the connection multiple times on each request if you have to (optimally you apply Tip 1) rather than keeping the connection open and passing it around through different methods. Second, use the same connection string (and the same thread identity if you're using integrated authentication). If you don't use the same connection string, for example customizing the connection string based on the logged-in user, you won't get the same optimization value provided by connection pooling. And if you use integrated authentication while impersonating a large set of users, your pooling will also be much less effective. The .NET CLR data performance counters can be very useful when attempting to track down any performance issues that are related to connection pooling.

Whenever your application is connecting to a resource, such as a database, running in another process, you should optimize by focusing on the time spent connecting to the resource, the time spent sending or retrieving data, and the number of round-trips. Optimizing any kind of process hop in your application is the first place to start to achieve better performance.

The application tier contains the logic that connects to your data layer and transforms data into meaningful class instances and business processes. For example, in Community Server, this is where you populate a Forums or Threads collection, and apply business rules such as permissions; most importantly it is where the Caching logic is performed.
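Here is that open-late, close-early pattern as a minimal sketch; the connection string and query are placeholders:

using System;
using System.Data.SqlClient;

class ConnectionPoolingExample
{
    static void Main()
    {
        // keep the connection string identical across requests so the
        // same pool can be reused; this one is just a placeholder
        string connString = "Server=(local);Database=Northwind;Integrated Security=SSPI;";

        // the using blocks guarantee Dispose (and therefore Close) is
        // called, returning the connection to the pool immediately
        using (SqlConnection conn = new SqlConnection(connString))
        using (SqlCommand cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
        {
            conn.Open();
            int orderCount = (int)cmd.ExecuteScalar();
            Console.WriteLine(orderCount);
        }
    }
}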

Tip 4—ASP.NET Cache API

One of the very first things you should do before writing a line of application code is architect the application tier to maximize and exploit the ASP.NET Cache feature. If your components are running within an ASP.NET application, you simply need to include a reference to System.Web.dll in your application project. When you need access to the Cache, use the HttpRuntime.Cache property (the same object is also accessible through Page.Cache and HttpContext.Cache).

There are several rules for caching data. First, if data can be used more than once, it's a good candidate for caching. Second, if data is general rather than specific to a given request or user, it's a great candidate for the cache. If the data is user- or request-specific, but is long lived, it can still be cached, but may not be used as frequently. Third, an often overlooked rule is that sometimes you can cache too much. Generally on an x86 machine, you want to run a process with no higher than 800MB of private bytes in order to reduce the chance of an out-of-memory error. Therefore, caching should be bounded. In other words, you may be able to reuse a result of a computation, but if that computation takes 10 parameters, you might attempt to cache on 10 permutations, which will likely get you into trouble. One of the most common support calls for ASP.NET is out-of-memory errors caused by overcaching, especially of large datasets.

Common Performance Myths

One of the most common myths is that C# code is faster than Visual Basic code. There is a grain of truth in this, as it is possible to take several performance-hindering actions in Visual Basic that are not possible to accomplish in C#, such as not explicitly declaring types. But if good programming practices are followed, there is no reason why Visual Basic and C# code cannot execute with nearly identical performance. To put it more succinctly, similar code produces similar results.

Another myth is that codebehind is faster than inline, which is absolutely false. It doesn't matter where your code for your ASP.NET application lives, whether in a codebehind file or inline with the ASP.NET page. Sometimes I prefer to use inline code as changes don't incur the same update costs as codebehind. For example, with codebehind you have to update the entire codebehind DLL, which can be a scary proposition.

Myth number three is that components are faster than pages. This was true in Classic ASP when compiled COM servers were much faster than VBScript. With ASP.NET, however, both pages and components are classes. Whether your code is inline in a page, within a codebehind, or in a separate component makes little performance difference. Organizationally, it is better to group functionality logically this way, but again it makes no difference with regard to performance.

The final myth I want to dispel is that every functionality that you want to occur between two apps should be implemented as a Web service. Web services should be used to connect disparate systems or to provide remote access to system functionality or behaviors. They should not be used internally to connect two similar systems. While easy to use, there are much better alternatives. The worst thing you can do is use Web services for communicating between ASP and ASP.NET applications running on the same server, which I've witnessed all too frequently.

Figure 3 ASP.NET Cache

There are several great features of the Cache that you need to know.
The first is that the Cache implements a least-recently-used algorithm, allowing ASP.NET to force a Cache purge—automatically removing unused items from the Cache—if memory is running low. Secondly, the Cache supports expiration dependencies that can force invalidation. These include time, key, and file. Time is often used, but with ASP.NET 2.0 a new and more powerful invalidation type is being introduced: database cache invalidation. This refers to the automatic removal of entries in the cache when data in the database changes. For more information on database cache invalidation, see Dino Esposito's Cutting Edge column in the July 2004 issue of MSDN® Magazine. For a look at the architecture of the cache, see Figure 3.
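To give a feel for the API, here is a minimal sketch of the cache-or-fetch pattern using HttpRuntime.Cache; the key name and the data-loading helper are made up for illustration:

using System;
using System.Web;
using System.Web.Caching;

public static class SupplierData
{
    public static object GetSuppliers()
    {
        // check the cache first; on a miss, load and cache the data
        object suppliers = HttpRuntime.Cache["Suppliers"];
        if (suppliers == null)
        {
            suppliers = LoadSuppliersFromDatabase();
            // keep the entry alive for five minutes after its last use;
            // the LRU scavenger can still evict it if memory runs low
            HttpRuntime.Cache.Insert("Suppliers", suppliers, null,
                Cache.NoAbsoluteExpiration, TimeSpan.FromMinutes(5));
        }
        return suppliers;
    }

    private static object LoadSuppliersFromDatabase()
    {
        // placeholder for the real data access code
        return new string[] { "Supplier A", "Supplier B" };
    }
}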

Tip 5—Per-Request Caching
Earlier in the article, I mentioned that small improvements to frequently traversed code paths can lead to big, overall performance gains. One of my absolute favorites of these is something I've termed per-request caching. Whereas the Cache API is designed to cache data for a long period or until some condition is met, per-request caching simply means caching the data for the duration of the request. A particular code path is accessed frequently on each request but the data only needs to be fetched, applied, modified, or updated once. This sounds fairly theoretical, so let's consider a concrete example.

In the Forums application of Community Server, each server control used on a page requires personalization data to determine which skin to use, the style sheet to use, as well as other personalization data. Some of this data can be cached for a long period of time, but some data, such as the skin to use for the controls, is fetched once on each request and reused multiple times during the execution of the request.

To accomplish per-request caching, use the ASP.NET HttpContext. An instance of HttpContext is created with every request and is accessible anywhere during that request from the HttpContext.Current property. The HttpContext class has a special Items collection property; objects and data added to this Items collection are cached only for the duration of the request. Just as you can use the Cache to store frequently accessed data, you can use HttpContext.Items to store data that you'll use only on a per-request basis. The logic behind this is simple: data is added to the HttpContext.Items collection when it doesn't exist, and on subsequent lookups the data found in HttpContext.Items is simply returned.
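Here is a minimal sketch of that lookup pattern; the key name and the loading helper are illustrative:

using System.Web;

public static class PersonalizationData
{
    public static object GetSkin()
    {
        // Items lives and dies with the current request, so this
        // lookup is computed at most once per request
        HttpContext context = HttpContext.Current;
        object skin = context.Items["SkinData"];
        if (skin == null)
        {
            skin = LoadSkinForCurrentUser();
            context.Items["SkinData"] = skin;
        }
        return skin;
    }

    private static object LoadSkinForCurrentUser()
    {
        // placeholder for the real personalization lookup
        return "default-skin";
    }
}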

Tip 6—Background Processing
The path through your code should be as fast as possible, right? There may be times when you find yourself performing expensive tasks on each request or once every n requests. Sending out e-mails or parsing and validating incoming data are just a few examples.

When tearing apart ASP.NET Forums 1.0 and rebuilding what became Community Server, we found that the code path for adding a new post was pretty slow. Each time a post was added, the application first needed to ensure that there were no duplicate posts, then it had to parse the post using a "badword" filter, parse the post for emoticons, tokenize and index the post, add the post to the moderation queue when required, validate attachments, and finally, once posted, send e-mail notifications out to any subscribers. Clearly, that's a lot of work.

It turns out that most of the time was spent in the indexing logic and sending e-mails. Indexing a post was a time-consuming operation, and it turned out that the built-in System.Web.Mail functionality would connect to an SMTP server and send the e-mails serially. As the number of subscribers to a particular post or topic area increased, it would take longer and longer to perform the AddPost function.

Indexing and e-mail didn't need to happen on each request. Ideally, we wanted to batch this work together and index 25 posts at a time or send all the e-mails every five minutes. We decided to use the same code I had used to prototype database cache invalidation for what eventually got baked into Visual Studio® 2005.

The Timer class, found in the System.Threading namespace, is a wonderfully useful, but less well-known class in the .NET Framework, at least for Web developers. Once created, the Timer will invoke the specified callback on a thread from the ThreadPool at a configurable interval. This means you can set up code to execute without an incoming request to your ASP.NET application, an ideal situation for background processing. You can do work such as indexing or sending e-mail in this background process too.

There are a couple of problems with this technique, though. If your application domain unloads, the timer instance will stop firing its events. In addition, since the CLR has a hard gate on the number of threads per process, you can get into a situation on a heavily loaded server where timers may not have threads to complete on and can be somewhat delayed. ASP.NET tries to minimize the chances of this happening by reserving a certain number of free threads in the process and only using a portion of the total threads for request processing. However, if you have lots of asynchronous work, this can be an issue.

There is not enough room to go into the code here, but you can download a digestible sample at www.rob-howard.net. Just grab the slides and demos from the Blackbelt TechEd 2004 presentation.
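As a small sketch of the idea, the following sets up a Timer that batches background work every five minutes; the interval and the work done in the callback are illustrative:

using System;
using System.Threading;

public class BackgroundProcessor
{
    // keep a reference to the timer so it is not garbage collected
    private static Timer _timer;

    public static void Start()
    {
        // fire the callback on a ThreadPool thread every five minutes,
        // starting five minutes from now
        _timer = new Timer(DoBatchWork, null,
            TimeSpan.FromMinutes(5), TimeSpan.FromMinutes(5));
    }

    private static void DoBatchWork(object state)
    {
        // batch the expensive tasks here: index queued posts,
        // send pending e-mail notifications, and so on
    }
}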

Tip 7—Page Output Caching and Proxy Servers
ASP.NET is your presentation layer (or should be); it consists of pages, user controls, server controls (HttpHandlers and HttpModules), and the content that they generate. If you have an ASP.NET page that generates output, whether HTML, XML, images, or any other data, and you run this code on each request and it generates the same output, you have a great candidate for page output caching. By simply adding this line to the top of your page

<%@ OutputCache Duration="60" VaryByParam="none" %>

you can effectively generate the output for this page once and reuse it multiple times for up to 60 seconds, at which point the page will re-execute and the output will once again be added to the ASP.NET Cache. This behavior can also be accomplished using some lower-level programmatic APIs, too. There are several configurable settings for output caching, such as the VaryByParam attribute just shown. VaryByParam is required, but allows you to specify the HTTP GET or HTTP POST parameters to vary the cache entries. For example, default.aspx?Report=1 or default.aspx?Report=2 could be output-cached by simply setting VaryByParam="Report". Additional parameters can be named by specifying a semicolon-separated list.
Many people don't realize that when the Output Cache is used, the ASP.NET page also generates a set of HTTP headers that can be used by downstream caching servers, such as those provided by Microsoft Internet Security and Acceleration Server or by Akamai. When HTTP Cache headers are set, the documents can be cached on these network resources, and client requests can be satisfied without having to go back to the origin server.
Using page output caching, then, does not make your application more efficient, but it can potentially reduce the load on your server as downstream caching technology caches documents. Of course, this can only be anonymous content; once it's downstream, you won't see the requests anymore and can't perform authentication to prevent access to it.

Tip 8—Run IIS 6.0 (If Only for Kernel Caching)
If you're not running IIS 6.0 (Windows Server™ 2003), you're missing out on some great performance enhancements in the Microsoft Web server. In Tip 7, I talked about output caching. In IIS 5.0, a request comes through IIS and then to ASP.NET. When caching is involved, an HttpModule in ASP.NET receives the request, and returns the contents from the Cache.
If you're using IIS 6.0, there is a nice little feature called kernel caching that doesn't require any code changes to ASP.NET. When a request is output-cached by ASP.NET, the IIS kernel cache receives a copy of the cached data. When a request comes from the network driver, a kernel-level driver (no context switch to user mode) receives the request, and if cached, flushes the cached data to the response, and completes execution. This means that when you use kernel-mode caching with IIS and ASP.NET output caching, you'll see unbelievable performance results. At one point during the Visual Studio 2005 development of ASP.NET, I was the program manager responsible for ASP.NET performance. The developers did the magic, but I saw all the reports on a daily basis. The kernel mode caching results were always the most interesting. The common characteristic was network saturation by requests/responses and IIS running at about five percent CPU utilization. It was amazing! There are certainly other reasons for using IIS 6.0, but kernel mode caching is an obvious one.

Tip 9—Use Gzip Compression
While not necessarily a server performance tip (since you might see CPU utilization go up), using gzip compression can decrease the number of bytes sent by your server. This gives the perception of faster pages and also cuts down on bandwidth usage. Depending on the data sent, how well it can be compressed, and whether the client browsers support it (IIS will only send gzip compressed content to clients that support gzip compression, such as Internet Explorer 6.0 and Firefox), your server can serve more requests per second. In fact, just about any time you can decrease the amount of data returned, you will increase requests per second.
The good news is that gzip compression is built into IIS 6.0 and is much better than the gzip compression used in IIS 5.0. Unfortunately, when attempting to turn on gzip compression in IIS 6.0, you may not be able to locate the setting on the properties dialog in IIS. The IIS team built awesome gzip capabilities into the server, but neglected to include an administrative UI for enabling it. To enable gzip compression, you have to spelunk into the innards of the XML configuration settings of IIS 6.0 (which isn't for the faint of heart). By the way, the credit goes to Scott Forsyth of OrcsWeb who helped me figure this out for the www.asp.net servers hosted by OrcsWeb.
Rather than include the procedure in this article, just read the article by Brad Wilson at IIS6 Compression. There's also a Knowledge Base article on enabling compression for ASPX, available at Enable ASPX Compression in IIS. It should be noted, however, that dynamic compression and kernel caching are mutually exclusive on IIS 6.0 due to some implementation details.

Tip 10—Server Control View State
View state is a fancy name for ASP.NET storing some state data in a hidden input field inside the generated page. When the page is posted back to the server, the server can parse, validate, and apply this view state data back to the page's tree of controls. View state is a very powerful capability since it allows state to be persisted with the client and it requires no cookies or server memory to save this state. Many ASP.NET server controls use view state to persist settings made during interactions with elements on the page, for example, saving the current page that is being displayed when paging through data.
There are a number of drawbacks to the use of view state, however. First of all, it increases the total payload of the page both when served and when requested. There is also an additional overhead incurred when serializing or deserializing view state data that is posted back to the server. Lastly, view state increases the memory allocations on the server.
Several server controls, the most well known of which is the DataGrid, tend to make excessive use of view state, even in cases where it is not needed. The default behavior of the ViewState property is enabled, but if you don't need it, you can turn it off at the control or page level. Within a control, you simply set the EnableViewState property to false, or you can set it globally within the page using this setting:

<%@ Page EnableViewState="false" %>

If you are not doing postbacks in a page or are always regenerating the controls on a page on each request, you should disable view state at the page level.
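For the control-level case, the same switch can be flipped in the control declaration; here is a hypothetical DataGrid declared with view state turned off:

<asp:DataGrid id="OrdersGrid" runat="server" EnableViewState="false" />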
Conclusion
I've offered you some tips that I've found useful for writing high-performance ASP.NET applications. As I mentioned at the beginning of this article, this is more a preliminary guide than the last word on ASP.NET performance. (More information on improving the performance of ASP.NET apps can be found at Improving ASP.NET Performance.) Only through your own experience can you find the best way to solve your unique performance problems. However, during your journey, these tips should provide you with good guidance. In software development, there are very few absolutes; every application is unique.

Wednesday, November 17, 2010

SQL SERVER – 2008 – Insert Multiple Records Using One Insert Statement – Use of Row Constructor

I previously wrote an article about SQL SERVER – Insert Multiple Records Using One Insert Statement – Use of UNION ALL. I am glad that SQL Server 2008 has a new feature which will make our lives much easier: we are now able to insert multiple rows using only one INSERT statement.

Previous method 1:
USE YourDB
GO
INSERT INTO MyTable (FirstCol, SecondCol)
VALUES ('First',1);
INSERT INTO MyTable (FirstCol, SecondCol)
VALUES ('Second',2);
INSERT INTO MyTable (FirstCol, SecondCol)
VALUES ('Third',3);
INSERT INTO MyTable (FirstCol, SecondCol)
VALUES ('Fourth',4);
INSERT INTO MyTable (FirstCol, SecondCol)
VALUES ('Fifth',5);
GO
Previous method 2:
USE YourDB
GO
INSERT INTO MyTable (FirstCol, SecondCol)
SELECT 'First' ,1
UNION ALL
SELECT 'Second' ,2
UNION ALL
SELECT 'Third' ,3
UNION ALL
SELECT 'Fourth' ,4
UNION ALL
SELECT 'Fifth' ,5
GO
SQL Server 2008 Method of Row Construction:
USE YourDB
GO
INSERT INTO MyTable (FirstCol, SecondCol)
VALUES ('First',1),
('Second',2),
('Third',3),
('Fourth',4),
('Fifth',5);
GO

Source :- http://blog.sqlauthority.com/2008/07/02/sql-server-2008-insert-multiple-records-using-one-insert-statement-use-of-row-constructor/



SQL SERVER – Insert Data From One Table to Another Table – INSERT INTO SELECT – SELECT INTO TABLE

There are two different ways to insert data from one table into another. I strongly suggest using either of these methods over a cursor; the performance of both is far superior. I always prefer Method 1, as it works in all cases.

Method 1 : INSERT INTO SELECT
This method is used when the table already exists in the database and data from another table is to be inserted into it. If the columns listed in the INSERT clause and the SELECT clause are the same, listing them is not required, but I always list them for readability and scalability purposes.
USE AdventureWorks
GO
----Create TestTable
CREATE TABLE TestTable (FirstName VARCHAR(100), LastName VARCHAR(100))
----INSERT INTO TestTable using SELECT
INSERT INTO TestTable (FirstName, LastName)
SELECT FirstName, LastName
FROM Person.Contact
WHERE EmailPromotion = 2
----Verify the data in TestTable
SELECT FirstName, LastName
FROM TestTable
----Clean Up Database
DROP TABLE TestTable
GO

Method 2 : SELECT INTO
This method is used when the table has not been created beforehand; it is created at the time data from the source table is inserted into it. The new table is created with the same data types as the selected columns.
USE AdventureWorks
GO
----Create new table and insert into it using SELECT INTO
SELECT FirstName, LastName
INTO TestTable
FROM Person.Contact
WHERE EmailPromotion = 2
----Verify the data in TestTable
SELECT FirstName, LastName
FROM TestTable
----Clean Up Database
DROP TABLE TestTable
GO

Both of the above methods work with temporary tables (global and local); for example, SELECT FirstName, LastName INTO #TempTable FROM Person.Contact creates and populates a local temporary table in one step.

Source :- http://blog.sqlauthority.com

Limiting the Number of Characters in a MultiLine TextBox

Attach the check to the TextBox with onkeypress="return CheckLengthStatus(this);", assuming 16000 is the maximum length:

function CheckLengthStatus(box) {
    try {
        var length = box.value.length;
        if (length >= 16000) {
            // Cancel the keystroke once the limit has been reached
            if (window.event) {
                window.event.keyCode = 0; // Internet Explorer
            }
            return false; // other browsers honor the return value
        }
    }
    catch (e) { }
    return true;
}
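If you prefer to attach the handler from the code-behind instead of the markup, something like the following works (assuming a multiline TextBox named txtComments):

protected void Page_Load(object sender, EventArgs e)
{
    // Attach the client-side length check to every keypress in the multiline box
    txtComments.Attributes["onkeypress"] = "return CheckLengthStatus(this);";
}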

Tuesday, November 16, 2010

Determining the Control that Caused a PostBack

Many times you might need to perform some action on an ASP.NET postback based on the control that caused the postback to occur. One scenario for this might be a form with many regions, each having its own CustomValidator and a button that posts back for that section. Another scenario might be setting focus back to the control that caused the postback.

There are two parts to the process of determining which control caused the postback. First, you access the __EVENTTARGET element of the form. If you've ever looked at the anatomy of an ASP.NET page (and if you haven't, why are you reading this?), you'll notice that a hidden input tag named __EVENTTARGET is added to the form. The __doPostBack JavaScript function sets this hidden input to the name of the control that was clicked and then submits the form. Looking at how the server controls are rendered as HTML, you can see that they call the __doPostBack function to cause a postback, passing their own name to it. Because this hidden input is submitted with the form, you can read it from your code-behind through the Params or Form collections. That is the first part of getting the control that caused a postback. Once you have the name, you can get a reference to the control via FindControl and use it as needed:

string ctrlname = page.Request.Params.Get("__EVENTTARGET");
if (ctrlname != null && ctrlname != string.Empty)
{
    return this.Page.FindControl(ctrlname);
}

This will work, but you'll soon find that something is missing. Although it works for CheckBoxes, DropDownLists, LinkButtons, and so on, it does not work for Button controls. This is where the second part comes in. Why doesn't it work for Buttons? If you again look at how the server controls render as HTML, you'll see that buttons do not call the __doPostBack JavaScript function, so __EVENTTARGET is never set. Instead, Buttons render as simple input type="submit" tags; all the button does is cause the form to submit. However, you can still get to it, just in a different way. Since the button (or input) is what causes the form to submit, it is added to the items in the Form collection, along with all the other values from the submitted form. It is important to note that other input type="submit" tags on the form are not added to the Form collection unless they caused the form to submit. So anything in the Form collection that maps to a Button control is what caused the postback (assuming a button caused the page to submit). If you first check __EVENTTARGET, and then, if that is blank, look for a button in the Form collection, you will find what caused the postback. The complete code follows:

public static Control GetPostBackControl(Page page)
{
    Control control = null;

    // First check the __EVENTTARGET hidden field set by __doPostBack
    string ctrlname = page.Request.Params.Get("__EVENTTARGET");
    if (ctrlname != null && ctrlname != string.Empty)
    {
        control = page.FindControl(ctrlname);
    }
    else
    {
        // A Button submits the form directly, so look for it in the Form collection
        foreach (string ctl in page.Request.Form)
        {
            Control c = page.FindControl(ctl);
            if (c is System.Web.UI.WebControls.Button)
            {
                control = c;
                break;
            }
        }
    }
    return control;
}

This method takes a reference to the Page as its parameter and uses it to look for the control that caused the postback. You can use it as follows:

Control c = PageUtility.GetPostBackControl(this.Page);
if (c != null)
{
    //...
}

You can now do some specific action based on the control that caused the postback or use an earlier post I made to set focus to the control.
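For example, with the method above in hand, returning focus to that control is a one-liner in ASP.NET 2.0 and later:

Control c = PageUtility.GetPostBackControl(this.Page);
if (c != null)
{
    // Control.Focus() emits the client script needed to focus the element on load
    c.Focus();
}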

LINQ Version :-

/// <summary>
/// Gets the control which caused the postback.
/// </summary>
/// <param name="page">The page in which the postback occurs. Most likely will be "this.Page".</param>
/// <returns>The control which caused the postback, or null if the control was not found.</returns>
// Requires: using System.Linq; using System.Web.UI; using System.Web.UI.WebControls;
public static Control GetPostBackControl(Page page)
{
    string eventTarget = page.Request.Params["__EVENTTARGET"];
    if (!string.IsNullOrEmpty(eventTarget))
    {
        return page.FindControl(eventTarget);
    }
    else
    {
        // If __EVENTTARGET is null, the postback control is a button type. Use LINQ
        // to query the Request's form variables in order to find the button control.
        // (An ImageButton submits two keys, "name.x" and "name.y", hence the trimming.)
        var postbackButton = (from x in page.Request.Form.AllKeys
                              where page.FindControl(x) is Button ||
                                    (x.Length > 2 && page.FindControl(x.Substring(0, x.Length - 2)) is ImageButton)
                              select (x.EndsWith(".x") || x.EndsWith(".y")
                                         ? x.Substring(0, x.Length - 2)
                                         : x)).FirstOrDefault();

        return (!string.IsNullOrEmpty(postbackButton) ? page.FindControl(postbackButton) : null);
    }
}

Source :-

http://ryanfarley.com/blog/archive/2005/03/11/1886.aspx

Monday, November 15, 2010

Manipulate Validator controls and Validation Group from JavaScript

Scenario 1:-
To enable a Validator from JavaScript:
ValidatorEnable('Validator Client ID', true);

To disable a Validator from JavaScript:
ValidatorEnable('Validator Client ID', false);

To validate all the Validator controls from JavaScript:
Page_ClientValidate();

This function will return true if there are no validation errors; otherwise it will return false.

To validate a specific validation group from JavaScript:
Page_ClientValidate('ValidationGroupName');

Example :-
OnClientClick="javascript:return SavePunchoutCommercial();"

function SavePunchoutCommercial() {
    // If client-side validation isn't available on the page, allow the postback
    if (typeof (Page_ClientValidate) != 'function') {
        return true;
    }
    // Otherwise allow the postback only when the validation group passes
    return Page_ClientValidate('ValidationGroupName');
}
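The same hookup can also be done from the code-behind; a small sketch, assuming a Button named btnSave:

protected void Page_Load(object sender, EventArgs e)
{
    // Run the client-side validation wrapper before the postback is allowed
    btnSave.OnClientClick = "return SavePunchoutCommercial();";
}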

So, this way we can manipulate validator controls and validation group from JavaScript. I hope this will help you out.


Scenario 2:-

Put this code on a Button named "Done":

OnClientClick="return EnableDisableValidation();"

The JavaScript function EnableDisableValidation() looks like this:

function EnableDisableValidation()
{
    if ($L('rdbMyAddress').checked)
    {
        // Validate only the "Address" validation group
        return Page_ClientValidate('Address');
    }
    else if ($L('rdbAddressGift').checked)
    {
        // Validate every validator control on the page
        return Page_ClientValidate();
    }
    return true; // neither radio button is checked, so allow the postback
}

Here $L is shorthand for document.getElementById().

Page_ClientValidate('Address') validates all validator controls having the property ValidationGroup="Address", whereas Page_ClientValidate() validates every validation control on the page.

Source :-
http://blog.zoomasp.net/?p=26
http://programminghack.wordpress.com/2008/09/25/manipulate-validator-controls-validation-group-from-javascript/


Sunday, November 14, 2010

Add JavaScript programmatically using RegisterStartupScript during an Asynchronous postback

Scenario 1:
Create an ASP.NET project, then add a Label (lblDisplayDate) and a Button (btnPostback) control to the page.
On the Button click, register the following script dynamically using ClientScriptManager.RegisterStartupScript to change the color of the label control to red.
C#
protected void Page_Load(object sender, EventArgs e)
{
lblDisplayDate.Text = System.DateTime.Now.ToString("T");
}
protected void btnPostback_Click(object sender, EventArgs e)
{
System.Text.StringBuilder sb = new System.Text.StringBuilder();
// Include the script tags ourselves; this RegisterStartupScript overload does not add them
sb.Append(@"<script language='javascript'>");
sb.Append(@"var lbl = document.getElementById('lblDisplayDate');");
sb.Append(@"lbl.style.color='red';");
sb.Append(@"</script>");
if (!ClientScript.IsStartupScriptRegistered("JSScript"))
{
ClientScript.RegisterStartupScript(this.GetType(), "JSScript", sb.ToString());
}
}

Scenario 2:
Now wrap the label and the button control inside an UpdatePanel as shown below:
<div>
    <asp:ScriptManager ID="ScriptManager1" runat="server">
    </asp:ScriptManager>
    <asp:UpdatePanel ID="UpdatePanel1" runat="server">
        <ContentTemplate>
            <asp:Label ID="lblDisplayDate" runat="server" Text="Label"></asp:Label>
            <asp:Button ID="btnPostback" runat="server" onclick="btnPostback_Click"
                Text="ClickMe" />
        </ContentTemplate>
    </asp:UpdatePanel>
</div>
When you now run the application and click on the button, you will observe that the label is updated with the new time (which means the postback occurred); however, the color of the label does not turn red. This is because, when using an UpdatePanel, JavaScript that is dynamically added to the page using ClientScript.RegisterStartupScript() does not execute.
Solution:
Use the ScriptManager.RegisterStartupScript() method. If you take a look at the methods of the ScriptManager class, you will observe that the methods for registering client script on the page via the ClientScriptManager class are also present on the ScriptManager class. So modify the code as shown below:
C#
protected void btnPostback_Click(object sender, EventArgs e)
{
System.Text.StringBuilder sb = new System.Text.StringBuilder();
// Include the script tags ourselves; the last parameter to RegisterStartupScript below is false
sb.Append(@"<script language='javascript'>");
sb.Append(@"var lbl = document.getElementById('lblDisplayDate');");
sb.Append(@"lbl.style.color='red';");
sb.Append(@"</script>");
ScriptManager.RegisterStartupScript(btnPostback, this.GetType(), "JSCR", sb.ToString(), false);
}
Note 1: Observe that the last parameter to RegisterStartupScript is set to 'false'. This tells ASP.NET not to wrap the script in <script> tags automatically. Since we have already included the script tags while building the script in the StringBuilder, we do not need them added again. Setting the last parameter to 'true' in such cases is a very common mistake developers make.
Note 2: Building up JavaScript with a StringBuilder as shown above is not advisable in large projects. Instead, you should keep the JavaScript in a separate .js file and reference that file, as sketched below.
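A minimal sketch of that approach, assuming a hypothetical Scripts/HighlightLabel.js file that defines a highlightLabel() function:

protected void btnPostback_Click(object sender, EventArgs e)
{
    // Ensure the external script file is included on the page (hypothetical path)
    ScriptManager.RegisterClientScriptInclude(
        btnPostback, this.GetType(), "HighlightLabelInclude", "Scripts/HighlightLabel.js");

    // Invoke a function from that file when the (partial) postback completes;
    // passing true lets ASP.NET wrap this raw snippet in script tags for us
    ScriptManager.RegisterStartupScript(
        btnPostback, this.GetType(), "JSCR", "highlightLabel('lblDisplayDate');", true);
}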
Source :-