Blog Archives

Use ASP.NET’s HttpHandler to bridge the cross-domain gap

When you’re developing client-side applications, a problem you’ll almost inevitably have to deal with is how to work with services that reside outside your website’s domain. Though many modern APIs support JSONP, a clever workaround that somewhat mitigates the cross-domain restriction, JSONP has problems of its own.

Worse, if you encounter an API with no JSONP support, the cross-domain barrier can quickly become a formidable one. CORS is slowly becoming a viable alternative, but it requires that the remote service support it via special HTTP headers, and browser support for CORS is still not ubiquitous.

Until CORS is more broadly supported, an alternative solution is to bounce cross-domain requests through the web server that hosts your website. In ASP.NET, the best tool for implementing that sort of middleman endpoint is the HttpHandler.

In this post, I’ll show you how to create an HttpHandler to service cross-domain requests, how to use jQuery to communicate with the handler, and an example of one improvement that this approach makes possible.

An example remote API

To focus on an example that’s already familiar to many, I’m going to use Twitter. Twitter’s API does support JSONP, which is a viable alternative for consuming it across domains. In fact, the Twitter status that you see in my sidebar to the right was retrieved from Twitter’s API via JSONP.

However, not every service supports JSONP, its third-party script injection mechanism is sometimes problematic, and using JSONP robs us of niceties like local caching. So, for the sake of a good example, let’s find a way to use the Twitter API on the client-side without resorting to JSONP.

Specifically, I’m interested in querying the service for my last few status updates. The Twitter API request to accomplish that looks like this:
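
http://api.twitter.com/1/statuses/user_timeline.json?id=Encosia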

Twitter will respond to that with a JSON array of objects representing my (or your) last 20 tweets, which is exactly what we’re after.

The best tool for the job: HttpHandler

If you’re accustomed to using ASP.NET’s page methods and ScriptServices to facilitate communication between client and server, those tools begin to look like a hammer that matches every JSON-shaped nail in sight. However, when simply relaying an external API’s JSON through to the client, they often add unnecessary overhead and complexity.

Rather, a lower-level tool is more appropriate in this case.

HttpHandlers are one of ASP.NET’s most under-utilized tools. They’re simple to implement and allow you to handle requests closer to the metal than WebForms pages or MVC controller actions.

One place in particular where HttpHandlers shine is where you would otherwise consider writing Response.Write statements in a WebForms page’s code-behind. This anti-pattern of using ASPX’s code-behind to get closer to the metal looks similar to approaches that you’ll see on some other platforms, such as PHP, but is not equivalent.

Unfortunately, even if you don’t use WebForms controls or ASPX markup at all, executing that low-level code from an ASPX page’s code-behind requires that every request filter through the full page life cycle. That means even the simplest request still has to percolate all the way from PreInit to Unload, adding needless overhead.

Instead, the HttpHandler is where you should write that sort of code that ultimately boils down to Response.Write calls.

Choosing the right handler type

A tricky issue when you’re writing your first HttpHandler is that Visual Studio presents you with two templates, “ASP.NET Handler” and “Generic Handler”:

The add item dialog presents two choices of HttpHandler templates

Both are similar, but the “ASP.NET Handler” template’s approach requires modifying your web.config to configure which URL your handler accepts requests at. Mucking around in the web.config isn’t terribly difficult, but it’s extra friction that makes the process less approachable.
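
For reference, registering a handler that way looks roughly like this; this is only a sketch, and the class and assembly names are hypothetical placeholders:

<system.web>
  <httpHandlers>
    <!-- Route requests for this URL to our handler class. -->
    <add verb="*" path="TwitterProxy.axd"
         type="MyApp.TwitterProxy, MyApp" />
  </httpHandlers>
</system.web>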

In the spirit of keeping things simple, let’s stick with the more traditionally file-based “Generic Handler”.

Getting started with your first HttpHandler

After choosing that template, specifying a name, and adding the new file to your site, you’ll end up with a bit of boilerplate code that includes this method:

public void ProcessRequest(HttpContext context) {
  context.Response.ContentType = "text/plain";
  context.Response.Write("Hello World");
}
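
For context, the rest of the generated file is minimal. Working from memory of the Visual Studio template (the details vary slightly between versions), the complete Handler1.ashx looks roughly like this:

<%@ WebHandler Language="C#" Class="Handler1" %>

using System.Web;

public class Handler1 : IHttpHandler {
  public void ProcessRequest(HttpContext context) {
    context.Response.ContentType = "text/plain";
    context.Response.Write("Hello World");
  }

  // Returning false tells ASP.NET not to reuse this instance
  //  across requests.
  public bool IsReusable {
    get { return false; }
  }
}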

If you start the site up in Visual Studio and then request your newly created HttpHandler in a browser (Handler1.ashx, if you accepted the default name), you will see “Hello World” as you might expect.

That’s not very impressive yet, but the response you saw made its way to your browser without touching WebForms’ page life cycle or filtering through ASP.NET MVC’s routing engine and action filters. While those things are worthwhile niceties for the majority of your application, they’re unwanted overhead when all you need is to efficiently relay some content through the server.

Bouncing a request to Twitter through the HttpHandler

To adapt an HttpHandler for relaying requests to the Twitter API, we can use .NET’s handy WebClient class to make the request to Twitter’s API, and then return the result back through as the handler’s response:

public void ProcessRequest(HttpContext context) {
  WebClient twitter = new WebClient();

  // The base URL for Twitter API requests.
  string baseUrl = "http://api.twitter.com/1/";

  // The specific API call that we're interested in.
  string request = "statuses/user_timeline.json?id=Encosia";

  // Make a request to the API and capture its result.
  string response = twitter.DownloadString(baseUrl + request);

  // Set the content-type so that libraries like jQuery can
  //  automatically parse the result.
  context.Response.ContentType = "application/json";

  // Relay the API response back down to the client.
  context.Response.Write(response);
}

That code simply makes an HTTP request to the Twitter API and blindly bounces the result back through as the HttpHandler’s response. For the time being, everything is hard-coded, but we’ll improve on that soon enough.

Using the handler proxy on the client-side

With our web server doing the heavy lifting, using this server-side proxy to make a remote request is trivial:

$.getJSON('TwitterProxy.ashx', function(tweets) {
  // Call a magical function that does all the presentational work.
  displayTweets(tweets);
});
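
For completeness, here’s a minimal sketch of that “magical” function. It assumes each tweet object exposes a .text property, as the v1 timeline API did, and a hypothetical #tweets list element to render into:

function displayTweets(tweets) {
  $.each(tweets, function(i, tweet) {
    // Use .text() so tweet content is treated as text, not HTML.
    $('<li>').text(tweet.text).appendTo('#tweets');
  });
}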

The jQuery code here is actually identical to what you’d use when requesting the API via JSONP. Whether that response is truly being fulfilled by the specified URL or it’s being relayed through our HttpHandler, it’s all the same to the jQuery code on the client-side.
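
For comparison, the direct JSONP version would look roughly like this; the callback=? parameter is jQuery’s signal to use JSONP instead of XMLHttpRequest:

$.getJSON('http://api.twitter.com/1/statuses/user_timeline.json' +
          '?id=Encosia&callback=?', function(tweets) {
  displayTweets(tweets);
});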

As you’ll see when we add caching, this can easily be exploited for good.

Mixing things up with QueryString parameters

The hard-coded approach works well enough, but what if we wanted to be able to query any Twitter account’s recent updates instead of being limited to just that boring Encosia character?

Since HttpHandlers receive an instance of the current HttpContext as the parameter to their ProcessRequest method, it’s easy to access QueryString parameters and react accordingly. For example, this would allow us to request any Twitter account’s timeline via an id parameter on the QueryString:

public void ProcessRequest(HttpContext context) {
  WebClient twitter = new WebClient();

  string baseUrl = "http://api.twitter.com/1/";

  // Extract the desired account ID from the QueryString.
  string id = context.Request.QueryString["id"];

  // Make a request to the API for the specified id.
  string request = "statuses/user_timeline.json?id=" + id;

  // Same as before, from here on out:
  string response = twitter.DownloadString(baseUrl + request);

  context.Response.ContentType = "application/json";
  context.Response.Write(response);
}

Now it works exactly the same way as before, but we can choose which Twitter account’s timeline is requested. For example, this URL would request Scott Guthrie’s latest tweets:

TwitterProxy.ashx?id=ScottGu

Supplying parameters with $.getJSON

To pass this new parameter in from the client-side, you could handcraft the entire URL including the appropriate QueryString. Even better though, $.getJSON has an optional “data” argument that accepts a JavaScript object and converts it to QueryString parameters:

$.getJSON('TwitterProxy.ashx', { id: 'ScottGu' }, function(tweets) {
  displayTweets(tweets);
});

jQuery will automatically URL-encode the parameters you specify in the “data” argument and properly assemble them into the final URL to be requested:

Screenshot of the HttpHandler request generated by the jQuery code above.

Which is exactly what we need it to do.

Using a configuration object like this is cleaner than manually concatenating a string together and makes it easier to vary the parameter at runtime.

Improving performance with server-side caching

An advantage the HttpHandler proxy has over CORS and JSONP is that you can perform any arbitrary server-side processing that you wish, both before and after the remote service responds. A great way to take advantage of that is adding a server-side caching layer.

Server-side caching will reduce how often requests actually trigger API calls and can significantly improve performance for requests that are already cached. A caching middleman like this is especially valuable when dealing with rate-limited APIs like Twitter’s.

Let’s say that we wanted to cache Twitter responses for up to five minutes, for example:

public void ProcessRequest(HttpContext context) {
  // This will be the case whether there's a cache hit or not.
  context.Response.ContentType = "application/json";

  // Extract the desired account ID from the QueryString, as before.
  string id = context.Request.QueryString["id"];

  // Check to see if this account's tweets are already cached,
  //   then retrieve and return the cached value if so.
  // 8/3/11: Updated with more robust test, thanks to ctolkien.
  object tweetsCache = context.Cache["tweets-" + id];

  if (tweetsCache != null) {
    string cachedTweets = tweetsCache.ToString();

    context.Response.Write(cachedTweets);

    // We're done here.
    return;
  }

  WebClient twitter = new WebClient();

  // The concatenation is just to keep the line length manageable.
  string url = "http://api.twitter.com/1/statuses/" +
               "user_timeline.json?id=" + id;

  string tweets = twitter.DownloadString(url);

  // This caches the WebClient result with a maximum lifetime of
  //  five minutes from now. If you don't care about the expiration,
  //  a simple context.Cache["tweets-" + id] = tweets; works instead.
  context.Cache.Add("tweets-" + id, tweets,
    null, DateTime.Now.AddMinutes(5),
    System.Web.Caching.Cache.NoSlidingExpiration,
    System.Web.Caching.CacheItemPriority.Normal,
    null);

  context.Response.Write(tweets);
}

Adding the intermediate cache results in a tremendous performance improvement after the first request:

Screenshot of an initial uncached request to Twitter and then the subsequent, cached requests

With the server able to immediately serve requests within the five-minute caching window, subsequent $.getJSON requests are an order of magnitude faster!

Perhaps even more importantly in the case of Twitter, these four refreshes only counted as one API call against my hourly rate-limit.

Conclusion

Using HttpHandlers as server-side proxies turns out to be a simple way to work around the pesky cross-domain restrictions that we’ve all run into from time to time. All said and done, using an HttpHandler to proxy third-party requests takes only a few lines of code, but offers nearly unlimited flexibility.

In addition to the obvious benefit of getting around the cross-domain restriction, bouncing requests through your own server potentially has a range of other benefits, including:

  • Error handling – This approach not only passes unhandled exceptions on the WebClient request back through to the client-side, but it also gives you the ability to enhance the error handling with your own sanity checks and constraints (see the sketch just after this list).
  • Caching – As shown in this post’s final example, you can very easily interject your own caching layer for requests passing through the HttpHandler proxy. That’s especially useful when working against rate limited or potentially slow/flaky APIs (like Twitter’s).
  • Security – When you’re accustomed to server-side programming, the revealing nature of client-side JavaScript can be unnerving. Learning to appropriately partition sensitive algorithms and data between client and server is key to mitigating that issue. Along those lines, moving the remote request to code running on your server is one way to keep sensitive information like API keys and passwords safely hidden from view-source and client-side developer tools.
  • Reliability – One of JSONP’s less obvious drawbacks is the fact that it relies on injecting a third-party script. However, your users may be using something like NoScript to purposely block third-party scripts, effectively shutting down your ability to use JSONP. Even if you prefer JSONP in most cases, a local server-side proxy can be helpful as a fallback in case of unexpected JSONP failures.
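
To make the error-handling point concrete, here’s a minimal sketch that reuses the variable names from the earlier handler snippet and surfaces remote failures as an HTTP error; the choice of a 502 status is my own assumption, not something from the original handler:

try {
  string response = twitter.DownloadString(baseUrl + request);

  context.Response.ContentType = "application/json";
  context.Response.Write(response);
}
catch (WebException ex) {
  // Translate upstream failures into a "502 Bad Gateway" so the
  //  client-side error callback fires instead of parsing garbage.
  context.Response.StatusCode = 502;
  context.Response.ContentType = "text/plain";
  context.Response.Write("Upstream request failed: " + ex.Message);
}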

That’s not to say that there are no downsides to this approach. When you’re using an HttpHandler proxy, it’s important to keep in mind that it can be slower, since you’re making a series of two connections instead of a single, direct one. You also lose the ability to request content with the user’s third-party cookies attached to the request, which is helpful in some cases.

Overall, using server-side proxies is a very useful item to have in your toolbox. I hope this post has served to introduce you to the approach and/or given you better insight into how you can use HttpHandlers to your advantage.

Get the source

If you’d like to browse through a complete working example of what’s been covered in this post, take a look at the companion project at GitHub. Or, if you’d like to download the entire project and run it in Visual Studio to see it in action yourself, grab the ZIP archive.

HttpHandler-Proxy on GitHub
HttpHandler-Proxy.zip

Related posts:

  1. AJAX, file downloads, and IFRAMEs
  2. The easiest way to break ASP.NET AJAX pages
  3. Why ASP.NET AJAX UpdatePanels are dangerous

Use jQuery to extract data from HTML lists and tables

A question that I’ve been seeing more frequently these days is how to extract a JavaScript object from an HTML list or table, given no data or information other than the markup. It’s not ideal to work backwards from HTML, but sometimes you just don’t have a lot of choice in the matter.

Whether you’re enhancing legacy elements that have been generated on the server-side or want to parse the output of a third-party DHTML widget, there are a variety of situations where converting HTML to raw data is a legitimate need. You may have seen iterative solutions to this problem before. However, nested looping code gets messy fast, doesn’t feel much like idiomatic jQuery, and certainly isn’t as concise as you’d probably like.

Luckily, one of JavaScript’s lesser-known utility methods and jQuery’s implementation of it can improve the situation quite a bit. In this post, I’m going to show you how to use this method, jQuery’s cross-browser solution, and how to use it to extract data objects from arbitrary HTML lists and tables.

Array.map()

It turns out that there’s a tool perfectly suited to the task of coercing one data structure into another: map.

Map is a higher-order function that allows you to transform the contents of a collection by applying a function to each item, capturing the result, and building a new collection of those results.

Map is a perfect tool for translating a collection full of extraneous data into a tightly-focused collection of exactly the desired subset. Even better, JavaScript 1.6 includes a native implementation of map, which is exposed as a method on the Array prototype.

For example, this is how you could use JavaScript 1.6’s Array.map() to analyze an array of strings and create a new array containing each string’s length:

var sites = ['Encosia', 'jQuery', 'ASP.NET', 'StackOverflow'];

// For each site in the array, apply this function
//  and build an array of the results.
var lengths = sites.map(function(site, index) {
  // Use the length of each name as its value in the new array.
  return site.length;
});

// This outputs: [7, 6, 7, 13]
console.log(lengths);

This is a very simple example, but you can probably already imagine applying that same technique to an array of list elements or table rows. The concise expressiveness of the map approach is great for paring away extraneous markup and extracting just underlying data.

Mapping uncharted territory

Unfortunately, JavaScript 1.6 and its map implementation aren’t something that you can count on being available in older browsers. Notably, Internet Explorer didn’t provide an Array.map() implementation until IE9.

Though that is disappointing, map isn’t difficult to manually implement. For example, this is a polyfill that the MDC recommends for patching Array.map() into older browsers:

if (!Array.prototype.map) {
  Array.prototype.map = function(fun /*, thisp */) {
    "use strict";

    if (this === void 0 || this === null)
      throw new TypeError();

    var t = Object(this);
    var len = t.length >>> 0;
    if (typeof fun !== "function")
      throw new TypeError();

    var res = new Array(len);
    var thisp = arguments[1];
    for (var i = 0; i < len; i++) {
      if (i in t)
        res[i] = fun.call(thisp, t[i], i, t);
    }

    return res;
  };
}

That’s a workable solution, but I doubt you’re very excited about the prospect of including all this code in your page. I know I wouldn’t be.

jQuery has you covered

If you’re already including jQuery in your pages, the good news is that jQuery has a built-in map implementation that works in every browser. In fact, jQuery provides two separate map methods: one that’s specially suited to working with jQuery selections and a general utility method that’s more similar to the polyfill shown above.

For working with HTML, I’m going to focus on using the former: .map().

To replicate the JavaScript 1.6 dependent example shown earlier, using jQuery’s implementation instead, the code would look like this:

var sites = ['Encosia', 'jQuery', 'ASP.NET', 'StackOverflow'];

// Same as before, using jQuery's map() implementation.
var lengths = $(sites).map(function(index, site) {
  // Use the length of each site name as its value in the new array.
  return site.length;
});

// This outputs: [7, 6, 7, 13]
console.log(lengths);

Making “this” approach more concise

To condense the code a bit, we can take advantage of the execution context within the callback function. During each callback, this holds the value of the array item currently being operated on. So, there’s no need to bother capturing the callback’s two input parameters:

var lengths = $(sites).map(function() {
  // "this" refers to the current array element as this callback is
  //  applied to each array element.
  return this.length;
});

That isn’t a huge improvement, but every little bit helps and I’ll be using this in the examples throughout the rest of this post. So, I wanted to make sure what’s happening there is clear.

Unwrapping the result of jQuery’s .map()

The one quirk when using jQuery’s .map() method is that it sometimes returns a jQuery wrapped set; specifically, when you apply it to the result of a jQuery DOM selection. Even if your mapping function returns scalar values like strings and numbers, the end result of .map() will include the jQuery object prototype on each element.

That isn’t really a problem if you only intend to use that result immediately in your JavaScript code. However, the jQuery object prototype hanging off each element throws a wrench in the works if you try to use JSON.stringify() on the result of .map(). Since JSON serialization is such a common task when storing or transmitting JavaScript data, this quirk turns out to be a real issue.

The solution is to call jQuery’s get() method on those wrapped-array results, which boils them down to plain arrays. When you see .get() tagged onto the end of the examples ahead, that’s why it’s there.
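
To make that concrete, here’s a small sketch of the difference; the exact stringified output naturally depends on the list items in your page:

// Map the text length of each <li>, then unwrap with .get().
var lengths = $('li').map(function() {
  return $(this).text().length;
}).get();

// With .get(), this serializes as a plain JSON array, e.g. [6,6,6].
//  Without it, the jQuery wrapper gets in JSON.stringify()'s way.
console.log(JSON.stringify(lengths));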

Now, let’s take a look at applying .map() to HTML and using it to extract data.

Mapping the data within HTML unordered lists

Using .map() against an unordered list is one of the most straightforward examples to start with. Imagine you had this simple HTML markup:

<ul>
  <li>Item 1</li>
  <li>Item 2</li>
  <li>Item 3</li>
</ul>

To extract each of those items’ displayed value, you could use .map() like this:

// Returns ['Item 1', 'Item 2', 'Item 3']
$('li').map(function() {
  // For each <li> in the list, return its inner text and let .map()
  //  build an array of those values.
  return $(this).text();
}).get();

Complicating things slightly, maybe the list items also have an HTML5 data- attribute that you need to collect in addition to their values:

<ul>
  <li data-id="123">Item 1</li>
  <li data-id="456">Item 2</li>
  <li data-id="789">Item 3</li>
</ul>

Using .map() to extract that more complex data is just as easy:

// Returns [{id: 123, text: 'Item 1'},
//          {id: 456, text: 'Item 2'},
//          {id: 789, text: 'Item 3'}]
$('li').map(function() {
  // $(this) is used more than once; cache it for performance.
  var $item = $(this);

  return {
    // Note: using .data() to read HTML5 data- attributes
    //  requires jQuery 1.4.3+. Use attr() in older versions.
    id: $item.data('id'),
    text: $item.text()
  };
}).get();

As you can see, .map() is a powerful tool for concisely pulling arbitrary bits of data together into a useful structure. You could certainly do this with a temp variable and for-loop, but it’s hard to beat the clean expressiveness this approach lends your code.

There’s a great JavaScript learning opportunity in the code above, but it’s on a bit of a tangent. Rather than let this post run even longer, I wrote about that in a separate post. If you’re interested in how an innocuous change to the location of one curly brace in the preceding code can transparently break it, that post is for you.

You can find that post here: In JavaScript, curly brace placement matters: An example.

Extracting data from HTML tables

Working with the lists is good for a simple example, but what if we need to apply this technique to an HTML structure that’s more complex than an unordered list?

HTML tables are one of the most common targets for this technique. It’s not unusual to end up with a pre-rendered table that was generated off-page and to desire a client-side data structure representing that table’s data.

For example, here’s a tabular representation of the same data contained in the second list example:

<table id="myTable">   <thead>     <tr>       <th>id</th>       <th>text</th>     </tr>   </thead>   <tbody>     <tr>       <td>123</td>       <td>Item 1</td>     </tr>     <tr>       <td>456</td>       <td>Item 2</td>     </tr>     <tr>       <td>789</td>       <td>Item 3</td>     </tr>   </tbody> </table>

If you wanted to boil that table down to exactly the same JavaScript object shown in the second list example, this .map() usage would do the trick:

// Returns [{id: 123, text: 'Item 1'},
//          {id: 456, text: 'Item 2'},
//          {id: 789, text: 'Item 3'}]
$('#myTable tbody tr').map(function() {
  // $(this) is used more than once; cache it for performance.
  var $row = $(this);

  // For each row that's "mapped", return an object that
  //  describes the first and second <td> in the row.
  return {
    id: $row.find(':nth-child(1)').text(),
    text: $row.find(':nth-child(2)').text()
  };
}).get();

The key to making this approach work is using the :nth-child selector to index into each row and retrieve the contents of the cells we’re interested in. This is very similar to how we handled the unordered list earlier, but can be applied to arbitrarily large structures such as wide HTML tables.

If you use this approach, one thing to keep in mind is that :nth-child uses one-based indexing. So, you must use :nth-child(1) to select the first cell, not :nth-child(0) as you might expect.

A general solution for tables

Using hard coded :nth-child selectors works well enough in simple scenarios, but it’s brittle. If the table structure changes, relying on a certain table layout will break. Hard coding the selectors for each column also becomes tedious when dealing with wider tables that have many columns.

So, as you apply this technique to larger or less predictable tables, you may desire a more general solution for extracting the data. One way of doing that is using the table’s column heading cells to build a basic schema of the table’s data.

Assuming your table has a proper <thead>, this is how you could extract an array of its column headings to use as a schema for mapping the rest of the table’s data:

var columns = $('#myTable thead th').map(function() {
  // This assumes that your headings are suitable to be used as
  //  JavaScript object keys. If the headings contain characters
  //  that would be invalid, such as spaces or dashes, you should
  //  use a regex here to strip those characters out.
  return $(this).text();
});

With that column list handy, we can determine which column name any cell in the table should be filed under, given nothing more than its index in the row. Now we can automate the process that previously required those :nth-child selectors:

var tableObject = $('#myTable tbody tr').map(function() {
  var row = {};

  // Find all of the table cells on this row.
  $(this).find('td').each(function(i) {
    // Determine the cell's column name by comparing its index
    //  within the row with the columns list we built previously.
    var rowName = columns[i];

    // Add a new property to the row object, using this cell's
    //  column name as the key and the cell's text as the value.
    row[rowName] = $(this).text();
  });

  // Finally, return the row's object representation, to be included
  //  in the array that $.map() ultimately returns.
  return row;

  // Don't forget .get() to convert the jQuery set to a regular array.
}).get();

That’s it.

With all the comments, that looks like more work than it actually is. Eleven lines of code for the entire ordeal isn’t bad considering that it will automatically handle the majority of tables you throw at it.

Conclusion

I’m going to stop here, before this gets any longer. I hope that you found this helpful and/or interesting.

Even if you don’t often convert HTML markup to JavaScript objects, do keep .map() in mind when you’re working with collections of any type. When you need it, the notion of map is an extremely useful aspect of JavaScript’s functional nature, but often goes overlooked.

Related posts:

  1. Use jQuery and quickSearch to interactively search any data
  2. How to easily enhance your existing tables with simple CSS
  3. In JavaScript, curly brace placement matters: An example

jQuery Templates, composite rendering, and remote loading

In my last post about jQuery Templates, I showed you how to use template composition to build a template out of simple sub-templates. These composite templates are a great way to address the complexity that creeps into real-world UIs, as they inevitably grow and become more intricate. However, one feature missing from my last example was the ability to store those composite templates in external files and load them asynchronously for rendering.

I’ve described how to accomplish that with single templates in the past, using jQuery’s AJAX utilities and a particular usage of tmpl(). Unfortunately, remotely loading a group of composite templates from a single file is not quite as simple, and the technique I’ve described previously will not work.

Not to worry though, it’s still relatively easy.

In this post, I’ll show you how to move a group of composite templates to an external file, how to load and render them with jQuery Templates, and how to take advantage of an expected benefit to improve separation of concerns.

Caution: If you haven’t read my previous posts about remotely loading jQuery Templates definitions and using {{tmpl}} to achieve template composition, read them before continuing with this post. I’m not going to cover that material again here, and this may not make much sense without those prerequisites.

Moving the templates to an external file

Breaking the invoice template apart helped make it more approachable and maintainable, but I don’t like leaving the template embedded in the page’s markup. The larger a single file becomes, the more difficult it is to understand and work with – especially over time.

Disentangling chunks of the presentation tier and moving them to separate files is a great way to attack the problem of bloated pages and views. We’ve been doing that since the beginning of the web, from seemingly-ancient techniques like SSI includes, to file includes in scripting frameworks like ASP and PHP, to partial views in MVC frameworks. So, why stop now just because the templates are rendered in the browser?

Moving the previous example’s template definitions to a separate file is as simple as it sounds. Just take the invoice template and both its row templates, script wrappers included, and move them into a new file of your choosing. I’m going to move them to a file named _invoice.tmpl.htm:

<!-- Tip: It's safe to use HTML comments in the file -->

<!-- Invoice container template -->
<script id="invoiceTemplate" type="x-jquery-tmpl">
  <table class="invoice">
  {{each lineItems}}
    {{tmpl($value) get_invoiceRowTemplateName(type)}}
  {{/each}}
  </table>
</script>

<!-- Invoice row templates -->
<script id="serviceRowTemplate" type="x-jquery-tmpl">
  <tr class="service">
    <td colspan="2">${service}</td>
    <td colspan="2">${price}</td>
  </tr>
</script>

<script id="itemRowTemplate" type="x-jquery-tmpl">
  <tr class="item">
    <td>${item}</td>
    <td>${description}</td>
    <td>${price}</td>
    <td>${qty}</td>
  </tr>
</script>

Naming the template file

If you’re wondering why I chose that somewhat convoluted filename for the template, I’ll explain:

Ideally, I would like to call the template something like invoiceTemplate.tmpl, but most popular web servers refuse to serve files with non-standard extensions by default. You can circumvent that with a bit of manual configuration, but it’s not worth the unending hassle of extra configuration work on every server and/or site where you use this technique. So, .tmpl is out.

I really liked Nathan Smith’s suggestion for a naming compromise on my previous post about remote loading, which boils down to this:

  • Prefix the filename with an underscore. This denotes a partial view in many modern view engines and is a useful convention for indicating that the file is not a valid/complete HTML document.
  • Nathan suggested following _templateName with .tpl, to indicate that it’s a template. I’m going to tweak that slightly and use .tmpl, so it’s more clear that the template is intended for use with jQuery Templates.
  • The file should end in .htm, to be sure the file will be readily served up under almost any web server’s default configuration. Using .htm or another text/html extension also improves the odds that the template definitions will be served with appropriate compression and caching.

The first two are just suggestions, of course. You can use any arbitrary naming scheme you prefer and the approach described in this post will still work fine.

I do recommend sticking with an .htm or .html extension though. I (stubbornly) went through the configuration hassle of using .tpl when I was originally working with jTemplates’ remote loading feature a couple years ago, and eventually had to give up on it. The ongoing configuration hassles ultimately outweighed the benefit of a uniquely descriptive extension.

Loading and rendering the template

With the templates moved to an external file, we need a way to load and render them. As I mentioned earlier, the approach I’ve previously described for remote template loading isn’t viable in this scenario. Treating the external file as a simple template string only works if the file contains a single template definition and no extraneous markup. Our file fails on both counts.

Another disadvantage of my previous approach is caching. When you render a string-based template, jQuery Templates doesn’t cache the compiled template. If you end up rendering the same template more than once per pageview, the string-based approach is slower than it could be.

To solve both those issues, we can use jQuery to load the contents of the template file, inject all of it into the document, and then work with the templates as if they had been embedded in the page all along.

// The invoice object to render (see previous post). Note that the
//  property names match those the templates above expect.
var invoice = {
  lineItems: [
    { type: 'item',
      item: '99Designs', description: '99 Designs Logo',
      price: 450.00, qty: 1 },
    { type: 'service',
      service: 'Web development and testing',
      price: 25000.00 },
    { type: 'item',
      item: 'LinodeMonthly', description: 'Monthly site hosting',
      price: 40.00, qty: 12 }
  ]
};

// Asynchronously load the template definition file.
$.get('_invoice.tmpl.htm', function(templates) {
  // Inject all those templates at the end of the document.
  $('body').append(templates);

  // Select the newly injected invoiceTemplate and use it to
  //  render the invoice data.
  $('#invoiceTemplate').tmpl(invoice).appendTo('body');
});

Injecting the template definitions into the page’s markup allows you to render them with the same $('#templateId').tmpl(data) syntax that you’ve seen in most jQuery Templates examples.

Bringing the row template resolver along for the ride

One remaining annoyance is the now-orphaned row template name resolver, get_invoiceRowTemplateName() (covered in my composite template post). Keeping that function with the rest of the page’s JavaScript does work, but I’m not happy with it. To get the full benefit of encapsulating the template in an external file, we should keep any dependent JavaScript code with the template itself.

That’s especially true if the template is used from more than one page.

As it turns out, accomplishing that is much easier than it may seem at first. All we need to do is move the resolver function right into _invoice.tmpl.htm:

<!-- Invoice container template -->
<script id="invoiceTemplate" type="x-jquery-tmpl">
  <table class="invoice">
  {{each lineItems}}
    {{tmpl($value) get_invoiceRowTemplateName(type)}}
  {{/each}}
  </table>
</script>

<!-- Invoice row templates -->
<script id="serviceRowTemplate" type="x-jquery-tmpl">
  <tr class="service">
    <td colspan="2">${service}</td>
    <td colspan="2">${price}</td>
  </tr>
</script>

<script id="itemRowTemplate" type="x-jquery-tmpl">
  <tr class="item">
    <td>${item}</td>
    <td>${description}</td>
    <td>${price}</td>
    <td>${qty}</td>
  </tr>
</script>

<!-- This function is used by the invoice container to determine -->
<!--  which row template to render for a given line item -->
<script type="text/javascript">
  function get_invoiceRowTemplateName(type) {
    // Return a template selector that matches our
    //  convention of <type>RowTemplate.
    return '#' + type + 'RowTemplate';
  }
</script>

When you inject markup into the DOM, browsers will immediately parse and evaluate that markup just as they do during the initial page load – including any JavaScript code in the markup. Since browsers do this in the same single-thread that they execute JavaScript, it’s safe to assume that embedded functions are available immediately after the markup has been injected.

It seems almost too good to be true, like one of those handy techniques that works in every browser except some version of IE. However, embedding the supporting function inside the template file works in every browser I’ve tested, even including the full complement of JavaScript-enabled options on BrowserShots.org.

Conclusion

I’ve been using this exact technique in production for about a month now. I initially came up with several approaches, but wanted to try them in real-world code before I recommended one. I’m happy to report that this has been working great for me so far. I hope you’ll find it helpful too.

One caveat is that you should be careful about loading and injecting the external template more than once. Injecting multiple copies of the template definitions over and over actually doesn’t break the rendering process in browsers I’ve tested, but it will unnecessarily impact performance as the DOM grows larger with every new injection.

I’m going to cover that and another templating concurrency issue soon, but if you want a head start: test $.template('#templateName') to determine whether a particular template is already loaded and cached.
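
Until then, here’s a minimal sketch of that sort of guard. For simplicity, this version checks for the injected template in the DOM rather than probing the template cache, and the function name is my own placeholder:

function loadInvoiceTemplates(done) {
  // If the templates were already injected, skip the round trip.
  if ($('#invoiceTemplate').length) {
    done();
    return;
  }

  $.get('_invoice.tmpl.htm', function(templates) {
    $('body').append(templates);
    done();
  });
}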


Related posts:

  1. Using external templates with jQuery Templates
  2. Composition with jQuery Templates: Why and How
  3. A few thoughts on jQuery templating with jQuery.tmpl

Using external templates with jQuery Templates

Now that jQuery Templates is official and definitely will not include remote template loading, I wanted to publish a quick guide on implementing that yourself. As I mentioned previously, there’s a handy interaction between jQuery Templates’ API and jQuery’s AJAX methods that makes this easier than you might expect.

In this post, I’ll show you how to use a plain string as a template, how to asynchronously load an external template file as a string, and how to render it with jQuery Templates once it’s loaded.

Defining an external template

Defining a template suitable for remote loading requires almost no extra effort. Simply create a new file and fill it with the same sort of jQuery Templates fragment that you might normally embed in a text/html <script> element.

For example, given this person object:

var person = { name: 'Dave' };

You might define this template for that data in a file called PersonTemplate.htm:

<p>Hello, ${name}.</p>

Be sure not to wrap the template in a <script> element. Since we’ll be loading it remotely, that obscuring measure isn’t necessary. Also be sure not to include any of the HTML boilerplate that may come along with creating a new HTML file in your editor of choice, like <html> or <body>.

An added bonus that comes along with dropping the <script> container is that you get more reliable syntax highlighting for the template’s markup. Most editors don’t provide HTML syntax highlighting within <script> elements, even when their type is text/html, but work fine in the external files.

Using strings as templates

Though most jQuery Templates examples revolve around referencing template definitions hidden inside <script> elements, it’s also possible to provide the template as a plain string. Using the $.tmpl(template, data) syntax, a simple string version of the template may be provided as that first parameter.

For example, these are both perfectly valid ways to render a template, using the person object from the previous section:

// Specifying the template inline.
$.tmpl('<p>Hello, ${name}.</p>', person);

// Assign that same template to a JavaScript variable, and then
//  use that string value as a template parameter to $.tmpl().
var salutationTmpl = '<p>Hello, ${name}.</p>';

$.tmpl(salutationTmpl, person);

With that ability to use a string as a template, we just need a way to get the external template file’s content loaded into a string variable.

Loading the external template

With that in mind, probably the easiest way to load an external template is to use jQuery’s $.get() method. When you target $.get() at a static HTML file, the result passed to its callback is a string containing the file’s content. Exactly what we’re after.

For example, if an external template were stored in a file named PersonTemplate.htm, this is how you could use $.get() to load and render that external template with jQuery Templates:

// Asynchronously load our PersonTemplate's content.
$.get('PersonTemplate.htm', function(template) {
  // Use that stringified template with $.tmpl() and
  //  inject the rendered result into the body.
  $.tmpl(template, person).appendTo('body');
});

That’s it.

Because you have to get involved with more of the moving parts, this is slightly more complex than jTemplates’ processTemplateURL, but it’s very manageable.

Caveats

Compilation caching – When you use this method, it’s important to keep in mind that jQuery Templates will not automatically cache the compiled template. If you intend to reuse the same template many times during a given pageview, this approach is slower than the embedded <script> technique.

It’s possible to work around that drawback by using $.template() to compile the remotely loaded template and store it manually, but I’m omitting that here to keep things simple.

If you’re interested, I’ll cover that in a follow-up post.
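
In the meantime, here’s a rough sketch of the idea, assuming jQuery Templates’ $.template() API for compiling and naming templates:

// Compile the remotely loaded template once and cache it under
//  a name, so later renders can skip recompilation.
$.get('PersonTemplate.htm', function(template) {
  $.template('personTmpl', template);

  // Subsequent renders reference the compiled template by name.
  $.tmpl('personTmpl', person).appendTo('body');
});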

HTTP caching – In production, you should ensure that your server’s caching configuration is correct. If your server sends a proper expires header, requesting the same template multiple times during the lifespan of a page will only cause the browser to make one HTTP request. Even subsequent HTTP requests to check for a 304 status aren’t necessary if a future expires header is set on the response.

Template file naming – I recommend using a standard filename extension for your external template files, like .htm or .txt. I started out using more descriptive extensions like .tpl, but server configuration was a constant hassle. Important things like HTTP compression, expires headers, and even serving the files at all are common issues when using nonstandard extensions.



Using an iPhone with the Visual Studio development server

Testing an ASP.NET site on an iPhone

Developing iPhone-optimized portions of an ASP.NET website presents a challenge. More specifically, it’s testing your creations that can be difficult.

Apple’s iPhone emulator only runs on Macs and the Windows-based alternatives don’t emulate mobile Safari well. That leaves us using an actual device as the only high-fidelity option for testing. That’s not all bad; especially when it comes to a touch-driven interface, testing with the real thing is preferable.

Unfortunately, the ASP.NET Development Server bundled with Visual Studio is severely restricted when it comes to testing externally. In fact, it could hardly be more restrictive – it refuses all external connections, even if those connections originate from the same local subnet.

In this post, I’m going to show you one way I’ve found to circumvent that restriction, how to configure your iPhone to take advantage of that, and how to connect to the development server once those steps are completed.

Note: This post specifically describes configuring an iPhone, but the same approach will work for any mobile device that supports using an HTTP proxy.

Fooling the ASP.NET Development Server

The fundamental problem is that Visual Studio’s ASP.NET Development Server actively refuses external connections. That’s a logical precaution if you’re in the business of selling web server operating systems, but it adds unnecessary friction to the legitimate endeavor of testing with mobile devices.

The solution that I stumbled onto uses a tool that you may already have installed: Fiddler. If you aren’t familiar with Fiddler, this recording of Eric Lawrence’s session at MIX10 is a great way to learn a lot about Fiddler in relatively little time.

The feature that we’re specifically interested in is its HTTP proxy server. Unlike the ASP.NET Development Server, Fiddler does not restrict connections from external devices. Even better, routing an external device’s connections through Fiddler is misdirection enough to fool the development server into accepting them.

Checking Fiddler’s proxy port

With Fiddler installed, the first step is to determine which port it’s running the proxy server on. On fresh installs, the default setting is port 8888.

If you’ve had Fiddler installed a while, it doesn’t hurt to double check the setting. You can do that in Fiddler by navigating to Tools > Fiddler Options, and selecting the Connections tab:

Finding the port that Fiddler's proxy server listens on

While you have that dialog open, also verify that the three checkboxes circled above are checked.

Finding your IP address

The next step is to determine your machine’s IP address on the local network. A quick way to do that is running ipconfig at the command prompt.

To open a command prompt, press Win + R, type cmd in the field, and hit enter.

Running cmd

At the command prompt that opens, type ipconfig and hit enter.

Finding the local IP address with ipconfig

What you’re looking for here is the IPv4 address for your machine’s primary network adapter. “Local Area Connection” is mine, so I need to use 192.168.1.119 to connect to my machine. A wireless connection is fine too, as long as it’s connected to the same access point that the iPhone is.

Find yours and make note of it for the next step.

Note: This must be an IP address that your iPhone can route to while connected via Wi-Fi. In most business and almost all residential networks, you won’t need to give this much thought. However, if you’re working within a more complex corporate network and can’t get your iPhone to connect to Fiddler’s proxy server, you may need help from a system administrator.

Configuring an iPhone to route through Fiddler

With your development machine’s IP address and Fiddler’s port number in hand, you’re ready to configure your iPhone to channel its network traffic through Fiddler’s proxy server.

To do that, open the settings app and tap the Wi-Fi option (below, left).

Navigating through the iPhone's settings

In the Wi-Fi Networks panel (above, right), you’ll see the wireless networks that your iPhone has detected in range. Tap the arrow at the right side of the Wi-Fi connection that you intend to use for testing.

Setting the iPhone's proxy settings

At the very bottom of the panel that opens (left), find the HTTP Proxy setting and tap Manual (1) to enable the feature. In the fields that appear, enter your computer’s local IP address in the Server field (2), and the port that Fiddler is listening on in the Port field (3).

That’s it! Your iPhone is configured to route its traffic through an instance of Fiddler running on your development machine.

Starting the development server

Now that you have a conduit from your iPhone to the development server, it’s time to get the development server running by starting your site in Visual Studio. Anything that starts an instance of the development server will do (e.g. Start Without Debugging or View in Browser).

Make note of the URL displayed in your browser when Visual Studio displays your website. We’ll modify that slightly in the next step and use it to access the development server from Mobile Safari.

If the development server is already running, you can also determine its address by right-clicking its icon in the system tray and choosing “Show Details”. That will present you with a window that looks like this:

Finding the port and virtual root of your development server app

The “Root URL” address there is what you’ll need in the final step.

Accessing the development server from your device

Finally, we’re ready to start testing against the development server from the browser on a mobile device. The one minor issue remaining is that the exact URL advertised by the development server won’t work in this setup.

To make Fiddler happy, you need to append a trailing period to the hostname portion of the address. For instance, this “Root URL” advertised in the example above will not work without modification:

Wrong!

localhost:24833/WebSite1

To make it work, we simply need to append the trailing period to localhost:

localhost.:24833/WebSite1

That does look odd, but it works.

Fiddler also recognizes ipv4.fiddler as an alias for the localhost loopback, which is a little bit more intuitive. So, you could also access the same example with this address if you prefer:

ipv4.fiddler:24833/WebSite1

That’s it. You’re armed and ready to test with any external device on your local network now, so long as it supports routing its traffic through an HTTP proxy.

Conclusion

At first, this may seem like many steps and a lot of work. Don’t worry. Once you go through the motions a few times, you’ll find that it’s a breeze.

It’s especially smooth sailing in future repetitions, since your machine’s local IP probably won’t change often, and Fiddler’s proxy port won’t change at all.

Of course, Fiddler isn’t the only utility that will work as an intermediary like this, but using Fiddler brings the great side-effect of providing HTTP traffic analysis while you’re testing. That added utility is welcome when you’re testing on a mobile device where the on-device development tools are basically nonexistent.


ASMX ScriptService mistake – Invalid JSON primitive

One group of searches that consistently brings traffic here is variations on the error: Invalid JSON primitive. Unfortunately, the post that Google sends that traffic to doesn’t address the issue until somewhere within its 150+ comments.

Today, the topic gets its own post.

If you’ve worked with ASMX ScriptServices or Page Methods without ASP.NET AJAX’s client-side proxy (e.g. using jQuery or pure XMLHttpRequest code), you may have seen this cryptic error yourself. Or, perhaps you’ve just arrived here after seeing it for the first time.

Either way, you may be surprised to learn that the most common reason for this error is that you’ve lied to ASP.NET during your AJAX request.

It all begins with the Content-Type

HTTP’s Content-Type header is a fundamental aspect of communication between browsers and servers, yet often remains hidden from us in day-to-day development. The Content-Type header allows an HTTP connection to describe the format of its contents, using Internet media types (also known as MIME types). A few common ones that you’ve probably seen before are text/html, image/png, and the more topical application/json.

Without the flexible negotiation process content types provide, your users’ browsers and your version of IIS would have to both be “ASMX Compatible” and “JSON Compatible” in order for ScriptServices to function. What a nightmare that would be! The IE6 difficulties we face today would pale in comparison.

Further, Content-Type negotiation is part of what allows a single URL, such as WebService.asmx, to represent data in more than one format (e.g. XML and JSON in ASMX’s case).

The benefits of Content-Type negotiation are well worth a bit of occasional hassle.

Okay, but why does that matter?

When your browser sends a POST request, the W3C’s recommendation is that it should default to using a Content-Type of application/x-www-form-urlencoded. The HTML 4.01 spec describes that serialization scheme:

This is the default content type. Forms submitted with this content type must be encoded as follows:

  1. [Omitted for brevity; not relevant to this post.]
  2. The control names/values are listed in the order they appear in the document. The name is separated from the value by ‘=’ and name/value pairs are separated from each other by ‘&’.

For an example of what that means, consider this simple form:

<form method="post">
  <label>First Name</label>
  <input id="FirstName" value="Dave" name="FirstName" />

  <label>Last Name</label>
  <input id="LastName" value="Ward" name="LastName" />
</form>

When the preceding form is submitted with URL encoded serialization, the request’s POST data will look like this:

Firebug screenshot showing the URLEncoded POST data

That standardized serialization format allows a server-side backend like ASP.NET to decipher a form submission’s contents and give you access to each key/value pair. Regardless of what sort of browser submits a form to the server, the Content-Type facilitates a predictable conversion from POST data to server-side collection.

In other words, the Content-Type corresponds to a serialization scheme.
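
To make that mapping concrete, here’s a quick sketch of the same object run through both schemes, using jQuery’s $.param() and JSON.stringify() purely as illustrations:

var person = { FirstName: 'Dave', LastName: 'Ward' };

// application/x-www-form-urlencoded implies URL encoded pairs:
$.param(person);        // "FirstName=Dave&LastName=Ward"

// application/json implies a JSON string:
JSON.stringify(person); // '{"FirstName":"Dave","LastName":"Ward"}'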

What does that have to do with JSON Primitives?

Understanding Content-Type negotiation and how it relates to serialization is important due to its role in coaxing JSON out of ASMX ScriptServices. Specifically, the fact that you must set a Content-Type of application/json on the request means you’re instructing ASP.NET to interpret your input parameters as JSON serialized data.

However, the W3C’s mandate of URL encoding by default means that most AJAX libraries default to that serialization scheme. Similarly, AJAX tutorials targeting endpoints other than ASMX ScriptServices (including even ASP.NET MVC examples) will describe sending URL encoded data to the server.

In other words, when you’re working with a client-side object like this:

var Person = { FirstName: 'Dave',
               LastName:  'Ward' };

The default serialization scheme makes it easy to inadvertently transmit that data to the server as a URL encoded string:

FirstName=Dave&LastName=Ward

Again, remember that a Content-Type of application/json is a requirement when working with ASMX ScriptServices. By setting that Content-Type on the request, you’ve committed to sending JSON serialized parameters, and a URL encoded string is far from valid JSON.

In fact, it’s invalid JSON (primitive?), hence the cryptic error message.

Instead of the URL encoded string above, you must be sure to send a JSON string:

{'FirstName':'Dave','LastName':'Ward'}

Whether you’re using XMLHttpRequest directly or a JavaScript library that abstracts the details, getting your request’s serialization wrong is the root of the invalid JSON primitive error. However, a more specific issue tends to be the leading cause of this happening.

When good JavaScript libraries go bad

The most common source of this error stems from a subtlety of using jQuery’s $.ajax() method to call ASMX ScriptServices. Cobbling together snippets of code from the documentation, platform agnostic tutorials, and even posts here on my site, it’s easy to end up with something like this:

// WRONG!
$.ajax({
  type: 'POST',
  contentType: 'application/json',
  dataType: 'json',
  url: 'WebService.asmx/Hello',
  data: { FirstName: "Dave", LastName: "Ward" }
});

Notice the JavaScript object literal being supplied to $.ajax()’s data parameter. That appears vaguely correct, but will result in the invalid JSON primitive error.

Why? jQuery serializes $.ajax()’s data parameter using the URL encoded scheme, regardless of what Content-Type is specified. Even though the contentType parameter clearly specifies JSON serialization, this URL encoded string is what jQuery will send to the server:

FirstName=Dave&LastName=Ward

That obviously isn’t valid JSON!

The solution is as simple as two single-quotes:

// RIGHT
$.ajax({
  type: 'POST',
  contentType: 'application/json',
  dataType: 'json',
  url: 'WebService.asmx/Hello',
  data: '{ FirstName: "Dave", LastName: "Ward" }'
});

Did you spot the difference?

Instead of a JavaScript object literal, the data parameter is a JSON string now. The difference is subtle, but helpful to understand. Since it’s a string, jQuery won’t attempt to perform any further transformation, and the JSON string will be unimpeded as it is passed to the ASMX ScriptService.

It doesn’t have to be this way

The problem is trivial once you’re aware of the underlying issue, but there’s not a great reason I can see why things need to be this way in the first place. Either half of this equation could easily provide a remedy.

jQuery – I believe the most correct solution would be for $.ajax() to attempt to honor the serialization scheme indicated by its contentType parameter. In the case of application/json, fixing this could be as easy as testing for JSON.stringify and using it if available, to avoid adding any complexity/size to jQuery core.

That would leave it our responsibility to reference a copy of json2.js in older browsers, but that convention wouldn’t be much of a burden. We generally do that anyway when the client-side objects get complex.
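
Until something like that lands, here’s a minimal sketch of handling the serialization yourself, assuming JSON.stringify is available (natively or via json2.js):

var person = { FirstName: 'Dave', LastName: 'Ward' };

$.ajax({
  type: 'POST',
  contentType: 'application/json',
  dataType: 'json',
  url: 'WebService.asmx/Hello',
  // Serializing explicitly yields a string, which jQuery
  //  passes through to the server untouched.
  data: JSON.stringify(person)
});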

Microsoft – It’s absolutely correct that the framework throws an error when you lie to it about what you’re sending. However, a bit of leniency could potentially save thousands of hours spent troubleshooting this problem (if my search traffic is any indication of its prevalence).

Is there any reason that the ScriptHandlerFactory can’t intelligently differentiate between JSON and URL encoded inputs? If the first non-whitespace character of the request isn’t an opening curly brace, why not attempt to deserialize it as URL encoded before throwing an invalid JSON primitive error?


A few thoughts on jQuery templating with jQuery.tmpl

I spent some quality time with Dave Reed’s latest revision of John Resig’s jQuery.tmpl plugin recently, migrating a small project from jTemplates. Since both the jQuery team and Microsoft team have requested feedback on jQuery.tmpl, I decided to write about my experience using it (as I am wont to do with these templating proposals).

Overall, jQuery.tmpl is a great step in the right direction. It’s small, it’s simple, and it’s fast. Overloading append() to allow the append(Template, Data) syntax is phenomenal. That approach feels more like idiomatic jQuery than anything else I’ve used, including jTemplates.

However, if this template rendering engine is going to succeed broadly, I feel there’s one important feature still missing. Additionally, there are a couple ancillary features that are present in the current proposal, but should be protected.

Composition

One area where jTemplates still comes out on top is template composition – also known as nested templates. Specifically, this refers to the ability for templates to contain references to other templates, and the ability to render that entire group as a whole.

The need for template composition may be hard to see in simple examples, but most non-trivial scenarios benefit from it. Dave mentioned the example of having a person template that embeds a separate template for displaying one or more phone entries attached to each person record.

That’s a good example, but take it one more step to understand where composition really shines. Consider the possibility that each of those phone records has a type (e.g. Mobile, Home, or Work) and that each type must be presented with different markup.

Template composition provides a clean solution to this problem. By creating separate templates for each type and then rendering the correct amalgamation of those templates, the template code remains simple (but is powerful).

A composition workaround in jQuery.tmpl

Currently, something resembling nested item templates is technically possible via the each keyword in jQuery.tmpl, but it’s not pretty. This example from the jQuery.tmpl demo illustrates that approach:

// Data
cities: [ "Boston, MA",
          "San Francisco, CA" ]

// Template
Cities: {{each(i,city) cities}}${city}{{/each}}

Even for this most simple case, the syntax is rough. Not the sort of readable simplicity that we’ve come to expect from jQuery.

More importantly, extending that to conditionally render different fragments of markup would be much more difficult.

A cleaner workaround (but it’s a trap!)

The jQuery.tmpl demo also contains this seemingly elegant alternative:

// Data
cityJoin: function() {
  return this.cities.join(", ");
},
cities: [
  "Boston, MA",
  "San Francisco, CA"
]

By embedding a function in the data, referencing that object key in a template returns the result of the cityJoin function. Thus, jQuery.tmpl renders the function’s result, not the function declaration’s actual text.

That technique dramatically simplifies the template itself:

// Template
Cities: ${cityJoin}

While that approach does succeed in avoiding the messy template code that each requires, it tightly couples concerns that should remain separate.

When this is put into practical use, the data object will usually be requested from the server-side. Would my business logic or data repository tier need to inject the joining function? I don’t think I could bring myself to do that.

Further, it doesn’t really address the real-world scenarios I’ve encountered. A callback function to format data won’t feasibly scale to rendering heterogeneous chunks of markup. Effectively, it’s the same as the each solution, with the pain point shuffled around a bit.

How I think it should work

As long as there’s some way to name and render sub-templates, I don’t care how exactly it works.

I wasn’t crazy about jTemplates’ {#include} syntax at first, but it’s okay once you get used to it. Most any syntax shouldn’t be difficult to learn and acclimate to.

Since jQuery.tmpl already provides for caching the templates in jQuery.templates, all that’s really necessary is a method for rendering those named templates within other templates.

A concrete example

This example is simplified a bit, but it’s functionally similar to client work I come across regularly. To be clear, this is something I’m currently using today, not something I’m just theorizing as a good idea.

I want to be able to take data that isn’t necessarily homogeneously structured:

{ InvoiceItems: [
  { ItemType: 'Product',
    PartNumber: '99-Designs-Logo',
    Description: '99 Designs Logo design',
    TotalCost: 450 },
  { ItemType: 'Service',
    DescriptionOfWork: "Website development",
    TotalCost: 5000 },
  { ItemType: 'Service',
    DescriptionOfWork: "Deployment and testing",
    TotalCost: 300 }
] }

And use a set of templates like this to render that data (did I get the each syntax right in #Invoice?):

{#ServiceItem}
<tr>
  <td colspan="2">{$DescriptionOfWork}</td>
  <td>{$TotalCost}</td>
</tr>

{#ProductItem}
<tr>
  <td>{$PartNumber}</td>
  <td>{$Description}</td>
  <td>{$TotalCost}</td>
</tr>

{#Invoice}
<table>
  <!-- Fancy thead here -->
  {{each(i, InvoiceItem) InvoiceItems}}
    {{if $InvoiceItem.ItemType === 'Service'}}
      {{render('ServiceItem', InvoiceItem)}}
    {{else}}
      {{render('ProductItem', InvoiceItem)}}
    {{/if}}
  {{/each}}
</table>

Scenarios just like that one are common in my current work with jTemplates. Imagine trying to implement that with each or an embedded callback function.

Now imagine you wanted to conditionally render those same row templates as sub-items in more than one master template (e.g. maybe there’s also a credit memo template similar to the invoice, but not identical). The burden of maintaining an application like that quickly compounds without template composition.

Again, the exact syntax doesn’t matter. Any implementation that provides the functionality would be a win.

External templates

I’ve never understood the desire to cruft up a page with inline templates. Whether they’re hidden through CSS or embedded within a text/html type script element, cluttering my markup with templates feels sloppy.

jTemplates introduced me to the idea of storing templates externally, and then asynchronously loading them only as necessary (example). I adopted that approach at nearly the same time I began using jTemplates and haven’t looked back since.

My affinity for remote template loading caused me to lobby for remote templates both in the DataView and more recently in jQuery.tmpl. However, what I hadn’t considered is that it’s already easy to load external templates with jQuery’s built-in AJAX functionality. With the template loaded into a string variable, the append(TemplateString, Data) syntax connects the dots perfectly.

All of that is simply to say: Please do leave the append(TemplateString, Data) syntax intact. As long as we can provide templates via string variable, built in remote template loading isn’t necessary.
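
For reference, a minimal sketch of that pattern, using the proposed append(template, data) overload (the template file name and target element are hypothetical):

// Load an external template into a string, then render it
//  directly into the page.
$.get('InvoiceTemplate.htm', function(template) {
  $('#invoices').append(template, invoiceData);
});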

On the need for in-template logic

I’ve heard some dissension on the issue of conditional logic within templates.

Philosophically, I agree; mingling logic into your presentation/view is dangerous. It’s always a slippery slope, at best. However, the pragmatist in me ultimately cannot agree with the hard-line approach of banishing it completely.

A screenshot of conditionally rendering invoice display names depending on their invoice number

A project I’m working on is a good example. My client-side code must display a collection of invoices sourced from a legacy backend, each with an upgrade number. Upgrade 0 must be displayed as “Original Invoice”, but the rest should be displayed as “Upgrade n”.

A conditional in the template makes quick work of the problem (this is jTemplates syntax, not jQuery.tmpl):

{#if $T.Upgrade == 0}
  Original Invoice
{#else}
  Upgrade #{$T.Upgrade}
{#/if}

Since it’s purely presentational logic, it belongs in the template as much as it does anywhere. Please do keep that functionality available, even if it does have potential for abuse.

In addition to the currently available conditional keywords if and else, elseif and switch would both be nice additions.

Conclusion

Overall, I’m excited about the potential of seeing jQuery.tmpl integrated into jQuery core, or even made available as an “official” plugin. As much as I like jTemplates, support and documentation for it is spotty at best.

Ultimately, we will all benefit from standardizing on an official templating solution rolled into the jQuery core, rather than each of us using our obscure favorite.

I’m curious what you think. Am I the only one using template composition? Anyone want to make a convincing case against conditional logic in templates?


5 Steps Toward jQuery Mastery

I am plagiarizing myself!

I originally wrote this article for my friend Moses (of Egypt) to be published in the .Network magazine’s inaugural issue, which coincided with this year’s Cairo Code Camp. Since the article turned out well and there was no corresponding online version, we agreed it would be a good idea to republish it online here too.

Most of us get our first taste of jQuery by implementing a simple animation effect or using a plugin for a specific purpose. This is natural because, like JavaScript itself, jQuery lends itself to beginning with the basics and building from there.

As you branch out from the trivial and begin using jQuery for more complex solutions, it’s important that you stay vigilant for new ways to approach those more involved problems. What works well enough for a dozen lines of code may not work for hundreds, and the unforgiving cross-platform environment that comes along with developing for web browsers only magnifies any trouble you run into.

With that in mind, I want to share a few tips with you that I found valuable as my work with jQuery became more complex.

Use Firebug to experiment interactively

If I could suggest only one thing to you, it would be to use Firebug’s console to prototype your ideas in real time. Nothing matches the command line’s immediate feedback when you’re learning. Rather than go through the hassle of editing a JavaScript file, saving it, and then reloading it in the browser, testing at the console allows you to eliminate those intermediate steps and focus on the task at hand.

In particular, the console is invaluable when testing complex combinations of selectors and traversal methods. Simply execute a jQuery statement at the console, with your page loaded in the browser, and you will instantly see the array of matching elements that are returned. If that result set isn’t correct, press the up-arrow key to retrieve the last command entered, refine the statement, and test it again.

That tight feedback loop is phenomenal for quickly learning the jQuery API without leaving the familiar backdrop of your existing markup.
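
For example, here’s the sort of throwaway statement you might iterate on at the console (the selector is purely illustrative):

// The matched set prints immediately after each statement.
$('#comments a[href^=http]');

// Refine and re-run until the count and elements look right.
$('#comments a[href^=http]').length;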

Additionally, one lesser-known feature of Firebug’s console is how it works in conjunction with Firebug’s debugger. When execution is paused, switching to the console tab allows you to interrogate and manipulate the state of the DOM as it was at the time execution was halted. As you step through JavaScript code in the debugger, the execution context of the console remains in sync with the debugger’s.

Cache selector results

jQuery’s concise syntax makes it easy to forget just how much work the Sizzle selector engine is doing on your behalf. As powerful as the terse selectors are, it’s important not to needlessly duplicate the work that they abstract – especially when using selectors without browser-native backing methods (e.g. a[href^=http], tr:odd, p:contains(Encosia), etc).

To avoid wasteful re-querying, always cache the results of a jQuery selector in a variable if that result set will be used more than once. Once stored in a variable, the result of a selector may be used in exactly the same manner as the original selector itself. For example:

// Wasteful duplication of a slow selector
$(':input[id$=name]').val('Type your name here');
$(':input[id$=name]').select();

// Nearly twice as fast
var $name = $(':input[id$=name]');

$name.val('Type your name here');
$name.select();

Prefixing the cache variable with a dollar sign is not functionally significant, but helps to clearly indicate that the variable contains a jQuery wrapped set. The topic of Hungarian notation is a contentious one, but I’ve found the dollar sign prefix beneficial as complexity increases.

Don’t use jQuery unless there’s a good reason to

Perhaps one of the most elusive keys to jQuery mastery is knowing when not to use it. Once you’re proficient with jQuery, it seems natural to use it everywhere. However, that tendency may easily mislead you into writing less concise code that runs slower than necessary.

For instance, this egregious example is one that you will see often:

// Typical jQuery over-use
$('button').click(function() {
  alert('Button clicked: ' + $(this).attr('id'));
});

In the context of that callback function, this is a reference to the DOM element that raised the click event – often referred to as the execution context. Rather than using the DOM element to create a jQuery object, with all of the overhead that goes along with that, why not use the DOM element itself?

// Not only faster, but more concise.
$('button').click(function() {
  alert('Button clicked: ' + this.id);
});

Another similar misuse is using jQuery as a document.getElementById shortcut. Though jQuery’s ID selector does leverage document.getElementById, it only does so after parsing the selector and then creates a jQuery object to wrap the element. Not only is there more overhead in that process, but starting with a jQuery object by default guides you toward this mistake of overusing jQuery when it isn’t necessary.
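
As a sketch of the difference (the element id is illustrative):

// Direct: returns the raw DOM element, with no wrapper created.
var message = document.getElementById('message');

// Indirect: jQuery parses the selector, calls
//  document.getElementById() internally, and wraps the result.
var $message = $('#message');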

Learn advanced selectors, filters, and traversals

A great way to improve your jQuery code is to learn its selectors, filters, and traversal methods in depth. If you find yourself iterating through a selection and manually filtering it for a desired set, there is usually a better way. Double and triple check that there isn’t a combination of selectors, filters, and/or traversals available to accomplish the same end result.
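
For example, here’s a sketch of the sort of manual filtering that a built-in filter can replace (the selectors are illustrative):

// Manually filtering a selection: verbose and easy to get wrong.
var withLinks = [];
$('li').each(function() {
  if ($(this).find('a').length) {
    withLinks.push(this);
  }
});

// The same elements, matched by a filter jQuery already provides:
var $withLinks = $('li:has(a)');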

Not only does using the library’s idioms make your code more concise and expressive, but you will automatically benefit from ongoing performance improvements to jQuery’s Sizzle selector engine that come with each new release. It’s hard to beat having an entire team working to improve your code for you, but that’s exactly what happens when you use jQuery syntax that’s as idiomatic as possible.

A less-frequently documented aspect of learning advanced selectors is that you should be conscious of how to work in concert with jQuery to optimize them. Because the Sizzle engine evaluates selectors from right-to-left, being as specific as possible in the rightmost portion of selectors will improve performance.

Selectors that descend from an ID are one notable exception to the right-to-left rule. Sizzle is specially optimized for that case:

// Breaks the right-to-left specificity guideline:
$('#RegistrationForm input.required').append('*');

// jQuery automatically optimizes the previous selector
//  as if you had written it using the ID as a context:
$('input.required', '#RegistrationForm').append('*');

Use CDN hosting when available

When you’re developing and testing locally, it’s easy to underestimate the impact that WAN latency will have on your site. Today’s website is often accessed by such a geographically diverse set of users that it is impossible to serve all of them optimally from a single datacenter. Serving static resources, such as jQuery, from content delivery networks is one effective way to mitigate that problem.

Not only do these CDNs provide a faster, more consistent experience to geographically dispersed users, but they also open up the potential for users to visit your site with a primed cache. Since everyone using these public CDNs references the same URLs, a single browser-cached copy of a given asset may be shared between any number of sites visited by a given user.

Even better, because these CDNs serve their content with a far future Expires header, browsers immediately use a locally-cached version of that file if it’s available. They don’t even have to check with the server for a 304 “Not Modified” response, eliminating the extra HTTP request altogether.

Assets freely available on public CDNs include jQuery itself, jQuery UI, all 14 jQuery UI ThemeRoller themes, and the jQuery validation plugin.

Conclusion

I hope that you’ll find these ideas useful on your road to jQuery mastery. Though jQuery’s clear, intuitive syntax may appear simplistic on the surface, the library is immensely powerful. The key to unlocking jQuery’s full potential is to never stop experimenting and learning new aspects of it.

In the spirit of that continued road toward mastery, consider taking the next step by watching my TekPub series: Mastering jQuery.


ASMX ScriptService mistakes: Installation and configuration

Continuing my series of posts about ASMX services and JSON, in this post I’m going to cover two common mistakes that plague the process of getting a project’s first ASMX ScriptService working: Installing System.Web.Extensions into the GAC and configuring your web.config.

System.Web.Extensions (aka ASP.NET AJAX)

The ability for ASMX services to return raw JSON is made possible by two key features originally added by the ASP.NET AJAX Extensions v1.0:

  • JavaScriptSerializer – The JavaScriptSerializer class is the actual workhorse that translates back and forth between JSON strings and .NET CLR objects. Though less powerful than WCF’s DataContractJsonSerializer and third-party libraries like Json.NET, JavaScriptSerializer is likely all you’ll ever need for simple AJAX callbacks.
  • ScriptHandlerFactory – There are several more classes behind the scenes*, but the ScriptHandlerFactory is the tip of the iceberg that you’ll need to remember during configuration. Redirecting ASMX requests through this HttpHandler is what coordinates the pairing of ScriptService with JavaScriptSerializer to provide automatic JSON handling.

Though both of these classes appear in the System.Web.Script namespace, they actually reside in ASP.NET AJAX’s System.Web.Extensions assembly. That has different implications depending on which version of ASP.NET your site targets:

  • 1.x – No support for ScriptServices. A custom HttpHandler coupled with a third party library like Json.NET is your best bet (if anyone has a good tutorial on doing this under 1.x, let me know so that I can link to it).
  • 2.0 – ScriptServices are available in ASP.NET 2.0 with the installation of the ASP.NET AJAX Extensions v1.0.
    • That means that the ASP.NET AJAX installer needs to be run on the server that hosts your site, not just on your local development machine.
    • For some of a ScriptService’s features to work in medium trust (i.e. shared hosting), the System.Web.Extensions assembly needs to be in your server’s global assembly cache (GAC). Don’t waste your time trying to make it work in your site’s /bin directory; insist that the extensions be properly installed on the server.
  • 3.5+ – As of .NET 3.5, System.Web.Extensions ships with the framework. No additional assemblies need be installed.

* If you’re interested in the internals, I highly recommend downloading the ASP.NET AJAX Extensions v1.0 source and taking a look at ScriptHandlerFactory, RestHandlerFactory, and RestHandler. Though the classes have changed slightly since v1.0, they are still very similar.

Rerouting the ASMX handler via web.config

With the System.Web.Extensions assembly installed in the GAC, the remaining configuration step is an element in your site’s web.config. To take advantage of the ScriptService functionality, ASP.NET must be instructed to reroute ASMX requests through the ScriptHandlerFactory instead of ASP.NET’s standard ASMX handler.

This step is often unnecessary. The project templates in ASP.NET 3.5+ include all the necessary configuration elements, and ASP.NET 2.0 sites created with the “AJAX Enabled” templates are also pre-configured correctly.

However, if you find yourself unable to coax JSON out of an ASMX ScriptService, verifying your web.config is one of the best first steps in troubleshooting the issue. Whether due to a web.config generated by an older project template, accidental modification, or other issues, missing the httpHandlers web.config setting is a very common pitfall.

What should appear varies slightly depending on which version of ASP.NET your project targets. Regardless of your framework version, the config elements should be added to the <httpHandlers> section and are the only elements necessary. The variety of other config items required for the UpdatePanel and ScriptManager aren’t crucial to the ScriptService functionality.

ASP.NET 2.0 (with the ASP.NET AJAX Extensions installed)

<configuration>
  <system.web>
    <httpHandlers>
      <remove path="*.asmx" verb="*" />
      <add path="*.asmx" verb="*" validate="false"
           type="System.Web.Script.Services.ScriptHandlerFactory,
                 System.Web.Extensions, Version=1.0.61025.0, Culture=neutral,
                 PublicKeyToken=31bf3856ad364e35" />
    </httpHandlers>
  </system.web>
</configuration>

ASP.NET 3.5

<configuration>
  <system.web>
    <httpHandlers>
      <remove path="*.asmx" verb="*" />
      <add path="*.asmx" verb="*" validate="false"
           type="System.Web.Script.Services.ScriptHandlerFactory,
                 System.Web.Extensions, Version=3.5.0.0, Culture=neutral,
                 PublicKeyToken=31bf3856ad364e35" />
    </httpHandlers>
  </system.web>
</configuration>

ASP.NET 4

Thankfully, ASP.NET 4 has taken steps to reverse the trend of ever-enlarging baseline web.config files. By moving common configuration items such as the ScriptService’s HttpHandler to the default machine.config, each individual site need not include those configuration elements in their specific web.config files.

Unless you go out of your way to manually remove their HttpHandler, ASMX ScriptServices will work automatically in any ASP.NET 4 site.
