.Net Development · 19 Feb 2010 11:37 am

The ASP.NET MVC framework is great in a lot of ways, but one of the things that really bugs me about it is that error handling and security are not applied to controllers by default. This means you have to remember to apply them every time you create a new controller.

There are reasons for this, but insecure-by-default is naturally asking for problems. So the other day I got tired of catching these mistakes in my current application and wrote the following simple unit tests to ensure that all my controllers are secure and handle errors.
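
For reference, this is the kind of boilerplate every controller is supposed to carry (the controller name here is illustrative; the role matches the check in the tests below):

[HandleError]
[Authorize(Roles = "user")]
public class InvoiceController : Controller   //name is illustrative
{
    public ActionResult Index()
    {
        return View();
    }
}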

The tests below use NUnit, but they could easily be ported to any other testing framework.

Likewise, you will need to tweak the roles check to suit your needs.

Happy Coding!

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;
using NUnit.Framework;
using MvcApplication1;
 
[TestFixture]
public class ControllerConfigurationTests : NUnitTestFixtureBase
{
    #region Test Data
 
    //Finds every concrete Controller type in the web application's assembly.
    public IEnumerable<Type> AllControllers()
    {
        return typeof(MvcApplication).Assembly.GetTypes()
            .Where(t =>
                     !t.IsAbstract
                     && typeof(Controller).IsAssignableFrom(t));
    }
 
    //Controllers that are deliberately reachable without authorization
    //(e.g. logon and account creation) and so exempt from the role check.
    public IEnumerable<Type> AllNonExemptControllers()
    {
        return AllControllers().Except(new[]
                                           {
                                               typeof(HomeController),
                                               typeof(NewAccountController),
                                               typeof(LogonController),
                                           });
    }
 
    #endregion
 
    #region Tests
 
    [Test, TestCaseSource("AllControllers")]
    public void All_controllers_must_declare_the_HandleError_attribute(
        Type controller)
    {
        var attributes = controller
            .GetCustomAttributes(typeof (HandleErrorAttribute), true);
 
        Assert.That(attributes.Length,  Is.EqualTo(1), 
            controller.Name + " does not declare the HandleError attribute.");
    }
 
    [Test, TestCaseSource("AllNonExemptControllers")]
    public void All_controllers_not_specifically_exempted_must_authorize_only_the_user_role(
        Type controller)
    {
        var attributes = controller
            .GetCustomAttributes(typeof(AuthorizeAttribute), true);
 
        Assert.That(attributes.Length, Is.EqualTo(1), 
            controller.Name + " does not declare one and only one Authorize attribute.");
 
        Assert.That(((AuthorizeAttribute)attributes[0]).Roles, Is.EqualTo("user"), 
            controller.Name + " does not authorize the appropriate roles.");
    }
 
    #endregion
}
.Net Development · 02 Feb 2010 10:43 pm

The Problem

A while back, while working in the MVC framework, I was longing for the ClientScriptManager.RegisterClientScriptInclude() and RegisterClientScriptBlock() functionality of traditional ASP.NET Web Forms.  We were doing a lot of work in jQuery, both using standard plug-ins like AutoComplete and writing our own stuff, so there were a lot of scripts to manage.

Moreover, we were using templates as described by Brad Wilson.  Some of these templates depend on scripts, and often the templates appear several times on one page.  This, of course, leads to the exact situation that RegisterClientScriptInclude() and RegisterClientScriptBlock() are intended to solve:  how to keep page components that render their own script tags, and that appear several times on one page, from rendering duplicate script tags and, worse, duplicate script blocks and function definitions.

The Challenge

So what is a developer to do when the framework doesn’t support you?  Well, not to be defeated, I did some digging into the MVC and ASP.NET frameworks and came up with a solution that essentially does the same thing as the ClientScriptManager.

What I needed was:

  1. A means during the rendering process to collect the scripts that need rendering.
  2. A means after the rendering process is done to insert the collected scripts at any arbitrary location in the rendered output.

Collecting the scripts was easy, so I set out in search of how to modify the response stream post rendering.

The Solution

What I found was HTTP response filters.  You can read the same article I did, so I won’t repeat the details, but it does exactly what I wanted:  it lets you modify the response stream any way you like, just before it goes back to the browser.

So with a promising solution to the hard part in hand, I solved the easy part with a pair of extension methods on the ubiquitous HtmlHelper: one to collect the content to be rendered (and eliminate duplicates), and one to insert markers into the response stream that the filter would then replace, post rendering, with the collected content.
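
To give a flavor of where this is headed before diving into the code, a response filter is just a Stream wrapper assigned to Response.Filter. The sketch below is mine, not the final implementation: the class name and marker comment are invented, the RenderIncludes call anticipates the helper introduced later in this post, and it glosses over details like the response being flushed in chunks.

using System;
using System.IO;
using System.Text;

//Sketch only: buffers the rendered page in memory, then replaces a
//marker comment with the collected includes just before sending it on.
//Hooked up once per request, e.g.:
//  Response.Filter = new IncludeInsertingFilter(Response.Filter);
public class IncludeInsertingFilter : Stream
{
    private readonly Stream _inner;
    private readonly StringBuilder _buffer = new StringBuilder();

    public IncludeInsertingFilter(Stream inner) { _inner = inner; }

    public override void Write(byte[] bytes, int offset, int count)
    {
        //Capture the rendered output rather than passing it through.
        _buffer.Append(Encoding.UTF8.GetString(bytes, offset, count));
    }

    public override void Flush()
    {
        //Post rendering: swap the marker for the registered scripts.
        var html = _buffer.ToString().Replace(
            "<!--JS-INCLUDES-->",
            IncludeRegistration.RenderIncludes(IncludeType.JavaScript));
        var output = Encoding.UTF8.GetBytes(html);
        _inner.Write(output, 0, output.Length);
        _inner.Flush();
        _buffer.Length = 0;
    }

    //The remaining Stream members are required by the base class but
    //uninteresting here: this filter is write-only and non-seekable.
    public override bool CanRead { get { return false; } }
    public override bool CanSeek { get { return false; } }
    public override bool CanWrite { get { return true; } }
    public override long Length { get { throw new NotSupportedException(); } }
    public override long Position
    {
        get { throw new NotSupportedException(); }
        set { throw new NotSupportedException(); }
    }
    public override int Read(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
    public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
    public override void SetLength(long value) { throw new NotSupportedException(); }
}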

So without further ado, here is…

The Code

Tests First

Following TDD, I started out with a test that I wanted to pass.

[TestFixture]
public class IncludeRegistrationTests
{
    [Test]
    public void Registration_of_a_javascript_url_should_render_as_a_script_reference()
    {
        var htmlHelper = new HtmlHelper(null, null);
 
        htmlHelper.RegisterInclude("MyJavascript.js");
 
        var renderedHtml = IncludeRegistration
            .RenderIncludes(IncludeType.JavaScript);
 
        Assert.That(
            renderedHtml,
            Is.EqualTo("\r\n<script type=\"text/javascript\" src=\"MyJavascript.js\" />\r\n"));
    }
}
 
public enum IncludeType { JavaScript, Css }
 
public static class IncludeRegistration
{
    private static string _include;
 
    public static void RegisterInclude(this HtmlHelper htmlHelper, string include)
    {
        throw new NotImplementedException();
    }
 
    public static string RenderIncludes(IncludeType includeType)
    {
        throw new NotImplementedException();
    }
}

Several things can be inferred about how I want includes to work from this test and the stub code needed to allow it to compile.

  1. I want to be able to register includes anywhere that I have access to the HtmlHelper using the syntax Html.RegisterInclude(…).
  2. I want to just register a URL, without all the boilerplate HTML, and…
  3. If the registered URL ends in “.js”, I want the renderer to automatically render it as a valid script reference.

What if the URL does not end in “.js”?  Well, that gets into future, yet-unwritten tests, but my intention is that, for example, a URL ending in “.css” would render as a style sheet reference, along the lines of the sketch below.
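
To make that intention concrete, the rendering rule I have in mind is a simple dispatch on the file extension, roughly like this (this helper does not exist yet; it is just the shape of the idea):

//Sketch of the planned extension-based rendering rule.
private static string RenderOneInclude(string url)
{
    if (url.EndsWith(".js", StringComparison.OrdinalIgnoreCase))
        return string.Format(
            "\r\n<script type=\"text/javascript\" src=\"{0}\" />\r\n", url);

    if (url.EndsWith(".css", StringComparison.OrdinalIgnoreCase))
        return string.Format(
            "\r\n<link rel=\"stylesheet\" type=\"text/css\" href=\"{0}\" />\r\n", url);

    throw new NotSupportedException("Unrecognized include type: " + url);
}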

Get to Green

My first pass at getting the test to green looks like this:

public static void RegisterInclude(this HtmlHelper htmlHelper, string include)
{
    htmlHelper.ViewContext.HttpContext.Items["Include"] = include;
}
 
public static string RenderIncludes(IncludeType includeType)
{
    var httpContext = ServiceLocator.Current.GetInstance<HttpContextBase>();
    var include = httpContext.Items["Include"];
    return string.Format("\r\n<script type=\"text/javascript\" src=\"{0}\" />\r\n", include);
}

As you can see, I am using the HttpContext.Items collection as the storage location for my registered includes.  This is the appropriate thread-safe, request-scoped place to cache things in ASP.NET.  For those not familiar with this technique, more can be read about it here.

I am also using the Microsoft Patterns and Practices ServiceLocator facility to access the HttpContext in situations where I don’t have an HtmlHelper (as will be the case from within the HTTP response filter).

You might think I could just call HttpContext.Current, and in the real runtime environment I could, but not inside a test harness.  In the context of an automated test, we have to provide the HttpContext ourselves, which causes our test to look something like this:

[TestFixture]
public class IncludeRegistrationTests
{
    private HtmlHelper _htmlHelper;
    private HttpContextBase _httpContext;
 
    [SetUp]
    public void SetupBeforeEachTest()
    {
        //Create a mock HttpContext that supports 
        //getting and setting values in its Items collection.
        _httpContext = MockRepository.GenerateMock<HttpContextBase>();
        _httpContext.Stub(x => x.Items).Return(new Dictionary<object, object>());
 
        //Wrap the HttpContext in a ViewContext and give it to the HtmlHelper
        var viewContext = MockRepository.GenerateStub<ViewContext>();
        viewContext.HttpContext = _httpContext;
 
        //We don't need this mock but we can't pass null to HtmlHelper()
        var viewData = MockRepository.GenerateMock<IViewDataContainer>();
        _htmlHelper = new HtmlHelper(viewContext, viewData);
 
        //Also register the HttpContext with the service locator.
        var serviceLocator = new MyIocContainer();
        serviceLocator.RegisterInstance(_httpContext);
        ServiceLocator.SetLocatorProvider(() => serviceLocator);
    }
 
    [Test]
    public void Registration_of_a_javascript_url_should_render_as_a_script_reference()
    {
        _htmlHelper.RegisterInclude("MyJavascript.js");
 
        var renderedHtml = IncludeRegistration
            .RenderIncludes(IncludeType.JavaScript);
 
        Assert.That(
            renderedHtml,
            Is.EqualTo("\r\n<script type=\"text/javascript\" src=\"MyJavascript.js\" />\r\n"));
    }
}

Now, the purpose of using the Microsoft Patterns and Practices ServiceLocator interface is that it keeps the code from being tied to any particular IoC container, so you can take this code and use it with your container of choice.  Since I didn’t need anything fancy and didn’t feel like adding unneeded dependencies, I just wrote my own container for our purposes here:

public class MyIocContainer : IServiceLocator
{
    private readonly Dictionary<Type, Object> _cache = new Dictionary<Type, object>();
 
    public void RegisterInstance<TService>(TService instance)
    {
        _cache[typeof(TService)] = instance;
    }
 
    #region IServiceLocator Implementation
    public TService GetInstance<TService>()
    {
        return (TService)_cache[typeof(TService)];
    }
 
    //Implementation of remaining interface members omitted for brevity.
    //They all throw a NotImplementedException.
    #endregion
}

Next Steps

So now the test passes.  But obviously this implementation has some issues: for one, every time RegisterInclude() is called, it overwrites the previous registration; for another, passing in something other than a JavaScript URL would not have the desired effect.  But at least this gets our test passing, which is good enough for now.  I want to get a full vertical slice of functionality working before I broaden out with more features.
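
For instance, fixing the overwrite problem will mean storing a request-scoped set of registrations rather than a single value, something along these lines (a sketch of the direction, not the final code from part 2):

//Sketch: accumulate registrations in a set stored in HttpContext.Items
//so that duplicate registrations of the same URL collapse to one entry.
public static void RegisterInclude(this HtmlHelper htmlHelper, string include)
{
    var items = htmlHelper.ViewContext.HttpContext.Items;
    var includes = items["Includes"] as HashSet<string>;
    if (includes == null)
    {
        includes = new HashSet<string>();
        items["Includes"] = includes;
    }
    includes.Add(include); //HashSet silently ignores duplicates
}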

So let’s move on to the RenderIncludes function…

(Part 2 Coming Soon…)

Agile Practices · 08 Jan 2010 04:19 pm

My current project is the first I have been involved in where we are really doing TDD in a meaningful way, but, with no veteran agilist on the team, we are feeling our way along as we go. Coming to the end of our first full sprint, things seem to be going pretty well. Yet they don’t seem to be going like any TDD I have read about. So, I thought I would blog about what we are doing and see what feedback I get.

To set the stage, this is an ASP.NET MVC 2.0 application written in C#. The website sits in a DMZ, and the application and data layers are hosted on another set of machines behind a firewall.

In our effort to do true TDD, we start with a user story. From this we write one or more BDD stories using NBehave that execute against a controller in the web application. These tests are not unit tests of the controller, however, but integration tests. We are mocking only two sets of classes: 1) we use the types in the System.Web.Abstractions assembly (or our own abstractions where necessary) to decouple our tests from ASP.NET and the .NET framework, and 2) as we go, we define the methods needed on the interface to the application layer and mock the application layer via that interface. Thus our BDD stories test the whole of our web application code (excluding views) as an integrated whole.
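
To illustrate the shape of these tests (the service interface, controller, and model here are invented for the example; the real names come out of whatever story is being implemented), the web layer ends up being exercised against a mocked application layer like so:

//Illustrative only: IInvoiceService, Invoice, and InvoiceController are
//stand-ins for whatever interface the story at hand ends up defining.
var appLayer = MockRepository.GenerateMock<IInvoiceService>();
appLayer.Stub(x => x.GetInvoicesFor("12345")).Return(new[] { new Invoice() });

var controller = new InvoiceController(appLayer);
var result = (ViewResult)controller.Index("12345");

Assert.That(result.ViewData.Model, Is.Not.Null);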

Once we have these tests passing, we do a similar thing with our application layer: 1) we write BDD tests against the interfaces defined in the previous exercise, and 2) we write implementations that make the tests pass, in the process defining and mocking the interfaces needed for the data access layer and peripheral services.

Penultimately, we implement the peripheral services and write integration tests against their interface where practical.

Lastly, we go back to the web layer and implement the views. (Above, we only stub out the views to the degree needed to allow us to write our controller tests.) We wait until the end to do the views because the views have to be human-tested anyway, and this allows us to do essentially end-to-end system tests while testing out the view under development.

We do this iteratively in tight vertical slices of functionality, and refactor in between iterations. In the end we have a nice little workflow that starts with testing the controller interface and ends with a reasonably rigorous, albeit manual, system test.

On the whole, this seems to be pretty solid TDD. Our automated tests are BDD tests that map directly to needed application behavior. We only write code to satisfy the tests, and we can refactor at will inside the respective layers without having to modify the tests, knowing all the while that we have full functional coverage.

But note the lack of low-level unit tests. It is not the case that we don’t have any; we have a fair number, but they are not driving our TDD effort. The higher-level BDD tests are doing that.

We write unit tests when we have a complicated piece of code that we want to test in isolation to make sure it handles all the edge cases. (We are using NUnit’s new data-driven test cases for this sort of thing, as in the sketch below.) But we also write data-driven NBehave tests at the integration layer so that we know the application as a whole handles a representative set of those edge cases.
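
For instance, a data-driven edge-case test looks something like this (the validator being tested is hypothetical):

//Illustrative data-driven NUnit test; AccountNumber.IsValid is made up.
[TestCase("", false)]
[TestCase("not-a-number", false)]
[TestCase("1234567", true)]
public void Account_number_validation_should_handle_edge_cases(
    string input, bool expectedToBeValid)
{
    Assert.That(AccountNumber.IsValid(input), Is.EqualTo(expectedToBeValid));
}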

The problem we have encountered with the lower-level tests is their fragility. When we refactor, classes get broken up or combined, and the tests against them break and have to be retooled. The higher-level abstractions and interfaces are more stable, and therefore the integration tests against them remain valid in the face of the lower-level refactorings.

The only concern I really have going forward with our approach is the possibility of the tests taking too long to run at some point. Right now we have about 260 tests that run in just under 20 seconds. Given that some of these are data-driven tests that actually run hundreds of times, the total count of test executions is over 1,000. Considering this, if we do run into speed issues, the first line of attack may be to curtail some of the more extreme cases of data-driven unit testing. Next on the block would be to omit the database integration tests, which account for about one quarter of total execution time at this point.

In the end, though, this is working for now, but I would like to hear your thoughts on it and how you do it. Happy Coding!
