Monthly Archives: June 2012

Griffin.Container: Introducing the command support.

Commands are a great way to decouple logic in an application, since the command invoker has no knowledge of where the command is handled. You also move the focus from how the task is solved to the task itself.

A command which is used together with Griffin.Container can look like this:

public class CreateUser : ICommand
{
	public CreateUser(string firstName, string lastName)
	{
		if (firstName == null) throw new ArgumentNullException("firstName");
		if (lastName == null) throw new ArgumentNullException("lastName");

		FirstName = firstName;
		LastName = lastName;
	}

	public string FirstName { get; private set; }
	public string LastName { get; private set; }
}

Notice that the commands are immutable and do not return a value. A command should always validate all entered information (always catch errors as early as possible). Mandatory fields should be specified in the constructor while optional ones are set through methods or properties.

That is important if you want to be able to scale your application later on. Since the command is immutable and no return value is expected, you can for instance at any time serialize it and send it to another server. I'll demonstrate how you can achieve that using decorators and Griffin.Networking in a later blog post.

Let’s focus on the command now. We need to dispatch it in some way so that it can be invoked.

Here is an example ASP.NET MVC3 controller:

public class UserController : Controller
{
	ICommandDispatcher _dispatcher;

	public UserController(ICommandDispatcher dispatcher)
	{
		_dispatcher = dispatcher;
	}

	[HttpPost, Transactional]
	public ActionResult Create(CreateModel model)
	{
		if (!ModelState.IsValid)
			return View(model);

		var cmd = new CreateUser(model.FirstName, model.LastName);
		_dispatcher.Dispatch(cmd);

		return RedirectToAction("List");
	}
}

We do not know where the command is invoked, and we should really not care either. We could even go a step further and not care about when the command is executed (as long as we have some kind of notification system to inform the user of failed commands). It all depends on how scalable your application needs to be (now or later).

We could start with an application where everything is synchronous and executed in the same process and end up with an application where the commands are distributed over several servers.

Getting a result

Commands should not return a result. It complicates the command handling and forces everything to be synchronous (or even more complex). It's of course a trade-off, but it also makes everything more flexible.

You still have the information which you used to invoke the command. You can use that information to update the UI to fake that a new item has been created/updated (which it eventually will be). You could also inform the user that the item has been handled and that he needs to refresh the UI to see it.

If you need an ID, switch to GUIDs: generate one and attach it to the command before invoking it. There are GUID variants (such as sequential/COMB GUIDs) which work well with databases.
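
A minimal sketch of that approach (assuming we simply extend the command with an Id property):

public class CreateUser : ICommand
{
	public CreateUser(string firstName, string lastName)
	{
		if (firstName == null) throw new ArgumentNullException("firstName");
		if (lastName == null) throw new ArgumentNullException("lastName");

		// Generated by the invoker, so the id is known before the command
		// has been executed (possibly on another server).
		Id = Guid.NewGuid();

		FirstName = firstName;
		LastName = lastName;
	}

	public Guid Id { get; private set; }
	public string FirstName { get; private set; }
	public string LastName { get; private set; }
}

The controller can then use cmd.Id (for instance in the redirect) without waiting for the handler to run.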

Handling the command

Commands are handled by classes which implement the interface IHandlerOf<T>.

A typical implementation looks something like this:

[Component]
public class CreateUserHandler : IHandlerOf<CreateUser>
{
	private IUserRepository _repository;

	public CreateUserHandler(IUserRepository repository)
	{
		_repository = repository;
	}

	public void Invoke(CreateUser cmd)
	{
		var user = _repository.Create(cmd.FirstName, cmd.LastName);

		DomainEvent.Publish(new UserCreated(user));
	}
}

The [Component] attribute will allow you to automatically register the class in the container. Read more about that in the core documentation.

Nesting commands

No. Do not nest commands. A command is created for a specific use case. It's not intended to be reused. Nesting commands can quickly produce complex code which is hard to follow and maintain.

Instead break out common functionality to a third class which is used by both commands.

Decorators

Decorators allow you to add functionality to the command handlers without actually having to modify them.

You could for instance create a decorator which measures the performance of every command. It would look something like this:

public class PerformanceMonitor<T> : IHandlerOf<T> where T : class, ICommand
{
    private readonly IHandlerOf<T> _inner;

    public PerformanceMonitor(IHandlerOf<T> inner)
    {
        _inner = inner;
    }

    public void Invoke(T command)
    {
        var w = new Stopwatch();
        w.Start();
        _inner.Invoke(command);
        w.Stop();
        Console.WriteLine("Invocation of {0} took {1}ms.", command.GetType().Name, w.ElapsedMilliseconds);
    }
}

All decorators are attached to the handlers by using factories. A factory which would attach the performance decorator to all commands would look something like:

[Component]
public class PerformanceMonitorFactory : IDecoratorFactory
{
	// used to determine which commands to decorate
    public bool CanDecorate(Type commandType)
    {
        return true;
    }

	// invoked if we can decorate a command
    public IHandlerOf<T> Create<T>(IHandlerOf<T> inner) where T : class, ICommand
    {
        return new PerformanceMonitor<T>(inner);
    }
}

The factory itself should be added to the container. The command dispatcher automatically finds all factories which have been registered.

Exception decorator

A decorator called ExceptionDecorator<T> is included in the framework. It automatically logs all failed commands and their properties.

Example output:

FailureCommand
    Parameters:
        FirstName: Arne
    Exception:
        System.InvalidOperationException: That wont work, dude!
           at Griffin.Container.Tests.Commands.DecoratorTests.FailureHandler.Invoke(FailureCommand command) in C:\projects\csharp\gauffin\Griffin.Container\Source\Griffin.Container.Tests\Commands\DecoratorTests.cs:line 53
           at Griffin.Container.Tests.Commands.DecoratorTests.PerformanceMonitor`1.Invoke(T command) in C:\projects\csharp\gauffin\Griffin.Container\Source\Griffin.Container.Tests\Commands\DecoratorTests.cs:line 39
           at Griffin.Container.Commands.ExceptionDecorator`1.Invoke(T command) in C:\projects\csharp\gauffin\Griffin.Container\Source\Griffin.Container\Commands\ExceptionDecorator.cs:line 39

Validation decorator

There is also a validation decorator included that is used to validate properties on the commands. It uses DataAnnotation attributes for the validation.

Example command (required parameters are manually validated in the constructor while optional properties are validated using DataAnnotations):

public class CreateUser : ICommand
{
	public CreateUser(string firstName, string lastName)
	{
		if (firstName == null) throw new ArgumentNullException("firstName");
		if (lastName == null) throw new ArgumentNullException("lastName");

		FirstName = firstName;
		LastName = lastName;
	}

	public string FirstName { get; private set; }
	public string LastName { get; private set; }

	[StringLength(40)]
	public string Title { get; set; }
}

Getting the code

The code is available at GitHub and is included in the main project.

You can get it by using nuget: install-package griffin.container.


ASP.NET MVC3: Custom error for 401 Unauthorized

Have you tried to display a custom error message if authorization fails..

[Authorize(Roles="User")]
public class ItemController : Controller
{
}

.. by specifying it in the customErrors section in web.config:

    <customErrors defaultRedirect="~/Error/" mode="On">
      <error statusCode="404" redirect="~/Error/NotFound/" />
      <error redirect="~/Error/NotAuthorized/" statusCode="401" />
    </customErrors>

So did I, without getting it to work. The [HandleError] attribute cannot handle it either, since it only handles exceptions (as the contract IExceptionFilter says).

Solution

public class BetterAuthorizeAttribute : AuthorizeAttribute
{
	protected override void HandleUnauthorizedRequest(AuthorizationContext filterContext)
	{
		throw new NotAuthorizedHttpException(Roles);
	}
}

public class NotAuthorizedHttpException : HttpException
{
	public NotAuthorizedHttpException(string missingRoles)
		: base(401, string.Format("You do not have the required role(s) '{0}'.", missingRoles))
	{
	}
}

That solution allows us to use the exception filter (HandleErrorAttribute) to serve the error page. Customization gets easier since we only have to make a change in Global.asax to be able to serve any page (i.e. switch the HandleError filter to our custom one).
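
For instance, the wiring in Global.asax.cs could look something like this (a sketch; the view name is just an example):

public static void RegisterGlobalFilters(GlobalFilterCollection filters)
{
	// Map our exception to a specific view instead of the default error page.
	filters.Add(new HandleErrorAttribute
	{
		ExceptionType = typeof(NotAuthorizedHttpException),
		View = "NotAuthorized"
	});

	// Everything else still goes to the default error view.
	filters.Add(new HandleErrorAttribute());
}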


Double check pattern

I just answered a question at SO about lazy loading which involved the double check pattern. It's a really useful pattern for lazy loading since it hurts performance a lot less than always locking.

I thought that I should explain why by using some comments:

public sealed class Lazy<T> where T : class
{
    private readonly object _syncRoot = new object();
    private readonly Func<T> _factory;
    private volatile T _value; // volatile: other threads must see a fully constructed instance

    public Lazy(Func<T> factory)
    {
        if (factory == null) throw new ArgumentNullException("factory");

        _factory = factory;
    }

    public T Value
    {
        get
        {
            // here is the first check. It only returns true
            // if the instance has not been created, right?
            if (_value == null)
            {
                // so when we enter here, we are within a very small time frame:
                // That is from the above check until the new instance is
                // assigned to the variable.

                // Which is a *very* small time frame
                // (unless you rely on external resources).

                // So let's lock to make sure that no other thread
                // has already started to create the object.
                lock (_syncRoot)
                { 
                    // We enter here as soon as any other thread (if there was one)
                    // has stopped working, which means that the value could
                    // have been assigned.

                    // So let's check if another thread has already done the work for us.
                    if (_value == null)
                    {
                        //no, so let's create the object.
                        _value = _factory();
                    }
                }
            }
            return _value;
        }
    }

    public override string ToString()
    {
        return _value == null ? "Not created" : _value.ToString();
    }
}

The double check pattern allows us to have lock-free code (other than when the instance is created), which is a huge performance gain compared to always locking.

Feel free to use the code in .NET versions earlier than 4 (version 4 has Lazy<T> built in).
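
Usage looks something like this (ExpensiveResource is just a made-up example):

public class ExpensiveResource
{
    public ExpensiveResource()
    {
        Console.WriteLine("Created (this only happens once).");
    }
}

public class Consumer
{
    // not created until the first access of .Value, no matter how many threads race for it
    private readonly Lazy<ExpensiveResource> _resource =
        new Lazy<ExpensiveResource>(() => new ExpensiveResource());

    public ExpensiveResource Resource
    {
        get { return _resource.Value; }
    }
}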


Adding a required indicator in ASP.NET MVC3 using jQuery

You might want to visually indicate that fields in ASP.NET MVC3 are required.

Sample image

It’s easy thanks to the HTML attributes that the unobtrusive validation API generates.

Start by adding a new CSS class in your site.css called something like .required-indicator.

.required-indicator { color: red; font-size: 1.2em; font-weight: bold; }

Then add a small jQuery script somewhere (in the layout or an application.js):

$(function() {
    $('[data-val-required]').after('<span class="required-indicator">*</span>');
});

Done! Here is a sample fiddle.


How to handle transactions in ASP.NET MVC3

I got annoyed about having to repeat the transaction handling in every POST method of my controllers. First of all: I don't want to have to take the unit of work in the constructor of each controller, since that implicitly says that all/most methods use transactions. And I do not want to have to resolve a factory, since that says that all methods might use transactions. Finally, using a singleton is the worst of the solutions.

Before

The problem wasn’t that hard to solve thanks to the flexibility of ASP.NET MVC3. But let’s start by looking at a typical post method:

[HttpPost]
public virtual ActionResult Create(CreateModel model)
{
	if (!ModelState.IsValid)
		return View(model);

	var instruction = new Instruction(CurrentUser);
	Mapper.Map(model, instruction);

	using (var uow = _unitOfWorkFactory.Create())
	{
		_repository.Save(instruction);
		
		uow.SaveChanges();
	}
	
	return RedirectToAction("Details", new {id = instruction.Id});
}

After

What I did was to create a new action filter called TransactionalAttribute. It checks that the model state is valid and that no exceptions have been thrown. If everything checks out OK, it uses the DependencyResolver to find an IUnitOfWork implementation. Since this is done in an action filter, the transaction will not be created unless it's actually required. All you have to do is to register your IUnitOfWork implementation in your IoC container and tag your action:

[HttpPost, Transactional]
public virtual ActionResult Create(CreateModel model)
{
	if (!ModelState.IsValid)
		return View(model);

	var instruction = new Instruction(CurrentUser);
	Mapper.Map(model, instruction);
	_repository.Save(instruction);
	
	return RedirectToAction("Details", new {id = instruction.Id});
}

There is a small catch: you must add an error to the ModelState if you catch exceptions in your action. The transaction will otherwise get committed.

[HttpPost, Transactional]
public virtual ActionResult Create(CreateModel model)
{
	if (!ModelState.IsValid)
		return View(model);

	try
	{
		model.Category = model.Category ?? "Allmänt";

		var instruction = new Instruction(CurrentUser);
		Mapper.Map(model, instruction);
		_repository.Save(instruction);
		
		return RedirectToAction("Details", new {id = instruction.Id});
	}
	catch (Exception err)
	{
		// Adds an error to prevent commit.
		ModelState.AddModelError("", err.Message);
		Logger.Error("Failed to save instruction for app " + CurrentApplication, err);
		return View(model);
	}
}

Implementation

The attribute itself looks like this:

public class TransactionalAttribute : ActionFilterAttribute
{
	private IUnitOfWork _unitOfWork;

	public override void OnActionExecuting(ActionExecutingContext filterContext)
	{
		if (filterContext.Controller.ViewData.ModelState.IsValid && filterContext.HttpContext.Error == null)
			_unitOfWork = DependencyResolver.Current.GetService<IUnitOfWork>();

		base.OnActionExecuting(filterContext);
	}

	public override void OnActionExecuted(ActionExecutedContext filterContext)
	{
		if (filterContext.Controller.ViewData.ModelState.IsValid && filterContext.HttpContext.Error == null && _unitOfWork != null)
			_unitOfWork.SaveChanges();

		base.OnActionExecuted(filterContext);
	}
}

Simple, but effective! :)

Extras

The actual unit of work implementation depends on which kind of data layer you are using. You can use the following code snippet if you are using NHibernate and have registered the ISession in your container:

public class NhibernateUnitOfWork : IUnitOfWork
{
	private readonly ISession _session;
	private ITransaction _transaction;

	public NhibernateUnitOfWork(ISession session)
	{
		_session = session;
		_transaction = session.BeginTransaction();
	}

	public void Dispose()
	{
		if (_transaction == null)
			return;

		if (!_transaction.WasCommitted)
			_transaction.Rollback();

		_transaction.Dispose();
		_transaction = null;
	}

	public void SaveChanges()
	{
		_transaction.Commit();
	}

}

The Unit of work interface is really simple:

public interface IUnitOfWork : IDisposable
{
	void SaveChanges();
}
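
If you are using Entity Framework instead, the same contract can be implemented as a thin wrapper around your DbContext (a sketch; MyDbContext is a placeholder for your own context class):

public class EntityFrameworkUnitOfWork : IUnitOfWork
{
	private readonly MyDbContext _context;

	public EntityFrameworkUnitOfWork(MyDbContext context)
	{
		_context = context;
	}

	public void SaveChanges()
	{
		// DbContext.SaveChanges already wraps the changes in a transaction.
		_context.SaveChanges();
	}

	public void Dispose()
	{
		// Nothing committed means nothing to roll back; unsaved changes are
		// simply discarded when the container disposes the context.
	}
}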

Protect your data!

The introduction of OR/M libraries and strongly typed views/view models has made a lot of programmers switch to the dark side. Let's fight this evil!

What evil am I talking about? Public setters in your domain/business model (or whatever you like to call it). Public setters allow anyone to modify your domain model data at any time. For instance, when you create an item it's probably the repository and the controller (MVC) / VM (WPF) that modify it. If you have a Windows service which does background processing, it will probably modify it too. If you allow the user to edit the model through a GUI you have a third place which can modify the model. All those places get two responsibilities, since the domain model itself has no control over its data: they have to take care of their own responsibility and to make sure that the domain model is in a consistent state. Imho that's not a very bright idea. How could anyone other than the domain model itself know which properties should be specified and when?

Just look at this terrible domain model:

public class TodoTask
{
	public string Id { get; set; }
	public string Title { get; set; }
	public string Description { get; set; }

	public DateTime CreatedAt { get; set; }
	public User CreatedBy { get; set; }

	public DateTime UpdatedAt { get; set; }
	public User UpdatedBy { get; set; }

	public bool IsCompleted { get; set; }
	public DateTime CompletedAt { get; set; }
	
	public DateTime AssignedAt { get; set; }
	public User AssignedTo { get; set; }

	public List<UserHistoryEntry> UserHistory { get; set; }
}

public class UserHistoryEntry
{
	public DateTime StartedAt { get; set; }
	public DateTime EndedAt { get; set; }
	public User User { get; set; }
}

aawwww. That’s horrible. I can barely look at it. Give me a minute..

(One hour later)

Ok. So what's wrong with that model? It's not responsible for its own data. Any class that uses it can change its content without any kind of validation.

Here is a sample method to illustrate the problem:

public class TodoController : Controller
{
	// Transactional = action attribute by me: Commit to db if not exceptions & model is valid
	[HttpPost, Transactional] 
	public ActionResult Edit(TodoModel model)
	{
		var dbEntity = _repository.Get(model.Id);
		
		if (dbEntity.IsCompleted != model.IsCompleted)
		{
			if (!model.IsCompleted)
			{
				// Changed from completed to not completed
				// error?
			}
			
			dbEntity.IsCompleted = true;
			dbEntity.CompletedAt = DateTime.Now;
			
			foreach (var entry in dbEntity.UserHistory)
			{
				_emailService.Send(entry.User.Email, "Item completed", dbEntity.AssignedTo + " has completed item " + dbEntity.Title);
			}
		}
		
		if (model.AssignedTo != dbEntity.AssignedTo)
		{
			dbEntity.AssignedTo = model.AssignedTo;
			dbEntity.AssignedAt = model.AssignedAt;

			foreach (var entry in dbEntity.UserHistory)
			{
				_emailService.Send(entry.User.Email, "Item (re)assigned", dbEntity.AssignedTo + " has been assigned to " + model.AssignedTo);
			}
		}
		
		_repository.Save(dbEntity);
		return RedirectToAction("Details", new { id = dbEntity.Id });
	}
	
}

Take a moment and look at the code. We have whipped up a view using the tooling in Visual Studio (a strongly typed view model and an edit template). Since we allow the user to edit most fields, we need to include different checks for different states and act accordingly. (It would be slightly better if that code existed in your service class instead of in the controller/VM.)

Let's fast forward one year. You now have a problem with the todo items. For some reason the CompletedAt date is not saved when it should be. You therefore add a new check each time a todo item is saved in the repository:

You check whether CompletedAt is set when IsCompleted is true; if not, you assign it.
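
In code, that patch could look something like this (illustrative only, the repository method is assumed):

public void Save(TodoTask item)
{
	// workaround: CompletedAt is sometimes missing, nobody knows why...
	if (item.IsCompleted && item.CompletedAt == DateTime.MinValue)
		item.CompletedAt = DateTime.Now;

	// ... actual persistence code ...
}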

That is a workaround. Those kinds of workarounds are almost always present in applications that expose all model data to everything in the application. It does not matter if you have a TodoService class which should handle everything for the todo items, since all other code can “fix” the model info too. Hence you WILL get those workarounds sooner or later.

Protect your data

Let’s look at the cure. In fact, it’s very simple. Make all setters private:

public class TodoTask
{
	public string Id { get; private set; }
	public string Title { get; private set; }
	public string Description { get; private set; }

	public DateTime CreatedAt { get; private set; }
	public User CreatedBy { get; private set; }

	public DateTime UpdatedAt { get; private set; }
	public User UpdatedBy { get; private set; }

	public bool IsCompleted { get; private set; }
	public DateTime CompletedAt { get; private set; }
	
	public DateTime AssignedAt { get; private set; }
	public User AssignedTo { get; private set; }

	public IEnumerable<UserHistoryEntry> UserHistory { get; private set; }
}

Notice that I've changed UserHistory to IEnumerable. That's so that we don't break the Law of Demeter (and lose control over the history changes). The todo item data is now protected. But we can't change anything now. And we can't build a GUI. The solution is simple. We do not just want to change data. We want to take some action! Let's add some methods so that we can do that.

public class TodoTask
{
	public TodoTask(User creator, string title, string description)
	{
		// insert argument exceptions here.
		
		CreatedBy = creator;
		CreatedAt = DateTime.Now;

		UpdatedBy = creator;
		UpdatedAt = DateTime.Now;

		Title = title;
		Description = description;
	}
	
	// [... all the properties ..]
		
	public void Assign(User user)
	{
		// insert argument exception here
		
		// _userHistory is a private List<UserHistoryEntry> backing the UserHistory property (see the note below the class)
		_userHistory.Add(new UserHistoryEntry(AssignedTo, AssignedAt, DateTime.Now));
		AssignedTo = user;
		AssignedAt = DateTime.Now;
		
		UpdatedAt = DateTime.Now;
		UpdatedBy = Thread.CurrentPrincipal.Identity;
		
		// hooked by the email dispatcher
		// the event architecture is provided by Griffin.Container
		DomainEvent.Dispatch(new ItemAssigned(this));
	}
	
	public void Complete()
	{
		if (IsCompleted)
			throw new InvalidOperationException("Item has already been completed");
			
		IsCompleted = true;
		CompletedAt = DateTime.Now;
		
		UpdatedAt = DateTime.Now;
		UpdatedBy = Thread.CurrentPrincipal.Identity;
			
		DomainEvent.Dispatch(new ItemCompleted(this));
	}
	
	public void Update(string title, string description)
	{
		// insert argument exceptions here
		
		UpdatedAt = DateTime.Now;
		UpdatedBy = Thread.CurrentPrincipal.Identity;
		Title = title;
		Description = description;
		
		DomainEvent.Dispatch(new ItemUpdated(this));
	}
	
	
}
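
Since UserHistory is exposed as an IEnumerable, the Assign method above assumes that the property is backed by a private list inside TodoTask, something like:

	private readonly List<UserHistoryEntry> _userHistory = new List<UserHistoryEntry>();

	public IEnumerable<UserHistoryEntry> UserHistory
	{
		get { return _userHistory; }
	}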

Constructor

I use constructors to force mandatory data. By doing so I make sure that the item is never in an inconsistent state. It also makes it easy to spot what kind of information each model requires.

Methods

Next I added methods for the actions which are possible to perform on our todo item. We have now protected the data and do not have to rely on all dependent classes changing the properties correctly. Our todo code is therefore a lot less error prone than before.

Make it a rule to create a method each time more than one field/property has to be changed to reach a consistent state.

Exceptions

Notice that I also validate state in each method. For instance, it should not be possible to set an item as completed if it has already been completed. Those checks are more important than you think.

It can be hard to resist removing checks like that. You probably think “It's not so bad that an already completed item gets set as completed again, what could possibly happen?”. That does not matter. If you start thinking like that, where do you end? It IS a bug. Better to have a zero tolerance policy. Fix all bugs, no matter how small or unimportant they are.

Summary

Since we have full control over our data, it's quite easy to make sure that it stays that way. Unit tests become more straightforward as it's clear what each method does and what it changes. I tend to always nag about unit tests: classes which are hard to unit test are often likely candidates for refactoring.

Now that we have our methods, it's also easier to publish events for the actions (pub/sub; you can use Griffin.Container for that). Those events can be used to trigger other actions in the system, which in turn gives us a more loosely coupled application.
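
For instance, an email dispatcher can subscribe to the ItemCompleted event and notify everyone who has worked on the item. A rough sketch (the subscriber interface name and the event properties are assumptions here, check the Griffin.Container documentation for the exact contract):

[Component]
public class CompletedItemMailer : ISubscriberOf<ItemCompleted>
{
	private readonly IEmailService _emailService;

	public CompletedItemMailer(IEmailService emailService)
	{
		_emailService = emailService;
	}

	public void Handle(ItemCompleted domainEvent)
	{
		// assumes ItemCompleted exposes the TodoTask and that User has an Email property
		foreach (var entry in domainEvent.Item.UserHistory)
			_emailService.Send(entry.User.Email, "Item completed", domainEvent.Item.Title + " has been completed.");
	}
}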

Encapsulation is important. It is very hard to find bugs which are created because the wrong combination of properties has been set. Those bugs tend to surface later on; it might be a week or a year later. That's why most bug fixes for them are really workarounds (you can't find the source of the bug, so you just “correct” the model information when you can).

A properly layered application (all layers may or may not exist in the same project; I prefer two projects from the start, one for the UI and one for DL/BL).

UI design

If you follow me on Twitter you probably read a somewhat cryptic message a couple of weeks ago. It said something like “make all setters private and see what it does to your UI”. The point was that if you make all setters private, it forces you to use methods. It also effectively prevents you from creating CRUD applications (ooohh, I really REALLY hate those).

Your users might think that they want a simple CRUD app. But really, they don't. My current client is using a web based application to be able to follow ISO2000 (a small part of it is a case management system). The edit case screen for it looks like this:


(yes, the image is from a real production system.)

The point is that approximately 20% of the fields are used when creating the case. 10% or less are used for each state change (the case can have 10 different states). There is research which shows that presenting too much information makes it harder for the user to decide what to do. Which in turn makes the user subconsciously hostile to using the application (my own subjective conclusion based on experience). Have you ever heard “Less is more”? It's very true when it comes to UI design.

If you are creating a native application you should not have a problem with only displaying relevant information. If you are coding web applications, use Ajax/JSON to load partial information when you need it. A great way to design a UI is to stop focusing on the information and focus on the actions instead. Don't ask the client what kind of information he wants to handle. A great starting question would be “When you start the application, what do you want to be able to do on the start page?”. You won't get an answer that says “I want to be able to specify title, description, created at and the user who created the item”, but rather “I want to be able to create a todo item”. Don't be tempted to ask which information the user wants to specify; ask what the minimal requirements are. Then take it from there.

Why am I ranting about this? Because changing to private setters actually helps you create a better UI and therefore improves the user experience. If you are using ASP.NET MVC3, try to create an edit view using your domain model. How many fields does the wizard add for you? The answer is ZERO. Isn't that great? =)

What we should do is to create a view which corresponds to each method in the model. Only specify the fields which the method takes as its arguments. Voila, you now have a much more user friendly UI.
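
For the Assign method that could look something like this (a sketch; AssignTodoModel and the repositories are just example names):

public class AssignTodoModel
{
	[Required]
	public string TodoId { get; set; }

	[Required]
	public string UserId { get; set; }
}

[HttpPost, Transactional]
public ActionResult Assign(AssignTodoModel model)
{
	if (!ModelState.IsValid)
		return View(model);

	var todo = _repository.Get(model.TodoId);
	var user = _userRepository.Get(model.UserId);

	todo.Assign(user);
	_repository.Save(todo);

	return RedirectToAction("Details", new { id = model.TodoId });
}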

Let’s look at the original todo item edit page:


(again, it’s a simple model)

We can't build our UI in the same way anymore, since our methods effectively prevent us from having a lot of edit fields. Hence we have to scrap the old edit page and refactor our UI a bit. The details page will now get some buttons instead.

And when we press for instance the assign button we get:

Isn't that more straightforward than before?

ORM

How do you handle the ORM then? First of all: the OR/M's responsibility is persistence only. The information stored in the database doesn't have to have a 1-1 relation with your domain model. You can for instance have normalized or denormalized the information so that it's either stored more efficiently or easier to query. The point is that the entity that your OR/M uses isn't always an exact copy of your model.

In 99% of cases I use repository classes even if I'm using an OR/M. That way I can refactor the data source (for instance denormalize it) or refactor the models without affecting dependent code (and I can unit test all code that loads/stores data to/from the data source). If you truly want to create layered applications, you have to make sure that the abstraction between the layers is intact. It will force you to write more code; there's no way around that. But the code is more loosely coupled, which means that it's easier to maintain and extend in the future.
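
The abstraction itself doesn't have to be big. Something along these lines is usually enough (a sketch, the method names are just examples):

public interface ITodoRepository
{
	TodoTask Get(string id);
	void Save(TodoTask item);
	IEnumerable<TodoTask> FindAssignedTo(User user);
}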

However, NHibernate is quite flexible when it comes to the mapping. It can handle properties with private setters, which means that I can in many cases use the model as the database entity too. Since I use repositories it doesn't really matter: I might start with a 1-1 mapping and switch to separate db entities if the code diverges later on.

Summary

Going back to one of the fundamentals of object oriented programming (encapsulation) doesn't just make our code less error prone. It also forces us to think again about how we design our applications and UIs. Don't just be a code monkey using the tools provided. Think about what those tools do to your application. Their end result might not always be a good thing.