Griffin.Container: Introducing the command support.

Commands are a great way to decouple logic in an application, since the command invoker has no knowledge of where the command is handled. You also move the focus from how the task is solved to the task itself.

A command which is used together with Griffin.Container can look like this:

public class CreateUser : ICommand
{
	public CreateUser(string firstName, string lastName)
	{
		if (firstName == null) throw new ArgumentNullException("firstName");
		if (lastName == null) throw new ArgumentNullException("lastName");

		FirstName = firstName;
		LastName = lastName;
	}

	public string FirstName { get; private set; }
	public string LastName { get; private set; }
}

Notice that commands are immutable and do not return a value. A command should always validate all entered information (always catch errors as early as possible). Mandatory fields should be specified in the constructor, while optional information is set through methods or properties.

That is important if you want to be able to scale your application later on. Since the command is immutable and no return value is expected, you can at any time serialize it and send it to another server. I’ll demonstrate how you can achieve that using decorators and Griffin.Networking in a later blog post.

Let’s focus on the command now. We need to dispatch it in some way so that it can be invoked.

Here is an example ASP.NET MVC3 controller:

public class UserController : Controller
{
	ICommandDispatcher _dispatcher;

	public UserController(ICommandDispatcher dispatcher)
	{
		_dispatcher = dispatcher;
	}

	[HttpPost, Transactional]
	public ActionResult Create(CreateModel model)
	{
		if (!ModelState.IsValid)
			return View(model);

		var cmd = new CreateUser(model.FirstName, model.LastName);
		_dispatcher.Dispatch(cmd);

		return RedirectToAction("List");
	}
}

We do not know where the command is handled, and we should really not care either. We could even go a step further and not care about when the command is executed (as long as we have some kind of notification system to inform the user of failed commands). It all depends on how scalable your application should be (now or later).

We could start with an application where everything is synchronous and executed in the same process, and end up with an application where the commands are distributed over several servers.

Getting a result

Commands should not return a result. It complicates the command handling and forces everything to be synchronous (or even more complex). It’s of course a trade off, but it also makes everything more flexible.

You still have the information which you used to invoke the command. You can use that information to update the UI to fake that a new item has been created/updated (which it will be eventually). You could also inform the user that the item has been handled and that he needs to refresh the UI to see it.

If you need an ID, switch to GUIDs: generate one and attach it to the command before invoking it. There are GUID generation schemes (such as sequential/comb GUIDs) which work well with databases.
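As a minimal sketch of that idea (the UserId property is an assumption, it is not part of the CreateUser command shown above), the caller generates the id before dispatching and can therefore redirect or update the UI without waiting for a result:

public class CreateUser : ICommand
{
	// constructor and name properties as shown above...

	// assigned by the caller before the command is dispatched
	public Guid UserId { get; set; }
}

// in the controller:
var cmd = new CreateUser(model.FirstName, model.LastName) { UserId = Guid.NewGuid() };
_dispatcher.Dispatch(cmd);
return RedirectToAction("Details", new { id = cmd.UserId });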

Handling the command

Commands are handled by classes which implement the interface IHandlerOf<T>.

A typical implementation looks something like this:

[Component]
public class CreateUserHandler : IHandlerOf<CreateUser>
{
	private IUserRepository _repository;

	public CreateUserHandler(IUserRepository repository)
	{
		_repository = repository;
	}

	public void Invoke(CreateUser cmd)
	{
		var user = _repository.Create(cmd.FirstName, cmd.LastName);

		DomainEvent.Publish(new UserCreated(user));
	}
}

The [Component] attribute will allow you to automatically register the class in the container. Read more about that in the core documentation.

Nesting commands

No. Do not nest commands. A command is created for a specific use case. It’s not intended to be reused. Nesting commands can quickly produce complex code which is hard to follow and maintain.

Instead break out common functionality to a third class which is used by both commands.

Decorators

Decorators allow you to add functionality to the command handlers without actually having to modify them.

You could for instance create a decorator which measures the performance of every command. It would look something like this:

public class PerformanceMonitor<T> : IHandlerOf<T> where T : class, ICommand
{
    private readonly IHandlerOf<T> _inner;

    public PerformanceMonitor(IHandlerOf<T> inner)
    {
        _inner = inner;
    }

    public void Invoke(T command)
    {
        var w = new Stopwatch();
        w.Start();
        _inner.Invoke(command);
        w.Stop();
        Console.WriteLine("Invocation of {0} took {1}ms.", command.GetType().Name, w.ElapsedMilliseconds);
    }
}

All decorators are attached to the handlers by using factories. A factory which would attach the performance decorator to all commands would look something like:

[Component]
public class PerformanceDecoratorFactory : IDecoratorFactory
{
	// used to determine which commands to decorate
    public bool CanDecorate(Type commandType)
    {
        return true;
    }

	// invoked if we can decorate a command
    public IHandlerOf<T> Create<T>(IHandlerOf<T> inner) where T : class, ICommand
    {
        return new PerformanceMonitor<T>(inner);
    }
}

The factory itself should be added to the container. The command dispatcher automatically finds all factories which have been registered.

Exception decorator

A decorator called ExceptionDecorator<T> is included in the framework. It automatically logs all failed commands and their properties.

Example output:

FailureCommand
    Parameters:
        FirstName: Arne
    Exception:
        System.InvalidOperationException: That wont work, dude!
           at Griffin.Container.Tests.Commands.DecoratorTests.FailureHandler.Invoke(FailureCommand command) in C:\projects\csharp\gauffin\Griffin.Container\Source\Griffin.Container.Tests\Commands\DecoratorTests.cs:line 53
           at Griffin.Container.Tests.Commands.DecoratorTests.PerformanceMonitor`1.Invoke(T command) in C:\projects\csharp\gauffin\Griffin.Container\Source\Griffin.Container.Tests\Commands\DecoratorTests.cs:line 39
           at Griffin.Container.Commands.ExceptionDecorator`1.Invoke(T command) in C:\projects\csharp\gauffin\Griffin.Container\Source\Griffin.Container\Commands\ExceptionDecorator.cs:line 39

Validation decorator

There is also a validation decorator included that is used to validate properties on the commands. It uses DataAnnotation attributes for the validation.

Example command (required parameters are manually validated in the constructor while optional properties are validated using DataAnnotations):

public class CreateUser : ICommand
{
	public CreateUser(string firstName, string lastName)
	{
		if (firstName == null) throw new ArgumentNullException("firstName");
		if (lastName == null) throw new ArgumentNullException("lastName");

		FirstName = firstName;
		LastName = lastName;
	}

	public string FirstName { get; private set; }
	public string LastName { get; private set; }

	[StringLength(40)]
	public string Title { get; set; }
}

Getting the code

The code is available at github and is included in the main project.

You can get it by using nuget: install-package griffin.container.


ASP.NET MVC3: Custom error for 401 Unauthorized

Have you tried to display a custom error message if authorization fails..

[Authorize(Roles="User")]
public class ItemController : Controller
{
}

.. by specifying it in the customErrors section in web.config:

    <customErrors defaultRedirect="~/Error/" mode="On">
      <error statusCode="404" redirect="~/Error/NotFound/" />
      <error redirect="~/Error/NotAuthorized/" statusCode="401" />
    </customErrors>

So did I, without getting it to work. The [HandleError] attribute cannot handle it either, since it only handles exceptions (as the contract IExceptionFilter says).

Solution

public class BetterAuthorizeAttribute : AuthorizeAttribute
{
	protected override void HandleUnauthorizedRequest(AuthorizationContext filterContext)
	{
		throw new NotAuthorizedHttpException(Roles);
	}
}

public class NotAuthorizedHttpException : HttpException
{
	public NotAuthorizedHttpException(string missingRoles)
		: base(401, string.Format("You do not have the required role(s) '{0}'.", missingRoles))
	{
	}
}

That solution allows us to use the exception filter (HandleErrorAttribute) to serve the error page. Customization gets easier since we only have to make a change in global.asax to be able to serve any page (i.e. switch the HandleError filter to our custom one).
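As a hedged sketch (the view name and filter ordering are assumptions), the filter registration in global.asax.cs could map our exception to a dedicated view and let the default filter handle everything else:

public static void RegisterGlobalFilters(GlobalFilterCollection filters)
{
	// serve Views/Shared/NotAuthorized.cshtml for our exception
	filters.Add(new HandleErrorAttribute
	{
		ExceptionType = typeof(NotAuthorizedHttpException),
		View = "NotAuthorized",
		Order = 1
	});

	// fall back to the default error view for everything else
	filters.Add(new HandleErrorAttribute { Order = 2 });
}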


Double check pattern

I just answered a question at SO about lazy loading which involved the double check pattern. It’s a really useful pattern for lazy loading since it hurts performance a lot less than always locking.

I thought that I should share it and explain why by using some comments:

public sealed class Lazy<T> where T : class
{
    private readonly object _syncRoot = new object();
    private readonly Func<T> _factory;
    private T _value;

    public Lazy(Func<T> factory)
    {
        if (factory == null) throw new ArgumentNullException("factory");

        _factory = factory;
    }

    public T Value
    {
        get
        {
            // here is the first check. It only returns true
            // if the instance has not been created, right?
            if (_value == null)
            {
                // so when we enter here, we are within a very small time frame:
                // That is from the above check until the new instance is
                // assigned to the variable.

                // Which is a *very* small time frame
                // (unless you rely on external resources).

                // So let's lock to make sure that no other thread
                // has already started to create the object.
                lock (_syncRoot)
                { 
                    // We enter here as soon as any other thread (if there was one)
                    // has stopped working, which means that the value could
                    // have been assigned.

                    // So let's check if another thread has already done the work for us.
                    if (_value == null)
                    {
                        //no, so let's create the object.
                        _value = _factory();
                    }
                }
            }
            return _value;
        }
    }

    public override string ToString()
    {
        return _value == null ? "Not created" : _value.ToString();
    }
}

The double check pattern allows us to have lock free code (other than when the instance is created) which is a huge performance gain compared to using a single lock.

Feel free to use the code in .NET versions earlier than 4 (later versions have Lazy<T> built in).
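Here is a small usage sketch (the repository types are made up for the example); the expensive object is not created until Value is accessed for the first time:

public class UserService
{
    private readonly Lazy<IUserRepository> _repository =
        new Lazy<IUserRepository>(() => new DbUserRepository("connection string here"));

    public User GetUser(int id)
    {
        // DbUserRepository is created here, on the first access only
        return _repository.Value.GetById(id);
    }
}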


Adding a required indicator in ASP.NET MVC3 using jQuery

You might want to visually indicate that fields in an ASP.NET MVC3 form are required.

Sample image

It’s easy thanks to the HTML attributes that the unobtrusive validation API generates.

Start by adding a new CSS class in your site.css called something like .required-indicator.

.required-indicator { color: red; font-size: 1.2em; font-weight: bold; }

Then add a small jQuery script somewhere (in the layout or an application.js):

$(function() {
    $('[data-val-required]').after('<span class="required-indicator">*</span>');
});

Done! Here is a sample fiddle.


How to handle transactions in ASP.NET MVC3

I got annoyed about having to repeat the transaction handling in every POST method of my controllers. First of all: I don’t want to take the unit of work in the constructor of each controller, since that implicitly says that all/most methods use transactions. And I do not want to resolve a factory, since that says that all methods might use transactions. Finally, using a singleton is the worst of solutions.

Before

The problem wasn’t that hard to solve thanks to the flexibility of ASP.NET MVC3. But let’s start by looking at a typical post method:

[HttpPost]
public virtual ActionResult Create(CreateModel model)
{
	if (!ModelState.IsValid)
		return View(model);

	var instruction = new Instruction(CurrentUser);
	Mapper.Map(model, instruction);

	using (var uow = _unitOfWorkFactory.Create())
	{
		_repository.Save(instruction);
		
		uow.SaveChanges();
	}
	
	return RedirectToAction("Details", new {id = instruction.Id});
}

After

What I did was to create a new action filter called TransactionalAttribute. It checks that the model state is valid and that no exceptions have been thrown. It uses the DependencyResolver to find an IUnitOfWork implementation if everything checks out OK. Since this is done in an action filter, the transaction will not be created unless it’s actually required. All you have to do is to register your IUnitOfWork implementation in your IoC container and tag your action:

[HttpPost, Transactional]
public virtual ActionResult Create(CreateModel model)
{
	if (!ModelState.IsValid)
		return View(model);

	var instruction = new Instruction(CurrentUser);
	Mapper.Map(model, instruction);
	_repository.Save(instruction);
	
	return RedirectToAction("Details", new {id = instruction.Id});
}

There is a small catch: you must add an error to the ModelState if you catch exceptions in your action. The transaction will otherwise get committed.

[HttpPost, Transactional]
public virtual ActionResult Create(CreateModel model)
{
	if (!ModelState.IsValid)
		return View(model);

	try
	{
		model.Category = model.Category ?? "Allmänt";

		var instruction = new Instruction(CurrentUser);
		Mapper.Map(model, instruction);
		_repository.Save(instruction);
		
		return RedirectToAction("Details", new {id = instruction.Id});
	}
	catch (Exception err)
	{
		// Adds an error to prevent commit.
		ModelState.AddModelError("", err.Message);
		Logger.Error("Failed to save instruction for app " + CurrentApplication, err);
		return View(model);
	}
}

Implementation

The attribute itself looks like this:

public class TransactionalAttribute : ActionFilterAttribute
{
	private IUnitOfWork _unitOfWork;

	public override void OnActionExecuting(ActionExecutingContext filterContext)
	{
		if (filterContext.Controller.ViewData.ModelState.IsValid && filterContext.HttpContext.Error == null)
			_unitOfWork = DependencyResolver.Current.GetService<IUnitOfWork>();

		base.OnActionExecuting(filterContext);
	}

	public override void OnActionExecuted(ActionExecutedContext filterContext)
	{
		if (filterContext.Controller.ViewData.ModelState.IsValid && filterContext.HttpContext.Error == null && _unitOfWork != null)
			_unitOfWork.SaveChanges();

		base.OnActionExecuted(filterContext);
	}
}

Simple, but effective! :)

Extras

The actual unit of work implementation depends on which kind of data layer you are using. You can use the following code snippet if you are using NHibernate and have successfully registered the ISession in your container:

public class NhibernateUnitOfWork : IUnitOfWork
{
	private readonly ISession _session;
	private ITransaction _transaction;

	public NhibernateUnitOfWork(ISession session)
	{
		_session = session;
		_transaction = session.BeginTransaction();
	}

	public void Dispose()
	{
		if (_transaction == null)
			return;

		if (!_transaction.WasCommitted)
			_transaction.Rollback();

		_transaction.Dispose();
		_transaction = null;
	}

	public void SaveChanges()
	{
		_transaction.Commit();
	}

}

The Unit of work interface is really simple:

public interface IUnitOfWork : IDisposable
{
	void SaveChanges();
}
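The filter resolves IUnitOfWork through DependencyResolver, so the implementation must be registered in whatever container backs the resolver. As a minimal sketch, assuming Griffin.Container sits behind DependencyResolver (any container works, the registration just looks different), a scoped registration could be done in a module:

public class DataModule : IContainerModule
{
	public void Register(IContainerRegistrar registrar)
	{
		registrar.RegisterType<NhibernateUnitOfWork, IUnitOfWork>(Lifetime.Scoped);
	}
}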

Protect your data!

The introduction of OR/M libraries and strongly typed views/view models has made a lot of programmers switch to the dark side. Let’s fight this evil!

What evil am I talking about? Public setters in your domain/business model (or whatever you like to call it). Public setters allow anyone to modify your domain model data at any time. For instance, when you create an item it’s probably the repository and the controller (MVC) / VM (WPF) that modify it. If you have a Windows service which does background processing, it will probably modify it too. If you allow the user to edit the model through a GUI, you have a third place which can modify the model. All those places get two responsibilities, since the domain model itself has no control over its data: they have to take care of their own responsibility and make sure that the domain model is in a consistent state. Imho that’s not a very bright idea. How could anyone other than the domain model itself know which properties should be specified and when?

Just look at this terrible domain model:

public class TodoTask
{
	public string Id { get; set; }
	public string Title { get; set; }
	public string Description { get; set; }

	public DateTime CreatedAt { get; set; }
	public User CreatedBy { get; set; }

	public DateTime UpdatedAt { get; set; }
	public User UpdatedBy { get; set; }

	public bool IsCompleted { get; set; }
	public DateTime CompletedAt { get; set; }
	
	public DateTime AssignedAt { get; set; }
	public User AssignedTo { get; set; }

	public List<UserHistoryEntry> UserHistory { get; set; }
}

public class UserHistoryEntry
{
	public DateTime StartedAt { get; set; }
	public DateTime EndedAt { get; set; }
	public User User { get; set; }
}

aawwww. That’s horrible. I can barely look at it. Give me a minute..

(One hour later)

Ok. So what’s wrong with that model? It’s not responsible for its own data. Any class that uses it can change its content without any kind of validation.

Here is a sample method to illustrate the problem:

public class TodoController : Controller
{
	// Transactional = action attribute by me: commit to db if no exceptions & model is valid
	[HttpPost, Transactional] 
	public ActionResult Edit(TodoModel model)
	{
		var dbEntity = _repository.Get(model.Id);
		
		if (dbEntity.IsCompleted != model.IsCompleted)
		{
			if (!model.IsCompleted)
			{
				// Changed from completed to not completed
				// error?
			}
			
			dbEntity.IsCompleted = true;
			dbEntity.CompletedAt = DateTime.Now;
			
			foreach (var user in dbEntity.UserHistory)
			{
				_emailService.Send(user.User.Email, "Item completed", dbEntity.AssignedTo + " has completed item " + dbEntity.Title);
			}
		}
		
		if (model.AssignedTo != dbEntity.AssignedTo)
		{
			dbEntity.AssignedTo = model.AssignedTo;
			dbEntity.AssignedAt = model.AssignedAt;

			foreach (var user in dbEntity.UserHistory)
			{
				_emailService.Send(user.User.Email, "Item (re)assigned", dbEntity.AssignedTo + " has been assigned to " + model.AssignedTo);
			}
		}
		
		_repository.Save(dbEntity);

		return RedirectToAction("Details", new { id = dbEntity.Id });
	}
	
}

Take a moment and look at the code. We have whipped up a view using the tooling in Visual Studio (a strongly typed view model and the edit template). Since we allow the user to edit most fields, we need to include different checks for different states and act accordingly. (It would be slightly better if that code existed in your service class instead of in the controller/VM.)

Let’s fast forward one year. You now have a problem with the todo items. For some reason the CompletedAt date is not saved when it should be. You therefore add a new check each time a todo item is saved in the repository:

You check that CompletedAt is set when IsCompleted is set; if not, you assign it.

That is a workaround. Those kinds of workarounds are almost always present in applications that expose all model data to everything in the application. It does not matter if you have a TodoService class which should handle everything for the todo items, since all other code can “fix” the model info too. Hence you WILL get those workarounds sooner or later.

Protect your data

Let’s look at the cure. In fact, it’s very simple. Make all setters private:

public class TodoTask
{
	public string Id { get; private set; }
	public string Title { get; private set; }
	public string Description { get; private set; }

	public DateTime CreatedAt { get; private set; }
	public User CreatedBy { get; private set; }

	public DateTime UpdatedAt { get; private set; }
	public User UpdatedBy { get; private set; }

	public bool IsCompleted { get; private set; }
	public DateTime CompletedAt { get; private set; }
	
	public DateTime AssignedAt { get; private set; }
	public User AssignedTo { get; private set; }

	public IEnumerable<UserHistoryEntry> UserHistory { get; private set; }
}

Notice that I’ve changed UserHistory to IEnumerable. That’s so that we don’t break the Law of Demeter (and lose control over the history changes). The todo item data is now protected. But we can’t change anything now. And we can’t build a GUI. The solution is simple. We do not just want to change data. We want to take some action! Let’s add some methods so that we can do that.

public class TodoTask
{
	public TodoTask(User creator, string title, string description)
	{
		// insert argument exceptions here.
		
		CreatedBy = creator;
		CreatedAt = DateTime.Now;

		UpdatedBy = creator;
		UpdatedAt = DateTime.Now;

		Title = title;
		Description = description;
	}
	
	// [... all the properties ..]
		
	public void Assign(User user)
	{
		// insert argument exception here
		
		// _userHistory is the private List<UserHistoryEntry> assumed to back
		// the public IEnumerable<UserHistoryEntry> UserHistory property.
		_userHistory.Add(new UserHistoryEntry(AssignedTo, AssignedAt, DateTime.Now));
		AssignedTo = user;
		AssignedAt = DateTime.Now;
		
		UpdatedAt = DateTime.Now;
		UpdatedBy = Thread.CurrentPrincipal.Identity;
		
		// hooked by the email dispatcher
		// the event architecture is provided by Griffin.Container
		DomainEvent.Dispatch(new ItemAssigned(this));
	}
	
	public void Complete()
	{
		if (IsCompleted)
			throw new InvalidOperationException("Item has already been completed");
			
		IsCompleted = true;
		CompletedAt = DateTime.Now;
		
		UpdatedAt = DateTime.Now;
		UpdatedBy = Thread.CurrentPrincipal.Identity;
			
		DomainEvent.Dispatch(new ItemCompleted(this));
	}
	
	public void Update(string title, string description)
	{
		// insert argument exceptions here
		
		UpdatedAt = DateTime.Now;
		UpdatedBy = Thread.CurrentPrincipal.Identity;
		Title = title;
		Description = description;
		
		DomainEvent.Dispatch(new ItemUpdated(this));
	}
	
	
}
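To show how the calling code changes, here is a hedged sketch of what the controller actions could look like once the model exposes behavior instead of setters (the repository fields and route names are assumptions):

[HttpPost, Transactional]
public ActionResult Complete(int id)
{
	var item = _repository.Get(id);
	item.Complete();
	_repository.Save(item);

	return RedirectToAction("Details", new { id });
}

[HttpPost, Transactional]
public ActionResult Assign(int id, int userId)
{
	var item = _repository.Get(id);
	item.Assign(_userRepository.Get(userId));
	_repository.Save(item);

	return RedirectToAction("Details", new { id });
}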

Constructor

I use constructors to force mandatory data. By doing so I make sure that the item is never in an inconsistent state. It also makes it easy to spot what kind of information each model requires.

Methods

Next I added methods for the actions which can be performed on our todo item. We have now protected the data and do not have to rely on all dependent classes changing the properties correctly. Our todo code is therefore a lot less error prone than before.

Have it as a rule to create a method each time more than one field/property has to be changed to get a consistent state.

Exceptions

Notice that I also validate state in each method. For instance, it should not be possible to set an item as completed if it has already been completed. Those checks are more important than you think.

It can be hard to resist removing checks like that. You probably think “It’s not so bad that an already completed item gets set as completed again, what could possibly happen?”. That does not matter. If you start thinking like that, where do you end? It IS a bug. Better to have a zero tolerance policy: fix all bugs, no matter how small or unimportant they are.

Summary

Since we have full control over our data, it’s quite easy to make sure that it stays that way. Unit tests become more straightforward as it’s clear what each method does and what it changes. I tend to always nag about unit tests: classes which are hard to unit test are often likely candidates for refactoring.

Since we now have methods, it’s also easier to publish events for the actions (pub/sub; you can use Griffin.Container for that). Those events can be used to trigger other actions in the system, which in turn gives us a more loosely coupled application.

Encapsulation is important because it is very hard to find bugs which are created by the wrong combination of properties being set. Those bugs tend to surface later on; it might be a week or a year later. That’s why most bug fixes for them are really workarounds (you can’t find the source of the bug, so you just “correct” the model information when you can).

(Image: a properly layered application. All layers may or may not exist in the same project; I prefer two projects from the start, one for the UI and one for the DL/BL.)

UI design

If you follow me on twitter you have probably read a somewhat cryptic message a couple of weeks ago. It said something like “make all setters private and see what it does to your UI”. The point was that if you make all setters private, it forces you to use methods. It also effectively prevents you from creating CRUD applications (ooohh, I really REALLY hate those).

Your users might think that they want a simple CRUD app. But really, they don’t. My current client is using a web based application to be able to follow ISO2000 (a small part of it is a case management system). The edit case screen for it looks like this:


(yes, the image is from a real production system.)

The point is that approximately 20% of the fields are used when creating the case. 10% or less is used for each state change (the case can have 10 different states). There is research which shows that presenting too much information makes it harder for the user to decide what to do. Which in turn makes the user subconsciously hostile to using the application (my own subjective conclusion based on experience). Have you ever heard “less is more”? It’s very true when it comes to UI design.

If you are creating a native application you should not have a problem with only displaying relevant information. If you are coding web applications, use Ajax/JSON to load partial information when you need it. A great way to design a UI is to stop focusing on the information and focus on the actions instead. Don’t ask the client what kind of information he wants to handle. A great starting question would be “When you start the application, what do you want to be able to do on the start page?”. You won’t get an answer that says “I want to be able to specify title, description, created at and the user who created the item”, but rather “I want to be able to create a todo item”. Don’t be tempted to ask which information the user wants to specify; ask what the minimal requirements are. Then take it from there.

Why am I ranting about this? Because changing to private setters actually helps you to create a better UI and therefore improve the user experience. If you are using ASP.NET MVC3, try to create an edit view using your domain model. How many fields does the wizard add for you? The answer is ZERO. Isn’t that great? =)

What we should do is to create a view which corresponds to each method in the model. Only specify the fields which the method takes as its arguments. Voila, you now have a much more user friendly UI.
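As a small sketch (the class and property names are my own, not from the original post), the view model behind an Assign view only contains what the Assign method needs:

public class AssignModel
{
	[Required]
	public int TodoTaskId { get; set; }

	[Required]
	public int UserId { get; set; }
}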

Let’s look at the original todo item edit page:


(again, it’s a simple model)

We can’t build our UI in the same way anymore, since our methods effectively prevent us from having a lot of edit fields. Hence we have to scrap the old edit page and refactor our UI a bit. The details page will now get some buttons instead.

And when we press for instance the assign button we get:

Isn’t that more straightforward than before?

ORM

How do you handle the ORM then? First of all: the OR/M’s responsibility is persistence only. The information stored in the database doesn’t have to have a 1-1 relation with your domain model. You can for instance have normalized or denormalized the information so that it’s either stored more efficiently or easier to query. The point is that the entity that your OR/M uses isn’t always an exact copy of your model.

I do in 99% of the cases use repository classes even if I’m using an OR/M. That way I can refactor the data source (for instance denormalize it) or refactor the models without affecting dependent code (and I can unit test all code that loads/stores data to/from the data source). If you truly want to create layered applications, you have to make sure that the abstraction between the layers stays intact. It will force you to write more code. There’s no way around that. But the code is more loosely coupled, which means that it’s easier to maintain and extend in the future.

However, NHibernate is quite flexible when it comes to the mapping. It can handle properties with private setters, which means that I can in many cases use the model as the database entity too. Since I use repositories it doesn’t really matter. I might start with a 1-1 mapping and later switch to separate db entities if the code diverges.

Summary

Going back to one of the fundamentals of object oriented programming (encapsulation) doesn’t just make our code less error prone. It also forces us to think again about how we design our applications and UIs. Don’t just be a code monkey using the tools provided. Think about what those tools do to your application. Their end result might not always be a good thing.


When to throw exceptions

I’ve written several posts about exceptions. For instance when you should catch them or how you should design them. In this post I’ll discuss when you should throw exceptions.

Exceptions are thrown when something exceptional has happened. But what does that mean? In fact it’s really simple. Let’s look at an example interface:

interface IUserRepository
{
	void Save(User user);
}

You should throw an exception every time that method fails to save the user into the data source. The failure reason does not really matter. All of these reasons are valid:

  • The user object does not contain all mandatory information
  • The user object is null
  • The data source is offline
  • The data source rejected the save

Why? Because the method is named Save and returns void. That is, the contract of the method indicates that a save should in most cases be successful. Hence, something exceptional has happened if the user is not saved into the data source.
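As a minimal sketch (assuming an NHibernate-style session; this repository is not from the original post), an implementation written in that spirit throws on every failure path instead of signalling with a return value:

public class UserRepository : IUserRepository
{
	private readonly ISession _session;

	public UserRepository(ISession session)
	{
		if (session == null) throw new ArgumentNullException("session");

		_session = session;
	}

	public void Save(User user)
	{
		if (user == null) throw new ArgumentNullException("user");

		// connection or constraint problems make the data source throw,
		// which is exactly what the contract of Save expects.
		_session.SaveOrUpdate(user);
	}
}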

But couldn’t you have written something like this instead:

interface IUserRepository
{
	bool Save(User user);
}

Yes, sure. That method would return false instead of throwing exceptions. But the method contract indicates that the method can fail as often as it can succeed. You really do not mean that, do you?

but.. but… The Remove method usually returns a bool.

interface IUserRepository
{
	bool Remove(User user);
}

Yes. It does. The reason is that it says that it’s OK to call Remove even if the user does not exist. Imho it’s a vague contract: it’s not clear whether the removal is expected to fail or whether the method simply allows you to remove non-existing items. I would have used void as the return value instead.

Myth: Exceptions are expensive

First of all: Exceptions that are not thrown are not expensive.

And if you start to think about it, exceptions are actually a lot cheaper than using return values. Exceptions force you to take action; no action means that your application crashes. Exceptions also include the exact location of the error. Return values only work if you check and handle all of them. I would like to see any reasonably sized application which uses return values where all of them are handled.

The point is that it’s a lot more tedious to find and fix bugs in applications that use return values instead of exceptions, since you have to use detailed logs or reproduce errors to be able to locate and fix them. The amount of time spent locating and correcting errors probably costs a lot more money than buying new hardware to make up for the exception performance penalty.

Summary

Throw exceptions if the method contract is violated.



Griffin.Container – “The drunken IoC container”

(I started writing this blog entry at friday night, but somehow I didn’t manage to finish it.)

My son has just fallen asleep and wifey is working. Which means that I’ve got a Friday night all by myself. So I’ve put some beer and Bacardi Breezers (kinda girly, I know. But I can’t help myself) in a nice row on my desk. Instead of continuing to code on my next entrepreneurial endeavor I’ve decided to do something fun and challenging. And that is to code an inversion of control container in three hours (until the boss gets home). (It took another two hours today to clean up the nonsense in this blog entry.)

It’s my third attempt during the last couple of years. Both of my last attempts were just a bit of fooling around, and this time I’ve decided to try to create a lean killing machine. So I’ve come up with a slightly different approach compared to my last attempts.

The responsibility

In my eyes an IoC container has three responsibilities. Hence I’ve divided the container into three different classes.

Registration

A container has to be able to register the services that should be provided. I hate those XML configurations since it’s hard to discover configuration errors. I also cry when I have to use APIs which heavily use extension methods, since the intellisense for them is messy and hard to understand.

I’ve therefore created three types of registration options:

The attribute

Registration through attributes works quite nicely. Create a class and add the [Component] attribute to it. Repeat for all classes in all of your assemblies.

[Component]
public class MyClass : IService
{
}

Control the lifetime:

[Component(Lifetime = Lifetime.Singleton)]
public class MyClass : IService
{
}

And register everything using:

registrar.RegisterComponents(HostingEnvironment.MapPath("~/bin"), "*.dll");

Done.

The module

You can also control the registration process through modules. Great, since each module/namespace/section can decide for itself what should be registered in the container.

public class UserManagementModule : IContainerModule
{
    public void Register(IContainerRegistrar registrar)
    {
        registrar.Register<UserService>();
    }
}

Then just load all modules:

// can also specify path + file pattern as in the example above.
registrar.RegisterModules(Assembly.GetExecutingAssembly());

The traditional approach

There are also the traditional RegisterXxxxx methods.

public interface IContainerRegistrar
{
	IEnumerable<ComponentRegistration> Registrations { get; }

	void RegisterComponents(Lifetime defaultLifetime, string path, string filePattern);
	void RegisterComponents(Lifetime defaultLifetime, params Assembly[] assemblies);

	void RegisterModules(string path, string filePattern);
	void RegisterModules(params Assembly[] assemblies);

	void RegisterConcrete<TConcrete>(Lifetime lifetime = Lifetime.Scoped) where TConcrete : class;
	void RegisterConcrete(Type concrete, Lifetime lifetime = Lifetime.Scoped);

	void RegisterService<TService>(Func<IServiceLocator, object> factory, Lifetime lifetime = Lifetime.Scoped);
	void RegisterService(Type service, Func<IServiceLocator, object> factory, Lifetime lifetime = Lifetime.Scoped);

	void RegisterType<TConcrete, TService>(Lifetime lifetime = Lifetime.Scoped)
		where TService : class
		where TConcrete : class;
	void RegisterType(Type concrete, Type service, Lifetime lifetime = Lifetime.Scoped);

	void RegisterInstance<TService>(TService instance) where TService : class;
	void RegisterInstance(Type service, object concrete);
}

The build plan

Performance is important. And you can’t get good performance if you do not analyze how each service should be built (before the actual service location). I therefore go through all classes, check their dependencies and create a plan for how they should be built. I then analyze which classes are used by which services and create a mapping between each service and the implementing classes.

The builder supports multiple constructors. It uses the most specific constructor for which it can resolve all dependencies.

/// <summary>
/// Used to build the container.
/// </summary>
public interface IContainerBuilder
{
    /// <summary>
    /// Builds a container using the specified registrations.
    /// </summary>
    /// <param name="registrar">Registrations to use</param>
    /// <returns>A created container.</returns>
    IParentContainer Build(IContainerRegistrar registrar);
}

Service location

The container itself IS implementing the service location pattern. Hence the container interface is really a service location interface and we’ll use that as our base.

public interface IServiceLocator
{
    bool IsRegistered(Type type);
    T Resolve<T>() where T : class;
    object Resolve(Type service);
    IEnumerable<T> ResolveAll<T>() where T : class;
    IEnumerable<object> ResolveAll(Type service);
}

The container itself has been divided into two containers: a parent container, which is the one generated by the builder, and a scoped container.

public interface IParentContainer : IServiceLocator
{
    IChildContainer CreateChildContainer();
}

public interface IChildContainer : IServiceLocator, IDisposable
{
    
}

Scoped services can only be resolved by a scoped container, so you have to use something like this:

using (var scoped = container.CreateChildContainer())
{
    // All scoped services which are resolved here
    // will be disposed automatically.
}

That should make it easier to handle scoping in WCF, MVC3 and similar frameworks.
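As a hedged sketch (this is not an official integration, just the idea applied to ASP.NET): create a child container when a request begins and dispose it when the request ends.

public class MvcApplication : HttpApplication
{
    // built by the ContainerBuilder in Application_Start
    private static IParentContainer _container;

    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        HttpContext.Current.Items["container.scope"] = _container.CreateChildContainer();
    }

    protected void Application_EndRequest(object sender, EventArgs e)
    {
        var scope = HttpContext.Current.Items["container.scope"] as IChildContainer;
        if (scope != null)
            scope.Dispose();
    }
}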

Extension points

The divided architecture allows you to customize the container quite nicely.

Custom registration

  • Inherit ContainerRegistrar and override Add to be able to handle all registrations.
  • Override CreateRegistration to control which interfaces a class can implement
  • Pass a custom IServiceFilter class to the ContainerRegistrar to filter which services the classes can be registered for.

Building

  • Inherit ContainerBuilder and override TryGetConstructor to be able to control which constructor the builder should use.

Service location

  • Inherit Container and ChildContainer and override the GetInstance method to get full control over the returned instances. Perfect approach if you want to generate proxies or use AOP.

Usage

public class IntegrationTests
{
    [Fact]
    public void SimpleRegistrationTransient()
    {
        var registrar = new ContainerRegistrar();
        registrar.RegisterType<MySelf>(Lifetime.Transient);
        var builder = new ContainerBuilder();
        var container = builder.Build(registrar);

        var instance1 = container.Resolve<MySelf>();
        var instance2 = container.Resolve<MySelf>();

        Assert.NotSame(instance1, instance2);
    }


    [Fact]
    public void OneDependencySingleton()
    {
        var registrar = new ContainerRegistrar();
        registrar.RegisterType<MySelf>(Lifetime.Transient);
        registrar.RegisterType<OneDepencency>(Lifetime.Singleton);
        var builder = new ContainerBuilder();
        var container = builder.Build(registrar);

        var instance1 = container.Resolve<OneDepencency>();
        var instance2 = container.Resolve<OneDepencency>();

        Assert.Same(instance1, instance2);
    }

    [Fact]
    public void ResolveAllStartable()
    {
        var registrar = new ContainerRegistrar();
        registrar.RegisterType<Startable1>(Lifetime.Singleton);
        registrar.RegisterType<Startable2>(Lifetime.Singleton);
        var builder = new ContainerBuilder();
        var container = builder.Build(registrar);

        var instances = container.ResolveAll<ISingletonStartable>();

        Assert.Equal(2, instances.Count());
    }

    [Fact]
    public void ResolveFromComponents()
    {
        var registrar = new ContainerRegistrar();
        registrar.RegisterComponents(Lifetime.Singleton, Assembly.GetExecutingAssembly());
        var builder = new ContainerBuilder();
        var container = builder.Build(registrar);

        container.Resolve<OneDepencency>();
    }

    [Fact]
    public void ResolveFromModules()
    {
        var registrar = new ContainerRegistrar();
        registrar.RegisterModules(Assembly.GetExecutingAssembly());
        var builder = new ContainerBuilder();
        var container = builder.Build(registrar);

        container.Resolve<MySelf>();
    }

    [Fact]
    public void ChildContainer()
    {
        var registrar = new ContainerRegistrar();
        registrar.RegisterType<MySelf>();
        var builder = new ContainerBuilder();
        var container = builder.Build(registrar);

        MySelf instance;
        using (var childContainer = container.CreateChildContainer())
        {
            instance = childContainer.Resolve<MySelf>();
        }

        Assert.True(instance.IsDisposed);
    }
}

Performance

I used the benchmark testing code which was created by Daniel Palme.

Name                    Singleton       Transient       Combined        #.ctor Singleton        #.ctor Transient        #.ctor Combined
No                      30              37              45              1                       2000001                         1000000
AutoFac                 582             2635            6225            1                       2000001                         1000000
Dynamo                  65              66              113             1                       2000001                         1000000
Funq                    74              87              231             1                       2000001                         1000000
Hiro                    104             99              109             1                       2000001                         1000000
LightInject             92              90              98              1                       2000001                         1000000
Munq                    91              108             342             1                       2000001                         1000000
Petite                  67              70              210             1                       2000001                         1000000
SimpleInjector          71              81              101             1                       2000003                         1000001
StructureMap            925             1032            3318            1                       2000001                         1000000
Unity                   1346            1694            5599            1                       2000001                         1000000
Windsor                 486             2102            6324            1                       2000001                         1000000
Griffin.Container       158             183             471             1                       2000001                         1000000

Chart:

(I had to remove ninject & spring.net from the chart since they are way too slow)

The performance is OK by me since I’ve not done any optimizations yet.

Summary

I’m quite satisfied with the result and a bit surprised that I managed to create the architecture that I did last night. It was really fun and I think that I’ll continue to improve the container so it becomes even more useful.


SOLID principles with real world examples

The following article aims to explain the five SOLID principles with real world examples. The SOLID principles are five programming principles which are considered to be the foundation of every well designed application. Following the principles will most likely lead to applications which are easy to extend and maintain, since you get small, well defined classes with clear contracts.

All motivator images were created by Derick Bailey.

Let’s get started with the principles.

Single Responsibility Principle

Single responsibility states that every class should have only one reason to change. A typical example is a user management class. When you create a new user you’ll most likely also send a welcome email. That’s two reasons to change: to do something with the account management and to change the emailing procedure. A better way would be to generate some kind of event from the account management class which is subscribed to by a UserEmailService that does the actual email handling.
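A small sketch of that split (class and member names are my own, not from the article): the account class only manages accounts and publishes an event, while the email service only listens and sends mail.

    public class UserCreatedEventArgs : EventArgs
    {
        public UserCreatedEventArgs(string userName)
        {
            UserName = userName;
        }

        public string UserName { get; private set; }
    }

    public class AccountService
    {
        public event EventHandler<UserCreatedEventArgs> UserCreated = delegate { };

        public void CreateUser(string userName)
        {
            // ... persist the new account ...

            UserCreated(this, new UserCreatedEventArgs(userName));
        }
    }

    public class UserEmailService
    {
        public UserEmailService(AccountService accountService)
        {
            accountService.UserCreated += (sender, e) => SendWelcomeMail(e.UserName);
        }

        private void SendWelcomeMail(string userName)
        {
            // ... compose and send the welcome email ...
        }
    }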

The most effective way to break applications is to create GOD classes. That is, classes that keep track of a lot of information and have several responsibilities. One code change will most likely affect other parts of the class and therefore indirectly all other classes that use it. That in turn leads to an even bigger maintenance mess, since no one dares to do any changes other than adding new functionality to it.

Making sure that a class has a single responsibility also makes it easier to see what it does and how you can extend/improve it.

Classes that are hard to unit test are often breaking SRP.

External links

* Wikipedia
* Example & tips on how to follow the principle (earlier post by me)

Open/Closed principle

The Open/Closed principle says that a class should be open for extension but closed for modification. Which means that you can add new features through inheritance but should not change the existing classes (other than for bug fixes).

The reason is that if you modify a class, you’ll likely break the API/Contract of the class which means that the classes that depend on it might fail when you do so. If you instead inherit the class to add new features, the base contract is untouched and it’s unlikely that dependent classes will fail.

Example violation

Here is a real world parser method (from a SO question which I’ve answered):

    public void Parse()
    {
        StringReader reader = new StringReader(scriptTextToProcess);
        StringBuilder scope = new StringBuilder();
        string line = reader.ReadLine();
        while (line != null)
        {
            switch (line[0])
            {
                case '$':
                    // Process the entire "line" as a variable, 
                    // i.e. add it to a collection of KeyValuePair.
                    AddToVariables(line);
                    break;
                case '!':
                    // Depending of what comes after the '!' character, 
                    // process the entire "scope" and/or the command in "line".
                    if (line == "!execute")
                        ExecuteScope(scope);
                    else if (line.StartsWith("!custom_command"))
                        RunCustomCommand(line, scope);
                    else if (line == "!single_line_directive")
                        ProcessDirective(line);
    
                    scope = new StringBuilder();
                    break;
    
                default:
                    // No processing directive, i.e. add the "line" 
                    // to the current scope.
                    scope.Append(line);
                    break;
            }
    
            line = reader.ReadLine();
        }
    }

It works great. But the method has to be changed each time we want to add support for a new directive. It’s therefore not closed for modification.

Solution

Let’s create an interface which is used for each handler (for '$' and '!' in the example above):

    public interface IMyHandler
    {
        void Process(IProcessContext context, string line);
    }

Notice that we include a context object. This is quite important. If we create a new parser called SuperCoolParser in the future, we can let it create and pass a SuperAwesomeContext to all handlers. New handlers which support that context can use it while others stick with the basic implementation.

We comply with Liskov's Substitution Principle and don’t have to change the IMyHandler.Process signature (therefore keeping it closed for modification) when we add new features later on.

The parser itself is implemented as:

    public class Parser
    {
        private Dictionary<char, IMyHandler> _handlers = new Dictionary<char, IMyHandler>();
        private IMyHandler _defaultHandler;
    
        public void Add(char controlCharacter, IMyHandler handler)
        {
            _handlers.Add(controlCharacter, handler);
        }
    
        private void Parse(TextReader reader)
        {
            StringBuilder scope = new StringBuilder();
            IProcessContext context = null; // create your context here.
    
            string line = reader.ReadLine();
            while (line != null)
            {
                IMyHandler handler = null;
                if (!_handlers.TryGetValue(line[0], out handler))
                    handler = _defaultHandler;
    
                handler.Process(context, line);
    
    
                line = reader.ReadLine();
            }
        }
    }

If you go back and look at the '!' handling you’ll see a lot of if statements. That method will likely have to be changed to add support for more features. Hence it also violates the principle. Let’s refactor again.

    public interface ICommandHandler
    {
        void Handle(ICommandContext context, string commandName, string[] arguments);
    }
    
    public class CommandService : IMyHandler
    {
        public void Add(string commandName, ICommandHandler handler) 
        {
        }
    
        public void Handle(IProcessContext context, string line)
        {
           // first word on the line is the command, all other words are arguments.
           // split the string properly
    
           // then find the correct command handler and invoke it.
           // take the result and add it to the `IProcessContext`
        }
    }

External links

* Wikipedia

Liskov's Substitution Principle

Liskov's Substitution Principle states that any method that takes class X as a parameter must be able to work with any subclass of X.

The principle makes sure that every class follows the contract defined by its parent class. If the class Car has a method called Break, it’s vital that all subclasses break when the Break method is invoked. Imagine the surprise if Break() in a Ferrari only works if the switch ChickenMode is activated.

Violation

Let’s use the motivator image as inspiration and define the following classes:

    public interface IDuck
    {
       void Swim();
    }
    public class Duck : IDuck
    {
       public void Swim()
       {
          //do something to swim
       }
    }
    public class ElectricDuck : IDuck
    {
       public void Swim()
       {
          if (!IsTurnedOn)
            return;
    
          //swim logic  
       }
    }

And the calling code:

    void MakeDuckSwim(IDuck duck)
    {
        duck.Swim();
    }

As you can see, there are two examples of ducks. One regular duck and one electric duck.

The electric duck can only swim if it’s turned on. The MakeDuckSwim method will not work if a duck is electric and not turned on.

That breaks LSP since any user of the IDuck interface expects the duck to swim when calling MakeDuckSwim.

This example is incorrect. I mistakenly interpreted LSP as a functional contract, but it’s not. The method will violate LSP if it, for instance, throws an exception that the base class does not. Read more in my StackOverflow answer.

Solution

You can of course solve it by doing something like this (in the method that uses the ducks)

    void MakeDuckSwim(IDuck duck)
    {
        if (duck is ElectricDuck)
            ((ElectricDuck)duck).TurnOn();
        duck.Swim();
    }

But that would break the Open/Closed principle and has to be implemented everywhere the ducks are used (and therefore still generates unstable code).

The proper solution would be to automatically turn on the duck in the Swim method and by doing so make the electric duck behave exactly as defined by the IDuck interface.

    public class ElectricDuck : IDuck
    {
       public void Swim()
       {
          if (!IsTurnedOn)
            TurnOnDuck();
    
          //swim logic  
       }
    }

External links

* Wikipedia

Interface Segregation Principle

ISP states that interfaces that have become “fat” (like god classes) should be split into several interfaces. Large interfaces make it harder to extend smaller parts of the system.

There is nothing that says that there should be a one-to-one mapping between classes and interfaces. It’s in fact much better if you can create several smaller interfaces instead (depends on the class though).

Violation

The MembershipProvider in ASP.NET is a classic example of a violation. MSDN contains a large article (which 4 out of 34 readers have found useful) with long and detailed instructions on how to properly implement the class.

Solution

The provider could have been divided in several parts:

* MembershipProvider – A facade to the below interfaces
* IAccountRepository – Used to fetch/load accounts
* IPasswordValidator – Checks that the password is valid (according to business rules)
* IPasswordStrategy – How to store PW (hash it, use encryption or just store it in plain text)

Now you only have to implement a small part if you need to customize the provider (for instance IAccountRepository if you are using a custom data source). There’s a saying: “favor composition over inheritance”, which the membership provider illustrates well. The original solution used inheritance while mine uses composition.
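To make the composition a bit more concrete, here is a sketch of what the smaller interfaces could look like (the member signatures are assumptions, not an existing API):

    public class Account
    {
        public string UserName { get; set; }
        public string EncodedPassword { get; set; }
    }

    public interface IAccountRepository
    {
        Account GetByUserName(string userName);
        void Save(Account account);
    }

    public interface IPasswordValidator
    {
        // throws if the password does not fulfil the business rules
        void Validate(string password);
    }

    public interface IPasswordStrategy
    {
        string Encode(string clearTextPassword);
        bool Compare(Account account, string clearTextPassword);
    }

The MembershipProvider facade then simply delegates to these interfaces, so customizing for instance the storage only means writing a new IAccountRepository.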

External Links

* Wikipedia

Dependency Inversion Principle

The principle which is easiest to understand. DIP states that you should let the caller create the dependencies instead of letting the class itself create them. Hence you invert the control over the dependencies (from letting the class control them to letting the caller control them).

Before

    public class Volvo
    {
        B20 _engine;

        public Volvo()
        {
           _engine = new B20();
        }
    }

After

    public class Volvo
    {
        IEngine _engine;

        public Volvo(IEngine engine)
        {
           if (engine == null) throw new ArgumentNullException("engine");
           _engine = engine;
        }
    }

Which makes it a lot more fun since we now can do the following:

    var myVolvo = new Volvo(new BigBadV12());

(Nice real world example, huh? ;))

Update

I messed up a bit. Dependency Inversion states that you should depend upon abstractions and that lower layers should not be aware of higher layers. I’ll get back to that. Dependency Injection is when you inject dependencies into the constructors instead of creating them in the class. Inversion of Control is when the container controls your objects and their lifetime.

A. HIGH LEVEL MODULES SHOULD NOT DEPEND UPON LOW LEVEL MODULES. BOTH SHOULD DEPEND UPON ABSTRACTIONS.
B. ABSTRACTIONS SHOULD NOT DEPEND UPON DETAILS. DETAILS SHOULD DEPEND UPON ABSTRACTIONS

The original principle targets modules, while I like to apply it at class level too. It makes the principle easier to apply (so the text below is how I apply the principle).

Depending on abstractions simply means that you should depend on as generic a class/interface as possible, for instance IUserRepository instead of DbUserRepository, or HttpRequestBase instead of HttpRequest. The purpose is that your code should be as flexible as possible. The more abstract the dependencies are, the easier it is to refactor them or create new implementations.

Depending on abstractions also makes your code less likely to change if the dependency changes.

External links

* Wikipedia
* Introduction

Summary

Feel free to leave a comment if something is unclear. It will not just help you, but all future readers.