A Liskov Substitution Principle violation in the real world

I was creating an integration between my own users table (or document, since I use RavenDB) and the MembershipProvider in ASP.NET. I’m using the default SQL provider.

When doing so I noticed that my code kept failing in the register action of the account controller (ASP.NET MVC3), with the error message “An unknown error occurred. Please verify your entry and try again. If the problem persists, please contact your system administrator“.

var user = _userRepository.Create(model.UserName, model.Email, model.DisplayName);
Membership.CreateUser(model.UserName, model.Password, model.Email, null, null, true, user.Id, out createStatus);

user.Id is a string. I took a look at the createStatus that is returned from CreateUser and it said InvalidProviderUserKey. MSDN did not help me discover why the error was returned (there is no explanation for the enum value, nor anything in the CreateUser documentation).

So I started to google for implementations of the SqlMembershipProvider. The first one I found was the one in Mono. It contains the following logic:

if (providerUserKey != null && ! (providerUserKey is Guid)) {
    status = MembershipCreateStatus.InvalidProviderUserKey;
    return null;
}

“Why the (s)hell did they do that?” was my first thought. A clear violation of LSP, since the contract says object but all they support is a simple Guid. Come on. At least support string.

So I started to search for the MS implementation and found the sample provider in MSDN: http://msdn.microsoft.com/en-us/library/6tc47t75.aspx

So the design decision was Microsoft’s own. Next time, please document “features” like that.
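
For reference, the workaround is simple once you know about the restriction: pass a Guid (or null) as the provider user key and keep the string id on your own side. A sketch of the adjusted call:

// Let the membership provider generate its own Guid key; the RavenDB string id
// stays in my own user document and is never passed to CreateUser.
var user = _userRepository.Create(model.UserName, model.Email, model.DisplayName);
Membership.CreateUser(model.UserName, model.Password, model.Email,
                      null, null, true, Guid.NewGuid(), out createStatus);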


Griffin.Networking – A somewhat performant networking library for .NET

Disclaimer: The current framework release is a beta. It should be reasonably stable, but don’t blame me if it blows up your computer.

Introduction

Griffin.Networking is a networking library written in C# whose purpose is to:

a) abstract away the repetitive tasks which you have to do with vanilla .NET
b) create a more structured way of processing the inbound and outbound data

Those two goals should lower the time it takes to develop networking applications and also improve the performance thanks to a (hopefully) well-designed networking layer.

The framework also has Inversion of Control support built in from the start (be careful, slow containers hurt performance a lot). The IoC support is provided by a service location interface (which should not be exposed outside the framework).

The goal of this article is to describe the framework and show you how to develop an application with it. You will have a working JSON RPC implementation (server side) when done. The client side can be created quite easily afterwards: simply create a RequestEncoder and a ResponseDecoder for the framework; everything else can be reused from the server implementation.

Background

I’ve built several networking applications in the past. Everything from small client applications in C# to performant socket servers in C++ utilizing IO Completion Ports.

They have worked well, but I always seemed to repeat myself when implementing my applications. The biggest problem is that it’s hard to get an extendable architecture where you can inject handlers into the protocol handling. My C# WebServer (Google “C# webserver” and click on the first search result) illustrates this well. It’s not easy to follow the communication flow.

I therefore decided to try to create a networking library which is easy to use and extend. During my research I stumbled upon Netty for Java, which my library is heavily inspired by.

Architecture

The purpose of this section is to give you a brief overview over the architecture and the terms which will be used throughout the article. Some things in this section may not make sense until you have read the entire article.

Channel

The channel is the IO layer. In most cases it’s a socket implementation, but could be anything used for communication. The default socket implementation uses the classical Begin/End type of methods. They will probably be replaced by the new Async methods later on.

There are two types of channels: server channels, whose responsibility is to accept new connections and build the correct client channel (and its pipeline), and client channels, whose responsibility is to send information to and receive information from the remote peer.

    public interface IChannel
    {
        void HandleDownstream(IPipelineMessage message);
    }

As you can see, the channel interface is quite small. The reason for this is that the entire framework is asynchronous; all communication is done with messages. The contract of a channel says that it should only be able to receive and process messages. A message (read more below) can for instance be Connect, SendBuffer or Close. All channel implementations take a pipeline (see below) in the constructor and use it to send messages to your application.

Pipeline

The pipeline is the most central part of the library. All the action happens in the pipeline. It’s in the pipeline that you authorize users, transform the incoming byte[] array into something more usable like an HttpRequest, and so on.

The pipeline has two directions (compare with a road with two lanes). The lane from the channel to the application is called upstream, since the message travels up from the channel to your application. The other direction is called downstream since the message travels down to the channel.

A pipeline can contain an arbitrary number of handlers, and each direction has its own set of handlers. An HTTP streaming server might only contain the HttpHeaderDecoder in the upstream and an HttpFileStreamer in the downstream to gain performance, while a complete HTTP server would include session management, authentication, logging, error handling etc. as upstream handlers.

public interface IPipeline
{
	/// <summary>
	/// Send something from the channel to all handlers.
	/// </summary>
	/// <param name="message">Message to send to the client</param>
	void SendUpstream(IPipelineMessage message);

	/// <summary>
	/// Set the downstream endpoint
	/// </summary>
	/// <param name="channel">channel which will handle all downstream messages</param>
	void SetChannel(IChannel channel);

	/// <summary>
	/// Send a message from the client and downwards.
	/// </summary>
	/// <param name="message">Message to send to the channel</param>
	void SendDownstream(IPipelineMessage message);
}

The architecture allows you to have full control over how the incoming and outgoing data is processed before it arrives in your application or in the channel.

Messages

As mentioned in the previous section, the pipeline is used to send messages to/from your application. These messages are small classes which contain the information to process. A message can be compared to the EventArgs classes in the .NET event mechanism: POCO classes which implement the IPipelineMessage interface.

Messages that request an action should be named as verbs (Send), while event messages should be named in the past tense (Received).

The general guideline is that each message may only contain one type of information. You may not have a message called Received with an object property which is a byte[] in the beginning and a SuperDeluxeObject in the end. Instead, create a new message named ReceivedSuperDeluxe which contains the SuperDeluxeObject. That makes the processing cleaner and easier to follow.

Example message:

public class Connect : IPipelineMessage
{
    private readonly EndPoint _remoteEndPoint;

    public Connect(EndPoint remoteEndPoint)
    {
        if (remoteEndPoint == null)
            throw new ArgumentNullException("remoteEndPoint");

        _remoteEndPoint = remoteEndPoint;
    }

    public EndPoint RemoteEndPoint
    {
        get { return _remoteEndPoint; }
    }
} 

Pipeline handlers

Pipeline handlers are used to process the messages which are sent through the pipeline. They can either be singletons (shared among channels) or be created per channel. Handlers that are constructed together with the pipeline can store state information since they are used by one channel only.

Example upstream handler which traces the received information:

public class BufferTracer : IUpstreamHandler
{
    private readonly ILogger _logger = LogManager.GetLogger<BufferTracer>();

    public void HandleUpstream(IPipelineHandlerContext context, IPipelineMessage message)
    {
        var msg = message as Received;
        if (msg != null)
        {
            var str = Encoding.UTF8.GetString(msg.BufferSlice.Buffer, msg.BufferSlice.Position, msg.BufferSlice.RemainingLength);
            _logger.Trace(str);
        }

        context.SendUpstream(message);
    }
}

Notice how it sends all messages to the next handler using context.SendUpstream(message). This is quite important. Each handler gets to decide whether the message should be propagated up the call stack or not. It’s also how messages are transformed into something more usable. Let’s look at the HTTP HeaderDecoder handler.

public class HeaderDecoder : IUpstreamHandler
{
    private readonly IHttpParser _parser;
    private int _bodyBytesLeft = 0;

    public HeaderDecoder(IHttpParser parser)
    {
        if (parser == null) throw new ArgumentNullException("parser");
        _parser = parser;
    }

    public void HandleUpstream(IPipelineHandlerContext context, IPipelineMessage message)
    {
        if (message is Closed)
        {
            _bodyBytesLeft = 0;
            _parser.Reset();
        }
        else if (message is Received)
        {
            var msg = (Received) message;

            // complete the body
            if (_bodyBytesLeft > 0)
            {
                _bodyBytesLeft -= msg.BufferSlice.Count;
                context.SendUpstream(message);
                return;
            }

            var httpMsg = _parser.Parse(msg.BufferSlice);
            if (httpMsg != null)
            {
                var receivedHttpMsg = new ReceivedHttpRequest((IRequest) httpMsg);
                _bodyBytesLeft = receivedHttpMsg.HttpRequest.ContentLength;
                _parser.Reset();

                // send up the message to let someone else handle the body
                context.SendUpstream(receivedHttpMsg);
                msg.BytesHandled = msg.BufferSlice.Count;
                context.SendUpstream(msg);
            }

            return;
        }

        context.SendUpstream(message);
    }
} 

Two things are important here:

It follows the Single Responsibility Principle

It doesn’t actually parse the HTTP message but uses an external parser for that. It’s easy to follow what the handler does since it does not violate Single Responsibility Principle, and we can at any time switch parser if we find a more performant one.

It transforms the Received message into a ReceivedHttpRequest

All messages should be considered to be immutable. Don’t change their contents unless you have a really good reason to. Don’t propagate the original message upstream; create a new message instead.

Switching sides

A pipeline handler can at any time switch from the downstream to the upstream (or vice versa). Switching sides will always invoke the first handler on the other side. This allows us to streamline the process and avoid confusion.

public class AuthenticationHandler : IUpstreamHandler
{
	private readonly IAuthenticator _authenticator;
	private readonly IPrincipalFactory _principalFactory;

	public AuthenticationHandler(IAuthenticator authenticator, IPrincipalFactory principalFactory)
	{
		_authenticator = authenticator;
		_principalFactory = principalFactory;
	}

	public void HandleUpstream(IPipelineHandlerContext context, IPipelineMessage message)
	{
		var msg = message as ReceivedHttpRequest;
		if (msg == null)
		{
			context.SendUpstream(message);
			return;
		}

		var authHeader = msg.HttpRequest.Headers["Authorization"];
		if (authHeader == null)
		{
			context.SendUpstream(message);
			return;
		}

		var user = _authenticator.Authenticate(msg.HttpRequest);
		if (user == null)
		{
			//Not authenticated, send error downstream and abort handling
			var response = msg.HttpRequest.CreateResponse(HttpStatusCode.Unauthorized,
			                                              "Invalid username or password.");
			context.SendDownstream(new SendHttpResponse(msg.HttpRequest, response));
		}
		else
		{
			var principal =
				_principalFactory.Create(new PrincipalFactoryContext {Request = msg.HttpRequest, User = user});
			Thread.CurrentPrincipal = principal;
			context.SendUpstream(message); // the user is authenticated; let the request continue
		}
	}
}

Pipeline factories

A pipeline (and all of its handlers) needs to be constructed each time a new channel is created. There are two built-in factories in the framework.

One uses an interface called IServiceLocator, which allows you to add support for your favorite IoC container. The other uses delegates to create stateful handlers:

var factory = new DelegatePipelineFactory();
factory.AddDownstreamHandler(() => new ResponseEncoder());

factory.AddUpstreamHandler(() => new HeaderDecoder(new HttpParser()));
factory.AddUpstreamHandler(new HttpErrorHandler(new SimpleErrorFormatter())); //singleton
factory.AddUpstreamHandler(() => new BodyDecoder(new CompositeBodyDecoder(), 65535, 6000000));
factory.AddUpstreamHandler(() => new FileHandler());
factory.AddUpstreamHandler(() => new MessageHandler());
factory.AddUpstreamHandler(new PipelineFailureHandler()); //singleton

Buffers

A fundamental part of a performant networking library is how the data is handled. All larger allocations hurt performance. We don’t want to create a new byte[65535] each time we read or send a new packet. It takes time to do the allocation, the garbage collector has to work more and the memory ends up fragmented.

The framework solves this by using buffer pools and a class called BufferSlice. We can allocate a buffer which is 5MB large and slice it into smaller pieces which we use in the processing. We can either make the buffer pool a singleton or let each handler allocate its own buffer pool (it’s still just five allocations instead of 5000 if you have five handlers).

The BufferSlice class returns its buffer to the pool when it’s disposed. It’s therefore important that all messages that use the BufferSlice class implement IDisposable, since the channel will dispose all messages when it’s done with them.
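
As an illustration (this is not a class from the framework, just the pattern), a message carrying a BufferSlice could look like this:

public class ReceivedBlob : IPipelineMessage, IDisposable
{
    private readonly BufferSlice _bufferSlice;

    public ReceivedBlob(BufferSlice bufferSlice)
    {
        if (bufferSlice == null)
            throw new ArgumentNullException("bufferSlice");

        _bufferSlice = bufferSlice;
    }

    public BufferSlice BufferSlice
    {
        get { return _bufferSlice; }
    }

    public void Dispose()
    {
        // returns the slice (and thereby its byte[] buffer) to the pool
        _bufferSlice.Dispose();
    }
}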

Performance

The framework is still quite new (about one month old =)). The performance is not at its peak yet.

However, I’ve used Apache’s ab tool to throw 5000 requests at the HTTP listener. The framework handled about 280 HTTP requests per second (localhost) which I consider to be OK this early in the project. The memory consumption was about 80Mb (working set). (Note that the numbers doesn’t really say anything.) Feel free to help improve the performance or do your own benchmarks. I would like to get a sample application which I can use for performance tuning (and compare the performance with other frameworks).

Building a JSON RPC server

It’s time to start building a JSON RPC server. Create a new console application name something like JsonRpcServer. Start the nuget package console and run install-packate griffin.networking to install the framework.

The specification for JSON RPC can be found at the official website. This article will not help you understand it, but only show how you can implement it. The specification does not say anything about how the messages are transferred, so we’ll create a simple envelope which will be used to wrap the messages. The envelope is a simple binary header with a version (byte) and a length (int) field.
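
Concretely, each message on the wire will look like this:

// The envelope used throughout this article:
//
//   offset 0: version (1 byte; the sample encoder always writes 1)
//   offset 1: length of the JSON body (Int32, 4 bytes, as written by BitConverter)
//   offset 5: the JSON message itself, encoded as UTF8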

Decoding / Encoding

The first thing we need to do is to process the incoming bytes. We have to decode them into something that we can work with. As mentioned we’ll use a simple envelope. Something like:

public class SimpleHeader
{
	public int Length { get; set; }
	public byte Version { get; set; }
}

But to be able to use that class we need to decode the incoming bytes in some way. So let’s create our first pipeline handler which we’ll use for just that:

public class HeaderDecoder : IUpstreamHandler
{
	public void HandleUpstream(IPipelineHandlerContext context, IPipelineMessage message)
	{
		var msg = message as Received;
		if (msg == null)
		{
			context.SendUpstream(message);
			return;
		}

		// byte + int
		if (msg.BufferSlice.RemainingLength < 5)
		{
			return;
		}

		var header = new SimpleHeader
						 {
							 Version = msg.BufferSlice.Buffer[msg.BufferSlice.Position++],
							 Length = BitConverter.ToInt32(msg.BufferSlice.Buffer, msg.BufferSlice.Position)
						 };
		msg.BufferSlice.Position += 4;
		context.SendUpstream(new ReceivedHeader(header));

		if (msg.BufferSlice.RemainingLength > 0)
			context.SendUpstream(msg);
	}
}

Pretty straightforward. We don’t process anything until we’ve got at least five bytes (the channel will continue to fill the buffer at the end until we handle something). Then we just decode the header, send a ReceivedHeader message and pass on the remaining bytes. Notice that I put the version byte first. By doing so we can change the header as much as we like in future versions without breaking everything.

The header doesn’t say anything more than the size of the actual JSON message. So we need something to process the JSON to. Let’s create another upstream handler for that (and thefore complying with the Single Responsibility Princinple). At will be called… BodyDecoder ;) (I’ve cheated and created the Request/Response/Error objects which the JSON RPC specification describes.)

public class BodyDecoder : IUpstreamHandler
{
	private static readonly BufferPool _bufferPool = new BufferPool(65535, 50, 50);
	private readonly BufferPoolStream _stream;
	private SimpleHeader _header;

	public BodyDecoder()
	{
		var slice = _bufferPool.PopSlice();
		_stream = new BufferPoolStream(_bufferPool, slice);
	}

	public void HandleUpstream(IPipelineHandlerContext context, IPipelineMessage message)
	{
		var headerMsg = message as ReceivedHeader;
		if (headerMsg != null)
		{
			_header = headerMsg.Header;
			if (_header.Length > 65535)
			{
				var error = new ErrorResponse("-9999", new RpcError
														   {
															   Code = RpcErrorCode.InvalidRequest,
															   Message =
																   "Support requests which is at most 655355 bytes.",
														   });
				context.SendDownstream(new SendResponse(error));
			}

			return;
		}

		var received = message as Received;
		if (received != null)
		{
			// only consume the bytes that belong to the current message body
			var count = Math.Min(received.BufferSlice.RemainingLength, _header.Length - (int) _stream.Length);
			_stream.Write(received.BufferSlice.Buffer, received.BufferSlice.Position, count);
			received.BufferSlice.Position += count;

			if (_stream.Length == _header.Length)
			{
				_stream.Position = 0;
				var request = DeserializeRequest(_stream);
				context.SendUpstream(new ReceivedRequest(request));
			}

			return;
		}

		context.SendUpstream(message);
	}

	protected virtual Request DeserializeRequest(BufferPoolStream body)
	{
		var reader = new StreamReader(body);
		var json = reader.ReadToEnd();
		return JsonConvert.DeserializeObject<Request>(json);
	}
}

Here we are using the BufferPool instead of creating a new buffer each time. Hence a quite large performance gain and a lot less memory fragmentation if the server runs for a while. Also notice that the framework has a BufferPoolStream which uses the BufferPool to get byte[] buffers. Future versions of the stream will most likely be able to use several buffers behind the scenes (and therefore handle larger amounts of data without creating overly large buffers).

Before we continue with the actual application, let’s add the only downstream handler: the response encoder.

public class ResponseEncoder : IDownstreamHandler
{
	private static readonly BufferPool _bufferPool = new BufferPool(65535, 50, 100);

	public void HandleDownstream(IPipelineHandlerContext context, IPipelineMessage message)
	{
		var msg =  message as SendResponse;
		if (msg == null)
		{
			context.SendDownstream(message);
			return;
		}

		var result = JsonConvert.SerializeObject(msg.Response, Formatting.None);

		// send header (use the UTF8 byte count; it can differ from the string length)
		var bodyLength = Encoding.UTF8.GetByteCount(result);
		var header = new byte[5];
		header[0] = 1;
		var lengthBuffer = BitConverter.GetBytes(bodyLength);
		Buffer.BlockCopy(lengthBuffer, 0, header, 1, lengthBuffer.Length);
		context.SendDownstream(new SendBuffer(header, 0, 5));

		// send JSON
		var slice = _bufferPool.PopSlice();
		Encoding.UTF8.GetBytes(result, 0, result.Length, slice.Buffer, slice.StartOffset);
		slice.Position = slice.StartOffset;
		slice.Count = bodyLength;
		context.SendDownstream(new SendSlice(slice));
	}
}

Now we’ve only got one thing left to do in the pipeline, and that’s to handle the requests. Let’s start by creating a very simple handler:

class MyApplication : IUpstreamHandler
{
	public void HandleUpstream(IPipelineHandlerContext context, IPipelineMessage message)
	{
		var msg = message as ReceivedRequest;
		if (msg == null)
			return;


		var parray = msg.Request.Parameters as object[];
		if (parray == null)
			return; // muhahaha, violating the API specification

		object result;
		switch (msg.Request.Method)
		{
			case "add":
				result = int.Parse(parray[0].ToString()) + int.Parse(parray[0].ToString());
				break;
			case "substract":
				result = int.Parse(parray[0].ToString()) + int.Parse(parray[0].ToString());
				break;
			default:
				result = "Nothing useful.";
				break;
		}

		var response = new Response(msg.Request.Id, result);
		context.SendDownstream(new SendResponse(response));
	}
}

How do we run the application then? We need to create a server channel and define the client pipeline. I usually do it in a class called XxxxListener to follow the .NET standard. So let’s create a JsonRpcListener.

public class JsonRpcListener : IUpstreamHandler, IDownstreamHandler
{
	private TcpServerChannel _serverChannel;
	private Pipeline _pipeline;


	public JsonRpcListener(IPipelineFactory clientFactory)
	{
		_pipeline = new Pipeline();
		_pipeline.AddDownstreamHandler(this);
		_pipeline.AddUpstreamHandler(this);
		_serverChannel = new TcpServerChannel(_pipeline, clientFactory, 2000);

	}

	public void Start(IPEndPoint endPoint)
	{
		_pipeline.SendDownstream(new BindSocket(endPoint));
	}

	public void Stop()
	{
		_pipeline.SendDownstream(new Close());
	}

	public void HandleUpstream(IPipelineHandlerContext context, IPipelineMessage message)
	{
		var msg = message as PipelineFailure;
		if (msg != null)
			throw new TargetInvocationException("Pipeline failed", msg.Exception);
	}

	public void HandleDownstream(IPipelineHandlerContext context, IPipelineMessage message)
	{
		context.SendDownstream(message);
	}
}

So now we can define the client pipeline in Program.cs and inject it into the JsonRpcListener:

class Program
{
	static void Main(string[] args)
	{
		LogManager.Assign(new SimpleLogManager<ConsoleLogger>());

		var factory = new DelegatePipelineFactory();
		factory.AddUpstreamHandler(() => new HeaderDecoder());
		factory.AddUpstreamHandler(() => new BodyDecoder());
		factory.AddUpstreamHandler(new MyApplication());
		factory.AddDownstreamHandler(new ResponseEncoder());

		JsonRpcListener listener = new JsonRpcListener(factory);
		listener.Start(new IPEndPoint(IPAddress.Any, 3322));

		Console.ReadLine();
	}
}

The first two upstream handlers are stateful, so we need to create them for every channel that is created. That’s why we use a delegate. The last two are not stateful and can therefore be singletons.

That’s it. You now got a working JSON RPC server. Sure. It’s pretty basic, but the actual remoting layer doesn’t have much to do with the networking layer. I did however take some time to create a proof of concept RPC system. Let’s define our RPC service first:

public class MathModule
{
	[OperationContract]
	public int Sum(int x, int y)
	{
		return x + y;
	}
}

Then we need to redefine the client pipeline:

var invoker = new RpcServiceInvoker(new DotNetValueConverter(), new SimpleServiceLocator());
invoker.Map<MathModule>();

factory.AddUpstreamHandler(() => new HeaderDecoder());
factory.AddUpstreamHandler(() => new BodyDecoder());
factory.AddUpstreamHandler(new RequestHandler(invoker));
factory.AddDownstreamHandler(new ResponseEncoder());

That’s it. From here we could go and include the Http protocol implementation and switch out our simple header against the HeaderDecoder in the HTTP implementation and therefore get an implementation which works over HTTP instead of our basic binary header. We have to do a few minor changes to achieve that, keeping most of the Json RPC implementation intact.

Summary

I hope that I’ve managed to demonstrate how to develop networking applications with Griffin.Networking and show the power that it gives you compared to vanilla .NET socket servers.

The code is available as a NuGet package, griffin.networking, and the HTTP implementation is available as griffin.networking.http. The JSON RPC implementation is still just a concept and therefore not released yet. Feel free to contribute to complete it.

All code is also available at github.


Griffin.MvcContrib – The plugin system

Introduction

Griffin.MvcContrib is a contribution project for ASP.NET MVC3 which contains different features like extendable HTML helpers (you can modify the HTML that the existing helpers generate), easy localization support and quite fresh support for plugin development. The purpose of this article is to provide step-by-step instructions on how you can develop plugins using Griffin.MvcContrib.

Using the code

The plugin support is based on the area support in MVC3. If you haven’t used areas before, I suggest that you start by reading the following article.

Most basic approach

This first sample is not really a plugin system but only shows how you can move your code to class libraries. The example will only contain one class library to avoid confusion.

Start by creating a new ASP.NET MVC3 project (of any kind) and a class library project. For convenience we would like to have support for Razor and all the MVC3 wizards in our class library, since that helps us add some basic files which are required to get everything working. To get that support we need to modify the project file for the class library. Here is how:

1. Right-click on the class library project and choose “Unload Project”.

2. Right-click on the project file and select Edit project file

3. Add the following XML element on a new line below <ProjectGuid>

<ProjectTypeGuids>{E53F8FEA-EAE0-44A6-8774-FFD645390401};{fae04ec0-301f-11d3-bf4b-00c04f79efbc}</ProjectTypeGuids>  

4. Save and close the project file.

5. Right-click and reload the project.

6. Add references to “System.Web” and “System.Web.Mvc” (in the project settings).

You have now told Visual Studio that it should activate the MVC tooling and Razor views. We can now add an area by just right-clicking on the project and selecting “Add Area”.

Do so and create an area with a name of your choosing. Create a controller (with an Index action) and a view for the index action. You’ve now got your first external DLL for the MVC project.

Add a reference to the class library from the MVC3 project.

Congratulations. You’ve now got your first “plugin”-based solution. Hit F5 to run the project. I named my area “Ohh” and my controller “My”, so I surf to “http://theHostNameAndPort/ohh/my” to visit my plugin controller.

Uh oh. We can visit the page alright, but the view cannot be found. That can be solved thanks to Griffin.MvcContrib. Let’s install the package into the MVC3 project.

Then open up global.asax and create a new method which maps the views:

protected void Application_Start()
{ 
    AreaRegistration.RegisterAllAreas();
    RegisterGlobalFilters(GlobalFilters.Filters);
    RegisterRoutes(RouteTable.Routes);
    RegisterViews();
}
protected void RegisterViews()
{
    var embeddedProvider = new EmbeddedViewFileProvider(HostingEnvironment.MapPath("~/"), new ExternalViewFixer());
    embeddedProvider.Add(new NamespaceMapping(typeof(Lib.Areas.Some.Controllers.MyController).Assembly, "BasicPlugins.Lib"));
    GriffinVirtualPathProvider.Current.Add(embeddedProvider);
    HostingEnvironment.RegisterVirtualPathProvider(GriffinVirtualPathProvider.Current);
} 

The first line creates a new file provider which loads files from embedded resources. We tell it which path it should try to provide files for. The next line simply adds our class library assembly and tells the provider which root namespace it has. To make this work, we also have to change our view to an embedded resource (set the view’s Build Action to Embedded Resource in its file properties).

Everything should be set now. Hit F5 and visit the area controller again. You should see the view now.

You can also try setting a breakpoint in the class library; debugging should work just like it used to. There is still one “problem”: we can’t modify the views at runtime, we need to recompile the project each time we adjust a view. Fortunately for us, Griffin.MvcContrib can solve that too. Open the RegisterViews method in global.asax again and let’s add a disk file provider. It should be added before the embedded provider since the first file found will be used.

var diskLocator = new DiskFileLocator();
diskLocator.Add("~/", Path.GetFullPath(Server.MapPath("~/") + @"..\BasicPlugins.Lib"));
var viewProvider = new ViewFileProvider(diskLocator);
GriffinVirtualPathProvider.Current.Add(viewProvider);  

You can now edit the views at runtime.

Building a plugin system

The last part showed you how you can put Controllers, Models and Views in class libraries. It’s quite useful as long as you don’t want to develop features in a more loosely coupled and dynamic way. This section will suggest a structure which you can use when building plugins.

The following part expects you to have used inversion of control containers in the past. The container is used to provide the application extension points to all plugins. I prefer using a container which has a module system so that each plugin can take care of all its registrations itself. I’m using Autofac and its Module feature in this article.

Let’s place all code which is shared between the plugins and the MVC3 project in a seperate class library. You could also follow follow the Separated interface pattern and only define all interfaces in that project. Thus removing all direct dependencies between the plugins and the main application.

The last thing to remember is that the default route configuration doesn’t work very well if you have controllers with the same name in different areas. To overcome that you have to manually change all route mappings to include the namespace. Also remove the “_default” from the route name.

Modified registration:

public override void RegisterArea(AreaRegistrationContext context)
{
	context.MapRoute(
		"Messaging_default",
		"Messaging/{controller}/{action}/{id}",
		new { action = "Index", id = UrlParameter.Optional },
		new[] { GetType().Namespace + ".Controllers" }
	);
}

Create a new class library named something like “YourApp.PluginBase” and add our basic extension points:

public interface IRouteRegistrar
{
	void Register(RouteCollection routes);
}

Used to register any custom routes.

public interface IMenuRegistrar
{
	void Register(IMenuWithChildren mainMenu);
}

Allows the plugins to register themselves in the main application menu.

We’ll just use those two extensions in this example. Feel free to add as many extensions as you like in your own project ;)

The structure

We’ll need to have some structure for the plugins so that the can easily be managed, both during development and in production. All plugins will therefore be placed in a sub directory called “Plugins”. Something like:

ProjectName\Plugins
ProjectName\Plugins\PluginName
ProjectName\Plugins\PluginName\Plugin.PluginName
ProjectName\Plugins\PluginName\Plugin.PluginName.Tests
ProjectName\ProjectName.Mvc3

Add a new solution folder by right-clicking on the solution in Solution Explorer. Do note that solution folders don’t exist on disk, so you’ll have to manually append the folder name to the location text box each time you add a new project to it.

Right-click on the Plugins folder in Solution Explorer and add a new project named “PluginName”. Don’t forget to append Plugins\PluginName to the location text box.

Since this time we don’t want any references to the plugins from the main application, we have to manually copy them to the main application’s plugin folder. We use a post-build event for that. Don’t forget to copy all dependencies that your project has, since nothing handles that for you (the price of having no direct references from the MVC3 project to the plugins).


Menu items

Keep in mind that you cannot unload plugins and that they are available to all users. The easiest way to get around that is to display menu items only if the user has a certain role, which means that you should create one (or more) role(s) per plugin (if the plugin should not be available to all users). Griffin.MvcContrib contains a basic menu implementation which allows you to control whether menu items are visible or not.

Sample code:

var item = new RouteMenuItem("mnuMyPluginIndex", "List messages", new { 
        controller = "Home", 
        action = "Index", 
        area = "Messages"});
item.Role = "MessageViewer";
menuRegistrar.Add(item);

You can later use menuItem.IsVisible when you generate the menu to determine if the item should be included or not.
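
A minimal sketch of such a loop (assuming that MainMenu.Current can be enumerated and that IsVisible performs the role check):

foreach (var menuItem in MainMenu.Current)
{
	if (!menuItem.IsVisible)
		continue; // the current user lacks the role required by the plugin

	// render the menu item here
}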

Hello container

For this exercise we’ll use Autofac as the container. It contains a nifty module system which aids us in keeping the projects loosely coupled. Each plugin needs to create a class which inherits from Autofac.Module and use that class to register all services which the plugin provides. Thanks to that, we just have to scan all plugin assemblies for container modules and register them in our container.

public class ContainerModule : Module
{
	protected override void Load(ContainerBuilder builder)
	{
		builder.RegisterType<MenuRegistrar>().AsImplementedInterfaces().SingleInstance();
		builder.RegisterType<HelloService>().AsImplementedInterfaces().InstancePerLifetimeScope();
		base.Load(builder);
	}
}

The plugins are loaded from the main application using the following snippet:

var moduleType = typeof (IModule);
var modules = plugin.GetTypes().Where(moduleType.IsAssignableFrom);
foreach (var module in modules)
{
	var mod = (IModule) Activator.CreateInstance(module);
	builder.RegisterModule(mod);
}

Views during development

Since we want to be able to modify views at runtime during development we have to tell ASP.NET MVC3 where it can find our plugin views. We do this by using GriffinVirtualPathProvider and a custom file locator which looks like this:

public class PluginFileLocator : IViewFileLocator
{
    private readonly string _basePath;

    public PluginFileLocator()
    {
        _basePath = Path.GetFullPath(HostingEnvironment.MapPath("~") + @"\..\Plugins");
    }

    public string GetFullPath(string uri)
    {
        var fixedUri = uri;
        if (fixedUri.StartsWith("~"))
            fixedUri = VirtualPathUtility.ToAbsolute(uri);
        if (!fixedUri.StartsWith("/Areas", StringComparison.OrdinalIgnoreCase))
            return null;

        // extract area name:
        var pos = fixedUri.IndexOf('/', 7);
        if (pos == -1)
            return null;
        var areaName = fixedUri.Substring(7, pos - 7);

        var path = string.Format(@"{0}\{1}\Plugin.{1}{2}", _basePath, areaName, fixedUri.Replace('/', '\\'));
        return File.Exists(path) ? path : null;
    }
}

It simply takes the requested uri and converts it using the naming standard I described above. Everything else is taken care of by Griffin.MvcContrib.

An interesting detail is that the provider is only loaded during development, thanks to a nifty helper class in Griffin.MvcContrib:

if (VisualStudioHelper.IsInVisualStudio)
    GriffinVirtualPathProvider.Current.Add(_diskFileProvider);

Loading the plugins

Ok. We’ve created some plugins (and their dependencies) which are copied to the bin folder with a post build event. We’ll therefore have to load them in some way. To do this we create a new class looking like this:

public class PluginService
{
    private static PluginFinder _finder;
    private readonly DiskFileLocator _diskFileLocator = new DiskFileLocator();
    private readonly EmbeddedViewFileProvider _embeddedProvider =
        new EmbeddedViewFileProvider(new ExternalViewFixer());
    private readonly PluginFileLocator _fileLocator = new PluginFileLocator();
    private readonly ViewFileProvider _diskFileProvider;

    public PluginService()
    {
        _diskFileProvider = new ViewFileProvider(_fileLocator, new ExternalViewFixer());

        if (VisualStudioHelper.IsInVisualStudio)
            GriffinVirtualPathProvider.Current.Add(_diskFileProvider);

        GriffinVirtualPathProvider.Current.Add(_embeddedProvider);
    }


    public static void PreScan()
    {
        _finder = new PluginFinder("~/bin/");
        _finder.Find();
    }

    public void Startup(ContainerBuilder builder)
    {
        foreach (var assembly in _finder.Assemblies)
        {
            // Views handling
            _embeddedProvider.Add(new NamespaceMapping(assembly, Path.GetFileNameWithoutExtension(assembly.Location)));
            _diskFileLocator.Add("~/",
                                 Path.GetFullPath(HostingEnvironment.MapPath("~/") + @"..\..\" +
                                                  Path.GetFileNameWithoutExtension(assembly.Location)));

            //Autofac integration
            builder.RegisterControllers(assembly);
            var moduleType = typeof (IModule);
            var modules = assembly.GetTypes().Where(moduleType.IsAssignableFrom);
            foreach (var module in modules)
            {
                var mod = (IModule) Activator.CreateInstance(module);
                builder.RegisterModule(mod);
            }
        }
    }

    // invoke extension points
    public void Integrate(IContainer container)
    {
        foreach (var registrar in container.Resolve<IEnumerable<IMenuRegistrar>>())
        {
            registrar.Register(MainMenu.Current);
        }

        foreach (var registrar in container.Resolve<IEnumerable<IRouteRegistrar>>())
        {
            registrar.Register(RouteTable.Routes);
        }
    }
}

The plugins are now loaded and the extension points have been passed to them.

Final touch

The last thing to change is global.asax:

private IContainer _container;
private PluginService _pluginService;

protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();

    RegisterGlobalFilters(GlobalFilters.Filters);
    RegisterRoutes(RouteTable.Routes);

    _pluginService = new PluginService();
    RegisterContainer();
    HostingEnvironment.RegisterVirtualPathProvider(GriffinVirtualPathProvider.Current);
    _pluginService.Integrate(_container);
}

private void RegisterContainer()
{
    var builder = new ContainerBuilder();
    builder.RegisterControllers(Assembly.GetExecutingAssembly());
    _pluginService.Startup(builder);
    _container = builder.Build();
    DependencyResolver.SetResolver(new AutofacDependencyResolver(_container));
}

Code

The code for the samples is located on github, and so is Griffin.MvcContrib. Griffin.MvcContrib can also be installed from NuGet.


griffin.editor: Added support for Google Prettify

I’ve just refactored the highlighting features of griffin.editor. You can now use your favorite highlighter by implementing the $.griffinEditorExtension.highlighter callback like this:

$.griffinEditorExtension.highlighter = function(inlineSelector, blockSelector) {
    $.each(blockSelector, function() {
        console.log('Code block to highlight: ' + $(this).html());
    });
}

The default implementation looks for either highlight.js or Google Prettify.

I’ve also made a small change so that the inline code blocks aren’t highlighted per default. It makes the text more readable imho (and the programming language guessing in the highlights have too little code to guess on).



A javascript selection script

I wanted a lightweight, cross-browser plugin for handling selections in a textarea (for my griffin.editor plugin). I didn’t find a suitable one, despite asking at stackoverflow.com, so I created my own. It’s standalone and only 2.6 kB (uncompressed).

Usage:

//jQuery is not required but supported.
var selection = new TextSelector($('#mytextarea'));
selection.replace('New text');

// you can change selection:
selection.select(1,10); // select char 1 to 10

// get selection information
console.log("Start char: " + selection.get().start);

// check if anything is selected
selection.isSelected();

// get the text
var text = selection.text();

Code at github


Introducing griffin.editor – a jQuery textarea plugin

I’ve tried to find a jQuery editor plugin which works out of the box without configuration. The WMD editor used by stackoverflow.com looked nice but I couldn’t find a version that I got running. My main issue with most editors was that I didn’t figure out how to configure custom image and link dialogs. I’ve therefore done my own.

Highlights:

  • Markdown (currently the only format supported)
  • Preview pane (see generated HTML live)
  • Syntax highlighting (live), using highlightjs or google prettify
  • Expanding textarea (which also goes back to original size on blur)
  • jQueryUI dialogs for links/images
  • Access keys (default browser modifier or CTRL if activated)
  • Plug & Play (just include additional scripts to activate features)

The basic setup looks like this:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html>
<head>
  <title>Editor demo</title>
  <script type="text/javascript" src="scripts/jquery-1.6.2.min.js"></script>
  <script type="text/javascript" src="scripts/jquery.markdown-0.2.js"></script>
  <script type="text/javascript" src="../Source/textselector.js"></script>
  <script type="text/javascript" src="../Source/jquery.griffin.editor.js"></script>
  <script type="text/javascript" src="../Source/jquery.griffin.editor.markdown.js"></script>
  <style type="text/css">
   .editor .area { width: 600px; height: 200px; }
   .editor .toolbar { padding: 0px;  }
 </style>
</head>
<body>
<div class="editor">
	<div class="toolbar">
		<span class="button-h1" accesskey="1" title="Heading 1"><img src="../Source/images/h1.png" /></span>
		<span class="button-h2" accesskey="2" title="Heading 2"><img src="../Source/images/h2.png" /></span>
		<span class="button-h3" accesskey="3" title="Heading 3"><img src="../Source/images/h3.png" /></span>
		<span class="button-bold" accesskey="b" title="Bold text"><img src="../Source/images/bold.png" /></span>
		<span class="button-italic" accesskey="i" title="Italic text"><img src="../Source/images/italic.png" /></span>
		<span class="divider">&nbsp;</span>
		<span class="button-bullets" accesskey="l" title="Bullet List"><img src="../Source/images/bullets.png" /></span>
		<span class="button-numbers" accesskey="n" title="Ordered list"><img src="../Source/images/numbers.png" /></span>
		<span class="divider">&nbsp;</span>
		<span class="button-sourcecode" accesskey="k" title="Source code"><img src="../Source/images/source_code.png" /></span>
		<span class="button-quote" accesskey="q" title="Qoutation"><img src="../Source/images/document_quote.png" /></span>
		<span class="divider">&nbsp;</span>
		<span class="button-link" accesskey="l" title="Insert link"><img src="../Source/images/link.png" /></span>
		<span class="button-image" accesskey="p" title="Insert picture/image"><img src="../Source/images/picture.png" /></span>
	</div>
	<textarea class="area">Hello world</textarea>
</div>
<script type="text/javascript">
	$(function(){
		$('.editor').griffinEditor();
	});
</script>
</body>
</html>

All of that is required (just a simple copy/paste). The idea is that you should easily be able to customize its layout. The script generates the following layout:

Basic layout

Dialogs

The basic setup uses browser dialog boxes:

Dialog box

Not so sexy. Include jQueryUI and the integration script:

  <link rel="stylesheet" href="Styles/jquery-ui-1.8.16.custom.css">
  <script type="text/javascript" src="scripts/jquery-ui-1.8.16.custom.min.js"></script>
  <script type="text/javascript" src="../Source/jquery.griffin.editor.dialogs.jqueryui.js"></script>

.. to automatically reconfigure the plugin to use jQueryUI:

Using jQueryUI for dialogs

You can use your own dialogs by implementing the following function:

$.griffinEditorExtension.imageDialog = function(options)
{
    // options.title & options.url contain the info specified in the editor

    // invoke the callback when you are done
    options.success({ title: 'Some title', url: 'Some url' });
}

Same goes for the link dialog.

Preview pane

The preview pane is automatically configured when you add a div with a special id:

<div class="editor" id="myeditor">
<!-- all the editor code -->
</div>
<div id="myeditor-preview">
</div>

This allows you to place the preview pane wherever you like. The included demo script places the preview to the right:

Preview pane

You can also add support for syntax highlighting by including an additional script & stylesheet:

  <script src="http://yandex.st/highlightjs/6.1/highlight.min.js"></script>
  <link rel="stylesheet" href="http://yandex.st/highlightjs/6.1/styles/idea.min.css">

The script inclusion will activate those features; no additional configuration is required.

Access keys

The default access key implementation uses the browser-specific mechanism. For instance, Chrome on Windows uses ALT+key to activate an access key. Hence no additional information in the tooltip:

Default access keys

That can be changed by adding a hotkeys script:

  <script type="text/javascript" src="scripts/jquery.hotkeys.js"></script>

That reconfigures the tooltips and allows you to use CTRL+key to access the toolbar features. The key is still controlled by the accesskey attribute on the toolbar icons.

Better hotkeys

Summary

The codez & all examples are available at github.


Generic repositories – A silly abstraction layer

This post is all about GENERIC repositories as in Repository<TEntity>, not about all types of repositories. Repositories are a great way to abstract away the data source; doing so makes your code testable and flexible for future additions.

My recommendation is against generic repositories, since they don’t give you any additional value compared to regular repository classes. “Regular” repositories are usually written specifically for the requirements that your project has.

Let’s look at what generic repositories give you:

You can change OR/M implementation at any time.

Seriously?

  1. If you find yourself having to switch OR/M during a project, you have not done your homework before starting the project.
  2. The OR/M choice doesn’t matter anyway, since you have abstracted away the features of the chosen OR/M.

imho you’ll stick with one OR/M during a project and switch for the next one (if you have to switch).

You have to write less code.

Here is a generic repository (NHibernate implementation) from a Stack Overflow question:

public interface IRepository<T> : IQueryable<T>
{
  void Add(T entity);
  T Get(Guid id);
  void Remove(T entity);
}

public class Repository<T> : IRepository<T>
{
  private readonly ISession session;

  public Repository(ISession session)
  {
    this.session = session;
  }

  public Type ElementType
  {
    get { return session.Query<T>().ElementType; }
  }

  public Expression Expression
  {
    get { return session.Query<T>().Expression; }
  }

  public IQueryProvider Provider
  {
    get { return session.Query<T>().Provider; } 
  }  

  public void Add(T entity)
  {
    session.Save(entity);
  }

  public T Get(Guid id)
  {
    return session.Get<T>(id);
  }

  IEnumerator IEnumerable.GetEnumerator()
  {
    return this.GetEnumerator();
  }

  public IEnumerator<T> GetEnumerator()
  {
    return session.Query<T>().GetEnumerator();
  }

  public void Remove(T entity)
  {
    session.Delete(entity);
  }   
}

Take a look at the methods. All they do is call methods in NHibernate. You do not win anything by doing so. All you get is an abstraction layer that removes the good things about NHibernate/EF/whatever.

It’s better to create a proper base class and move all repeated (DRY) functionality into it (and therefore still be able to take advantage of the features in your favorite OR/M).

Summary

Did I miss something that a generic repository gives you? Please leave a comment.


Repositories, Unit Of Work and ASP.NET MVC

There are a lot of posts discussing repository implementations, unit of work and ASP.NET MVC. This post is an attempt to give you an answer which addresses all three.

Repositories

Do NOT create generic repositories. They look fine at first glance, but as the application grows you’ll notice that you have to do workarounds in them which break the open/closed principle.

It’s much better to create repositories that are specific for an entity and it’s aggregate since it’s much easier to show intent. And you’ll also only create methods that you really need, YAGNI.

Generic repositories also remove the whole point of choosing a specific OR/M, since you can’t use your favorite OR/M’s features.
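
Compare the generic interface from the previous post with a contract written for a specific aggregate; the intent is visible directly in the interface (the names are of course just an example):

public interface ITicketRepository
{
    Ticket GetById(int id);
    IEnumerable<Ticket> FindOpenForUser(string userName);
    void Save(Ticket ticket);
}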

Unit Of Work

Most OR/Ms available already implement the UoW pattern. You simply need to create an interface and make an adapter (google the Adapter pattern) implementation for your OR/M.

Interface:

    public interface IUnitOfWork : IDisposable
    {
        void SaveChanges();
    }

NHibernate sample implementation:

    public class NHibernateUnitOfWork : IUnitOfWork
    {
        private readonly ITransaction _transaction;

        public NHibernateUnitOfWork(ISession session)
        {
            if (session == null) throw new ArgumentNullException("session");
            _transaction = session.BeginTransaction();
        }

        public void Dispose()
        {
            if (!_transaction.WasCommitted)
                _transaction.Rollback();

            _transaction.Dispose();
        }

        public void SaveChanges()
        {
            _transaction.Commit();
        }
    }
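
Usage outside of MVC would look something like this:

    // Commit if everything succeeds; the Dispose implementation rolls back otherwise.
    using (var uow = new NHibernateUnitOfWork(session))
    {
        // work with your repositories here

        uow.SaveChanges();
    }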

ASP.NET MVC

I prefer to use an attribute to handle transactions in MVC. It makes the action methods cleaner:

[HttpPost, Transactional]
public ActionResult Update(YourModel model)
{
    //your logic here
}

And the attribute implementation:

public class TransactionalAttribute : ActionFilterAttribute
{
    private IUnitOfWork _unitOfWork;

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        _unitOfWork = DependencyResolver.Current.GetService<IUnitOfWork>();

        base.OnActionExecuting(filterContext);
    }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        // let the container dispose/rollback the UoW.
        if (filterContext.Exception == null)
            _unitOfWork.SaveChanges();

        base.OnActionExecuted(filterContext);
    }
}
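
The attribute resolves IUnitOfWork through DependencyResolver, so the unit of work has to be registered in your container. With Autofac, the registration could look like this (a sketch, assuming that an ISessionFactory has been registered as a single instance):

// One session, and thereby one unit of work, per HTTP request.
builder.Register(x => x.Resolve<ISessionFactory>().OpenSession())
    .As<ISession>()
    .InstancePerLifetimeScope();
builder.RegisterType<NHibernateUnitOfWork>()
    .As<IUnitOfWork>()
    .InstancePerLifetimeScope();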

Localizing jQuery plugins

I’ve spent some time to figure out how to localize my jQuery plugins and I’ve just found out a way that works just fine. Disclaimer: I’m not 100% sure of how namespacing works in jQuery so this might not be the correct way.

I’m using the meta header accept-language to control which language to use. It allows me to select the language in my backend (using user settings or whatever) instead of just using the languages defined by the browser. An ASP.NET MVC3 header would look like this in the layout:

<meta name="accept-language" content="@System.Globalization.CultureInfo.CurrentCulture.Name" />

You should be able to do the same thing in PHP or whatever language you use.

The next thing to do is to add the localization structure to your plugin script file:

(function($) {
    //globals
    $.yourPluginName = {
        texts: {
            title: 'Please wait, loading..'
        },
        translations: []
    };
    
    var methods = {
		// your methods here.
	};

    $.fn.yourPluginName = function(method) {

        if (methods[method]) {
            return methods[method].apply(this, Array.prototype.slice.call(arguments, 1));
        } else if (typeof method === 'object' || !method) {
            return methods.init.apply(this, arguments);
        } else {
            $.error('Method ' + method + ' does not exist on jQuery.yourPluginName');
        }

    };

})(jQuery);

That will make the plugin work if no languages are loaded. The translations will be loaded into the $.yourPluginName.translations array by another script.

We’ve got one thing left to do before we can use the texts in the script: we need to load the correct language. Create another script called jquery.yourpluginname.localization.js which loads the correct translation (based on the accept-language meta tag) into $.yourPluginName.texts, and include it after your plugin script. You can then use $.yourPluginName.texts.anyTextName in your plugin to get the localized strings.