Dependency Injection in a Dynamic Environment


Scott Bellware joins the DI discussion from a completely different angle, the dynamic language point of view:

The level of abstraction that we work at in .NET is appropriate to explicit inversion of control and thus dependency injection. Making type mocking your main method of achieving testability in .NET development will increasingly sap the maintainability of your code. Maintainability and agility in a static environment are mostly the effect of applying appropriate static design patterns for static development.

[...snip...]

The difference in my comfort level in doing this has a lot to do with the fact that I'm usually coding (and thus mocking) at the abstraction level of the Rails framework’s DSLs rather than lower down at the level of an API or a static Domain Model.

I think that the key here is the level at which you are working at a given moment. It is fairly easy to add cross cutting support for DI in my domain entities, but that is something that I wouldn't want to do. I am using DI mainly in controllers, services and commands, because injecting services into entities is something that looks... strange to me. I like my entities to be able to stand on their own.

Minor note: I have something that I don't know what to call, which bridges the entities' business logic and the required services. It is usually expressed in terms of the domain, and is directly exposed to the entities, but usually as a parameter for methods, and the like.
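A minimal sketch of what I mean, with invented names (IPremiumCalculator, Reprice, FlatRateCalculator are all made up for illustration): the service shows up as a method parameter, expressed in domain terms, and the entity keeps no injected fields.

```csharp
using System;

// Hypothetical domain service abstraction, named in domain terms.
public interface IPremiumCalculator
{
	decimal CalculatePremiumFor(Policy policy);
}

public class Policy
{
	public decimal Premium { get; private set; }

	// The service is a method parameter, not an injected field,
	// so the entity can still stand on its own.
	public void Reprice(IPremiumCalculator calculator)
	{
		Premium = calculator.CalculatePremiumFor(this);
	}
}

// A trivial implementation, for illustration only.
public class FlatRateCalculator : IPremiumCalculator
{
	public decimal CalculatePremiumFor(Policy policy)
	{
		return 100m;
	}
}
```

The entity can be created and tested without any container; the calculator only exists for the duration of the call.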

I have written a few DSLs and my share of fluent interfaces, and I think that while there is a lot of value in having a DSL (and a dynamic environment), a good API can cover a lot of ground.

As a side note, he also mentions this:

Beneath the DSL is an API and a framework that has all kinds of OO design goodness, but unlike working with a Rails clone in .NET, I don't often have to see the vestiges of the framework's uses of dependency injection in my application code.

I usually divide my applications into several layers: infrastructure, entities, services, controllers, etc. Of those, the layer most likely to be exposed to the messy details of real world coding is the infrastructure layer. The rest are expressed in terms of the current context, and that works fairly well. The main use of DI in those layers is constructor injection (I am not fond of setter injection in general) and not much more. In fact, as someone who is part of a Rails-inspired project, I disagree with this statement. I find that I can often write the code in a way that makes sense for the layer and context that I find myself in.
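To make that concrete, here is a hand-rolled sketch (IPolicyRepository, PolicyController, and InMemoryPolicyRepository are invented names for illustration) of the one place DI really shows up in these layers: constructor injection.

```csharp
using System;
using System.Collections.Generic;

public class Policy { }

// The controller states its dependencies once, in the constructor;
// the container (or a test) supplies them from infrastructure code.
public interface IPolicyRepository
{
	void Save(Policy policy);
}

public class InMemoryPolicyRepository : IPolicyRepository
{
	public List<Policy> Saved = new List<Policy>();

	public void Save(Policy policy)
	{
		Saved.Add(policy);
	}
}

public class PolicyController
{
	private readonly IPolicyRepository repository;

	public PolicyController(IPolicyRepository repository)
	{
		this.repository = repository;
	}

	public bool AddNewPolicy(Policy policy)
	{
		repository.Save(policy);
		return true;
	}
}
```

The wiring itself stays out of sight; the controller only ever sees the interface, which is exactly why the vestiges don't leak into application code.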

Here is a small example for your critique:

public virtual bool AddNewPolicy()
{
	using (RhinoTransaction transaction = UnitOfWork.Current.BeginTransaction())
	{
		Policy policy = BuildPolicy();

		if (ValidatorRunner.IsValid(policy))
		{
			if (!ValidateThatUserIsAuthorizedToSavePolicy(policy))
				return false;
			if (AddValidPolicy(policy))
			{
				transaction.Commit();
				SetupRedirect(policy);
				return true;
			}
		}
		else
		{
			Scope.ErrorSummary = ValidatorRunner.GetErrorSummary(policy);
		}
		return false;
	}
}

private bool AddValidPolicy(Policy policy)
{
	Repository<Policy>.Save(policy);
	UnitOfWork.Current.Flush(); //ensure that it will have an id
	try
	{
		CallWebServiceInsert(policy);
	}
	catch (Exception e)
	{
		Log.Error("Failed to insert policy using web service", e);
		Scope.ErrorMessage = string.Format(
			Resources.CouldNotInsertPolicyToBackEndService,
			e.Message
			);
		return false;
	}
	Usage.NewPolicyCreated(policy);
	return true;
}

I don't like the explicit transaction management here, but I had a special case of needing to coordinate an explicit (and expected) failure scenario, and this was the simplest solution. This is a controller method, and the reason it returns a bool is a check in the UI layer that shows the error message if necessary. Exception semantics usually stop at the controller, as far as I am concerned.

By the way, there is a line here that I didn't write. Can you spot it?

I know that a CRUD-based example is not the best I could give, but I don't think that even this is encumbered by infrastructure concerns. Then again, I never did more than a sample app in Rails, so it is entirely possible that I am missing things (and I would like to hear about that).

He then goes on to say:

As for dependency injection, I'm simply bloody sick of it.  The vestiges of dependency wiring have no business in application code.

I would agree, except that my term for application code covers the entities and controllers (and maybe services). Like threading, DI is something that I would like to keep to infrastructure code only.

I would like to offer some educated guesses that may turn out to be completely wrong: I am guessing that while the need for DI is lessened in dynamic environments (I want method_missing too, damn it!), you still need some sort of dependency management, if only to provide a way to handle complexity as applications grow.

The main problem arises when you have several ways to tackle the same problem (handling logins differently for dev, staging, and production is a good example), or when you need to manage component life cycles.
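A hand-rolled sketch of the login example, not any particular container's API (all the type names here are made up): one contract, with the implementation selected per environment.

```csharp
using System;

public interface ILoginHandler
{
	bool Login(string user, string password);
}

// In development, let everyone in so the team isn't blocked.
public class DevLoginHandler : ILoginHandler
{
	public bool Login(string user, string password)
	{
		return true;
	}
}

// In production, do a real credential check (stubbed out here).
public class ProductionLoginHandler : ILoginHandler
{
	public bool Login(string user, string password)
	{
		return user == "admin" && password == "secret";
	}
}

// A container would normally do this wiring from configuration;
// the application code only ever sees ILoginHandler.
public static class LoginHandlerFactory
{
	public static ILoginHandler For(string environment)
	{
		if (environment == "production")
			return new ProductionLoginHandler();
		return new DevLoginHandler();
	}
}
```

Life cycle management (singleton vs. per-request, disposal) is the other half of what a container buys you, and this sketch deliberately ignores it.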