Actively enforce your conventions
Glenn posted about a test I wrote, PrismShouldNotReferenceUnity, in which I codified an assumption that the team had made.
This is something I try to do whenever I decide to adopt some sort of convention. If at all possible, I will make sure that the compiler will break if you don't follow the convention, but that is often simply not possible; therefore, tests and active enforcement of the system's conventions fill that role.
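As a rough illustration, here is a minimal sketch of what such a test could look like using NUnit and reflection. The assembly names are assumptions for the sake of the example; the real Prism test may well differ:

```csharp
using System.Linq;
using System.Reflection;
using NUnit.Framework;

[TestFixture]
public class ConventionTests
{
    [Test]
    public void PrismShouldNotReferenceUnity()
    {
        // "Microsoft.Practices.Composite" is assumed here; load whatever
        // assembly is supposed to stay container-agnostic.
        Assembly composite = Assembly.Load("Microsoft.Practices.Composite");

        // Scan the assembly's references for anything that looks like Unity.
        bool referencesUnity = composite.GetReferencedAssemblies()
            .Any(name => name.Name.Contains("Unity"));

        Assert.IsFalse(referencesUnity,
            "The composite assembly must not take a dependency on Unity.");
    }
}
```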
Wait a minute, I haven't even defined what a convention is! By convention, I mean things like:
- All services should have a Dispose() method - add a Dispose abstract method to AbstractService
- All messages should be serializable - create a test that scans the message assemblies and checks for non-serializable messages (see the sketch after this list).
- You may only call the database twice per web request - create an HTTP module that throws an exception if you exceed that limit.
- A request should take less than 100 milliseconds - add an interceptor that fails if this is not the case.
- The interface assembly may not contain logic - add a test that fails if it finds a class with a method in the interface assembly.
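Here is a minimal sketch of the serializable-messages test, assuming NUnit, a message assembly named MyApp.Messages, and an IMessage marker interface; all of those names are stand-ins for whatever identifies messages in your code base:

```csharp
using System.Linq;
using System.Reflection;
using NUnit.Framework;

// Hypothetical marker interface; substitute your own way of identifying messages.
public interface IMessage { }

[TestFixture]
public class MessageConventionTests
{
    [Test]
    public void AllMessagesShouldBeSerializable()
    {
        // "MyApp.Messages" is an assumed assembly name for this sketch.
        Assembly messages = Assembly.Load("MyApp.Messages");

        // Find concrete message types that are missing [Serializable].
        var offenders = messages.GetTypes()
            .Where(type => typeof(IMessage).IsAssignableFrom(type) && !type.IsAbstract)
            .Where(type => !type.IsSerializable) // true only if [Serializable] is applied
            .Select(type => type.FullName)
            .ToList();

        Assert.IsEmpty(offenders,
            "Non-serializable messages: " + string.Join(", ", offenders.ToArray()));
    }
}
```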
All of those are ways to increase the speed of feedback. This is especially valuable when there is some extra step that you would otherwise need to perform manually, or when you want to draw a line in the sand from which you will not deviate.
Actively enforced conventions keep you honest.
Comments
Did you look at Code Query Language rules?
It can handle all your 'static' conventions, such as:
All services should have a Dispose() method - add a Dispose abstract method to AbstractService
All messages should be serializable - create a test that scans the message assemblies and checks for non-serializable messages.
The interface assembly may not contain logic - add a test that fails if it finds a class with a method in the interface assembly.
Patrick,
I haven't thought about that, but it seems obvious when I do think about it.
When you refer to a 'request' taking 100ms or less, is this a WebRequest or a database request?
It could look like this (notice the numerous ways to match which classes are services or messages):
FROM Somewhere
DeriveFrom "ABaseClass"
Implements "AnInterface"
HasAttribute "AnAttributeClass"
NameLike "ARegExToMatchByName"
NameLike "ARegExToMatchByFullName" (i.e namespace prefix included)
...
All services should have a Dispose() method - add a Dispose abstract method to AbstractService
WARN IF Count > 0 IN SELECT TYPES FROM NAMESPACES "YourNamespaceWhereServicesAreDefined" WHERE
(DeriveFrom "YourServiceBaseClass" OR HasAttribute "YourServiceAttribute" OR NameLike "RegExToMatchServiceClass")
AND !Implements "System.IDisposable"
All messages should be serializable - create a test that scans the message assemblies and checks for non-serializable messages.
WARN IF Count > 0 IN SELECT TYPES FROM NAMESPACES "YourNamespaceWhereMessagesAreDefined" WHERE
(DeriveFrom "YourMessageBaseClass" OR HasAttribute "YourMessageAttribute" OR NameLike "RegExToMatchMessageClass")
AND !IsSerializable
The interface assembly may not contain logic - add a test that fails if it finds a class with a method in the interface assembly
WARN IF Count > 0 IN SELECT ASSEMBLIES WHERE
(HasAttribute "YourInterfaceOnlyAttribute" OR NameLike "RegExToMatchInterfaceOnlyAsm") AND
NbILInstructions > 0
Request is a generic term here.
It can be a web request, a SOAP call, the time to handle a single message, etc.
Good ideas, Ayende! One I came up with a while back was for our convention of creating enums for database code values, which we keep in their own table by category. I wrote a unit test that reflected on the enums and retrieved all the code values from the db, then ensured that they matched up.
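A sketch of what such a test might look like, assuming NUnit and ADO.NET; the CodeValues table layout, connection string, and the OrderStatus enum are all invented for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using NUnit.Framework;

// Hypothetical enum mirroring rows in a CodeValues table.
public enum OrderStatus { Pending = 1, Shipped = 2, Cancelled = 3 }

[TestFixture]
public class CodeValueConventionTests
{
    [Test]
    public void OrderStatusEnumMatchesDatabaseCodeValues()
    {
        // Load the code values for this category from the database.
        var dbValues = new Dictionary<string, int>();
        using (var connection = new SqlConnection("your connection string"))
        using (var command = new SqlCommand(
            "SELECT Name, Value FROM CodeValues WHERE Category = 'OrderStatus'",
            connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                    dbValues[reader.GetString(0)] = reader.GetInt32(1);
            }
        }

        // Every enum member must have a matching row with the same numeric value.
        string[] names = Enum.GetNames(typeof(OrderStatus));
        Assert.AreEqual(dbValues.Count, names.Length,
            "The enum and the CodeValues table have a different number of entries.");
        foreach (string name in names)
        {
            Assert.IsTrue(dbValues.ContainsKey(name),
                "Enum member " + name + " has no row in CodeValues.");
            Assert.AreEqual(dbValues[name],
                (int)Enum.Parse(typeof(OrderStatus), name),
                "Numeric value mismatch for " + name + ".");
        }
    }
}
```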
When you start getting into reflection in unit tests, it starts to sound a lot like static code analysis. Do you think policies like this could live in custom code analysis rules (either an FxCop rule or a CQL query)?
Do you think this type of thing should be validated in unit tests?
Peter - does it matter much either way? I know reflection and NUnit, so it was easy to write the check that way.
Another consideration is your organization's infrastructure. We have CI that runs unit tests on every commit, but not FxCop (yet, anyway). So it makes more sense to use unit tests.
As per normal, you are a step ahead, Oren ... thanks for the post, it inspired me to prevent an annoying bug in my own code with a guard test!
I try to do the same. It's particularly helpful when working in large (or even small, for that matter) distributed teams. On a previous project we had quite a few database design conventions codified as unit tests (e.g. no use of Identity columns except for the following known tables).
XML serializability is another one I've done, also ensuring that the XmlInclude attribute is present on base classes if you're using polymorphic messages.
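A rough sketch of that XmlInclude check, assuming NUnit and a hypothetical MessageBase hierarchy (all type names invented for illustration):

```csharp
using System;
using System.Linq;
using System.Xml.Serialization;
using NUnit.Framework;

// Hypothetical polymorphic message hierarchy.
[XmlInclude(typeof(OrderPlaced))]
[XmlInclude(typeof(OrderCancelled))]
public class MessageBase { }
public class OrderPlaced : MessageBase { }
public class OrderCancelled : MessageBase { }

[TestFixture]
public class XmlConventionTests
{
    [Test]
    public void MessageBaseDeclaresXmlIncludeForEverySubclass()
    {
        Type baseType = typeof(MessageBase);

        // Types already declared via [XmlInclude] on the base class.
        var included = baseType
            .GetCustomAttributes(typeof(XmlIncludeAttribute), false)
            .Cast<XmlIncludeAttribute>()
            .Select(attribute => attribute.Type)
            .ToList();

        // Concrete subclasses in the same assembly that were not declared.
        var missing = baseType.Assembly.GetTypes()
            .Where(type => type.IsSubclassOf(baseType) && !type.IsAbstract)
            .Where(type => !included.Contains(type))
            .Select(type => type.FullName)
            .ToList();

        Assert.IsEmpty(missing,
            "Subclasses missing [XmlInclude] on MessageBase: " +
            string.Join(", ", missing.ToArray()));
    }
}
```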
John - I'm undecided at this point. At what point do unit tests stop being unit tests and start being integration tests, policy tests, etc.? Does this matter?
How hard is it to write code that reflects over IL and verifies it? In the general case, I think it's pretty hard. There are existing tools like NDepend that can do that with a simple query.
There's a point you'll probably get to where writing code to statically analyse code causes much more friction than using a static analysis tool; but then you've got some static analysis in unit tests and some in a separate tool. In reality, not all of Oren's suggestions are static analysis, and some would be hard to express as specific project policy in a static analyser. E.g., what does 'service' mean, and how do you get that policy into a static analyser?
I'm wondering, in cases like this, when should existing tools that allow you to do this (statically analyze code) be used, and when should you write unit tests?