This is a bad test, because what it does is ensure that something does not work. I just finished implementing the session.Advanced.Defer support, and this test got my attention by failing the build.
Bad test! You should be telling me when I broke something, not when I added new functionality.
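For readers without the codebase in front of them, here is a sketch of the shape of the test in question, using a toy stub instead of the real ShardedDocumentStore. The member names (Url, DatabaseCommands, AsyncDatabaseCommands, GetLastWrittenEtag, Defer) are taken from the comments below; everything else is made up:

```csharp
using System;
using Xunit;

// Toy stand-in for the sharded store; the real RavenDB types differ,
// but the shape of the disputed test is the same.
public class ShardedStoreStub
{
    public string Url => throw new NotSupportedException();
    public object DatabaseCommands => throw new NotSupportedException();
    public object AsyncDatabaseCommands => throw new NotSupportedException();
    public object GetLastWrittenEtag() => throw new NotSupportedException();
    public void Defer(params object[] commands) => throw new NotSupportedException();
}

public class ShardedStoreContractTests
{
    // One test asserting that five different things do NOT work:
    // implementing any one of them turns the build red.
    [Fact]
    public void Unsupported_operations_throw()
    {
        var store = new ShardedStoreStub();
        Assert.Throws<NotSupportedException>(() => store.Url);
        Assert.Throws<NotSupportedException>(() => store.DatabaseCommands);
        Assert.Throws<NotSupportedException>(() => store.AsyncDatabaseCommands);
        Assert.Throws<NotSupportedException>(() => store.GetLastWrittenEtag());
        Assert.Throws<NotSupportedException>(() => store.Defer());
    }
}
```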
Comments
Not entirely true:
Your requirements have changed, and thus your test no longer meets the requirements.
Ah but it gives 100% code coverage. :-P
I agree with Daan. If the test ensures that any client that uses your API will get a valid error when they try something that is not supported, then the test is just fine.
If you consider your doc as part of the API, then your test is testing your doc.
Also, it can give great information to other programmers: they might be browsing the tests to see how they're supposed to use this functionality, and when they see this test they'll immediately know that it's not yet implemented.
How about this: you are changing the behavior of your API. If you followed a test-first approach, you'd need to write a new test for your changes. Let it fail with the not-supported exception, remove the assert-throws from the other test, and then implement the new feature. This is not a bad test. It is you not following the principles.
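As a concrete sketch of that flow, with a toy session class (xUnit; the real RavenDB API differs):

```csharp
using System;
using System.Collections.Generic;
using Xunit;

// Toy session: Defer used to throw NotSupportedException; now it queues.
public class ToySession
{
    public List<string> Deferred { get; } = new List<string>();
    public void Defer(string command) => Deferred.Add(command);
}

public class DeferTests
{
    // Step 1 (deleted once the feature ships): the old guard test.
    //   Assert.Throws<NotSupportedException>(() => session.Defer("..."));

    // Step 2: the new test, written first, red until Defer is implemented.
    [Fact]
    public void Defer_queues_the_command_until_SaveChanges()
    {
        var session = new ToySession();
        session.Defer("DELETE items/1");
        Assert.Contains("DELETE items/1", session.Deferred);
    }
}
```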
Daniel
It's a bad test because you make more than one assertion in it, not for the reason you mentioned.
One test, one assertion. That way you know all the assertions that failed. In your test, all of them may be failing, or just the first one; you won't know.
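A sketch of the split, reusing the toy stub from the first example, so each "not supported" expectation reports independently:

```csharp
using System;
using Xunit;

public class OneAssertionPerTest
{
    // If Defer gets implemented, only Defer_is_not_supported fails;
    // the Url test keeps passing, so the report pinpoints the change.
    [Fact]
    public void Url_is_not_supported() =>
        Assert.Throws<NotSupportedException>(() => new ShardedStoreStub().Url);

    [Fact]
    public void Defer_is_not_supported() =>
        Assert.Throws<NotSupportedException>(() => new ShardedStoreStub().Defer());
}
```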
I think that in the real world no one would find that kind of test useful, not even very fanatical agilists. Making tests succeed for real features is hard enough work.
I actually wrote a test like that against RavenDB.
A few versions ago there was a bug where Uri properties would cause Raven to think the data had always changed. So, from memory, you could do this:
var customer = session.Load<Customer>("customers/1");
Assert.IsTrue(session.Advanced.HasChanged(customer));
I didn't want a broken test failing my build for a week while I waited for a new RavenDB build, so I wrote it as a negative test. When I upgraded RavenDB and the test failed, it was a nice reminder about the issue.
In some ways I think the negative test had /some/ value - if I ever wondered whether the problem still existed, I had some proof of it. But having a test 'fail' because something 'worked' did leave a strange feeling. I'm not sure what the solution for this should be. Maybe Assert.Inconclusive()?
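Assert.Inconclusive exists in MSTest (and NUnit), not xUnit. A sketch of that pattern against the old RavenDB client API as I remember it (the Customer type, server URL, and document id are made up): while the bug exists the test passes quietly, and once it is fixed the test goes inconclusive with a note instead of turning red.

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Raven.Client;           // assumed: old RavenDB client namespaces
using Raven.Client.Document;

[TestClass]
public class KnownRavenDefects
{
    // Hypothetical entity with the problematic Uri property.
    public class Customer
    {
        public string Id { get; set; }
        public Uri Website { get; set; }
    }

    [TestMethod]
    public void Uri_properties_still_trigger_false_change_detection()
    {
        using (IDocumentStore store = new DocumentStore { Url = "http://localhost:8080" }.Initialize())
        using (var session = store.OpenSession())
        {
            var customer = session.Load<Customer>("customers/1");
            // While the bug exists, a freshly loaded entity already reports as changed.
            if (!session.Advanced.HasChanged(customer))
                Assert.Inconclusive("The Uri bug looks fixed - remove the workaround and this test.");
        }
    }
}
```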
Paul
As mentioned before, the problem isn't the test itself. It may be that it is hidden in an improper location without giving a semantic clue.
I have a test file dedicated to "unsupported" features and "defects" in third-party components. When I update any of them (including RavenDB) and the defects are fixed, I know right away and can fix the workarounds in the code.
So I don't think of it as a test; I believe it is a "note" to your team, or to your future self, for a time when you are not actively thinking about it.
Parts of that test are actually quite important. shardedDocumentStore.Url can't possibly return a result that makes sense - if it returned, for example, the first shard's URL, that would have been a bug. Same goes for DatabaseCommands and AsyncDatabaseCommands.
I think it's even important for GetLastWrittenEtag and Defer - because as long as you haven't implemented them, an incorrect result can be far, far worse than a NotSupportedException.
This is why I love using Machine.Specifications as my testing framework. You can define a test without implementing it, which lets you know which features still need to be implemented.
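For reference, an unimplemented spec in Machine.Specifications is just an It field with no delegate assigned; runners report it as not implemented rather than failing (the subject and spec names here are made up):

```csharp
using Machine.Specifications;

[Subject("Sharded document store")]
public class When_deferring_commands_through_a_sharded_session
{
    // No delegate assigned: MSpec reports this spec as "not implemented"
    // instead of passing or failing, so pending features stay visible.
    It should_queue_the_command_until_save_changes;
}
```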
Tests are designed to enforce the expectations made of the code. Those expectations can be set by interfaces, but can be supplemented by traditional documentation or other out-of-code communications.
In this case, if such communications have established that these methods are not supported, then the tests correctly enforce that (and are "good"). By failing when one of these methods is implemented, the developer is reminded that the expectations of the code have changed, and out-of-code communication artifacts must be updated to reflect that.
Crazy Idea: What about exposing
public static IEnumerable<Action<IDocumentStore>> OperationsNotSupported() {}
Then you could refactor without refuctoring.
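A sketch of how that could drive a single data-driven test, using xUnit's [Theory]/[MemberData] and the toy stub from the first example (the real proposal would have the store itself publish the operations; here the list is inlined and the operation name is included only for the test display):

```csharp
using System;
using System.Collections.Generic;
using Xunit;

public class NotSupportedContract
{
    // Each row: a display name plus the operation that must keep throwing.
    public static IEnumerable<object[]> OperationsNotSupported()
    {
        yield return new object[] { "Url", (Action<ShardedStoreStub>)(s => { var _ = s.Url; }) };
        yield return new object[] { "Defer", (Action<ShardedStoreStub>)(s => s.Defer()) };
    }

    [Theory]
    [MemberData(nameof(OperationsNotSupported))]
    public void Advertised_unsupported_operation_really_throws(
        string name, Action<ShardedStoreStub> operation)
    {
        // A failure here names exactly which operation started working.
        Assert.Throws<NotSupportedException>(() => operation(new ShardedStoreStub()));
    }
}
```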
There should have been separate tests for each of those functions, positioned where you would have naturally written the tests for the real, working functionality. That way, it wouldn't have shown up at the build server; it would have shown up as soon as you considered working on the feature that wasn't supported.
Glomming everything together means you have a submarine test, which, no matter what it's testing, is bad.
Had these tests been closer to where you were expecting them, you most likely wouldn't have thought to write this post.
Doesn't this suggest it should have been a NotImplementedException originally?