Upfront Optimizations
Developers tend to be micro-optimizers by default, in most cases. It is generally accepted that this is a Bad Thing. This is a quote I often repeat:
In my last project, I wasn't willing to allow discussion on the performance of the application until we got to the final QA stages. (We found exactly two bottlenecks in the application, by the way, and it cost us 1 hour and 1 day to fix them.) You could say that I really believe that premature optimization is a problem.
However, the one thing that I will think of in advance (hopefully far in advance), is reducing the amount of remote calls.
This is something that you should think of in advance, because a remote call is several orders of magnitude more expensive than just about anything else that you can do in your application, except maybe huge prime generation.
In general, you need to think about two things:
- How can we reduce remote calls?
- How can we fail (in development) when we exceed some number of remote calls per unit of work?
What I have found is that batching makes for a really nice model for reducing remote calls, and that failing on a high number of remote calls is the most effective way to keep their number down.
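As a sketch of the idea (all names and numbers here are invented for illustration, not taken from any particular framework), a unit of work can count its remote calls and fail fast in development when it exceeds a budget, while a batching API queues operations and flushes them in a single round trip:

    # Illustrative Python sketch; the class and method names are hypothetical.
    class TooManyRemoteCallsError(Exception):
        pass

    class UnitOfWork:
        def __init__(self, max_remote_calls=10, dev_mode=True):
            self.max_remote_calls = max_remote_calls
            self.dev_mode = dev_mode
            self.remote_calls = 0
            self.pending = []  # operations waiting to go out as one batch

        def record_remote_call(self):
            self.remote_calls += 1
            if self.dev_mode and self.remote_calls > self.max_remote_calls:
                # Fail loudly during development, so chattiness is caught early.
                raise TooManyRemoteCallsError(
                    f"{self.remote_calls} remote calls in one unit of work "
                    f"(budget: {self.max_remote_calls})")

        def queue(self, operation):
            # Accumulate work instead of executing it immediately,
            # which would cost one round trip per operation.
            self.pending.append(operation)

        def flush(self, send_batch):
            # send_batch is whatever actually talks to the remote system;
            # the whole batch costs a single remote call.
            self.record_remote_call()
            results = send_batch(self.pending)
            self.pending = []
            return results

The point is the shape, not the specifics: the budget turns chattiness into a build-breaking bug, and the queue/flush pair makes one round trip the default.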
This is generally not something that you can retrofit into the system; the model of work is completely different. You can try, but you will end up with a Frankenstein that is slow and hard to work with.
So, at any rate, this is what I wanted to say. Ignore performance until very late in the game, but do think about remote calls and the distribution of the application components as early as possible.
No, this is not BDUF; it is setting the architecture for the application, and having the right fundamental approach as early as possible.
Comments
These pictures are fun; they look like the O'Reilly Head First series (there are cognitive science concepts behind that). I read Head First Design Patterns some time ago.
NHibernate offers a good level of PI -- Persistence Ignorance. It would be cool to build a framework that provides a good level of "RPCI" -- Remote Procedure Call Ignorance. It could help us isolate the RPC infrastructure code bloat from our domain, following DDD -- Domain Driven Design -- practices.
Just use serializable objects and it's OK. I believe NHibernate's ISession implementation and LINQ deferred execution already have the main parts of the algorithm, but it must be something that respects SOA concepts.
Well, before trying to implement something like that, I should check out the NServiceBus concepts.
I think this is something NServiceBus definitely helps with, if maybe indirectly. Its adoption of the SOA model focuses your application along lines of messages and endpoints rather than RPC, and this practice rarely results in excessively chatty services. However, the obvious caveat applies: this approach is absolutely not appropriate in every circumstance.
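To make the contrast concrete (this is hypothetical pseudocode in Python, not the actual NServiceBus API, which is a .NET library):

    # Chatty RPC style: one network round trip per item.
    def rpc_style(order_service, order_ids):
        return [order_service.get_order(oid) for oid in order_ids]  # N calls

    # Message style: a single message carries the whole intent.
    def message_style(bus, order_ids):
        bus.send({"type": "GetOrders", "order_ids": list(order_ids)})  # 1 call

Designing around messages makes the batch the natural unit of communication, so the chattiness never creeps in.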
I 100% agree. We are usually looking at the difference between linked lists and hash sets, but miss the whole point that when you go out-of-band it is an extreme hit.
The problem, I feel, is the level of abstraction. Back when you had to program down to the socket level, you knew the expense.
Abstraction is of course good; that's why we need common sense, and maybe some good tools integrated into the build process that can spot this.
Hmm - was it this project by any chance?
http://www.ayende.com/Blog/archive/2007/06/20/Shocking-Rob.aspx
Nice to see PERF getting its due :):):)
So, when did you attend the Jeff Atwood school of blog post imaging? :-)
Rob,
No, that was the project before last, but the same ideas hold.
And this is a great way to ensure that you aren't chatty
I do it once in a while. Usually when I think that the content is important enough that it needs breaking up
"...because a remote call is several orders of magnitudes than just about anything else that you can do in your application except maybe huge prime generation."
You have no idea what developers can come up with.
Alex,
I do, and it is still not as expensive as remote calls.
Of course, I once saw someone do:
foreach id in Exec("SELECT Id FROM BigTable"):
    result = Exec("SELECT * FROM BigTable WHERE Id = ${id}")
    Process(result)
Each time you find a performance problem and fix it, you are less likely to repeat it in the next project.
In a way, Upfront Optimizations become second nature as you learn from your mistakes.
Your point is well taken about going overboard.
"Of course, I once saw someone do:
foreach id in Exec("SELECT Id FROM BigTable"):
result = Exec("SELECT * FROM BigTable WHERE Id = ${id}")
Process(result)"
Um... isn't the problem with that code... too many remote calls? :-P
/Mats
Yes, it is.
That was in reference to the mess programmers can make.
I meant that even the messiest solution usually can't cost as much as a single remote call, but that programmers can make remote calls in error.
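For completeness, the single-round-trip version of that snippet would be something like this (same illustrative pseudo-API as above):

    # One remote call instead of one per row.
    for result in Exec("SELECT * FROM BigTable"):
        Process(result)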
Snyder's Candidate Type Architecture (http://www.cs.washington.edu/homes/snyder/TypeArchitectures.pdf) explains why certain performance issues need to be dealt with upfront. When data access is very slow (network, file system) relative to the speed of the desired computation that data access must be factored into the equation. The CTA does this for parallel computation, but also provides some good insights into non-parallel architectures.
Oren, I wish what you are saying was true, but I have to entirely disagree. Every day I see customers experiencing performance problems that have nothing to do with remote calls.
Two specific and very common scenarios that come to mind are memory leaks and paging - which can also be combined if you need real performance killers. And of course there are the parallelism killers (when you're using 1 CPU out of 16, I'd say there's a performance problem waiting to happen), and there's also the type of performance issue simply caused by a sucky algorithm. Yes, using a bad collection for the task at hand could turn out to be a performance problem a few months later in production.
So while I agree that premature optimization is inappropriate, I entirely disagree that performance should not be a consideration upfront. It's a part of the entire development cycle, from the very initial design phases all the way through automatic performance testing and tuning.
Sasha,
From my experience, most of the perf issues that you described are easily handled in a local manner.
A memory leak is a bug; changing the implementation of a collection is an implementation decision.
Those are very low-level concerns, with very limited impact on the application.
The structure of remote calls is one thing that is really hard to change when you realize it is a problem.
Reflecting upon our email conversation which ensued from this comment, just wanted to summarize my opinion...
Two examples come to mind from last week:
A long-running workflow (.NET 3.0) application spends 75% of its time in GC when 25 clients attempt to create workflow instances simultaneously. It appears that the workflow has too many activities and that the amount of memory consumed by instances grows super-linearly. Now try refactoring a workflow that has 200 activities into multiple workflows (which have to call each other synchronously – you need yet another framework), or merging 200 activities together so they appear as 20-50 instead. This is not a minor implementation change, and it is something you wouldn't possibly detect earlier if you didn't think about performance at the workflow design phase.
An application that started out in 1995 as a single-user monolithic program, and today intends to serve hundreds of users. It's not the remote calls that fail – it's the overall system state of mind, which was guided by the fact that there was a single user, and now you have 500 all of a sudden. It's full of coarse-grained locks which last several seconds before they are released, it's full of property accessors that perform a calculation (because 200ms is fast enough when you have one user, right?), and since everything is coupled with everything beyond any hope, I do not envy the poor programmers who have to maintain and refactor this nightmare.
If I had known that the client wanted to use WF, then I would have sat through the workflow design meetings and ensured that they were not using 200 activities. Yes, there might have been many reasons for me to say that, one of them being performance. These are the kinds of issues which you cannot face at the end of the project. These are issues that you must address when you design, whether it's for performance reasons or for entirely other reasons.
What truly boggles me is how the argument for correctness in software is not applied to performance in software. I don't understand how someone can write unit tests for their yet-unwritten code (as TDD teaches us) and disregard its performance implications, at the same time. No one in their right mind could possibly say to you, "Let's define and test for correctness later, first I'd like to write some code without thinking about functional requirements." But on the other hand, how can you say to someone, "Let's define and test for performance later, first I'd like to write some code without thinking about non-functional requirements?"
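A minimal sketch of what "define and test for performance" could look like in practice (the system under test, the helper name, and the 200 ms budget are all invented for this example):

    import time

    def search_catalog(term):
        # Stand-in for the real system under test; imagine remote calls,
        # GC pressure, or lock contention hiding behind this function.
        return [term + "-1", term + "-2"]

    def test_search_stays_within_budget():
        started = time.perf_counter()
        results = search_catalog("widgets")
        elapsed = time.perf_counter() - started
        assert len(results) > 0   # functional requirement
        assert elapsed < 0.2      # non-functional requirement: 200 ms budget

Such a test is defined up front, just like a correctness test, and fails the build when the non-functional requirement is violated.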