Uber Prof analysis pipeline


In most situations, I like to think about my apps as a set of composite Lego blocks. Ignoring the infrastructure bits, here is how Uber Prof is structured internally:

(Diagram: message processors → model building → analyzers)

I have a set of message processors, which work on low-level event messages, detect patterns in those messages, and turn them into high-level events. Next, I have the Model Building part, which takes the high-level information and handles correlation between the different events. I then have analyzers, which branch off of the model building processor to perform the various analysis actions that I need.
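To make that shape concrete, here is a minimal sketch of the three stages. The type and member names are hypothetical (the post doesn't show the actual Uber Prof types), and it is written in Java rather than the .NET code the product uses:

```java
import java.util.List;

// Low-level wire message, e.g. a single profiler event. (Hypothetical shape.)
record LowLevelMessage(String kind, String payload) {}

// High-level event produced once a pattern is detected.
record HighLevelEvent(String name, Object data) {}

// Stage 1: message processors watch the low-level stream for patterns.
interface MessageProcessor {
    // Returns a high-level event when a pattern completes, or null otherwise.
    HighLevelEvent process(LowLevelMessage message);
}

// Stage 3: analyzers branch off of the model and run specific analyses.
interface Analyzer {
    void analyze(HighLevelEvent event);
}

// Stage 2: the model builder correlates high-level events and
// fans them out to whatever analyzers are plugged in.
class ModelBuilder {
    private final List<Analyzer> analyzers;

    ModelBuilder(List<Analyzer> analyzers) {
        this.analyzers = analyzers;
    }

    void accept(HighLevelEvent event) {
        // Correlation between events would live here; afterwards,
        // the event is handed to every registered analyzer.
        for (Analyzer analyzer : analyzers) {
            analyzer.analyze(event);
        }
    }
}
```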

The idea is to try, as much as possible, to build the system out of identical parts, and when I need additional functionality in a given location, I create another extension point and plug in more single-use classes there.
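Under that convention, adding a new analysis is nothing more than writing another small class against the same interface; nothing else in the pipeline changes. A hypothetical example, reusing the sketch above:

```java
// Adding a feature means adding one more single-use class.
// Hypothetical analyzer: flag queries that return too many rows.
class ExcessiveRowCountAnalyzer implements Analyzer {
    @Override
    public void analyze(HighLevelEvent event) {
        if ("QueryCompleted".equals(event.name())) {
            // Inspect event.data() and raise a warning if the
            // row count crosses some threshold.
        }
    }
}
```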

All of this is orchestrated by a container that can provide those dependencies, so the act of creating a new class is also the sole act required to add the new feature to the system.
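In .NET this is typically done by a container that scans assemblies for implementations. A rough Java stand-in, assuming the hypothetical classes above, might look like this (a real container would discover the analyzers via classpath scanning or java.util.ServiceLoader rather than listing them):

```java
import java.util.List;

// Minimal stand-in for the container: it gathers every Analyzer and
// wires the pipeline together, so creating a new Analyzer class is
// the whole act of adding a feature.
class Bootstrapper {
    static ModelBuilder build() {
        List<Analyzer> analyzers = List.of(
            new ExcessiveRowCountAnalyzer()
            // Each additional Analyzer implementation is picked up here;
            // with real scanning, no manual registration is needed.
        );
        return new ModelBuilder(analyzers);
    }
}
```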