Server Farm Dynamic Feedback

This project, lovingly known as the Online Beamspot Determination, received the lion’s share of my graduate coding hours.  And after many, many grueling code-compile-test-redeploy cycles, all we got was this beautiful diagram.  Well, OK.  We also got a pretty damn cool tool.

Mostly this post is about “ooooh aaaaah pretty diagram”.  But below I’ll try to explain what we are doing using a web-server analogy.

[Diagram (PDF): overview of the Online Beamspot Determination feedback system — available via the download link.]

The Analogy:

Imagine a farm of Google search servers: the waving wheat, the mooing cows.  Idyllic.  No, but seriously, these server processes are getting pounded by search requests.  For the sake of discussion let’s make a few assumptions.  First, suppose that most requests are identical to others that occurred within the last few minutes.  For example, every time Kobe Bryant scores a basket, a million people Google “Kobe” (and not the beef).

In such a universe, you might want to detect the onset of these correlated search waves and respond very quickly by updating a set of cached results used by the servers in the Western US.  This is a tricky problem because you have thousands of machines that each see a slight uptick in “Kobe” searches, and you need to make a decision to update caches that will affect those same machines, even though not one of them sees the whole picture.  And they’re all acting asynchronously.  Bummer, dude.
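To make that concrete, here is a toy Python sketch (every name and number below is invented for illustration): each server reports only its own small count of “Kobe” queries, and only an aggregator summing over the whole farm can see that a wave has started and the shared cache should be refreshed.

```python
# Toy numbers, purely illustrative: each server sees only a slight uptick,
# far too small for any single machine to act on by itself.
PER_SERVER_BASELINE = 5          # typical "Kobe" queries per server per minute
FARM_WIDE_THRESHOLD = 20_000     # aggregate rate that justifies refreshing the cache


def per_server_counts(n_servers: int, uptick: int) -> list[int]:
    """Simulate one minute of per-server query counts during a mild, correlated uptick."""
    return [PER_SERVER_BASELINE + uptick for _ in range(n_servers)]


def should_refresh_cache(counts: list[int]) -> bool:
    """Only the sum over the whole farm reveals the correlated wave."""
    return sum(counts) > FARM_WIDE_THRESHOLD


counts = per_server_counts(n_servers=4000, uptick=3)    # +3 queries/server: barely visible
print(max(counts))                   # 8  -> no single server would ever trigger
print(should_refresh_cache(counts))  # True -> the farm-wide aggregate crosses the threshold
```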

The Limits of the Analogy and our Solution:

Our problem, which is part of the ATLAS Trigger and Data Acquisition system, has two big differences.  First, our system is partly synchronous: we have a single point, called the Central Trigger Processor, that is guaranteed to control the flow of data downstream of it (where the data is then processed asynchronously).  Second, our system really, really needs the parallel processes to cooperate.  In the Kobe example you might argue that if the wave of searches was so damn big, each individual process would notice it.  Yeah, perhaps; it’s just an analogy!  In our system it takes thousands of processes working across dozens of racks to accumulate enough statistics to accurately measure the parameters of physical (and technical) interest, collectively called the Luminous Region.
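The reason cooperation is unavoidable is statistical: no single process ever holds enough data to pin down the beamspot position and width on its own.  As a minimal sketch of why partial results still combine cleanly (the names are hypothetical; the real system aggregates full histograms and runs proper fits), each process can keep simple running sums, and the fan-in step merges them by plain addition:

```python
from dataclasses import dataclass


@dataclass
class PartialStats:
    """Running sums one process accumulates for, say, the vertex x position."""
    n: int = 0
    sum_x: float = 0.0
    sum_x2: float = 0.0

    def fill(self, x: float) -> None:
        self.n += 1
        self.sum_x += x
        self.sum_x2 += x * x


def merge(parts: list[PartialStats]) -> PartialStats:
    """Fan-in step: partial sums from many processes combine by simple addition."""
    total = PartialStats()
    for p in parts:
        total.n += p.n
        total.sum_x += p.sum_x
        total.sum_x2 += p.sum_x2
    return total


def mean_and_width(total: PartialStats) -> tuple[float, float]:
    """Farm-wide mean and spread, even though no process ever saw all the data."""
    mean = total.sum_x / total.n
    var = total.sum_x2 / total.n - mean * mean
    return mean, var ** 0.5
```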

We solved the N:N communication problem with a big fan-in and fan-out, built on hierarchical data collection and distribution.  We aggregate data across all the processes for about ten minutes, crunch thousands and thousands of fits on it, then make a decision: have the parameters we’re measuring changed so much that we need to inform the farm?  If so, we push our values to an accessible location and trigger an update.  Then WHAMO!  Those exact same processes go fetch the fruits of their collective labor.  Many hands make light work =).
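In rough pseudocode (Python for readability; every function name and threshold below is a stand-in for the real fan-in, fitting, and distribution machinery, not the actual TDAQ API), one turn of the feedback loop looks something like this:

```python
UPDATE_INTERVAL_S = 600          # accumulate for roughly ten minutes
SIGNIFICANCE_THRESHOLD = 3.0     # only publish if a parameter shifts by this many sigma


def feedback_loop(collect_from_farm, fit_beamspot, current_published,
                  publish, trigger_farm_update):
    """One aggregate -> fit -> decide -> publish cycle.

    All five arguments are hypothetical callables standing in for the real
    hierarchical collection, fitting, and update-distribution steps.
    """
    while True:
        data = collect_from_farm(duration_s=UPDATE_INTERVAL_S)   # hierarchical fan-in
        new_params, new_errors = fit_beamspot(data)              # thousands of fits, condensed
        old_params = current_published()

        # Has the luminous region moved significantly since the last published values?
        shifted = any(
            abs(new - old) / err > SIGNIFICANCE_THRESHOLD
            for new, old, err in zip(new_params, old_params, new_errors)
        )

        if shifted:
            publish(new_params)       # write to the shared, accessible location
            trigger_farm_update()     # fan-out: the same processes fetch the new values
```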

In this way, we were able to coordinate so many processes without ever letting them know about each other.  And when they all go update their values (as close to synchronicity as the farm ever gets), it still induces less than 10 ms of dead time on the system.  Not too shabby at all!


Well that’s all I had on the diagram.  Like I said, just an overview.  Thanks for reading!
