Blue Waters and Red Ink

I got a bit of a shock this morning. On the front page of the local newspaper (and in the online version, too) was the story about IBM pulling out of the Blue Waters project with NCSA. (HPCwire covered it here.) The main reason reported for IBM's decision is that the project was no longer cost-effective.

The building for the hardware has been complete (at least externally) for about a year, and I'd been watching its construction for almost two years before that. There's been a sign announcing the National Petascale Computing Facility for several months, too, and all the publicity floating around that part of campus has touted the Blue Waters machine. There was even a tour of the building at the end of last year's UPCRC UIUC Summer School on Multicore Programming. I declined, to make room for more students, figuring I'd get the chance this year when some hardware might have been installed.

As the reports indicate, even though IBM has pulled out, NCSA is still hoping to spend its grant money on a comparable machine to be installed by next year. They are even contemplating a name change, since "Blue" has been tied to IBM and many of its large computing platform projects (e.g., Deep Blue and Blue Gene). Even so, since we consider water to be blue, the name might still work.

And then I got to thinking, if Intel were able to sponsor a petaflop system with our processors, the name could still be used. Even though we might not be thought of as "big" blue, the Intel logo and badges and other corporate identifiers are more often blue in color. This might be the chance to bid a solution that incorporates the forthcoming MIC processors. Something for someone to think about.

UPDATE: The News-Gazette carried the story "Behind the parting of IBM and Blue Waters" on Sunday, 25 SEP.  For those interested, there are more details on why and how the Blue Waters project played out as it did.


Clay B.:

Thanks to Jim Dempsey for sending me the N-G link to the update story from 25 SEP.

Clay B.:

@Jose - Thank you for the video pointer. The copyright date was 2009, so things were still on track with NCSA and IBM then. Some of the problems might have been foreshadowed in the video between 1:35 and 1:55.

jose-jesus-ambriz-meza:

I believe that every reader should watch this video before reading this article.
Best regards!


Clay B.:

Jim - I don't know any more than the published reports have stated. I'm sure NCSA and IBM won't be airing their dirty laundry about who said what and why things fell through. I don't know if the $30M was the full price or just a down payment to help defray research costs upfront.

In any event, I think this is a good opportunity for Intel to step up and propose a MIC-based solution. I'm not sure if Intel would be in the same boat I suspect IBM found itself in, with "unknown" performance profiles on MIC products yet to be released. If so, it might be a tricky business to get into right out of the gate. I don't think Intel would welcome a big "black eye" from failing to meet the system specs of the proposal, which might also be part of the reason IBM backed out.

If NCSA and UIUC and any other universities that are part of the Petascale consortium were to step up and design their own system, perhaps a vendor would be more willing to work as a support player. If things didn't reach the expected level of performance, the vendor wouldn't take the majority of the blame or bad press, since it was the universities that proposed the design.

All of this is business and politics and marketing, which I don't even pretend to understand. It's why I got into computers. (Well, that and I can't draw to save my life.) Right now I'm waiting to see what NCSA is going to do next.

jimdempseyatthecove:

>>This might be the chance to bid a solution that incorporates the forthcoming MIC processors.

Perhaps Intel will step up to the plate (for the challenge).

The integration of these (MIC) processors into a complete system will be as critical, if not more so. Not all large problems have the same type of memory access pattern: while some have good locality of data to the core, others have distributed data access issues. That requires memory switching capability and/or fast transport of data, not to mention massive storage capacity. Also, selection of an operating system, or from my viewpoint the creation of an entirely new operating system (phase 2), may be required to take complete advantage of the hardware. IMHO the entire system should be real-time (hot-plug) expandable, with a controlling OS that presents a virtual Petascale-like system running the O/S of choice for the researcher. When the problems are large, the virtual system has more resources. When the problems are not so large, then more researchers have access to the system. This system will likely have both homogeneous and heterogeneous characteristics, and the O/S will need to efficiently reallocate these resources according to the needs of the researchers. I am sure this was all laid out in the proposal (RFP).

jimdempseyatthecove:

>>They return the $30M they've been paid and get their deployed equipment back. Was this how the contract was structured?

Well then, did the $30M of deployed equipment meet the performance requirement?

If so, then the center has a good argument for keeping the equipment. I imagine that IBM did not disclose whether a) $30M was significantly less than it actually cost to produce the equipment, or b) they found a buyer who would pay significantly more than the $30M they got for the equipment (and installation, ...).

If not, then IBM made a bid that they could not fulfill, resulting in the second-place bidder missing out and in the University (and other contributors) having built a rather expensive computational research facility with no computational capability.

In either case, it may be time for the lawyers to get involved (I hope it doesn't come to this).

An alternative would be for the University's "out-of-box" thinkers to put their heads together and devise a plan to construct their own system for under $30M (as well as reopen the bid). While these thinkers would be lacking the experience of the engineers at IBM this can be a blessing because they can operate outside constraints that the IBM engineers are held to. We will have to see what the university will do next.

Clay B.:

Jim - I think the decision was made more on IBM's bottom line, not on the fact that research projects can't be known to be cost-effective at the outset. The details of the hardware and other requirements may have been agreed upon at the initial signing of the contract, but the hardware wasn't actually available at that time. IBM figured to have something that would meet specs by the time it was needed. Now that the deadline is looming, if those innovations aren't forthcoming or are vastly more expensive than expected, I can understand IBM's reluctance to continue. I don't condone it; I just understand it.

I think I'm most surprised that IBM could get out of the contract so easily. They return the $30M they've been paid and get their deployed equipment back. Was this how the contract was structured? Sweet for them. (One thing I think I've learned from Judge Judy is that a (written) contract is binding unless there is a written amendment changing the terms.)

jimdempseyatthecove:
Ahumm, since when has a research project been cost-effective? Cost prohibitive - true. Cost effective - not predictable. I.e., you do not know what will come out of a research project when you begin it, and you don't know the value of the research - or of the branches it takes along the way.

Jim Dempsey
