I read the article "Top problems with the TOP500" from Bill Kramer, NCSA Deputy Project Director for Blue Waters, and thought he had some compelling arguments. It's always been a big twice-yearly event to see what systems have been added to the list and how many Flops the new list toppers have been able to achieve. (If you're interested in a bit of history about the list and what the HPC community will need to do in the march to Exascale systems, watch the video of Jack Dongarra's talk "On The Future of High Performance Computing: How to Think for Peta and Exascale Computing.") As Bill notes, the TOP500 list is also a good marketing tool for companies to point out the usage of their latest technologies and how many entries on the list they support. In the list released at SC12, seven systems sported the Intel Xeon Phi coprocessors.
Has the Linpack benchmark of dense matrix computation run its course as a viable measure? This is one of the big problems in Kramer's article. (Dongarra acknowledges the controversy in the first few minutes of his video.) Kramer points out that computation in the HPC space has gone beyond floating-point linear algebra operations. There's still a lot of that going on, but using that type of calculation as the sole benchmark of a system's strength does a disservice to scientists who need large compute platforms to solve sparse linear algebra problems, analyze social networks, or align protein sequences. Kramer argues that a more well-rounded benchmark would give a broader picture of how useful a system is for the many different types of computational workloads being run today.
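To see why a flops-only yardstick can mislead, here's a toy sketch (not a real benchmark, and the matrix and its sparsity pattern are invented for illustration) contrasting a Linpack-style dense matrix-vector multiply, which is flop-rich, with a sparse matrix-vector multiply in CSR format, which does far fewer flops and is typically memory-bound:

```python
def dense_matvec(A, x):
    """y = A @ x over a dense n x n matrix: n*n multiply-adds."""
    n = len(A)
    return [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]

def csr_matvec(data, indices, indptr, x, n):
    """y = A @ x over a CSR-stored matrix: one multiply-add per nonzero."""
    y = [0.0] * n
    for i in range(n):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# A 4x4 matrix with only 5 nonzeros, stored both ways.
n = 4
A = [[2.0, 0.0, 0.0, 1.0],
     [0.0, 3.0, 0.0, 0.0],
     [0.0, 0.0, 0.0, 0.0],
     [4.0, 0.0, 0.0, 5.0]]
data    = [2.0, 1.0, 3.0, 4.0, 5.0]   # nonzero values, row by row
indices = [0,   3,   1,   0,   3]     # column index of each nonzero
indptr  = [0, 2, 3, 3, 5]             # where each row starts in data

x = [1.0, 2.0, 3.0, 4.0]
assert dense_matvec(A, x) == csr_matvec(data, indices, indptr, x, n)
# Same answer, but the dense kernel does n*n = 16 multiply-adds while
# the sparse kernel does only nnz = 5. A flops-per-second score rewards
# the former and says little about how fast a machine runs the latter,
# which is dominated by irregular memory access, not arithmetic.
```

That gap between arithmetic-bound and memory-bound kernels is exactly what a single dense-solve number can't capture.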
One way to replace the TOP500 list would be to put forth an alternative, like the Sustained Petascale Performance benchmark (which could give rise to an "SPP500") that Kramer mentions. You won't be able to supplant the TOP500 right away, since the new metric won't be accepted initially. However, if I can run the benchmark on my laptop and get into the SPP500, I've got bragging rights, a cool certificate to display in my cube (marketing hype will be essential to get the word out about the new benchmark and its importance), and the envy of HPC centers that didn't participate. Eventually I will be displaced by HPC machines whose owners/users know they can outperform me, and then the dance of one-upmanship can begin. In 5-8 years you will have the "Sustained Exascale Performance (SEP) 50" that will expand as machines reach the next level of performance. I imagine that some corporate sponsor of the list, whose machines perform well on several different types of computations and not just dense linear algebra, could be a driving force for the definition of the benchmark codes and the maintenance of the list.
There are already a couple of specialized ranking lists: the Graph 500 and Green 500. I can see other specialized lists being put forth and then some meta-list that could take several of the performance list rankings and pull them all together into a single overall ranking.
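The meta-list idea could be as simple as a weighted average of a system's rank on each specialized list. Here's a hypothetical sketch (the system names, list memberships, and weights are all invented for illustration, and picking the weights is precisely where the controversy would live):

```python
def meta_rank(lists, weights):
    """Combine several ranked lists into one overall ordering.

    lists:   {list_name: [system names, best first]}
    weights: {list_name: relative weight for that list}
    Returns systems sorted by weighted average rank (lower is better).
    """
    scores = {}
    total_w = sum(weights.values())
    for name, ranking in lists.items():
        w = weights[name]
        for pos, system in enumerate(ranking, start=1):
            scores[system] = scores.get(system, 0.0) + w * pos
    return sorted(scores, key=lambda s: scores[s] / total_w)

# Three toy lists ranking the same three hypothetical systems.
lists = {
    "TOP500":   ["A", "B", "C"],
    "Graph500": ["C", "A", "B"],
    "Green500": ["B", "C", "A"],
}
weights = {"TOP500": 0.5, "Graph500": 0.3, "Green500": 0.2}
print(meta_rank(lists, weights))  # system "A" wins under these weights
```

Shift the weights and a different machine comes out on top, which is the whole argument about the "true" number one in miniature.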
As I wrote that last paragraph, I thought about the current state of college football, where the system of BCS rankings is based on the same idea. Such a meta-list ranking would resolve some of the questions about the single-benchmark lists used today, but, like the BCS rankings used to determine the number one team in the country, there will be controversy about the choice of lists and the weight given to each in determining the "true" top-performing HPC system. I suppose we could stage a few rounds of playoffs (like college football), but that can get messy and complicated, and it distracts from the whole point of HPC: doing better science faster.