Stacking digital books on virtual shelves with binary load lifters

Well, I guess we've got an answer to one of my nagging questions: what can you do with an 80-core processor that doesn't involve massive amounts of linear algebra? A recent University of Chicago report on uses of its resident 260-processor Linux-based server cluster, known as Teraport, notes several projects by academics and researchers that have no direct connection to the physical sciences.

Many of these efforts involved data mining and processing tens of millions of words from plays and other literary sources. One lexical study looked for, and found, distinct differences in word use between male and female authors, and between American and non-American authors. To further support such research, a consortium that includes the University of Chicago will be participating in the Google Book Search project, with the goal of digitizing up to 10 million books.
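The report doesn't say how the lexical study was implemented, but the core idea--comparing relative word frequencies across two corpora--can be sketched in a few lines. This is a toy illustration with made-up corpora and a hypothetical `distinctive_words` helper, not the researchers' actual method:

```python
from collections import Counter

def relative_freqs(text):
    """Tokenize crudely on whitespace and return each word's share of the total."""
    counts = Counter(text.lower().split())
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

def distinctive_words(corpus_a, corpus_b, top=3):
    """Rank words by how much more frequent they are in corpus_a than in corpus_b."""
    fa, fb = relative_freqs(corpus_a), relative_freqs(corpus_b)
    vocab = set(fa) | set(fb)
    return sorted(vocab, key=lambda w: fa.get(w, 0) - fb.get(w, 0), reverse=True)[:top]

# Toy stand-ins for the tens of millions of words in the real study.
corpus_a = "the heart the heart spoke softly of the heart"
corpus_b = "the ship the ship sailed swiftly past the ship"
print(distinctive_words(corpus_a, corpus_b, top=1))
```

Scaled up to millions of words, the counting step is embarrassingly parallel--each cluster node can tally its own slice of the corpus and the partial counts merge at the end--which is exactly the kind of workload a shared cluster like Teraport handles well.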

Today literature professors need a few nodes of a shared cluster; tomorrow they will have all the cores they need on their desktops. Storage is cheap as well, with 1 GB of disk costing less than a paperback copy of Great Expectations. In a decade or so, I expect we will have the technical and algorithmic know-how to use the extra cores to data mine communication media beyond text, such as voice, video, and music.

While I may not be able to see the direct effect or applicability of such research projects--being that I'm more of a hard-science-oriented guy--I do see the sale of many-core processors in the near future. While not as big as the home-user market, academia (professors and graduate students) is now, and will remain, a sizable and important share of the overall market.

(BTW, my first job was programming binary load lifters.)