Recent innovations in data warehousing and business analytics dramatically increase the capability and potential value of today’s massive, diverse, and often fast-moving data flows. Companies now perform interactive queries and predictive analytics using all available data, including operational data and the huge amounts of poly-structured data available from logs, social networks, sensors, and many other sources. In this white paper, we define a practical, cost-effective infrastructure for supporting data-driven decision-making on an enterprise scale.
The potential of big data analysis is enormous across virtually every business sector. The Intel IT organization has implemented use cases delivering hundreds of millions of dollars in business value. This paper discusses a few of those use cases and the technologies and strategies that make them possible. It also defines the architecture we use for big data analysis and provides an in-depth look at one of the most important components—an Apache Hadoop* cluster for storing and managing large volumes of poly-structured data.
I had an interesting question come across my desk a few days ago: “Is it still worthwhile to understand T-states?” My first response was to think, “Huh? What the heck is a T-state?”
Doing a little more research, I discovered that, yes, there is something called a T-state, and no, it really isn’t relevant anymore, at least for mainline Intel® processors.
Let me say this again: T-states are no longer relevant!
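For context, a T-state (throttle state) reduces heat by duty-cycling the CPU clock rather than lowering its frequency or voltage; modern processors handle this job with P-states and C-states instead. On Intel processors this legacy mechanism is controlled through the IA32_CLOCK_MODULATION MSR. The following is a minimal sketch, assuming the bit layout described in older Intel documentation (bit 4 enables on-demand modulation, bits 3:1 select the duty cycle in 12.5% steps); it decodes a raw MSR value rather than touching hardware:

```python
def decode_clock_modulation(msr_value: int):
    """Decode a raw IA32_CLOCK_MODULATION value into T-state settings.

    Returns (enabled, duty_cycle_percent). Bit layout is an assumption
    based on older Intel SDM descriptions:
      bit 4    -> on-demand clock-modulation enable
      bits 3:1 -> duty-cycle field; value N throttles to N * 12.5%
                  (0 is reserved, reported here as None)
    """
    enabled = bool(msr_value & (1 << 4))       # bit 4: modulation enable
    duty_field = (msr_value >> 1) & 0b111      # bits 3:1: duty-cycle select
    duty_percent = duty_field * 12.5 if duty_field else None
    return enabled, duty_percent

# Example: 0b10110 -> enable bit set, duty field 0b011 -> 37.5% duty cycle
print(decode_clock_modulation(0b10110))  # (True, 37.5)
```

Reading the actual MSR (e.g. via /dev/cpu/*/msr on Linux) requires root privileges; the decoder above is just an illustration of why T-states were so crude—throttling the clock in coarse 12.5% steps—compared with the fine-grained frequency and voltage scaling that replaced them.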
I attended the Cloud Expo in New York City at the Javits Center in June. The attendees were a mix of web hosting companies, web developers, software developers, hardware developers, and operating system developers. The event sponsors included Intel®, IBM*, Citrix*, Rackspace*, Oracle*, Verizon Terremark*, Akamai*, and many more. Everyone came to learn and share, and we agreed that the development cycle for new software and products was quicker than expected.