Advanced Computer Concepts for The (Not So) Common Chef: First Some Terminology Part 2

Published: 03/25/2015, Last Updated: 03/25/2015

OF COURSE, I KNOW WHAT A THREAD IS… DON'T I?

Now that we know what a core is, let’s dive into another source of confusion.

This section gets a little deeper into techno babble than I wanted for this series of blogs. If you are so inclined, my gourmet readers, you can either skip or read on. I believe the rest of the blogs can be understood with or without this little aside. But for those of you who are already familiar with threading, it may give you more insight than would be the case otherwise.

Before getting into the guts of the analogy, let's discuss what a 'thread' is. You see, the normal old thread that we computer scientists know and love is not the same as a thread when talking about the execution of a program in a core. For example, hyperthreading does not really refer to the ability of one core to execute the two threads our program spawned (say, MyProgram.ThreadA and MyProgram.ThreadB). (I'll get to hyperthreading a little later in this series.) Don't get me wrong; they are related. When a computer architect says thread, he's referring to a "thread of execution," not a program. At the hardware level, there are no programs, only a sequence of instructions the hardware executes, hence the "thread of execution."

(I’ll be using “HW-thread” as shorthand for “hardware thread” and “SW-thread” for “software thread.”)
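To make the SW-thread side of this concrete, here is a minimal sketch (not from the original post) of a program spawning two SW-threads, in the spirit of the hypothetical MyProgram.ThreadA and MyProgram.ThreadB above. The function and variable names are my own illustration:

```python
import threading

# Hypothetical stand-ins for MyProgram.ThreadA and MyProgram.ThreadB:
# to the OS, each is just an independent sequence of instructions to schedule.
results = []

def thread_a():
    results.append("ThreadA ran")

def thread_b():
    results.append("ThreadB ran")

# Spawn the two SW-threads. Which HW-thread each runs on, and in what
# order their instructions execute, is entirely up to the OS scheduler.
a = threading.Thread(target=thread_a)
b = threading.Thread(target=thread_b)
a.start()
b.start()
a.join()  # wait for both SW-threads to finish
b.join()

print(sorted(results))  # ['ThreadA ran', 'ThreadB ran']
```

The program only says *what* should run concurrently; it has no say over when or where the scheduler actually runs each SW-thread.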

Hopefully, some of my less-than-professional illustrations will help.

Figure SWTHREAD shows two threads (ThreadA and ThreadB) spawned by ProgramOne, and a single thread that corresponds to a second program, ProgramTwo.

[Image: software threads]

Figure SWTHREAD. How programmers think two programs execute on a single core machine with hyperthreading. (IP is the Instruction Pointer, a register that tells the computer which instruction to execute next.)

This is a pretty familiar situation. You have a program running and an Instruction Pointer (IP) which points to the instruction that will be executed next. Looking more closely, you’ll see some strange things going on with the IP that some might think are typos. For example, there are three IPs even though our hardware can execute only two threads at a time. In addition, the IPs are offset, whereas we usually think of them as being lined up. That’s because each SW-thread has its own IP value associated with the context it is executing in. In this case, a “context” is an operating system (OS) context, including registers and other state associated with passivating and activating a SW-thread. (‘Passivating’ is putting to sleep; ‘activating’ is waking up.) This means that the IP values are associated with each SW-thread’s context, not with the actual hardware. There can be, and are, as many IPs as there are SW-threads executing on the processor.

Now let’s look at Figure HWTHREAD1. In the top left corner is a miniature version of Figure SWTHREAD, allowing us to easily compare how a hardware thread differs from a software thread. How the hardware executes those SW-threads is shown in Figure HWTHREAD1. Notice that slices of the SW-threads are distributed across the two HW-threads by the operating system on an opportunistic basis. This points out the difference between a SW-thread and a HW-thread. SW-threads are the parts of the program that the programmer intends to execute in parallel; that’s why the number of SW-threads can be anything from one to hundreds. In contrast, a HW-thread is the execution of a sequential series of instructions by a processing unit. The number of HW-threads per core is fixed, though some modern processors have a switch that can change this value slightly. (Hyperthreading is a mode whereby one core can execute two threads instead of just one, but we’ll talk about that later.) These instructions can be, and often are, from different SW-thread contexts, including that of the kernel. The kernel is what does the actual switching between SW-threads, storing one SW-thread’s context and loading another’s.
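You can see this asymmetry for yourself: the HW-thread count is a fixed property of the machine, while the SW-thread count is whatever the programmer chooses. A small Python sketch (my own illustration, with hypothetical names) that spawns far more SW-threads than there are HW-threads:

```python
import os
import threading

# The number of HW-threads is fixed by the hardware (e.g., cores x 2
# with hyperthreading). os.cpu_count() reports the logical count the
# OS sees.
hw_threads = os.cpu_count()
print(f"HW-threads available: {hw_threads}")

# The number of SW-threads is up to the programmer. Here we spawn 100
# SW-threads; the OS time-slices them across the few HW-threads it has.
counter_lock = threading.Lock()
finished = 0

def worker():
    global finished
    with counter_lock:  # serialize updates to the shared counter
        finished += 1

threads = [threading.Thread(target=worker) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"SW-threads completed: {finished}")  # 100, however few HW-threads exist
```

All 100 SW-threads complete even on a machine with only two HW-threads, because the kernel keeps swapping SW-thread contexts on and off the HW-threads, exactly as Figure HWTHREAD1 depicts.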

[Image: hardware threading]

Figure HWTHREAD1. Mapping software threads to hardware threads

Ah, I know what you are thinking: What does this have to do with our (not so) common chef and his kitchen? Well, it does only if our chef is familiar with the concept of SW-threading from his Intel® Edison IoT (Internet of Things) hobby or his small, on-the-side Android* cooking app business. This whole thing about SW-threads versus HW-threads really only has the potential to confuse computer programmers.

ADDENDUM: ANOTHER WAY OF MAPPING A HARDWARE THREAD TO A SOFTWARE THREAD

Here’s a little addendum for those who want to really have their mind blown. Figure SWTHREAD shows SW-threads from a programmer’s perspective. Figure HWTHREAD1 shows how the program’s SW-threads map onto the processor’s HW-threads.

Figure HWTHREAD2 shows how one of the processor’s HW-threads maps to the software threads (versus SW-threads mapping to HW-threads). Though the difference seems simply a matter of semantics, you can see that in reality it is dramatic. The HW-thread is the wavy blue line that jumps back and forth between the different processes. Don’t worry, I’m not going to explain it. If the explanation really interests you, just leave a comment and I’ll dazzle and delight.

[Image: mapping threads]

Figure HWTHREAD2. Tracing a hardware thread through software threads (plus the operating system)

NEXT: Back to the kitchen analogy and how a modern processor works!

FOOTNOTE:

+The operating system (OS) runs the jobs that must be done so programs (e.g., spreadsheets) can run. Think of those jobs as managers and support staff, people who don’t sell or make stuff, but are nevertheless necessary for the business to run.

Similarly, the kernel is the very core of the operating system that performs the key functions that must be there for anything to work. For example, you can probably do without many of the managers, but not the guys who actually keep the manufacturing equipment working (but are not involved with creating the product itself).
