Multiple Processors vs. Multiple Cores?

Maybe I'm wrong; I'm not a computer engineer or programmer, so forgive my limited knowledge and understanding. As I understand it, multi-core processors basically channel two or four processes through different pipelines at the same clock speed.  Sorry if this sounds stupid...

My first personal computer was a Compaq Contura.  If I recall correctly, it was either pre-Pentium or from the first Pentium generation.  These days there are multiple cores on the same processor, but for some reason I was thinking of my old VAIO LX-900 desktop, which had a Pentium III (Tualatin) running at 1.2 GHz.  I loved that computer because it had just one processor with one core, yet it was amazingly fast at running multiple programs while processing multiple commands.

It got me thinking:  What are the possibilities of a personal computer with multiple processors running at different speeds?  Would it process commands and instructions faster than multiple cores?

What if there were multiple single-core processors, each running at a different speed, with the operating system determining the complexity and resources required by each command and instruction and routing it to the appropriate processor?

I was thinking of a standard-deviation bell curve.  In most personal computing, the two ends of the spectrum are background functions that need minimal processing power and extremely heavy loads, like graphics rendering, that need maximum processing power; those sit at the tails of the bell curve.  Most everyday software falls within the middle two deviations.

What if there were 2 or 4 single-core processors running at different speeds, designed to maximize throughput by separating commands and instructions into four types; like cooking a steak: black & blue, rare, medium-rare, and well-done.  Obviously the black & blue tasks require minimal processing power, while the well-done tasks put a heavy load on the processor.  Would there be a possibility of working with Mac OS X and Windows 8 to do some R&D?
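If it helps to picture the idea, here is a toy Python sketch of such a four-speed dispatcher.  The core-class names, load thresholds, and task numbers are all made up for illustration; this is not how any real scheduler or hardware works:

```python
# Hypothetical dispatcher: route each task to the slowest core class that
# can handle its estimated load, like matching steak doneness to heat.
# Thresholds and task loads below are invented numbers for illustration.
CORE_CLASSES = [
    ("black & blue", 10),          # background chores: minimal power
    ("rare", 100),                 # light everyday software
    ("medium-rare", 1000),         # heavier everyday software
    ("well-done", float("inf")),   # graphics rendering: maximum power
]

def pick_core(estimated_load):
    """Return the first (slowest) core class whose limit covers the load."""
    for name, limit in CORE_CLASSES:
        if estimated_load < limit:
            return name

tasks = {"clock widget": 3, "web browser": 250, "3d render": 50_000}
assignments = {task: pick_core(load) for task, load in tasks.items()}
print(assignments)
```

The interesting design question this sketch glosses over is the hard one: estimating a task's load *before* running it, which is exactly what an operating system would have to do.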

Thanks for Listening.

What distinctions are you making from the 'nice' command (for adjusting the priority of various tasks)?

The latest CPUs have provisions for reduced power states for various threads; a first step might be to run lower-priority tasks at reduced power.
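For a concrete look at the 'nice' mechanism mentioned above, here is a minimal Python sketch (assuming a Unix-like system, where `os.nice` is available and this process starts at the default niceness):

```python
import os

# Read the current niceness without changing it (increment of 0), then
# lower this process's scheduling priority by raising its nice value.
# A "nicer" process yields the CPU to higher-priority work, and a
# power-aware kernel may also choose to run it in a lower power state.
before = os.nice(0)
after = os.nice(5)   # unprivileged processes may only raise niceness
print(before, after)
```

The same effect from a shell is just `nice -n 5 some_command`; lowering niceness again (back toward 0 or negative) normally requires elevated privileges.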

The current MIC isn't particularly well adapted to supporting multiple unrelated tasks.

There are no stupid questions, just stupid answers and I'll try to make this not one of those ;-).

Though your primary question is about processors versus cores, it seems a secondary question is "how can I get things to work as fast as my old (pre-Pentium) computer?"  The fact is that processing demands have evolved with the complexity and features of our software, and that old computer of yours might be hard pressed to run modern programs because of all the extra work they do behind the scenes.  A faster processor gets everything done sooner, so given the choice (which may be complicated by concerns over battery life and similar features), a faster processor can do everything a slower processor can do and then go idle, so generally faster is better.  I'm not sure I could make a case for including slower processors, assuming they fit in the same power envelope.

Regarding processors versus cores: we ran into a technological barrier, commonly called the "power wall," which means we can't keep turning up the clock on a single processor, though if we could, it would make a lot of software designs a lot simpler.  But we can't, because we can't effectively cool them, so we at least need to go to multiple processors.  But multiple processors on separate chips impose their own barriers to performance: the chip drivers have to be bigger to push signals off one piece of silicon onto another, and everything runs a little slower.  Putting those processors on a single chip and sharing the resources (memory and other controllers) between them, separating the processors into "core" and "uncore" sections, reduces the delays inherent in multiple discrete processors and further speeds the communication between them.  But all this imposes a constraint on programmers: we have to design algorithms that split the work between multiple processors, and that's what we struggle with these days, to provide the performance to run modern software fast enough that the new features operate in the same time that older, simpler programs used to take.

I hope I'm not stupid to say this... but I looked at the traditional layout diagram of a motherboard, and I wondered whether the following would be a faster and better use of RAM in general, IF the following is TRUE:

* Each of the vital components of a motherboard goes through either the Northbridge or the Southbridge.  Each of these components requires some sort of RAM that is controlled by the Northbridge memory controller hub.

* What about the possibility of giving each vital component of the motherboard its own designated RAM?  Instead of all of them sharing the same RAM slots, with data transferred in and out through the different components, wouldn't it be faster if each component could read/write its own designated RAM?

I am imagining that for the graphics card/controller, higher-speed, higher-capacity synchronous RAM would be great for rendering and gamers.  For the Southbridge, moderate-speed, lower-capacity asynchronous RAM would be sufficient?  After all, RAM is relatively expensive...?

This goes back to my previous post of 'what if' there were some development with dual processors separately controlling the Northbridge and the Southbridge, provided that the BIOS and the OS could initiate and differentiate which processor to send instructions to.  I do think that dual processors (each with multiple cores) might be the future of PCs?

Thanks for listening (... reading).

>>>What about the possibility of giving each vital component of the motherboard its own designated RAM?  Instead of all of them sharing the same RAM slots, with data transferred in and out through the different components, wouldn't it be faster if each component could read/write its own designated RAM?>>>

What do you mean by vital components?  Are you referring to the various expansion cards and the Northbridge/Southbridge?

>>>I am imagining that for the graphics card/controller>>>

The GPU has its own very fast memory.  One kind of performance bottleneck relates to the interaction between CPU and GPU, when the processor writes effects data to MMIO and waits for the GPU to complete, and vice versa.  I do not think the CPU (host) can directly read/write GPU memory; I think some kind of command processor reads the data sent by the CPU via MMIO (with the help of the driver) and translates it into the GPU's internal instruction and data streams.
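A toy Python model of that command-submission flow may help; this is purely illustrative (real GPU ring buffers, doorbell registers, and drivers are far more involved), with a plain queue standing in for the MMIO-mapped command buffer:

```python
from collections import deque

# Simplified model of the flow described above: the host CPU does not
# touch GPU memory directly; it appends command packets to a queue (our
# stand-in for the MMIO ring buffer) and the GPU's command processor
# drains the queue and translates packets into internal work.
command_queue = deque()

def cpu_submit(cmd):
    """Host side: write a command packet into the shared queue."""
    command_queue.append(cmd)

def command_processor_run():
    """GPU front-end: fetch packets and 'execute' them in order."""
    results = []
    while command_queue:
        cmd = command_queue.popleft()
        results.append(f"executed {cmd}")
    return results

cpu_submit("clear_buffer")
cpu_submit("draw_triangle")
print(command_processor_run())
```

The bottleneck mentioned above shows up in this model as the hand-off itself: whenever one side has to wait for the other to drain or fill the queue, both the CPU and the GPU lose time at the boundary.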



I guess I'm trying to suggest the possibility of 'specialization' of RAM for the different components that require memory.

I understood you.

You simply would like to eliminate the main-memory MMIO space that every hardware device has?
