CPUs: Don't we need a breakthrough in power-usage?
Posted: Mon Nov 29, 2004 12:03 pm
I don't know why, but I was thinking along the lines of the old joke of Bill Gates saying
"If GM had kept up with technology like the computer industry has, we would all be driving $25 cars that get a thousand miles a gallon."
which was supposedly countered by a GM reply of "Yes, but would you want your car to crash twice a day?"
Well, I was thinking about this for some reason, and it occurred to me to wonder: Have processors really advanced much?
I mean, sure, we can make the traces smaller, run them a LOT faster by orders of magnitude of what existed just a few years ago, and so on.
But these days, heat and power usage are becoming an issue.
I stumbled across something talking about cooling, where the 75MHz Pentium uses X watts of power whereas the P4 uses, naturally, a LOT more.
Doing the math, I noticed that the P4 uses slightly less wattage per MHz, which is good I suppose. But it seemed like only an incremental improvement, much like how a vehicle of a given size with a given size engine can get better MPG these days than in the past, but only by, say, 10 to 20 percent.
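The "doing the math" part is just watts divided by clock speed. Here's a quick sketch of that comparison; the wattage figures are illustrative ballpark numbers I'm plugging in, not exact TDPs from a datasheet:

```python
# Rough watts-per-MHz comparison. The numbers here are ballpark
# illustrations (original 75 MHz Pentium rated somewhere around 8 W,
# a 3 GHz Pentium 4 around 82 W), not precise datasheet values.
pentium_75 = {"mhz": 75, "watts": 8.0}
p4_3000 = {"mhz": 3000, "watts": 82.0}

def watts_per_mhz(chip):
    """Energy cost per unit of clock speed."""
    return chip["watts"] / chip["mhz"]

print(f"Pentium 75 MHz: {watts_per_mhz(pentium_75):.3f} W/MHz")
print(f"P4 3.0 GHz:     {watts_per_mhz(p4_3000):.3f} W/MHz")
```

The W/MHz number does come out lower for the P4, but the absolute power draw is way up, which is the whole problem.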
AMD processors, I realize, perform the same as a given P4 but at a much lower clock speed. This might be the equivalent of getting more power out of the same size engine, rather than going to a bigger displacement engine.
BOTH, however, produce a buttload of heat and draw a lot of power.
I also have a vague recollection of a few years ago that the processors Apple used for the Mac were like AMD in that they got more done with less MHz, but unlike either AMD or Intel in that they drew much less power. I wasn't sure if power per MHz was lower as well, or not.
Is that true? If so, why?
So, I guess my big question is: given that we know how to make the processors go faster and faster, and given that we know that a design difference such as AMD's versus Intel's will allow you to do much more with less MHz, shouldn't there be research and development on coming up with a way to make the CPU more energy efficient as well?
Not only using less actual power, but as a result also producing much less wasted heat.
I have no idea how to do this, of course, but it was something that I thought maybe SHOULD be done.
Posted: Mon Nov 29, 2004 4:47 pm
I know nothing about the Apple machines that you mentioned, so I cannot speculate about how they achieved whatever performance improvement they achieved. Basically, there has not been any major breakthrough in electronics applied to computers since the invention of the transistor.
They all use essentially the same components to create logic circuits. Because of that, and the fact that they also all use similar logic circuits, the end result in terms of power consumption is pretty much the same also.
The next quantum jump in efficiency will probably come as a result of either switching from electricity to light as a medium of carrying information, or as a consequence of advances in nanotechnology. At least that is my guess, provided always that Mr. B. doesn't lead us into a nuclear war first.
Posted: Wed Dec 01, 2004 8:17 am
Well, I can proudly say I did NOT vote for him.
On the other hand, I'm pretty sure he's not THAT stupid.
Posted: Wed Dec 01, 2004 12:46 pm
I think, like you said, the companies aren't researching lower power usage very much, and I think the reason is that they are both steadily chasing higher clock speeds and putting their efforts toward that. It is much the same as the hotrod engine wars of the late sixties: gas was cheap, and big sales numbers meant pretty much one thing, producing a car that made more horsepower than the other guy's. Like back then, I think it would take an energy crisis to make people really worry about it. As far as the component industry is concerned, it is much easier to let users buy bigger and bigger power supplies and put the ball in their court, and I think it would have to be an across-the-board effort as well. For instance, look at the power high-end graphics cards use nowadays; many require a separate dedicated power connector.
I think the technology is there to some extent, because obviously mobile CPUs use less power and run a bit cooler, but there is a cost issue involved, which can be seen in the fact that slower-clocked laptops running the Centrino etc. cost nearly as much as those considered "desktop replacements." I don't think people in general would be willing to pay much of a premium for a CPU that uses less power, unless it gets to the point where running a computer creates the kind of electrical drain of running a dryer or microwave oven, lol.
It is interesting to me that the mobile CPUs usually have lots more headroom and are very overclockable, as is the case with the K6-3+ and the mobile Athlon XPs, probably because they run cooler to begin with, so that is an advantage. But there aren't enough of us overclockers to really drive the market segment as a whole.
Your point about heat is a very valid one. Back in the K6-2 days, many was the time in testing a system that I ran the CPU bare, without a heatsink, for a while. You couldn't get away with that for more than a few seconds on any CPU I know of nowadays.
People do not like to hear loud fans running at 7,000 rpm, and I don't see people putting in much bigger Panaflo-type fans, since 120mm fans are already being used in some cases for graphics cards and CPUs, not just for exhaust and intake. I for one am surprised that a lot of systems by this point don't already come with refrigerated Freon-type coolers. I say the industry will be forced to look at power consumption for heat reasons first.
Posted: Sat Dec 04, 2004 6:02 pm
His Royal Majesty King V wrote:I also have a vague recollection of a few years ago that the processors Apple used for the Mac were like AMD in that they got more done with less MHz, but unlike either AMD or Intel in that they drew much less power. I wasn't sure if power per MHz was lower as well, or not.
Not sure about that any more. The latest high end Apples had to use liquid cooling.
Posted: Sat Dec 11, 2004 5:05 pm
The whole premise that research is not being done to lower power requirements is plain wrong. This is now one of the most important considerations for AMD and Intel. It might be hard to believe considering their level of success, but it is currently the biggest problem they face and they are dedicating plenty of money to help solve it.
The Pentium 4 is a horrible chip, and should not be compared with the K8 as an equal. It isn't; it is vastly inferior overall, although it is better at some things. Intel totally screwed up with the Prescott version, and has essentially decided to kill the Pentium 4 while further developing the Centrino (a Pentium III offshoot). Intel didn't understand the power problem at .09 microns as well as they would have liked to, and increased the transistor count dramatically on the .09 shrink (the Prescott core) to allow for higher clock speeds. The problem is, it uses up so much power that you can't approach the limits the transistors can work at; it generates too much heat and draws too much power. So, the speed limit for the Pentium 4s currently out IS their power usage. In fact, 90nm Pentium 4s take as much or more power than their 130nm (Northwood) brothers at the same clock speed. A really bad move, and Intel is paying for it; several weeks ago they announced they wouldn't be hitting 4 GHz with the current Pentium 4 core.
The Athlon 64, by comparison, uses much less power at 90nm than it did at 130nm, simply because it is just a shrink. No extra transistors for a bigger L1 cache (which is also slower; what was Intel thinking?), no wasted transistors for extra pipeline stages (which lower the work done per cycle), etc. So the transistor count stayed pretty much the same, rather than nearly doubling as it did for Intel. As a result, it uses much less power and generally outperforms the Pentium 4. It is a much smaller die too, so it is cheaper to make (although Intel uses 300mm wafers instead of 200mm wafers, which offsets some of the difference in die size).
The Athlon 64, and even Intel's chips, all include some form of power savings. Intel's chips even slow down if they get too hot, and do it more elegantly with the D0 stepping. Obviously heat is a big problem, and they have to incorporate these features because of it. AMD chips have a feature that lowers the voltage and clock speed of the processor when it notices it is not being used much (which is most of the time for most people). These are desktop chips I am talking about, too, not mobile parts.
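That lower-the-clock-and-voltage-when-idle trick is the key idea. Here's a toy sketch of how a demand-based governor like that works; the load thresholds and frequency/voltage pairs are made up for illustration, not AMD's or Intel's actual tables:

```python
# Toy sketch of demand-based frequency/voltage scaling, the idea
# behind features like AMD's Cool'n'Quiet. All numbers are invented
# for illustration; real chips use vendor-defined P-state tables.
P_STATES = [  # (frequency in MHz, core voltage)
    (800, 0.9),   # low-power state: slow clock, low voltage
    (1800, 1.2),  # middle state
    (2400, 1.4),  # full speed, full voltage
]

def pick_p_state(cpu_load):
    """Pick the lowest state that handles the load (0.0 to 1.0)."""
    if cpu_load < 0.3:
        return P_STATES[0]  # mostly idle: drop clock AND voltage
    elif cpu_load < 0.8:
        return P_STATES[1]
    return P_STATES[2]      # busy: run flat out

print(pick_p_state(0.05))  # light desktop use
print(pick_p_state(0.95))  # heavy load
```

Since switching power scales with voltage squared times frequency, dropping both at once saves far more than dropping the clock alone, which is why it's worth doing for a machine that sits idle most of the time.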
As I mentioned earlier, Intel has decided to kill the Pentium 4 off in the relatively near future and use the Centrino instead. The Centrino is all about power savings and is a fine chip. I have been wishing for a couple of years now (or at least it seems that long) that I could get one for my home machine, but they don't make them for desktops. Well, that is changing. Intel is now pushing that into the desktop and will continue to as it usurps the Pentium 4 as their main desktop processor. This is ALL about power consumption/usage. That is why Intel no longer rates their processors with clock speed, by the way. It is a prelude to the Pentium 4 being replaced by the Centrino.
The old clock speed versus work-per-cycle debate has been around forever. I still remember the Z80 versus the 6809 back in the late 70s. It was even more extreme back then, with a 2 MHz 6809 easily outperforming a 4 MHz Z80A, and even outperforming a Z80B (6 MHz). The Alpha was a big clock speed beast initially, with POWER being a "brainiac" chip that did a lot of work per cycle. Eventually they sort of converged, with the degree of difference becoming much less extreme.
Still, you have companies like Intel making bad processors like the Pentium 4 to confuse people. The Pentium 4 was all about marketing; they figured people only understood clock speed. The only problem is, they have this huge processor that gets outperformed by AMD's much smaller one, and they have to walk away from it and go back to less showy but much better technology like the Centrino. They were only competitive with AMD by adding features like Hyper-Threading, but that doesn't work very well with dual cores (since the cores each handle different threads anyway, the advantage is dubious and the overhead excessive for the benefit), and Intel is not going to include it in their dual core processors. AMD smartly avoided hyper-threading, knowing that they would be moving to dual cores.
One last thing about dual cores: the limit for these processors is heat. Each individual core will be clocked considerably lower than in the single core versions. Obviously power is the reason, since the cores should otherwise be able to operate at the same clock speed.
Posted: Sun Dec 12, 2004 6:19 am
I have a Centrino laptop (Dothan core w/ 2MB L2 cache) and I can heartily recommend it. A most, most excellent CPU. Fast yet runs EXTREMELY cool.
Another reason Intel have moved to number ratings (rather poorly executed, IMO) is that to keep up with AMD, Intel have had to bump up the L2 caches to create the 'EE' editions. The average home user hasn't the faintest clue as to what that is or how it affects performance.
Posted: Wed Feb 16, 2005 10:36 am
Sorta bringing this thread back up from the dead.....
As pointed out by Wiggy, the Centrino is an interesting beast from a power-saving perspective.... low power consumption relative to its performance.
Another one was an article I stumbled across today, at Tom's Hardware Guide about the Mac Mini
Now, true, the CPU only runs at 1.25GHz or 1.42GHz. Tom's had a "maxed out" sample.
Most interesting to me was this:
During testing, our Mac mini was able to shine in this respect, drawing a mere 20 watts of power; during DVD playback, this rose to only 28 W.
I was impressed! After all, do any x86 systems at this relatively low speed, aside from Centrino-based ones, achieve this? And this is the power draw of the whole system, not just the CPU.
Not sure how the Apple's CPU compares in work-per-MHz though.
Anyway, I thought it was interesting, to say the least.
There's also, on a sort of side-topic, an article
at Anandtech talking about whether or not the single-core CPU is doomed. Particularly noteworthy to me is that, while die shrinkage increases the useful work you can get per watt, it also causes much greater power leakage. A weird sort of double-edged sword.
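That double-edged sword falls out of the standard first-order CMOS power model: dynamic switching power is roughly activity x capacitance x voltage squared x frequency, while leakage is a static floor that grows as transistors shrink. Here's a sketch with made-up numbers for two hypothetical process nodes, just to show the shape of the tradeoff:

```python
# First-order CMOS power model (textbook approximation). All the
# capacitance, voltage, and leakage numbers below are invented to
# illustrate the trend, not measurements of any real chip.
def total_power(c_eff, volts, freq_hz, i_leak, activity=0.5):
    """Return (dynamic, static) power in watts."""
    dynamic = activity * c_eff * volts**2 * freq_hz  # alpha*C*V^2*f
    static = volts * i_leak                          # leakage floor
    return dynamic, static

# Hypothetical 130nm chip: more switched capacitance, little leakage.
d130, s130 = total_power(c_eff=20e-9, volts=1.5, freq_hz=3.0e9, i_leak=2.0)
# Hypothetical 90nm shrink: less capacitance and lower voltage help
# the dynamic term, but the leakage current jumps.
d90, s90 = total_power(c_eff=14e-9, volts=1.3, freq_hz=3.0e9, i_leak=15.0)

print(f"130nm: dynamic {d130:.1f} W, leakage {s130:.1f} W")
print(f" 90nm: dynamic {d90:.1f} W, leakage {s90:.1f} W")
```

The shrink wins on the dynamic term but loses ground to leakage, which burns power even when nothing is switching; that's why a shrink alone stopped being a free lunch around 90nm.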
Oh well, these are the things that obsess me mid-week.
Posted: Wed Feb 16, 2005 6:26 pm
Off topic; but if Mr. B goes into Iran; and I suspect he intends to; he IS that stupid.