Introduction
Lost my camera. Hard disk died on my laptop. Have eaten enough free cookies to toss them right back. So, I apologize for the lack of graphics, but the pen is mightier than the sword, so none of the "a picture paints a thousand words" taunts. We're going pure prose for this one, baby!
Bad Strategy/Good Strategy – Price Cuts on CPUs
Assuming that every dollar invested in a recession is worth far more than the equivalent in a boom period, Intel is bathing in ass’s milk and partying like it’s 1999, while AMD is, well, counting beans.
So, I sympathize with folks who are very pro-AMD, but I can't lie: cutting prices, cutting spending, and hunkering down is not the best strategy right now.
The fact of the matter is that Intel has positioned itself quite nicely with the Pentium 4 2 GHz, and has enough product SKUs (stock keeping units – I always loved that acronym) to be able to price to its market. Fewer frequency steps for its business customers, and more for the consumer market to get in at different price points.
Heck, if nothing else, 2 GHz gives Intel a few more steps to SKU, or some such idiom.
While we may decry Intel’s business tactics, Intel still has a grip on the Tier One OEMs, and AMD is not going to win them over with price cuts. I mean, that’s the reality.
Don't get me wrong – of course, some of you will – I like cheap, but cheap is not good business policy, recession or no recession. AMD is not going to win a price war against a bigger, better-funded business.
What is even more disheartening is the fact that Intel is spending quite aggressively to build the Pentium 4 brand at the exact time that AMD is cutting back on marketing.
No – that doesn’t apply to the enthusiast.
Yes – that does apply to the volume buyer, for want of a better term.
IDF Fall 2001 – Intel’s Self-Confidence
There is a world of difference between this IDF and the one I attended last Spring. IDF Spring 2001 was a bit of a mess, frankly. This IDF has an air of self-confidence.
This time, we have a better grasp of the vision and strategy on the desktop, and can easily see that a lot of fat has been trimmed from the overall package. It's still Architecting the Digital Universe, an almost painfully grandiose statement, but with the absence of Craig Barrett, it doesn't have that almost disdainful, laid-back approach of the past.
At the last IDF, the communications part of the Intel pitch got a lot of coverage, and exposure, but it was also the weakest, and most difficult to latch onto. When it comes to servers, and desktops, and other clients, boxes if you will, Intel’s in its comfort zone, and so are we.
So, here's the important stuff, and I'll sidestep a lot of the communications strategy because, frankly, I don't know enough about it to be cynical and sarcastic.
10 GHz In The Next 3 Years
Intel’s case for support of RDRAM shows some life – honest, it does. While the company acknowledges that it needs to support DDR, and will be coming out with DDR support in the first quarter of 2002, no sooner, it is also doing a better job of putting forward its RDRAM case. No DDR until 2002, dudes! Not from Intel, at least.
I am coming round to the notion that while there may be some high-minded business strategy regarding Rambus, Intel’s engineers are really the guys championing RDRAM.
Floyd Goodrich, Desktop Platform Manager, Platform Applications Engineering – big ass title – went into it in great, simplistic detail to make sure that we all got it. In a nutshell:
Pentium 4 is operating at 2 GHz, but Intel is going to take it up to 10 GHz in the next 3 years. That’s big ass frequencies. Which means big ass problems for electrical designers and motherboard designers.
RDRAM is the architecture that Intel has determined can best meet the needs of its Pentium 4 roadmap into the future. The company simply thinks that the stability issues around DDR266 and DDR200 have not been addressed by the OEMs. Floyd did admit that the OEMs are smart enough to figure it out, but said that Intel wants a clean solution. You could read that as: Intel wants to control the market, too.
Chiefly, Intel is used to calling the shots, and is just beginning to get around to addressing the whole DDR thing.
Maybe Intel misjudged the adverse impact of transitioning to RDRAM without a middle step. Maybe Intel just likes to have it its own way. Maybe Intel’s engineers were just convinced that RDRAM is the only way to go and that all the other stuff is just short term.
Memory I/O Schizophrenia
The i845 chipset should be officially launched on September 10, there's a first for you. I know that many people feel that it seems pointless to have an SDRAM solution for a Pentium 4, but the minutiae of system performance don't have an impact when someone is buying in bulk for a corporation, or walking through a computer store. The reality is that there's a checklist of items the average shopper is looking for, and the list goes hand-in-hand with a certain price point.
Intel’s just getting in the SKUs it needs for its vast range of customers. The company also claims that there are over 250 OEMs that will have i845 products this year. The impact of DDR is going to be minimal on the Pentium 4 market the remainder of this year, and next year, well, we’ll see.
Intel has to create a success out of RDRAM because it has the memory I/O that is best suited to the way the Pentium 4 executes and works. The Pentium 4 is a source synchronous device, it relies heavily on prefetch prediction, and it has a deep execution pipeline that gets flushed every time there is a misprediction, which weighs heavily on system performance.
Pushing the GHz barrier is critical to Intel’s continued success as a performance leader (apparently, you can do a few more mispredictions and still catch up when you have those extra cycles to play with). Hence, RDRAM provides the best solution for Intel from a strategic point of view. It’s a risky strategy, but necessary. There’s no denying that “alternative” memory I/O interfaces have proven themselves just as clever as RDRAM. In the meantime, there’s a lot that has to go on at the software level for full exploitation of the Pentium 4’s execution pipeline. Code compilers, and OS optimization can only do so much.
Hyper-Threading
For example, we got our first look at something that is targeted at the server segment today, and will eventually find its way on to the rest of the product line. It’s a technology called Hyper-Threading. Intel’s Paul Otellini demonstrated Hyper-Threading at his keynote speech, and claimed it delivered a 30% increase in performance in certain server and technical workstation applications. Hyper-Threading will first appear on the Xeon processor family next year and then filter down the product pipeline.
There’s obviously a lot of work going on within Intel to determine how best to help software developers optimize the performance of code on the new CPUs. Either this implies an uphill struggle with legacy attitudes, or a gradual shift in developer thinking. One thing is for sure, there’s a learning curve associated with figuring out how to optimize system performance on new Intel architectures, and in turn, complexity is increasing to the point where it’s difficult to predict what the return on investment is going to be.
Easy enough to see on the server and workstation end of things, but not quite so clear on the desktop. Which takes us full circle to the comment, "who gives a rat's ass."
Hyper-Threading, also known as Jackson Technology, is really targeted at improving the number of Web transactions and users that Intel-based servers can handle. Yeah, it's multi-threading with a twist. Intel claims that processors enabled with Hyper-Threading technology can manage incoming data from different software applications and continuously switch from one set of data instructions to the other, every few nanoseconds, without losing track of the data processing status of each set of instructions. Developers will be able to get a hold of prototype software development systems later this year. Intel also claims that applications that are already multithreaded can take advantage of incremental tuning with Intel Performance Libraries enabled with Hyper-Threading. The company is also developing compilers that use the new technology, and building performance measures for Hyper-Threading into VTune.
For more information on Hyper-Threading check out Hyper-Threading Technology.
Summary – Day 1
There isn’t a whole lot to get excited about in terms of the new, new thing. That’s obvious from working around the press meeting room as well:
“You see anything interesting?” Press Guy One.
“No, you?” Press Guy Two.
“No.” Press Guy One.
But, journalists can be a pain in the neck as well. Tech journalists can sometimes be like children at a family gathering: they know why they’re there, but they wish they could be somewhere else, having more fun.
The new, new thing is the easy hit. It’s easy press.
At the end of the day, the state of the market is what it is. There isn't much anyone can do about it for now. Intel's got the money, and seems to have a little bit of its swagger back, so that's bad news for the competition. It could just be that Intel is isolated from the immediate damaging effects of a tech slowdown. Did I mention they have bucket loads of money?
In truth, Intel is not doing a bad job of transitioning to a post-PC world, but by the standards of the go-go PC world of the 90s, it might seem a little disappointing. The communications stuff remains uncertain. One company trying to supply end-to-end technologies is a bit much. Can Intel sell all the hardware you need – no. Can they dominate in any other space but the CPU space – probably not. Is the PC dead – heck, no! All it'll take to jump start the market is if everyone just got into mixing audio on their PCs, and editing digital video.
Come on, come on. You know you want to.