Introduction
The Microprocessor Forum has been an industry bellwether and the jumping-off point for many processor announcements in the past.
Transmeta was the first out of the gate with an announcement at this year’s event, the T6000 Crusoe, followed a day later by the news that its president of seven months was being deposed as the company struggles to build on the hype it generated when it first went public. I can’t really get excited about a company that has less than $40 million in revenues after all the print they got on CNET. But I hope that we get to do some THG testing on the Crusoe in the future, because Transmeta has gotten a bit of a bad rap about its claims on performance and power consumption, and maybe not all of it is justified.
The Forum itself was very badly attended, not surprising in light of the events of the last month. However, this didn’t stop a large number of companies from presenting and participating at the event.
A measure of the industry’s focus is the fact that prominence at the conference was given to server and workstation processors and to developments in that area of the market.
Equal attention was also given to providers of components for non-PC devices such as set-top boxes, Internet appliances, and so forth, as well as to sessions devoted to network processors and programmable chips.
For us, the main points of interest are the emerging products and directions being taken by competitors in the server market. While AMD managed to get a lot of ink in the technology press devoted to its 8th Generation Hammer architecture, there was also some interesting input from Via’s Centaur Technology on its product roadmap, and Intel announcements on its server and workstation roadmaps for IA-32 and IA-64.
Heightened security meant that all glassware had to be guarded by a triumvirate of bartenders. Sadly, there were not enough journalists and other healthy imbibers around to cause the liquor supply distress. Sign of the times, folks, when getting a free drink at these shindigs requires no standing in line.
Conference Focus – The CPU Sweet Spot
David Tuttle, Senior Director of Sun’s Austin Design Center, gave an entertaining keynote that seemed to encapsulate the general state of the CPU market for many vendors. It was entitled Triathletes vs. Sprinters: Processor Directions for the New Millennium.
Granted, you got the usual sly digs at Intel in this presentation, but AMD should be happy to note that Mr. Tuttle was very quick to point out that more MHz wasn’t the driving factor for processors in the new millennium.
Mr. Tuttle defined the issues as a set of Meta-Trends:
Power density – or, how many Watts per square millimeter processors are going to dissipate. Mr. Tuttle took great delight in pointing out that computers consume nearly 13% of the total kWh power consumption of the US. Interesting way of looking at it, but obviously a key issue, and something that even the average user is aware of. We had a very strong and positive reaction to David Stellmack’s Rant-O-Matic article on power supplies, so the issue of power consumption resonates very broadly across the industry.
CMOS performance scaling slowdown – CMOS technology isn’t scaling as well as it used to, because we may have hit the physical limits of the technology. What’s next?
State management overhead – as we move into higher frequency domains, the fixed cost of flop delay, clock skew, and jitter goes up as a percentage of cycle time (see the sketch after this list).
Memory and I/O bandwidth lag – processors are getting faster at a faster rate than memory and I/O are.
Conference Focus – The CPU Sweet Spot, Continued
Rise of different types of workloads & forms of parallelism – processor design has to strike a balance between software tuning, scalability, and system power.
True cost of transistors – by extension, transistors are not free, just because we can stick millions, and eventually probably billions, on a single chip.
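To put the state management point in concrete terms, here is a minimal back-of-the-envelope sketch in Python. The 50 ps overhead budget is my own illustrative assumption, not a figure from Mr. Tuttle’s talk; the point is simply that a fixed flop/skew/jitter budget eats a growing slice of the cycle as clock frequency climbs.

```python
# Back-of-the-envelope sketch: a fixed per-cycle timing overhead
# (flop delay + clock skew + jitter) becomes a larger fraction of the
# cycle as frequency rises. The 50 ps budget is an illustrative assumption.

FIXED_OVERHEAD_PS = 50  # assumed flop + skew + jitter budget, in picoseconds

for freq_ghz in (1, 2, 5, 10):
    cycle_ps = 1000.0 / freq_ghz                      # cycle time in picoseconds
    overhead_pct = 100.0 * FIXED_OVERHEAD_PS / cycle_ps
    print(f"{freq_ghz:>2} GHz: cycle = {cycle_ps:6.1f} ps, "
          f"overhead = {overhead_pct:4.1f}% of the cycle")
```

At 1 GHz that assumed 50 ps budget is noise; at 10 GHz it is half the cycle, which is exactly the squeeze Mr. Tuttle was describing.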
Obviously, Sun feels pressure from Intel, as does every other processor vendor, because Intel is winning the MHz marketing war, if there is such a thing. So there is a certain defensive posture to be taken, but the general tenor of all the presentations reflected Mr. Tuttle’s: we are hitting some physical limits to Moore’s Law, and overcoming them by brute force isn’t necessarily going to happen, because designers don’t have a clear handle on what’s beyond CMOS, or on how well memory technology will scale.
In addition, it’s quite clear that this year’s Forum was emphasizing server and workstation markets, areas in which, to some extent, the buyer is not easily swayed by feature lists, but is concerned with specific real world performance, reliability, and support for proprietary applications.
I thought Mr. Tuttle’s slide and discussion of the Integration Trajectory, shown above, was a little self-serving, in some ways trying to do what Sun always tries to do, push the PC into a commodity slot, and keep it at arm’s length. However, it’s worth keeping it in mind as you read the following sections on AMD, Via, Intel, and ATI.
I have no problem with this part of the presentation that defines the sweet spot of the processor market as being a balance between MHz and instruction level parallelism (ILP), or a balance between speed demons and brainiacs, resulting in processors that are designed more to be triathletes, rather than just sprinters.
Of course, this also raises the issue of how you actually quantify that sweet spot in the server and workstation market without getting highly subjective. What value do reliability and support carry, for instance? I would say both have a pretty high value, so how do you weigh them when you have two processors within 5% of each other in performance?
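Just to make the subjectivity tangible, here is a hypothetical weighted-scoring sketch; the weights and scores below are entirely my own invention, not anything presented at the Forum.

```python
# Hypothetical weighted-score sketch (illustrative numbers only):
# two processors within 5% of each other on raw performance can still
# rank very differently once reliability and support get explicit weights.

WEIGHTS = {"performance": 0.5, "reliability": 0.3, "support": 0.2}  # assumed weights

candidates = {
    "CPU A": {"performance": 1.00, "reliability": 0.70, "support": 0.60},
    "CPU B": {"performance": 0.95, "reliability": 0.90, "support": 0.85},
}

for name, scores in candidates.items():
    total = sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
    print(f"{name}: weighted score {total:.2f}")
```

The trouble, of course, is that the weights themselves are the subjective part, which is exactly the point.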
The Optical I/O
Bill Pohlman, Founder, Chairman and CTO of Primarion, also had his own slant on things, weighted heavily towards his own company’s optical devices, of course. This was a conference which sometimes seemed to be a parade of engineering sales pitches, but I’ll let it go. My cynicism is beginning to wear me down, too.
Anyhow, Mr. Pohlman talked about the challenges of 10 GHz processors, which is frankly where everyone is heading, even if MHz and GHz aren’t everything. Mr. Pohlman painted the picture of a 10 GHz processor in 2005:
- <0.05 micron CMOS technology
- Aggressive voltage scaling: Vdd 0.5V to 0.8V
- ~300 M transistors
- CMP, EPIC or OOO, 16+ MB caches
- Fault tolerance
- 150 A peak current transients
- Local unit level power management
- 30 GB/s bus bandwidth
- Low power optical I/O and memory buses
However, Mr. Pohlman’s heart lies with the I/O, and the density problem. If processors hit 30 GB/s bandwidths for their I/O subsystems, then each port requires 192 pins at 2.5 Gb/s, and an eight-port controller will have approximately 1,600 I/O pins. SNR goes down as voltage is scaled down, and energy per bit goes down as frequencies scale up. In addition, latency is impacted if multiplexing is built into the processor to reduce pin counts, or if coding is added to improve bit error rates.
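For what it’s worth, the arithmetic behind the roughly 1,600-pin figure works out if you read the 30 GB/s as a per-port figure and assume differential signaling, two pins per 2.5 Gb/s lane. Both of those are my reading of the numbers rather than anything Mr. Pohlman spelled out, but here is the sketch:

```python
# Rough pin-count arithmetic behind the ~1,600-pin figure.
# Assumptions (mine): 30 GB/s is per port, and each 2.5 Gb/s lane is a
# differential pair, i.e. two pins per lane.

PORT_BANDWIDTH_GBPS = 30 * 8      # 30 GB/s per port, expressed in Gb/s
LANE_RATE_GBPS = 2.5              # per-lane signaling rate
PINS_PER_LANE = 2                 # differential-pair assumption
PORTS = 8

lanes_per_port = PORT_BANDWIDTH_GBPS / LANE_RATE_GBPS   # 96 lanes
pins_per_port = lanes_per_port * PINS_PER_LANE          # 192 pins
total_pins = pins_per_port * PORTS                      # ~1,536 pins

print(f"{lanes_per_port:.0f} lanes/port, {pins_per_port:.0f} pins/port, "
      f"{total_pins:.0f} I/O pins on an {PORTS}-port controller")
```

That lands at roughly 1,536 signal pins before you add power, ground, and control, so the “approximately 1,600” figure is easy to believe.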
In Mr. Pohlman’s opinion optical is the way to go.
The real cost of optical lies in the fiber-to-processor enablers, or, better put, you can’t run the chip on light, but you can move data around much more efficiently with it. Fascinating subject, and one for the eggheads in the research world to consider. This is one to delve into in detail in the future.
My only suggestion to the affable Mr. Pohlman in the short term is to reduce the number of titles he holds for the sake of brevity.
AMD – Hammer and Athlon 2000+ XP
Here it is, a first look at the Athlon 2000+ XP running side-by-side with a Pentium 4 at 2 GHz.
If the sign says so, then it must be true. Athlon 2000+ XP. Does anyone even remember the actual clock frequency of this baby?
Start your engines, gentlemen. AMD’s Athlon 2000+ XP next to a P4 2 GHz machine.
Doing a file conversion in MusicMatch, the P4 screams along.
The Athlon 2000+ XP screams along screamier, or some such idiom.
AMD – Hammer and Athlon 2000+ XP, Continued
While there is some debate surrounding the efficacy of AMD’s real performance branding campaign, which frankly I find confusing, the company sounded very confident and even, dare I say, ebullient about this demonstration. Athlon 2000+ XP certainly sounds a lot better than Athlon 1800+ XP, and is a nice number. It also takes direct aim at the Intel flagship.
The MusicMatch benchmark in the pictures above isn’t verified or endorsed by us. This was, after all, an AMD first look, but if they can keep doing the same math in the performance category, and on the branding, AMD may just get some traction with this type of marketing push.
As for AMD’s 8th Generation processor release – there’s a lot of goodwill in the online press towards AMD as it makes these announcements. Part of that is the dearth of news in the technology world that can make real headlines, and part of it has to do with the pent-up demand from AMD’s fan base. Check out the Forum presentation from AMD.
As you can see from the latest AMD processor roadmap below, we have a year to go before we get to see what’s under the hood of AMD’s 8th Generation.
So, the discussions about this product are academic at best. It might be best to look at some of the strategic issues, and evaluate the choices AMD has made in that regard with its technology.
AMD – Hammer and Athlon 2000+ XP, Continued
So, first the bad news:
AMD’s a neophyte in the enterprise server market, and while Hammer gives it a workstation and server product to compete with Itanium, you have to take into consideration that it’s not just about Intel. There’s also Sun, IBM, and by association, HP/Compaq. Not a single small fry among them, so that makes it a tougher market than the desktop space, where there is really only Intel to contend with.
It’s hard to say whether extending the 32-bit x86 ISA to support 64 bits is good or bad, but it does have one possible drawback. Fundamentally, the 32-bit x86 ISA needs to die at some point, and if not, it should be relegated to commodity levels. David Tuttle’s assertion that PC processors need to get the PC to VCR pricing models isn’t far off the mark the further out we look. My other assertion would be that, from a marketing perspective, Intel may have created a better value proposition with its IA-64 approach, because it very clearly demarcates what is its server and workstation product line and what is going into the desktop. That means customers are going to pay more for a clearly separate server product, and it’s going to be easier for OEMs to support and build services around this idea. So, my question is, does AMD’s approach act as a damper on Hammer’s higher-end ambitions because it’s also going to filter into the general desktop as a high-MHz K8 processor?
64-bit computing is the domain of large data sets, lots of transaction processing, scientific computation, and whatever other fancy things you want to do. AMD has integrated the Northbridge into Hammer, and by virtue of that, gained some possible advantages in terms of memory latency, but what about the issue of memory bandwidth not scaling as fast as CPU performance? Does Intel have an advantage in having built a roadmap that relies on Rambus hitting certain product and bandwidth milestones, something it has managed to do quite effectively in the past? In other words, does it make sense to bet on DDR any more than it makes sense to bet on RDRAM alone? Remember, this is a game that’s going to be played out over the next 5-6 years.
Dell is completely Intel. IBM has its own plans for this market segment. Sun has its own plans. HP, and by virtue of merger, Compaq, are Itanium. Who is the kingmaker for AMD in this market?
As for the good news:
AMD hits a sweet spot in the computing market with Hammer – the transition point between desktop, workstation and server in a PC OEM’s product line. In other words, OEMs can create product lines that run the gamut of high-end desktops, workstations, and 4P-8P enterprise servers. Nice opportunity.
AMD has economies of scale because Hammer gives it the best of both 32-bit and 64-bit worlds.
AMD has had a strong following among the tech enthusiast crowd, the same people who help to shape the server landscape, who may be driving Linux adoption, and who have an aversion to Intel and Rambus. With Hammer, they also have a rallying cry for the workstation and server market which takes AMD out of the enthusiasts’ favorite category, and into a general techie favorite category. In other words, Hammer is a better argument for IT departments and developers to promote AMD products within their organizations than Athlon ever could be, even with the price/performance advantage.
Let the games begin.
Via’s Processor Roadmap
I was actually more excited by Glen Henry’s presentation on Via’s processor roadmap, although it is focused on the less sexy value segment of the PC market, which means the cheap stuff. Or, if you are feeling very abusive, the MS Office Guy crowd.
Via’s existing C3 family of processors has done a pretty good job of scaling in performance terms while keeping an eye on power consumption, and delivering compelling enough performance for its value segment target audience. Via doesn’t make any bones about not wanting to be in the high-end.
In the coming year, Via’s moving to Ezra-T, which is sampling at 800-1000 MHz, and will move to 900-1200 MHz in 2002.
Future plans involve the addition of a C5XL core that will push Via’s processors up the MHz curve, while retaining a small-footprint die to keep costs and power consumption down.
Although Via is mainly shipping its processors outside of the US, primarily into the low-cost markets of Asia and the Indian subcontinent, the company may see some traction for its products if the US PC market doesn’t bounce back. There is an opportunity for vendors of value solutions, and I have to agree at some level with Mr. Henry that, “How many people in the world need 2 GHz today? Especially when it’s 75W and costs hundreds of dollars and performs like a well-designed 1.4 GHz CPU.”
The new C5XL core has a new microarchitecture, adding a certain amount of ILP enhancement, as well as increasing the L2 cache to 256 KB. It’s all still Socket 370 compatible, and Via has the ability to be a one-stop shop for OEMs, delivering chipsets, motherboards, and CPUs in one fell swoop to a price-driven market.
Intel’s IA-32 And IA-64 Processor Roadmap Updates
While its competitors like to beat the “it’s not just more MHz that counts” drum, Intel has actually been clearly taking its own stand and saying that more MHz isn’t all that counts.
Intel’s strategy, as befits the behemoth, is on a number of fronts:
First, there’s the total computing experience approach. This is the all-encompassing umbrella term for almost all of Intel’s initiatives: ILP and deep pipelines, MMX/SSE/SSE2, USB, InfiniBand, 3GIO, Serial ATA, Hyper-Threading, AGP, shrinking desktop form factors, etc.
So, Intel embraces the contention that MHz isn’t everything, but also realizes that it doesn’t look bad to have a lot of MHz and GHz after your processor name. However, one very interesting area in which Intel’s evangelism is operating is Hyper-Threading.
Intel clearly has the MHz issue addressed in the P4 product line. However, Intel’s reach is very broad, and the company also has to start thinking about how it can get developers to jump on its bandwagon on future technologies. Hyper-Threading gives Intel a number of useful marketing pushes:
- It does promote the notion of ILP, and plays to the strengths of Intel’s pipeline
- It gives Intel a unique identifier for its SMT strategy, and Hyper-Threading is going to stretch across all Intel CPUs in the future
- It addresses some of the issues of memory latency by delivering memory level parallelism
- Threads are good, they’re ubiquitous, and they eat up clock cycles
Hyper-Threading does bear closer examination, and it gives Intel another feature set that can only help it gain greater developer support.
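To illustrate the memory-level-parallelism point above in the crudest possible way, here is a toy throughput model. It is entirely my own simplification, not Intel’s math: a single thread that stalls on memory for some fraction of its cycles leaves the core idle, and a second hardware thread can fill some of those dead cycles.

```python
# Toy throughput model (my own simplification, not Intel's figures):
# a thread stalls on memory for a given fraction of cycles. With two
# hardware threads sharing the core, one thread's stall cycles can be
# filled by the other, up to the point where the core is fully busy.

def core_utilization(stall_fraction, threads):
    busy_per_thread = 1.0 - stall_fraction       # cycles a single thread can use
    return min(1.0, threads * busy_per_thread)   # the core can't exceed 100% busy

for stall in (0.3, 0.5, 0.7):
    single = core_utilization(stall, 1)
    smt = core_utilization(stall, 2)
    print(f"stall {stall:.0%}: 1 thread -> {single:.0%} busy, "
          f"2 threads -> {smt:.0%} busy ({smt / single:.2f}x)")
```

It ignores cache contention, issue-width limits, and everything else that matters, but it does show why a second thread is a cheap way to soak up memory stall cycles, which is the pitch behind Hyper-Threading.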
Processor vendors are not fighting as much for the hearts and minds of their users these days, as they are fighting for developers to eat up their cycles, use those damn memory chips more efficiently, and make it cool to buy a state-of-the-art system again.
Obviously, Intel had two roadmaps to present to the Forum audience. One was for its IA-32 enterprise processors:
And one was for its Itanium, 64-bit architecture:
Intel’s IA-32 And IA-64 Processor Roadmap Updates, Continued
Hyper-Threading will appear on Intel Xeon MP processors in 1H’02, and it requires less than a 5% increase in die size, according to Intel. If you take David Tuttle’s framing of the dilemma facing processor designers in the new millennium, Intel is obviously trying to find its own sweet spot by pushing MHz, moving to 0.13 micron, delivering ultra-low-voltage P3 cores, and looking at the level of intelligence in the processor pipeline to improve the overall throughput and performance of its products.
Intel and most of its competitors have pretty smart engineers, so let’s just take the denigration of Intel’s MHz-pushing strategy with a pinch of salt. I don’t think Intel is behind the curve here.
Intel’s 870 chipset will support McKinley, Madison, and Deerfield through 2004 and allow OEMs to develop up to 64-CPU systems.
The other thing that is worth noting is that Intel’s splitting up of its IA-32 and IA-64 products, as opposed to the AMD Hammer approach, does allow for overlap in different market segments as can be seen in the Itanium roadmap. So, it maintains a differentiation between the two architectures which helps to focus the customer and the Intel sales team, but still retains the ability to mix and match architectures in an all encompassing roadmap.
Like I said, AMD is a neophyte in the server business, and Intel’s really gunning for the IBMs and Suns of this world in the enterprise server market, so I get a little wary of reading about Hammer versus Itanium. Seems like a non sequitur of sorts to me.
There’s certainly a lot of interaction between Itanium and Hammer in the segments that Intel defines as WS (workstation) and Performance/Volume DP Server, but I would much rather see how Intel fares against IBM and Sun in the server market. This is really a key issue for Intel’s Itanium strategy.
I certainly wouldn’t presume to tackle the relative merits and deficiencies of IBM’s Power4, Sun’s Jalapeno, Intel’s Itanium, and AMD’s Hammer here. It does bear thinking about, though, in terms of getting some perspective on the interactions that are bound to take place in the enterprise market over the coming three years; we can’t just isolate Intel and AMD in the server and workstation market.
What we need to do is spend a little time looking at the architectures arrayed before us at the Forum through the eyes of the software community and IT management. Academia can also be helpful here. One paper at the Forum was given by the IMPACT Research Group at the University of Illinois at Urbana-Champaign on Itanium Performance Insights. Worth some extra-curricular research for those of you that really want to get into the performance issues facing the big iron boys (well, this stuff is big iron enough these days, right?).
Who gives a fudge about the features? The real test lies in looking at how a processor recovers from a cache miss, or really measuring latency under extenuating circumstances: performance metrics that are constantly being revised and reassessed. We’ve got a long way to go before we can pass judgement on these products.
ATI’s Xilleon 220
ATI unveiled information on a set-top box chipset called Xilleon 220 at the conference, and while it is not directly relevant to our focus on the desktop, it is of some interest.
ATI has long wanted to leverage its expertise in graphics and digital video to break into the high-volume markets of set-top boxes (nearly 120 million network set-top boxes are projected to ship by 2005, according to Cahners In-Stat), satellite receivers, and those fashionable PVRs (personal video recorders). Xilleon 220 shows how far ATI has come and points out some interesting differences between Nvidia and ATI in their approaches to their respective futures.
If you want integration, this is a beauty. The Xilleon 220 has pretty much everything you need from a system-on-a-chip (SOC) for the digital TV market. We all know about ATI’s expertise with digital video hardware from its graphics boards, and it’s good to see the transition the company has made into this extremely competitive, and frankly thankless, market.
It’s got an integrated 300 MHz MIPS CPU running Linux, Win CE, or VxWorks. It interfaces to PCI or an xDSL modem, so you can easily add peripherals to a design. It can drive two TVs. It has a 32/64-bit DDR/SDR memory interface, and can get approximately 3 GB/sec of memory bandwidth through two memory channels.
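That 3 GB/sec figure is easy enough to sanity-check. The configuration below is one plausible combination of my own choosing; ATI didn’t spell out the memory clocks, so treat the numbers as assumptions rather than spec.

```python
# Peak memory bandwidth sanity check for the ~3 GB/s claim.
# Assumed configuration (mine, not from ATI's spec sheet): two channels,
# each 64 bits (8 bytes) wide, DDR at a 100 MHz clock, i.e. 200 million
# transfers per second per channel.

CHANNELS = 2
BYTES_PER_TRANSFER = 8          # 64-bit channel width
TRANSFERS_PER_SEC = 200e6       # DDR at an assumed 100 MHz clock

peak_bytes_per_sec = CHANNELS * BYTES_PER_TRANSFER * TRANSFERS_PER_SEC
print(f"peak bandwidth: {peak_bytes_per_sec / 1e9:.1f} GB/s")   # ~3.2 GB/s
```

Other width and clock combinations land in the same ballpark; the point is just that the claim lines up with the DDR memory of the day.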
You can build a box to PVR multiple video windows, which means that you can decide between four programs not to watch. You can attach a wireless 802.11 transmitter for a home network, and you get sophisticated 2D/3D overlay graphics for a GUI.
I liked it. I hadn’t kept up with ATI’s work in the set-top box area. In fact, I thought they might have given up, but there are shades of the GameCube in the design of some of the interfaces, and it’s good to see that ATI is sticking to this strategy.
Now, Nvidia, on the other hand, is going down the long and winding road of integrating more and more of the PC’s components.
So, both companies are approaching it from different ends of the spectrum: Nvidia top-down, and ATI bottom-up. All this knowledge has to end up benefiting Joe Public at some point.
Summary
The enterprise server and workstation market moves at a slower pace than the desktop arena, so I didn’t think that this year’s Forum offered us new insight into the PC world. In fact, it seems that it merely laid the groundwork for more discussion on the issues facing the server segment of the market. It’s a highly complex, and I’ll argue, subjective area of the market. There is, however, an interesting groundswell of movement towards Linux in this market that does bear closer examination, and that may help us to understand the underlying hardware issues better as well.
Yes, there were a number of products and presentations in the consumer electronics and information appliance category. This is an area where companies like National Semiconductor, Hitachi, Cirrus Logic and MIPS play. ATI has brought its expertise to bear on the digital TV market with Xilleon, and it was of passing interest to see that NeoMagic was taking its 3D graphics onto handheld multimedia devices with the MiMagic NMS704x SOC solution. It’s been a long haul for ATI, and it’ll probably be a long haul for NeoMagic too. As has been the case for a number of years now, the Internet device category has yet to set the world on fire. There really isn’t a driving force, a Microsoft or Intel, to pull everyone behind them. Can’t they just put a PIM on a Game Boy Advance?
This is not a bad set of features for a handheld device, but NeoMagic exited the notebook graphics market to get into the information appliance business? Tough call.
More importantly, we probably all need to rethink our approach to the PC. Microsoft and Intel are not the driving forces they used to be. The PC belongs to the collective body of users that work with it. It is the connected device, our hub, and for some people, their universe. So, we’ve still got a lot to learn about servers, the software that makes them tick, and the ways in which we can exploit and develop them. What was once the domain of the specialist is now becoming the interest of the great sprawling mass of PC enthusiasts and professionals. Maybe that’s the message you can take from this year’s Forum: the isolated desktop is just another node, and the really cool battles are going to be won on the network. We can’t look at a processor in isolation any longer. Seems like there’s some meaty stuff there to delve into.