Siggraph 2002:
Nvidia vs. World
THG Editorial, August 2, 2002


Siggraph 2002 - Gateway to the pixel

The Henry B. Gonzalez Convention Center in San Antonio, Texas. Home of Siggraph 2002.

The only thing anyone needs to know about Siggraph is that it is always in the summer, and it is always somewhere really, really hot. There is no Alaska Siggraph planned, and Iceland can pretty much kiss its chances of ever hosting the show goodbye.

Nope, if it's a hot place that everyone wants to avoid, Siggraph's organizers, out of low self-esteem and a misguided sense of philanthropy, will try to rejuvenate the local economy by dragging hordes of ever-faithful graphics designers, academics, programmers, and vendors to their conference and trade show.

On with the show.

Oh my God, it's NV30! Or, is it?

"Like, you wouldn't believe it, but I think, like, Nvidia, like, released the specs on NV30."

"Shut up!"

"I could die."

"Shut up! I am so not believing you."

"I am so there."

And so it was in the corridors of power at Siggraph as stunned graphics groupies came to realize that, embedded in Nvidia's presentation on Cg, were the specifications to NV3x. It probably helped that Nvidia's PR machine went into overdrive to make it seem like a discovery. Next time, fellas, why not just slap us in the head with a PowerPoint presentation, and come and write the story for us?

In case you don't know, or are afraid of plugging the term into Google, Cg stands for C for graphics, a high level shading language developed by Nvidia to run on top of DirectX and OpenGL.

On the one hand, Nvidia claims that Cg is pretty much the equivalent of Microsoft's HLSL in DirectX; on the other hand, it incorporates target-specific commands that make it proprietary to Nvidia.

Cg has ruffled a few feathers in the industry. Nvidia gave me the pitch about how they worked closely with Microsoft on it, and about its compatibility with HLSL. Microsoft turned around and said that it will have its own C-for-graphics equivalent, and that it will probably be the one that developers standardize on, for Windows at least.

This is not the first time that someone's tried to push a standard higher language for graphics. PHIGS, PEX, HOOPS, et al have all beaten the same path. In graphics there is no one-size-fits-all language. So, it is a little mystifying that Nvidia would adopt this approach, but the company has been very good at rehashing old graphics ideas in shiny new marketing packages and making it its own.

On the other hand, I talked to one veteran graphics guru, delivering a presentation at the conference, who said that Nvidia's old guard of developers seems to have taken the riches of stock-option heaven and left the labs to an influx of SGI guys, particularly the old Fahrenheit crowd. Fahrenheit was a joint project between SGI and Microsoft that aimed to create one interface for Direct3D and OpenGL. Hmmm.

Maybe that's where the incredible hubris comes from. SGI wasn't ever known for its humility, even in the face of misfortune. Maybe Nvidia is getting a little too fat and comfortable. I don't know. I am not particularly happy about the fact that, despite longstanding relationships with Nvidia personnel, I got fed the party line and some BS on Cg as Nvidia used the online press to put some major FUD out there on Cg disguised as a preview of NV3x.

Nvidia's Intelization

Nevertheless, Cg may also be Nvidia's Intelization of the graphics market. By encouraging developers to use Cg, Nvidia is, in effect, creating an application base that runs on the x-GeForce architecture, much as Intel has a base of applications that runs solely on the x86 platform.

A top down view of the Nvidia booth at Siggraph.

Intel occupied the largest booth at the show.  However, notice the strikingly similar canopy theme on the Intel booth.  Is this yet another sign of the direct rivalry between Intel and Nvidia, or is a canvas top just much cheaper and hipper than a large format printed sign from Kinko's?

That's clear enough. And Nvidia has repeatedly targeted Intel in its marketing hype. Witness the hagiographic piece on Nvidia at Wired, "The Next Intel." So, why not emulate success?

The NV3x Spec Leak

Nvidia was giving a presentation at Siggraph called CineFX. The pitch of the presentation, something being echoed among all the graphics hardware guys these days, is real-time cinematic 3D. Nvidia has showcased this concept by demonstrating the "Final Fantasy" movie being rendered on one of their cards.

ATI wasn't left behind this time, and was showing "Lord of the Rings" in the same manner.


Under the heading of the CineFX Architecture, Nvidia pretty much laid out the full specification of NV30. It's also worth noting that no matter what the politics of Cg, Nvidia continues to manage to create good buzz for itself wherever it goes, and Siggraph was no exception.

The features of Cg being touted by Nvidia at Siggraph are indistinguishable from those of the upcoming NV3x.

It probably didn't do any harm to announce during the show that Cg's compiler was going open source. You can check out the "open source" angle on Cg here. However, it is unwise to have a knee-jerk positive response to this move just because it is "open source." In effect, Nvidia's Cg compiler is for Nvidia hardware. If someone else wants to invest in making a compiler for their hardware, they have that option, but at what price?

The Nvidia vision for Cg might not be so friendly to so-called Brand X hardware. This is what is upsetting Nvidia's competitors, who think that Nvidia is creating FUD around the more proprietary aspects of Cg.

The NV3x Tape Out Debacle

Cg may very well end up being a red herring. If nothing else, it acted as a catalyst for Nvidia to plant stories about NV3x. At least two OEMs and one ISV at Siggraph told me that NV3x has not taped out, which indicates that it is at least six months away from realization. Having said that, Nvidia is a champion at going from tape-out of a part to having actual sample boards ready for testing. It took 60 days with the GeForce2, but that was a die shrink. The NV3x is a major architectural change as well as a move to a new manufacturing process (0.13 micron), so it's going to take a miracle to shortcut the process.

Since then, Nvidia's competition has been circulating this tidbit from the company's conference call with financial analysts this week:

"It should be available for holiday season." J. Huang

"Has the NV30 taped out yet?" Analyst

"Historically Nvidia has done a good job with tapeouts etc... We're in the process of wrapping it up." J. Huang

"So it has not been taped out yet?" Analyst

"We're in the process of wrapping it up so the answer is not." J. Huang

An analyst source had this to offer in response to our inquiries:

Our contacts indicate that TSMC's average yields at 0.13 micron have been just 15%. So even if the part was ready for volume production later this year, those poor yields are likely to impact the number of parts meeting spec that you can get to market.

NVIDIA took a gamble by acting as one of the guinea pigs for TSMC, and it looks like this gamble isn't working out, right now. ATI's strategy of staying with the proven 0.15-micron process will probably pay dividends in the near-term.

Lastly, Nvidia has had a tough time with its board partners over the NV17 (GeForce4 MX). A number of vendors have had plenty of trouble building boards based on the chip and Nvidia's reference design.

The upshot of all this is that Nvidia is on the defensive, and we haven't seen that since the Edge3D days. Having managed to alienate Intel, Microsoft, Tier One OEMs, and many others in the graphics industry, look for the sharks to start smelling blood in the water.

Nvidia will probably come out okay at the other end, but this is a major transition point for the company. Going from top dog to also-ran killed S3 and 3dfx, mostly because the suits at those companies were too busy paying attention to stock prices and options. Let's hope that's not going to happen here.

Precision Graphics in NV3x - Next-Generation 3D's Key Concept

By now, the CineFX presentation, and the white papers that Nvidia used at Siggraph to hint at its NV3x architecture, are common knowledge. You can get the engineer-speak at http://developer.nvidia.com/.

Some of the key features need closer examination, however. Much was made of Nvidia offering 65,536 vertex shader instructions versus the 1,024 in DirectX 9.0. Big numbers are good, but more important is the fact that Nvidia allows for dynamic flow control in its shaders. The direct relevance to its hardware is that this makes it slightly more programmable than most other DirectX 9.0 hardware, which offers only static flow control.

With static flow control, a branch condition has to come from a constant that the application sets before the shader runs, so every vertex takes the same path through the code. With dynamic flow control, the shader can branch on values it computes per vertex, something like, "IF x > 5 THEN DO...," where x is per-vertex data. This seemed to be the key advantage of the Nvidia approach when compared to its rivals.
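To make the distinction concrete, here is a rough sketch in Python, not real shader syntax: the function names and the toy "lighting" math are invented for illustration only.

```python
# Toy model of a vertex shader loop, with each vertex reduced to one float.

def shade_static(vertices, use_fancy_lighting):
    # Static flow control: the branch condition is a constant set by the
    # application before the draw call, so every vertex takes the same path.
    results = []
    for v in vertices:
        if use_fancy_lighting:      # same outcome for every vertex
            results.append(v * 2.0)
        else:
            results.append(v)
    return results

def shade_dynamic(vertices, threshold):
    # Dynamic flow control: the branch condition depends on data computed
    # per vertex, so different vertices can take different paths.
    results = []
    for v in vertices:
        if v > threshold:           # per-vertex decision
            results.append(v * 2.0)
        else:
            results.append(v)
    return results

print(shade_static([1.0, 6.0], True))    # [2.0, 12.0] - both vertices doubled
print(shade_dynamic([1.0, 6.0], 5.0))    # [1.0, 12.0] - only one vertex doubled
```

The second function is the one static flow control can't express: the hardware has to be able to branch on a value that didn't exist until the shader computed it.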

However, it's also worth noting that most developers are interested in writing small amounts of shader code, say 12 lines, so does Cg allowing 1,024 pixel shader instructions, more than DirectX 9.0 permits, make a difference? It's debatable whether more is better.

Additionally, Nvidia made a big point of having 128-bit color precision, knocking the R300 for having only 96-bit color precision. Again, it depends on how you do color: 128 bits works out to four 32-bit floating-point channels, while 96 bits is four 24-bit channels. The big number sounds good, but it's worth noting that ATI's hardware is the fastest on the block and has a six-month jump on Nvidia's next generation.
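What the extra bits buy is smaller quantization error per channel. Here is a crude Python illustration, assuming the 24-bit format keeps a 16-bit mantissa (as in ATI's FP24) versus the 23-bit mantissa of a 32-bit float; the helper function is invented for this sketch and is not how any GPU actually stores colors.

```python
import struct

def truncate_mantissa(x, keep_bits):
    # Store x as a 32-bit float, then zero out all but the top `keep_bits`
    # of its 23-bit mantissa - a crude emulation of a lower-precision channel.
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    mask = (0xFFFFFFFF << (23 - keep_bits)) & 0xFFFFFFFF
    return struct.unpack('>f', struct.pack('>I', bits & mask))[0]

color = 0.123456789                       # an arbitrary channel value
fp32 = truncate_mantissa(color, 23)       # full single-precision channel
fp24 = truncate_mantissa(color, 16)       # FP24-style channel

err_fp32 = abs(color - fp32)
err_fp24 = abs(color - fp24)
print(err_fp32 <= err_fp24)               # True: more mantissa bits, less error
```

A single channel rarely shows a visible difference; the argument for higher precision is that the error accumulates over long, multi-pass shader programs.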

The move to fully programmable graphics processing.

Certainly, when the NV3x hits the market, it will be a big deal, but take a look at this: http://valve.speakeasy.net/. This is the Valve Half-Life survey on Speakeasy.net. It kind of opens up your eyes to what kind of hardware is out there. Just look at the data for the high-end, performance iron. Makes you think twice about programmable graphics. We have a long way to go, and it better come down to $100 a board instead of this continuous leapfrogging technology dance that has effectively helped keep prices high.

Quantum3D seems to be alive and kicking after escaping the death throes of 3dfx.

The Funny Side of Nvidia

Just to show that Nvidia isn't all about cynicism and misanthropy, here's a look at something the company was doing in another part of the show.

In another corner of the show floor, Nvidia had set up a special competition area where it was giving away copies of 3D animation software, and a chance to win a Quadro4 900 XL board.

Apparently, the importance of psychiatric disorders in the artistic process must never be underestimated.

This guy must be a THG reader.  Usually people come to us desperate for a free graphics card.

Now, it isn't true that Siggraph attendees will hand over their own grandmothers for a free tee shirt, or a chance at a raffle prize of some Softimage software, or a graphics card. It only seems that way.

3Dlabs Bites Back

At Siggraph, 3Dlabs was showing off its Wildcat VP line of cards, which had been announced in late June.

Product / Memory / Displays / Performance / Value / Segment / Estimated Street Price (US only)
Wildcat VP970 / 128 MB 256-bit DDR / Independent dual-head, dual link / 225M vertices/sec, 42G AA samples/sec / Ultimate visual processing performance / CAD, DCC, simulation / $1,199
Wildcat VP870 / 128 MB 256-bit DDR / Independent dual-head / 188M vertices/sec, 35G AA samples/sec / Powerful, versatile productivity / CAD, DCC, simulation / $599
Wildcat VP760 / 64 MB 256-bit DDR / Independent dual-head / 165M vertices/sec, 23G AA samples/sec / Affordable, CAD-optimized performance / CAD / $449

There isn't much news on Creative's plans for the P10, but there is some optimism that there might be something on show by the end of this year, which is good news for all concerned.

With Creative's backing, 3Dlabs is breathing a little easier these days.

However, 3Dlabs was most eager to fight back against Cg, being one of the main proponents of OpenGL 2.0. So, Siggraph was an ideal time for 3Dlabs to put forward its own agenda for programmable graphics, starting with the notion that Cg is bad for all graphics.

3Dlabs sees Cg from Nvidia as binding developers to the Nvidia platform.

The 3Dlabs argument is that a much better approach, one based on an open standard, primarily OpenGL 2.0, would allow a high-level language (HLL) for graphics that each unique piece of hardware could then target at the assembler level. In fact, it is really a matter of control: whoever gets to define the hooks into the HLL gets to define the underlying architecture. Even Microsoft doesn't want Nvidia to have that kind of reach with Cg, and it is unlikely to happen, because Microsoft's own HLL, or C-for-graphics equivalent, whatever you choose to call it, will probably be the chosen path on Windows. And I don't think anyone seriously wants to see OpenGL marginalized by other programmable interfaces.

Every argument looks rational in PowerPoint.  Proprietary bad, OpenGL 2.0 good.

Anyhow, ATI is now chairing the OpenGL 2.0 working group, so 3Dlabs isn't alone in pushing its agenda. The only drawback, and maybe this applies just as much to Nvidia, is that the impetus behind standardization on shader languages is coming primarily from hardware vendors. This doesn't leave software developers with a warm and fuzzy feeling, so it's not certain whether any consensus among the hardware vendors will have long term implications.

ATI's Fire GL X1

ATI doesn't have to worry about ARBs and working groups for now. It is riding the R300 launch. The company has been fighting back hard against Nvidia with an uncharacteristic zeal. At Siggraph, however, the emphasis was on the Fire GL X1.

It's targeted for an October 2002 release and will come in at around $900 retail. Among the interesting additions are drivers optimized for CAD and DCC (digital content creation) applications, and a software bundle that includes Rendermonkey and two products from Right Hemisphere: Deep Visualizer and Deep Exploration.

On show at the ATI booth was this real-time rendering demonstration of CG effects from a blockbuster movie.

The Fire GL X1's Radeon 9700 has been reviewed by us previously, and all that's left is for us to test it with CAD- and DCC-specific benchmarks aimed at the professional graphics application user. The software bundle was a little understated, but Rendermonkey deserves more attention. It's an interactive shader tool that was developed within ATI for internal use. You can pretty much use it to write shader code in DX assembler and see the results in an interactive window. Very cool stuff. I don't see why this deserves any less coverage than Cg.

However, it is called Rendermonkey, and I believe that some programmers have to find their happy places whenever they hear that name, being reminded of that common taunt from the ol' schoolyard. Hmmmm. Make your way to www.firegl.com to get the details and make up your own mind.

This was probably my favorite marketing gimmick of the whole show: free bottles of water.  Well, duh.  It's a million degrees in the shade, so who's going to say no?

Matrox's 3 Headed Conundrum

Matrox was at Siggraph with the Parhelia, showing off its three-screen feature, as well as launching its Linux drivers. Matrox is not the powerhouse it used to be, and it certainly can't muster the kind of presence its competitors can, but I have to admit to coming around to the idea of a multi-screen setup for my own work space.

If you set up with the right lighting scheme, the Matrox Parhelia 3 display setup makes for a very good alternative to an upmarket aquarium.  It's probably cheaper, too, judging by the price of angel fish these days.

Parhelia might just get enough traction as a productivity boon for those of us who have to sit in front of a computer screen for more than eight hours a day, and multiple displays sure as heck make a lot of sense when you consider how much stuff gets thrown around the typical OS desktop these days. How much the 3D part of the equation is going to bite is debatable. Matrox still has a loyal following, smaller in size than in the past, but still loyal to the things that the company does well. Sometimes great 2D and okay 3D is enough.

Matrox's Linux drivers are a month away.  They got their first outing at Siggraph.

A Walk in Wonderland

Siggraph wouldn't be complete without getting to see a couple of shots of motion capture in all its guises. This first one is motion capture used to enhance animation effects.

No Siggraph would be complete without dancing Motion Capture Guy.  Stick a couple of fluorescent ping pong balls on that spandex body suit, and make your way to the show.  You might just get a chance to do the funky chicken, too.

This second one is Intersense, where the captured motion is used as a means of interacting, and interfacing, with an application. This notion that the very space around us becomes some sort of giant mouse pad, and we become the mouse, well, it doesn't sound too healthy when you put it that way. However, it's still very much an evolving science, and the military always loves this sort of thing.

Another Siggraph stalwart, the head mounted display, and virtual reality demo.  They're called immersive environments, and you can suffocate from them, walking the Siggraph halls.  Some day all computing will be this way?

My must-have useful thingy at the show was the Wacom tablet, which would go very nicely with my three-screen Parhelia setup and combo DVD-RW drive/cocktail dispenser. You can get more information on the Cintiq display tablets here. The weekend's coming up; what else are you going to do but browse my links?

I remember when tablets were just big lumps of plastic.  Now, they're more likely bulky pieces of electronic paper.  I really want one of these Wacom tablets.

I didn't get invited to many parties (I wonder why), but I did attend a special luncheon hosted by graphics analyst Jon Peddie, who has written a few things for us in the past, where a panel of industry experts discussed the topic, "Graphics: Are we done yet?"

I'm not normally a fan of panels because they are so difficult to write about; you either say too little or focus on the wrong points. But in this case, if you want a feel for how the discussion panned out, make up your own mind by listening to the audio here. Well, that gets me off the hook. But seriously, folks, you don't often get to hear Kurt Akeley, for instance, and the debate was lively and insightful. The weekend's coming up; what else are you going to do?

At the Peddie luncheon, seated from the left:  Andy Thompson of ATI; Richard Chuang, co-founder of PDI/DreamWorks; Kurt Akeley, co-founder of SGI, presently consulting for Nvidia, and one of the greats of graphics hardware; Michael Sheasby of Softimage; Larry Gritz of Exluna/Nvidia; Bob Bennette of Alias/Wavefront.

Finally

It wasn't a wise decision on the part of the organizers to move Siggraph to San Antonio. When the show was lodged in LA, it drew the biggest crowds and packed the exhibit halls. And while Siggraph was once primarily a conference of interest to academics, students, and engineers of computer graphics, it is now as commercial as any other show. A sort of grown-up Game Developers Conference type of thing.

Or, maybe, it's just a sign of the times. Everything that was big is smaller now, and the mighty have fallen or are faltering. That was true of the show, of the big players in graphics hardware, and even in the application space. With Alias/Wavefront dropping prices to absurd levels, the professional 3D applications market looks like it is being drained of much of its vim and vigor, too.

I have to go find my happy place right now. I really do.
