Full Review NVIDIA's new GeForce256 'GPU'

Article summary: The expectations are high. NVIDIA's new graphics processing unit comes not only with a respectable frame rate, but also with a complete transform and lighting engine and several more nifty DirectX 7 features. Does it stand far enough apart from the competition with its 'old fashioned' 3D-chips?


THG Editorial, October 11, 1999


Introduction

GeForce256 Chip

It is now more than one and a half years since we first heard about NVIDIA's mysterious 'NV10' project. Still, NVIDIA took its time until this 'super-chip' was eventually presented to the public. Only a month ago 'GeForce256' was finally unveiled to us, and NVIDIA didn't lose any time making it available soon after the announcement, although the schedule was delayed by the Taiwanese earthquake. Creative Labs was even faster than NVIDIA; it already released its first boards in Asia last week, and I doubt it will take much longer until you will be able to buy the '3D Blaster Annihilator' in the US and Europe too. I only wish that the names of those 3D-products wouldn't always have something to do with mass murder. Since 'Napalm' I'm only waiting for a 3D-product called Hiroshima, Agent Orange or Pearl Harbor.

Anyway, to get back to the topic, what's the news about GeForce? Well, most of you will certainly have read several articles about its architecture elsewhere, and since we at Tom's Hardware don't claim fame by regurgitating presentations or white papers, I will try to keep my summary of GeForce256 short. Those of you who require more information should visit NVIDIA's website, where you can find all the white papers and presentation material you need to keep dreaming about the GeForce GPU all night long.

GeForce256 SDR
Download image in PNG format

This is the GeForce256 reference board with single data rate SDRAM, actually made by Creative Labs and thus identical to '3D Blaster Annihilator'.

GeForce256 DDR
Download image in PNG format

The GeForce256 reference board with DDR-SGRAM looks like this.

The Features

The most important thing about GeForce is that NVIDIA doesn't want it to be called a '3D-chip' anymore. GeForce is called a 'GPU', for 'Graphics Processing Unit', and this name was chosen to give it the right honors in comparison with the all-important 'CPU'. Most of us don't realize that the complexity of today's graphics chips is at least on par with any high-end PC-processor, while the price of a 3D-chip is rather modest compared to a CPU's. GeForce comprises no less than 23 million transistors, which is in the same range as AMD's Athlon or Intel's upcoming Coppermine processor. Intel or AMD charge more than ten times as much for one of those, which makes you wonder how the 3D-chip industry is still alive. The 23 million transistors in GeForce are well spent, since it's the first 3D-chip that includes a transform and lighting ('T&L') engine. This engine is the real reason for the name 'GPU', because it adds a huge amount of computing power to GeForce, which makes it a lot more than just a chip that displays 3D-graphics.

Transform & Lighting Engine

'Transforming' is the very FP-calculation intensive task of converting the 3D-scene with all its objects, the so-called 'world-space', into the 'screen-space' that we are looking at. 'Lighting' is pretty self-explanatory; it represents an optional stage in the 3D-pipeline that calculates the lighting of objects in relation to one or several light sources. Like transforming, lighting is a pretty FP-calculation intensive task. Both tasks used to be executed by the CPU, putting rather heavy strain on it. The effect was that the 3D-chip was often in the situation that it had to wait for the CPU to deliver data (hence CPU-limited 3D-benchmarks), and that game developers had to restrict themselves to less detailed 3D-worlds, because heavy polygon usage would have stalled the CPU.
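
To picture what such an engine actually computes, here is a minimal sketch in Python of the per-vertex work a T&L stage performs: one matrix transform from world-space towards screen-space and one simple diffuse lighting term. The matrix, vertex and light values are made-up illustration data and have nothing to do with GeForce's actual hardware implementation.

    # Minimal per-vertex transform & lighting sketch (illustration data only).
    def transform(m, v):
        # multiply a 4x4 row-major matrix with a 4-component vertex
        return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

    def diffuse(normal, light_dir, light_color):
        # simple Lambertian term: N dot L, clamped at zero
        n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
        return [c * n_dot_l for c in light_color]

    vertex = [1.0, 2.0, 3.0, 1.0]          # position in 'world-space'
    normal = [0.0, 1.0, 0.0]               # surface normal at that vertex

    wvp = [[1, 0, 0, 0],                   # a made-up world-view-projection matrix
           [0, 1, 0, 0],
           [0, 0, 1, -5],
           [0, 0, 1, 0]]

    clip = transform(wvp, vertex)          # 'transforming': world -> clip space
    screen_x = clip[0] / clip[3]           # perspective divide -> 'screen-space'
    screen_y = clip[1] / clip[3]
    color = diffuse(normal, [0.0, 0.7071, 0.7071], [1.0, 1.0, 1.0])   # 'lighting'
    print(screen_x, screen_y, color)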

GeForce is supposed to put an end to this dilemma: it can take the strain off the CPU, keep the 3D-pipeline from stalling and allow game developers to use many more polygons, which automatically results in greatly increased detail. At the same time the CPU can dedicate itself to a more complex game AI or more realistic game physics. NVIDIA claims that GeForce can transform, clip and light 15-25 million triangles/s, and somewhere else I read it's at least 10 million triangles/s.

High Fill Rate

Well, even GeForce is not free from having to obey the classic demands of 3D-gaming, and one of those asks for as much fill rate as possible. NVIDIA once claimed 500 Mpixels/s; now it has obviously shrunk to 480 Mpixels/s, but who cares, it's still way beyond the 366 Mpixels/s of the fastest current 3D-chips. I'd also like to note that I have yet to see a game that runs with a frame rate that would go hand in hand with those high fill rates. A dual-textured game at 1600x1200 would have to run at 95 fps to saturate today's fastest chips, and GeForce could do 125 fps at this resolution. I've never seen any results even close to that, so much for those nice fill rates. GeForce achieves the 480 Mpixels/s with 4 parallel rendering pipelines and a core clock of (only?) 120 MHz.
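
A quick back-of-the-envelope check, assuming the fill rates quoted above, shows where those 95 and 125 fps figures come from:

    # Frames per second a given pixel fill rate allows at 1600x1200 with
    # dual-texturing (two texture passes per pixel).
    def max_fps(fill_rate_mpixels, width, height, texture_passes):
        pixels_per_frame = width * height * texture_passes
        return fill_rate_mpixels * 1_000_000 / pixels_per_frame

    print(max_fps(366, 1600, 1200, 2))   # ~95 fps for today's fastest chips
    print(max_fps(480, 1600, 1200, 2))   # 125 fps for GeForce256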

AGP4x Fast Write

Yes! We haven't even got an AGP4x platform available on the market yet, and GeForce is already faster than the rest! The 'Fast Write' feature enables the CPU to write directly to the graphics card's frame buffer without taking a detour through system memory, and it's supposed to be up to 30% faster than 'normal' AGP4x. Let's see if Intel's 'Camino' or i820 will ever be ready and working, or let's hope that 'fast write' will be supported by VIA's AGP4x 640 chipset. NVIDIA is proud to be the only 3D-chip maker with a product supporting fast writes, and NVIDIA deserves to be commended for implementing this new technology, even though I doubt that we will see much of an impact from it anytime soon. 2D as well as 3D-applications are supposed to benefit from AGP4x fast write, so you have every reason to be happy about it, regardless of whether you draw faster than your shadow in 3D-Westerns or type faster than your keyboard in Word.

Why is GeForce called 'GeForce256'?

Well, it took me some time to really understand that as well. First of all it isn't the price; Creative Labs is supposed to ship theirs for $249, but if you're in the right state with low tax it may still add up to $256. It should also not really be the memory interface, because this is only 128-bit wide. Some think that the usage of DDR ('double data rate') memory excuses the use of '256' for the memory interface, but in my humble opinion that's not quite right. GeForce-cards with SDR-RAM wouldn't deserve the '256' then anyway, and the fact that data is transferred on the rising as well as the falling edge of the memory clock still doesn't make the bus wider than 128-bit. The memory interface is my critique-point number one anyway, because it leaves the boards equipped with SDR-RAM with less memory bandwidth than TNT2-Ultra boards. GeForce's memory is currently clocked at 166 MHz, while TNT2-Ultra runs it at 183+ MHz, and both chips have the same memory bus width of 128-bit. NVIDIA did not tell us the memory clock of the DDR-RAM card in our test, but I guess it's 166 MHz too, so that this card has at least 81% more memory bandwidth than TNT2-Ultra.
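
For those who like to check the numbers, here is the rough bandwidth arithmetic behind that 81% figure, assuming my guessed 166 MHz DDR clock and a 128-bit bus on both cards:

    # bytes per second = clock * bus width in bytes * transfers per clock tick
    def bandwidth_mb_s(clock_mhz, bus_bits, ddr):
        return clock_mhz * (bus_bits // 8) * (2 if ddr else 1)

    geforce_sdr = bandwidth_mb_s(166, 128, ddr=False)   # ~2656 MB/s
    geforce_ddr = bandwidth_mb_s(166, 128, ddr=True)    # ~5312 MB/s
    tnt2_ultra  = bandwidth_mb_s(183, 128, ddr=False)   # ~2928 MB/s

    print(geforce_ddr / tnt2_ultra)   # ~1.81, i.e. about 81% more than TNT2-Ultra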

But let's get back to the magic '256'. I could hardly believe my ears when I was finally told what the '256' stands for. NVIDIA adds the 32-bit deep color, the 24-bit deep Z-buffer and the 8-bit stencil buffer of each rendering pipeline and multiplies the sum by 4, one for each pipeline, which indeed adds up to 256. So much for the fantasy of marketing people; they are a very special breed indeed.
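
For completeness, here is that piece of marketing arithmetic spelled out:

    # bits per rendering pipeline, summed and multiplied by the four pipelines
    color_bits, z_bits, stencil_bits, pipelines = 32, 24, 8, 4
    print((color_bits + z_bits + stencil_bits) * pipelines)   # 256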

HDTV

NVIDIA has finally got the message as well: GeForce includes an HDTV-processor with HDTV motion compensation. ATi already introduced support for this upcoming technology in the 'good old' Rage128, and although GeForce does something to catch up with them, it's still missing an iDCT for the best MPEG2-decoding, as found in Rage128 and Rage128 Pro.

Cube Environment Mapping

This is a really nice feature, and it should improve the gaming experience by a good amount. Cube environment mapping is a technique developed by SGI a while ago, and NVIDIA will be the first to bring it to home desktops. The idea is pretty simple: from an object with a reflective surface (and not from the room center, as I read somewhere else) you render six environment maps, one in each direction (front, back, up, down, left, right), and use those to display the reflections on this object. The reflections can either be detailed or blurred, and the technique can also be used for more accurate (per-pixel) specular lighting. The viewer/camera can move around the reflecting object without you noticing distortions or other artifacts in the reflection, something that's not possible with sphere environment mapping. Cube environment mapping is fully implemented in DirectX 7 and will certainly be found in 3D-games very soon.
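
To give you an idea of how such a cube map is addressed, here is a small sketch that picks the cube face and the texture coordinates for a given reflection vector. The per-face orientation is simplified and does not claim to match DirectX 7's actual layout; it only illustrates the principle.

    # Pick the cube face the reflection vector points at and project the
    # remaining two components into [0, 1] texture coordinates.
    def cube_lookup(rx, ry, rz):
        ax, ay, az = abs(rx), abs(ry), abs(rz)
        if ax >= ay and ax >= az:                 # X axis dominates
            face, major, u, v = ('+X' if rx > 0 else '-X'), ax, rz, ry
        elif ay >= az:                            # Y axis dominates
            face, major, u, v = ('+Y' if ry > 0 else '-Y'), ay, rx, rz
        else:                                     # Z axis dominates
            face, major, u, v = ('+Z' if rz > 0 else '-Z'), az, rx, ry
        return face, (u / major + 1) / 2, (v / major + 1) / 2

    print(cube_lookup(0.2, -0.9, 0.3))   # hits the '-Y' (down) face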

Some More ...

GeForce will also introduce 'vertex blending', which basically joins different pieces of geometry with each other, as required e.g. at the joints of characters or animals. 'Particle systems' are another DirectX 7 feature included in GeForce, and NVIDIA has a rather impressive demo for it. Particle systems could be very useful for rendering nice explosions (what's nice about an explosion?) or the fountain in Billy Crystal's garden from 'Analyze This'.
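
As a rough illustration of vertex blending, the following sketch transforms one vertex by two hypothetical 'bone' matrices and mixes the two results with a weight, which is essentially what happens at a character's joint. The matrices and the weight are made-up example values.

    def transform(m, v):
        # multiply a 4x4 row-major matrix with a 4-component vertex
        return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

    def blend(v, m_a, m_b, weight):
        # weighted mix of the vertex transformed by two different matrices
        a, b = transform(m_a, v), transform(m_b, v)
        return [weight * pa + (1.0 - weight) * pb for pa, pb in zip(a, b)]

    identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
    shift_up = [[1, 0, 0, 0], [0, 1, 0, 2], [0, 0, 1, 0], [0, 0, 0, 1]]
    print(blend([1.0, 1.0, 0.0, 1.0], identity, shift_up, weight=0.5))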

Let me not forget to mention the 350 MHz RAMDAC, which should be good for the best CRTs, and the fact that GeForce is currently still manufactured in a 0.22µ process.

Before We Look at the Benchmarks ...

... we should summarize our expectations. GeForce comes with great 3D-enhancements, but those nice features won't be found in any of the current games. In this regard we will again have to wait for the right software to show up, as so often. The high fill rate of GeForce should enhance the frame rate in current games, though, although the SDR-RAM equipped cards could get a bit touchy at high resolutions due to memory bandwidth that is even lower than a TNT2-Ultra's. What should we expect from the T&L-engine? Well, if all the hype we've heard is true, we should see great frame rates with almost any CPU, even in today's games. I wonder if this will be the case ...

The Benchmark Setup

Hardware Information
  CPU: PIII 550
  Motherboard (BIOS rev.): ABIT BX6 2.0 (BIOS date 7/13/99)
  Memory: 128 MB Viking PC100 CAS2
  Network: Netgear FA310TX

Driver Information
  NVIDIA GeForce 256: 4.12.01.0347
  ATI Rage Fury Pro: 4.11.6713
  NVIDIA TNT2 Series: 4.11.01.0208
  Voodoo3 Series: 4.11.01.2103.03.0204
  Matrox G400 Series: 4.11.01.1151

Environment Settings
  OS Version: Windows 98 SE 4.10.2222 A
  DirectX Version: 7.0
  Quake 3 Arena v1.08
    command line = +set cd_nocd 1 +set s_initsound 0
  Shogo v2.14
    Advanced Settings = disable sound, disable music, disable movies, disable joysticks, enable optimized surfaces, enable triple buffering, enable single-pass multi-texturing
    High Detail Settings = enabled
    Fortress Demo
  Expendable Demo Version
    Setup = use Triple Buffering
    Audio = disable sound
  Descent III Retail version
    Settings = -nosound -nomusic -nonetwork -timetest

The Benchmark Results - Descent III 640x480

We ran Descent III under Direct3D as well as OpenGL. This game doesn't have very complex 3D-scenes, which results in high frame rates. Unfortunately there's no way to run it at 32-bit color depth.

Descent III 640x480x16 DirectX 7.0 secret2.dem

At 640x480 GeForce can't really impress; it scores on par with ATI's Rage Fury Pro and its TNT2-brothers. However, 100+ fps should pretty much please everyone.

Descent III 640x480x16 OpenGL secret2.dem

Except for the 'drop-out' of G400 and Voodoo3, the picture is pretty similar. Only, the other cards don't perform as well as GeForce, which speaks against their drivers and for GeForce's.

Descent III 640x480x16 best API secret2.dem

'Best API' means that we compare the best score each card achieved in either D3D, OpenGL or Glide. For the first 7 cards it's D3D and for Voodoo3 it's naturally Glide. You may want to hear me say that Voodoo3 scores 'better' than the rest, but I haven't quite found the advantage that 130 fps has over 100 fps yet.

The Benchmark Results - Descent III 1024x768

Descent III 1024x768x16 DirectX 7.0 secret2.dem

Cranking up the resolution to a reasonable 1024x768 shows a different picture: GeForce shows off its fill-rate muscles and scores well ahead of the competition.

Descent III 1024x768x16 OpenGL secret2.dem

We witness again a well-performing GeForce, a failing G400 and Voodoo3, as well as the bad OpenGL-driver of the Rage Fury Pro.

Descent III 1024x768x16 best API secret2.dem

Now Voodoo3 and its Glide can't shake off the competing GeForce anymore, but I would have expected GeForce to score better. Is this one of the last proofs of the advantage of Glide in some situations ...? It seems quite obvious.

The Benchmark Results - Descent III 1600x1200

Descent III 1600x1200x16 DirectX 7.0 secret2.dem

At 1600x1200 GeForce shows its fill rate advantage again, but please note the G400 MAX! Matrox seems to have a better memory design than GeForce. Although G400 can't touch GeForce in terms of fill rate, its memory bandwidth lets it get very close to the (castrated?) GeForce w/SDR-RAM. This high resolution clearly shows the advantage of the DDR-GeForce, which won't be available in the shops for quite a while unfortunately.

Descent III 1600x1200x16 OpenGL secret2.dem

The plot thickens under OpenGL: GeForce w/DDR is a considerable 41% faster than GeForce w/SDR. This is enough to make the difference between playable and unplayable at this resolution and API.

Descent III 1600x1200x16 best API secret2.dem

Now Glide can't help Voodoo3 anymore; GeForce's sheer fill rate power puts it way ahead of the competition.

The Benchmark Results - Expendable Demo 640x480

Expendable is a rather tough game for the average 3D-card. It may not be complex in terms of polygon-counts, but it has a whole lot of lighting effects.

Expendable 640x480x16 Timedemo

GeForce scores rather embarrassingly; Fury Pro and the TNT2 cards look better at this resolution. Still, we're over 60 fps, so there's no sensible reason to complain yet.

Expendable 640x480x32 Timedemo

No difference at 32-bit color; only the two Voodoo3 cards had to drop out due to their lack of 32-bit color support.

The Benchmark Results - Expendable Demo 1024x768

Expendable 1024x768x16 Timedemo

At 1024x768 GeForce is leading, but not by a remarkable amount. I won't moan too much though, because GeForce scores at least over 60 fps.

Expendable 1024x768x32 Timedemo

At 32-bit color the story looks better, at least for the GeForce board with DDR-RAM. Please note that the 'Canadians', G400 and Rage Fury Pro take advantage of their ability to only use 16-bit deep Z-buffer at 32-bit color. NVIDIA's chips can't and don't do that. They have to render with 32-bit deep Z-buffer, which costs memory bandwidth, as you can see by the rather bad result of the GeForce w/SDR.

The Benchmark Results - Expendable Demo 1600x1200

Expendable 1600x1200x16 Timedemo

1600x1200 is again good for GeForce to show its fill rate advantage, but what do you say about G400 MAX and GeForce w/SDR? These results show again that GeForce with the slower single data rate memory is castrated and that the memory interface of G400 is rather respectable.

Expendable 1600x1200x32 Timedemo

I'd say that this is the one benchmark where GeForce looks worst, since even GeForce w/DDR is just a tiny bit faster than G400 MAX, and Rage Fury Pro is faster than GeForce w/SDR! However, let me not forget to tell you that due to some memory mapping issue GeForce can't run Expendable at 1600x1200x32 with triple buffering enabled; it runs out of memory. Thus I disabled triple buffering for this test run.

The Benchmark Results - Quake 3 Test 640x480

I guess I don't have much to say as an introduction to Quake3. This OpenGL-based game can bring a graphics card to its knees once you select 'high quality' mode. Its complexity in terms of polygon counts is probably the highest among these gaming benchmarks.

Quake3 Test 1.08 640x480 Normal Quality Demo1

GeForce is ahead of the competition, but not exactly by much at this low resolution. G400 suffers from its bad OpenGL-implementation, which is a shame for that chip really. Voodoo3 proves again that it's 'the master of the low resolutions'.

Quake3 Test 1.08 640x480 High Quality Demo1

High quality is what GeForce likes; its frame rate stays untouched from where it was at 'normal quality'.

The Benchmark Results - Quake 3 Test 1024x768

Quake3 Test 1.08 1024x768 Normal Quality Demo1

The plot thickens again as the resolution rises to 1024x768: GeForce scores way ahead of the competition, and the closest competitor is its own older brother, TNT2 Ultra.

Quake3 Test 1.08 1024x768 High Quality Demo1

The difference between DDR and SDR memory becomes obvious again at this 32-bit color 'high quality' setting. GeForce w/DDR scores close to double the competition. That's what GeForce is great for: playing Quake3 at high resolutions and high quality.

The Benchmark Results - Quake 3 Test 1600x1200

Quake3 Test 1.08 1600x1200 Normal Quality Demo1

It's rather impressive to see how well GeForce scores even at 1600x1200, but don't overlook the difference between the two memory-types again!

Quake3 Test 1.08 1600x1200 High Quality Demo1

1600x1200 and 'high quality' is too much even for GeForce; neither of the two can clear the magic 30 fps barrier.

The Benchmark Results - Shogo

Although not a particularly common game, Shogo also uses multi-texturing and a few lighting effects. The game is not particularly complex in terms of polygons, nor in terms of lighting, but we still decided to keep it in our benchmark suite for the time being.

Shogo 640x480x16 FORTRESS

We've seen similar scores at 640x480 before; GeForce is certainly not for 'low resolution gamers'.

Shogo 1024x768x16 FORTRESS

Things change at 1024x768; GeForce can show its fill rate power once again.

Shogo 1600x1200x16 FORTRESS

The scores at 1600x1200 don't surprise; GeForce scores about double the competition.

Shogo Screenshot
Download image in PNG format

GeForce had a bit of a problem with Shogo; explosions close to the player created a rather odd picture. I guess NVIDIA should have a look into that and fix the driver a bit.

TreeMark

NVIDIA's TreeMark is an OpenGL-benchmark that uses a very high polygon count. You've certainly heard of it; it's simply a scene with a very detailed tree. The camera moves in close around the tree and then away from it again, while a few fireflies fly around it.

That's what it looks like:

TreeMark Screenshot
Download image in PNG format

That's the tree from afar,

TreeMark Leaves Screenshot
Download image in PNG format

and here is a close-up. You can see how detailed it is.

The Benchmark Results - TreeMark

We ran the benchmark in two versions. The 'simple' version uses only 35,820 polygons/frame, a depth level of 5 and 4 lights, and it runs at 1024x768x16 in full screen mode. The 'complex' version uses 128,080 polygons/frame, a depth level of 8 and 6 light sources. It also runs at 1024x768x16 in full screen.

NVIDIA TreeMark SIMPLE 35,000 polygons/4 lights

You can see how much higher GeForce scores due to its transform and lighting engine; it's about 5 times faster than TNT2 Ultra.

NVIDIA TreeMark COMPLEX 129,000 polygons/6 lights

In the complex scene GeForce scores even six times as high as TNT2 Ultra.

CPU Scaling

Is it true, will GeForce put an end to CPU-scaling? Will you be able to run any game with any CPU as long as you've got GeForce? We tried to find the answer by running each of the above benchmarks with a Celeron 400 at 640x480x16 to keep the impact of the fill rate as low as possible. We ran those benchmarks on GeForce w/DDR and TNT2 Ultra.

The following chart shows the Celeron 400 score as a percentage of the Pentium III 550 score. A value of 69% means that with this card a Celeron 400 scored 69% of what the Pentium III 550 scored. With GeForce we would obviously expect the values to be as close to 100% as possible. See for yourself whether it achieves that.
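
Just to make the metric explicit, the chart values are computed like this (the numbers here are only an example):

    def scaling_percent(celeron_score, p3_score):
        # Celeron 400 result expressed relative to the Pentium III 550 result
        return 100.0 * celeron_score / p3_score

    print(scaling_percent(69.0, 100.0))   # 69.0 percent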

CPU Scaling - GeForce 256 vs TNT2 Ultra

You're probably just as disappointed as I was when I ran the benchmarks. GeForce scales almost identically to TNT2; only in Descent 3 can you find a noticeable difference. The only real exception is the TreeMark. I was impressed to see that the Celeron 400 scored an identical result to the Pentium III 550 with GeForce, while with TNT2 the score was only 75%. After talking to NVIDIA about this issue, I was told that most of the CPU-power is currently lost inside the rather young GeForce-drivers and that CPU-scaling will decrease once the drivers have matured.

The Facts Behind 3D-Games And A Possible Usage Of Geforce256's Integrated T&L-Engine

The term CPU-scaling in combination with NVIDIA's new GeForce256 GPU is becoming more and more an object of discussion and speculation. NVIDIA's comments on this topic add to the confusion, claiming that GeForce's 3D-performance is rather independent of CPU-power, regardless of whether a K6, a Celeron, a Pentium III or an Athlon is being used. The fact is that games will ALWAYS depend on CPU-performance, especially in the case of complex or multiplayer games, where the CPU has a lot more to do than computing transform and lighting. The bandwidth of the system buses, such as memory bandwidth, PCI-bandwidth and AGP-bandwidth, is another factor that impacts GeForce just as much as any other 3D-chip without integrated T&L. Nevertheless, GeForce256 can reduce CPU-scaling. Games with complex graphics and rather simple AI and physics benefit greatly from GeForce's T&L-engine, and even the other games should at least show a difference in CPU-scaling. Having said that, we should be aware that games have to fulfill some requirements to make use of an integrated T&L engine.
  1. The game must be programmed for an API that supports integrated T&L-engines, like DirectX 7 or OpenGL.
  2. Current DX6-games can only take real advantage of GeForce with a DX7-patch.
  3. The game's own engine shouldn't be so complex that it is the performance bottleneck in the first place.
  4. The rendering engine of the 3D-chip should be able to draw the frames supplied by its integrated T&L-engine fast enough.
In the case of our testing, only TreeMark, Dagoth Moor Zoological Gardens and, in some ways, Quake3 fulfill these requirements. The first two show a significant difference in CPU-scaling over chips without integrated T&L. Quake3 will show more of a difference once it is changed to depend more on the OpenGL transform and lighting procedures than on its own. For the other games we'll have to wait until the developers release DX7-patches.

Summary

NVIDIA's new GPU entered the scene with quite a lot of noise and in many respects it deserves all the attention it got so far. However, I had expected a bit more personally, but I am known to be a bit unreasonable sometimes.

Let's have a look at the good sides first. GeForce256 performs well and is ahead of the competition in the vast majority of the benchmarks, especially at high resolutions. Still, those benchmarks look as if they are from the last century once you consider GeForce's new features and its ability to display highly detailed scenes with huge amounts of polygons. Buyers of GeForce will indeed get the fastest and most advanced 3D-chip out there, and the pleasure of running games on GeForce may grow as new and demanding games show up on the scene.

However, first of all I expected GeForce to be further ahead of the competition even in today's benchmarks. I am definitely disappointed with the memory interface, which deeply depends on the availability of double data rate memory. This means that the cards with single data rate SDRAM, which are shipping right now, are not as fast as they could be, simply due to the memory issue. I also expected to see some kind of advantage from the transform and lighting engine in current applications. The CPU-scaling benchmark shows, though, that GeForce scales about the same as TNT2 right now.

What I have to say is that GeForce is another product made for the, hopefully close, future. As the drivers improve, even the scores in current games might improve, and once games become as detailed as the TreeMark, CPU-scaling may indeed become a thing of the past. Once DDR-SDRAM becomes available on GeForce-cards, it's definitely the card to buy, but you have to decide for yourself if the performance advantage offered by the currently shipping cards with SDR-memory is enough to justify the purchase. I also expect that NVIDIA will either improve the yields or soon move over to 0.18µ technology, so that future GeForce-chips will have higher clock rates and thus higher fill rates, which will also greatly improve their performance. I will get one personally, but then I can justify the expense and I want to have the fastest card I can get, especially for Quake3. If you are tight on budget but still keen on GeForce, then I'd suggest waiting until the DDR-cards become available. That's when I will get another one for myself.

NVIDIA had the courage to come first to market with a 3D-chip that includes a full transform and lighting engine. All the other cards with this feature that will be released later will take advantage of the fact that NVIDIA pushed the door open into a new gaming era with highly detailed graphics. I can understand well that the developers love this 'GPU', but we will have to wait for those developers to finish their work until we can start to experience GeForce's real potential.

CONTINUED: GeForce256 and the First T&L-Title
Now that we know how GeForce performs with today's games, let's have a look at its performance with an actual T&L-title. Dagoth Moor Zoological Gardens throws boatloads of polygons at our new 3D-hero and its contestants. Will the integrated T&L-engine make the difference?



