"640k ought to be enough for anybody." Dating back to the year 1981, this is probably the most famous quote attributed to Bill Gates. In light of today's software and accompanying memory requirements, most people can't help but smirk at this statement. But all humor aside, Mr. Gates' little mis-prediction has had a huge impact on the way we think about computer memory, leading to a "too much is barely enough" mentality. Just last year we witnessed the standard memory configuration for pre-built computers jump to 256MB RAM, thanks to new Windows versions and plummeting memory prices.
A similar development is evident in the graphics card market. The original monochrome graphics adapters needed only a few kilobytes of memory. The first video cards that really deserved the name were the VGA boards, which offered a resolution of 320x200 pixels at 256 colors or 640x480 pixels at 16 colors and shipped with 256KB of video memory. Then S-VGA entered the picture and quickly became the accepted standard. EGA, on the other hand, was never able to gain a foothold and left the picture very quickly. The next logical step brought boards with 512KB, 1MB and more RAM. At the time, the larger memory sizes were needed mainly to render higher resolutions, not for the extra functions video cards have since added. For example, a card needs almost 8MB just to display 1600x1200 at 32-bit color.
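The arithmetic behind these 2D figures is simple: a framebuffer needs width x height x bytes-per-pixel. A quick sketch of that calculation (the function name is my own, for illustration):

```python
def framebuffer_bytes(width, height, bits_per_pixel):
    """Bytes needed to hold a single 2D frame at the given color depth."""
    return width * height * bits_per_pixel // 8

# 1600x1200 at 32-bit color:
needed = framebuffer_bytes(1600, 1200, 32)
print(f"{needed / 2**20:.2f} MB")  # 7.32 MB - hence an 8MB card
```

The same formula gives 150KB for VGA's 640x480 at 16 colors (4 bits per pixel), which is why 256KB was enough back then.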
With the advent of 3D accelerators (the first of which were really more "decelerators"), the video memory was burdened with the additional task of buffering the 3D data. A RIVA 128 with 4MB was only able to render a 3D scene at up to 800x600 pixels - in 16-bit color, mind you! The follow-up card offered 8MB and was able to render 3D at up to 1024x768 pixels. Starting with the 16MB generation of cards, the video memory became of only secondary importance for 2D resolutions. Instead, the quality of the card's RAMDAC, and consequently the supported refresh rate, now determined the maximum attainable resolution.
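The 800x600 ceiling falls out of the same arithmetic once the 3D buffers are counted. A simplified sketch assuming double buffering with a 16-bit Z-buffer (real drivers also reserve memory for textures and overhead, so the cutoff is even tighter):

```python
def buffer_bytes(width, height, color_bytes=2, z_bytes=2):
    """Front + back color buffers (double buffering) plus a Z-buffer."""
    pixels = width * height
    return 2 * pixels * color_bytes + pixels * z_bytes

CARD = 4 * 2**20  # RIVA 128 with 4MB of video memory
for w, h in [(800, 600), (1024, 768)]:
    need = buffer_bytes(w, h)
    print(w, h, f"{need / 2**20:.2f} MB", "fits" if need < CARD else "too big")
```

At 800x600 the buffers alone consume about 2.75MB, leaving barely over 1MB for textures; at 1024x768 they exceed 4MB and no longer fit at all.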
Fast forward to the present, and conventional wisdom has it that a 32MB video card is sufficient to play current 3D games. The only feature that would truly require more memory would be anti-aliasing. Nonetheless, cards of NVIDIA's GeForce2 Pro and Ultra generations and ATi's RADEON line began shipping with 64MB. Considering the relatively modest (super sampling) FSAA performance, this was more than enough. Even today's high-end cards like the ATi RADEON 8500 and NVIDIA's Ti500 don't need more than that.
Yet, true to the trend, the first companies have now begun selling Titanium cards with 128MB of RAM. Interestingly, it's not the flagship Ti500 chip that's receiving the royal treatment, but the more budget-oriented Ti200. Most likely, the faster memory the Ti500 requires would have made this option too expensive. So, does the additional memory translate into additional performance, or is it just a clever PR strategy by manufacturers hoping to sell more cards to enthusiasts? That's what we want to find out in this article.
Leadtek's entry into this new niche is the WinFast Titanium 200TDH 128MB. Gainward sent us the GeForce3 Power Pack!!! Ti/500 TV Jumbo in two versions: the first a standard version, the second a faster "Golden Sample" that runs at higher clock speeds and is equipped with a more powerful fan. We'll cover the differences between the cards in more detail later, though. Aside from the additional memory, the 128MB Ti200s are identical to the 64MB versions. With the exception of the Gainward Golden Sample, they also use NVIDIA's reference clock speeds of 175MHz (GPU) and 400MHz (memory).