All of Nvidia's main board partners should be stocking Titan X, and buyers should be able to pre-order the card tomorrow from 2pm GMT. Like previous Titan family cards, it will come as the reference design only – some partners may ship their own coolers for it, but these will need to be included separately and installed manually. Nvidia is also working to make sure that compatible waterblocks are available at or shortly after launch.

In terms of features, Titan X is DirectX 12 compatible with Feature Level 12.1. Nvidia's latest visual technologies are also of course supported, including VXGI, MFAA and DSR, which are all covered in our GTX 980 review. The most exciting of these is VXGI, a method of rendering dynamic lighting in scenes in real time on the GPU, which has now been integrated into a branch of Unreal Engine 4 – an engine that was recently made free to developers worldwide.

Nvidia is also working on virtual reality technologies which Titan X will support. These include Asynchronous Time Warp, whereby a scene is “shifted” in line with the latest information from a VR headset's tracking sensor right before being displayed, rather than having the GPU re-render it, thus reducing the perceived latency between head movements and what's shown. There's also VR SLI, where one card is assigned to each eye in the VR headset. Both VR SLI and Asynchronous Time Warp can be used together, with an alpha driver enabling this functionality now available to select developers and partners.

Getting to the nitty gritty, GM200 is a fully enabled 28nm Maxwell part with a 601mm² die size, 8 billion transistors and 3,072 CUDA cores. This is 50 percent more cores than in the GTX 980's GM204 GPU. In fact, multiply GM204 by 1.5 and you effectively arrive at GM200 – the highly parallel design of Maxwell makes scaling like this relatively easy. There are now two more GPCs (six in total) with four SMMs apiece, which also takes the texture unit count to 192. Memory controllers are up from four to six too, making for a 384-bit interface. Since each controller is tied to 16 ROPs and 512KB of L2 cache, GTX Titan X has a whopping 96 ROPs and 3MB of L2 cache. Remember, this is a fully enabled part, so there will be no repeat of the fiasco over partially disabled ROP/L2 cache partitions that befell the GTX 970. The GPU has a base clock of 1GHz and a rated boost clock of 1,075MHz, though as ever the actual boost speed will vary with the workload, thermal environment and so on.

While the GPU has essentially seen a 50 percent increase over GM204, Titan X has had a threefold increase over the GTX 980 when it comes to VRAM, and is thus equipped with a massive 12GB frame buffer. According to Nvidia, this was a developer-driven decision, with the company hearing feedback that larger buffers are becoming more and more important, although where that leaves the 4GB GTX 980 in the future remains to be seen. Either way, 12GB is well above AMD's current largest frame buffer, which is 8GB on selected R9 290X models. The memory still runs at 7,010MHz effective, and with the wider memory bus this takes total memory bandwidth to 336.5GB/sec.

Interestingly, Nvidia is no longer offering unlocked double-precision performance, despite this having been a hallmark of the Titan family. Instead, it is locked (via driver, not hardware) at 1/32 of single-precision performance, just as with the standard GTX 900 series. This means Titan X is being marketed exclusively at gamers rather than those more interested in compute performance.
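The unit counts and bandwidth figures quoted above all follow from simple multiplication, and can be sanity-checked in a few lines of Python. The per-unit breakdowns (128 cores and 8 texture units per SMM, 16 ROPs and 512KB of L2 per 64-bit memory controller) are standard Maxwell figures, not stated explicitly in the article:

```python
# Sanity check of the GM200 figures: 6 GPCs x 4 SMMs, six memory controllers.
GPCS = 6                 # up from four on GM204
SMMS_PER_GPC = 4
CORES_PER_SMM = 128      # standard Maxwell SMM width (assumption, see lead-in)
TMUS_PER_SMM = 8         # likewise standard Maxwell (assumption)
MEM_CONTROLLERS = 6      # 64-bit each -> 384-bit interface

cuda_cores = GPCS * SMMS_PER_GPC * CORES_PER_SMM   # 3,072
texture_units = GPCS * SMMS_PER_GPC * TMUS_PER_SMM # 192
rops = MEM_CONTROLLERS * 16                        # 96
l2_cache_kb = MEM_CONTROLLERS * 512                # 3,072KB = 3MB

# 7,010MHz effective GDDR5 across a 384-bit (48-byte) bus:
bandwidth_gb_s = 7010e6 * (384 / 8) / 1e9          # ~336.5GB/sec

print(cuda_cores, texture_units, rops, l2_cache_kb, round(bandwidth_gb_s, 1))
```

Each number matches the figure quoted in the text, which is what makes the "multiply GM204 by 1.5" observation work so neatly.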
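The practical effect of the 1/32 double-precision lock can be estimated from the figures above. This is a back-of-the-envelope sketch, not an article figure: it assumes the usual convention that each core retires one fused multiply-add (two floating-point operations) per clock at the rated boost speed:

```python
# Rough FP32 vs FP64 throughput for Titan X at the rated boost clock.
CUDA_CORES = 3072
BOOST_CLOCK_GHZ = 1.075
FLOPS_PER_CORE_PER_CLOCK = 2   # one FMA = two ops (assumption, see lead-in)
FP64_RATIO = 1 / 32            # driver-enforced lock, per the article

fp32_gflops = CUDA_CORES * FLOPS_PER_CORE_PER_CLOCK * BOOST_CLOCK_GHZ
fp64_gflops = fp32_gflops * FP64_RATIO

print(round(fp32_gflops, 1), round(fp64_gflops, 1))
```

That works out to roughly 6.6 TFLOPS of single-precision against a little over 200 GFLOPS of double-precision, which is why the card makes far more sense for gamers than for compute buyers.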
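The Asynchronous Time Warp idea mentioned earlier can be illustrated with a toy model. This is emphatically not Nvidia's implementation – a real time warp reprojects the image in 3D – it only shows the render-with-an-old-pose, shift-with-the-latest-pose structure that cuts perceived latency:

```python
# Toy 1D "renderer": one pixel column per degree of view, with a single
# object at a fixed world-space direction (all names here are illustrative).
OBJECT_YAW = 180

def render_scene(camera_yaw, width=360):
    frame = [0.0] * width
    frame[(OBJECT_YAW - int(camera_yaw)) % width] = 1.0
    return frame

def time_warp(frame, yaw_at_render, yaw_at_display):
    # Shift the finished frame by however far the head turned after
    # rendering, instead of asking the GPU to re-render the scene.
    d = (int(yaw_at_display) - int(yaw_at_render)) % len(frame)
    return frame[d:] + frame[:d]

rendered = render_scene(100)            # pose sampled when rendering began
warped = time_warp(rendered, 100, 103)  # head turned 3 degrees since then
print(warped == render_scene(103))      # True: matches a fresh render
```

For pure rotation in this toy world the shifted frame is identical to a re-render at the new pose, for a fraction of the cost – which is the latency win the technique is after.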