Friday, October 17th 2008

NVIDIA Introduces NVIDIA Quadro CX - The Accelerator For Adobe Creative Suite 4

NVIDIA today introduced its new accelerator for Adobe Creative Suite 4 software, the NVIDIA Quadro CX. This new GPU provides creative professionals with a faster, better, more reliable way to maximize their creativity. Quadro CX was architected to deliver the best performance for the new GPU-optimized features of Adobe Creative Suite 4.

"Adobe is at the forefront of the Visual Computing Revolution," said Dan Vivoli, executive vice president of marketing at NVDIA. "CS4's GPU features are sending shockwaves through the creative industry."

"Photoshop users are always looking for maximum performance, and we recognized that tapping into the power of the GPU is one way to give it to them," said Kevin Connor, vice president of product management for Professional Digital Imaging at Adobe. "Thanks to NVIDIA's efforts to optimize the Quadro CX product for CS4, users can be assured of a dramatically fast and fluid experience on tasks such as panning, zooming, and rotating large images as well as manipulating 3D content."

NVIDIA specifically designed and optimized the Quadro CX to enhance the performance of the Adobe Creative Suite 4 product line and meet the unique needs of the Creative Suite 4 professional. Adding NVIDIA Quadro CX to the pipeline gives creative professionals a faster, better, and more reliable experience. With NVIDIA Quadro CX, users can encode H.264 videos at lightning-fast speeds with the NVIDIA CUDA-enabled plug-in for Adobe Premiere Pro CS4, RapiHD from Elemental Technologies; accelerate rendering time for advanced effects; and accurately preview what deliverables will look like with 30-bit color or uncompressed 10-bit/12-bit SDI before final output.

Adobe Creative Suite 4 software now has added optimization to take advantage of GPU technology, including the parallel processing capability in the Quadro CX GPU. These enhancements accelerate a number of visually intensive operations, including:
  • Adobe Photoshop CS4 uses the NVIDIA Quadro CX GPU to bring unprecedented fluidity to image navigation. The GPU enables real-time image rotation, zooming, and panning, and makes changes to the view instantaneous and smooth. Adobe Photoshop CS4 also taps the GPU for on-screen compositing of both 2D and 3D content, ensuring smoothly anti-aliased results regardless of zoom level. Brush resizing and brushstroke preview, 3D movement, high-dynamic-range tone mapping, and color conversion are also accelerated by the GPU.
  • Adobe After Effects CS4 now has added optimization features that accelerate a variety of creative effects, making it easier than ever to add graphics and visual effects to video, which lets the artist move quickly from concept to final product and speeds up the workflow. Accelerated effects include depth of field, bilateral blur, turbulent noise (such as flowing water or waving flags), and cartoon effects. NVIDIA Quadro CX takes advantage of these workflow enhancements.
  • Adobe Premiere Pro CS4 can take advantage of Quadro CX to accelerate high-quality video effects such as motion, opacity, color, and image distortion. Quadro CX also enables faster editing of multiple high-definition video streams and graphic overlays and provides a variety of video output choices for high-quality preview, including DisplayPort, component TV, or uncompressed 10-bit or 12-bit SDI.
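Why do operations like color conversion benefit from a GPU? Each output pixel depends only on its own input pixel, so thousands of pixels can be processed in parallel. Below is a minimal sketch of that data-parallel pattern (pure Python, using a hypothetical 8-bit RGB-to-luma conversion with the standard BT.601 weights; this illustrates the general idea, not Adobe's or NVIDIA's actual code):

```python
# Sketch: per-pixel color conversion (8-bit RGB -> luma, BT.601 weights).
# Each pixel's result depends only on that one pixel, so on a GPU the loop
# body could run as one thread per pixel -- the property that makes such
# operations good candidates for GPU acceleration.

def rgb_to_luma(pixels):
    """pixels: list of (r, g, b) tuples with 8-bit channel values."""
    out = []
    for r, g, b in pixels:  # independent per-pixel work
        y = 0.299 * r + 0.587 * g + 0.114 * b
        out.append(min(255, int(round(y))))
    return out

image = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 255)]
print(rgb_to_luma(image))  # -> [76, 150, 29, 255]
```

The independence of each pixel's computation is exactly what lets an application offload such work to the hundreds of parallel processors on a card like the Quadro CX.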
NVIDIA Quadro CX is available starting today, from leading resellers, e-tailers, and retailers, for $1,999 USD. For more information on NVIDIA Quadro and the new Quadro CX, please visit: www.nvidia.com/quadrocx.
Source: NVIDIA

33 Comments on NVIDIA Introduces NVIDIA Quadro CX - The Accelerator For Adobe Creative Suite 4

#26
aiaN
Sorry to tell you, but you're all wrong. Softmodding isn't possible the way it used to be. But we still have some geniuses out there who have figured this out.

Now let me set you straight, because I only registered here to tell you how this works.

So! Earlier on, people unlocked shaders and other things by physically modifying the cards. Later it was possible to unlock things with a simple BIOS tweak. Now it isn't that easy!

When it comes to Quadro cards, the biggest difference is usually the RAM, yes, but what you're really paying for is a better guarantee that the card won't crash on you, plus the huge driver package that comes with it for professional CAD work.

So, when it comes to GeForce cards, there's an even simpler way to get one working as a Quadro card: a newer Quadro softmod script, which actually works together with RivaTuner (www.guru3d.com/index.php?page=rivatuner)!

Earlier versions of this didn't work well at all, and it still doesn't raise a GeForce card's performance to that of a Quadro. But it does increase it!

Wish you good luck!

BTW, the script is a plugin for RivaTuner.
#27
TheMailMan78
mdm-adph: Then you can spend the other $1700 on hookers and blow
I could go for some hookers and blow. :pimp:
#28
eodeo
aiaN: Quadro cards the biggest difference is usually the RAM
Here's a simple comparison of the 8800 GTX and its variants:

The Quadro FX 4600 is a renamed and underclocked 8800 GTX that costs $1,100.

The Quadro FX 5600 is a renamed and overclocked 8800 GTX with twice the memory: 1.5 GB as opposed to the "low" 768 MB. This one is $2,200.

The 8800 GTX uses exactly the same chip, although with somewhat different core/shader/memory speeds, and while it's actually faster than its underclocked Quadro variant, it now costs below $200, if it can be found at all. While the QFX 5600 is a bit faster than the 8800 GTX, the 8800 Ultra is faster than both, since it's even more overclocked. The 8800 Ultra is also sub-$200 today... if found.

Now let's look at the performance difference between the highest-end Quadro cards and a two-generations-old GeForce:

Which one is faster in 3ds Max, AutoCAD, or Maya?

Here's the kicker: the 8800!

In all tests and in every possible way, the 8800 is faster in real work. How? It's actually physically faster than its Quadro variants, and the added RAM on the 5600 model does nothing, since 768 MB is already far more than any of those users or tests need to begin with. 1.5 GB is nothing more than a marketing ploy for the clueless, or nice if you have a CPU from the future (are we there yet, Doc?) that could handle the billion-plus-polygon scene needed to use up 1.5 GB of RAM.
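It is easy to put rough numbers on the memory argument by estimating how many triangles a given amount of video RAM can hold. The figures below are illustrative assumptions (a 32-byte vertex carrying position, normal, and UV, with 32-bit indices and roughly one unique vertex per triangle), and they ignore textures and framebuffers, which consume much of a real card's memory:

```python
# Back-of-envelope: how many triangles fit in a VRAM budget, assuming
# indexed geometry with about one unique vertex per triangle.

BYTES_PER_VERTEX = 32   # assumed: 3 floats position, 3 normal, 2 UV
BYTES_PER_INDEX = 4     # assumed: 32-bit indices, 3 per triangle
MIB = 1024 * 1024

def max_triangles(vram_bytes):
    per_tri = BYTES_PER_VERTEX + 3 * BYTES_PER_INDEX  # ~44 bytes/triangle
    return vram_bytes // per_tri

for mib in (768, 1536):
    print(f"{mib} MiB -> ~{max_triangles(mib * MIB) / 1e6:.0f}M triangles")
```

Under these assumptions, 768 MB already holds on the order of 18 million triangles of raw geometry, so whether the extra memory on the FX 5600 matters depends far more on texture and buffer load than on polygon count alone.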
"But this surely can't be, just look at SpecCheatTests!"
(I realize I'm quoting myself on an as-yet-unstated objection.)

It's true that SpecCheatTests won't reflect what I've just said, but those tests are synthetic and OpenGL-based. OpenGL is not used by any of the mentioned professional programs for ANY serious work today.

The 8800, aside from having its windowed OpenGL performance crippled in the drivers, is also treated badly by the SpecCheatTest: its conditions are next to impossible to recreate in actual work with many of the newer professional programs that have long since ditched the outdated and backward OpenGL path. OpenGL, aside from being old/slow/ugly, is additionally crippled on all non-workstation cards, as if it weren't slow and ugly enough to begin with.

To see what I'm talking about with OpenGL vs. D3D performance, just test Max/Maya/AutoCAD with a workstation card in both OpenGL and D3D modes. You'll surely notice that OpenGL is 10x slower at best, and 100x slower on average. It might be harder for a novice to notice, but light and shadow effects and quality are sub-par or nonexistent under OpenGL as well.

That's all on "professional" cards whose OpenGL drivers aren't crippled. When you factor in the traditional driver crippling that takes away another 100x of performance on a same-class GeForce chip, you end up 1,000x to 10,000x slower than D3D, while looking worse too.

Who uses OpenGL for anything anymore? Poor, obsolete CAD users, and people forced onto OpenGL by a lack of choice on their OS (Mac, Linux...).
#29
aiaN
Yeah, too bad the people who run those tests don't sit and work with high-poly models through the roof and enormous textures while working with physics and animation at the same time... because in computer games all of that is pre-calculated. But in Maya and 3ds Max these are things the GPU needs to calculate for a high-quality render. And when you do a Mental Ray render with face counts over a million, you can start swallowing RAM chips as if they were H2O molecules... and you know you'll need a lot of them, or it (or you) will choke...

So, no need for loads of RAM? Well... don't come here and say 4 GB of RAM on the motherboard and 1 GB of RAM on the GPU is enough. I know for sure, from working with programs like Maya and Max in a professional manner, that it isn't enough...

But except for that, I won't say you're wrong!
#30
eodeo
Mental Ray will probably one day run on GPU power. That day isn't today, and by the looks of it, it isn't that close yet; then again, it isn't very far away either.

All scene requirements at render time are handled by the CPU and system RAM. You can have integrated graphics without a single MB of dedicated video RAM, or a Quadro FX 5600 with 1.5 GB; it won't make a 0.01% difference.

Scene previewing is another matter altogether, and all the x8xx cards that have come out over the last two years can handle scenes with over a billion polygons and 100K textures. Modern CPUs can't. They might be able to, if the viewports were optimized to use more than one core, but sadly, that is not yet the case.

For over two years, CPUs, not GPUs, have been the bottleneck of high-end 3D programs. Still, interestingly enough, GPUs keep getting ridiculously better while CPUs get only minute improvements.
#31
TheGuruStud
Sounds like those douche coders need to learn a lesson from gaming. OpenGL rules.
#32
panchoman
Can't CUDA cards also accelerate this? So why would we need this, unless we're ridiculous game designers?
#33
norway7
Softmod of GeForce GTX 260/280 to Quadro CX

I think most users of Premiere CS4 wonder whether it's possible to decrease H.264 rendering time with a softmodded GTX 260/280. Maybe not by as much as with a Quadro CX, but GPU rendering will without a doubt be faster than ordinary CPU rendering. Has anyone actually tested this?