
NVIDIA Names Stanford's Bill Dally Chief Scientist, VP of Research

btarunr

Editor & Senior Moderator
Staff member
Joined
Oct 9, 2007
Messages
46,277 (7.69/day)
Location
Hyderabad, India
System Name RBMK-1000
Processor AMD Ryzen 7 5700G
Motherboard ASUS ROG Strix B450-E Gaming
Cooling DeepCool Gammax L240 V2
Memory 2x 8GB G.Skill Sniper X
Video Card(s) Palit GeForce RTX 2080 SUPER GameRock
Storage Western Digital Black NVMe 512GB
Display(s) BenQ 1440p 60 Hz 27-inch
Case Corsair Carbide 100R
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Cooler Master MWE Gold 650W
Mouse ASUS ROG Strix Impact
Keyboard Gamdias Hermes E2
Software Windows 11 Pro
NVIDIA Corporation today announced that Bill Dally, the chairman of Stanford University's computer science department, will join the company as Chief Scientist and Vice President of NVIDIA Research. The company also announced that longtime Chief Scientist David Kirk has been appointed "NVIDIA Fellow."

"I am thrilled to welcome Bill to NVIDIA at such a pivotal time for our company," said Jen-Hsun Huang, president and CEO, NVIDIA. "His pioneering work in stream processors at Stanford greatly influenced the work we are doing at NVIDIA today. As one of the world's founding visionaries in parallel computing, he shares our passion for the GPU's evolution into a general purpose parallel processor and how it is increasingly becoming the soul of the new PC. His reputation as an innovator in our industry is unrivaled. It is truly an honor to have a legend like Bill in our company."



"I would also like to congratulate David Kirk for the enormous impact he has had at NVIDIA. David has worn many hats over the years - from product architecture to chief evangelist. His technical and strategic insight has helped us enable an entire new world of visual computing. We will all continue to benefit from his valuable contributions."

About Bill Dally
At Stanford University, Dally has been a Professor of Computer Science since 1997 and Chairman of the Computer Science Department since 2005. Dally and his team developed the system architecture, network architecture, signaling, routing and synchronization technology that is found in most large parallel computers today. At Caltech he designed the MOSSIM Simulation Engine and the Torus Routing chip, which pioneered "wormhole" routing and virtual-channel flow control. His group at MIT built the J-Machine and the M-Machine, experimental parallel computer systems that pioneered the separation of mechanism from programming models and demonstrated very low overhead synchronization and communication mechanisms. He is a cofounder of Velio Communications and Stream Processors, Inc. Dally is a Fellow of the American Academy of Arts & Sciences. He is also a Fellow of the IEEE and the ACM and has received the IEEE Seymour Cray Award and the ACM Maurice Wilkes Award. He has published over 200 papers, holds over 50 issued patents, and is an author of the textbooks Digital Systems Engineering and Principles and Practices of Interconnection Networks.

About David Kirk
David Kirk has been with NVIDIA since January 1997. His contribution includes leading NVIDIA graphics technology development for today's most popular consumer entertainment platforms. In 2006, Dr. Kirk was elected to the National Academy of Engineering (NAE) for his role in bringing high-performance graphics to personal computers. Election to the NAE is among the highest professional distinctions awarded in engineering. In 2002, Dr. Kirk received the SIGGRAPH Computer Graphics Achievement Award for his role in bringing high-performance computer graphics systems to the mass market. From 1993 to 1996, Dr. Kirk was Chief Scientist, Head of Technology for Crystal Dynamics, a video game manufacturing company. From 1989 to 1991, Dr. Kirk was an engineer for the Apollo Systems Division of Hewlett-Packard Company. Dr. Kirk is the inventor of 50 patents and patent applications relating to graphics design and has published more than 50 articles on graphics technology. Dr. Kirk holds B.S. and M.S. degrees in Mechanical Engineering from the Massachusetts Institute of Technology and M.S. and Ph.D. degrees in Computer Science from the California Institute of Technology.

 

wolf

Performance Enthusiast
Joined
May 7, 2007
Messages
7,726 (1.25/day)
System Name MightyX
Processor Ryzen 5800X3D
Motherboard Gigabyte X570 I Aorus Pro WiFi
Cooling Scythe Fuma 2
Memory 32GB DDR4 3600 CL16
Video Card(s) Asus TUF RTX3080 Deshrouded
Storage WD Black SN850X 2TB
Display(s) LG 42C2 4K OLED
Case Coolermaster NR200P
Audio Device(s) LG SN5Y / Focal Clear
Power Supply Corsair SF750 Platinum
Mouse Corsair Dark Core RBG Pro SE
Keyboard Glorious GMMK Compact w/pudding
VR HMD Meta Quest 3
Software case populated with Artic P12's
Benchmark Scores 4k120 OLED Gsync bliss
After reading all of that, I think: awesome.

This guy seems like a great mind to tap for this kind of product.

'Twill be good to see how his input affects Nvidia's products and/or marketing.
 

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,259 (4.65/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
I wonder how much he has to do with NVIDIA being in cahoots with the Folding@home project. If he is the primary driving force behind that, I'm done with NVIDIA. I buy graphics cards for games, not pet projects--especially corporate-sponsored projects.
 
Last edited:
Joined
Feb 26, 2007
Messages
850 (0.14/day)
Location
USA
I wonder how much he has to do with NVIDIA being in cahoots with the Folding@home project.
I would bet a lot, but then, if you think about it, his goals were met by partnering with Nvidia to make folding faster. My folding has been crazy with my GTX 260.

This is hopefully good news for Nvidia and better products for us : )
 

FordGT90Concept

What this guy is liable to do is remove NVIDIA from the gaming market altogether by striving to increase folding performance. It's already happening, too, seeing how many people build computers with 2+ NVIDIA cards in them just for folding. I really don't like where NVIDIA is going with this, hence my comment about NVIDIA potentially losing a customer.

I'm just glad Intel is getting ready to enter the market with NVIDIA perhaps leaving.
 
Joined
Feb 21, 2008
Messages
4,985 (0.85/day)
Location
Greensboro, NC, USA
System Name Cosmos F1000
Processor i9-9900k
Motherboard Gigabyte Z370XP SLI, BIOS 15a
Cooling Corsair H100i, Panaflo's on case
Memory XPG GAMMIX D30 2x16GB DDR4 3200 CL16
Video Card(s) EVGA RTX 2080 ti
Storage 1TB 960 Pro, 2TB Samsung 850 Pro, 4TB WD Hard Drive
Display(s) ASUS ROG SWIFT PG278Q 27"
Case CM Cosmos 1000
Audio Device(s) logitech 5.1 system (midrange quality)
Power Supply CORSAIR HXi HX1000i 1000watt
Mouse G400s Logitech
Keyboard K65 RGB Corsair Tenkeyless Cherry Red MX
Software Win10 Pro, Win7 x64 Professional
What this guy is liable to do is remove NVIDIA from the gaming market altogether by striving to increase folding performance. It's already happening, too, seeing how many people build computers with 2+ NVIDIA cards in them just for folding. I really don't like where NVIDIA is going with this, hence my comment about NVIDIA potentially losing a customer.

I'm just glad Intel is getting ready to enter the market with NVIDIA perhaps leaving.

Yeah, I really don't want to see the cure for cancer or new treatments for cancer patients if it means compromising my FPS (frames per second). :laugh:

In all seriousness: I think philanthropic pursuits are fine. In fact, they should be encouraged, considering cancer takes some of our loved ones away from us every passing day. Unless gaming is somehow more important. :wtf:

I think Nvidia is showing they can be a company with heart while trying to be a great graphics card company at the same time. Nothing wrong with that. Try not to be so negative.
 

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,259 (4.65/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
What they are doing is not philanthropic. What they're doing is capitalizing on philanthropy. If NVIDIA were actually being philanthropic here, they would design a card specifically for folding and create a large farm just to donate to the project. They aren't doing that.

They play the middle man: "We've got these cards which are supposed to be great for gaming, but you can also use them to simulate protein folding for Stanford. The more you buy and the more you run them, the higher your score." What, exactly, is NVIDIA doing that's philanthropic, except facilitating the movement of more product?
 
Joined
Feb 21, 2008
What they are doing is not philanthropic. What they're doing is capitalizing on philanthropy. If NVIDIA were actually being philanthropic here, they would design a card specifically for folding and create a large farm just to donate to the project. They aren't doing that.

They play the middle man: "We've got these cards which are supposed to be great for gaming, but you can also use them to simulate protein folding for Stanford. The more you buy and the more you run them, the higher your score." What, exactly, is NVIDIA doing that's philanthropic, except facilitating the movement of more product?

I see it as no different from a solar panel factory lowering energy dependence on coal. To act like all philanthropy cannot turn a profit, or is evil if it does, is ridiculous. Profit is the lifeblood of capitalism, but here they're choosing to go into something that benefits us, instead of just pure self-indulgence as a 100% gaming product would be.

If anything, it gives gamers a chance to give a little back to the world. And if you think about it, what's more noble a goal than trying to make the world a better place than it was before, by ending suffering or giving more hope to those in need of a cure? Giving people hope and an outlet to make a difference in a positive way is never the wrong thing to do. :toast:
 

FordGT90Concept

I've been down this road before, and it's practically arguing religion ("but it cures cancer!!!!"). There's no sense in continuing.

Cancer is nature's way of saying you've outlived your welcome.
 
Joined
Feb 21, 2008
I've been down this road before, and it's practically arguing religion ("but it cures cancer!!!!"). There's no sense in continuing.

Cancer is nature's way of saying you've outlived your usefulness.


Well, some believe life is more important than to just let it slip away. I like living personally. :cool:
 

DarkMatter

New Member
Joined
Oct 5, 2007
Messages
1,714 (0.28/day)
Processor Intel C2Q Q6600 @ Stock (for now)
Motherboard Asus P5Q-E
Cooling Proc: Scythe Mine, Graphics: Zalman VF900 Cu
Memory 4 GB (2x2GB) DDR2 Corsair Dominator 1066Mhz 5-5-5-15
Video Card(s) GigaByte 8800GT Stock Clocks: 700Mhz Core, 1700 Shader, 1940 Memory
Storage 74 GB WD Raptor 10000rpm, 2x250 GB Seagate Raid 0
Display(s) HP p1130, 21" Trinitron
Case Antec p180
Audio Device(s) Creative X-Fi PLatinum
Power Supply 700W FSP Group 85% Efficiency
Software Windows XP
What they are doing is not philanthropic. What they're doing is capitalizing on philanthropy. If NVIDIA were actually being philanthropic here, they would design a card specifically for folding and create a large farm just to donate to the project. They aren't doing that.

They play the middle man: "We've got these cards which are supposed to be great for gaming, but you can also use them to simulate protein folding for Stanford. The more you buy and the more you run them, the higher your score." What, exactly, is NVIDIA doing that's philanthropic, except facilitating the movement of more product?

I think you don't understand what F@H is. No company can build a fast enough supercomputer; Nvidia, by pushing GPGPU and F@H, and by teaching GPGPU in universities, is doing much more than a farm of supercomputers could.
Quote from the F@H FAQ:

Why not just use a supercomputer?

Modern supercomputers are essentially clusters of hundreds of processors linked by fast networking. The speed of these processors is comparable to (and often slower than) those found in PCs! Thus, if an algorithm (like ours) does not need the fast networking, it will run just as fast on a supercluster as a supercomputer. However, our application needs not the hundreds of processors found in modern supercomputers, but hundreds of thousands of processors. Hence, the calculations performed on Folding@home would not be possible by any other means! Moreover, even if we were given exclusive access to all of the supercomputers in the world, we would still have fewer computing cycles than we do with the Folding@home cluster! This is possible since PC processors are now very fast and there are hundreds of millions of PCs sitting idle in the world.

EDIT: Just for an easy comparison: the fastest supercomputer, Roadrunner, has 12,960 IBM PowerXCell 8i CPUs and 6,480 AMD Opteron dual-core processors, with a peak of 1.7 petaflops. Looking at the statistics on these forums, I find there are 38,933 members. If only half the members contributed to F@H at the same time, there would be much more power there. Now extrapolate to the world...
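That back-of-the-envelope extrapolation can be sketched in a few lines. The per-card throughput figure below is purely an assumption for illustration, not a measured number:

```python
# Compare Roadrunner's quoted peak against a hypothetical volunteer fleet.
roadrunner_pflops = 1.7        # peak figure cited above

members = 38_933               # forum member count cited above
participating = members // 2   # "if only half the members contributed"
gflops_per_card = 500          # assumed per-GPU throughput (hypothetical)

fleet_pflops = participating * gflops_per_card / 1_000_000  # GFLOPS -> PFLOPS
print(f"{fleet_pflops:.2f} PFLOPS vs Roadrunner's {roadrunner_pflops} PFLOPS")
```

Even with a modest assumed per-card number, the aggregate comfortably exceeds a single machine's peak, which is the point the post is making.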

Cancer is nature's way of saying you've outlived your welcome.

WTF??!!
 
Last edited:

ascstinger

New Member
Joined
Apr 10, 2008
Messages
544 (0.09/day)
Location
In a house
System Name F34R T3H 0R4NG3
Processor AMD Phenom II 945
Motherboard DFI DK 790gx LP JR
Cooling Scythe Mugen II
Memory G.SKILL Trident 4GB DDR3 1600
Video Card(s) EVGA GTX260 Core216 55nm
Storage Western Digital 80gb Vraptor
Display(s) Westinghouse 19" LCD
Case Antec Mini P180 Black
Audio Device(s) Onboard :/
Power Supply Antec Signature Series 650w
Software Vista x64 SP2
Eh, the only thing I can weigh in on with the F@H deal is that if Nvidia cards compromise gaming performance for points, and ATI produces a faster card for the same money, I would go for the ATI card. If they can keep putting out powerful cards that just happen to be good at folding, that's great, and I applaud them. If nothing else, why not develop a relatively affordable GPU specifically for F@H that doesn't run up the power bill to a ridiculous level like running a GTX 260 24/7, and then concentrate on gaming with a different card? Then you'd have the option of grabbing the GTX for gaming and occasional folding, or just the folding card for someone who doesn't game at all, for whom the GTX would be a waste.

There are probably a million reasons why that wouldn't work in the market today, but it's a thought for some of us who hesitate due to the power bill it could run up, or due to being restricted to just Nvidia cards.
 
Joined
Feb 21, 2008
A powerful GPU is also a powerful Folding@home card, and a weak Folding@home card is also a weak GPU... the properties that make a good GPU also make it good for folding, if that makes sense. Thinking they are separate things, and that one might compromise graphics performance, is a non-issue, so don't think it will cause a problem.

Provided that software is written to utilize the GPU for folding in the first place. Which, in Nvidia's case, it is.
 

FordGT90Concept

I think you don't understand what F@H is. No company can build a fast enough supercomputer; Nvidia, by pushing GPGPU and F@H, and by teaching GPGPU in universities, is doing much more than a farm of supercomputers could.
GPGPU is fundamentally wrong. Intel's approach is correct in that there's no reason GPUs can't handle x86 instructions. So don't teach proprietary GPGPU code in school for NVIDIA's profit; teach students how to make GPUs effective at meeting D3D and x86 requirements.


Just for an easy comparison: the fastest supercomputer, Roadrunner, has 12,960 IBM PowerXCell 8i CPUs and 6,480 AMD Opteron dual-core processors, with a peak of 1.7 petaflops. Looking at the statistics on these forums, I find there are 38,933 members. If only half the members contributed to F@H at the same time, there would be much more power there. Now extrapolate to the world...
Those computers have extremely high-speed interconnects, which allows them to reach those phenomenal numbers; moreover, they aren't overclocked, and they are monitored 24/7 for problems, making them highly reliable. Lots of people here have their computers overclocked, which breeds incorrect results. If that were not enough, GPUs are far more likely to produce bad results than CPUs.

There obviously are inherent problems with Internet-based supercomputing, and there's also a whole lot of x-factors that ruin its potential for science (especially machine stability). Folding especially is very vulnerable to error, because each completed piece of work is built upon by another and another. For instance, how do we know that the exit tunnel is not the result of an uncaught computational error early on?


A powerful GPU is also a powerful Folding@home card, and a weak Folding@home card is also a weak GPU... the properties that make a good GPU also make it good for folding, if that makes sense. Thinking they are separate things, and that one might compromise graphics performance, is a non-issue, so don't think it will cause a problem.
As was just stated, a 4850 is just as good as a 9800 GTX in terms of gaming, but because of the 9800 GTX's architecture, it is much faster at folding. This is mostly because NVIDIA uses far more transistors, which means higher power consumption, while AMD takes a smarter-is-better approach using far fewer transistors.

And yes, prioritization on GPUs leaves much to be desired. I recall trying to play Mass Effect while the GPU client was folding, and it was unplayable. That is a major issue for everyone who buys cards to game.
 
Last edited:

DarkMatter

GPGPU is fundamentally wrong. Intel's approach is correct in that there's no reason GPUs can't handle x86 instructions. So don't teach proprietary GPGPU code in school for NVIDIA's profit; teach students how to make GPUs effective at meeting D3D and x86 requirements.

95% of making effective GPGPU code is knowing parallel computing; the rest is the language itself, so they are indeed doing something well. Since Nvidia is on the OpenCL board, they are teaching that too, so don't worry; as said, that's only the 5%. General computing is no different in that way: 95% of knowing how to program nowadays is knowing how to program with objects. If you know how to program in C++, for example, you can program in the rest.

The same applies to x86. The difficulty lies in making the code highly parallel. x86 is NOT designed for parallelism, and making a highly parallel program in x86 is as difficult as doing it in GPGPU languages.

This, BTW, was said by Stanford guys (maybe even this same guy) BEFORE Nvidia had any relations with them, back when GPGPU was nothing more than Brook running on X1900 ATI cards, so...

Those computers have extremely high-speed interconnects, which allows them to reach those phenomenal numbers; moreover, they aren't overclocked, and they are monitored 24/7 for problems, making them highly reliable. Lots of people here have their computers overclocked, which breeds incorrect results. If that were not enough, GPUs are far more likely to produce bad results than CPUs.

There obviously are inherent problems with Internet-based supercomputing, and there's also a whole lot of x-factors that ruin its potential for science (especially machine stability). Folding especially is very vulnerable to error, because each completed piece of work is built upon by another and another. For instance, how do we know that the exit tunnel is not the result of an uncaught computational error early on?

False. GPGPU is no more prone to errors than supercomputers are; they double-check that the data is correct in the algorithms. Even if that takes more computing time, reducing efficiency, because the sheer computing power of F@H is like 1000 times that of a supercomputer, that means squat.

A GPU does not make more errors than a CPU anyway. And errors resulting from OC yield highly unexpected results that are easy to detect.

Anyway, F@H is SCIENCE. Do you honestly believe they only send each work unit to a single person?? They have 1000s of them, and they know which ones are good and which are not. :laugh:
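The redundancy argument can be illustrated with a toy model. This is a hypothetical sketch of majority-vote validation (`run_client`, `validate`, and the error model are all made up for illustration), not how Stanford's actual work servers operate:

```python
import random
from collections import Counter

def run_client(work_unit, error_rate, rng):
    """Hypothetical client: computes a result, occasionally silently corrupted."""
    result = round(sum(work_unit), 9)  # stand-in for a real folding computation
    if rng.random() < error_rate:
        result += 0.001  # small silent error (e.g. from an unstable overclock)
    return result

def validate(work_unit, n_clients=5, error_rate=0.2, seed=42):
    """Send the same work unit to several clients and accept the majority result."""
    rng = random.Random(seed)
    results = [run_client(work_unit, error_rate, rng) for _ in range(n_clients)]
    value, votes = Counter(results).most_common(1)[0]
    return value, votes

value, votes = validate([0.1, 0.2, 0.3], error_rate=0.0)  # an error-free fleet agrees
```

The design point: as long as corrupted results are a minority, the majority value survives, at the cost of computing each work unit several times.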
 

DarkMatter

As was just stated, a 4850 is just as good as a 9800 GTX in terms of gaming, but because of the 9800 GTX's architecture, it is much faster at folding. This is mostly because NVIDIA uses far more transistors, which means higher power consumption, while AMD takes a smarter-is-better approach using far fewer transistors.

And yes, prioritization on GPUs leaves much to be desired. I recall trying to play Mass Effect while the GPU client was folding, and it was unplayable. That is a major issue for everyone who buys cards to game.

G92 (9800 GTX) has far fewer transistors than RV770 (HD 4850), FYI. And the 55nm G92b variant is significantly smaller, too: 230 mm² vs. 260 mm².

Of course folding at the same time reduces performance, but the fact that GPGPU exists doesn't make the card slower. :laugh:

Next stupid claim??
 

FordGT90Concept

The same applies to x86. The difficulty lies in making the code highly parallel. x86 is NOT designed for parallelism, and making a highly parallel program in x86 is as difficult as doing it in GPGPU languages.
Intel is addressing that.


False. GPGPU is no more prone to errors than supercomputers are; they double-check that the data is correct in the algorithms. Even if that takes more computing time, reducing efficiency, because the sheer computing power of F@H is like 1000 times that of a supercomputer, that means squat.

A GPU does not make more errors than a CPU anyway. And errors resulting from OC yield highly unexpected results that are easy to detect.

Anyway, F@H is SCIENCE. Do you honestly believe they only send each work unit to a single person?? They have 1000s of them, and they know which ones are good and which are not. :laugh:
F@H doesn't double-check results.

What happens when a CPU errors? BSOD.
What happens when a GPU errors? An artifact.

Which is fatal, and which isn't? CPUs by design are meant to be precision instruments. One little failure and all goes to waste. GPUs, though, can keep working through multiple minor failures.

I got no indication from them that any given piece of work is completed more than once for the sake of validation.


No, errors aren't always easy to catch.
Float 2: 00000000000000000000000001000000
Float 4: 00000000000000001000000001000000

If the 17th bit got stuck, every subsequent calculation would be off by a minute amount. For instance:
Should be 2.000061: 00000000000000010000000001000000
Got 4.0001221: 00000000000000011000000001000000

Considering F@H relies on a lot of multiplication, that alone could create your "exit tunnel."
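The stuck-bit scenario is easy to demonstrate: force one bit of an IEEE-754 single-precision value to 1 and see how far the result drifts. This is a sketch using plain big-endian bit positions (the bit strings quoted above appear to be little-endian byte dumps, so the positions here won't line up with them exactly):

```python
import struct

def float_bits(x):
    """32-bit big-endian bit string of an IEEE-754 single-precision float."""
    return format(struct.unpack(">I", struct.pack(">f", x))[0], "032b")

def with_stuck_bit(x, i):
    """Return x with bit i (0 = sign bit, 31 = lowest mantissa bit) forced to 1."""
    n = struct.unpack(">I", struct.pack(">f", x))[0]
    n |= 1 << (31 - i)
    return struct.unpack(">f", struct.pack(">I", n))[0]

x = 2.0
tiny = with_stuck_bit(x, 30)  # low mantissa bit stuck: off by roughly 5e-7
big = with_stuck_bit(x, 9)    # top mantissa bit stuck: 2.0 becomes 3.0
```

A stuck low mantissa bit is exactly the hard case: the error is too small to notice by eye, yet it compounds through every later multiplication.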


G92 (9800 GTX) has much less transistors than RV770 (HD4850) FYI. And the 55nm G92b variant is significantly enough smaller too 230mm^2 vs 260mm^2.
9800 GTX = 754 million transistors
4850 = 666 million transistors

Process doesn't matter except in physical dimensions. The transistor count only changes with architectural changes.


Of course folding at the same time reduces performance, but the fact that GPGPU exists doesn' make the card slower. :laugh:
It's poorly executed and as a result, CUDA is not for gamers in the slightest.
 
Last edited:

DarkMatter

Double post. Sorry
 
Last edited:

DarkMatter

Intel is addressing that.



F@H doesn't double check results.

What happens when a CPU errors? BSOD
What happens when a GPU errors? Artifact

Which is fatal, which isn't? CPUs by design are meant to be precision instruments. One little failure and all goes to waste. GPUs though, they can work with multiple minor failures.

I got no indication from them that any given peice of work is completed more than once for the sake of validation.


No, errors aren't always easy to catch.
Float 2: 00000000000000000000000001000000
Float 4: 00000000000000001000000001000000

If the 17th digit got stuck, every subsequent calculation will be off by a minute amount. For instance:
Should be 2.000061: 00000000000000010000000001000000
Got: 4.0001221: 00000000000000011000000001000000

Considering F@H relies on a lot of multiplication, that alone could create your "exit tunnel."
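The stuck-bit effect the bit patterns above gesture at can be sketched in a few lines. This is only an illustration of single-bit corruption in an IEEE 754 single-precision float; the `flip_bit` helper is ours, not anything from F@H:

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Return `value` with one bit of its 32-bit IEEE 754 encoding flipped."""
    (bits,) = struct.unpack("<I", struct.pack("<f", value))
    bits ^= 1 << bit
    (corrupted,) = struct.unpack("<f", struct.pack("<I", bits))
    return corrupted

# A stuck mantissa bit shifts the value slightly; a stuck exponent bit
# doubles (or halves) it outright -- roughly the 2.0 -> 4.0 case above.
print(flip_bit(2.0, 0))   # lowest mantissa bit: a tiny offset from 2.0
print(flip_bit(2.0, 23))  # lowest exponent bit: 4.0
```

Chained through many multiplications, either kind of error compounds, which is the point being made.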

It's SCIENCE, so of course they have multiple instances of the same problem. They don't have to say that because they are first and foremost scientists working for scientists.

EDIT: Anyway, I don't know about you, but every math program I wrote at school double-checked its results by redundancy; I was taught to do it that way. I expect scientists working to cure cancer received an education at least as good as mine.
EDIT: Those examples are, in fact, easy-to-spot errors. Especially in F@H. If you are expecting the molecule to be around the 2 range (you know roughly what to expect, but it's science, you want to know EXACTLY where it will be) and you get 4, well, you don't need a degree to see the difference.
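The redundancy check being described (compute the same work more than once, accept only agreeing answers) can be sketched like this. `validate_work_unit` and its tolerance are our own illustration, not anything from the actual F@H client or servers:

```python
def validate_work_unit(results, rel_tol=1e-6):
    """Accept a result only if independently computed copies agree.

    `results`: values for the same work unit computed by independent
    clients (a hypothetical setup, for illustration only).
    Returns the agreed value, or None if any copy disagrees.
    """
    reference = results[0]
    for r in results[1:]:
        if abs(r - reference) > rel_tol * max(abs(reference), 1.0):
            return None  # disagreement: discard and reassign the unit
    return reference

print(validate_work_unit([2.000061, 2.000061]))   # copies agree
print(validate_work_unit([2.000061, 4.0001221]))  # stuck bit: rejected
```

The point is that a stuck-bit error like the 2-vs-4 example above is trivially caught the moment a second, independent computation exists to compare against.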


WRONG. RV670 has 666 m transistors. RV770 has 956 m transistors. source
source

Don't contradict established facts without double-checking your info, PLEASE.

Process doesn't matter except in physical dimensions. The transistor count only changes with architectural changes.

So now you are going to teach me that?? :laugh::laugh:

It's poorly executed and as a result, CUDA is not for gamers in the slightest.

Of course it's not for games (except for PhysX). But it doesn't interfere at all with game performance.
 
Last edited:

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,259 (4.65/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
It's SCIENCE, so of course they have multiple instances of the same problem. They don't have to say that because they are first and foremost scientists working for scientists.
Pande is a chemical biologist. How much he cares about computational accuracy remains to be seen.


WRONG. RV670 has 666 m transistors. RV770 has 956 m transistors. source
source
So, Tom's Hardware is wrong. That doesn't change the fact that F@H prefers NVIDIA's architecture.


Of course it's not for games (except for PhysX). But it doesn't interfere at all with game performance. Intel's Larrabee, x86 or not, won't help at games either. In fact there's no worse example than Larrabee for what you are trying to say. There wouldn't be a GPU worse at gaming than Larrabee.
That makes a whole lot of no sense so I'll respond to what I think you're saying.

-NVIDIA GeForce is designed specifically for Direct3D (or was).
-CUDA was intended to offload any high-FLOP transaction from the CPU. It doesn't matter what the work actually consists of.
-CUDA interferes enormously with game performance because it's horrible at prioritizing threads.
-Larrabee is a graphics card--but not really. It is simply designed to be a high FLOP, general purpose card that can be used for graphics among other things. Larrabee is an x86 approach to the high-FLOP needs (programmable cores).

Let's just say CUDA is riddled with a lot of problems that Larrabee is very likely to address. CUDA is a short-term answer to a long-term problem.
 

DarkMatter

Pande is a chemical biologist. How much he cares about computational accuracy remains to be seen.

I can see your distrust of science, but I don't share it. Scientists know how to do their work; assuming they don't is plainly stupid.


So, Tom's Hardware is wrong. And? That doesn't change the fact that F@H prefers NVIDIA's architecture.

Yeah, it prefers Nvidia's architecture because Nvidia's GPUs were designed with GPGPU in mind. I still see Nvidia on top in most games. So?

That makes a whole lot of no sense so I'll respond to what I think you're saying.

-NVIDIA GeForce is designed specifically for Direct3D (or was).
-CUDA was intended to offload any high-FLOP transaction from the CPU. It doesn't matter what the work actually consists of.
-CUDA interferes enormously with game performance because it's horrible at prioritizing threads.
-Larrabee is a graphics card--but not really. It is simply designed to be a high FLOP, general purpose card that can be used for graphics among other things. Larrabee is an x86 approach to the high-FLOP needs (programmable cores).

Let's just say CUDA is riddled with a lot of problems that Larrabee is very likely to address. CUDA is a short-term answer to a long-term problem.

- Nope, they are designed for GPGPU too. Oh, and strictly speaking, I don't really know if there was ever a time when Nvidia GPUs were focused on D3D. They've been more focused on OpenGL, except maybe the last couple of generations.
- Yes and I don't see where you're going with that.
- Unless you want to use CUDA for PhysX, CUDA doesn't interfere with gaming AT ALL. And in any case, Nvidia has hired this guy to fix those kinds of problems. It's going to move to MIMD cores too, so that is going to be completely fixed in the next generation of GPUs.
- Yes, exactly.

Many people think that GPGPU is the BEST answer for that, and not all of them work for Nvidia. In fact, many work for Ati.
 
Joined
Apr 7, 2008
Messages
633 (0.11/day)
Location
Australia
System Name _Speedforce_ (Successor to Strike-X, 4LI3NBR33D-H, Core-iH7 & Nemesis-H)
Processor Intel Core i9 7980XE (Lapped) @ 5.2Ghz With XSPC Raystorm (Lapped)
Motherboard Asus Rampage VI Extreme (XSPC Watercooled) - Custom Heatsinks (Lapped)
Cooling XSPC Custom Water Cooling + Custom Air Cooling (From Delta 220's TFB1212GHE to Spal 30101504&5)
Memory 8x 8Gb G.Skill Trident Z RGB 4266MHz @ 4667Mhz (2x F4-4266C17Q-32GTZR)
Video Card(s) 3x Asus GTX1080 Ti (Lapped) With Customised EK Waterblock (Lapped) + Custom heatsinks (Lapped)
Storage 1x Samsung 970 EVO 2TB - 2280 (Hyper M.2 x16 Card), 7x Samsung 860 Pro 4Tb
Display(s) 6x Asus ROG Swift PG348Q
Case Aerocool Strike X (Modified)
Audio Device(s) Creative Sound BlasterX AE-5 & Aurvana XFi Headphones
Power Supply 2x Corsair AX1500i With Custom Sheilding, Custom Switching Unit. Braided Cables.
Mouse Razer Copperhead + R.A.T 9
Keyboard Ideazon Zboard + Optimus Maximus. Logitech G13.
Software w10 Pro x64.
Benchmark Scores pppft, gotta see it to believe it. . .
I don't think we should shove aside the important factors here:
For starters, anyone's efforts to do humanity a favour, especially of this magnitude, should be respected regardless of beliefs, unless you wish the Terran race extinction of course. But that's because good and evil do exist whether religion does or not.

. . . . If CUDA doesn't increase f.p.s. and doesn't decrease it, then that's even.
. . . . If CUDA does ANYTHING, then that's a plus.

Darkmatter, thank you for explaining to those out there that can't comprehend, but unfortunately I think it's fallen on blind hearts . . . Oh wait a minute, all of our hearts are blind . . . Maybe I meant cold-hearted.

Anyways, I'm going to go take out my graphics cards and play CellFactor @ 60+ f.p.s. with just the Asus Ageia P1.

Edit: Oh yeah, almost forgot. I want to know how much Bill and David are on p.a. I bet the ex-Nvidia staff would like to know too.
I don't think either Bill or David has much more to offer Nvidia, and I don't think they will bother either. Good luck to the green team.
 
Last edited:

FordGT90Concept

If CUDA doesn't increase f.p.s. and doesn't decrease it, then that's even.
If something is using CUDA while a game is running, it hurts the game's FPS badly.
 

DarkMatter

If something is using CUDA while a game is running, it hurts the game's FPS badly.

But NOTHING forces you to use CUDA at the same time, that's the point. When you are gaming, disable F@H, of course!! But when you are not using the GPU for anything you can fold, and with GPU2 and an Nvidia card you can fold MORE. It's simple.

And if you are talking about PhysX, keep in mind that the game is doing more, so you get more for more, not the same while requiring more as you are suggesting. If a time comes when GPGPU is used for, say, AI, then the same will be true: you will get more than what the CPU alone can do while maintaining more frames too, because without the GPU it would be unable to provide enough frames with that kind of detail. That's the case with PhysX and that will be the case with any GPGPU code used in games.
 

FordGT90Concept

I can see your distrust of science, but I don't share it. Scientists know how to do their work; assuming they don't is plainly stupid.
Just because you can use a computer doesn't mean you understand how it works. Likewise, just because Pande wants results for science doesn't mean he knows the best way to go about them from a computing standpoint.


Many people think that GPGPU is the BEST answer for that, and they all of them don't work for Nvidia. In fact, many work for Ati.
All I know is that the line between GPU and not-GPU is going away. There's more focus on the FLOPs; it doesn't matter where they come from in the computer (the CPU, the GPU, the PPU, etc.).

But then again, FLOPs for mainstream users aren't that important (just for their budgeting). It is kind of awkward to see so much focus on less than 10% of a market. Everyone (AMD, Intel, Sony, IBM, etc.) is pushing for changes to the FPU when the ALU needs work too.


But NOTHING forces you to use CUDA at the same time, that's the point. When you are gaming, disable F@H, of course!! But when you are not using the GPU for anything you can fold, and with GPU2 and an Nvidia card you can fold MORE. It's simple.
F@H should be smart enough to back off when the GPU is in use (the equivalent of low priority on x86 CPUs). Until they fix that, it's useless to gamers.
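The "back off when the GPU is in use" behaviour being asked for amounts to a loop like the one below. `gpu_busy` and `run_chunk` are hypothetical hooks (a real client would have to query the driver), so this is only a sketch of the scheduling idea:

```python
import time

def fold_cooperatively(gpu_busy, run_chunk, chunks, idle_poll=1.0):
    """Process `chunks` slices of folding work, yielding to games.

    gpu_busy:  callable, True while a 3D app owns the GPU (hypothetical hook)
    run_chunk: callable that computes one small slice of a work unit
    idle_poll: seconds to wait before re-checking a busy GPU
    """
    done = 0
    while done < chunks:
        if gpu_busy():
            time.sleep(idle_poll)  # back off; the game keeps its FPS
        else:
            run_chunk()            # GPU is idle: do low-priority folding
            done += 1
    return done
```

With a `gpu_busy` that always returns False, the loop just processes every chunk back to back; the win is that folding throughput is traded away only while a game actually needs the GPU.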


Regardless, I still don't support F@H. Their priority is in results, not accurate results.


Physx is useless.


The problem with GPGPU is that the GPU is naturally a purpose-built device: binary -> display. Any attempt to multitask it leads to severe consequences because its primary purpose is getting encroached upon. The only way to overcome that is multiple GPUs, but then they really aren't GPUs at all because they aren't working on graphics. This loops back to what I said earlier in this post: the GPU is going away.
 