NVIDIA Names Stanford's Bill Dally Chief Scientist, VP of Research

Discussion in 'News' started by btarunr, Jan 28, 2009.

  1. btarunr

    NVIDIA Corporation today announced that Bill Dally, the chairman of Stanford University’s computer science department, will join the company as Chief Scientist and Vice President of NVIDIA Research. The company also announced that longtime Chief Scientist David Kirk has been appointed “NVIDIA Fellow.”

    “I am thrilled to welcome Bill to NVIDIA at such a pivotal time for our company,” said Jen-Hsun Huang, president and CEO, NVIDIA. “His pioneering work in stream processors at Stanford greatly influenced the work we are doing at NVIDIA today. As one of the world’s founding visionaries in parallel computing, he shares our passion for the GPU’s evolution into a general purpose parallel processor and how it is increasingly becoming the soul of the new PC. His reputation as an innovator in our industry is unrivaled. It is truly an honor to have a legend like Bill in our company.”


    “I would also like to congratulate David Kirk for the enormous impact he has had at NVIDIA. David has worn many hats over the years – from product architecture to chief evangelist. His technical and strategic insight has helped us enable an entire new world of visual computing. We will all continue to benefit from his valuable contributions.”

    About Bill Dally
    At Stanford University, Dally has been a Professor of Computer Science since 1997 and Chairman of the Computer Science Department since 2005. Dally and his team developed the system architecture, network architecture, signaling, routing and synchronization technology that is found in most large parallel computers today. At Caltech he designed the MOSSIM Simulation Engine and the Torus Routing chip which pioneered “wormhole” routing and virtual-channel flow control. His group at MIT built the J-Machine and the M-Machine, experimental parallel computer systems that pioneered the separation of mechanism from programming models and demonstrated very low overhead synchronization and communication mechanisms. He is a cofounder of Velio Communications and Stream Processors, Inc. Dally is a Fellow of the American Academy of Arts & Sciences. He is also a Fellow of the IEEE and the ACM and has received the IEEE Seymour Cray Award and the ACM Maurice Wilkes award. He has published over 200 papers, holds over 50 issued patents, and is an author of the textbooks, Digital Systems Engineering and Principles and Practices of Interconnection Networks.

    About David Kirk
    David Kirk has been with NVIDIA since January 1997. His contribution includes leading NVIDIA graphics technology development for today’s most popular consumer entertainment platforms. In 2006, Dr. Kirk was elected to the National Academy of Engineering (NAE) for his role in bringing high-performance graphics to personal computers. Election to the NAE is among the highest professional distinctions awarded in engineering. In 2002, Dr. Kirk received the SIGGRAPH Computer Graphics Achievement Award for his role in bringing high-performance computer graphics systems to the mass market. From 1993 to 1996, Dr. Kirk was Chief Scientist, Head of Technology for Crystal Dynamics, a video game manufacturing company. From 1989 to 1991, Dr. Kirk was an engineer for the Apollo Systems Division of Hewlett-Packard Company. Dr. Kirk is the inventor of 50 patents and patent applications relating to graphics design and has published more than 50 articles on graphics technology. Dr. Kirk holds B.S. and M.S. degrees in Mechanical Engineering from the Massachusetts Institute of Technology and M.S. and Ph.D. degrees in Computer Science from the California Institute of Technology.

    Source: NVIDIA
     
  2. wolf

    After reading all of that, I think: awesome.

    This guy seems like a great mind to tap for this kind of product.

    It will be good to see how his input affects NVIDIA's products and/or marketing.
     
  3. FordGT90Concept

    I wonder how much he has to do with NVIDIA being in cahoots with the Folding@home project. If he is the primary driving force behind that, I'm done with NVIDIA. I buy graphics cards for games, not pet projects--especially corporate-sponsored projects.
     
  4. Darkrealms

    I would bet a lot, but then again, if you think about it, his goals were met by partnering with NVIDIA to make folding faster. My folding has been crazy with my GTX 260.

    This is hopefully good news for NVIDIA and better products for us : )
     
  5. FordGT90Concept

    What this guy is liable to do is remove NVIDIA from the gaming market altogether by striving to increase folding performance. It's already happening, too, seeing how many people build computers with 2+ NVIDIA cards in them just for folding. I really don't like where NVIDIA is going with this, hence my comment about NVIDIA potentially losing a customer.

    I'm just glad Intel is getting ready to enter the market, with NVIDIA perhaps leaving.
     
  6. DaedalusHelios

    Yeah, I really don't want to see the cure for cancer or new treatments for cancer patients if it means compromising my FPS (frames per second). :laugh:

    In all seriousness: I think philanthropic pursuits are fine. In fact, they should be encouraged, considering cancer takes some of our loved ones away from us every passing day. Unless gaming is somehow more important. :wtf:

    I think NVIDIA is showing it can be a company with heart while trying to be a great graphics card company at the same time. Nothing wrong with that. Try not to be so negative.
     
  7. FordGT90Concept

    What they are doing is not philanthropic. What they're doing is capitalizing on philanthropy. If NVIDIA were actually being philanthropic here, they would design a card specifically for folding and build a large farm just to donate to the project. They aren't doing that.

    They play the middleman: we've got these cards which are supposed to be great for gaming, but you can also use them to simulate protein folding for Stanford. The more you buy and the more you run them, the higher your score. What, exactly, is NVIDIA doing that is philanthropic, other than facilitating the movement of more product?
     
  8. DaedalusHelios

    I see it as no different from a solar panel factory lowering energy dependence on coal. To act like all philanthropy cannot turn a profit, or is evil if it does, is ridiculous. Profit is the lifeblood of capitalism; the point is choosing to go into something that benefits us all instead of pure self-indulgence, as a 100% gaming product would be.

    If anything, it gives gamers a chance to give a little back to the world. And if you think about it, what's a more noble goal than trying to make the world a better place than it was before, by ending suffering or giving more hope to those in need of a cure? Giving people hope and an outlet to make a difference in a positive way is never the wrong thing to do. :toast:
     
  9. FordGT90Concept

    I've been down this road before and it's practically arguing religion ("but it cures cancer!!!!"). There's no sense in continuing.

    Cancer is nature's way of saying you've outlived your welcome.
     
  10. DaedalusHelios


    Well, some believe life is more important than to just let it slip away. Personally, I like living. :cool:
     
  11. DarkMatter New Member

    I think you don't understand what F@H is. No company can build a fast enough supercomputer; by pushing GPGPU and F@H, and by teaching GPGPU in universities, NVIDIA is doing much more than a farm of supercomputers could.
    Quote from F@H FAQ:

    EDIT: Just for an easy comparison: the fastest supercomputer, Roadrunner, has 12,960 IBM PowerXCell 8i CPUs and 6,480 AMD Opteron dual-core processors, with a peak of 1.7 petaflops. Now, looking at the statistics on these forums, I find there are 38,933 members. If only half the members contributed to F@H at the same time, there would be much more power there. Now extrapolate to the world...
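
    As a rough back-of-the-envelope check (a sketch only: the GFLOPS-per-card rate below is an assumed round number for illustration, not a measured figure):

    Code:
    # Rough aggregate-throughput comparison for the paragraph above.
    # ASSUMED_GFLOPS_PER_GPU is an assumption, not a benchmark result.
    ROADRUNNER_PEAK_PFLOPS = 1.7        # peak figure quoted above
    FORUM_MEMBERS = 38933               # member count quoted above
    contributors = FORUM_MEMBERS // 2   # "if only half the members contributed"
    ASSUMED_GFLOPS_PER_GPU = 100        # assumed sustained rate per 2009-era GPU

    aggregate_pflops = contributors * ASSUMED_GFLOPS_PER_GPU / 1e6
    print("%.2f PFLOPS from one forum vs %.1f PFLOPS peak"
          % (aggregate_pflops, ROADRUNNER_PEAK_PFLOPS))
    # ~1.95 PFLOPS -- roughly Roadrunner-scale before even extrapolating to the world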

    WTF??!!
     
  12. ascstinger

    Eh, the only thing I can weigh in on with the F@H deal is this: if NVIDIA cards compromise gaming performance for points, and ATI produces a faster card for the same money, I would go for the ATI card. If they can keep putting out powerful cards that just happen to be good at folding, that's great and I applaud them. If nothing else, why not develop a relatively affordable GPU specifically for F@H that doesn't run up the power bill to a ridiculous level like running a GTX 260 24/7, and then concentrate on gaming with a different card? Then you would have the option of grabbing just the GTX for gaming and occasional folding, both cards for gaming plus low-power folding, or just the folding card if you don't game at all and the GTX would be a waste.

    There are probably a million reasons why that wouldn't work in the market today, but it's a thought for some of us who hesitate because of the power bill it could run up, or because of being restricted to NVIDIA cards only.
     
  13. DaedalusHelios

    A powerful GPU is also a powerful Folding@home card, and a weak Folding@home card is also a weak GPU: the properties that make a good GPU also make it good at folding, if that makes sense. Thinking they are separate things, and that folding might compromise graphics performance, is a non-issue, so don't expect it to cause a problem.

    Provided that software is written to utilize the GPU for folding in the first place, which in NVIDIA's case it is.
     
  14. FordGT90Concept

    GPGPU is fundamentally wrong. Intel's approach is correct in that there's no reason GPUs can't handle x86 instructions. So, don't teach proprietary GPGPU code in school for NVIDIA's profit; teach students how to make GPUs effective at meeting D3D and x86 requirements.


    Those computers have extremely high-speed interconnects, which allows them to reach those phenomenal numbers; moreover, they aren't overclocked, and they are monitored 24/7 for problems, making them highly reliable. Lots of people here have their computers overclocked, which breeds incorrect results. If that were not enough, GPUs are far more likely to produce bad results than CPUs.

    There obviously are inherent problems with Internet-based supercomputing, and there's also a whole lot of x-factors that ruin its potential for science (especially machine stability). Folding especially is very vulnerable to error, because every set of completed work is expanded by another and another. For instance, how do we know that the exit tunnel is not the result of an uncaught computational error early on?


    As was just stated, a 4850 is just as good as a 9800 GTX in terms of gaming, but because of its architecture, the 9800 GTX is much faster at folding. This is mostly because NVIDIA uses far more transistors, which means higher power consumption, while AMD takes a smarter-is-better approach using far fewer transistors.

    And yes, prioritizing on GPUs leaves much to be desired. I recall trying to play Mass Effect while the GPU client was folding, and it was unplayable. That is a major issue for everyone who buys cards to game.
     
  15. DarkMatter New Member

    95% of making effective GPGPU code work is knowing parallel computing; the rest is the language itself, so they are indeed teaching something worthwhile. And ever since NVIDIA joined the OpenCL board they have been teaching that too, so don't worry; as I said, the language is only the 5%. General computing is no different in that respect: 95% of knowing how to program nowadays is knowing how to program with objects. If you know how to program in C++, for example, you can program in the rest.

    The same applies to x86. The difficulty lies in making the code highly parallel. x86 is NOT designed for parallelism, and writing a highly parallel program in x86 is as difficult as doing it in GPGPU languages.
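
    To illustrate (a minimal sketch in Python rather than a GPGPU language; score_chunk and the chunking are made up for the example): once the work is split into independent pieces, switching from serial to parallel execution is almost mechanical. Splitting the problem well is the hard part.

    Code:
    # Minimal sketch: the same decomposition runs serially or in parallel.
    from concurrent.futures import ProcessPoolExecutor

    def score_chunk(chunk):
        # Stand-in for one independent piece of work (e.g., one simulation slice).
        return sum(x * x for x in chunk)

    CHUNKS = [range(i * 1000, (i + 1) * 1000) for i in range(8)]

    serial_total = sum(score_chunk(c) for c in CHUNKS)       # one worker

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:                  # many workers, same code
            parallel_total = sum(pool.map(score_chunk, CHUNKS))
        assert parallel_total == serial_total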

    This, BTW, was said by the Stanford guys (maybe even this same guy) BEFORE NVIDIA had any relationship with them, back when GPGPU was nothing more than Brook running on X1900 ATI cards, so...

    False. GPGPU is no more prone to errors than supercomputers are, and the algorithms double-check that the data is correct. Even if that takes more computing time and reduces efficiency, it means squat, because the sheer computing power of F@H is something like 1000 times that of a supercomputer.

    A GPU does not make more errors than a CPU anyway. And errors resulting from overclocking yield highly unexpected results that are easy to detect.

    Anyway, F@H is SCIENCE. Do you honestly believe they only send each work unit to a single person?? They have thousands of them, and they know which results are good and which are not. :laugh:
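
    Conceptually, the redundancy check does not need to be anything fancier than this (my own sketch of the idea, not Stanford's actual validation code; the function and field names are made up):

    Code:
    # Sketch: accept a work unit's result only if enough independent clients agree.
    from collections import Counter

    def validate_work_unit(results, min_agreement=2):
        """results maps client id -> the (quantized) value returned for the same unit."""
        value, votes = Counter(results.values()).most_common(1)[0]
        return value if votes >= min_agreement else None

    # Two honest clients agree; one unstable overclocked client returns garbage.
    print(validate_work_unit({"client_a": 1.25, "client_b": 1.25, "client_c": 9.75}))
    # -> 1.25; the outlier is simply outvoted and the unit can be reissued if needed.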
     
  16. DarkMatter New Member

    G92 (9800 GTX) has far fewer transistors than RV770 (HD 4850), FYI. And the 55 nm G92b variant is significantly smaller too: 230 mm^2 vs. 260 mm^2.

    Of course folding at the same time reduces performance, but the mere existence of GPGPU doesn't make the card slower. :laugh:

    Next stupid claim??
     
  17. FordGT90Concept

    Intel is addressing that.


    F@H doesn't double check results.

    What happens when a CPU errors? BSOD
    What happens when a GPU errors? Artifact

    Which is fatal, which isn't? CPUs by design are meant to be precision instruments. One little failure and all goes to waste. GPUs though, they can work with multiple minor failures.

    I got no indication from them that any given piece of work is completed more than once for the sake of validation.


    No, errors aren't always easy to catch. (The bit patterns below are IEEE 754 single precision, bytes in little-endian memory order.)
    Float 2: 00000000 00000000 00000000 01000000
    Float 4: 00000000 00000000 10000000 01000000

    If the 17th digit got stuck, every subsequent calculation will be off. For instance:
    Should be 2.000061: 00000000 00000001 00000000 01000000
    Got: 4.0001221: 00000000 00000001 10000000 01000000

    Considering F@H relies on a lot of multiplication, that alone could create your "exit tunnel."
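
    Here is a quick sketch of what I mean, assuming plain 32-bit IEEE 754 floats (my own illustration, not actual F@H code): flipping the low exponent bit (the "17th digit" above) doubles the value, while flipping a low mantissa bit shifts it so little that nothing obviously looks wrong.

    Code:
    # Sketch: flip one bit of a float's 32-bit IEEE 754 representation.
    import struct

    def flip_bit(x, bit):
        """Return x with one bit of its single-precision encoding flipped."""
        (pattern,) = struct.unpack('<I', struct.pack('<f', x))
        return struct.unpack('<f', struct.pack('<I', pattern ^ (1 << bit)))[0]

    print(flip_bit(2.000061, 23))   # ~4.000122: the low exponent bit, the doubling shown above
    print(flip_bit(2.000061, 3))    # ~2.000063: a low mantissa bit, much easier to miss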


    9800 GTX = 754 million transistors
    4850 = 666 million transistors

    Process doesn't matter except in physical dimensions. The transistor count only changes with architectural changes.


    It's poorly executed and as a result, CUDA is not for gamers in the slightest.
     
  18. DarkMatter New Member

    Double post. Sorry
     
  19. DarkMatter New Member

    It's SCIENCE, so of course they have multiple instances of the same problem. They don't have to say so, because they are first and foremost scientists working for scientists.

    EDIT: Anyway, I don't know about you, but every math program I wrote at school double-checked its results by redundancy; I was taught to do it that way. I expect the scientists working to cure cancer received an education AT LEAST as good as mine.
    EDIT: Those examples are, in fact, easy-to-spot errors, especially in F@H. If you are expecting the molecule to be around the 2 range (you know roughly what to expect, but it's science: you want to know EXACTLY where it will be) and you get 4, well, you don't need a degree to see the difference.

    WRONG. RV670 has 666 million transistors. RV770 has 956 million transistors. source
    source

    Don't contradict established facts without double-checking your info, PLEASE.

    So now you are going to teach me that?? :laugh::laugh:

    Of course it's not for games (except for PhysX). But it doesn't interfere at all with game performance.
     
  20. FordGT90Concept

    Pande is a chemical biologist. How much he cares about computational accuracy remains to be seen.


    So, Tom's Hardware is wrong. That doesn't change the fact that F@H prefers NVIDIA's architecture.


    That makes a whole lot of no sense, so I'll respond to what I think you're saying.

    -NVIDIA GeForce is designed specifically for Direct3D (or was).
    -CUDA was intended to offload any high-FLOP work from the CPU. It doesn't matter what the work actually consists of.
    -CUDA interferes enormously with game performance because it's horrible at prioritizing threads.
    -Larrabee is a graphics card--but not really. It is simply designed to be a high-FLOP, general-purpose card that can be used for graphics among other things. Larrabee is an x86 approach to high-FLOP needs (programmable cores).

    Let's just say CUDA is riddled with a lot of problems that Larrabee is very likely to address. CUDA is a short-term answer to a long-term problem.
     
  21. DarkMatter New Member

    I can see your disbelief in science, but I don't condone it. Scientists know how to do their work; assuming they don't is plainly stupid.

    Yeah, it prefers NVIDIA's architecture because NVIDIA's GPUs were designed with GPGPU in mind. I still see NVIDIA on top in most games. So?

    - Nope, they are designed for GPGPU too. And strictly speaking, I don't know that there was ever a time when NVIDIA GPUs were focused on D3D; they have been more focused on OpenGL, except maybe for the last couple of generations.
    - Yes, and I don't see where you're going with that.
    - Unless you want to use CUDA for PhysX, CUDA doesn't interfere with gaming AT ALL. And in any case, NVIDIA has hired this guy to fix those kinds of problems. They are moving to MIMD cores too, so that issue is going to be completely fixed in the next generation of GPUs.
    - Yes, exactly.

    Many people think that GPGPU is the BEST answer for that, and not all of them work for NVIDIA. In fact, many work for ATI.
     
  22. Haytch

    I don't think we should shove aside the important factors here.
    For starters, anyone's effort to do humanity a favour, especially of this magnitude, should be respected regardless of beliefs, unless you wish the Terran race extinct, of course. Good and evil exist regardless of whether religion does.

    . . . . If CUDA doesn't increase FPS, nor decreases it, then that's a wash.
    . . . . If CUDA does ANYTHING, then that's a plus.

    DarkMatter, thank you for explaining it to those out there who can't comprehend, but unfortunately I think it has fallen on blind hearts... oh, wait a minute, all of our hearts are blind... maybe I meant cold-hearted.

    Anyway, I'm going to go take out my graphics cards and play CellFactor at 60+ FPS with just the Asus Ageia P1.

    Edit: Oh yeah, almost forgot: I want to know how much Bill and David are paid per annum. I bet the ex-NVIDIA staff would like to know too.
    I don't think either Bill or David has much more to offer NVIDIA, and I don't think they will bother either. Good luck to the green team.
     
  23. FordGT90Concept

    If something is using CUDA while a game is running, it hurts the game's FPS badly.
     
  24. DarkMatter New Member

    But NOTHING forces you to use CUDA at the same time; that's the point. When you are gaming, disable F@H, of course!! But when you are not using the GPU for anything, you can fold, and with GPU2 and an NVIDIA card you can fold MORE. It's simple.

    And if you are talking about PhysX, keep in mind that the game is doing more, so you get more for more, not the same for more as you are suggesting. If a time comes when GPGPU is used for, say, AI, the same will be true: you will get more than what the CPU alone can do, while maintaining more frames too, because without the GPU the CPU would be unable to provide enough frames with that kind of detail. That's the case with PhysX, and that will be the case with any GPGPU code used in games.
     
  25. FordGT90Concept

    Just because you can use a computer doesn't mean you understand how it works. Likewise, just because Pande wants results for science doesn't mean he knows the best way to go about getting them from a computing standpoint.


    All I know is that the line between GPU and non-GPU is going away. There's more focus on the FLOPs; it doesn't matter where they come from in the computer (the CPU, the GPU, the PPU, etc.).

    But then again, FLOPs for mainstream users aren't that important (just for their budgeting). It is kind of awkward to see so much focus on less than 10% of a market. Everyone (AMD, Intel, Sony, IBM, etc.) is pushing for changes to the FPU when the ALU needs work too.


    F@H should be smart enough to back off when the GPU is in use (the equivalent of low priority on x86 CPUs). Until they fix that, it's useless to gamers.
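
    A polling loop as simple as this is all I'm asking for (a sketch only: it assumes the pynvml / nvidia-ml-py bindings are available, and fold_one_step is a placeholder, not part of the real client):

    Code:
    # Sketch: pause folding whenever something else already has the GPU busy.
    import time
    import pynvml

    def fold_one_step():
        time.sleep(1)   # placeholder for one small slice of folding work

    def fold_politely(busy_threshold=20, poll_seconds=5):
        pynvml.nvmlInit()
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)
        try:
            while True:
                util = pynvml.nvmlDeviceGetUtilizationRates(handle).gpu
                if util > busy_threshold:
                    time.sleep(poll_seconds)   # a game has the GPU; back off
                else:
                    fold_one_step()
        finally:
            pynvml.nvmlShutdown()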


    Regardless, I still don't support F@H. Their priority is results, not accurate results.


    PhysX is useless.


    The problem with GPGPU is that the GPU is naturally a purpose-built device: binary in -> display out. Any attempt to multitask it leads to severe consequences, because its primary purpose is being encroached upon. The only way to overcome that is multiple GPUs, but then they really aren't GPUs at all, because they aren't working on graphics. This loops back to what I said earlier in this post: the GPU as a distinct device is going away.
     
