
Intel Larrabee Capable of 2 TFLOPs

Discussion in 'News' started by btarunr, Jul 6, 2008.

  1. eidairaman1

    eidairaman1

    Joined:
    Jul 2, 2007
    Messages:
    11,959 (4.66/day)
    Thanks Received:
    1,337
    The way they keep leaking info makes it seem they're expecting a lot from their first attempt.
  2. AphexDreamer

    AphexDreamer

    Joined:
    Jun 17, 2007
    Messages:
    7,078 (2.74/day)
    Thanks Received:
    912
    Location:
    C:\Program Files (x86)\Aphexdreamer\
    Does this mean that with Intel joining the GPU market, GPUs will become cheaper due to increased competition? Or not?
  3. hat

    hat Maximum Overclocker

    Joined:
    Nov 20, 2006
    Messages:
    16,869 (6.04/day)
    Thanks Received:
    2,060
    Location:
    Ohio
    Hopefully. Also, hopefully, with Intel swimming around in the pool, neither ATi nor Nvidia will be able to lazily build minor improvements on the same architecture... there's gonna be a lot of dunking heads underwater going on :)
  4. Morgoth

    Morgoth

    Joined:
    Aug 4, 2007
    Messages:
    3,795 (1.50/day)
    Thanks Received:
    250
    Location:
    Netherlands
    lol
  5. lemonadesoda

    lemonadesoda

    Joined:
    Aug 30, 2006
    Messages:
    6,242 (2.17/day)
    Thanks Received:
    963
    Contrast that story with Creative: they SUED the guy who was trying to push the Audigy further. Just goes to show there's far better management at Intel than at Creative.
    1c3d0g and WarEagleAU say thanks.
  6. W1zzard

    W1zzard Administrator Staff Member

    Joined:
    May 14, 2004
    Messages:
    14,632 (3.94/day)
    Thanks Received:
    11,363
    You won't be able to run existing programs on it and make them run 8479483 times faster. It's just like going from single core to dual core to quad core: almost no application scales from 1 core to 8 or more. Yes, there may be some exceptions (maybe 10 apps on the market right now in total?), but nothing that anyone here regularly uses.
    1c3d0g, WarEagleAU and eidairaman1 say thanks.
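    As a back-of-envelope illustration of why most software stops scaling well before 8+ cores, here is Amdahl's law in a few lines of Python (the 90%-parallel figure is an assumption chosen for illustration, not a measurement of any real application):

```python
# Amdahl's law: speedup is capped by the serial fraction of a program.
# The parallel fraction below is an illustrative assumption.

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Theoretical speedup on `cores` cores when only
    `parallel_fraction` of the work can run in parallel."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Even a program that is 90% parallel tops out quickly:
for n in (1, 2, 4, 8, 32):
    print(n, round(amdahl_speedup(0.9, n), 2))
# 32 cores buy less than an 8x speedup at 90% parallel code.
```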
  7. lemonadesoda

    lemonadesoda

    Joined:
    Aug 30, 2006
    Messages:
    6,242 (2.17/day)
    Thanks Received:
    963
    Not true per se. Why?

    1./ Larrabee has a much more powerful ALU than a GPU, meaning that for some tasks Larrabee can do in one instruction what might take a fat loop and lookup tables on a GPU.

    2./ The Larrabee ALU does both DP and SP floating point; a GPU SPE is SP only. Mimicking DP using SP requires a lot of looping and overhead.

    3./ SIMD on Larrabee is 512-bit or more. That's the same as 16 x 32-bit (SP) calculations at once. With 32 x86 cores in the Larrabee matrix, that is 16 lanes x 32 cores = 512 simultaneous SP calculations, i.e. the same as 512 shader processor units.

    The key and as yet unknown figure is how many clock cycles a SIMD operation takes compared to a GPU's SPE.
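    The lane arithmetic in point 3 can be checked in a few lines. The clock speed below is purely a placeholder (Intel has announced no clocks); it is chosen only to show how the headline 2 TFLOPS figure could fall out of 512 lanes:

```python
# Lane math from the post above. SIMD width and core count are the
# post's figures; the clock speed is a placeholder assumption.
simd_width_bits = 512
sp_float_bits = 32
cores = 32

lanes_per_core = simd_width_bits // sp_float_bits  # 16 SP lanes per core
total_sp_lanes = lanes_per_core * cores            # 16 * 32 = 512 lanes

# Peak SP throughput, assuming one fused multiply-add (2 flops)
# per lane per cycle at a hypothetical 2 GHz:
clock_hz = 2.0e9
peak_flops = total_sp_lanes * 2 * clock_hz
print(total_sp_lanes, peak_flops / 1e12)  # 512 lanes, ~2 TFLOPS
```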
  8. tigger

    tigger I'm the only one

    Joined:
    Mar 20, 2006
    Messages:
    10,177 (3.35/day)
    Thanks Received:
    1,397
    It's looking good for this Intel GPU. Remember how much money Intel has: loads for R&D, its own fabs, and it can write its own drivers. They also have a hell of a lot of processor manufacturing experience to fall back on.

    I hope Intel can sock it to the other two; it will be good for us in the long run, whether their first attempt is good or not.
  9. OnBoard

    OnBoard New Member

    Joined:
    Sep 16, 2006
    Messages:
    3,044 (1.06/day)
    Thanks Received:
    379
    Location:
    Finland
    Oh, I remember when a friend of mine had a Pentium 75 MHz overclocked to 90 MHz, and NFS (1) ran full screen! I had some 486 back then (edit: probably a 486SX 33 MHz) and could only run it at half screen :) I was so in awe of the overclock and the performance; remember, not everyone was overclocking in those days.
  10. TheGuruStud

    TheGuruStud

    Joined:
    Sep 15, 2007
    Messages:
    1,614 (0.65/day)
    Thanks Received:
    168
    Location:
    Police/Nanny State of America
    Oh yeah, well I had a 486, then a Pentium 233 WITH MMX! Top that, sucka! :p
  11. lemonadesoda

    lemonadesoda

    Joined:
    Aug 30, 2006
    Messages:
    6,242 (2.17/day)
    Thanks Received:
    963
    Anyone here interested in top500.org supercomputers?

    Well, this Larrabee thing will put an end to Beowulf Class I clusters. And put a STOP to the interest in Cell blades.

    Why? Much cheaper. And you wouldn't need to learn a new architecture model for programming, e.g. Cell. Just use your regular x86 IDE with a Larrabee add-in.

    With Larrabee we are getting 2000 GFLOPS / 300 W ≈ 6,700 MFLOPS per watt, i.e. 10-30 times as power efficient as the best supercomputers.

    That has a HUGE implication for the power and cooling needed to host a number-crunching monster.

    It also has a HUGE implication for the cost of installing an HPC, given how cheap Larrabee is compared to scaling a regular Beowulf.

    With Larrabee, anyone could have an HPC if they wanted to.
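    The efficiency figure above is simple arithmetic; sketched out below (the 2 TFLOPS and 300 W numbers are the thread's assumptions, not measured values):

```python
# FLOPS-per-watt arithmetic from the post above (assumed inputs).
gflops = 2000.0   # claimed peak throughput, GFLOPS
watts = 300.0     # assumed board power, W
mflops_per_watt = gflops * 1000.0 / watts
print(round(mflops_per_watt))  # ~6667 MFLOPS per watt
```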
  12. Error 404

    Error 404 New Member

    Joined:
    Apr 14, 2008
    Messages:
    1,777 (0.78/day)
    Thanks Received:
    169
    Location:
    South Australia
    Hey, would you be able to take a single one of the cores and put it on a Pentium board? :D
    Hopefully they'll get smart and use Pentium Pro cores instead; 512 KB of L2 cache, MMX arch., and a cooler name; what could go wrong?
    Also, imagine if you got a bunch of mobos with 4 PCI-E x16 slots (I'm pretty sure they exist), stuck these cards into a whole bunch of them (along with a quad core something), and ran a Beowulf cluster. Say you had 8 motherboards; that's 4 cards per mobo, which is 32 cards, which is 64 TFLOPS!! :eek:

    @ TheGuruStud: I went from a Pentium 90 to a Celeron-400! :p
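    The cluster math in the post above does check out; as a sketch (the board and card counts are the poster's hypothetical, and 2 TFLOPS per card is the headline figure from the article):

```python
# Hypothetical Larrabee farm from the post above.
boards = 8
cards_per_board = 4
tflops_per_card = 2                           # headline figure

total_cards = boards * cards_per_board        # 8 * 4 = 32 cards
total_tflops = total_cards * tflops_per_card  # 32 * 2 = 64 TFLOPS
print(total_cards, total_tflops)  # 32 64
```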
  13. mrhuggles

    mrhuggles

    Joined:
    Oct 10, 2007
    Messages:
    1,540 (0.62/day)
    Thanks Received:
    174
    There's more to it than this; normal CPUs are much more powerful and multipurpose, although they should go with Core 2, duh, heh... P54Cs kinda suck IMHO. And there's even more to it than just raw processing power, like the cache interfaces and, omg, the memory interfaces. <3 the 2900XT/Pro and 4870 for having a 512-bit ring bus combined with a direct bus for low latency. Honestly, P54C? They must plan on using DDR-400... you think they might have revamped some things?

    OOPS, I mean SIMMs at 60 MHz :? wow, I was a whole two generations off.
  14. TheGuruStud

    TheGuruStud

    Joined:
    Sep 15, 2007
    Messages:
    1,614 (0.65/day)
    Thanks Received:
    168
    Location:
    Police/Nanny State of America
    I've still got you beat :) After the 233 I got a Celeron 366 and OC'd it to 550. The chip could do over 600, but my MB sucked.

    Then I swapped it for a 600 MHz Pentium III, but ran it at stock. The piece of crap CPU just magically died one day. Then I built a new rig :) AMD 1.4 Thunderbird! And I've never looked back (upgraded to an XP 2100, then a long wait until an Athlon 64 3500, X2 4200 and Opteron 170).

    Damn, way off topic. Don't hurt me.
    Last edited: Jul 7, 2008
  15. lemonadesoda

    lemonadesoda

    Joined:
    Aug 30, 2006
    Messages:
    6,242 (2.17/day)
    Thanks Received:
    963
    No.
    Too big, too much heat, too much power, and VERY little gain. Remember, these things are for crunching, not for executing long, complex, branching code. MMX and SSEx are ditched in favour of specialised SIMD instructions. http://forums.techpowerup.com/showpost.php?p=872820&postcount=5
    You won't need a PCIe x16 slot for these. They will probably sit in PCIe x1 or x4 slots; x16 is not needed. Remember, these things crunch... they don't need super high bandwidth for most applications. Think of a gigabit network: that bandwidth goes quite easily down an x1 slot. So you would have a gigabit's worth of data, representing data that took serious crunching to produce.

    With a Larrabee, it is a cluster, but, strictly, it is not a beowulf cluster.

    If you like home-made beowulfs, go here http://www.calvin.edu/~adams/research/microwulf/
  16. eidairaman1

    eidairaman1

    Joined:
    Jul 2, 2007
    Messages:
    11,959 (4.66/day)
    Thanks Received:
    1,337
  17. lemonadesoda

    lemonadesoda

    Joined:
    Aug 30, 2006
    Messages:
    6,242 (2.17/day)
    Thanks Received:
    963
    http://www.intel.com/pressroom/archive/reference/IntelMulticore_factsheet.pdf

    So, will Larrabee be adopting AVX?
  18. eidairaman1

    eidairaman1

    Joined:
    Jul 2, 2007
    Messages:
    11,959 (4.66/day)
    Thanks Received:
    1,337
  19. WarEagleAU

    WarEagleAU Bird of Prey

    Joined:
    Jul 9, 2006
    Messages:
    10,796 (3.69/day)
    Thanks Received:
    545
    Location:
    Gurley, AL
    And even more time for ray tracing, which apparently is made use of in the 4800 series cards. Intel's first shot at GPUs ended miserably roughly 10-15 years ago. I'm sure they've learned from their mistakes back then. I for one am interested in seeing how it performs, but in the time frame given, it won't be new and cutting edge. It's a rehash. From all the information given and linked, it seems a lot more complicated now than I originally thought it was.
  20. Initialised

    Joined:
    Jul 5, 2008
    Messages:
    264 (0.12/day)
    Thanks Received:
    35
    Yup, and that was with 1GB 2900XTs; the extra branching logic on RV770 should make for big gains.

    I foresee nVidia integrating Via Nano or Cell cores and AMD/ATI using Thunderbirds or K6-2s.
  21. bryan_d

    bryan_d New Member

    Joined:
    Dec 28, 2006
    Messages:
    42 (0.02/day)
    Thanks Received:
    4
    I wonder if they will be implementing the old PowerVR tech that the Kyro series used against ATI and nVidia in the past. Hidden Surface Removal was a tech that I wished ATI and nVidia would actually steal! :) Sure ATI had their Z-buffer, and nVidia with their variant... but they simply were not as efficient as PowerVR. My Kyro2 only ran at 175MHz and it held its own fine against what ATI and nVidia had.

    If this becomes something big, it will suck for nVidia and AMD... and for us computer tweakers.

    bryan d
  22. eidairaman1

    eidairaman1

    Joined:
    Jul 2, 2007
    Messages:
    11,959 (4.66/day)
    Thanks Received:
    1,337
    PowerVR is NEC/Panasonic; the graphics for the Dreamcast were awesome.
  23. substance90

    substance90 New Member

    Joined:
    Sep 22, 2007
    Messages:
    71 (0.03/day)
    Thanks Received:
    2
    Location:
    Bulgaria
    Whoa, since when is Intel planning on entering the video card industry with something more powerful than built-in GPUs?! And what's with the design?! You can't just stitch 32 Pentiums together and call it a GPU! nVidia and AMD are way ahead in graphics card design!
  24. vojc New Member

    Joined:
    Mar 29, 2008
    Messages:
    85 (0.04/day)
    Thanks Received:
    9
    That's just LOL of a GPU; the AMD 4870 X2 has ~2.4 TFLOPS and a TDP under 300 W (250-270 I guess).
  25. Morgoth

    Morgoth

    Joined:
    Aug 4, 2007
    Messages:
    3,795 (1.50/day)
    Thanks Received:
    250
    Location:
    Netherlands
    panchoman number 2 :laugh:
    yes you can
