Discussion in 'News' started by btarunr, Jul 6, 2008.
The way they keep leaking info makes it seem they're expecting a lot from their first attempt.
Does this mean that with Intel joining in on the GPU market, GPUs will become cheaper due to increased competition? Or not?
Hopefully. Also, hopefully, neither ATi nor Nvidia will be able to lazily build minor improvements on the same type of architecture with Intel swimming around in the pool... there's gonna be a lot of dunking heads underwater going on.
Contrast that story with Creative. They SUED the guy who was trying to push the Audigy further. Just goes to show there is far better management at Intel than at Creative.
You won't be able to run existing programs on it and make them run 8479483 times faster. It's just like going from single core to dual core to quad core: almost no application scales from x1 to x8 or beyond. Yes, there may be some exceptions (maybe 10 apps on the market right now in total?), but nothing that anyone here regularly uses.
Not true per se. Why?
1./ Larrabee has a much more powerful ALU than a GPU, meaning that for some tasks, Larrabee can do in one instruction what might take a fat loop and lookup tables on a GPU.
2./ The Larrabee ALU handles both DP and SP. A GPU SPE is SP only. Mimicking DP using SP requires a lot of looping and overhead.
3./ SIMD on Larrabee is 512-bit or more. That's the same as 16 x 32-bit (SP) calculations at once. With 32 x86 cores in the Larrabee matrix, that's 16 lanes x 32 cores = 512 simultaneous SP calculations, i.e. the same as 512 shader processor units.
The key, and as yet unknown, figure is how many clock cycles a SIMD operation takes compared to a GPU's SPE.
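The lane arithmetic in point 3 can be sketched as a quick back-of-envelope calculation (the 512-bit SIMD width and 32-core count are the post's assumptions, not confirmed specs):

```python
# Back-of-envelope lane math from the points above.
# Assumed figures (not confirmed specs): 512-bit SIMD, 32-bit SP floats, 32 cores.
simd_width_bits = 512
sp_bits = 32
cores = 32

lanes_per_core = simd_width_bits // sp_bits   # 16 SP lanes per core
total_sp_lanes = lanes_per_core * cores       # 512 simultaneous SP calculations

print(lanes_per_core, total_sp_lanes)  # 16 512
```

Per-clock lane count is only half the story, of course — as the post says, cycles per SIMD instruction is the missing variable.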
It's looking good for this Intel GPU. Remember how much money Intel has: loads for R&D, its own fabs, and it can write its own drivers. They also have a hell of a lot of processor-manufacturing experience to fall back on.
I hope Intel can sock it to the other two; it will be good for us in the long run, whether their first attempt is good or not.
Oh, I remember when a friend of mine had a Pentium 75 MHz overclocked to 90 MHz, and NFS (1) ran full screen! I had some 486 back then (edit: probably a 486SX 33 MHz) and could only run it at half screen size. I was so in awe of the overclock and the performance; remember, not everyone was overclocking in those days.
Oh yeah, well I had a 486 then a pentium 233 WITH MMX! Top that sucka!
Anyone here interested in top500.org supercomputers?
Well, this Larrabee thing will put an end to Beowulf Class I clusters. And put a STOP to the interest in Cell blades.
Why? Much cheaper. And you wouldn't need to learn a new architecture's programming model, e.g. Cell. Just use your regular x86 IDE with a Larrabee add-in.
With Larrabee we are getting 2000 GFLOPS / 300 W, roughly 6.7 GFLOPS per watt, i.e. 10-30 times as power efficient as the best supercomputers.
That has a HUGE implication to power and cooling needed to host a number crunching monster.
It also has a HUGE implication on the cost of installing an HPC given how cheap Larrabee is compared to scaling under regular Beowulf.
With Larrabee, anyone could have an HPC if they wanted to.
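The efficiency figure above works out like this (both the 2000 GFLOPS peak and the 300 W board power are the post's assumptions, not measured numbers):

```python
# FLOPS-per-watt estimate using the figures quoted in the post.
# Assumptions: 2000 GFLOPS peak throughput, 300 W power draw.
gflops = 2000
watts = 300

gflops_per_watt = gflops / watts
print(round(gflops_per_watt, 2))  # 6.67
```

Whether that 10-30x advantage over contemporary supercomputers holds depends entirely on how close real workloads get to the theoretical peak.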
Hey, would you be able to get a single one of the cores and then put it on a Pentium board?
Hopefully they'll get smart and use Pentium Pro cores instead: 512 KB of L2 cache, MMX arch., and a cooler name; what could go wrong?
Also, imagine if you got a bunch of mobos with 4 PCI-E x16 slots (I'm pretty sure they exist), stuck these cards onto a whole bunch of them (along with a quad-core something), and ran a Beowulf cluster. Say you had 8 motherboards at 4 cards per mobo: that's 32 cards, which is 64 TFLOPS!!
@ TheGuruStud: I went from a Pentium 90 to a Celeron-400!
There is more to it than this. Normal CPUs are much more powerful and multipurpose, although they should go with Core 2 instead; P54Cs kinda suck IMHO. And there's even more to it than just raw processing power: the cache interfaces, and OMG, the memory interfaces. Look at the 2900XT/Pro and 4870 with their 512-bit ring bus combined with a direct bus for low latency. Honestly, P54C? They must plan on using DDR 400 MHz... you think they might have revamped some things?
OOPS, I mean sims at 60 MHz :? Wow, I was a whole 2 generations off.
I've still got you beat. After the 233 I got a Celeron 366 and OCed it to 550. The chip could do over 600, but my MB sucked.
Then I swapped it for a 600 MHz Pentium III, but ran it at stock. Piece-of-crap CPU just magically died one day. Then I built a new rig: AMD 1.4 Thunderbird! And I've never looked back (upgraded to an XP 2100, then a long wait until an Athlon 64 3500, X2 4200, and Opteron 170).
Damn, way off topic. Don't hurt me.
Too big, too much heat, too much power and VERY little gain. Remember, these things are for crunching, not for executing long complex and branching code. MMX and SSEx are ditched in favour of specialised SIMD instructions. http://forums.techpowerup.com/showpost.php?p=872820&postcount=5
You won't need a PCIe x16 slot for these. They will probably go in PCIe x1 or x4 slots; x16 isn't needed. Remember, these things crunch... they don't need super-high bandwidth for most applications. Think of gigabit networking: that bandwidth goes quite easily down an x1 slot. So you would have a gigabit's worth of data representing results that took serious crunching to produce.
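The bandwidth comparison behind that argument, as rough numbers (PCIe 1.x x1 is ~250 MB/s per direction; gigabit Ethernet is 1 Gbit/s, i.e. 125 MB/s):

```python
# Rough bandwidth sanity check for the "x1 slot is enough" argument.
# PCIe 1.x x1 link: ~250 MB/s per direction (after 8b/10b encoding overhead).
# Gigabit Ethernet: 1 Gbit/s = 125 MB/s.
pcie_x1_mb_per_s = 250
gigabit_mb_per_s = 1000 / 8  # 125 MB/s

headroom = pcie_x1_mb_per_s / gigabit_mb_per_s
print(headroom)  # 2.0 -- an x1 slot carries gigabit-rate data with room to spare
```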
With a Larrabee, it is a cluster, but, strictly, it is not a beowulf cluster.
If you like home-made beowulfs, go here http://www.calvin.edu/~adams/research/microwulf/
So, will Larrabee be adopting AVX?
OK, this gives AMD and Nvidia time to send in working pieces for hybrid units.
And even more time for ray tracing, which apparently is made use of in the 4800 series cards. Intel's first shot at GPUs ended miserably roughly 10-15 years ago. I'm sure they've learned from their mistakes back then. I for one am interested in seeing how it performs, but in the time frame given, it won't be new and cutting edge. It's a rehash. From all the information given and linked, it seems a lot more complicated now than I originally thought it was.
Yup, and that was with 1 GB 2900XTs; the extra branching logic on R770 should make for big gains.
I foresee nVidia integrating Via Nano or Cell cores and AMD/ATI using Thunderbirds or K6-2s.
I wonder if they will be implementing the old PowerVR tech that the Kyro series used against ATI and nVidia in the past. Hidden Surface Removal was a tech that I wished ATI and nVidia would actually steal! Sure ATI had their Z-buffer, and nVidia with their variant... but they simply were not as efficient as PowerVR. My Kyro2 only ran at 175MHz and it held its own fine against what ATI and nVidia had.
If this becomes something big, it will suck for nVidia and AMD... and for us computer tweakers.
PowerVR is NEC/Panasonic; the graphics for the Dreamcast were awesome.
Whoa, since when is Intel planning on entering the video card industry with something more powerful than built-in GPUs?! And what's with the design?! You can't just stitch 32 Pentiums together and call it a GPU! nVidia and AMD are way ahead in graphics card design!
That's just LOL for a GPU: the AMD 4870 X2 has ~2.4 TFLOPS and a TDP under 300 W (250-270 I guess).
panchoman number 2
yes you can