Thursday, October 4th 2012

Siliconarts Announces 'RayCore' Real-Time Ray Tracing GPU

Siliconarts Inc., a Korean tech startup, has developed RayCore, which it describes as the first real-time ray tracing graphics processing unit (GPU) in graphics hardware history. RayCore is a next-generation GPU for rendering high-quality 3D graphics, and the company claims its visual quality surpasses that of the rasterization approach.

Realistic effects are the key feature differentiating ray tracing from the rasterization approach; the technology is used mostly by Hollywood studios to create blockbuster content such as "Toy Story" and "Avatar." Until now, however, ray tracing was implemented only in software, due to the exponential computational load, the need for expensive racks of rendering hardware, and the significantly longer rendering times involved. It was never implemented in real time.
RayCore, developed by Siliconarts, is hardware that overcomes the limitations of the existing ray tracing approach, bringing all of the benefits of ray tracing to a product that can render cinema-quality 3D graphics effects in real time.

In particular, RayCore is designed for the industry's lowest power consumption so that it can drive the user interface and user experience on mobile platforms such as smartphones, and a significant impact on the mobile game space is expected. In other words, both UI/UX and mobile games built on these high-quality 3D effects become possible. A senior official at Siliconarts said that ray tracing-enabled smartphones will be introduced soon, as the company has already completed a license agreement with Company A to supply the RayCore GPU IP.

"The key feature of this product is that ray tracing, which was considered impossible to implement using hardware, is now available not only on PCs and servers, but also in diverse devices such as smartphones and smart TVs. None of the global GPU companies have successfully implemented ray tracing on a real-time basis and their technology level is still trivial. It is almost impossible to implement," said Hyung Min Yoon, the CEO of Siliconarts.

You can witness the powerful implementation of exquisite 3D graphics using RayCore as the company demonstrates its products at i-SEDEX 2012, October 9-11, at booth #2570, Hall 2, KINTEX, Goyang-si, Gyeonggi-do, Korea.

24 Comments on Siliconarts Announces 'RayCore' Real-Time Ray Tracing GPU

#1
eidairaman1
The Exiled Airman
eventually AMD or Nvidia will buy the company out.
#2
Benetanegia
Skeptic here. Faster than rasterization with lower power consumption? I want to believe...

Also I'm not sure I want my games back to hard shadows again (looking at the pictures). Ray-tracing without area lights pretty much sucks in this day and age.
#3
Ikaruga
Benetanegia: Also I'm not sure I want my games back to hard shadows again (looking at the pictures). Ray-tracing without area lights pretty much sucks in this day and age.
And GPU accelerated voxels and octrees arrived years ago to rescue you. (but the picture sucks hard, I agree. No caustics, no radiosity, etc,...nothing)
#4
Prima.Vera
We need more info about this. Obviously, the pictures above look like a 3D Studio render back in the '90s
#5
Ikaruga
Prima.Vera: We need more info about this. Obviously, the pictures above look like a 3D Studio render back in the '90s
I highly agree. If this is a small GPGPU that could be used on any PCB as a kind of coprocessor, or if the silicon could be used in current CPUs and GPUs, then this could be quite a big success.
#6
Fourstaff
eidairaman1: eventually AMD or Nvidia will buy the company out.
As is the norm, this company lacks the R&D to translate their research into a product.
Benetanegia: Skeptic here. Faster than rasterization with lower power consumption? I want to believe...

Also I'm not sure I want my games back to hard shadows again (looking at the pictures). Ray-tracing without area lights pretty much sucks in this day and age.
There is probably a catch somewhere.
#7
Benetanegia
I've been investigating a bit about it and found this:

sites.google.com/a/siliconarts.co.kr/siliconarts-inc/newsroom
RayCore® enables 3D contents to process in between 14 million to 24 million rays per second per core, the industry's fastest performance, and supports display resolutions of 960x640, 960x480, 800x480, and 300x240.
Here's the catch, and it's not very impressive apparently.

From graphics.stanford.edu/papers/i3dkdtree/gpu-kd-i3d.pdf:
The Cell system from [Benthin et al. 2006] can issue 19.2 four-wide GInstr/s, about 62% the rate of our X1900 XTX, but casts 57.2 million primary rays per second compared to our 15.2 for similar renderings of the conference room scene. Their single 2.4 GHz Opteron reaches 8.7 million primary rays per second.
And here bouliiii.blogspot.com.es/2008/08/real-time-ray-tracing-with-cuda-100.html
With the last optimizations I made, I think that between 12 millions rays / s and 40 millions rays/s may be computed on a GeForce 8800GT on the demo I give in the code. I will test soon the code on a GTX280 and without being too optimistic, I think that 100 millions rays / sec may be generated on it.
2.4 Opteron - 8.7 million rays/s
X1900 XTX - 15.2 million rays/s
Cell - 57.2 million rays/s
8800 GT - 12-40 million rays/s
GTX 280 - 100 million rays/s
Current gen GPU?? - 2-4x GTX280? More? GTX 480 was 8x faster than GTX285 in Design Garage demo. So what 200-800 million rays/s for a current GPU??
RayCore - 14-24 million ray/s per core.

(Yeah, I might be comparing apples to oranges, but I think that it gives a general idea and it's not like the examples given by Siliconarts look much more advanced than previous realtime ray-tracers)

If 1 RayCore is aimed at mobile chips (that's what I understand), how many cores would fit in a die size equal to a high-end GPU? 10 (240 million rays/s)? 20 (480 million rays/s)?

After investigating a lot I only have more questions lol. But I do believe a little more in the possibility of real-time ray-tracing being absolutely possible right now and definitely by 2013-2014. As to being faster than rasterization though, NO WAY.
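[Editor's note: the rays-per-second comparison above boils down to simple throughput arithmetic. A minimal sketch of the budget, where the 30 fps target and the one-ray-per-pixel assumption are illustrative choices, not figures from Siliconarts:]

```python
def rays_per_second_needed(width, height, fps, rays_per_pixel=1):
    """Minimum ray throughput to render width x height at the given
    frame rate, assuming rays_per_pixel rays cast for each pixel."""
    return width * height * fps * rays_per_pixel

# RayCore's largest listed resolution at an assumed 30 fps, with a
# single primary ray per pixel (no shadow or reflection rays):
print(rays_per_second_needed(960, 640, 30))    # 18432000 (~18.4M rays/s)

# The same single-ray budget at 1080p / 30 fps:
print(rays_per_second_needed(1920, 1080, 30))  # 62208000 (~62.2M rays/s)
```

At one primary ray per pixel, 960x640 at 30 fps already falls inside the quoted 14-24 million rays/s per core, which suggests the per-core figure is sized for exactly those mobile resolutions.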
#8
Fourstaff
Still looks promising with the figures you are posting; they are about 5 years behind, easily fixed with boatloads of money thrown their way.
#9
Benetanegia
Fourstaff: Still looks promising with the figures you are posting; they are about 5 years behind, easily fixed with boatloads of money thrown their way.
What do you mean by 5 years behind? Who?
#10
Fourstaff
Benetanegia: What do you mean by 5 years behind? Who?
RayCore, judging by their per core performance. 14-24 million rays/s would have been average to good performance 5 years ago compared to standard raster.
#11
Benetanegia
Fourstaff: RayCore, judging by their per core performance. 14-24 million rays/s would have been average to good performance 5 years ago compared to standard raster.
Well, considering it's at mobile size I'd consider them as not being so much behind. In fact I'm not sure they are behind at all. But we are comparing a commercial product against research studies which were mostly looking at feasibility, not maximum optimized performance, and using CUDA/OpenGL/OpenCL wrappers. In future releases Nvidia/AMD could certainly implement certain "semi-fixed" function capabilities and match or even vastly outperform them, but who knows. Right now I can only assume that on this side of the equation RayCore is ahead by a couple of years.

On the other hand I don't think that more funding is going to help them much; looking at the website link I posted, it seems that they got some nice funding from the Korean government. More investment would definitely be needed for a grand-scale manufacturing plan, but I don't think it's going to help R&D all that much.

So summarizing, I don't consider them behind (neither ahead, as the PR says), but I don't think funding would improve their tech much (talking about generational jumps) either.

It would be nice to have them as competitors to AMD and Nvidia on the GPU side, but they don't offer equivalent tech, so I doubt game developers will be interested unless AMD/Nvidia offer the same tech, and most probably an API standard would need to be created before they move a finger in that direction.

For the professional market this is good, no doubt, and here they can certainly compete, since being a platform of their own is not a problem, though I hear that Nvidia's OptiX already has a substantial market, and there are free GPU renderers such as Cycles and LuxRender which are very popular too. Competition is good any day of the week though.
#13
Ikaruga
Benetanegia: 2.4 Opteron - 8.7 million rays/s
X1900 XTX - 15.2 million rays/s
Cell - 57.2 million rays/s
8800 GT - 12-40 million rays/s
GTX 280 - 100 million rays/s
Current gen GPU?? - 2-4x GTX280? More? GTX 480 was 8x faster than GTX285 in Design Garage demo. So what 200-800 million rays/s for a current GPU??
RayCore - 14-24 million ray/s per core.

(Yeah, I might be comparing apples to oranges, but I think that it gives a general idea and it's not like the examples given by Siliconarts look much more advanced than previous realtime ray-tracers)

If 1 RayCore is aimed at mobile chips (that's what I understand), how many cores would fit in a die size equal to a high-end GPU? 10 (240 million rays/s)? 20 (480 million rays/s)?

After investigating a lot I only have more questions lol. But I do believe a little more in the possibility of real-time ray-tracing being absolutely possible right now and definitely by 2013-2014. As to being faster than rasterization though, NO WAY.
I'm not here to defend RayCore (I don't know anything about them anyway), but you are indeed comparing apples and oranges. There are still a lot of technical difficulties with currently available GPU-based ray tracing methods, and some clever hardware invention could speed things up quite a bit. It's about the relation and interaction of the secondary rays (shadow, reflection, refraction), the transparency of the traced "point," the speedy and proper handling of curved surfaces with their adaptive tessellation, etc. Those are just a few of the many things which slow down HW-based ray tracing a lot. Being skeptical is all right, but let's just not bash something new before we know enough about it ;)

ps.: fixed the link for ya
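[Editor's note: the secondary-ray overhead described above shows up even in the smallest possible tracer: every primary hit spawns at least one shadow ray before reflection or refraction rays are considered. A toy sketch with one hard-coded sphere and a point light, purely illustrative and unrelated to RayCore's actual design:]

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Nearest positive intersection distance along a unit-length ray, or None."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c  # quadratic 'a' term is 1: direction is unit length
    if disc < 0:
        return None
    for t in ((-b - math.sqrt(disc)) / 2, (-b + math.sqrt(disc)) / 2):
        if t > 1e-6:
            return t
    return None

def shade(origin, direction, center, radius, light):
    """One primary ray; on a hit, one secondary (shadow) ray toward the light."""
    t = hit_sphere(origin, direction, center, radius)
    if t is None:
        return "background"
    hit = [o + t * d for o, d in zip(origin, direction)]
    to_light = [l - h for l, h in zip(light, hit)]
    norm = math.sqrt(sum(x * x for x in to_light))
    to_light = [x / norm for x in to_light]
    # Offset the shadow-ray origin slightly to avoid self-intersection.
    start = [h + 1e-4 * d for h, d in zip(hit, to_light)]
    if hit_sphere(start, to_light, center, radius) is not None:
        return "shadow"
    return "lit"

print(shade((0, 0, 0), (0, 0, 1), (0, 0, 5), 1, (0, 0, -5)))  # lit
print(shade((0, 0, 0), (0, 1, 0), (0, 0, 5), 1, (0, 0, -5)))  # background
```

Even this trivial case casts two rays per lit pixel; add reflection and refraction rays per bounce and the count grows geometrically, which is one reason per-core ray throughput figures cannot be compared one-to-one across renderers.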
#14
Benetanegia
Ikaruga: I'm not here to defend RayCore (I don't know anything about them anyway), but you are indeed comparing apples and oranges. There are still a lot of technical difficulties with currently available GPU-based ray tracing methods, and some clever hardware invention could speed things up quite a bit. It's about the relation and interaction of the secondary rays (shadow, reflection, refraction), the transparency of the traced "point," the speedy and proper handling of curved surfaces with their adaptive tessellation, etc. Those are just a few of the many things which slow down HW-based ray tracing a lot. Being skeptical is all right, but let's just not bash something new before we know enough about it ;)

ps.: fixed the link for ya
It was not my intention to bash it. I just read the PR and thought "these are some outstanding claims," and of course outstanding claims require outstanding demonstrations.

I investigated and the claims don't seem to match what the reality is atm.

Like I said, I know it's kind of apples to oranges, but there's absolutely no way to tell if RayCore offers anything over existing GPU ray-tracers or if it's actually the other way around (i.e. RayCore using "optimizations", compromises). The pictures provided by Siliconarts definitely DON'T show anything new or outstanding, so why should we assume their tracer is more advanced?

PS: Thanks for the link. I fixed it. Somehow a colon "magically appeared" at the end for no reason. :p
#15
Ikaruga
Benetanegia: It was not my intention to bash it. I just read the PR and thought "these are some outstanding claims," and of course outstanding claims require outstanding demonstrations.

I investigated and the claims don't seem to match what the reality is atm.

Like I said, I know it's kind of apples to oranges, but there's absolutely no way to tell if RayCore offers anything over existing GPU ray-tracers or if it's actually the other way around (i.e. RayCore using "optimizations", compromises). The pictures provided by Siliconarts definitely DON'T show anything new or outstanding, so why should we assume their tracer is more advanced?

PS: Thanks for the link. I fixed it. Somehow a colon "magically appeared" at the end for no reason. :p
It might have some special instruction or micro-code hard-wired into the silicon, specially designed to help some certain ray-tracing algorithms, but that's just a wild guess;)

Btw, as I already said above, I agree about the picture; it makes the whole PR attempt look like a really bad joke, and it would be better to leave it out :)
#16
Benetanegia
Ikaruga: It might have some special instruction or micro-code hard-wired into the silicon, specially designed to help certain ray-tracing algorithms, but that's just a wild guess ;)
Heh, I was suggesting that when I said:
In future releases Nvidia/AMd could certainly implement certain "semi-fixed" function capabilities and match or even vastly outperform them though, but who knows. Right now I can only assume that on this side of the equation RayCore is ahead by a couple years.
But I can understand that my made up jargon wasn't understood properly. :laugh:

But back to the discussion, I'm not completely sure that advantage (which is probably just temporary, should ray-tracing become a standard) is even enough now, considering that GPU makers are probably way ahead on manufacturing and probably on raw performance. I've not seen comparable numbers for current cards. Hazily, I think I remember reading that Design Garage shoots over 1 billion rays/s, but that's at 3 fps on a Fermi, so certainly not real-time and not comparable, and to make matters worse it was at a higher 1080p resolution. Hard to make comparisons. Only that in that particular ray-tracing demo the GTX 480 was 8x faster than the GTX 285, but that's probably in large part Nvidia optimizing for Fermi (not that Fermi >>>> Tesla for those kinds of tasks).

So all in all, I agree, it's too early to draw conclusions. I'm just trying to cover all angles and understand everything involved, for which discussing it is really advantageous. Nothing like seeing others' POV to cover all angles.
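[Editor's note: the Design Garage figures recalled above (roughly 1 billion rays/s at 3 fps, 1080p, quoted from memory, so approximate) can be sanity-checked with simple arithmetic; they work out to about 160 rays per pixel per frame:]

```python
rays_per_second = 1_000_000_000   # recalled Design Garage throughput (approximate)
fps = 3
pixels = 1920 * 1080              # one 1080p frame

rays_per_frame = rays_per_second / fps
rays_per_pixel = rays_per_frame / pixels
print(round(rays_per_pixel))      # roughly 160 rays per pixel per frame
```

A figure that high implies many samples per pixel (path-traced global illumination), so it measures something very different from a one-ray-per-pixel Whitted-style tracer, underlining why such throughput numbers are hard to compare.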
#17
hardcore_gamer
eidairaman1: eventually AMD or Nvidia will buy the company out.
I hope Intel buys this company and comes up as an alternative to AMD/nVidia.
#18
Disparia
^ I don't know, seems too specific a solution for them. As the Phi becomes more powerful I believe Intel will start to expand its market beyond HPC. It could happen as early as the next generation of Phi.
Benetanegia: It would be nice to have them as competitors to AMD and Nvidia on the GPU side, but they don't offer equivalent tech, so I doubt game developers will be interested unless AMD/Nvidia offer the same tech, and most probably an API standard would need to be created before they move a finger in that direction.
AMD/nVidia could adopt quickly thanks to the increasing attention to GPGPU over the years, and several devs have expressed an interest, but yeah, someone will need to come forward with a plan for how to go forward with it.
#19
TheoneandonlyMrK
hardcore_gamer: I hope Intel buys this company and comes up as an alternative to AMD/nVidia.
Probably the most interesting thing that could happen with this tech at the minute would be this. How are they claiming anything more than mobile use with those resolutions (they mention servers and PCs)? 960xwhatever is not good enough, simples.

I don't see this as even slightly evolutionary either. Imagination Technologies bought Caustic tech a few years ago and have been threatening to release an ARM SoC with PowerVR/Caustic tech capable of ray-traced gfx. Is it possible this is linked to that (a la patent-troll tech)? Regardless, they are far from the first with this, but hey ho.
#20
Benetanegia
theoneandonlymrk: How are they claiming anything more than mobile use with those resolutions (they mention servers and PCs)? 960xwhatever is not good enough, simples.
I suppose they can use multiple processors (6 seem to be required) to form a full HD picture. Since ray-tracing works "backwards" (from camera > to objects > to light sources) tiling is no problem at all.
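[Editor's note: since every primary ray starts at the camera independently of its neighbours, splitting a frame into per-processor tiles is straightforward. A sketch of the tile arithmetic, where the tile size comes from RayCore's listed 960x640 mode but the multi-chip tiling scheme itself is an assumption, not something Siliconarts has described:]

```python
import math

def tiles_needed(frame_w, frame_h, tile_w, tile_h):
    """Independent tiles (i.e. processors) needed to cover one frame."""
    return math.ceil(frame_w / tile_w) * math.ceil(frame_h / tile_h)

print(tiles_needed(1920, 1280, 960, 640))  # 4
print(tiles_needed(1920, 1080, 960, 640))  # 4 (bottom row only partially used)
```

Because each tile's rays never interact with another tile's, the tiles can be rendered fully in parallel and simply stitched together at scan-out.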
#21
TheoneandonlyMrK
Benetanegia: I suppose they can use multiple processors (6 seem to be required) to form a full HD picture. Since ray-tracing works "backwards" (from camera > to objects > to light sources) tiling is no problem at all.
You're creating false realities there. What you suppose they can do is not what they just announced, and in reality they announced very little concrete info at all, so why spread FUD?

Who is this Company A, and what phones are going to have them? What OEMs are going to be shipping them? Premature PR BS imho. And anyway, Crysis isn't going to run on it, so it must be crap :p
#22
Benetanegia
theoneandonlymrk: You're creating false realities there
Eh?? Do you know how ray-tracing (or rasterizing for that matter) works?

Btw, I didn't see that it supports 960x640, so with 4 of them 1920x1280 would be possible. No need for 6 (I was thinking the max res was 960x480).