
Lightelligence's Optical Processor Outperforms GPUs by 100 Times in Some of The Hardest Math Problems

I wish mining would stop creeping into threads it does not belong in...
While I would normally agree, in this case, the context is valid. If these optical processors can be utilized for cryptocoin mining, it would be very attractive and beneficial for everyone. The mere prospect of such is an exciting possibility! Do you not agree?
How so? This doesn't do any kind of meaningful data hash that I can see.
Do you not think it could be easily adapted? Given what's been stated, I think it would be trivial to recompile mining code to run on such a processor.

In my eyes, this development is the way forward for computing long term. Optical CPUs/GPUs/APUs would not suffer from the heat and power-consumption problems that traditional silicon-based ICs do. Very exciting indeed!

but can it play Crysis?
While it is clear you say that in jest, I suggest that with the code properly compiled and optimized, such a processor would likely be able to run dozens of instances of Crysis simultaneously.
 
There are thousands of fields in science and engineering alone that would massively benefit from hardware that could solve certain classes of problems 100x faster. They specifically don't use general-purpose hardware for everything because it is too slow. The article specifically mentions the "NP-complete class of problems". There exist many such problems that have no efficient algorithms at all to solve them. These problems run in superpolynomial time, which can be exponentially or factorially slow.
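To put rough numbers on "superpolynomial", here is a toy Python sketch using subset sum as a stand-in NP-complete problem (the function and inputs are illustrative, not taken from the article):

    from itertools import combinations

    def subset_sum_bruteforce(nums, target):
        # Try every subset: up to 2**len(nums) candidates in the worst case,
        # no matter how quickly each candidate is checked.
        for r in range(len(nums) + 1):
            for combo in combinations(nums, r):
                if sum(combo) == target:
                    return combo
        return None

    # 30 numbers already means up to ~1e9 subsets; 60 numbers means ~1e18.
    print(subset_sum_bruteforce([3, 34, 4, 12, 5, 2], 9))  # -> (4, 5)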
I fail to see what any of that has to do with what I said. The fact of the matter is that "100 times faster than a GPU" is still a meaningless statement.

The article specifically mentions the "NP-complete class of problems". There exist many such problems that have no efficient algorithms at all to solve them. These problems run in superpolynomial time, which can be exponentially or factorially slow.

It doesn't matter how fast the hardware is; those problems always remain prohibitively expensive to solve in practice.
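To spell out why a constant-factor speedup barely moves the needle on exponential-time problems (my arithmetic, not the article's):

    import math

    # If a brute-force solver costs c * 2**n steps, a machine that is 100x
    # faster only stretches the largest feasible n by log2(100) units.
    print(f"{math.log2(100):.1f} extra problem-size units")  # -> about 6.6

So a 100x faster chip lets the subset-sum sketch above handle roughly 7 more numbers before it hits the same wall.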

Does it matter if you only need it to do one thing?

No, but it's still dumb to make that comparison. I am sure that there are bits of silicon in a phone's SoC that compute specific things much faster than a GPU; that still doesn't mean anything in particular.
 
In a resource-limited scenario such as a phone, you're wrong that it doesn't matter, because speed directly translates to lower energy consumption. Hypothetically, that could be the difference between 8 h and 24 h of battery life for that phone.
 
I don't know if you guys can't or don't want to understand what I am saying.

Comparing them is meaningless. When one says something is X times better than a GPU, I expect it to be better at everything, since that's a generic statement.
 
That's not the statement being made here. The statement is "something is X times better than a GPU at Y applications."
 
Since it cannot store data without exchanging photons for electrons/electric potential, the supporting hardware is going to be next to impossible to make faster than existing custom ASICs for whatever code they threw at it, or however they twisted the testing to attain the result they wanted. I want to see their testing methodology and descriptions, because after reading up on it more, it seems like a really fast photon-sorting device that still requires silicon hardware to make it work. The only real speed-up is the speed of light through optically transparent material vs. the speed of EM propagation through copper and other metals used in current semiconductors.

Essentially, they need to learn to trap light in a bottle to make this work any faster in real-world applications.
 
Do you not think it could be easily adapted? Given what's been stated, I think it would be trivial to recompile mining code to run on such a processor
Not really, no. PoW mining relies on a hash function of some kind, and as far as I can tell, this lacks any.
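For anyone curious what "mining code" actually looks like, here is a minimal Bitcoin-style proof-of-work sketch in Python (header and difficulty made up for illustration). It is just repeated SHA-256 hashing on integers, which is exactly the kind of hash primitive the posts above say this optical device lacks:

    import hashlib

    def mine(block_header: bytes, difficulty_bits: int) -> int:
        # Find a nonce whose double-SHA256 digest falls below a target.
        target = 2 ** (256 - difficulty_bits)
        nonce = 0
        while True:
            payload = block_header + nonce.to_bytes(8, "little")
            digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce
            nonce += 1

    print(mine(b"example header", 16))  # low difficulty so it finishes in a moment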
 
Can they make a processor which will crash the crypto market by making it too easy to produce coin?
No.
Crypto is increasingly harder to mine the more you mine it. By design. No matter how much computing power you throw at it, you're going to hit a wall at some point.
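For reference, a rough sketch of Bitcoin-style difficulty retargeting (the 2016-block/10-minute schedule and the 4x clamp match Bitcoin; the function itself is simplified):

    def retarget(old_target: int, actual_seconds: float,
                 expected_seconds: float = 2016 * 600) -> int:
        # If the last 2016 blocks arrived too fast, shrink the target
        # (i.e. raise difficulty); changes are clamped to 4x per period.
        ratio = max(0.25, min(4.0, actual_seconds / expected_seconds))
        return int(old_target * ratio)

So even a 100x faster miner only finds blocks quickly for a while, until the target ratchets down and block times return to roughly ten minutes.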
 