Friday, March 18th 2022

Google Uses Artificial Intelligence to Develop Faster and Smaller Hardware Accelerators

Designing artificial intelligence / machine learning hardware accelerators takes effort from hardware engineers working in conjunction with scientists in the AI/ML field itself. A few years ago, we started seeing AI incorporated into parts of electronic design automation (EDA) software tools, helping chip designers speed up the process of creating hardware. What we were used to seeing AI do was limited to a couple of tasks, such as placement and routing, and having those automated is a huge deal. However, the power of AI for chip design is not going to stop there. Researchers at Google and UC Berkeley have created a research project in which AI designs and develops AI-tailored accelerators that are smaller and faster than anything humans have made.

In the published paper, researchers present PRIME - a framework that creates AI processors based on a database of blueprints. The PRIME framework feeds off an offline database containing accelerator designs and their corresponding performance metrics (e.g., latency, power) to design next-generation hardware accelerators. According to Google, PRIME can do so without further hardware simulation and produces processors ready for use. As per the paper, PRIME improves performance over state-of-the-art simulation-driven methods by as much as 1.2x-1.5x, while reducing the required total simulation time by 93% and 99%, respectively. The framework is also capable of architecting accelerators for unseen applications.
When it came to test results, PRIME was in its prime (pun intended). Google's very own EdgeTPU was compared to a PRIME-made design, and the AI-generated chip was faster, with a latency improvement of 1.85x. Researchers also noticed a fascinating thing: the framework-generated architectures are smaller, resulting in less power-hungry chips. You can read more about PRIME in the free-to-access paper here.
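The core idea behind PRIME, learning a cheap surrogate of the hardware simulator from logged designs and then ranking new candidates against that surrogate instead of running the simulator, can be sketched in a few lines of Python. Everything below (the two-parameter design space, the k-nearest-neighbour surrogate, the stand-in cost model) is an illustrative toy under our own assumptions, not Google's actual method:

```python
import random

# Toy sketch of offline model-based accelerator design: learn a surrogate
# that maps design parameters to latency from a fixed log of past designs,
# then search new candidates using only the surrogate (no simulator calls).

def true_latency(pe_count, buffer_kb):
    # Stand-in for the expensive hardware simulator (unavailable offline).
    return 1000.0 / pe_count + 0.05 * buffer_kb

# Offline database: (design parameters, measured latency) pairs.
random.seed(0)
database = []
for _ in range(200):
    pe, buf = random.randint(8, 256), random.choice([64, 128, 256, 512])
    database.append(((pe, buf), true_latency(pe, buf)))

def surrogate(pe_count, buffer_kb, k=5):
    # k-nearest-neighbour predictor over the logged designs.
    nearest = sorted(database, key=lambda d: (d[0][0] - pe_count) ** 2
                     + (d[0][1] - buffer_kb) ** 2)[:k]
    return sum(lat for _, lat in nearest) / k

# "Design" a new accelerator: rank unseen candidates by predicted latency,
# touching only the offline data, never the simulator.
candidates = [(pe, buf) for pe in range(16, 257, 16)
              for buf in (64, 128, 256, 512)]
best = min(candidates, key=lambda c: surrogate(*c))
print("chosen design (PEs, buffer KB):", best)
```

In the real framework the surrogate is a learned neural model trained with a conservative objective, and the design space covers many more accelerator parameters; the point here is only that candidate designs are ranked without any further simulator calls.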
Sources: Google AI Blog, via The Register

49 Comments on Google Uses Artificial Intelligence to Develop Faster and Smaller Hardware Accelerators

#26
TheoneandonlyMrK
zlobbyYou forgot Aperture among others but top kek anyway! :D

If we don't die in some global war or some cosmic catastrophe, the next contender for global human extinction is AI.

The moment AI gains the ability to replicate and sustain itself without humans, the most logical conclusion for it would be that humans are its biggest threat. And that ain't some sci-fi...
I think people get a bit excited about AI, personally.
This is a specialised AI designed to shine at one task, not many.
General AI? That's some dreamy shit, IMHO.
Posted on Reply
#28
kiriakost
zlobbyDo you think they removed their famous 'Don't be evil' accidentally?
They have awakened the Evil in me.
"This computer is not recognized by Google; open your smartphone ... you will receive a special code ... enter it to unlock your account ..."
Did this once ... three days later, again: "This computer is not recognized by Google; open your smartphone ... you will receive a special code ... enter it to unlock your account ..."

Do they have a physical address? Because I do plan to mail them a few farts with 100% natural ingredients.
Posted on Reply
#29
R-T-B
kiriakostDid this once ... three days later, again: "This computer is not recognized by Google; open your smartphone ... you will receive a special code ... enter it to unlock your account ..."
Thank dynamic IPs.
Posted on Reply
#30
Assimilator
This is a PRIME (pardon the pun) use-case for AI/ML. Train it to know what good output is and how to generate said output, then throw a whole database of designs at it and let it figure out the most optimal permutations. Considering so much of the semiconductor industry still relies on human-hand-tuned designs, it seems like this could definitely be a game-changer for designing smaller, faster, more efficient chips.

I do wonder what sort of optimisations could be achieved if this was applied to x86, which - being old as the hills - has a proportionally much larger percentage of human-hand-tuned design that could potentially benefit massively.
Posted on Reply
#31
zlobby
kiriakostThey have awakened the Evil in me.
"This computer is not recognized by Google; open your smartphone ... you will receive a special code ... enter it to unlock your account ..."
Did this once ... three days later, again: "This computer is not recognized by Google; open your smartphone ... you will receive a special code ... enter it to unlock your account ..."

Do they have a physical address? Because I do plan to mail them a few farts with 100% natural ingredients.
1600 Amphitheatre Pkwy, Mountain View, CA 94043, United States
Posted on Reply
#32
Chrispy_
TheinsanegamerNseth1911 appears to be the result of an AI being trained exclusively on the machine-translated YouTube comment sections of Indonesian VTubers.
I'm guessing @seth1911 was banned?

Autobans for racism, xenophobia, intolerance, trolling, propaganda, and botting are something Google sorely needs some AI accelerators to help with. Maybe it's just me, but I find 90% of all YouTube videos to be marred somewhere by the worst examples that humanity has to offer.
Posted on Reply
#33
zlobby
Chrispy_I'm guessing @seth1911 was banned?

Autobans for racism, xenophobia, intolerance, trolling, propaganda, and botting are something Google sorely needs some AI accelerators to help with. Maybe it's just me, but I find 90% of all YouTube videos to be marred somewhere by the worst examples that humanity has to offer.
Google bans for far less, the worst part being that the banned content is usually on the right side of the moral compass but simply isn't complying with their ideology.
Posted on Reply
#34
kiriakost
zlobbyGoogle bans for far less, the worst part being that the banned content is usually on the right side of the moral compass but simply isn't complying with their ideology.
The last time my access was restricted from this forum, it was because my ideology was against curved monitors. :toast:
Every log-in since then is actually a new benchmark of whether my account is still alive.

Google does not have any definition of its ideology.
Because of that, I cannot take them seriously, nor do I care what they might do with the few KB of data called a user profile.
The wise internet user should avoid all internet services that violate his soul, then his logic, and also avoid everything that seems of low importance.

If Google dreams of having me tracked, and of being a force of control over my habits and my thoughts, I have bad news for them.
I am an autonomous H.I. and this will never change; to them, I am a super dangerous terrorist. :)
Posted on Reply
#35
zlobby
kiriakostThe last time my access was restricted from this forum, it was because my ideology was against curved monitors. :toast:
Every log-in since then is actually a new benchmark of whether my account is still alive.

Google does not have any definition of its ideology.
Because of that, I cannot take them seriously, nor do I care what they might do with the few KB of data called a user profile.
The wise internet user should avoid all internet services that violate his soul, then his logic, and also avoid everything that seems of low importance.

If Google dreams of having me tracked, and of being a force of control over my habits and my thoughts, I have bad news for them.
I am an autonomous H.I. and this will never change; to them, I am a super dangerous terrorist. :)
Yes, by 'ideology' I actually meant agenda.
Posted on Reply
#36
kiriakost
zlobbyYes, by 'ideology' I actually meant agenda.
I do not wish to go too deep, but for an organization dreaming of global acceptance, their agenda cannot be that narrow.
Either way, everyone runs his business according to his own business plan.
And regarding business plans, I prefer my own.
Posted on Reply
#37
lexluthermiester
DeathtoGnomesI'd wager we are closer than 2 decades to seeing a self aware AI
I will concede that you might be right because big advancements can and do happen. My earlier statement would hold true if the current trend continued and we ended up hitting the atomic wall with lithography processing.
Posted on Reply
#38
Forza.Milan
...and continuing the spirit of beta products... like always.
Posted on Reply
#39
LabRat 891
I read several years ago that AMD, Intel, etc. had already been using Machine Intelligence to aid in CPU and ASIC design. In fact, IIRC Bulldozer was largely done using MI to help lay out the architecture.
No doubt technology and techniques have advanced leaps and bounds since then, but it's not 'a new thing'.
Posted on Reply
#40
zlobby
LabRat 891I read several years ago that AMD, Intel, etc. had already been using Machine Intelligence to aid in CPU and ASIC design. In fact, IIRC Bulldozer was largely done using MI to help lay out the architecture.
No doubt technology and techniques have advanced leaps and bounds since then, but it's not 'a new thing'.
True. Undoubtedly no one is manually stitching billions of transistors and traces together.

The trick is to have everything from high-level design to low-level trace routing done automatically. And trust me, no human can do it better than AI at this point.
Posted on Reply
#41
Steevo
AssimilatorThis is a PRIME (pardon the pun) use-case for AI/ML. Train it to know what good output is and how to generate said output, then throw a whole database of designs at it and let it figure out the most optimal permutations. Considering so much of the semiconductor industry still relies on human-hand-tuned designs, it seems like this could definitely be a game-changer for designing smaller, faster, more efficient chips.

I do wonder what sort of optimisations could be achieved if this was applied to x86, which - being old as the hills - has a proportionally much larger percentage of human-hand-tuned design that could potentially benefit massively.
If Intel and AMD aren't already doing this and more, I believe we would have stagnated a few years ago.

The issue that persists is that humans are still more betterer (I mean that) at figuring out random things than hardware is with a limited sample set. A chip may perform great with X and Y but not Z, and Z may be the manufacturing node or a few subsets of computation. That AMD still has cache latency issues, and that Intel can't make a competitive chip without sacrificing on branch-security issues, says a lot about the status of everything; Arm/Apple has to conflate numbers and still ended up with a massive cache system to reach its performance needs.
Posted on Reply
#43
TheinsanegamerN
Chrispy_I'm guessing @seth1911 was banned?

Autobans for racism, xenophobia, intolerance, trolling, propoganda, and botting are something Google sorely needs some AI accelerators to help with; Maybe it's just me but I find 90% of all Youtube videos to be marred somewhere by the worst examples that humanity has to offer.
Yeah, I think if you honestly believe that 90% of YouTube content needs to be auto-banned for hurt feelings, it may just be you. All those buzzwords you listed have been used repeatedly to censor opposing opinions and quell public discourse, and they make the strongest argument against any further use of AI.
AssimilatorThis is a PRIME (pardon the pun) use-case for AI/ML. Train it to know what good output is and how to generate said output, then throw a whole database of designs at it and let it figure out the most optimal permutations. Considering so much of the semiconductor industry still relies on human-hand-tuned designs, it seems like this could definitely be a game-changer for designing smaller, faster, more efficient chips.

I do wonder what sort of optimisations could be achieved if this was applied to x86, which - being old as the hills - has a proportionally much larger percentage of human-hand-tuned design that could potentially benefit massively.
This is largely a misunderstanding at best, much like "oh well, ARM CPUs will be way smaller because no muh x86 code". The portion of a modern x86 core dedicated to x86 is rather small. The "monolithic x86 core" thing stopped being true with the OG Pentium, where micro-ops were introduced and the line between CISC and RISC became rather blurred. The x86-specific logic is very tiny on modern CPUs, and trying to change it would likely only produce tiny benefits in special synthetic tests.
SteevoIf Intel and AMD aren't already doing this and more, I believe we would have stagnated a few years ago.

The issue that persists is that humans are still more betterer (I mean that) at figuring out random things than hardware is with a limited sample set. A chip may perform great with X and Y but not Z, and Z may be the manufacturing node or a few subsets of computation. That AMD still has cache latency issues, and that Intel can't make a competitive chip without sacrificing on branch-security issues, says a lot about the status of everything; Arm/Apple has to conflate numbers and still ended up with a massive cache system to reach its performance needs.
Computers can only solve what humans program them to, and AI doesn't change that.
lexluthermiesterI will concede that you might be right because big advancements can and do happen. My earlier statement would hold true if the current trend continued and we ended up hitting the atomic wall with lithography processing.
I remember, back in 2005, the prediction was that we would hit dog-level intelligence by 2050, assuming Moore's Law remained constant. Well, we all saw what happened with Moore's Law, and even if a supercomputer-sized AI could hit dog level, that's a far reach from Skynet.
Posted on Reply
#44
lexluthermiester
TheinsanegamerNI remember, back in 2005, the prediction was that we would hit dog-level intelligence by 2050, assuming Moore's Law remained constant.
I think you're right. That sounds familiar. Advancements in AI runtime efficiency have been made since then, but Moore's Law has still slowed down. For progress in AI to continue, there has to be a breakthrough that overcomes the atomic wall we're running up against in IC design; runtime optimization will not be enough.
Posted on Reply
#45
Frank_100
TheinsanegamerNI remember, back in 2005, the prediction was that we would hit dog-level intelligence by 2050, assuming Moore's Law remained constant. Well, we all saw what happened with Moore's Law, and even if a supercomputer-sized AI could hit dog level, that's a far reach from Skynet.
...and even if we could hit dog or horse level intelligence, would you let the dog drive the car?
Posted on Reply
#46
Chrispy_
TheinsanegamerNYeah, I think if you honestly believe that 90% of YouTube content needs to be auto-banned for hurt feelings, it may just be you. All those buzzwords you listed have been used repeatedly to censor opposing opinions and quell public discourse, and they make the strongest argument against any further use of AI.
That is absolutely not what I said.
I said, quite unambiguously, that "90% of YouTube videos are marred", not that 90% of YouTube commenters should be autobanned.

The difference between those two things is multiple orders of magnitude. 90% of those videos have at least one example of racism, xenophobia, intolerance, trolling, propaganda, or botting - all of which are ToS violations punishable by account ban, according to Google.

At any rate, in 9/10 videos there's either an argument with bigots, or between bigots, risen to the first page of comments, or the video is spammed by bots/adverts/scammers somewhere in the comment chain.
Posted on Reply
#47
Assimilator
TheinsanegamerNI remember, back in 2005, the prediction was that we would hit dog-level intelligence by 2050, assuming Moore's Law remained constant. Well, we all saw what happened with Moore's Law, and even if a supercomputer-sized AI could hit dog level, that's a far reach from Skynet.
"Dog-level intelligence" is honestly a ridiculous comparison; even the simplest machine learning algorithm (not even AI) can easily be trained to perform inference on vast datasets. Can a dog? Nope.
Chrispy_Maybe it's just me, but I find 90% of all YouTube videos to be marred somewhere by the worst examples that humanity has to offer.
Then you are doing a pretty bad job at selecting channels.
Posted on Reply
#48
Frank_100
Assimilator"Dog-level intelligence" is honestly a ridiculous comparison; even the simplest machine learning algorithm (not even AI) can easily be trained to perform inference on vast datasets. Can a dog? Nope.
I need to stick up for the dogs.

I can't think of any AI algorithm with the power of a dog's sense of smell.
A dog's sense of smell operates on the entire world: a very large data set.
Dogs can do many things; most AI routines can only do one.
Dogs are pretty good at training themselves to be dogs, better than any AI routine.

Wolfram has some good lectures on the subject:
www.wolfram.com/wolfram-u/catalog/machine-learning/

My own experience with AI routines is in time series analysis.
They give interesting results but are generally inferior to a well thought out adaptive filter.

If you know of a publicly available classification algorithm designed for martingale time series, please post a link.
I would be happy to have my opinion changed.

...and no, I would not let a dog tell me the distribution of a time series.
Posted on Reply
#49
pipi803
kiriakostGoogle failed to be a quality search engine, failed at YouTube as administrator, failed as an email hosting service.
Failed
in each and every sector of engagement with real people.
Who knows, they might succeed in sectors where there are no people.
Well, you must be a very successful person to criticize those three things Google has done so "poorly". I mean, really? Google failed in search, compared to what? At YouTube, compared to what? At Gmail, compared to what?
Your comment is one of the clearest demonstrations of how, no matter how good someone or some company is, there will always be "better" people who will bring them down.
I wonder how you think you succeeded in anything, considering the way you think Google failed so much...
As for Google: thank you for all the good you have brought to the world and to my life. The Internet was reborn the day I saw the Google search start page. YouTube is better and better: there is so much good content available (e.g., high-quality technical tutorials) that the problem is having time to see the top 1% of it. And although I use other email services for certain purposes (e.g., ProtonMail), nothing compares with Gmail, both on the desktop and on Android.
Paganstomp
:) exactly
Posted on Reply