Tuesday, March 21st 2023

Google Bard Chatbot Trial Launches in USA and UK

Today we're starting to open access to Bard, an early experiment that lets you collaborate with generative AI. We're beginning with the U.S. and the U.K., and will expand to more countries and languages over time. This follows our announcements from last week as we continue to bring helpful AI experiences to people, businesses and communities.

You can use Bard to boost your productivity, accelerate your ideas and fuel your curiosity. You might ask Bard to give you tips to reach your goal of reading more books this year, explain quantum physics in simple terms or spark your creativity by outlining a blog post. We've learned a lot so far by testing Bard, and the next critical step in improving it is to get feedback from more people.
About Bard

Bard is powered by a research large language model (LLM), specifically a lightweight and optimized version of LaMDA, and will be updated with newer, more capable models over time. It's grounded in Google's understanding of quality information. You can think of an LLM as a prediction engine. When given a prompt, it generates a response by selecting, one word at a time, from words that are likely to come next. Picking the most probable choice every time wouldn't lead to very creative responses, so there's some flexibility factored in. We continue to see that the more people use them, the better LLMs get at predicting what responses might be helpful.
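The "flexibility" described above is commonly implemented as temperature sampling over the model's next-word probabilities, though the post doesn't say exactly how Bard does it. Here's a minimal illustrative sketch of that general technique; the vocabulary and scores are made up, and this is not Bard's actual code:

```python
import math
import random

def sample_next_word(scores: dict[str, float], temperature: float = 0.8) -> str:
    """Pick the next word from raw model scores (logits).

    temperature < 1 sharpens the distribution toward the top choice;
    temperature > 1 flattens it, making unlikely words more probable.
    """
    # Softmax with temperature: exp(score / T) normalized over all words
    exps = {w: math.exp(s / temperature) for w, s in scores.items()}
    total = sum(exps.values())
    probs = {w: e / total for w, e in exps.items()}
    # Draw one word in proportion to its probability
    return random.choices(list(probs), weights=list(probs.values()))[0]

# Toy scores for words that might follow "The cat sat on the"
logits = {"mat": 3.2, "sofa": 2.1, "roof": 1.4, "moon": 0.2}
print(sample_next_word(logits))  # usually "mat", occasionally another word
```

Always picking the highest-scoring word would make every response identical; sampling with a moderate temperature is what gives responses the variety the post alludes to.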

While LLMs are an exciting technology, they're not without their faults. For instance, because they learn from a wide range of information that reflects real-world biases and stereotypes, those sometimes show up in their outputs. And they can provide inaccurate, misleading or false information while presenting it confidently. For example, when asked to share a couple of suggestions for easy indoor plants, Bard convincingly presented ideas — but it got some things wrong, like the scientific name for the ZZ plant.
Although it's important to be aware of challenges like these, there are still incredible benefits to LLMs, like jumpstarting human productivity, creativity and curiosity. And so, when using Bard, you'll often get the choice of a few different drafts of its response so you can pick the best starting point for you. You can continue to collaborate with Bard from there, asking follow-up questions. And if you want to see an alternative, you can always have Bard try again.
Bard is a direct interface to an LLM, and we think of it as a complementary experience to Google Search. Bard is designed so that you can easily visit Search to check its responses or explore sources across the web. Click "Google it" to see suggestions for queries, and Search will open in a new tab so you can find relevant results and dig deeper. We'll also be thoughtfully integrating LLMs into Search in a deeper way — more to come.

Building Bard responsibly

Our work on Bard is guided by our AI Principles, and we continue to focus on quality and safety. We're using human feedback and evaluation to improve our systems, and we've also built in guardrails, like capping the number of exchanges in a dialogue, to try to keep interactions helpful and on topic.
Sign up to try Bard

In case you were wondering: Bard did help us write this blog post — providing an outline and suggesting edits. Like all LLM-based interfaces, it didn't always get things right. But even then, it made us laugh.
We'll continue to improve Bard and add capabilities, including coding, more languages and multimodal experiences. And one thing is certain: We'll learn alongside you as we go. With your feedback, Bard will keep getting better and better.

You can sign up to try Bard at bard.google.com. We'll begin rolling out access in the U.S. and U.K. today and expand over time to more countries and languages.

Until next time, Bard out!
Source: Google Blog

7 Comments on Google Bard Chatbot Trial Launches in USA and UK

#1
hsew
You can practically smell the stench of Google sweat.
#2
trsttte
hsew: You can practically smell the stench of Google sweat.
Not really, they've been at it for longer than anyone else, just didn't release anything to the public. OpenAI forced their hand, but they've been at it and producing results long before them

The long road to LaMDA

LaMDA’s conversational skills have been years in the making. Like many recent language models, including BERT and GPT-3, it’s built on Transformer, a neural network architecture that Google Research invented and open-sourced in 2017. That architecture produces a model that can be trained to read many words (a sentence or paragraph, for example), pay attention to how those words relate to one another and then predict what words it thinks will come next.
Case in point, openai/gpt-3 uses the technology that google invented before anyone else
#3
mashie
After using both ChatGPT and Bard, Google has quite a bit of catching up to do, but eventually I'm sure they will get ahead.

It's a bit like the Google Home vs Amazon Echo launch.
#4
stimpy88
Hahaha. I love it when Google gets taken down a peg or two.
#5
R-T-B
trsttte: Not really, they've been at it for longer than anyone else, just didn't release anything to the public. OpenAI forced their hand, but they've been at it and producing results long before them

Case in point, openai/gpt-3 uses the technology that google invented before anyone else
This. Google invented nearly all the stuff all your "better" models use, people. They aren't some kind of noob here.

I believe this is based on the same model over which Google fired an engineer, one who was firmly convinced the model had become sentient and deserved rights. As silly as that was, this is NOT a primitive model.

www.scientificamerican.com/article/google-engineer-claims-ai-chatbot-is-sentient-why-that-matters/
#6
Wye
After decades of AI research, what we have in 2023 are glorified search engines that have a "thinking loop" to simulate thinking and type-delay to simulate human-like typing.
It is simply gross.
#7
trsttte
Wye: "thinking loop" to simulate thinking and type-delay to simulate human like typing
The marketing might say that, but it's far from the only reason (assuming it's even a real reason to begin with, which I doubt). These things have to have delays because the queries take time to resolve and are freaking expensive.

One of the big challenges ahead for something like ChatGPT and Bard and all these tools will be profitability; those servers don't run on hopes and dreams, and cold, hard cash is necessary to keep the lights on.

A couple of months ago, when Alphabet took a dive, I saw an estimate from Morgan Stanley that put an AI-infused Google Search at around a penny per query. Doesn't sound like much, right? How many Google searches are done again? Google's first result says 6.3 million a minute.

This article from two days ago puts it much, much worse, at an estimated 36 cents per ChatGPT query.

www.digitaltrends.com/computing/chatgpt-cost-to-operate/

earthweb.com/how-many-google-searches-per-minute/
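To put those quoted figures in perspective, here's a quick back-of-the-envelope calculation using the numbers from the comment above (a penny per query from the Morgan Stanley estimate, 6.3 million Google searches per minute, and 36 cents per ChatGPT query). These are rough public estimates, not measured costs:

```python
# Annual serving cost if every Google search cost this much to answer.
# All inputs are the rough estimates quoted in the thread above.
QUERIES_PER_MINUTE = 6_300_000
MINUTES_PER_YEAR = 60 * 24 * 365

def yearly_cost(cost_per_query: float) -> float:
    """Annual spend in dollars at a given per-query cost."""
    return cost_per_query * QUERIES_PER_MINUTE * MINUTES_PER_YEAR

print(f"At $0.01/query: ${yearly_cost(0.01):,.0f} per year")
print(f"At $0.36/query: ${yearly_cost(0.36):,.0f} per year")
```

Even at a penny per query, that works out to roughly $33 billion a year at Google's search volume; at ChatGPT-level per-query costs it would be over a trillion, which is why profitability is such a real concern.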