The Generative AI space has seen a slew of developments recently. After OpenAI set the Internet ablaze with the trailblazing ChatGPT and GPT-4, digital tech major Google announced that Bard, its own AI chatbot, is ready for use. However, users in India will have to wait a bit longer to take a shot at it.

The early version of Bard can suggest a packing list for your weekend fishing sortie, or even a blog idea. 

“Today we’re starting to open access to Bard, an early experiment that lets you collaborate with generative AI,” Sissie Hsiao, Vice-President (Product), and Eli Collins, Vice-President (Research) of Google, said.

“You can use Bard to boost your productivity, accelerate your ideas and fuel your curiosity,” they said.

As with ChatGPT, you can ask Bard anything that crosses your mind – from tips for reaching your goal of reading more books this year to an explanation of quantum physics in simple terms.

A few weeks ago, Google CEO Sundar Pichai announced the Bard project, which was opened to a select group of users for testing.

“We’ve been working on an experimental conversational AI service, powered by LaMDA, that we’re calling Bard. And today, we’re taking another step forward by opening it up to trusted testers ahead of making it more widely available to the public in the coming weeks.

“Bard seeks to combine the breadth of the world’s knowledge with the power, intelligence and creativity of our large language models,” he said in his February announcement.

Bard draws on information from the web to provide fresh, high-quality responses. “Bard can be an outlet for creativity, and a launchpad for curiosity, helping you to explain new discoveries from NASA’s James Webb Space Telescope to a 9-year-old,” he said.

Also read: GPT-4 vs ChatGPT: How different is OpenAI’s new AI language model

Tech behind Bard

In a statement on Wednesday, Google said Bard is powered by a research large language model (LLM). It is a lightweight and optimised version of LaMDA (Language Model for Dialogue Applications).

It will be updated with newer, more capable models over time. 

“When given a prompt, it generates a response by selecting, one word at a time, from words that are likely to come next,” they said.
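That word-at-a-time process can be illustrated with a toy sketch. The vocabulary and probabilities below are entirely made up for illustration – a real LLM derives them from billions of learned parameters – but the sampling loop mirrors the idea of repeatedly picking a likely next word:

```python
import random

# Hypothetical next-word probabilities (illustrative only; a real LLM
# computes these from its trained parameters, not a lookup table).
next_word_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "sky": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"ran": 0.7, "sat": 0.3},
    "sky": {"is": 1.0},
    "sat": {}, "ran": {}, "is": {},
}

def generate(prompt_word, max_words=5, seed=0):
    """Extend a prompt one word at a time by sampling likely next words."""
    rng = random.Random(seed)
    words = [prompt_word]
    for _ in range(max_words):
        choices = next_word_probs.get(words[-1], {})
        if not choices:
            break  # no known continuation
        # Sample the next word in proportion to its probability.
        next_word = rng.choices(list(choices), weights=list(choices.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the"))
```

Each step looks only at the words produced so far, so the same prompt can yield different responses depending on which likely word gets sampled.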

The more people use them, the better LLMs get at predicting what responses might be helpful.

While LLMs are an exciting technology, they’re not without their faults. “For instance, because they learn from a wide range of information that reflects real-world biases and stereotypes, those sometimes show up in their outputs. And they can provide inaccurate, misleading or false information, while presenting it confidently,” they point out.