Google has already pioneered communication between computers and human beings with tools like Google Assistant. The future of computer-human conversations, however, seems eerily similar to actual conversations among humans. At Google I/O 2021, Google announced a new language model that not only interprets a user's words literally but also uses context to hold more meaningful conversations. With this new model, called "LaMDA," Google aims to tap into the versatility of human interactions for open-ended yet purposeful dialogue.

LaMDA is based on Google's neural network architecture called Transformer. This neural network can read or listen to words (or sets of words), identify the relations between those words in different scenarios, and predict reasonable responses when used in a conversation. Beyond words alone, LaMDA is also trained to incorporate elements like excitement or surprise, so the conversation sounds more natural. In a blog post, Google adds that the language model can converse about, or even as, almost anything, and demonstrated this during the keynote by having it speak as a paper airplane and as Pluto.
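Google hasn't published implementation details for LaMDA, but the mechanism at the heart of every Transformer is self-attention: each word's representation is updated by weighing its relation to every other word in the sequence. As a rough illustration only (this is a toy NumPy sketch, not Google's code; a real Transformer learns separate query/key/value projections and stacks many such layers), the core computation looks like this:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: rows sum to 1.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    """Scaled dot-product self-attention over a sequence of word vectors.

    x: (seq_len, d) array, one embedding per word. For simplicity this toy
    version reuses the embeddings as queries, keys, and values.
    """
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)       # how strongly each word relates to every other
    weights = softmax(scores, axis=-1)  # attention weights, one row per word
    return weights @ x                  # each word's new, context-aware representation

# Three toy "word" embeddings; each output row blends a word with its neighbours.
words = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
context = self_attention(words)
print(context.shape)  # (3, 2): same sequence length, now context-dependent
```

It is this context mixing, repeated across many layers and trained on dialogue, that lets a model respond to the meaning of a whole exchange rather than to isolated keywords.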

The model has yet to make its way into consumer products like Google Assistant, and before that happens, Google is exploring ways to make conversations more interesting, witty, and insightful, but also unexpected at times. This demands that LaMDA's responses are not only relevant but also appropriate, and that they do not hurt or harm users. It requires Google to weed out any biases in the neural network's training that might lead to hate speech or the spread of misinformation.

In the future, LaMDA will likely form the basis of conversations with Google Assistant, which should become far more powerful than it is right now. So, you might be able to ask the Assistant to "find me a route with beautiful mountains" while driving, or to "show me the part where the lion roars at the sunset" while watching a video on YouTube.

While the potential applications of LaMDA are limited only by the bounds of human imagination, one of the most obvious uses we can think of is conversation for people who feel lonely. That, however, may also bring complications, and we can't help but be reminded of the 2013 film Her, in which a man develops a relationship with a growing AI with a female voice. We hope Google can intervene and patch any loopholes that could lead to a situation like the one depicted in the movie, while still fostering healthy and compassionate human-like conversations.

Check back for more Google I/O news at XDA!