Google Announces LaMDA For Conversational AI

LaMDA is different from most other language models in that it was trained on dialogue.

Recently, Google announced LaMDA, a breakthrough conversational technology.

Google said that LaMDA is the company's latest research breakthrough, adding pieces to one of the most tantalizing areas of conversational AI. Similar to many recent language models, including BERT and GPT-3, LaMDA is built on Transformer, a neural network architecture that Google Research invented and open-sourced in 2017. The Transformer architecture produces a model that can be trained to read many words (for example, a sentence or paragraph), understand how those words relate to one another, and then predict what words it thinks will come next.
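The next-word-prediction objective described above can be illustrated with a toy stand-in. The sketch below uses simple bigram counts rather than a real Transformer (which learns these relationships with attention over long contexts); the corpus and function names are purely illustrative.

```python
from collections import Counter, defaultdict

# Toy stand-in for the next-word-prediction objective: count, for each
# word, which words tend to follow it, then predict the most frequent one.
# A real Transformer learns far richer context-dependent statistics.

def train_bigram(corpus):
    """Build follower counts for each word in the corpus."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent continuation of `word`, or None."""
    followers = counts.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

corpus = [
    "the model reads many words",
    "the model predicts the next word",
    "the model relates words to one another",
]
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "model" is the most frequent follower
```

Even this crude counting scheme captures the core idea: given preceding words, choose a likely continuation. Transformer models replace the counts with learned representations of how every word in the context relates to every other.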

LaMDA is different from most other language models in that it was trained on dialogue. "During its training, it picked up on several of the nuances that distinguish open-ended conversation from other forms of language. One of those nuances is sensibleness. Basically: Does the response to a given conversational context make sense?"

Source: Google

According to Google, once trained, LaMDA can be fine-tuned to significantly improve the sensibleness and specificity of its responses. The company is also exploring dimensions like "interestingness," assessing whether responses are insightful, unexpected, or witty.
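As a rough illustration of how such quality dimensions might be combined, the sketch below averages per-dimension ratings into a single score. The dimension names come from the article; the numeric ratings and the equal weighting are hypothetical assumptions, not LaMDA's actual evaluation procedure (which relies on human raters).

```python
# Illustrative only: combine per-dimension quality ratings for a candidate
# response. The dimensions (sensibleness, specificity, interestingness) are
# from Google's description; the 0-1 values and equal weighting are made up.

def quality_score(ratings):
    """Average the per-dimension ratings into one overall score."""
    return sum(ratings.values()) / len(ratings)

candidate = {"sensibleness": 0.9, "specificity": 0.7, "interestingness": 0.5}
print(round(quality_score(candidate), 2))  # 0.7
```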

Google has built and open-sourced resources that researchers and developers can use to analyze models and the data on which they’re trained.

LaMDA is based on earlier Google research, published in 2020, that demonstrated that Transformer-based language models trained on dialogue could learn to talk about virtually anything.