7 Comments
Aug 9, 2022 · Liked by Alberto Romero

I think LLMs are still a long way from being truly realistic virtual assistants. Current LLMs, although impressive in what they do, are just "input-response" magical statistical black boxes ("stochastic parrots") which do not consider context, time awareness, location, and all those subtleties of a true human conversation. The present AI virtual assistants on the market are just that, "input-response" devices, without long memory, conversational flow, context awareness, and so on.

Some developments are in progress, as you have reported. Google claims LaMDA is a conversational AI capable of managing the "open-ended nature" of human dialogue. Amazon says Alexa Conversations lets users experience natural conversations with Alexa. So, what obstacles could prevent companies from accomplishing this? I'm not an expert on the subject, but I perceive that LLMs still have a long way to go. Perhaps a new paradigm in LLMs is needed; some experts say we should consider research in cognitive science, psychology, and neurology. Who knows. (Maybe a good subject for a future article from you.)

In the meantime, sooner rather than later, the industry will very probably include LLMs in Siri, Alexa, Google Assistant, or Meta's BlenderBot 3, test them on us, and evolve the product over time (as always).

Aug 9, 2022 · edited Aug 9, 2022 · Liked by Alberto Romero

It will take some time before we have true virtual assistants; my expectation is that we will not see the first commercial applications for approximately three years.

The upside is obvious: people can communicate in their own language with technology. At first, the assistants will be specific, aiding users in certain applications like email search or technical troubleshooting. The upside for organizations is much lower costs and less dependence on humans. The downside of these applications is that it will be extremely hard to fine-tune the models to make them safe and free of bias. An assistant that makes racist or offensive remarks cannot be accepted in any way by the organizations that deploy it. And the nature of neural networks, especially ones that large, makes them by definition a black box, so it's almost impossible to get rid of the bias. Another obstacle is the huge amount of computing power required to scale this technology.

I also see many opportunities in overcoming loneliness. I see a future where people actually enjoy talking to a robot, since you wouldn't have the judgements of fellow humans. People would also forgive a bot for not being perfect. I would be hesitant to get so hooked on this technology and to have yet another interface to big tech, but hey, maybe I'll change my mind when it really helps me organize my life and I can outsource some of the chores that usually fill up my day.

Aug 9, 2022 · Liked by Alberto Romero

It would be lots of fun to be able to have a real discussion with my various assistants -- imagine being able to give Wikipedia a target word count ("Give me 250 words on the history of Croatia." "That was good. Now 500."). But the application I would love most is being able to call and talk to technical support and/or customer assistance directly, without having to either a) sit and wait, listening forever to terrible music, or b) listen to long lists of alternative connections ("because our phone numbers have changed").
