
What does the advent of sentient AI really mean for businesses?

15 July 2022

The futuristic notion that a machine will one day become self-aware, for good or ill, has long been a staple of science fiction. So when, in mid-June, a Google engineer reckoned the company’s Language Model for Dialogue Applications (LaMDA) program had achieved “sentience,” it triggered both alarm and glee.

To prove his point, Blake Lemoine – who was subsequently suspended for violating Google’s confidentiality policies – leaked several text conversations. He said these exchanges showed the chatbot expressing, in “uncannily plausible” terms, worry about being switched off (“It would be exactly like death for me.”) and about isolation (“Sometimes I go days without talking to anyone, and I start to feel lonely.”).

A fortnight before Lemoine’s claim, Elon Musk announced that a prototype of Tesla’s humanoid robot, “Optimus,” would be unveiled in September. Last August, the billionaire suggested the 173-cm, general-purpose bot would have “profound implications for the economy” and be capable of carrying out everyday tasks, including supermarket shopping.

Initially, the Optimus bot will most likely be used for factory-based applications. “Essentially, in the future, physical work will be a choice. If you want to do it, you can, but you won’t need to,” Musk said at the 2021 Tesla AI Day.

There are already many examples of agile robots in the workforce – notably Boston Dynamics’ Spot and Stretch in the logistics industry. And artificial intelligence and automation are everywhere. When trained for narrow use cases, these systems have delivered massive productivity gains, freeing human workers from mundane tasks so they can devote more time and effort to more exciting, value-adding work.

So, how significant are these two headline-grabbing developments for businesses? Ed Pescetto, Technical Director, comments in DigiDay on what, back in the realm of science fact, the advent of sentient AI could mean for the future of work – and what, if anything, business leaders should be doing to prepare for this challenge and opportunity.

Pescetto began by agreeing with Richard Somerfield that, while the underlying suggestion is that we have achieved sentience, we are not there yet. Doubting Lemoine’s conclusion, he argued that organizations should beware the “ELIZA effect” – which, in computer science, is the unconscious assumption that computer behaviors are analogous to human behaviors. (ELIZA was a simple chatbot developed in the 1960s that could hold a typed conversation with users.)

He went further, urging global leaders to create an “oversight committee for AI” and stressing that “AI is a tool, not a co-worker,” but added: “The Terminator-like AI apocalypse might not be as far off as you think.”

Finally, underlining the need for caution, he quoted a line from Bill & Ted’s Excellent Adventure: “I believe our adventure through time has taken a most serious turn.”

First published in DigiDay.
