Google chief evangelist and “father of the internet” Vint Cerf has a message for business executives looking to rush into deals around chat-based artificial intelligence: “Don’t.”
Cerf pleaded with attendees at a Mountain View conference on Monday not to scramble to invest in conversational AI just because “it’s a hot topic.” The warning comes amid a burst of popularity around ChatGPT.
“There’s an ethical issue here that I hope some of you will consider,” Cerf told the conference crowd Monday. “Everybody’s talking about ChatGPT or Google’s version of that and we know it doesn’t always work the way we would like it to,” he said, referring to Google’s Bard conversational AI that was announced last week.
His warning comes as big tech companies like Google, Meta and Microsoft grapple with how to stay competitive in the conversational AI space while rapidly improving a technology that still commonly makes mistakes.
Alphabet chairman John Hennessy said earlier in the day that the systems are still a ways away from being widely useful and that they have many issues with inaccuracy and “toxicity” to resolve before even testing on the public.
Cerf has served as vice president and “chief Internet evangelist” for Google since 2005. He’s known as one of the “Fathers of the Internet” because he co-designed some of the architecture used to build the foundation of the internet.
Cerf warned against the temptation to invest just because the technology is “really cool, even though it doesn’t work quite right all the time.”
“If you think ‘man, I can sell this to investors because it’s a hot topic and everyone will throw money at me,’ don’t do that,” Cerf said, which earned some laughs from the crowd. “Be thoughtful. You were right that we can’t always predict what’s going to happen with these technologies and to be honest with you, most of the problem is people—that’s why we people haven’t changed in the last 400 years let alone the last 4,000.”
“They will seek to do that which is their benefit and not yours,” Cerf continued, appearing to refer to general human greed. “So we have to remember that and be thoughtful about how we use these technologies.”
Cerf said he asked one of the systems to attach an emoji at the end of each sentence. It didn’t do that, and when he pointed this out, the system apologized but didn’t change its behavior. “We are a long ways away from awareness or self-awareness,” he said of the chatbots.
There’s a gap between what the technology says it will do and what it actually does, he said. “That’s the problem… you can’t tell the difference between an eloquently expressed” response and an accurate one.
Cerf offered an example of when he asked a chatbot to provide a biography about himself. He said the bot presented its answer as factual even though it contained inaccuracies.
“On the engineering side, I think engineers like me should be responsible for trying to find a way to tame some of these technologies so that they are less likely to cause harm. And of course, depending on the application, a not-very-good-fiction story is one thing. Giving advice to somebody… can have medical consequences. Figuring out how to minimize the worst-case potential is very important.”