Artificial Intelligence: Friend or Foe?


CEOs Divided over AI

Forty-two percent of CEOs surveyed at the Yale CEO Summit last June said AI has the potential to destroy humanity within the next five to ten years. “It’s pretty dark and alarming,” Yale professor Jeffrey Sonnenfeld said of the findings.

The survey, conducted at a virtual event held by Sonnenfeld’s Chief Executive Leadership Institute, found wide-ranging opinions about the risks and opportunities of AI.

Sonnenfeld said the survey included responses from 119 CEOs drawn from a cross-section of business, including Walmart CEO Doug McMillon, Coca-Cola CEO James Quincey, the leaders of IT companies like Xerox and Zoom, and CEOs from pharmaceutical, media and manufacturing companies.

The business leaders displayed a sharp divide over just how dangerous AI is to civilization. While 34% of CEOs said AI could potentially destroy humanity within ten years and 8% said that could happen within five years (together, the 42% cited above), 58% said that could never happen and that they are “not worried.”

In a separate question, Yale found that 42% of the CEOs surveyed said the potential catastrophe of AI is overstated, while 58% said it is not.

Sounding the Alarm

The findings came just weeks after more than 350 executives, researchers and engineers working in AI signed a one-sentence statement warning that fast-evolving AI technology could pose a risk of human extinction on par with nuclear war and COVID-19-like pandemics. The signatories included top executives from Google, Microsoft and OpenAI (the maker of ChatGPT), three of the leading AI developers.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," read the statement, which was released by the Center for AI Safety (CAIS), a San Francisco-based nonprofit organization.

CAIS said it released the statement as a way of encouraging AI experts, journalists, policymakers and the public to talk more about urgent risks relating to AI.

True/False?

Geoffrey Hinton (a.k.a. the Godfather of AI) recently decided to sound the alarm on the technology he helped develop after worrying about just how intelligent it has become.

“I’m just a scientist who suddenly realized that these things are getting smarter than us,” Hinton said in an interview on CNN. “I want to sort of blow the whistle and say we should worry seriously about how we stop these things getting control over us.”

Hinton’s pioneering work on neural networks shaped the artificial intelligence systems powering many of today’s products. In May 2023, he made headlines when he announced that he was leaving his role at Google, where he had worked for a decade, in order to speak openly about his growing concerns about the technology.

In a New York Times interview, Hinton said he was concerned about AI’s potential to eliminate jobs and create a world where many will “not be able to know what is true anymore.” He also pointed to the stunning pace of advancement, far beyond what he and others had anticipated.

“If it gets to be much smarter than us, it will be very good at manipulation because it will have learned that from us, and there are very few examples of a more intelligent thing being controlled by a less intelligent thing. It knows how to program so it’ll figure out ways of getting around restrictions we put on it. It’ll figure out ways of manipulating people to do what it wants.”

Risk/Reward

While business leaders debate the dangers of AI, the CEOs surveyed by Yale displayed a degree of agreement about the rewards. Just 13% of the CEOs said the potential opportunity of AI is overstated, while 87% said it is not. The CEOs indicated AI will have the most transformative impact in three key industries:

  • Healthcare (48%)

  • Professional Services/IT (35%)

  • Media/Digital (11%)

As some inside and outside the tech world debate doomsday scenarios around AI, there are likely to be more immediate impacts, including the risks of misinformation and the loss of jobs.

Five Schools of Thought

Sonnenfeld told CNN that business leaders break down into five distinct camps when it comes to AI.

1) The first group comprises “curious creators,” naive believers who argue that anything you can do, you should do. “They are like Robert Oppenheimer, before the bomb,” Sonnenfeld said, referring to the American physicist known as the “father of the atomic bomb.”

2) Then there are the “euphoric true believers” who only see the good in technology.

3) Noting the AI boom set off by the popularity of ChatGPT and other new tools, Sonnenfeld described “commercial profiteers” who are enthusiastically seeking to cash in on the new technology. “They don’t know what they’re doing, but they’re racing into it,” he said.

4 & 5) The final two camps both push for an AI crackdown of sorts: alarmist activists and global governance advocates. “These five groups are all talking past each other, with righteous indignation,” Sonnenfeld said.

Where to Land?

This lack of consensus around how to approach AI underscores that even captains of industry are still trying to wrap their heads around the risks and rewards of what could be a real game-changer for society. My advice here is pretty much the advice I give on most things: Eyes wide open. Head on a swivel. Approach with caution, but…approach. It’s the only way forward.


Paul Gravette