Dive Brief:
- Sixty-seven percent of CEOs responding to a recent Workday survey cited potential errors as a top risk of artificial intelligence adoption.
- While the vast majority (98%) of CEOs see AI as having the potential to generate immediate benefits for their organizations, concerns over risks such as data privacy and inaccuracy are holding many of them back from fully embracing the technology, according to a report on the findings. Twenty-eight percent of CEOs said they want to wait to see how AI impacts their organizations before deciding on an approach.
- “But these technologies are evolving fast, so business leaders can’t afford to stand still,” the study said. “The implications could mean that the competition races ahead and leaves these businesses behind.”
Dive Insight:
The Workday research aligns with many other recent studies on the benefits and risks of AI.
In a survey unveiled last month by KPMG, 80% of executives said they believe generative AI — which is capable of producing text, images, or other content based on data used to “train” it — will disrupt their industry, with 93% saying it could provide value to their business.
“Generative AI technology is in the midst of a meteoric rise and is now reaching an inflection point,” KPMG said in its survey report. “The market has matured to the point that large companies in basically every industry can no longer ignore it and are now spurring into action.”
Yet almost half (45%) of respondents said the technology could negatively impact their organizations’ trustworthiness if the appropriate risk-management tools aren’t implemented.
“Early versions of generative AI have shown a lot of challenges with getting even basic, unchallenged facts correct, such as which national soccer team won the last World Cup,” the report said. “Billions of dollars could be wasted if enterprises place bets on the wrong tools, applications, or use cases, or fail to weave initial pilot projects into their ways of operating. Customers could be alienated, and brands could be ruined, by an unsupervised generative AI algorithm spewing out immoral or erroneous advice.”
AI's success is “intricately tied” to the quality of the data it's trained on, according to Jeff Schumann, CEO of technology firm Aware. “Just as a diamond derives its brilliance from its clarity, the brilliance of an AI solution is derived from the clarity and quality of its training data,” he said in an article published on LinkedIn.
While the risk of errors can be reduced, many companies are unprepared as they “wrangle huge volumes of information across patchwork systems, static spreadsheets and fragmented processes,” according to a Workday blog post on the company’s research, which was based on a global survey of 2,355 senior business executives in May and June. Fifty-nine percent of organizations reported that their data is somewhat or completely siloed, and only 4% said their data is fully accessible.
“AI is only as powerful as the data and the humans that power its design and application,” the report said. “And data that is siloed, poor quality and not uniformly structured is limiting AI’s potential.”
Meanwhile, as corporate leaders grapple with the risks of AI, the issue is also gaining increased scrutiny in Washington. Last week, the White House announced that eight technology companies, including Salesforce, Adobe, IBM, and Nvidia, were added to a group that had committed to adhering to a set of voluntary safeguards for the technology.
During the same week, Senate Majority Leader Chuck Schumer, D-N.Y., hosted a closed-door AI summit with executives from major technology companies.
Regulators such as the Federal Trade Commission are focused on the issue as well. The agency is investigating whether Microsoft-backed startup OpenAI, creator of the popular generative AI tool known as ChatGPT, has violated consumer protection laws by putting personal reputations and data at risk, according to a July Washington Post report. As part of the probe, the company was asked to provide detailed descriptions of all complaints it has received of its products making “false, misleading, disparaging or harmful” statements about people, the report said.