Dive Brief:
- Companies using artificial intelligence to mine insights from mountains of data will likely customize pricing and marketing with sharper precision and gain leverage over consumers, Securities and Exchange Commission Chair Gary Gensler said Monday.
- “When communications, product offerings and pricing can be narrowly targeted efficiently to each of us, producers are more able to find each individual’s maximum willingness to pay a price or purchase a product,” Gensler said in a speech. “With such narrowcasting, there is a greater chance to shift consumer welfare from us to producers.”
- While posing perils such as bias and fraud, “AI opens tremendous opportunities for humanity, from healthcare to science to finance,” Gensler said. “As machines take on pattern recognition, particularly when done at scale, this can create great efficiencies across the economy,” he said, calling AI “the most transformative technology of our time.”
Dive Insight:
U.S. technology executives, lawmakers, academics, regulators and other stakeholders are debating how to enable an economy-wide burst of AI innovation while limiting the risks of misinformation, bias and “deepfake” fabrication of audio and images.
Senate Majority Leader Chuck Schumer of New York last month laid out a framework for creating AI regulation that seeks a rapid build-up of AI expertise in Congress as a prelude to bipartisan legislation. His plan prioritizes such goals as explainability, accountability and security.
“AI promises to transform life on Earth for the better,” Schumer said in a June 21 speech.
“It will reshape how we fight disease, tackle hunger, manage our lives, enrich our minds and ensure peace,” Schumer said. “But there are real dangers too — job displacement, misinformation, a new age of weaponry and the risk of being unable to manage the technology altogether.”
AI chatbots such as Bard, ChatGPT and Bing Chat pose “profound risks to humanity,” Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and thousands of other signatories said in an open letter released in March.
AI developers are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict or reliably control,” the open letter says. The signatories called for a six-month pause in developing systems more powerful than GPT-4 and the creation of AI governance structures.
The new technology may eventually trigger financial turmoil, Gensler said. “AI may heighten financial fragility as it could promote herding with individual actors making similar decisions because they are getting the same signal from a base model or data aggregator.”
Gensler also warned that AI may open new horizons for fraud.
“With AI, fraudsters have a new tool to exploit,” he said. “They may do it in a narrowcasting way, preying upon our personal vulnerabilities.”
The SEC will crack down on AI-related instances of deception, Gensler said. “Public companies making statements on AI opportunities and risks need to take care to ensure that material disclosures are accurate and don’t deceive investors.”
The handful of companies leading AI innovation may eventually dominate the market, achieving “rent extractions” and jeopardizing privacy and intellectual property by collecting data from AI users, Gensler said.
“Through the data being collected on each of us, we’re all helping train the parameters of AI models,” Gensler said. If AI algorithms, or “base layers,” are “training off of downstream applications, they are able to gain significant value.”
“For the SEC, the challenge here is to promote competitive, efficient capital markets in the face of what could be dominant base layers at the center of the capital markets,” he said.
Federal AI regulation may promote accountability and some of the other goals of “Responsible AI,” a set of principles that has emerged in the private sector.
Beyond accountability, Responsible AI calls for technology that serves a broad range of stakeholders; mines high-integrity data; protects against attacks and rogue use; shields user data; avoids harming people, property or the environment; and is explainable, transparent and reliable.