OpenAI CEO Sam Altman will make his first appearance before Congress on May 16, testifying at a Senate oversight hearing on artificial intelligence (AI) regulation in the United States. Also testifying will be IBM’s chief privacy and trust officer, Christina Montgomery — who is a member of the U.S. National Artificial Intelligence Advisory Committee — and New York University emeritus professor Gary Marcus.
Details remain scarce concerning the hearing’s agenda. Its title, “Oversight of A.I.: Rules for Artificial Intelligence,” implies the discussion will center on safety and privacy, as does the roster of scheduled attendees.
The hearing will mark Altman’s first on-the-record testimony before Congress, though he recently attended a roundtable discussion with Vice President Kamala Harris at the White House alongside the CEOs of Alphabet, Microsoft and Anthropic.
NYU’s Marcus recently made waves in the AI community with his full-throated support for a proposed six-month “pause” on AI development.
The proposed pause was laid out in an open letter published on the Future of Life Institute website on March 22. As of this article’s publication, it has more than 27,500 signatures.
The letter’s stated goal is to “call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
Altman and Montgomery are among those opposed to the pause.
Montgomery explained her position in an in-depth IBM blog post she authored, titled “Don’t pause AI development, prioritize ethics instead,” in which she made the case for a more precise approach to AI regulation:
“A blanket pause on AI’s training, together with existing trends that seem to be de-prioritizing investment in industry AI ethics efforts, will only lead to additional harm and setbacks.”
According to another IBM blog post penned in part by Montgomery, the company believes AI should be regulated based on risk — it’s worth noting that, to the best of Cointelegraph’s knowledge, IBM doesn’t currently have any public-facing generative AI models.
OpenAI, on the other hand, is responsible for ChatGPT, arguably the most popular public-facing AI technology in existence.
Per an interview with Lex Fridman at a Massachusetts Institute of Technology event, Altman supports the safe and ethical development of AI systems but believes in “engaging everyone in the discussion” and “putting these systems out into the world.”
That leaves Marcus as the lone outlier, one who’s been a vocal supporter of the pause since it was first floated. Though Marcus admittedly had “no hand in drafting” the pause letter, he did pen a blog post titled, “Is it time to hit the pause button on AI?” nearly a month before the open letter was published.
While the upcoming Senate hearing will likely function as little more than a forum for senators to ask questions, the discussion could have disruptive ramifications, depending on which experts you believe.
If Congress determines that AI regulation deserves a heavy hand, experts such as Montgomery fear such efforts could have a chilling effect on innovation without necessarily addressing safety concerns.
This harm could trickle into sectors where GPT technology underpins a plethora of bots and services. In the world of fintech, for example, cryptocurrency exchanges are adapting chatbot technology to serve their customers, conduct trades and analyze the market.
However, experts such as Marcus and Elon Musk worry that failure to enact what they deem common-sense policy on AI oversight could result in an existential crisis for humankind.