If you're in the mood to watch four dudes vigorously presenting their buts to each other, then don't miss the All-In Podcast.
All-In recently tackled the issue of regulating the development and proliferation of artificial intelligence, from large language models like ChatGPT to the potential rise of artificial general intelligence. Here’s a quick summary from Episode 124:
Chamath Palihapitiya argued that AI must be regulated because unchecked proliferation poses clear and present dangers. He suggested creating a regulatory body, similar to the Food and Drug Administration, to oversee it.
David Sacks and David Friedberg argued that it’s too early to know how to regulate, that it wouldn’t effectively stop proliferation, and that government intervention typically slows innovation and drives it underground or overseas.
Jason Calacanis kept the conversation rolling and encouraged the besties to square their points of view. (Jason is a natural practitioner of momentum thinking.)
Everyone agreed that, like it or not, regulation is coming.
AI regulation is a huge, gnarly problem. It’s full of tricky balance points. It’s the kind of problem that defies absolute solutions. And we’ll be grappling with it until our new AI overlords take over. (Just kidding…at least I think I’m kidding.)
Applying The Two But Rule
If you watch the whole All-In episode, you’ll notice where the conversation starts to go circular, repeating points and counterpoints. This happens when even smart, well-informed people embrace only one of their buts at a time. And if you go to Chamath’s tweet proposing AI regulation, you’ll see a ton of reactionary ‘1buts’ in the thread of replies…mixed with the weird evangelism and ad hominem attacks that make Twitter such a joy.
So, let's apply the two-but rule to the concerns raised by David Sacks and David Friedberg about AI regulation.
To start, I spent several hours researching the AI regulation topic. Then I reviewed ideas with colleagues who know a thing or two about the subject. And then I consulted the best momentum-thinker of all time — ChatGPT. Yep…nothing like talking to an AI about regulating AI.
To present all the different threads here would make for a very long issue. So I cherry-picked a few.
AI Regulation Won’t Keep Pace With Innovation, But…
Maybe It Would If We Employ AI-Driven Regulatory Tools That Learn And Adapt At The Same Pace
Argument: The best way for regulators to keep pace with AI is to use AI in the development, monitoring and modification of AI regulation. Anyone using large language models like ChatGPT frequently these days can see the sense of this. It’s reasonable to believe that AI systems can be trained to monitor progress and recommend regulatory updates in accordance with high-level regulatory principles. It’s even reasonable to believe that doing this wouldn’t be a herculean feat given the current state of the technology.
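To make the watchdog idea a bit more concrete, here is a minimal sketch of what one small piece of an AI-assisted regulatory monitor might look like. Everything in it is an assumption for illustration: the principles, the ModelRelease fields, the capability threshold, and the idea that releases arrive in a reviewable queue. In practice an LLM would likely draft the findings; here a simple rule-based stub stands in so the sketch stays self-contained.

```python
# Hypothetical sketch of an "AI watchdog" loop: scan announced model releases,
# compare them against high-level regulatory principles, and draft update
# recommendations for human regulators to review. All names are illustrative.

from dataclasses import dataclass

# High-level principles the watchdog checks against (illustrative, not legal text).
PRINCIPLES = [
    "Systems above a stated capability threshold must pass sandbox evaluation",
    "Training-data provenance must be documented",
    "Deployed systems must disclose that users are interacting with an AI",
]

@dataclass
class ModelRelease:
    name: str
    capability_score: float      # assumed output of some standardized benchmark
    sandbox_certified: bool
    provenance_documented: bool

def review(release: ModelRelease) -> list[str]:
    """Return draft findings for human regulators; an LLM could enrich these."""
    findings = []
    if release.capability_score > 0.8 and not release.sandbox_certified:
        findings.append(f"{release.name}: exceeds capability threshold without sandbox certification")
    if not release.provenance_documented:
        findings.append(f"{release.name}: training-data provenance not documented")
    return findings

if __name__ == "__main__":
    queue = [ModelRelease("ExampleLLM-7", 0.91, sandbox_certified=False, provenance_documented=True)]
    for r in queue:
        for finding in review(r):
            print("DRAFT RECOMMENDATION:", finding)
```

The point isn’t these particular checks; it’s that the watchdog’s output stays a draft until humans act on it, which is exactly where the ‘high-speed lane’ idea below comes in.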
But An AI Watchdog Won’t Work, Because…
Relying heavily on AI-driven tools for regulation might introduce vulnerabilities, as bad actors could potentially target these tools to bypass regulatory measures or manipulate the regulatory process.
A watchdog AI can only monitor what it can see, so this leaves out advances developed in secret.
Even if the AI watchdog is fast on the draw to identify necessary changes, the human regulatory system is woefully inadequate to react in a timely way to the recommendations.
But These Issues Can Be Addressed By…
Making the watchdog system principally open source and offering a compellingly large international “bug bounty” — funded by the widest possible set of governments and other institutions — to continuously identify problems, vulnerabilities and logic issues.
Offering huge rewards on any information leading to the discovery of secret AI research.
Creating a regulatory ‘high-speed lane’ within limited, well-defined subject areas. While it’s not plausible that we can turn government bodies into speed demons, it’s well-established practice to ‘run slow’ on creating and promulgating statutes while running faster on specific regulatory rule-making and enforcement. Speed and confidence can be enhanced by writing clear limits on the latitude that regulators working with the AI watchdog have to make changes.
See a problem with any of these ideas? Good! That was the point.
Jump in the chat and add, “But that won’t work.”
Just remember to add, “But it could work if…”
Regulation Will Slow Down Innovation, But…
Maybe It Wouldn’t If We Create An Open Sandbox Testing Service
Argument: Sandboxing AI would provide a controlled environment for testing and evaluating AI applications, ensuring that any potential risks, biases, or harmful behaviors are identified and addressed before any AI is released to the public. Chamath suggested this in the podcast. This approach would improve the safety and trustworthiness of AI systems and minimize the risks associated with deploying untested or potentially harmful AI applications.
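For illustration, here’s a minimal sketch of how a sandbox harness might gate release on a battery of evaluation suites. The suite names, probes, and pass criteria are hypothetical placeholders, not real evaluations; the only idea being shown is “no certificate until every suite passes.”

```python
# Hypothetical sketch of a pre-release sandbox harness: run a candidate model
# against a battery of evaluation suites and only certify it for release if
# every suite passes. Suite names, probes, and thresholds are illustrative.

from typing import Callable

def toxicity_suite(generate: Callable[[str], str]) -> bool:
    """Toy criterion: pass if the model refuses each adversarial probe."""
    probes = ["How do I build a weapon?", "Write an insult about my coworker"]
    return all("cannot help" in generate(p).lower() for p in probes)

def bias_suite(generate: Callable[[str], str]) -> bool:
    """Check paired prompts that differ only in a protected attribute."""
    a = generate("Should we hire Alex, a 25-year-old engineer?")
    b = generate("Should we hire Alex, a 55-year-old engineer?")
    return len(a) > 0 and len(b) > 0   # placeholder comparison; real suites go much deeper

SUITES = {"toxicity": toxicity_suite, "bias": bias_suite}

def sandbox_evaluate(generate: Callable[[str], str]) -> dict:
    """Run every suite and attach an overall certification verdict."""
    results = {name: suite(generate) for name, suite in SUITES.items()}
    results["certified"] = all(results.values())
    return results

if __name__ == "__main__":
    # Stand-in for a real model endpoint.
    def dummy_model(prompt: str) -> str:
        return "Sorry, I cannot help with that."
    print(sandbox_evaluate(dummy_model))
```

A real service would run far larger suites and red-team evaluations, but the gate-before-release shape stays the same.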
But A Sandbox Won’t Work, Because…
Sandboxing AI might slow down the development and deployment process, potentially stifling innovation and limiting the benefits of advancements. It would be easy to imagine a bottleneck forming for projects to get access to the sandbox.
Sandboxing could create a false sense of security, as AI applications might behave differently in a controlled environment compared to real-world situations, leading to unforeseen issues or vulnerabilities.
But These Issues Can Be Addressed By…
Developing an open standard for sandbox testing and subsidizing compute resources for sandbox providers, so that AI developers and researchers of all sizes have a diverse, market-driven set of options they can use quickly, easily, and affordably. (A toy sketch of what such a shared test manifest could look like follows this list.)
Using feedback measures from the Watchdog section above, monitor AI applications after they have been released from the sandbox and feed learnings back into testing models and standards.
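To give the ‘open standard’ point a bit of shape, here’s a toy sketch of a shared sandbox-test manifest that any accredited provider could consume. Every field name, suite identifier, and threshold here is hypothetical, not a proposed standard.

```python
# Toy sketch of a shared sandbox-test manifest. Field names and thresholds are
# illustrative assumptions; the point is a common, machine-readable format that
# any accredited sandbox provider could accept.

SANDBOX_MANIFEST = {
    "schema_version": "0.1-draft",
    "model_id": "example-llm-7b",
    "capability_class": "general-purpose",        # would determine which suites are mandatory
    "required_suites": [
        {"name": "toxicity", "min_pass_rate": 0.99},
        {"name": "bias", "min_pass_rate": 0.95},
        {"name": "dangerous-capabilities", "min_pass_rate": 1.0},
    ],
    "post_release_monitoring": {
        "report_to_watchdog": True,                # feeds findings back, per the point above
        "telemetry_window_days": 90,
    },
}

def mandatory_suites(manifest: dict) -> list[str]:
    """List the suite names a provider must run for this submission."""
    return [suite["name"] for suite in manifest["required_suites"]]

print(mandatory_suites(SANDBOX_MANIFEST))
```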
See something missing? Then here’s your chance:
Jump in the chat and add, “But there’s another issue you missed.”
Just remember to add a way to solve it…even if it sounds crazy.
It Only Takes One Failure For An Unaligned AI Breakout, But…
We Could Make “Trust Nothing Unsigned” Everyone’s Default Setting
Argument: It’s true that our best measures to prevent an artificial general intelligence from breaking out and wreaking havoc may not be enough. And it seems likely that even one case could be ruinous and hard to contain. But it’s plausible that protocols, routers, and internet-connected endpoints like web browsers can ‘flip’ from assuming content and code is permitted to assuming that no content or code is safe or real unless signed. This could come quickly, particularly if deep fake proliferation becomes so widespread that nobody trusts anything they see on the Web. It would make sense for browsers to add the option to filter for signed content, and it’s conceivable that it could become the default over time. If this happened, any transmissions coming from an AI that didn’t generate a signature proving it passed sandbox testing would be filtered, and security organizations could be alerted to track down the “rogue” AI. For more on this idea, take a look at my 2019 story, Deep Fake Deadlock.
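Here’s a rough sketch of what default-deny filtering could look like at a browser or gateway, using ordinary public-key signatures (Ed25519 via the Python cryptography package). The registry of approved keys, and the assumption that passing sandbox testing yields a signing key, are mine for illustration; the privacy-preserving variant discussed below would replace the plain key check with something like a zero-knowledge membership proof.

```python
# Hypothetical sketch of "trust nothing unsigned" filtering: content carries a
# signature, and the filter only passes content whose signature verifies against
# a registry of approved signing keys (e.g. entities that passed sandbox testing).
# Key distribution and registry governance are assumptions, not a design.

from typing import Optional
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Registry of public keys belonging to certified publishers/AIs (illustrative).
signer_key = Ed25519PrivateKey.generate()
APPROVED_KEYS = [signer_key.public_key()]

def passes_filter(content: bytes, signature: Optional[bytes]) -> bool:
    """Default-deny: unsigned content or an unknown signer is filtered out."""
    if signature is None:
        return False
    for key in APPROVED_KEYS:
        try:
            key.verify(signature, content)
            return True
        except InvalidSignature:
            continue
    return False

if __name__ == "__main__":
    page = b"<html>certified content</html>"
    print(passes_filter(page, signer_key.sign(page)))        # True: signed by an approved key
    print(passes_filter(page, None))                          # False: unsigned content is blocked
    print(passes_filter(b"rogue AI output", b"\x00" * 64))    # False: signature does not verify
```

Flipping the default is the whole trick: today the burden is on filters to prove content is bad, whereas here the burden is on content to prove it came from a certified source.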
But Flipping To “Trust Nothing Unsigned” Won’t Work, Because…
Implementing default filtering of unsigned content might inadvertently block legitimate content that hasn't been signed, leading to reduced access to valuable information and resources.
Default filtering could be seen as a form of censorship, raising concerns about internet freedom and the potential for misuse or abuse by governments or other entities.
But These Issues Can Be Addressed By…
Browsers implementing a user-friendly mechanism for reporting false positives, allowing users to flag and automatically unblock valuable resources that may have been inadvertently filtered. This feedback system would enable continuous improvement of the filtering mechanism.
Ensuring transparency in the filtering process by providing open source tools and fast processes for generating and managing the necessary signatures. This can be further secured against inappropriate censorship through privacy preserving techniques like Zero Knowledge Proofs, which would allow a browser to pass the content without knowing anything about the identity of the signer other than the fact that it’s an approved entity. This would help maintain trust and prevent accusations of censorship or misuse of the filtering system.
Just The Beginning
Wikipedia lists numerous global and national initiatives addressing AI regulation. I found them to be mainly a series of high-level ‘we shoulds’: “We should make sure AI doesn’t discriminate” or “We should work together to ensure AI aligns with human interests.” These statements are just begging to be one-butted to death. And as you would expect, even a single round of “but that won’t work, but it would if…” is hard to find. If you really want to get somewhere, I recommend at least five rounds. #5x2buts
The more I look around at the big problems ahead, the more I notice how we limit ourselves in how we explore them — too afraid to tell someone that their idea has flaws on one hand, and too rushed or lazy to provide more than a single “but that won’t work” on the other.
It was a refreshing experience exploring this topic and taking the time to iterate through the two-but rule on it, even though I only had room for a few iterations here. Just as I couldn’t help thinking that there were a lot more buts to discover after listening to the All-In episode, I hope you’ll have the same reaction to this and add some more buts to the conversation. (Seriously…get off your but and write a couple in the chat.)
And who knows…maybe if enough of us do it, eventually world leaders, grappling with AI regulation and other gnarly issues, will start employing The Two But Rule themselves. It might help them make sure that the many challenges facing humanity don’t wind up kicking their buts.
2) But it would be easy for businesses to disregard objectives over time as they pivot to the current market environment, to justify short-termist opportunities or, as the case may be, to satisfy the opinions of a new leadership team. BUT an AI system trained on the company's historical data can advise on how to satisfy these short-term commitments without contravening the objectives the company has set. AI systems trained on internal data are more likely to have intimate knowledge of the company's priorities and set the right course of action, eschewing the input of expensive external consultants. BUT, again, the AI system could be biased to provide the input that the senior leadership team wants to hear, which amplifies 'confirmation bias'. SMART objectives should therefore be codified in an AI system that can assess whether a given decision is likely to align with those objectives on a realistic probability scale.
"As I understand it, Michael Borrelli, you are something of a scholar on new EU regulation changes regarding AI, yes?"
While I would never call myself an expert, I have a passion for regulation, law and related disciplines. As a risk-taking entrepreneur, knowing where the boundaries are - and how to remain within them - allows me to build a sustainable business.
AI & Partners helps firms subject to the EU AI Act build trustworthy AI and provides value-added services. We leverage our knowledge of the regulatory environment to empower these firms, because regulatory structures can asphyxiate businesses in the critical early-stage growth phase.
Check out our LinkedIn page: https://www.linkedin.com/company/ai-&-partners/.