11 Comments
May 8, 2023 · Liked by John Wolpert

2) But it would be easy for objectives to be disregarded by businesses over time as they pivot to current market conditions, justify short-termist opportunities or, as the case may be, satisfy the opinions of a new leadership team. BUT an AI system trained on the historical data of the company can advise on how to satisfy these short-term commitments without contravening the objectives the company has set. AI systems trained on internal data are more likely to have intimate knowledge of the company's priorities and to set the right course of action, eschewing the input of expensive external consultants. BUT, again, the AI system could be biased to provide the input the senior leadership team wants to hear, which amplifies 'confirmation bias'. SMART objectives should therefore be codified in an AI system that can assess whether a single decision, or a set of decisions, is likely to align with the objective on a realistic probability scale.
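
To make that last idea concrete, here is a rough sketch of what codifying a SMART objective for machine checking might look like. Everything here is illustrative: the `SmartObjective` fields, the `model` object, and its `predict_proba` call are assumptions, not any real system's API.

```python
from dataclasses import dataclass

@dataclass
class SmartObjective:
    """A SMART objective codified as data an AI system can check against."""
    specific: str    # e.g. "Reduce scope-2 emissions"
    measurable: str  # unit, e.g. "tonnes CO2e per quarter"
    target: float    # the achievable/relevant target value
    deadline: str    # time-bound, as an ISO date string

def score_alignment(objective: SmartObjective, decision: str, model) -> float:
    """Ask a (hypothetical) model trained on company data for the
    probability that a proposed decision advances the objective."""
    prompt = (
        f"Objective: {objective.specific}, measured in {objective.measurable}, "
        f"target {objective.target} by {objective.deadline}.\n"
        f"Proposed decision: {decision}\n"
    )
    return model.predict_proba(prompt)  # assumed to return a float in [0, 1]

# Decisions scoring below a board-agreed threshold get flagged for
# human review rather than silently waved through.
ALIGNMENT_THRESHOLD = 0.6
```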

Author · May 8, 2023 · edited May 8, 2023

LOVE Your BUTs!

Ok, here goes...one more before I have to hunker down on manuscript writing again (second 1.5-hour session of the day, starting at the bottom of the hour).

But a SMART objective set (presumably a model?) could be biased, as you say, and fail to align with objectives...and the company objectives could themselves be misaligned with stakeholder objectives. This is exacerbated by working from smaller data sets, and if the overall AI/LLM is trained from "the internet", there are those issues too. BUT there could rise up an open-source service that provides models and model-checks on alignment, and stakeholders could require that a company's system generate a proof that it is compliant with that service in some way. (By the way, the instance of the service's system should not only be open source; it should also regularly sign that it is running the actual code of that open-source repo and no other code, dropping that hash/proof/checksum onto a public, tamper-resistant bulletin board like a blockchain or the New York Times classified section.)
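
A rough sketch of that sign-what-you-run idea, using the standard `hashlib` and `cryptography` libraries; the publish step is left as a comment because the bulletin board itself (chain, newspaper, whatever) is the open design question:

```python
import hashlib
from pathlib import Path
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def repo_digest(repo_root: str) -> bytes:
    """Hash the service's source files, in a stable order, into one digest."""
    h = hashlib.sha256()
    for path in sorted(Path(repo_root).rglob("*.py")):
        h.update(path.read_bytes())
    return h.digest()

def attest(repo_root: str, signing_key: Ed25519PrivateKey) -> tuple[bytes, bytes]:
    """Sign the digest of the code this service claims to be running."""
    digest = repo_digest(repo_root)
    return digest, signing_key.sign(digest)

# Run periodically: post the (digest, signature) pair to the public
# bulletin board so anyone can verify that the deployed service matches
# the open-source repo at that commit.
```

One caveat worth flagging: a process can't fully vouch for itself, so in practice this self-report would need backing from hardware attestation or an independent verifier.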

May 8, 2023 · Liked by John Wolpert

"As I understand it, Michael Borrelli, you are something of a scholar on new EU regulation changes regarding AI, yes?"

While I would never call myself an expert, I have a passion for regulation, law and related disciplines. As a risk-taking entrepreneur, I find that knowing where the boundaries are - and how to remain within them - allows me to build a sustainable business.

AI & Partners helps firms subject to the EU AI Act build trustworthy AI and provides value-added services. We leverage our knowledge of the regulatory environment to empower them, since regulatory structures can asphyxiate businesses at the critical early-stage growth phase.

Check out our LinkedIn page: https://www.linkedin.com/company/ai-&-partners/.

Author

Let’s get a conversation going here and see if we can get some knowledgeable folks to show us their buts. ;)

On the article, I went through several “but regulation won’t work, but it would if...but that won’t work either, but it would if” combinations, but space and time constraints limited that to three layers deep. This chat has the benefit of the right format for further dialectic and the space to explore our buts more deeply.

I presented ways to avoid the slow-down problem you identified above (which was also the key concern the Davids on All-In were articulating), problems with those solutions, and solutions to those problems. But I also encouraged folks to add “buts” I didn’t include and to iterate on the ones I did with additional “but that won’t work, but it could if” #2buts.

Care to show us your buts? :)

May 4, 2023 · Liked by John Wolpert

Thank you, John.

Do you ever think the reverse can be true: innovation won't keep pace with regulation? (Think of a see-saw of momentum shifts.)

Author

As I understand it, Michael Borrelli, you are something of a scholar on new EU regulation changes regarding AI, yes?

Author

Interesting point! So...”Regulation wouldn’t outpace innovation if...”?

May 8, 2023 · Liked by John Wolpert

1. Lawmakers consult industry on an ongoing basis.

2. SMART objectives are set.

3. Scope of regulation is both proportionate and risk-based.

4. Supervision of regulation does not exceed reasonable expectations.

5. Innovation hubs are managed in the best interests of the economy.

Author

Awesome!

Ok…next 2buts:

1) But it would be easy for lawmakers to either flag in their practice of consulting industry or to go through the motions of doing so while not really absorbing the insights; BUT an open, LLM-powered online forum could be set up to ingest C-SPAN and other lawmaker/regulator public material, flag where industry insight should be applied, and alert experts, who could sign up and perhaps earn social credit by providing more-real-time input. But lawmakers might still ignore this, and experts aren’t always the best at writing accessible and cogent content; BUT, again, an LLM could provide summaries, enumerate the degree of agreement, duplication, and disagreement between expert commentary, and alert lawmaker constituents and regulatory staffers about meaty ideas, solutions, and stats (sketched below). But that won’t work, because…? (Your turn. You can extend or ‘shift your but’ to a related line of dialectic.)
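
To make the plumbing of that forum concrete, here is a sketch of the ingest-flag-summarize loop. The `llm` function is a stand-in for whatever model API the forum would run on, and every name here is hypothetical:

```python
def llm(prompt: str) -> str:
    """Placeholder for a call to whatever LLM the forum runs on."""
    raise NotImplementedError

def flag_passages(transcript: str) -> list[str]:
    """Flag passages of a hearing transcript that need industry insight."""
    answer = llm(
        "List the passages in this transcript where technical or industry "
        "expertise should be applied, one per line:\n" + transcript
    )
    return [line for line in answer.splitlines() if line.strip()]

def summarize_expert_input(comments: list[str]) -> str:
    """Condense expert commentary, tallying agreement and disagreement."""
    joined = "\n---\n".join(comments)
    return llm(
        "Summarize these expert comments for a regulatory staffer. Note "
        "where experts agree, duplicate each other, or disagree:\n" + joined
    )

# Pipeline: ingest C-SPAN material -> flag passages -> alert signed-up
# experts -> collect their comments -> summarize -> notify staffers and
# lawmaker constituents.
```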

[I’ll make attempts on the other buts soon. In the meantime, friends…show us your buts!] ;)


Indeed. Great read. +But: unaligned entities that won’t regulate? Smaller nation states and the likes of Palantir?

Author

Right on! So now, by the great law of #2buts, what’s your “But unaligned entities like nation states and companies like Palantir wouldn’t be a problem if…”?
