SUPERAGENCY

Hoffman, the co-founder of LinkedIn, and Beato, a tech and culture writer, aim to dispel the public’s concerns about ceding control to AI systems and to build trust in AI companies and their methods by showing “what could possibly go right” in AI development. Arguing that industry regulation is undemocratic and inhibits progress, the authors promote industry-friendly ideas such as permissionless innovation, iterative development, and risk tolerance, and they take issue with AI “Gloomers,” who favor official oversight. They also examine the historical context of technological adoption, using examples like the automobile, the power loom, and the printing press to illustrate how new technologies can transform societies. However, the authors don’t entirely prove their case. They frequently draw comparisons that oversimplify a complex issue, as when they write: “Regulation is one way we try to compel certainty, but no regulation can completely eliminate the risk of some unfortunate thing happening.” They liken the regulation of the large language models (LLMs) behind AI to laws against robbery and to professional licensing for doctors and lawyers: “Laws that make robbery a crime aren’t a guarantee that you won’t ever get mugged—they’re simply a policy designed to reduce that possibility.” But these are vastly different domains, with distinct risks and regulatory challenges. The writing also suffers from logical fallacies: hyperbole, hasty generalizations, and false dichotomies. At times the book reads as if it were written by AI: the arguments sound plausible but may reflect a biased feedback loop arising from the “endless conversations…with Claude, ChatGPT, and Gemini” that Beato acknowledges having had “while drafting this book.” These problems render the book largely sophomoric.
