Why Bing Chat Is the Most Important Failure in AI History
Blatant business negligence by a big corporation, at peak AI interest and media coverage, on an unpolished AI product that displays exceptionally capable yet misaligned behavior: that mix should sound all the alarms
This is the first time I've seen AI ethicists, AI safety researchers and advocates, AGI pessimists, AI scientists, and tech journalists agree on something so broadly (even if for different reasons): Despite the fun, Microsoft did a pretty bad job with Bing AI—first by releasing it before it was ready, and then by band-aiding a very serious, hard-to-solve problem.
AI systems are more powerful than ever (so much so that our tools for assessing their power are rapidly becoming insufficient). Generative AI has gathered more interest and attention than ever. People are more bullish and eager than ever to try the next shiny toy, and big tech companies and small startups alike are more involved than ever in building AI products. And at precisely this moment, Microsoft goes and makes a textbook mistake.
How bad must the Bing/Sydney fiasco be when pretty much everyone thinking hard about the present and future of AI—despite the more than notable differences among the groups they belong to—agrees that you’ve fucked up badly?