
Opinion | Wrong answer: The MyCity chatbot experiment was never going to work

Mayor Eric Adams and Chief Technology Officer Matthew Fraser release the Adams administration's comprehensive "New York City Artificial Intelligence Action Plan," City Hall. Monday, October 16, 2023. Credit: Ed Reed/Mayoral Photography Office.

Late last year, to significant fanfare, New York City launched a chatbot geared towards helping people start and operate city-based businesses. The bot was meant to dispense advice, and that it did, though not how the city expected.

As has been reported, the bot has been advising people to break the law in myriad ways, from telling employers that they could illegally steal tips from workers to informing landlords that they could discriminate against people with rental assistance, the law be damned. This debacle is especially infuriating not because it was unpredictable, but because it was entirely predictable.

The failure comes down to the basic problem with every single system in AI, a reality that will always exist below the hype: these systems don't know anything, can't look it up and wouldn't know how to synthesize the information if they could. That's because, while they are able to very convincingly simulate thought, it is only a simulation.

An LLM has been trained on huge troves of data to become essentially an incredibly potent version of an e-mail response generator; it predicts the structure of what could be a response to the query, but is not actually capable of thinking about it like a human. This isn't a software issue that can just be patched or easily remedied, because it's not an error. It's the output that the system's very structure produces.

Perhaps the embarrassing liability of having an official government program dispense plainly illegal advice will finally convince policymakers that they've been duped by the buzz around a technology that has never been proven effective for the purposes they're trying to use it for. Recently, Air Canada was ordered to refund a customer who had been given incorrect information by the airline's chatbot, the tribunal rejecting the stupid argument that the bot was meaningfully a separate entity from the company itself.

It's not hard to imagine that happening in New York, except that here it will be a remedy for a worker fired unlawfully or whose wages were stolen, or a would-be tenant discriminatorily denied housing. That's if they bother to bring a case at all; many people whose rights have been violated on the erroneous advice of the MyCity chatbot probably won't even seek redress.

Does this mean that all AI applications are useless or should be shunned by government? Not necessarily; for certain targeted purposes, a specifically tailored AI can cut down on wasteful tedium and help people.

Yet it simply isn't currently possible to craft an LLM that will be able to routinely return accurate responses to the full range and complexity of the city's legal, regulatory and operational functions. LLMs cannot pull laws or court decisions for the simple reason that they do not understand the question you are asking them, only what might look like a plausible enough answer based on reams of similar text they've ingested from millions of online sources.

We're asking an adding machine to do sudoku; it doesn't matter how good and fast it is at adding and subtracting and multiplying, an adding machine cannot solve puzzles. It's time to step back, before real people get hurt by the MyCity chatbot.
