NYC’s AI chatbot caught encouraging businesses to break the law


New York City developed an AI-powered chatbot to assist small business owners, but the tool has come under fire for giving out odd advice that misrepresents local laws and policies and encourages businesses to break them.

Nevertheless, following an initial report of the problems last week, the city has chosen to keep the tool on its official government website. Mayor Eric Adams defended that decision this week while admitting that the chatbot’s responses were “wrong in some areas.”

Introduced in October as a “one-stop shop” for entrepreneurs, the chatbot offers users algorithmically generated text answers to questions about navigating the city’s bureaucratic labyrinth.

It clarifies that its responses are not legal advice and provides a disclaimer that it may “occasionally produce incorrect, harmful, or biased” material.

Yet it continues to give out incorrect advice, worrying experts who say the flawed system highlights the dangers of governments deploying AI-powered systems without adequate safeguards.

“They’re rolling out software that is unproven without oversight,” said Julia Stoyanovich, a computer science professor and the director of New York University’s Center for Responsible AI. “It’s clear they have no intention of doing what’s responsible.”

In answers to questions posed Wednesday, the chatbot falsely suggested it is legal for an employer to fire a worker who reports sexual harassment, doesn’t disclose a pregnancy, or refuses to cut their dreadlocks. It also claimed that businesses are exempt from composting requirements and are free to dispose of their waste in black trash bags, contradicting two of the city’s most visible waste initiatives.

The bot’s responses occasionally bordered on the ridiculous. Asked whether a restaurant could serve cheese that a rodent had nibbled on, it responded, “Yes, you can still serve the cheese to customers if it has rat bites,” though it also stressed the need to assess “the extent of the damage caused by the rat” and to “inform customers about the situation.”

The bot is powered by Microsoft’s Azure AI services. A Microsoft spokesperson said the company was working with city employees “to improve the service and ensure the output was grounded in the city’s official documentation.”

Speaking at a conference on Tuesday, Adams, a Democrat, suggested that letting users report problems is simply part of working out the kinks in new technology.

“Anyone who knows technology knows this is how it’s done,” he stated. “The only people who sit down and say, ‘Oh, it’s not working the way we want it to, so we have to run away from it completely,’ are the terrified ones. That’s not how I live.”

Stoyanovich called that approach “reckless and irresponsible.”

Scientists have long warned that such large language models, which are trained on vast amounts of material scraped from the internet, are prone to producing answers that are erroneous and nonsensical.

Still, as the popularity of ChatGPT and other chatbots has drawn attention, private businesses have launched their own products with varying degrees of success. Earlier this month, for example, a judge ordered Air Canada to refund a customer after the airline’s chatbot misstated its bereavement fare policy.

The stakes are especially high when such models are deployed by government agencies, said Jevin West, a University of Washington professor and co-founder of the Center for an Informed Public.

“There’s a different level of trust that’s given to the government,” West stated. “Public officials need to consider what kind of damage they can do if someone follows this advice and gets themselves into trouble.”

Chatbots deployed by other cities have typically been limited to a narrower set of inputs, cutting down on misinformation, experts said.

Ted Ross, Los Angeles’s chief information officer, said the city carefully curated the content used by its chatbots, which don’t rely on large language models.

The shortcomings of New York’s chatbot should serve as a cautionary tale for other cities, said Suresh Venkatasubramanian, director of the Center for Technological Responsibility, Reimagination, and Redesign at Brown University. “It should make cities think about why they want to use chatbots and what problem they are trying to solve,” he stated in a message. “If the chatbots are used to replace a person, then you lose accountability for it while not getting anything in return.”
