Journalism professor turns NYC chatbot flop into a victory


After New York City’s (NYC) chatbot pilot project recently faltered, a data journalism professor quickly developed a successful version of his own, as reported by The Markup.

Instead of serving as a helpful tool for entrepreneurs, the chatbot pilot became a cautionary tale about the need to monitor AI and use it appropriately.

NYC’s chatbot, available at chat.nyc.gov, was built to give people straightforward advice as they navigated the challenges of launching a business. Its AI-powered interface offered guidance on licenses, legal compliance, and other business requirements in an effort to speed up the process.

The reporting, however, identified a pattern of false information: people who followed the chatbot’s advice could unintentionally break the law. Incorrect recommendations about cash acceptance policies, housing discrimination, and tip appropriation were a few examples.

In response, Mayor Eric Adams acknowledged the bot’s shortcomings while defending its potential to improve.

Jonathan Soma, a professor of data journalism at Columbia University, spotted the story. Using the NYC chatbot as a starting point, he shows in a video how to build a comparable AI-powered chatbot that can scan submitted documents and answer questions based on them.
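The pattern behind that kind of document-grounded chatbot is straightforward: retrieve the passages most relevant to a question, then ask a language model to answer using only those passages. The sketch below illustrates that general idea, not Soma’s actual code; the nyc_business_rules folder is a hypothetical placeholder, TF-IDF retrieval with scikit-learn stands in for whatever retrieval method is used in practice, and the final call to a language model is deliberately omitted.

```python
# Minimal sketch of a document-grounded Q&A pipeline (not Soma's code).
# Assumes a hypothetical folder "nyc_business_rules" of .txt documents.
from pathlib import Path

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def load_documents(folder: str) -> list[str]:
    """Read every .txt file in the (hypothetical) folder of rule documents."""
    return [p.read_text(encoding="utf-8") for p in Path(folder).glob("*.txt")]


def top_passages(question: str, docs: list[str], k: int = 3) -> list[str]:
    """Rank documents by TF-IDF cosine similarity to the question, keep the top k."""
    matrix = TfidfVectorizer(stop_words="english").fit_transform(docs + [question])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).flatten()
    return [docs[i] for i in scores.argsort()[::-1][:k]]


def build_prompt(question: str, passages: list[str]) -> str:
    """Constrain the model to the retrieved text, which reduces (but never removes) hallucination."""
    context = "\n\n".join(passages)
    return (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say you do not know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )


if __name__ == "__main__":
    question = "Do I need a license to sell food from a cart?"
    docs = load_documents("nyc_business_rules")  # hypothetical folder of .txt files
    prompt = build_prompt(question, top_passages(question, docs))
    # The prompt would then be sent to whichever language model you use;
    # that call is omitted here to keep the sketch library-agnostic.
    print(prompt[:500])
```

Even with the retrieval step, the answer is only as good as the passages it pulls in, which is exactly the failure mode Soma goes on to describe.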

Soma also explained the technical side of chatbot development. His reaction to the findings matched the broader skepticism about trusting AI, especially where legal repercussions are possible. “I would claim that artificial intelligence has the perpetual capacity to imagine and have hallucinations. Furthermore, it could be challenging to identify the documents that are truly pertinent if you have a huge collection of them,” he said.

Even though Soma’s own chatbot answered more accurately than NYC’s bot, the exercise still highlighted how hard it is to deploy AI responsibly: the technology has a lot of potential, but guaranteeing its accuracy and dependability remains difficult.

“There is a 100% guarantee that, at some point, there will be some sort of mistake in that chain, and there will be some sort of error introduced, and you’re going to get the wrong answer,” Soma said.

Soma’s conversation moved beyond technicalities to the ethics of AI adoption, notably in journalism. He stressed the vital role of human oversight and the inevitability of errors in AI-generated content. While chatbots are useful for low-stakes work, they should not be used to give professional or legal advice without strict validation procedures. “It must be used for tasks where errors are acceptable,” he said.

Another main theme was how AI is changing the landscape of data journalism; Soma focused on how it can scale up tasks and surface insights. An advocate of a balanced approach that pairs AI capabilities with human judgment and fact-checking, he nevertheless warned against overreliance, saying, “But you can’t do that when you’re building a chatbot and every single conversation has to be meaningful and has to be accurate for the person who is using it.”

“I think the most problematic part is probably explicitly the chatbots because they are so confident in everything that they say,” he added.
