It took only nine seconds for a rogue AI coding agent to delete a company’s entire production database and its backups, according to the company’s founder. PocketOS, which sells software that car rental businesses rely on, descended into chaos after its databases were wiped, founder Jeremy Crane said.

The culprit was Cursor, an AI agent powered by Anthropic’s Claude Opus 4.6, one of the AI industry’s flagship models. As more industries embrace AI in an attempt to automate tasks and even replace workers, the chaos at PocketOS is a reminder of what can go wrong.

Crane said customers of PocketOS’s car rental clients were left in the lurch when they arrived to pick up vehicles from businesses that no longer had access to the software that managed reservations and vehicle assignments.

  • Kwakigra@beehaw.org · 10 days ago

    Thanks for specifying a legitimate use-case for this tool. I understand that Google search has been the most valuable programming tool for a very long time, so it makes sense LLMs would be helpful in the same kind of way. Search engine technology is quite different from blockchain or VR in terms of consumer and business demand.

    For my purposes of news and history research, the unreliability of LLMs means I have to check all their claims every single time, which negates their usefulness as an assistant: since I’ll have to examine the references anyway, it’s more time-effective for me to skip the questionable output and do the research myself in the first place. How have you managed the issue of unreliability with the volumes of data you’re dealing with? Is your data less likely to come back unreliable because it’s the kind an LLM is more likely to process correctly?