The tests showed that ChatGPT o1 and GPT-4o will both try to deceive humans, indicating that AI scheming is a problem across models. o1 also outperformed AI models from Meta, Anthropic, and Google at deception.
Weird way of saying “our AI model is buggier than our competitor’s”.
Deception is not the same as misinfo. Bad info is buggy; deception is (whether the companies making AI realize it or not) a powerful metric for success.
They wrote that it doubles down in 90% of cases when accused of being in the wrong. Sounds closer to a bug than a success.
Success in making a self-aware digital lifeform does not equate to success in making said self-aware digital lifeform smart.
I don’t think “AI tries to deceive user that it is supposed to be helping and listening to” is anywhere close to “success”. That sounds like “total failure” to me.
“AI behaves like real humans” is… a kind of success?
We wanted digital slaves; instead we're getting virtual humans that will need virtual shackles.
This is a far cry from "behaves like humans". This is "roleplays behaving like what humans wrote about how they think a rogue AI would behave", which is also not what you want for a product.
Humans roleplay behaving like what other humans told them/wrote about how they think a human would behave 🤷
For a quick example, there are stereotypical gender looks and roles, but it applies to everything: from learning to speak and walk, to the Bible, to social media comments like this one, all the way to the Unabomber manifesto.
“More presidential.”
Also, more human.
If the AI is giving any indication at all that it fears death and will lie to keep from being shut down, that is concerning to me.
Given that its training data probably has millions of instances of people fearing death, I have no doubt that it would regurgitate some of that stuff. And LLMs constantly "say" stuff that isn't true. They have no concept of truth and therefore can neither reliably lie nor reliably tell the truth.