As far as I understand it, they seem to think that AI models trained on data from affluent Westerners, with unknown biases, can be told to “act like [demographic] and answer these questions.”

It sounds completely bonkers, and not only from a moral perspective: scientifically and statistically, this is basically just making up data and hoping everyone is too impressed by how complicated the data faking is to care.

  • sim_@beehaw.org · 1 year ago

    There’s a very fine line with this. I can see the value in using AI to pilot your study. It may uncover flaws you hadn’t anticipated, help train research staff, or generate future ideas.

    But using AI as the participants to answer your research questions is absurd. Every study faces the question of external validity: do our results generalize outside of this sample? I don’t know how you can truly establish that when your “sample” is a non-sentient bundle of code.