  • LLMs cannot:

    • Tell fact from fiction
    • Accurately recall data from their training set
    • Count

    LLMs can:

    • Translate
    • Get the general vibe of a text (sentiment analysis; see the sketch below)
    • Generate plausible text

    Semantics aside, they’re very different skills that require different setups to accomplish. Just because counting is an easier task than analysing text for humans doesn’t mean it’s the same for an LLM. You can’t use that as evidence for its inability to do the “harder” tasks.
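
    To make the sentiment-analysis point concrete, here’s a minimal sketch, assuming the Hugging Face transformers library is installed (the example text is invented):

    ```python
    # Minimal sentiment-analysis sketch using a Hugging Face pipeline.
    # Assumes `pip install transformers` plus a backend such as PyTorch.
    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")
    result = classifier("I can't believe how well this worked!")
    print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
    ```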

  • Responding to your first two paragraphs:

    The enjoyability of a piece of art isn’t independent of the creator. I will only speak for myself, since I don’t know other people’s experiences. When you see something that tickles the happy part of your brain, part of that emotional response is knowing that there’s another person out there who probably felt that way and wanted to share those feelings with you. In experiencing those emotions, you also experience a connection with another human being: the knowledge that you’re not alone, that someone else out there has experienced the same thing. I wouldn’t read through the credits, because I don’t care who that person is; I just care that this person existed. When you look at AI-generated work and it feels empty despite the surface beauty, this is the missing piece. It’s the human connection.

  • too

    Funny that you say that. I always get low-end phones, so I don’t expect much performance-wise. I didn’t even know a reasonable mobile web browsing experience was possible for me, because Chrome was always so awfully laggy while also making everything else lag, and I didn’t expect Firefox to be any different. Then I actually tried it, and holy shit, the internet actually works. Not only that, I can’t even tell that I’m browsing on a shitty low-end phone.

  • I don’t understand what you mean by “The Chinese Room has already been surpassed by LLMs”. It’s not a test that can be surpassed. It’s just a thought experiment.

    In any case, you do bring up a good point. Perhaps this understanding is in the organization of the information. If you have a Chinese room where all the query-response pairs are stored in arbitrary order, then maybe you wouldn’t consider that to be understanding. But if the data is organized so that similar queries and responses sit close to each other, and the person in the room doing the answering can make a mistake, such as accidentally copying out the response next to the correct one, and the answer still makes sense, then maybe we can consider this system to have better understanding.
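
    That “organized room” idea can be made concrete as a nearest-neighbour lookup over embeddings. Below is a toy sketch, not anything definitive: it assumes the sentence-transformers library, and the query-response pairs are invented. Because neighbours in the embedding space are semantically related, even an off-by-one “copying mistake” tends to return a topically sensible answer.

    ```python
    # Toy sketch of the "organized Chinese room": store query-response
    # pairs so that similar queries sit close together in embedding space.
    # Assumes `pip install sentence-transformers`; the pairs are made up.
    from sentence_transformers import SentenceTransformer
    import numpy as np

    pairs = [
        ("How do I greet someone?", "Say hello."),
        ("What's a polite greeting?", "Try 'good morning'."),
        ("How do I say goodbye?", "Say farewell."),
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")
    keys = model.encode([q for q, _ in pairs], normalize_embeddings=True)

    def answer(query: str) -> str:
        # Nearest-neighbour lookup by cosine similarity (a dot product,
        # since the embeddings are normalized).
        q_vec = model.encode([query], normalize_embeddings=True)[0]
        best = int(np.argmax(keys @ q_vec))
        return pairs[best][1]

    # An adjacent entry would still be about greetings, so even a
    # "copy the neighbouring response" error stays on topic.
    print(answer("What should I say when meeting a person?"))
    ```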