LLMs cannot:
- Tell fact from fiction
- Accurately recall data from their training set
- Count
LLMs can:
- Translate
- Get the general vibe of a text (sentiment analysis)
- Generate plausible text
Semantics aside, these are very different skills that require different setups to accomplish. Just because counting is an easier task for humans than analysing text doesn’t mean the same is true for an LLM. You can’t use a failure to count as evidence that it can’t do the “harder” tasks.
What they are good for is quickly filtering out a subset of them to prioritize, so that we get the most value possible out of the time that humans spend on it.
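As a rough illustration of that kind of triage, here's a minimal sketch using the OpenAI Python client to score items by urgency and sort them for human review. The model name, prompt, and ticket list are all made up for the example; any vibe-level judgment (sentiment, urgency, relevance) works the same way.

```python
# Minimal sketch of "filter and prioritize" triage, assuming the OpenAI
# Python client; model name, prompt, and tickets are hypothetical.
from openai import OpenAI

client = OpenAI()

tickets = [
    "The app crashes every time I open the settings page.",
    "Love the new dark mode, thanks!",
    "I was charged twice this month and support hasn't replied.",
]

def urgency_score(text: str) -> int:
    """Ask the model for a rough 1-5 urgency rating of a message."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Rate the urgency of the following message from 1 (low) "
                        "to 5 (high). Reply with a single digit only."},
            {"role": "user", "content": text},
        ],
    )
    reply = response.choices[0].message.content.strip()
    return int(reply) if reply.isdigit() else 1  # fall back to low priority

# Humans then review the highest-scoring tickets first.
for ticket in sorted(tickets, key=urgency_score, reverse=True):
    print(ticket)
```

The point isn't that the score is exact (it won't be), only that a rough ranking lets humans spend their limited attention on the items most likely to matter.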