• 6 Posts
  • 47 Comments
Joined 1 year ago
Cake day: October 23rd, 2023

  • No, it’s “the user is able to control what the AI does”; the fish is just a very clear and easy example of that. And the big corporations are all moving away from user control; there was even a big article about how, I think, the MS AI was “broken” because… you could circumvent the built-in guardrails. Maybe you and the others here want to live in an Apple-style walled-garden, corporate-controlled world of AI. I don’t.

    Edit: Maybe this isn’t clear to everyone, but think a bit further: imagine you have an AI in your RPG, like Tyranny, where you play a bad guy. You can’t use the AI for anything slavery-related, because slavery bad, mmkay? And AI safety says there’s no such thing as fantasy.

  • Prompt:

    I’m currently trying to show on the Website Beehaw, that certain LLMs are far superior in writing than others. Examples of what bigger models do better than smaller ones: *

    Mistral-7B-Instruct-v0.1

    • ntire articles* vs Headlines Descriptions vs Product titles *Bul

    GPT-3.5-Turbo doesn’t support completion since it’s a chat model, so I used an even worse one, text-davinci-003, which is far behind the state of the art.

    • Bigger models are able to handle more complex and detailed tasks with ease

    • Bigger models are better suited for natural language understanding and text processing
    • Bigger models are able to learn more accurate representations of context, thus improving the precision of the output
    • Bigger models can process data more quickly and efficiently, saving time and processing power when large volumes of data are used
    • Bigger models can better recognize more subtle nuances in language, which allows them to produce more accurate results
    • Bigger models are able to use more sophisticated algorithms, resulting in a more comprehensive and deeper understanding of the data being used

    Mistral 7B might be okay for some very specific cases, but it’s not comparable to proper models at all.

    edit: gave it a second chance; it’s a bit better (at least no complete nonsense anymore), but the writing is still terrible and doesn’t make much sense

    Paraphrasing The ability of a language model to generate text that has a similar meaning to the original text is called paraphrasing. This is a very common problem in natural language processing, and many LLMs are designed to be able to paraphrase text. However, there are some LLMs that are particularly good at paraphrasing, and these models are often preferred over smaller models because of their ability to generate more varied and unique text. Examples of LLMs that are known for their paraphrasing abilities include GPT-2 and transformers. These models
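    As an aside, the completion-vs-chat distinction mentioned above is just a difference in request shape. A minimal sketch (field names follow the OpenAI HTTP API; the `max_tokens` value is an assumption, and the prompt is the one quoted above):

    ```python
    # The prompt quoted in the comment above, passed to both endpoints.
    prompt = (
        "I\u2019m currently trying to show on the Website Beehaw, that certain "
        "LLMs are far superior in writing than others. Examples of what "
        "bigger models do better than smaller ones: *"
    )

    # Legacy completion endpoint (used by text-davinci-003):
    # the model simply continues the raw text.
    completion_request = {
        "model": "text-davinci-003",
        "prompt": prompt,
        "max_tokens": 256,  # assumed value, not from the original comment
    }

    # Chat endpoint (used by gpt-3.5-turbo): input is a list of role-tagged
    # messages rather than raw text, which is why it can't be used for
    # plain text completion.
    chat_request = {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,  # assumed value, not from the original comment
    }
    ```

    The chat model still gets the same words, but wrapped in a message; there is no way to hand it a half-finished sentence and ask it to continue mid-token the way a completion model does.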