• jordanlund@lemmy.world · 18 days ago

    I wish they had broken it out by AI assistant. The article states:

    “Gemini performed worst with significant issues in 76% of responses, more than double the other assistants, largely due to its poor sourcing performance.”

    But I don’t see that anywhere in the linked PDF of the “full results”.

    This sort of study should also be re-done from time to time to track AI version numbers.

  • SaraTonin@lemmy.world · 18 days ago

    There are a few replies talking about humans misrepresenting the news. This is true, but part of the problem here is that most people understand the concept of bias - even if only to the extent of “my people neutral, your people biased”. This is less true for LLMs. There’s research showing that because LLMs present information authoritatively, people not only tend to trust them but are actually less likely to check the sources the LLM provides than they would be with information presented in other forms.

    And it’s not just news. I’ve seen people seriously argue that fringe pseudo-science is correct because they fed a very leading prompt into a chatbot and got exactly the answer they were looking for.

  • Yerbouti@sh.itjust.works · 18 days ago

    I don’t understand the use people make of AI. I know a lot of professional composers who are like “That’s awesome, AI does the music for me now!” and I’m like, cool, now you only have the boring part of the job to do, since the fun part was done by AI. Creating the music is literally the only fun part; I hate everything around it.

  • danc4498@lemmy.world · 18 days ago

    Makes sense. I have used AI for software development tasks such as manipulating SQL queries and XML files (tedious things) and am always disappointed by how it misinterprets some things. But with those, it’s obvious when a request fails. For something like “the news,” where there is no QA team to point out the defect, errors will be much harder to notice. And when AI starts (or continues) to use AI-generated posts as sources, it will get much worse.
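
    A toy sketch of that asymmetry (the XML string and summary below are invented for illustration, not taken from the article): a botched XML edit trips the parser immediately, while a wrong summary raises nothing.

        # Hypothetical example: why code errors surface but prose errors don't.
        import xml.etree.ElementTree as ET

        ai_edited_xml = "<config><timeout>30</config>"  # mismatched tag, a typical AI slip

        try:
            ET.fromstring(ai_edited_xml)
        except ET.ParseError as err:
            print(f"Caught at parse time: {err}")  # here, the parser is the QA team

        ai_summary = "The study found Gemini performed best."  # quietly inverted claim
        # No parser, no test suite, no exception - the error just flows downstream.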

  • paraphrand@lemmy.world · 18 days ago

    Precision, nuance, and up-to-the-moment contextual understanding are all missing from the “intelligence.”

  • NotMyOldRedditName@lemmy.world · 17 days ago

    I’ve had someone else’s AI summarize some content I created elsewhere, and it got it so wrong that it changed the entire meaning of my original content.

  • AnUnusualRelic@lemmy.world · 18 days ago

    Yet the LLM seems to be what everyone is pushing, because it will supposedly get better. Haven’t we reached the limits of this model, and shouldn’t other types of engines be tried?