• LostWanderer@lemmynsfw.com
    7 months ago

    I think it’s due to a combination of the tech still being relatively young (though it’s made leaps and bounds) and its thoughtless hallucinations passing as valid answers. If the training data is poisoned by disinformation or misinformation, any output is useless at best and harmful at worst. The quality of an LLM’s results depends entirely on the people in charge of creating it and on the sources of its data. Having written that out, I realize I mistrust the people in control of LLM development, because it’s so easy to implement this tech incorrectly and for those in charge to be completely irresponsible. And since the techbros behind this latest push to market LLMs as AI are so gung-ho about it, the guard rails have been pushed aside. That makes it all the easier for my fears to become manifest.

    Once again, what Apple is likely trying to do with its implementation of LLMs sounds all well and good. However, I can’t help but wonder how terribly wrong it could all go.