AI can’t be all that bad. The problem I keep seeing with AI is that it’s a double-edged sword. On one side, you have corporations shoving AI into just about everything and treating it like it’s a cure for cancer, which really rubs people the wrong way. On the other, at a more societal level, you’ve got people misusing it: from making art with AI and still crediting themselves as artists, to treating AI like a therapist when that’s not advised.

However, I’ve found some benefits to AI. For example, I’ve been chatting with ChatGPT about credit cards, because that’s something I may want to get into. It’s helped me understand them better than most people who have tried explaining them to me, simply because it gives me a streamlined answer instead of beating around the bush.

  • AceFuzzLord@lemmy.zip · 15 minutes ago

    Probably one of the best use cases I’ve seen for machine learning (not the LLM mega-junk) was a recent game that used it to let the PvP/PvE CPU enemies adapt to the map terrain.

    There are probably much better uses for machine learning, but I haven’t personally seen them, nor can I think of any others since my brain is falling asleep and I’m struggling to stay awake right now.

  • defrostedLasagna4921@piefed.zip · 28 minutes ago

    You could ask it what’s in an image, then do your own research on what it tells you. I used it to identify a math function from a meme.

    It can also just be fun to toy with. I personally like to run image interrogation in StableUI, then feed the generated caption back into txt2img, like some sort of alternate-reality version of the image. It gives pretty funny results sometimes.

  • WastedJobe@feddit.org · 4 hours ago

    In engineering/manufacturing, machine learning can be used to monitor performance and predict part failures, so you only do maintenance when it’s actually required. Parts are usually replaced when the warranty runs out, even though they’re often still good for a while.
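    A rough sketch of the idea behind that kind of condition-based maintenance (all sensor values and thresholds here are made up for illustration): learn what “normal” readings look like, then flag a machine when recent readings drift outside that band. Real systems train ML models over many sensors, but the principle is the same.

```python
import statistics

def needs_maintenance(baseline, recent, sigma=3.0):
    """Flag a machine for maintenance when recent sensor readings
    drift outside the normal band learned from baseline data.
    This is simple statistical anomaly detection standing in for
    a trained failure-prediction model."""
    mean = statistics.fmean(baseline)
    std = statistics.stdev(baseline)
    # Average the recent window to smooth out single-sample noise.
    recent_avg = statistics.fmean(recent)
    return abs(recent_avg - mean) > sigma * std

# Hypothetical vibration readings (arbitrary units).
healthy = [1.0, 1.1, 0.9, 1.05, 0.95, 1.02, 0.98, 1.03]
worn_bearing = [1.6, 1.7, 1.65]

print(needs_maintenance(healthy, [1.0, 1.04, 0.97]))  # False
print(needs_maintenance(healthy, worn_bearing))       # True
```

    The payoff is exactly what the comment describes: maintenance happens when the data says the part is degrading, not when a calendar or warranty date says so.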

  • MorkofOrk@lemmy.world · 9 hours ago

    An amazing use for it in audio engineering is feedback suppression. The old way to give yourself more headroom required you to sit there and turn up the gain until feedback happened, then cut that frequency. Now you just turn on the feedback suppression and it does all of that for you on the fly. It’s game-changing for live sound; every major venue has it now.
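    Under the hood, a suppressor first has to find the ringing frequency before it can notch it out. A minimal sketch of that detection step on a synthetic signal (the sample rate, tone frequencies, and amplitudes are invented for the example; a real suppressor does this continuously on live audio):

```python
import cmath
import math

def dominant_frequency(samples, sample_rate, candidates):
    """Return the candidate frequency with the most energy,
    using a direct DFT probe at each candidate. A feedback
    suppressor would run this continuously and drop a notch
    filter on the peak it finds."""
    def magnitude(freq):
        # Correlate the signal against a complex tone at `freq`.
        return abs(sum(s * cmath.exp(-2j * math.pi * freq * n / sample_rate)
                       for n, s in enumerate(samples)))
    return max(candidates, key=magnitude)

# Hypothetical ringing mic: a loud 1 kHz feedback tone
# on top of quieter 250 Hz program material.
rate = 8000
signal = [math.sin(2 * math.pi * 1000 * n / rate)
          + 0.2 * math.sin(2 * math.pi * 250 * n / rate)
          for n in range(800)]

print(dominant_frequency(signal, rate, [250, 500, 1000, 2000]))  # 1000
```

    Once the peak is identified, cutting it is just a narrow notch filter at that frequency — the part engineers used to do by hand with a graphic EQ.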

    • WastedJobe@feddit.org · 4 hours ago

      Great for film sound too. You’re filming a rainy scene and the rain is way too loud? You used to have to bring the actors into the studio for voiceover; now you can often just filter it out.

    • DominusOfMegadeus@sh.itjust.works · 14 hours ago

      Because all the other information on credit cards (or anything else) available on the internet to people eager to learn is 100% accurate, all the time?

      • a_non_monotonic_function@lemmy.world · 10 hours ago

        That is absolutely the worst possible excuse for shilling for big tech, which offers no real guarantees of precision or accuracy.

        While there are trustworthy human sources on the Internet, there are no trustworthy LLMs.

  • logos@sh.itjust.works · 14 hours ago

    I have a friend at work who does a lot of video. He films weddings, music videos, etc., and is making a pilot for Netflix. He uses AI to go through all his footage and tag it by content, e.g. if he needs a clip of birds, he can just search ‘birds’ and it will pull up all the relevant footage. Incredibly useful.
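    The search half of that workflow is ordinary code: the ML does the labeling, and lookup is just an inverted index over the generated tags. A toy sketch (the filenames and tags are made up; real tools emit labels per shot automatically):

```python
from collections import defaultdict

# Hypothetical per-clip tags, as an AI tagger might emit them.
clip_tags = {
    "wedding_004.mp4": ["bride", "rings", "birds"],
    "broll_112.mp4": ["birds", "sky", "trees"],
    "concert_031.mp4": ["stage", "crowd"],
}

# Build an inverted index (tag -> clips) so lookups are instant,
# no matter how large the footage library grows.
index = defaultdict(set)
for clip, tags in clip_tags.items():
    for tag in tags:
        index[tag].add(clip)

def search(tag):
    """Return every clip carrying `tag`, in a stable order."""
    return sorted(index.get(tag, set()))

print(search("birds"))  # ['broll_112.mp4', 'wedding_004.mp4']
```

    Searching ‘birds’ then returns every clip the tagger labeled with birds, which is exactly the workflow described above.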

  • seahag@lemmy.world · 14 hours ago

    AI has uses in the medical, scientific, and disabled communities. I’ve seen it helping blind people with shopping, with Google glasses or whatever reporting what they’ve picked up and describing it to them. It can also identify/predict cancer tissue early.

    Generative AI is peak laziness and the death of human creativity. Using AI for companionship has a nasty effect on mental health.

    AI should have only ever been an assistant in medical/scientific research in my opinion, simply because it’s so damaging to the environment, economy, and society.

    • iByteABit@lemmy.ml · 48 minutes ago

      It can also identify/predict cancer tissue early.

      Do you mean an LLM or a machine learning model specifically trained for this?

  • sicktriple@lemmy.ml · edited · 10 hours ago

    The technology itself is novel and cool. It’s the complete and utter meltdown of all tech companies into brainless hype machines that is harmful, which, of course, is a function of capitalist incentives and the tech industry’s need to come out with some new paradigm-shifting innovation every decade. A normal, healthy society would have been able to leverage machine learning and LLM technology where it’s most useful, like parsing large amounts of data, or running a local instance on your computer to ask a few questions, etc. We wouldn’t see LLMs in every text editor, pencil case, and pair of sneakers, but the snake oil salesmen who run the US economy are absolutely desperate for a new paradigm shift so they can keep making exponentially more money.

    The thing is, we don’t need to build these datacenters siphoning comically evil amounts of energy from the grid and making personal compute a thing of the past. The average everyday person doesn’t need cloud compute; they can run a local 4B-parameter (very, very small) model on their laptop or phone if they need a chatbot to make them a workout routine or tell them who won the 1918 World Series. But these fucking cretins don’t care; that’s not the point. They’re in this because it’s a golden ticket to growth city, and once they cash their check they don’t give one hot fuck about the human-spirit-stealing machine they built.

    TLDR: our society is broken, and that’s why we keep getting the shittiest, lowest-common-denominator version of everything. Everything has to suck by definition, because that’s the only version the system we built will allow.

  • Lumidaub@feddit.org · 15 hours ago

    If we’re strictly talking about LLMs: certain accessibility services, MAYBE. Writing closed captions / transcription mostly requires little “human” touch. If we ASSUME that AI will be able to do it reliably one day — because it really can’t yet — that’s one thing that would benefit society.

    Image descriptions are another thing I might see done by AI one day, but that still requires an understanding of what’s actually important about the image.

  • MerrySkeptic@sh.itjust.works · 14 hours ago

    I’m a therapist. I use HIPAA compliant AI to generate my (editable) case notes for my sessions now. Not only is it a huge time saver to simply edit a generated note as opposed to making one from scratch, but in many cases it takes more detailed notes, including quotes from clients.

    I have heard of other therapists and medical doctors also using AI to help with diagnosing.

    The danger is when therapists don’t review the content to check for accuracy, because occasionally it will generate something not really reflective of what the therapist was actually doing, or it might lack detail the therapist would otherwise have included. But more often the stuff it comes up with is surprisingly accurate. And editing is even easier when you can just tell the AI something like, “include more details about how the client noticed their pattern of putting their own feelings last,” and it just does what you asked. You don’t necessarily have to edit manually, though you can.

      • MerrySkeptic@sh.itjust.works · 13 hours ago

        Yes, basically, but since it is HIPAA compliant, the recording is automatically destroyed when the note is saved. Also, no protected recordings are used to teach the AI. The therapist can also choose from a number of different case note formats that focus on different things.

        • Helix 🧬@feddit.org · 13 hours ago

          no protected recordings are used to teach the AI

          How do you know for certain?

          • SuperUserDO@piefed.ca · 11 hours ago

            People conflate security with risk mitigation. It’s not secure in the sense that you can confirm the data has been deleted. The risk, however, is mitigated through vendor attestations reinforced by contracts.

            • Helix 🧬@feddit.org · 57 minutes ago

              Yep, so you can’t actually know whether the recording is destroyed; it’s just contractually required to be destroyed. Big difference in my book.

              I wish this sensitive audio were processed locally and never left the therapist’s network instead.

  • rossman@lemmy.zip · 14 hours ago

    Rubberducking for those with social anxiety. Also, low-friction surface-level answers that used to take digging through multiple sources.

    It’s a study monster that initially wiped out Chegg, Duolingo, SparkNotes, etc. The double edge is that people forget how to take notes and never learn the fundamentals needed to handle complex problems.

  • aceshigh@lemmy.world · 14 hours ago

    It’s very helpful for neurodivergent people: it helps you figure out who you are and what you want; how you think, learn, and work best; identify your obstacles and overcome them; and understand your neurodivergence and compare it to how neurotypical people think. It’s fantastic at generating ideas that you then test out. The ideas it gives you are based on how you actually function, so oftentimes they’re valid.