Emotion artificial intelligence detects and predicts how someone is feeling from biological signals such as vocal tone, facial expressions, and data from wearable devices, as well as from text and patterns of computer use. It is already being deployed in workplaces and in hiring. Loss of privacy is just the beginning: workers worry about biased AI and about having to perform the ‘right’ expressions and body language for the algorithms.
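As a rough illustration of how such a system might fuse signal channels into a single judgment, here is a minimal hypothetical sketch in Python. The feature names, weights, and threshold are all invented for illustration; real products use trained statistical models, not hand-set weights:

```python
# Hypothetical sketch of multimodal emotion scoring. Every feature name
# and weight below is invented; this only illustrates the general idea
# of fusing several surveillance channels into one score.

def engagement_score(signals):
    """Combine normalized per-channel readings (0.0-1.0) into one score."""
    weights = {
        "vocal_tone": 0.3,         # pitch variation, vocal energy
        "facial_expression": 0.3,  # e.g. smile probability from a camera
        "wearable": 0.2,           # e.g. heart-rate variability proxy
        "text_sentiment": 0.1,     # sentiment of written messages
        "input_activity": 0.1,     # keyboard/mouse cadence
    }
    # Missing channels default to 0.0, which drags the score down.
    score = sum(weights[k] * signals.get(k, 0.0) for k in weights)
    return round(score, 3)

def flag_worker(signals, threshold=0.5):
    """Flag anyone whose fused score falls below the threshold."""
    return engagement_score(signals) < threshold
```

Under this kind of scoring, a speaker with flat vocal delivery and a low smile probability gets flagged regardless of actual job performance, which is exactly the bias the commenters below describe.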

  • Apathy Tree@lemmy.dbzer0.com · 66 points · edited 10 months ago

    This would absolutely flag me for something. I tend to have flat delivery, low pitch, avoid eye contact, etc. and when combined with other metrics, could easily flag me as not being a happy enough camper.

    I mean don’t get me wrong, I’m never going to be happy to be working, but if I showed up that day, I’m also in a good enough headspace to do my job… and if you want to fire me for that… for having stuff going on and not faking vocal patterns…

    This is why I don’t want to work anymore. It’s gotten so invasive and fraught if you happen to be anything but a happy bubbly neurotypical fake. And that’s wildly stressful. I’m not a machine, and refuse to be treated like one. If that means I have to die in poverty, well, dump me in the woods, I guess.

    This shit should never be legal.

        • The Doctor@beehaw.org · 7 points · 10 months ago

          “Our smart securicams don’t trust you” is the new “You’re not a good culture fit.”

      • joelfromaus@aussie.zone · 3 points · 10 months ago

        Bring in Universal Basic Income. Introduce emotion tracking as job KPI. Fire me because I don’t emote per LLM datasets. Live comfortably unemployed.

        Best dystopian outcome. A guy can dream, right?

  • Lilith@beehaw.org · 44 points · edited 10 months ago

    This feels like the AI equivalent of men telling female workers to smile more. I’m totally sure that bias wasn’t cooked into these algorithms. Honestly, how is this not profiling of neurodivergent individuals?

    • intensely_human@lemm.ee · 14 points · 10 months ago

      It is. The only reason I, an autistic man, can feed myself is because at least some jobs are defined in terms of measurable output. As soon as a human is making a personal judgment about me, they see that I’m like an alien acting human, and they find a way to fire me.

      As an Uber driver, or any other job where success is not based on my boss’s judgment, I kick ass.

      People have no fucking ability to stand by any of the “diversity” crap they preach. Like, maybe if diversity is so important to you, the fact that my voice sounds slightly tighter than usual one day shouldn’t result in me getting “Does not meet expectations” on my review.

      Can you tell I’m a little bitter about this?

      Now these kids are trying to organize Uber drivers into some kind of union.

      Please no! I only succeed because it’s gig work, because it’s independent contractor stuff. As soon as my benefits become codified, it becomes an employee situation, and I get put under the neurotypical microscope again.

      I cannot survive there, and I don’t want to live on state aid. Free money is not a substitute for work.

      • wathek@discuss.online · 2 points · 10 months ago

        Have you tried working for a small-to-mid-size company? I got the same vibe in big companies, but now I’m in a company of 50 people and they just do not care how weird I am. There are no middle managers trying to justify their existence; as long as you’re doing your best, you’re good. I’m sure that doesn’t apply to all small companies, but I’d certainly keep it in mind for the future.

        • intensely_human@lemm.ee · 1 point · 10 months ago

          I have tried everything. Uber driver (i.e. a job without a boss) is the only thing where I can succeed.

          I’ve worked at companies of every size from 3 to 10,000. My personality creates friction no matter how hard I try to fit in.

          I’ve done therapy, ayahuasca ceremonies, yoga, zen, men’s work, neurofeedback training, rolfing, adderall, anxiety meds, microdosing LSD and psilocybin, low carb, keto, raw diet, kung fu, alcohol, marijuana, polyphasic sleep, you name it.

          I’m 41 years old. My ability to adapt is declining. There is one little puddle where this fish can swim, and these busybody kids are trying to turn it into a clone of every other dirt pile out there.

          I want my independent contractor gig work to just stay as it is. I just want these “Let’s break some eggs and make a big omelet for everybody!” kids to slow their fucking roll and have a little humility instead of trying to save everyone by replacing dignified autonomy with a comfortable spot under Momma’s wing.

          • wathek@discuss.online · 2 points · 10 months ago

            Hmm, well I can’t really speak to any of that. But for the greater good, unions are a good thing. I understand that it makes things difficult for you, though.

            I’d throw the obvious suggestions at you, but it’s kinda hard to get an idea of what could possibly help without knowing you.

            I do know that I’ve had to change my attitude a few times in life to get by, though. I don’t think autism should be used as an excuse not to do the hard thing everyone has to do (even if it’s harder for us).

            Right now, the only advice I can give is to try to channel that resentment into motivation to improve yourself. Trying and failing is so much more valuable than giving up and being angry about it.

            But yeah, I do have it easier than most, so maybe it’s not my place to say things like that. I do wish you the best, though.

            I do feel the same way about things being easier alone, though. I would be much happier and more productive doing my own thing. I have a ton of software projects I work on, and some even make a bit of money, but running a business seems scary since my administration skills are shit and customers are scary.

    • FaceDeer@fedia.io · 7 points · 10 months ago

      There are a lot of lonely people without social support groups or who otherwise may not be willing or able to seek help when they need it. Having an AI that is in a position to go “hey, are you alright?” could be a boon for those folks.

      There are also situations where a worker could be a problem or even a danger to their co-workers, and having an AI that’s able to pay attention and potentially intervene in those situations could help prevent trauma from happening in the first place.

      I’m not saying this is what it’ll be used for, just answering your question about how it could be viewed in a non-dystopian way.

      • DarkThoughts@fedia.io · 18 points · 10 months ago

        > Having an AI that is in a position to go “hey, are you alright?” could be a boon for those folks.

        Oh, thanks, I’m cured. Definitely well worth the constant breach of my privacy.

        • FaceDeer@fedia.io · 2 points · 10 months ago

          Is that not the first step toward providing aid? Would you rather the AI simply issue a prescription or something?

          Anyway, as I said, I’m not saying this is how it goes. I’m just presenting a view that’s non-dystopian, as was explicitly asked for. The AI could easily be operating under rules that would prevent it from telling anyone else of the trouble it had detected until you give it permission, if that would satisfy your privacy concerns.
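
Mechanically, a consent rule like the one described here is easy to express in code; whether any vendor would actually ship and honor it is the real question. A hypothetical sketch (the class and method names are invented, and this reflects no real product):

```python
# Hypothetical consent gate: detected concerns stay visible only to the
# worker until they explicitly release each one. This is not a real
# product; it only shows that the rule described above is implementable.

class ConsentGatedMonitor:
    def __init__(self):
        self._pending = []  # concerns visible only to the worker
        self._shared = []   # concerns the worker has released

    def detect(self, concern: str):
        """Record a concern privately; no one else is notified."""
        self._pending.append(concern)

    def ask_worker(self):
        """What the system shows the worker: 'hey, are you alright?'"""
        return list(self._pending)

    def grant_permission(self, concern: str):
        """Only an explicit worker action releases a concern."""
        if concern in self._pending:
            self._pending.remove(concern)
            self._shared.append(concern)

    def visible_to_employer(self):
        """Everyone else sees only what the worker released."""
        return list(self._shared)
```

The gate itself is trivial; the privacy objection in the replies is about trusting the operator not to bypass it.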

          • DarkThoughts@fedia.io · 7 points · 10 months ago

            I’d rather not have an “AI” invade my privacy in general.

            > The AI could easily be operating under rules that would prevent it from telling anyone else of the trouble it had detected until you give it permission, if that would satisfy your privacy concerns.

            What? That’s not how those “AIs” work at all. lol

            • FaceDeer@fedia.io · 3 points · 10 months ago

              I’m not talking about any specific currently-existing AI, I’m talking about a hypothetical one. It is indeed possible to set up an AI in such a way that it wouldn’t tell anyone else what’s going on. It’s just a computer program, it can be set up however one wants it to be set up.

  • PonyOfWar@pawb.social · 26 points · 10 months ago

    I’m glad to live in a place where that kind of surveillance is already illegal. I recently read that in some places, it’s already commonplace to track every single keystroke and mouse click on workers’ PCs. That’s bad enough even without putting AI and facial recognition into the mix. Truly dystopian.

      • 🇰 🌀 🇱 🇦 🇳 🇦 🇰 ℹ️@yiffit.net · 4 points · edited 10 months ago

        I’m the enemy. Because I like to think, I like to read. I’m into freedom of speech, the freedom of choice. I’m the kind of guy who likes to sit in a greasy spoon and wonder - “Gee, should I have the T-bone steak or the jumbo rack of BBQ ribs with the side order of gravy fries?” I want high cholesterol. I wanna eat bacon and butter and buckets of cheese, okay? I wanna smoke a Cuban cigar the size of Cincinnati in the non-smoking section. I wanna run through the streets naked with green Jell-O all over my body reading Playboy magazine. Why? Because I suddenly might feel the need to, okay, pal?

  • farsinuce@feddit.dk · 24 points · edited 10 months ago

    Interesting timing. The EU has just passed the Artificial Intelligence Act, setting a global precedent for the regulation of AI technologies.

    A quick rundown of what it entails and why it might matter in the US:

    What is it?

    • The EU AI Act is a comprehensive set of rules aimed at ensuring AI systems are developed and used ethically, with respect for human rights and safety.
    • The Act targets high-risk AI applications, including those in employment, healthcare, and policing, requiring strict compliance with transparency, data governance, and non-discrimination.

    Key Takeaways:

    • Prohibited Practices: Certain uses of AI, like behavioral manipulation or unfair surveillance, are outright banned.
    • High-Risk Regulation: AI systems with significant implications for people’s rights must undergo rigorous assessments.
    • Transparency and Accountability: AI providers must be transparent about how their systems work, particularly when processing personal data.

    Why Does This Matter in the US?

    • Brussels Effect: Similar to how GDPR set a new global standard for data protection, the EU AI Act could influence international norms and practices around AI, pushing companies worldwide to adopt higher standards.
    • Cross-Border Impact: Many US companies operate in the EU and will need to comply with these regulations, which might lead them to apply the same standards globally.
    • Potential for US Legislation: The EU’s move could catalyze similar regulatory efforts in the US, promoting a broader discussion on the ethical use of AI technologies.

    Emotion-tracking AI is covered:

    Banned applications: The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behaviour or exploits people’s vulnerabilities will also be forbidden.



    • DarkThoughts@fedia.io · 8 points · 10 months ago

      Definitely a good start. Surveillance (or ““tracking””) is one of those areas where ““AI”” is actually dangerous, unlike some of the more overblown topics in the media.

      • farsinuce@feddit.dk · 10 points · 10 months ago

        I spent the better part of 45 minutes writing and revising my comment. So thank you sincerely for the praise; English is not my first language.

        • Melmi@lemmy.blahaj.zone · 2 points · 10 months ago

          If you wrote this yourself, that’s even more ironic, because you used the same format that ChatGPT likes to spit out. Humans influence ChatGPT -> ChatGPT influences humans. Everything’s come full circle.

          I ask though because on your profile you’ve used ChatGPT to write comments before.

  • kbal@fedia.io · 14 points · 10 months ago

    At last, the surveillance cameras will know it when I give them the finger.

  • 👍Maximum Derek👍@discuss.tchncs.de · 10 points · 10 months ago

    “Sentiment analysis” has been creeping into interactive voice response (IVR) and customer relationship management (CRM) systems for years now. That’s been creepy enough on its own; I don’t need to be constantly wondering whether my work computer is trying to read my emotions.