• 0 Posts
  • 55 Comments
Joined 2 years ago
Cake day: July 7th, 2023


  • I’ve got two for a pair of cats we adopted at the same time.

    First was Stusy (pronounced stu-c). He was named after a typo: my partner and I were planning a move, and I accidentally misspelled "study". We looked at it and decided it was a good cat name, which it was. He was the smartest cat we ever had. He died a couple of years ago, too young, from what the vet said were likely genetic kidney problems.

    His brother, our scaredy cat, is Big O. At the cattery (our name for the local cat adoption place), he was the one who wanted nothing to do with us, so we clearly had to adopt him. Every time we petted him he vigorously cleaned that spot. I don't remember what we were going to name him; the cattery had named him Big O after the tire place where he was found. He rode about 50 miles, from one small town in Indiana to another, in the engine compartment of someone's car; the driver stopped at Big O to check on the meowing coming from the engine. He was Stusy's best friend, and while he's still easy to startle, he lets us pet him in controlled conditions (usually us lying down and holding very still) and is the goofiest of his siblings when they're playing.

  • maniclucky@lemmy.world to Memes@lemmy.ml · AI bros · 6 months ago

    Absolutely. It’s why asking it for facts is inherently bad. It can’t retain information; it is trained to give output shaped like an answer. It’s pretty good at things that don’t have a specific answer (I’ll never write another cover letter, thank blob).

    Now, if someone had the good sense to add some kind of lookup that injects correct information between the prompt and the output, we’d be cooking with gas. But that’s really human-labor-intensive, and all the tech bros are trying to avoid that.
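The "lookup between the prompt and the output" idea is roughly what retrieval-augmented generation does. A toy sketch, where the fact table, `lookup`, and `generate` are all invented stand-ins rather than any real model or API:

```python
# Hypothetical sketch of retrieval before generation.
# FACTS and generate() are stand-ins, not a real LLM or library.

FACTS = {
    "boiling point of water": "100 C at sea level",
    "speed of light": "299,792,458 m/s",
}

def lookup(prompt):
    """Return any stored facts whose key appears in the prompt."""
    return [value for key, value in FACTS.items() if key in prompt.lower()]

def generate(prompt, context):
    # Stand-in for the model call: a real system would pass the
    # retrieved context alongside the prompt so the model grounds
    # its answer in the looked-up facts instead of guessing.
    if not context:
        return "No retrieved facts; answer would be a guess."
    return "Answer using: " + "; ".join(context)

prompt = "What is the boiling point of water?"
answer = generate(prompt, lookup(prompt))
print(answer)
```

The human-labor-intensive part is building and curating that fact table; the retrieval step itself is cheap.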


  • maniclucky@lemmy.world to Memes@lemmy.ml · AI bros · 6 months ago

    Gradient descent is a common algorithm in machine learning (AI* is a subset of machine learning algorithms). It uses math to measure how wrong an answer is, and in which direction, then adjusts the algorithm’s parameters to be less wrong using that information.
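A minimal sketch of that loop, fitting the slope of `y = w * x` to toy data (the data, `learning_rate`, and step count are illustrative choices, not from any particular library):

```python
# Toy data generated from the "true" relationship y = 3 * x.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]

w = 0.0              # initial guess for the slope
learning_rate = 0.01

for _ in range(500):
    # Gradient of the mean squared error with respect to w:
    # d/dw mean((w*x - y)^2) = mean(2 * (w*x - y) * x).
    # Its sign says which direction is "more wrong".
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    # Step against the gradient, i.e. toward "less wrong".
    w -= learning_rate * grad

print(round(w, 3))  # converges toward the true slope, 3.0
```

Each iteration shrinks the error by a constant factor here, which is why a few hundred steps is plenty for this toy problem.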


  • I’ve no significant opinion of India beyond being anti-Modi, and that’s a product of John Oliver. Most of my engineering team is Indian, and some I like, some I tolerate. And a fear of Indian traffic by reputation alone.

    But you could swap “American” with “Indian” in that first paragraph, change nothing else, and it would be largely (if not entirely) accurate.


  • My grandmother was the county coroner for a while; professionally, she was a pharmacist. In places like that, the job is more “give them a quick kick and say they’re dead” (she never did that) than anything else. To my knowledge she only declared death; she didn’t attribute cause.

    The other part of it is that, for whatever reason, in my county the only arresting authority higher than the sheriff was the coroner. It was her job to serve him with papers when he was being sued and, not that it ever came up, to arrest him when it needed done.

    Weird system.