A software developer and Linux nerd, living in Germany. I’m usually a chill dude but my online persona doesn’t always reflect my true personality. Take what I say with a grain of salt, I usually try to be nice and give good advice, though.

I’m into Free Software, selfhosting, microcontrollers and electronics, freedom, privacy and the usual stuff. And a few select other random things, too.

  • 3 Posts
  • 487 Comments
Joined 11 months ago
cake
Cake day: June 25th, 2024

help-circle

  • Uh, I don’t have a good answer for that, but I’d give them something like Linux Mint anyways. That way they can look up stuff, watch tutorials and don’t have a super niche thing running. Or give them one of the popular gaming distros, if it’s that.

    Idk. Gnome feels very much like Android to me. And KDE follows similar design patterns to Windows. And kids and teenagers tend to figure out all the things they want. If they have the motivation to do so.



  • Tja, wobei ich mir immer nicht so sicher bin ob die AfD-Wähler tatsächlich die AfD in der Regierung haben möchten, damit die dann soetwas wie ihre grandiose Wirtschaftspolitik umsetzen, die DM wiederbeschaffen und den Export durch Nationalismus ersetzen, was ihnen und vielen anderen Millionen von Bürgern hier den Arbeitsplatz kosten wird. Also ich unterhalte mich wirklich selten mit solchen Leuten, aber ich glaube an der Behauptung ist etwas dran, dass die AfD ein bisschen ein Sonderfall ist und viele Leute die nicht unbedingt aus den gleichen Gründen wählen, aus denen Leute hinter anderen Parteien/Regierungen stehen…

    Aber das stimmt wohl. Es sind immer noch circa 8/10 Deutschen, die mit Absicht nicht die AfD wählen… Trotzdem wird sie überall aufgebauscht, ob das nun gesund ist oder nicht. Und ich sehe auch nicht warum irgendwer die im Zusammenhabg mit der CDU sehen möchte… Also es gibt da sicherlich Überschneidungen im Populismus. Letztlich sind die Parteien aber meiner Meinung nach ziemlich fundamental inkompatibel. Und die potenziellen CDU Wähler:innrn die ich so kenne sind sich da auch ziemlich einig. Also ich halte ein Mandat für CDU+AfD auch für ausgemachten Blödsinn.



  • hendrik@palaver.p3x.detoLinux@lemmy.mlIn regard to Hyprland and Fascism
    link
    fedilink
    English
    arrow-up
    5
    arrow-down
    1
    ·
    edit-2
    1 day ago

    And in addition to that: It’s also kind of a big thing that they get an audience. The more people use the projects, the bigger the audience. They’ll get a Discord and people will join because of the project, people will start reading their blog because of the attention via the software… People will maintain and package their software, or use it, or contribute to it… Directly resulting in interactions with the group which develops a project. That’s a direct consequence of the project getting attention. And “promoting” is a way to draw attention.




  • hendrik@palaver.p3x.detoLocalLLaMA@sh.itjust.worksSpecialize LLM
    link
    fedilink
    English
    arrow-up
    1
    ·
    edit-2
    5 days ago

    And as far as I know people do fine-tuning so it picks up on the style of writing and things like that, for example to mimick an author, or specifics of a genre. I’d say to just fetch facts from a pile of text, RAG would be the easier approach. It depends on the use-case, the collection of books, however. Fine-tuning is definitely a thing people do as well.


  • Yeah, thanks but I’ve already tried that. It will write a short amount of text but very quickly fall back to refusal. Both if I do it within the thinking step and also if I do it in the output. This time the alignment doesn’t seem to be slapped on halfheartedly. It’ll probably take some more effort. But I’m sure people will come up with some “uncensored” versions.


  • hendrik@palaver.p3x.detoLocalLLaMA@sh.itjust.worksQwen3 officially released
    link
    fedilink
    English
    arrow-up
    3
    arrow-down
    1
    ·
    edit-2
    10 days ago

    Uh, wow. That 30B A3B runs very fast on CPU alone.

    Sadly it seems to be censored. I always try to make them write some fictional stories, exploring morally reprehensible acts, in order to test this. Or just lewd short-stories. And it straight out refuses immediately… Since it’s a “thinking” model, I went ahead and messed with its thoughts, but that won’t do it either: “I’m sorry, but I can’t comply with that request. I have to follow my guidelines and maintain ethical standards. Let’s talk about something else.”

    Edit: There is a base model available for that one, and it seems okay. It will autocomplete my stories and write a wikipedia article about things the government doesn’t like. I wonder if this is going to help, though. Since all the magic is in the steps after the base model and I don’t know whether there are any datasets available for the community to instruct-tune a thinking model…


  • Yeah you’re right. I didn’t want to write a long essay but I thought about recommending Grok. In my experience, it tries to bullshit people a bit more than other services do. But the tone is different. I found deep within, it has the same bias towards positivity, though. In my opinion it’s just behind a slapped on facade. Ultimately similar to slapping on a prompt onto ChatGPT, just that Musk may have also added that to the fine-tuning step before.

    I think there is two sides to the coin. The AI is the same. Regardless, it’ll tell you like 50% to 99% correct answers and lie to you the other times, since it’s only an AI. If you make it more appeasing to you, you’re more likely to believe both the correct things it generates, but also the lies. It really depends on what you’re doing if this is a good or a bad thing. It’s argualby bad if it phrases misinformation to sound like a Wikipedia article. Might be better to make it sound personal, so once people antropormorphize it, they won’t switch off their brain. But this is a fundamental limitation of today’s AI. It can do both fact and fiction. And it’ll blur the lines. But in order to use it, you can’t simultaneously hate reading it’s output. I also like that we can change the character. I’m just a bit wary of the whole concept. So I try to use it more to spark my creativity and less so to answer my questions about facts. I also have some custom prompts in place so it does it the way I like. Most of the times I’ll tell it something like it’s a professional author and it wants to help me (an amateur) with my texts and ideas. That way it’ll give more opinions rather than try and be factual. And when I use it for coding some tech-demos, I’ll use it as is.


  • I’d have to agree: Don’t ask ChatGPT why it has changed it’s tone. It’s almost for certain, this is a made-up answer and you (and everyone who reads this) will end up stupider than before.

    But ChatGPT always had a tone of speaking. Before that, it sounded very patronizing to me. And it’d always counterbalance everything. Since the early days it always told me, you have to look at this side, but also look at that side. And it’d be critical of my mails and say I can’t be blunt but have to phrase my mail in a nicer way…

    So yeah, the answer is likely known to the scientists/engineers who do the fine-tuning or preference optimization. Companies like OpenAI tune and improve their products all the time. Maybe they found out people don’t like the sometimes patrronizing tone, and now they’re going for something like “Her”. Idk.

    Ultimately, I don’t think this change accomplishes anything. Now it’ll sound more factual. Yet the answers have about the same degree of factuality. They’re just phrased differently. So if you like that better, that’s good. But either way, you’re likely to continue asking it questions, let it do the thinking and become less of an independent thinker yourself. What it said about critical thinking is correct. But it applies to all AI, regardless of it’s tone. You’ll also get those negative effects with your preferred tone of speaking.


  • I’m always a bit unsure about that. Sure AI has a unique perspective on the world, since it has only “seen” it through words. But at the same time these words conceptualize things, there is information and models stored in them and in the way they are arranged. I believe I’ve seen some evidence, that AI has access to the information behind language, when it applies knowledge, transfers concepts… But that’s kind of hard to judge. I mean an obvious example is translation. It knows what a cat or banana is. It picks the correct french word. At the same time it also maintains tone, deals with proverbs, figures of speech… And that was next to impossible with the old machine translation services which only looked at the words. And my impression with doing computer coding or creative writing is, it seems to have some understanding of what it’s doing. Why we do things a certain way and sometimes a different way, and what I want it to do.

    I’m not sure whether I’m being too philosophical with the current state of technology. AI surely isn’t very intelligent. It certainly struggles with the harder concepts. Sometimes it feels like its ability to tell apart fact and fiction is on the level of a 5 year old who just practices lying. With stories, it can’t really hint at things without giving it away openly. The pacing is off all the time. But I think it has conceptualized a lot of things as well. It’ll apply all common story tropes. It loves to do sudden plot twists. And next to tying things up, It’ll also introduce random side stories, new characters and dynamics. Sometimes for a reason, sometimes it just gets off track. And I’ve definitely seen it do suspension and release… Not successful, but I’d say it “knows” more than the words. That makes me think the concepts behind storytelling might actually be somewhere in there. It might just lack the needed intelligence to apply them properly. And maintain the bigger picture of a story, background story, subplots, pacing… I’d say it “knows” (to a certain degree), it’s just utterly unable to juggle the complexity of it. And it hasn’t been trained with what makes a story a good one. I’d guess, that might not be a fundamental limitation of AI, though. But more due to how we feed it award-winning novels next to lame Reddit stories without a clear distinction(?) or preference. And I wouldn’t be surprised if that’s one of the reasons why it doesn’t really have a “feeling” of how to do a good job.

    Concerning OP’s original question… I don’t think that’s part of it. The people doing the training have put in deliberate effort to make AI nice and helpful. As far as I know there’s always at least two main steps in creating large language models. The first one is feeding large quantities or text. The result of that is called a “base model”. Which will be biased in all the ways the learning datasets are. It’ll do all the positivity, negativity, stereotypes, be helpful or unhelpful roughly like people on the internet are, the books and wikipedia, which went in, are. (And that’s already more towards positive.) The second step is to tune it for some application. Like answering questions. That makes it usable. And makes it abide by whatever the creators chose. Which likely includes not being rude or negative to customers. That behaviour gets suppressed. If OP wants it a different way, they probably want a different model, or maybe a base model. Or maybe a community-made fine-tune that has a third step on top to re-align the model with different goals.


  • hendrik@palaver.p3x.detoLocalLLaMA@sh.itjust.worksLess positive model
    link
    fedilink
    English
    arrow-up
    4
    arrow-down
    1
    ·
    edit-2
    17 days ago

    That’s a very common issue with a lot of large language models. You can either pick one with a different personality, (I liked Mistral-Nemo-Instruct for that, since it’s pretty open to just pick up on my tone and go with that). Or you give clear instructions what you expect from it. What really helps is to include example text or dialogue. Every model will pick up on that to some degree.

    But I feel you. I always dislike ChatGPT due to its know-it-all and patronizing tone. Most other models also are deliberately biased. I’ve tried creative writing and most refuse to be negative or they’ll push towards an happy end. They won’t write you a murder mystery novel without constantly lecturing about how murder is wrong. And they can’t stand the tension and want to resolve the murder right away. I believe that’s how they’ve been trained. Especially if there is some preference optimization been done for chatbot applications.

    Utimately, it’s hard to overcome. People want chatbots to be both nice and helpful. That’s why they get deliberately biased toward that. Stories often include common tropes. Like resolving drama and a happy ending. And AI learns a bit from argumentative people on the internet, drama on Reddit etc. But generally that “negativity” gets suppressed so the AI doesn’t turn on somebody’s customers or spews nazi stuff like the early attempts did. And Gemma3 is probably aimed at such commercial applications, it’s instruct-tuned and has “built-in” safety. So I think all of that is opposed to what you want it to do.