Baby Marxist

Currently deprogramming and chewing through the reading list

https://www.mlreadinghub.org/

I like to argue to hone my ideas and concepts. If you keep responding then I will too.

  • 0 Posts
  • 4 Comments
Joined 9 months ago
Cake day: July 26th, 2025

  • This is just a classic case of bad use of the tools provided. Agents are notorious for making shit up, or getting something that’s just like super close, but not quite accurate.

    I bet this dude also probably just uses the same session over and over and over again, which clogs up his context window and makes the model less accurate the longer it goes on.

    This probably could have been prevented if the agent had been forced to show a plan before it tried to do anything. It’s hard to know, because the article is so light on details. You also shouldn’t brazenly trust the thing so much. You shouldn’t just run a command and walk away; you should keep an eye on what it is doing.

    It’s a bit like giving a junior developer a production key and being like “don’t delete production!” and then walking away.

    The way the guy was prompting this agent also leaves a lot to be desired. It’s trained to emulate human thought and speech patterns, and it turns out that when you’re giving instructions, it’s really difficult to figure out what to do from a list of things not to do. If the dude had instead told the agent what to do, how he wanted it to work, and when it needed to bring things to his attention, it would have done a better job. Instead of telling it “don’t guess,” he should have explained that it needed to use whatever tools it has to go look up the documentation and understand the context and scope of the project it’s working on.

    Giving a model the right context to do something is the difference between a model doing something like deleting your production database or your model acting like a magical machine that can get anything done.
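    To make the point about positive instructions concrete, here’s a hypothetical before/after for an agent instructions file. The file contents and rules are made up for illustration; the idea is just the rewrite from a list of don’ts into do-this-then-that plus an explicit escalation rule:

    ```text
    # Before: a pile of don'ts — hard for a model to turn into actions
    Don't guess at APIs. Don't make things up. Don't touch production.

    # After: say what to do, how to work, and when to escalate
    Before writing any code, read the project docs in ./docs and look up
    the relevant library documentation to confirm the API actually exists.
    Show me a step-by-step plan and wait for my approval before running
    any command that modifies state.
    If you are unsure about anything, stop and ask me instead of guessing.
    ```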


  • Those are excellent use cases for AI, but it is also not a magic bullet. It cannot do everything for you, and it can often lead you astray, especially if you’re not willing to fact-check it. It’s a well-known fact that LLMs hallucinate, or straight up lie to you about what they know. So in many niche cases, which is what I do and what we hired this guy to do, it’s often not effective. As often as it hands you a silver bullet, it’s confidently wrong.

    I have seen this dude use AI to say things that are absolutely not true, like claiming that setting a very high UID can balloon a Docker image to an absurd size nearing 500 gigabytes.

    He also tried to use it to lecture me on how the auditors don’t audit our company correctly, how we’re actually doing things completely wrong, how he’s the guy to fix it all, and how it’ll only take him a little bit to train everybody up.

    LLM tools are excellent when treated with respect and when the limitations of the tool are understood, but unfortunately, far too many people believe it is a magic talking box that always knows everything and always has the right solution. 😮‍💨

    I mean, this joker is so ridiculous that he can’t even figure out how to use the AWS CLI correctly or how to set up GitHub “deploy” keys for a repo. We asked him if he was comfortable working with Puppet, or at least capable of figuring it out, and he looked like we’d asked him to touch a hot stove. Did I mention this joker has 15 years of experience doing stuff like this?

    Looking at his code, it reeks of AI, full of anti-patterns I normally only see in purely LLM-generated code.