  • I’m not claiming I “fixed” bullshitting. I said I was TIRED of bullshit.

    So, the claim I’m making is: I made bullshit visible and bounded.

    The problem I’m solving isn’t “LLMs sometimes get things wrong.” That’s unsolvable AFAIK. What I’m solving for is “LLMs get things wrong in ways that are opaque and untraceable”.

    That’s solvable. That’s what hashes get you. Attribution, clear fail states and auditability. YOU still have to check sources if you care about correctness.

    The difference is - YOU are no longer checking a moving target or a black box. You’re checking a frozen, reproducible input.

    That’s… not how any of this works…

Please don’t teach me to suck lemons. I have very strict parameters for fail states. When I say three strikes and you’re out, I do mean three strikes and you’re out. Quants ain’t quants, and models ain’t models. I am very particular in what I run, how I run it and what I tolerate.


  • Oh boy. So hallucination will occur here, and all further retrievals will be deterministically poisoned?

    Huh? That is the literal opposite of what I said. Like, diametrically opposite.

    Let me try this a different way.

    Hallucination in SUMM doesn’t “poison” the KB, because SUMMs are not authoritative facts, they’re derived artifacts with provenance. They’re explicitly marked as model output tied to a specific source hash. Two key mechanics that stop the cascade you’re describing:

    1. SUMM is not a “source of truth”

    The source of truth is still the original document, not the summary. The summary is just a compressed view of it. That’s why it carries a SHA of the original file. If a SUMM looks wrong, you can:

a) trace it back to the exact document version
b) regenerate it
c) discard it
d) read the original doc yourself and manually curate it.

    Nothing is “silently accepted” as ground truth.

2. Promotion is manual, not automatic

    The dangerous step would be: model output -> auto-ingest into long-term knowledge.

    That’s explicitly not how this works.

The flow is: Attach KB -> SUMM -> human reviews -> OK, move to Vault -> Mentats runs against that

    Don’t like a SUMM? Don’t push it into the vault. There’s a gate between “model said a thing” and “system treats this as curated knowledge.” That’s you - the human. Don’t GI and it won’t GO.
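If it helps to see those two mechanics as code, here’s a minimal sketch (Python; the function names and field layout are my illustration, not the actual implementation):

```python
import hashlib
import json
from pathlib import Path

def make_summ(source_path: str, summary_text: str, model_name: str) -> dict:
    """Mechanic 1: a SUMM is a derived artifact with provenance, never a source of truth."""
    source_bytes = Path(source_path).read_bytes()
    return {
        "type": "SUMM",                   # explicitly marked as model output
        "source_path": source_path,       # which document it was derived from
        "source_sha256": hashlib.sha256(source_bytes).hexdigest(),  # the exact bytes it summarizes
        "model": model_name,
        "summary": summary_text,          # the compressed view itself
    }

def promote_to_vault(summ: dict, vault_dir: str, approved: bool) -> bool:
    """Mechanic 2: promotion is manual. No human sign-off, no curated knowledge."""
    if not approved:
        return False                      # model said a thing; it never becomes "knowledge"
    out = Path(vault_dir) / f"{summ['source_sha256']}.json"
    out.write_text(json.dumps(summ, indent=2))
    return True

# Attach KB -> make_summ(...) -> human reads it -> promote_to_vault(summ, "vault/", approved=True)
# -> Mentats runs against the vault. Don't GI and it won't GO.
```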

    Determinism works for you here. The hash doesn’t freeze the hallucination; it freezes the input snapshot. That makes bad summaries:

    • reproducible
    • inspectable
    • fixable

    Which is the opposite of silent drift.

    If SUMM is wrong and you miss it, the system will be consistently wrong in a traceable way, not creatively wrong in a new way every time.

That’s a much easier class of bug to detect and correct. Again: the proposition is not “the model will never hallucinate”. It’s “it can’t silently propagate hallucinations without a human explicitly allowing it to, and when it does, you can trace it back to the source version”.

And that is ultimately what keeps the pipeline from becoming “poisoned”.


  • Parts of this are RAG, sure

    RAG parts:

    • Vault / Mentats is classic retrieval + generation.
    • Vector store = Qdrant
    • Embedding and reranker

    So yes, that layer is RAG with extra steps.

    What’s not RAG -

    KB mode (filesystem SUMM path)

    This isn’t vector search. It’s deterministic, file-backed grounding. You attach folders as needed. The system summarizes and hashes docs. The model can only answer from those summaries in that mode. There’s no semantic retrieval step. It can style and jazz around the answer a little, but the answer is the answer is the answer.

    If the fact isn’t in the attached KB, the router forces a refusal. Put up or shut up.
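Rough sketch of that gate, purely illustrative (the keyword-overlap check is a dumb stand-in for whatever the real router does; the point is that refusal happens above the model, not inside it):

```python
def call_model(question: str, snippets: list[dict]) -> str:
    """Stand-in for the actual constrained LLM call."""
    raise NotImplementedError

def route_kb_question(question: str, summaries: list[dict]) -> dict:
    """Only call the model if the attached summaries plausibly contain support.
    Otherwise the router forces a refusal -- the model never gets to free-associate."""
    q_terms = {w.lower().strip("?.,") for w in question.split() if len(w) > 3}
    supported = [
        s for s in summaries
        if q_terms & {w.lower().strip("?.,") for w in s["summary"].split()}
    ]
    if not supported:
        # Refusal is a legal output, decided above the model, not by it.
        return {"answer": None, "status": "refused: not in attached KB"}
    # The model only ever sees the supported summaries, nothing else.
    return {"answer": call_model(question, supported), "status": "grounded"}
```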

    Vodka (facts memory)

    That’s not retrieval at all, in the LLM sense. It’s verbatim key-value recall.

    • JSON on disk
    • Exact store (!!)
    • Exact recall (??)

    Again, no embeddings, no similarity search, no model interpretation.
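The whole pattern is a few lines of code; something like this (a sketch of the idea, not Vodka’s actual code; `facts.json` and the function names are made up):

```python
import json
from pathlib import Path

FACTS = Path("facts.json")  # plain JSON on disk

def store(key: str, value: str) -> None:
    """'!!' -- exact store. No embeddings, no model in the loop."""
    facts = json.loads(FACTS.read_text()) if FACTS.exists() else {}
    facts[key] = value
    FACTS.write_text(json.dumps(facts, indent=2))

def recall(key: str) -> str | None:
    """'??' -- exact recall. You get back the verbatim value you put in, or nothing."""
    facts = json.loads(FACTS.read_text()) if FACTS.exists() else {}
    return facts.get(key)

# store("dentist", "Tuesday 14:30")  ->  recall("dentist") == "Tuesday 14:30"
```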

    “Facts that aren’t RAG”

In my setup, they land in one of two buckets.

1. Short-term / user facts → Vodka. That’s for things like numbers, appointments, lists, one-off notes, etc. Deterministic recall, no synthesis.

    2. Curated knowledge → KB / Vault. Things you want grounded, auditable, and reusable.
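In pseudo-code terms, the triage is roughly this (hypothetical function, just to show the split):

```python
def file_a_fact(kind: str) -> str:
    """Where does a new piece of information live?"""
    if kind in {"number", "appointment", "list", "one-off note"}:
        return "vodka"      # exact store, exact recall, no synthesis
    return "kb/vault"       # summarized, hashed, human-reviewed, then reusable
```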

    In response to the implicit “why not just RAG then”

    Classic RAG failure mode is: retrieval is fuzzy → model fills gaps → user can’t tell which part came from where.

    The extra “steps” are there to separate memory from knowledge, separate retrieval from synthesis and make refusal a legal output, not a model choice.

So yeah; some of it is RAG. RAG is good. The point is that this system is designed so not everything of value is forced through a semantic search + generate loop. I don’t trust LLMs. I am actively hostile to them. This is me telling my LLM to STFU and prove it, or GTFO. I know that’s maybe a weird way to operate (adversarial, assume the worst, engineer around the issue), but that’s how ASD brains work.





  • Yeah.

    The SHA isn’t there to make the model smarter. It’s there to make the source immutable and auditable.

    Having been burnt by LLMs (far too many times), I now start from a position of “fuck you, prove it”.

The hash proves which bytes the answer was grounded in, should I ever want to check it. If the model misreads or misinterprets, you can point to the source and say “the mistake is here, not in my memory of what the source was”.

    If it does that more than twice, straight in the bin. I have zero chill any more.

    Secondly, drift detection. If someone edits or swaps a file later, the hash changes. That means yesterday’s answer can’t silently pretend it came from today’s document. I doubt my kids are going to sneak in and change the historical prices of 8 bit computers (well, the big one might…she’s dead keen on being a hacker) but I wanted to be sure no one and no-thing was fucking with me.

    Finally, you (or someone else) can re-run the same question against the same hashed inputs and see if the system behaves the same way.

So: the hashes don’t fix hallucinations (I don’t even think that’s possible, even with magic). The hashes make it possible to audit the answer and spot why hallucinations might have happened.
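The audit itself is dirt cheap. Conceptually it’s just this (a sketch, with made-up field names for the stored record):

```python
import hashlib
from pathlib import Path

def check_drift(record: dict) -> bool:
    """Re-hash the file a past answer claimed to be grounded in.
    If the bytes changed, yesterday's answer can't pretend it came from today's doc."""
    current = hashlib.sha256(Path(record["source_path"]).read_bytes()).hexdigest()
    if current != record["source_sha256"]:
        print(f"DRIFT: {record['source_path']} no longer matches its recorded hash")
        return True
    return False
```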

PS: You’re right that interpretation errors still exist. That’s why Mentats does the triple-pass and why the system clearly flags “missing / unsupported” instead of filling gaps. The SHA is there to make the pipeline inspectable, instead of “trust me, bro”.

    Guess what? I don’t trust you. Prove it or GTFO.





  • D) None of the above.

    I didn’t “solve hallucination”. I changed the failure mode. The model can still hallucinate internally. The difference is it’s not allowed to surface claims unless they’re grounded in attached sources.

    If retrieval returns nothing relevant, the router forces a refusal instead of letting the model free-associate. So the guarantee isn’t “the model is always right.”

    The guarantee is “the system won’t pretend it knows when the sources don’t support it.” That’s it. That’s the whole trick.

    KB size doesn’t matter much here. Small or large, the constraint is the same: no source, no claim. GTFO.

    That’s a control-layer property, not a model property. If it helps: think of it as moving from “LLM answers questions” to “LLM summarizes evidence I give it, or says ‘insufficient evidence.’”

    Again, that’s the whole trick.

    You don’t need to believe me. In fact, please don’t. Test it.

    I could be wrong…but if I’m right (and if you attach this to a non-retarded LLM), then maybe, just maybe, this doesn’t suck balls as much as you think it might.

    Maybe it’s even useful to you.

    I dunno. Try it?


  • re: the KB tool, why not just skip the llm and do two chained fuzzy finds? (what knowledge base & question keywords)

    Yep, good question. You can do that, it’s not wrong. If your KB is small + your question is basically “find me the paragraph that contains X,” then yeah: two-pass fuzzy find will dunk on any LLM for speed and correctness.

But the reason I put an LLM in the loop is: retrieval isn’t the hard part. Synthesis + constraint are. What an LLM is doing in KB mode (basically) is this:

1. Turns the question into an extraction task. Instead of “search keywords,” it’s: “given these snippets, answer only what is directly supported, and list what’s missing.”

2. Then, rather than giving you 6 fragments across multiple files, the LLM assembles the whole thing into a single answer, while staying source-locked (and refusing fragments that don’t contain the needed fact).

    3. Finally: it has “structured refusal” baked in. IOW, the whole point is that the LLM is forced to say “here are the facts I saw, and this is what I can’t answer from those facts”.

    TL;DR: fuzzy search gets you where the info lives. This gets you what you can safely claim from it, plus an explicit “missing list”.

For pure retrieval: yeah - search. In fact, maybe I should bake in >>grep or >>find commands. That would be the right trick for “show me the passage”, not “answer the question”.
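For the curious, the KB-mode prompt shape is roughly this (a paraphrased sketch, not the literal prompt my router uses):

```python
EXTRACTION_PROMPT = """You may use ONLY the snippets below. Do not use prior knowledge.

Snippets:
{snippets}

Question: {question}

Rules:
- Answer only what the snippets directly support.
- Assemble fragments from multiple files into one answer where they agree.
- End with a 'Missing:' list of anything the question asks for that the snippets do not contain.
- If nothing is supported, reply exactly: "Insufficient evidence in attached KB."
"""

def build_kb_prompt(question: str, snippets: list[str]) -> str:
    """Fill the extraction template with the attached, hashed snippets."""
    return EXTRACTION_PROMPT.format(
        snippets="\n---\n".join(snippets),
        question=question,
    )
```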

    I hope that makes sense?


  • Oh shit! Uh…thank you! Umm. Yes. That was unexpected :)

Re: collab. I’m away with work for a bit, but let me think on it? There’s got to be a way to make this useful to more peeps.

    Believe it or not, I am not a CS guy at ALL (I work in health-care) and I made this for fun, in a cave, with a box of scraps.

    I’m not good at CS. I just have a … “very special” brain. As in, I designed this thing from first principles using invariants, which I understand now is not typical CS practice.


  • Fuck, I’m an all seeing, prescient wizard! Neat!

    Uh, I mean… “yes of course, I intended to have Ai Horde piping baked in from the get go”.

    Do let me know if my shit works with their shit to make…not shit.

    #####################

EDIT: For fun, I asked my router to ask Qwen3-4B 2507 WTF Ai Horde was. I’m showing this for interest’s sake, because 1) it shows you how the thing works IRL, and 2) it shows you how and when it’s WRONG, in a LOUD way, if you know how to read it.

    ######################

This is in default mode, no KBs attached. Just straight YOLOing the question at it, cold, and asking it to argue from its own internal priors.

    AI-horde is a decentralized platform where users rent or sell computational power (GPU time) to run AI models. It connects AI model developers who need processing with individuals or organizations who have idle GPU hardware. The platform uses blockchain-like tokens to manage payments and incentives. It’s not a single AI system but a marketplace for GPU resources.

    Missing details: Specific token mechanics, current model library, or user onboarding flow could affect accuracy. For example, if token economics changed recently, the explanation might not reflect current conditions.

    Confidence: Medium | Source: Model

    #####################################

That’s the router’s way of saying “hey, best guess. Sorry”. And from what I can see, it’s sort of right, but the specifics are wrong. So, OK for a cheap YOLO, not for “I bet my money on this”. Let’s call this a 4/10.

    ####################################

Now, I gave it web access privileges. It did this. Let’s call this a 7/10.

    An AI-horde is a free, open-source community service that allows anyone to contribute computing power to generate AI-generated images and text. It functions as a crowdsourced distributed cluster of workers focused on image and text generation tasks [1]. This model resembles collaborative projects like Folding@home, where users share computational resources for a common goal [2].

    ####################################

Finally, here’s what it does in >>KB mode. I scraped the FAQ and then did >>summ new. Eyeballing it against the FAQ, I’d call this an 8.5/10. YMMV.

    AI-horde is a decentralized platform for distributed AI computing. It allows users to rent out their idle GPU hardware to run AI models, connecting developers needing processing power with volunteers who provide it through open APIs. The system uses a non-cryptocurrency-based incentive called Kudos to prioritize contributions, where workers earn Kudos for completing tasks, and these are not tradable assets but rather internal metrics of service quality and project loyalty.

    Access to AI-horde is managed via API keys; anonymous usage is allowed but receives the lowest priority. The project emphasizes volunteerism and discourages commercialization of Kudos, treating its sale or exchange for money or cryptocurrency as contrary to its ethos. Participation as a worker involves installing the AI-horde client, selecting task types (e.g., image or text generation), configuring supported models, and running the worker to begin earning Kudos.

    Confidence: high | Source: Docs

    ################################

FINALLY, for “fun”, I >>kb attached all and asked it to reason over the above three responses, in the context of what it has in the KB store, and rank them out of 10.

    [3] is most correct, scoring 9.5/10. It accurately reflects the core model: decentralized GPU renting via open APIs, Kudos as a non-crypto, non-tradable incentive tied to volunteerism, and the crowdsourced nature of contributions. It also correctly notes access via API keys and lower priority for anonymous use.

    [2] is accurate but less complete, scoring 7.5/10. It correctly identifies the crowdsourced, open-source, volunteer-based nature and analogy to Folding@home. However, it omits key specifics like Kudos, API key access, and the explicit non-commercialization stance.

    [1] is partially correct but misleading, scoring 6/10. It correctly describes GPU

    Confidence: medium | Source: Docs


    TL;DR

• KB mode: source-based info, and when queried against it, adjudication accuracy = very good
    • Clean, reproducible demo. Try it yourself

  • Responding to my own top post like a FB boomer: May I make one request?

    If you found this little curio interesting at all, please share in the places you go.

And especially if you’re on Reddit, where normies go.

I used to post heavily on there, but then Reddit did a Reddit and I’m done with it.

    https://lemmy.world/post/41398418/21528414

    Much as I love Lemmy and HN, they’re not exactly normcore, and I’d like to put this into the hands of people :)

PS: I am thinking of taking some of the questions you all asked me here (de-identified), writing a “Q&A_with_drBobbyLLM.md”, and sticking it on the repo. It might explain some common concerns.

And, if nothing else, it might be mildly amusing.