lidd1ejimmy@lemmy.ml to Memes@lemmy.ml · English · 1 year ago
Offline version of Chat GPT (lemmy.ml)
25 comments

neidu2@feddit.nl · 1 year ago
Technically possible with a small enough model to work from. It's going to be pretty shit, but "working". Now, if we were to go further down in scale, I'm curious how/if a 700MB CD version would work. Or how many 1.44MB floppies you would need for the actual program and smallest viable model.

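A minimal sketch of that floppy arithmetic, assuming a hypothetical ~1.1GB quantized model plus a few MB for a llama.cpp-style runtime binary (both figures are illustrative guesses, not measurements):

```python
import math

FLOPPY_BYTES = 1_474_560     # a "1.44MB" floppy: 80 tracks * 2 sides * 18 sectors * 512 bytes
model_bytes = 1.1 * 1024**3  # hypothetical small quantized model
runtime_bytes = 5 * 1024**2  # hypothetical inference binary

disks = math.ceil((model_bytes + runtime_bytes) / FLOPPY_BYTES)
print(f"{disks} floppies")   # ~805 disks under these assumptions
```
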
Naz@sh.itjust.works · 1 year ago
*squints* That says, "PHILLIPS DVD+R". So we're looking at a 4.7GB model, or just a hair under the tiniest, most incredibly optimized implementation of <INSERT_MODEL_NAME_HERE>.

curbstickle@lemmy.dbzer0.com · 1 year ago
llama 3 8b, phi 3 mini, Mistral, moondream 2, neural chat, starling, code llama, llama 2 uncensored, and llava would fit.

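A rough way to sanity-check that list against a DVD, with approximate sizes for the default 4-bit builds; the numbers here are ballpark assumptions, not authoritative (run `ollama list` for the real ones):

```python
DVD_BYTES = 4.7e9  # single-layer DVD capacity, decimal gigabytes

approx_model_gb = {  # ballpark download sizes of default 4-bit builds, decimal GB
    "llama 3 8b": 4.7,
    "phi 3 mini": 2.2,
    "mistral 7b": 4.1,
    "moondream 2": 1.7,
    "llava 7b": 4.7,
}

for name, gb in approx_model_gb.items():
    verdict = "fits" if gb * 1e9 <= DVD_BYTES else "too big"
    print(f"{name:12} {gb:4.1f} GB  {verdict}")
```
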
BudgetBandit@sh.itjust.works · 1 year ago
Just interested in the topic: did you 🔨 offline privately?

curbstickle@lemmy.dbzer0.com · 1 year ago
I'm not an expert on them or anything, but feel free.

NoiseColor@lemmy.world (banned from community) · 1 year ago
Removed by mod

Ignotum@lemmy.world · 1 year ago
A 70b model taking 1.5GB? So roughly 0.17 bits per parameter? Are you sure you're not thinking of a heavily quantised and compressed 7b model or something? Ollama's llama3 70b is 40GB from what I can find; that's a lot of DVDs.

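The quantization arithmetic behind that objection, as a quick worked check (file sizes in decimal GB, parameter counts in billions):

```python
def bits_per_param(size_gb: float, params_billions: float) -> float:
    # gigabytes on disk -> bits, spread over the parameter count
    return size_gb * 8 / params_billions

print(bits_per_param(1.5, 70))  # ~0.17 bits/param: far below any real quantization
print(bits_per_param(40, 70))   # ~4.6 bits/param: consistent with a 4-bit quant
print(bits_per_param(4.7, 8))   # ~4.7 bits/param: an 8b model on one DVD checks out
```
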
NoiseColor@lemmy.world (banned from community) · 1 year ago
Removed by mod

9point6@lemmy.world · 1 year ago
Less than half of a BDXL though! The dream still breathes.

Steve@startrek.website · 1 year ago
For some reason, triple-layer writable Blu-ray exists, at 100GB each:
https://www.verbatim.com/prod/optical-media/blu-ray/bd-r-xl-tl/bd-r-xl-tl/

errer@lemmy.world · 1 year ago
It is a DVD; you can faintly see "DVD+R" on the left side.

DannyBoy@sh.itjust.works · 1 year ago
It does have the label DVD-R.

kindenough@kbin.earth · 1 year ago
Maybe it doesn't even have to be an LLM: https://en.wikipedia.org/wiki/ELIZA

Num10ck@lemmy.world · 1 year ago
ELIZA was pretty impressive for the 1960s, as a psychotherapy chatbot.

lidd1ejimmy@lemmy.ml (OP) · 1 year ago
Yes, I guess it would be a funny experiment for just a local model.

pkzip -& a:\chatgpt.zip c:\chatgpt*.*
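For anyone who'd rather reenact the joke without hunting down PKZIP (whose -& switch spans an archive across disks), a sketch of the same idea in Python; the file names are made up for illustration:

```python
from pathlib import Path

FLOPPY_BYTES = 1_474_560  # usable bytes on a "1.44MB" floppy

def span(src: Path, dest_dir: Path) -> int:
    """Split src into floppy-sized chunks; returns the number of 'disks' used."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    count = 0
    with src.open("rb") as f:
        while chunk := f.read(FLOPPY_BYTES):
            (dest_dir / f"{src.name}.{count:04d}").write_bytes(chunk)
            count += 1
    return count

# e.g. span(Path("model.gguf"), Path("floppies"))
# reassemble with: cat model.gguf.* > model.gguf  (or COPY /B on DOS)
```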