

Exactly. Once they can correlate a couple of things, they can correlate and search for even more info until all your accounts are revealed.


I bet you could! The interface can literally be whatever you want with FPGAs. You’d just have to keep things organized and program them one at a time, I think.


I think I’ve heard that they can run LLMs!


I also have a 5060 Ti with 16GB of VRAM. I tend to use GPT-OSS:20B or Qwen3:14B with a context of ~30k. I have a custom system prompt for the style of response I like in Open WebUI. That takes up about 14GB of my 16GB of VRAM.
But yeah, it is slower and not as “smart” as the cloud-based models. I think the inconvenience of the speed and having to fact-check/test code is worth the privacy and environmental trade-offs, though.
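
For anyone curious what that setup looks like in code, here’s a minimal sketch using the `ollama` Python client, assuming Ollama is serving the models locally. The model tag, the ~30k `num_ctx`, and the system prompt text are placeholders for whatever you actually run:

```python
import ollama  # pip install ollama; assumes an Ollama server on localhost:11434

# Hypothetical stand-in for a personal style prompt.
SYSTEM_PROMPT = "Answer concisely and show your reasoning in short steps."

response = ollama.chat(
    model="qwen3:14b",  # or "gpt-oss:20b"; assumes the model is already pulled
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Explain VRAM vs system RAM in two sentences."},
    ],
    options={"num_ctx": 30000},  # ~30k context; bigger values eat more VRAM
)
print(response["message"]["content"])
```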


That is why I like small, specialized, locally hosted AI. It runs acceptably fast and quiet on my gaming PC, it’s private, and I can give it knowledge in small doses on specific topics and projects.
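
By “knowledge in small doses” I mean something like the sketch below: injecting topic-specific notes straight into the context of a local model, with no fine-tuning or full RAG pipeline. Again this assumes a local Ollama server; `notes.md` and the model tag are made-up placeholders:

```python
import ollama  # assumes a local Ollama server, as in the setup above

# Hypothetical project file; swap in whatever notes the model should know.
with open("notes.md") as f:
    project_notes = f.read()

response = ollama.chat(
    model="qwen3:14b",
    messages=[
        # Topic-specific knowledge goes straight into the system message.
        {"role": "system", "content": f"Project notes:\n{project_notes}"},
        {"role": "user", "content": "Summarize the open tasks in these notes."},
    ],
    options={"num_ctx": 30000},
)
print(response["message"]["content"])
```

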
Might not be your issue, but I had issues like this on my Ender 3, and they went away after I eventually replaced the hotend. It was like the old hotend just didn’t have consistent flow and there was some sort of clog or heating issue.