

Not sure who downvoted lol but here’s proof
Your Roku should be trying to connect to the internal IP:8096 (Jellyfin's port) of your Arch device, not whatever your Tailscale address is. I don't personally use Tailscale, so if your setup blocks local access you may need to solve that first.
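If you want to sanity-check that Jellyfin is actually reachable over the LAN before poking at the Roku, a rough sketch in Python (the 192.168.1.50 address is just a placeholder for your Arch box's internal IP, and I believe the /System/Info/Public endpoint answers without auth):

```python
import requests

# Placeholder LAN address for the Jellyfin host - replace with your Arch box's internal IP.
JELLYFIN_URL = "http://192.168.1.50:8096"

# /System/Info/Public is an unauthenticated Jellyfin endpoint returning basic server info.
resp = requests.get(f"{JELLYFIN_URL}/System/Info/Public", timeout=5)
resp.raise_for_status()
print(resp.json())  # should show the server name/version if Jellyfin is listening on the LAN
```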
Like you ended up doing a PiHole at home? I'm surprised there's no access control. I was on the verge of setting that or AdGuard Home up for myself, but realized using AdGuard's public servers is effectively the same thing, just without the extra privacy of hosting at home.
I've run the DuckDuckGo version of this for years but only recently found out you can get most of this functionality natively in Android (Android 13 for me) by setting a private DNS as shown in the image below. My DuckDuckGo App Tracking Protection does still catch attempts, but it's basically just Google now, instead of the dozens of companies before.
I'm surprised you're getting disappointing results with Qwen 3 Coder 480b. I run Qwen 2.5 Coder 14b locally (Open WebUI + Ollama) on my 3060 12GB and I've been pretty pleased with its answers so far on Python code, Django documentation/settings, and quirks with my reverse proxy.
I assume you aren't hosting the 480b locally, right? Are you using Open WebUI and an OpenAI API key?
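For reference, a minimal sketch of what hitting a local Ollama model from Python looks like (assumes the ollama client library is installed and the model has already been pulled; the prompt is just an example):

```python
import ollama

# Ask the locally hosted model a question; assumes `ollama pull qwen2.5-coder:14b` was run already.
response = ollama.chat(
    model="qwen2.5-coder:14b",
    messages=[{"role": "user", "content": "Show a minimal Django settings snippet for running behind a reverse proxy."}],
)

# The reply text lives under message -> content in the response.
print(response["message"]["content"])
```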
I initially installed Ollama/Open WebUI on my HP G4 Mini, but it's got no GPU obviously, so with 16GB of RAM I could run 7b models at only 2 or 3 tokens/sec.
It definitely made me regret not buying a bigger case that could accommodate a GPU, but I ended up installing the same Ollama/Open WebUI pair on my Windows desktop with a 3060 12GB and it runs great - 14b models at 15+ tokens/sec.
Even better, I figured out that my reverse proxy on the server can proxy to other addresses on my network, so now I just have a dedicated subdomain URL for my desktop instance. Its Open WebUI is now just as accessible remotely as my server's.
Not really sure I understand how these work - do you just feed it a large textual document like a transcript or something, and it turns it into a more machine-readable vector format?
Or is it just a much smaller LLM that’s more optimized for reading than generating?
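For context, this is roughly the workflow I'm picturing - a sketch using Ollama's embeddings API, with nomic-embed-text as a stand-in model and a made-up transcript snippet:

```python
import ollama

# Turn a chunk of a transcript into a fixed-length vector with an embedding model.
chunk = "Speaker 1: Welcome back to the show, today we're talking about home servers..."
result = ollama.embeddings(model="nomic-embed-text", prompt=chunk)

vector = result["embedding"]  # a plain list of floats - the "machine readable vector format"
print(len(vector), vector[:5])
```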
The update is giving me a performance uplift on my 3060 that's WAY more than 7%. Using qwen2.5-coder:14b-instruct-q5_K_M, here's the exact same prompt rerun before and after:
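For anyone who wants to check their own numbers the same way, a rough sketch of pulling tokens/sec out of Ollama's raw generate response (eval_count and eval_duration are the stats the API reports, duration in nanoseconds; the prompt here is just an example):

```python
import requests

# Send the same prompt to the local Ollama API and compute generation speed from its stats.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "qwen2.5-coder:14b-instruct-q5_K_M",
        "prompt": "Write a Python function that parses an Nginx access log line.",
        "stream": False,
    },
).json()

# eval_count = tokens generated, eval_duration = time spent generating (in nanoseconds).
tokens_per_sec = resp["eval_count"] / (resp["eval_duration"] / 1e9)
print(f"{tokens_per_sec:.1f} tokens/sec")
```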
So I googled it, and if you have a Pi 5 with 8GB or 16GB of RAM it is technically possible to run Ollama, but the speeds will be excruciatingly slow. My Nvidia 3060 12GB typically runs 14b (billion parameter) models at around 11 tokens per second, while this website shows a Pi 5 only runs an 8b model at 2 tokens per second - each query will literally take 5-10 minutes at that rate:
Pi 5 Deepseek
It also shows you can get a reasonable pace out of the 1.5b models, but those are whittled down so much I don't believe they're really useful.
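The 5-10 minute figure is just back-of-the-envelope arithmetic, assuming a typical answer runs somewhere around 600-1,200 tokens:

```python
# Back-of-the-envelope: how long a typical answer takes at Pi 5 vs 3060 speeds.
for label, tokens_per_sec in [("Pi 5 (8b model)", 2), ("RTX 3060 (14b model)", 11)]:
    for answer_tokens in (600, 1200):  # rough range for a typical LLM answer
        minutes = answer_tokens / tokens_per_sec / 60
        print(f"{label}: {answer_tokens} tokens -> {minutes:.1f} min")
```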
There are lots of lighter-weight services you can host on a Pi though. I highly recommend an app called Cosmos Cloud - it's really an all-in-one solution for building your own self-hosted services. It has its own reverse proxy (like Nginx or Traefik) with Let's Encrypt security certificates, URL management, and incoming traffic security features. It has an excellent UI for managing Docker containers and a large catalog of prepared docker compose files to spin up services with the click of a button. And it has more advanced features you can grow into, like an OpenID SSO manager, your own VPN, and disk management/backups.
It's still very important to read the documentation thoroughly and to expect that occasional troubleshooting will be necessary, but I found it far, far easier to get working than a previous Nginx/Docker/Portainer setup I used.
Using Ollama depends a lot on the equipment you run it on - you should aim to have at least 12GB of VRAM/unified memory to run models. I have one copy running in a Docker container on CPU on Linux and another running on the GPU of my Windows desktop, so I can give install advice for either OS if you'd like.
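Either way, once Ollama is up (Docker on Linux or the Windows installer), a quick sketch to confirm the API is answering and see which models are pulled - 11434 is Ollama's default port, and the host below is a placeholder:

```python
import requests

# Point this at whichever machine runs Ollama (localhost, or the desktop's LAN IP).
OLLAMA_HOST = "http://localhost:11434"

# /api/tags lists the models that have been pulled onto that instance.
models = requests.get(f"{OLLAMA_HOST}/api/tags", timeout=5).json()["models"]
for m in models:
    print(m["name"], round(m["size"] / 1e9, 1), "GB")
```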
Qwen 3 Coder is the current top dog for coding afaik; there's a 30b size and something bigger, but I can't remember what because I have no hope of running it lol. But I think the larger models have up to a million-token context window.
Ya can never be too careful - you might be inside my network at this very moment, and hiding the internal IP is my last line of defense! 😆