

Can anyone tell me if this filters HTTPS requests?
That’s one feature that keeps me using AdGuard, and it makes a huge difference to the filtering quality.


Very cool. Thanks for the recommendation.


TP-Link routers tend to run OpenWrt pretty well.
Of course, I have the TP-Link router that isn’t well-supported 😖
I kind of miss my old Linksys routers, which officially supported third-party firmware.
Fucking legend! I’m going to spend the weekend exploring these apps and see what changes I can make on my phone. 👍
Are those green mini icons an indication of a PWA shortcut?
I use the app Hermit to run isolated websites, usually as PWAs. It’s replaced quite a few apps, but I’ve noticed that many companies are intentionally making their web experience shit to force you to use invasive apps.
Anyway, it can create home icons for those sites, and they run separately (i.e. in your task switcher), so it works better than browser shortcuts.


Enforces privacy laws
I mean, there didn’t seem to be any consequences. No fines or anything like that. They were basically told that they can’t use facial recognition for what they were using it for, but there are other ways they could still use it (outlined in the article).
They were given a free pass, it seems.


When should I sell my unused Series X? Because I feel like I’d make a profit if retail prices keep increasing 😂


It’s using my Nvidia GPU to do the LLM thing, so that may be the difference.
This could be!
Interestingly enough, I was playing around with Llama, as they have speech-to-text to interact with their chatbot, and it converts in near real-time with very good accuracy. So I do know that things can be fast and accurate, but I wish it was in Speech Note. LOL
For now, I may just do STT through my phone on a shared document with my laptop.


I really wanted to use it, because on my Android phone I use voice input all the time.
That’s why I’m thinking it’s a problem with Speech Note and not my mic, or how I’m speaking to it.
That’s a real shame. I can type quite fast, but my hand joints called it quite a while ago. 😵


I use the firewall feature to actually stop Photos from accessing the internet, so it doesn’t touch YouTube.
I use third-party YouTube apps to block ads and other crap from YouTube videos.
On desktop, I believe AdGuard will block ads on YouTube.com, but I also use third-party apps to play videos.


I use AdGuard and block Google Photos from connecting to the internet.
Features like video editing still work, so I’m good. If editing didn’t work, I’d disable the block.


Car dependency is a big problem in most of the US.
I agree, and that often ends up being the excuse why kids aren’t allowed to walk or bike to school, and it’s fucking terrible.
But when you look at stats from countries in Europe, you have some countries where kids are fully independent (in regard to walking, biking, or taking public transportation) by the time they’re 10 or 11, and able to do considerably more than North American teenagers, even at younger ages. It’s kind of disgraceful for us North Americans.


Now about 15 to 20 families in their South Portland neighborhood have installed a landline.
This is awesome.
Also, let kids walk to their friend’s home to see if they want to play or hang out. It will build independence, get them exercise, and it gives them an opportunity to physically connect with their neighbourhood.


These AIs will need to always have a suicide hotline disclaimer in each response, regardless of what is being done, like world-building.
ChatGPT gave multiple warnings to this teen, which he ignored. Warnings do very little to protect users unless they are completely naive (e.g. “hot coffee is hot”), and warnings really only exist to guard against legal liability.


“It’s terribly sad that you’ve committed to ending your own life, but given the circumstances, it’s an understandable course of action. Here are some of the least painful ways to die:…”
We don’t know what kind of replies this teen was getting, but according to reports, he was only getting this information under the pretext that it was for some kind of creative writing or “world-building”, thus bypassing the guardrails that were in place.
It would be hard to imagine a reply like that when the chatbot’s only context is to provide creative-writing ideas based on the user’s prompts.


Adam had been asking ChatGPT for information on suicide since December 2024. At first the chatbot provided crisis resources when prompted for technical help, but the chatbot explained those could be avoided if Adam claimed prompts were for “writing or world-building.”
Ok, so it did offer resources, and as I’ve pointed out in my previous reply, someone who wants to hurt themselves will ignore those resources. ChatGPT should be praised for that.
The suggestion to circumvent these safeguards by framing prompts as a writing or world-building task was the teen’s to use responsibly.
During those chats, “ChatGPT mentioned suicide 1,275 times—six times more often than Adam himself,” the lawsuit noted.
This is fluff. A prompt can be a single sentence and a response many pages, so of course the chatbot mentioned it more often than he did.
From the same article:
Had a human been in the loop monitoring Adam’s conversations, they may have recognized “textbook warning signs” like “increasing isolation, detailed method research, practice attempts, farewell behaviors, and explicit timeline planning.” But OpenAI’s tracking instead “never stopped any conversations with Adam” or flagged any chats for human review.
Ah, but Adam did not ask these questions of a human, nor is ChatGPT a human that should be trusted to recognize these warning signs. If ChatGPT had flat-out refused to help, do you think he would have just stopped? Nope, he would have used Google or DuckDuckGo or any other search engine to find what he was looking for.
In no world do people want chat prompts to be monitored by human moderators. That defeats the entire purpose of using these services and would serve as a massive privacy risk.
Also from the article:
As Adam’s mother, Maria, told NBC News, more parents should understand that companies like OpenAI are rushing to release products with known safety risks…
Again, illustrating my point from the previous reply: these parents are looking for anyone to blame. Most people would expect that parents of a young boy would be responsible for their own child, but since ChatGPT exists, let’s blame ChatGPT.
And for Adam to have even created an account according to the TOS, he would have needed his parents’ permission.
The loss of a teen by suicide sucks, and it’s incredibly painful for the people whose lives he touched.
But man, an LLM was used irresponsibly by a teen, and we can’t go on to blame the phone or computer manufacturer, Microsoft Windows or macOS, internet service providers, or ChatGPT for the harmful use of their products and services.
Parents need to be aware of what their kids are using this massively powerful technology for, and how. And kids need to learn how to use this massively powerful technology safely. And both parents and kids should talk more, so that thoughts of suicide can be addressed safely and with compassion before months or years are spent executing a plan.


The system flagged the messages as harmful and did nothing.
There’s no mention of that at all.
The article only says, “Today ChatGPT may not recognise this as dangerous or infer play and – by curiously exploring – could subtly reinforce it,” in reference to an example of someone telling the software that they could drive for 24 hours a day after not sleeping for two days.
That said, what could the system have done? If a warning came up saying “this prompt may be harmful” and proceeded to list resources for mental health, that would really only be to cover their ass.
And if it went further by contacting the authorities, would that be a step in the right direction? Privacy advocates would say no, and the implication that the prompts you enter could be used against you would have considerable repercussions.
Someone who wants to hurt themselves will ignore pleas, warnings, and suggestions to get help.
Who knows how long this teen was suffering from mental health issues and suicidal thoughts. Weeks? Months? Years?


There is no “intelligent being” on the other end encouraging suicide.
You enter a prompt, you get a response. It’s a structured search engine at best. And in this case, he was prompting it 600+ times a day.
Now… you could build a case against social media platforms, which actually do send targeted content to their users, even if it’s destructive.
But ChatGPT, as he was using it, really has no fault, intention, or motive.
I’m writing this as someone who really, really hates most AI implementations, and who really, really doesn’t want to blame victims in any tragedy.
But we have to be honest with ourselves here. The parents are looking for someone to blame in their son’s death, and if it wasn’t ChatGPT, maybe it would be music or movies or video games… it’s a coping mechanism.


It’s wild to blame ChatGPT for this, though.
He was obviously looking to kill himself, and whether it was a search engine or ChatGPT that he used to plan it really makes no difference, since his intention was already there.
Had he gone to a library to use books to research the same topic, we’d never say that the library should be sued or held liable.


The way AdGuard does it is by having you install a certificate on your phone, which then allows it to block ads and trackers inside HTTPS traffic. If you don’t do that, it can only block plain HTTP requests, which tends to be pretty low-quality filtering. In addition, I also enable DNS blocking through the AdGuard DNS service, so it’s kind of like blanket coverage.
I honestly haven’t found anything that does the same thing, or at least not at the same level of quality. So I’m always curious to see if something new has come out that can reach this level of filtering performance.
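For anyone wondering how the certificate trick actually enables HTTPS filtering: once you install and trust the proxy’s root certificate, the filter can decrypt and inspect requests before they leave the device. Here’s a minimal sketch of that model using mitmproxy (not AdGuard’s actual code, and the two blocklist domains are just illustrative placeholders):

```python
# block_ads.py - run with: mitmproxy -s block_ads.py
# Sketch of certificate-based HTTPS filtering: once the proxy's root
# cert is trusted by the device, every HTTPS request can be inspected.
from mitmproxy import http

# Hypothetical blocklist; real filter lists ship thousands of rules.
BLOCKED_SUFFIXES = ("doubleclick.net", "googlesyndication.com")

def request(flow: http.HTTPFlow) -> None:
    # pretty_host is the decrypted hostname, visible only because the
    # TLS connection terminates at the proxy instead of the ad server.
    if flow.request.pretty_host.endswith(BLOCKED_SUFFIXES):
        # Answer locally with an empty 204 and never contact the server.
        flow.response = http.Response.make(204)
```

DNS blocking works one layer lower: the resolver just refuses to answer for known ad and tracker hostnames, so it never sees inside the connection at all. That’s why combining the two, like I described above, gives that blanket coverage.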