She’s almost 70, spends all day watching QAnon-style videos (but in Spanish), and every day she’s anguished about something new. Last week she was asking us to start digging a nuclear shelter because Russia had dropped a nuclear bomb on Ukraine. Before that she was begging us to install reinforced doors because the indigenous population was about to invade the cities and kill everyone with poisonous arrows. I have access to her YouTube account and I’m trying to unsubscribe and report the videos, but the recommended videos keep feeding her more crazy shit.
At this point I would set up a new account for her - I’ve found YouTube’s algorithm to be very… persistent.
Unfortunately, it’s linked to the account she uses for her job.
You can make “brand accounts” on YouTube that are a completely different profile from the default account. She probably won’t notice if you make one and switch her to it.
You’ll probably want to spend some time using it for yourself secretly to curate the kind of non-radical content she’ll want to see, and also set an identical profile picture on it so she doesn’t notice. I would spend at least a week “breaking it in.”
But once you’ve done that, you can probably switch to the brand account without logging her out of her Google account.
I love how we now have to monitor the content watched by the generation that told us “Don’t believe everything you see on the internet,” like we would for children.
We can thank all that tetraethyllead gasoline that was pumping lead into the air from the ’20s to the ’70s. Everyone got a nice healthy dose of lead while they were young. Made 'em stupid.
OP’s mom breathed nearly 20 years’ worth of lead-polluted air straight from birth, and OP’s grandmother had been breathing it for 33 years by the time OP’s mom was born. Probably not great for early development.
I’m a bit disturbed by how people’s beliefs are literally shaped by an algorithm. Now I’m scared to watch YouTube because I might inadvertently be watching propaganda.
deleted by creator
It’s even worse than “a lot easier”. Ever since the advances in ML went public, with things like Midjourney and ChatGPT, I’ve realized that ML models are way, way better at doing their thing than I’d thought.
The Midjourney model’s purpose is to receive text and give out a picture. And it’s really good at that, even though the dataset wasn’t really that large. Same with ChatGPT.
Now, Meta has (EDIT: just speculation, but I’m 95% sure they do) a model which receives all the data they have about a user (which is A LOT) and returns which posts to show them, and in what order, to maximize their time on Facebook. And it has been trained for years on a live dataset of 3 billion people interacting daily with the site. That’s a wet dream for any ML model. Imagine what it would be capable of even if it were only as good as ChatGPT at its task - and it has an incomparably better dataset and more learning opportunities.
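To make that concrete, here’s a minimal sketch of what such a ranker reduces to. Everything in it (the feature vectors, the dot-product scorer) is an invented stand-in, not Meta’s actual architecture:

```python
# Hypothetical sketch of an engagement-maximizing feed ranker.
# The features and the scoring model are invented for illustration.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    features: list[float]  # e.g. topic embedding, author affinity, recency

def predicted_time_on_site(user_features: list[float], post: Post) -> float:
    # Stand-in for a learned model: a plain dot product. The real thing
    # would be a deep network trained on billions of interaction logs.
    return sum(u * p for u, p in zip(user_features, post.features))

def rank_feed(user_features: list[float], candidates: list[Post]) -> list[Post]:
    # The only objective is predicted engagement; truthfulness and user
    # wellbeing never enter the score.
    return sorted(candidates,
                  key=lambda post: predicted_time_on_site(user_features, post),
                  reverse=True)
```

Note what’s missing: nothing in the objective cares what the content does to the person watching it.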
I’m really worried for the future in this regard, because it’s only a matter of time before someone with power decides that the model should not only keep people on the platform, but also make them vote for X. And there is nothing you can do to defend against it, other than never interacting with anything that has curated content, such as Google Search, YT, or anything from Meta - because even if you know there’s a model trying to manipulate you, the model knows there are a lot of people like you, and it’s already learning how to manipulate even them. After all, it has 3 billion test subjects.
That’s why I’m extremely focused on privacy and my data - not that I have something to hide, but I take a really, really great issue with someone using such data to train models like that.
Just to let you know, Meta has an open-source model, LLaMA, and it’s basically state of the art for the open-source community, but it falls short of GPT-4.
The nice thing about the LLaMA derivatives (Vicuna and WizardLM) is that you can run them locally at about 80% of ChatGPT 3.5’s quality, so no one is tracking your searches/conversations.
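If you want to try it, here’s a minimal sketch using the llama-cpp-python bindings - the model filename is just a placeholder for whichever quantized Vicuna or WizardLM build you actually downloaded:

```python
# Minimal local inference with llama-cpp-python -- nothing leaves your
# machine. The model path below is a placeholder, not a real file.

from llama_cpp import Llama

llm = Llama(model_path="./vicuna-13b.q4_0.bin")  # hypothetical filename

output = llm(
    "Q: How do recommendation algorithms decide what to show? A:",
    max_tokens=128,
    stop=["Q:"],
)
print(output["choices"][0]["text"])
```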
Oof that’s hard!
You may want to try the following, though, to clear up the algorithm:
Clear her YouTube watch history: This will reset the algorithm, getting rid of a lot of the data it uses to make recommendations. You can do this by going to “History” on the left menu, then clicking on “Clear All Watch History”.
Clear her YouTube search history: This is also part of the data YouTube uses for recommendations. You can do this from the same “History” page, by clicking “Clear All Search History”.
Change her ‘Ad personalization’ settings: This is found in her Google account settings. Turning off ad personalization will limit how much YouTube’s algorithms can target her based on her data.
Introduce diverse content: Once the histories are cleared, start watching a variety of non-political, non-conspiracy content that she might enjoy, like cooking shows, travel vlogs, or nature documentaries. This will help teach the algorithm new patterns.
Dislike, not just ignore, unwanted videos: If a video that isn’t to her taste pops up, make sure to click ‘dislike’. This will tell the algorithm not to recommend similar content in the future.
Manually curate her subscriptions: Unsubscribe from the channels she’s not interested in, and find some new ones that she might like. This directly influences what content YouTube will recommend. (If there are too many to click through by hand, see the sketch below.)
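For that last step, the curation can also be scripted against the YouTube Data API v3. A rough sketch - it assumes you’ve already set up OAuth credentials for her account, and the keyword list is a made-up example:

```python
# Sketch: bulk-unsubscribe from channels whose titles match a blocklist,
# using google-api-python-client. Assumes OAuth credentials with the
# YouTube scope already exist; BLOCKED_KEYWORDS is an invented example.

from googleapiclient.discovery import build

BLOCKED_KEYWORDS = ["nuclear", "invasion", "conspiracy"]  # hypothetical

def unsubscribe_matching(credentials):
    youtube = build("youtube", "v3", credentials=credentials)
    request = youtube.subscriptions().list(part="snippet", mine=True, maxResults=50)
    while request is not None:
        response = request.execute()
        for sub in response["items"]:
            title = sub["snippet"]["title"].lower()
            if any(kw in title for kw in BLOCKED_KEYWORDS):
                youtube.subscriptions().delete(id=sub["id"]).execute()
                print(f"Unsubscribed from {title}")
        request = youtube.subscriptions().list_next(request, response)
```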
I had to log into my 84-year-old grandmother’s YouTube account and unsubscribe from a bunch of stuff, “Not interested” on a bunch of stuff, subscribed to more mainstream news sources… But it only works for a couple months.
The problem is the algorithm that values viewing time over anything else.
Watch a news clip from a real news source and then it recommends Fox News. Watch Fox News and then it recommends PragerU. Watch PragerU and then it recommends The Daily Wire. Watch that and then it recommends Steven Crowder. A couple years ago it would go even stupider than Crowder: she’d start getting those videos where a computer voice talks over stock footage about Hillary Clinton being arrested for being a demonic pedophile. Luckily most of those channels are banned at this point, or at least the algorithm doesn’t recommend them.
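That escalation falls out of the incentive almost mechanically. Here’s a toy model - the channels, “similarity” positions, and watch times are all invented - showing how a recommender that can only hop to adjacent content but always maximizes watch time walks you up the ladder:

```python
# Toy model of recommendation drift. A recommender that only suggests
# "similar" channels, but always picks the one with the longest expected
# watch time, escalates step by step. All numbers are invented.

# channel -> (similarity position, expected watch minutes)
CATALOG = {
    "real news":      (0, 3.0),
    "Fox News":       (1, 6.0),
    "PragerU":        (2, 9.0),
    "The Daily Wire": (3, 12.0),
    "Steven Crowder": (4, 15.0),
}

def recommend(current: str) -> str:
    cur_pos = CATALOG[current][0]
    # Only channels within one "similarity step" are candidates; among
    # them, maximize expected watch time. Extremeness never enters.
    candidates = {c: wt for c, (pos, wt) in CATALOG.items()
                  if c != current and abs(pos - cur_pos) <= 1}
    return max(candidates, key=candidates.get)

chain = ["real news"]
for _ in range(4):
    chain.append(recommend(chain[-1]))
print(" -> ".join(chain))
# real news -> Fox News -> PragerU -> The Daily Wire -> Steven Crowder
```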
I’ve thought about putting her into Restricted Mode, but I think it would be too obvious that I’m pulling the strings in the background.
Then I thought: she’s 84, she’s going to be dead in a few years, she doesn’t vote - does it really matter that she’s concerned about trans people trying to cut off little boys’ penises, or thinks that Obama is wearing an ankle monitor because he was arrested by the Trump administration, or that aliens are visiting the Earth because she heard it on Joe Rogan?
Great advice in here. Now, how do I de-radicalize my mom? :(
In the Google account privacy settings you can delete the watch and search history. You can also delete a service, such as YouTube, from the account without deleting the account itself. This might help with starting afresh.
I was so weirded out when I found out that you can hear ALL of your “hey Google” recordings in these settings.
Log in as her on your device. Delete the history, turn off ad personalisation, unsubscribe from and block dodgy stuff, like and subscribe to healthier things, and - this is the important part - keep coming back regularly to tell YouTube you don’t like any suggested videos that are down the QAnon path, and remove dodgy watched videos from her history.
Also, subscribe to and interact with things she’ll like - cute pets, crafts, knitting, whatever she’s likely to watch more of. You can’t just block and report; you’ve got to retrain the algorithm.
Yeah, when you go to the feed, make sure to click the three dots on every recommended video and choose “Don’t show content like this” and also “Block channel”, because chances are, if they uploaded one of these stupid videos, their whole channel is full of them.