One fatal flaw: there's no data to compare it to and therefore nothing to base this conclusion on.
Personally, YouTube became boring after that. I don't use its algorithm; it's just a video host to me now. Someone sends me a link, I watch it. It has become impossible to find anything interesting using the site alone. I used to love scoffing at flat earthers, then moving on to lizard people and so on. Did you know there's a face on Venus? Fun times. You can't even fall down a music rabbit hole anymore; it just shows you stuff you've already seen, often 5 minutes earlier. And then it tries to shill garbage at me. It's no fun anymore.
I'm in a country with more gun control than the USA and far fewer firearm deaths.
YouTube recommends pro-firearm 'freedom' propaganda to me.
I watch a transit infrastructure video, and YouTube recommends "Karen brake-checks on the highway" videos.
Also, out of nowhere, I get videos of climate activists being ridiculed.
The problem isn't so much the rabbit hole as YouTube pushing people toward certain political viewpoints.
I have been suspecting this for a while. I am a Christian, and YouTube regularly recommends channels about atheism with provocative titles not long after I watch a few videos on theology. Facebook has been doing similar things for the last decade, monitoring likes and reactions to determine what makes a person most upset and most engaged. It would not surprise me if Google decided to do the same to drive engagement.
The rage train is profitable and has no brakes, after all!
Speculating it's because (previously) the recommendation system was heavily tuned towards novelty. Once you watch a video (or channel) which opens up a new part of the network it gleefully pushes a disproportionate number of videos your way.
So I watched a few videos from Derek of "More Plates More Dates" about bodybuilding/steroid use, and sure enough I was getting various Jordan Peterson videos, some "why X woke thing is killing Hollywood", and even some repackaged Andrew Tate stuff. It's like an excited friend who hears you enjoyed Watchmen and has hundreds of comics to recommend. Except it's much less pleasant.
The novelty bias occurs outside of YouTube as well. I was walking in London a few years ago and an old guy asked me for directions to the local mosque. I had to look it up on my phone for him, and for a couple of days I had recommendations about other mosques/Muslim bookshops I might want to check out.
My problem is that it never recommends good stuff to me.
I didn't find Not Just Bikes videos through YouTube, but through Reddit.
Likewise, the one AI news channel I found that's made by someone who actually knows what they're talking about was also found through Reddit.
The only good thing YouTube has recommended to me this year is USCSB videos.
I've been on YouTube daily for ~15 years and my experience is totally different. I found NJB via the YT algorithm and I don't get any recommendations for AI spam channels. In fact, I get a pretty steady flow of high quality channels. Periodically, I have to hit the three dots on a recommended video and click "don't recommend this anymore". This does a pretty good job of filtering out crazy political videos and low quality reactionary content.
I haven't done it in a while, but it's getting close to where I'm probably going to pull the trigger.
As a general rule I do not login to YouTube. Instead I have some creators that I informally follow. And then, when my feed starts getting corrupted, I obliterate all vestiges of YouTube from my browser, start “fresh”, and re-prime with my list of favorites.
Also, as a rule, if something catches my eye, I open it in a private window, so as to limit my feed corruption.
It's far too easy to click on a video and be seemingly marked for life. It's easier to reset and start over than to fight the tide against you.
> Also, as a rule, if something catches my eye, I open it in a private window, so as to limit my feed corruption.
I do this too, however I suspect YouTube is guessing that's what I'm doing (from my mouse hovering over the video to right click it, perhaps) because my feed still seems to get corrupted, just more slowly.
I'm fairly certain it also does some traffic correlation by IP or similar. Probably on the logic that if someone else in your household (sharing the same gateway) likes a video you might too. Of course this also ends up associating things I open in a private window or with NewPipe with my account.
It's not really officially advertised, but it is first party. This guide [0] shows how to find it, but it's just https://www.youtube.com/feeds/videos.xml?channel_id=CHANNELI... and the guide just explains how to find CHANNELID, since it isn't explicitly shared (which amounts to going to a channel page, viewing the page source, and searching for "browse_id", which will be a key in a JSON string where the value is the channel ID).
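For anyone who'd rather script those steps, here's a minimal Python sketch of the same procedure. It assumes the channel page still embeds "browse_id" in its source JSON (it's scraping, so it can break whenever YouTube changes the page layout), and the channel URL at the bottom is just an example:

    # Fetch a channel page, pull the channel ID out of the embedded
    # "browse_id" JSON, and build the first-party RSS feed URL.
    # Assumption: the page source contains "browse_id" followed by a
    # "value" holding the channel ID; adjust the regex if that changes.
    import re
    import urllib.request

    def channel_feed_url(channel_page_url):
        req = urllib.request.Request(
            channel_page_url,
            headers={"User-Agent": "Mozilla/5.0"},  # some servers reject the default Python UA
        )
        html = urllib.request.urlopen(req).read().decode("utf-8")
        m = re.search(r'"browse_id"\s*,\s*"value"\s*:\s*"(UC[0-9A-Za-z_-]+)"', html)
        if m is None:
            raise ValueError('"browse_id" not found; the page layout may have changed')
        return "https://www.youtube.com/feeds/videos.xml?channel_id=" + m.group(1)

    # Example (any channel page should work):
    print(channel_feed_url("https://www.youtube.com/@NotJustBikes"))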
The underlying phenomenon is not totally new. You always had a little money being made with bumper stickers, NRA memberships, Che Guevara t-shirts, guys on AM radio hawking anti-commie spray or magic crystals. But the vast majority just got Walter Cronkite echoing safely average thoughts between wholesome ads for McDonald's and Chevrolet. Radicalization wasn't a billion-dollar market, way back when. (Unless you count rock music or violent video games, or religion.)
But for a while now, starting with cable news and now with social media, the invisible hand of the market has been playing with some dangerous memetic levers.
In the meantime, Americans' trust in their institutions is languishing at rock bottom[0]. And when people lose trust like that, it becomes a self-fulfilling prophecy. Things don't look good, and the problem might be much bigger than YouTube's algorithm.
> In the meantime, Americans' trust in their institutions is languishing at rock bottom[0]. And when people lose trust like that, it becomes a self-fulfilling prophecy. Things don't look good, and the problem might be much bigger than YouTube's algorithm.
Why is trust in institutions considered good in itself? If institutions don't behave in a way that engenders trust, why is that the fault of the citizenry? It's like being told by an abuser that they aren't abusive.
Why are we framing trust as a moral imperative here? Why is it that all other governments need to earn the trust of their citizenry, but the American government for some reason is entitled to it a priori?
> Why are we framing trust as a moral imperative here?
I'm not. Where did you get that impression? I was merely pointing out the fact of declining trust (with data). As for why it's declining, it's obviously a complex issue: the initial causes, and the attitudes that can become ingrained to prevent the situation from improving (why act in good faith if people will always assume ill intent?), how tensions can increase between the different strata of culture, how people become more isolated and self-interested out of a sense of survival necessity.
And how capitalism will, well, capitalize on this environment, and have its own effect in turn.
> And how capitalism will, well, capitalize on this environment, and have its own effect in turn.
I'm with you on both this and trust (with one caveat). I get the impression that you talk about trust in individuals and trust in institutions as the same thing. They aren't the same thing.
I trust the people I deal with on a regular basis, but the institutions can all go to hell. I find it disheartening that you criticize capitalism, yet talk about "trust in institutions" like it's a good thing. The institutions are agents of capitalism. The politicians, the newspapers, the C-suite: all of them are defenders of capitalism.
> and the attitudes that can become ingrained to prevent the situation from improving (why act in good faith if people will always assume ill intent?)
I have no idea why you talk about "trust in institutions" in one comment, and then try to associate it with trust in individuals. People don't need to "make peace" with institutions in order to trust each other. This attitude is conservative in nature.
No, I'm talking about trust in institutions. When I ask "why act in good faith if...", I mean as an agent of an institution.
> but the institutions can all go to hell
On the spectrum from "present institutions are always our friend" to "they are irretrievably malicious and evil", the extremes are not good places to be. One end is idiotically naive, and the other means we're imminently going to suffer in a civil war or be enslaved. Maybe there's productive room in the middle, where we can try to make things better, however unlikely that might seem? Otherwise, assuming a foregone conclusion might cause you to allow that conclusion to come about.
> One end is idiotically naive, and the other means we're imminently going to suffer in a civil war or be enslaved.
We aren't stuck with the institutions that we have. We have the resources to create new ones as old ones die out. Some institutions could go to hell and the world would keep spinning.
And I don't think the possible outcomes are restricted to a civil war or slavery; we have no idea what the set of possible outcomes even is. If people choose to get rid of them, that's their choice to make, and it's a choice that many aren't even aware of.
If institutions have nothing to fear because they know that the population is psychologically dependent on them, then what is going to make them accountable? Other institutions?
In very broad strokes, the problem with institutions is often that they don't have the option to act in good faith. A good example is the question: should you talk to police? A lawyer would say no, absolutely never talk to police. The reason is not because police are bad people - but rather simply that it is their job to collect evidence to aid in your prosecution. It is not their job to ensure a just outcome.
If the study collected data post-2020, and YouTube announced changes in 2019, then what use does the study genuinely have? Perhaps the changes YouTube made were broadly successful, but again there's no prior data to compare against.
Would have liked to see deeper dives into specific demographics, concerning both older generations, who have more free time, and 13-18 year olds. Both groups are hypothetically vulnerable to such content for a variety of reasons, and usually without proper supervision.
Ironically, the narrative of "right wing rabbit hole leads people to <whatever scary bad thing>" is itself far more concerning to me for the stability and functioning of the US.
I frequently talk to people across the full spectrum of political beliefs, and something that's become very clear to me is that there is almost no overlap between "what Republican-voters want" and "what liberals think Republican-voters want".
I read online that everyone who voted for Trump is a Nazi full of hate for everyone different from them. Then I talk to Trump voters in real life and they're generally decent people worried about their jobs and the safety of their kids, who just have different ideas about how to solve problems, and different priorities for which problems are most important, than I (a purely Democrat voter) do.
The "rabbit hole" might be a problem, but the "shining a magnifying glass on the 0.0001% of people at the bottom of the hole" problem is bigger.
The study keeps talking about extremists but how is that defined? Also why would they select a specific sample from the random population? Seems like a dubious or potentially flawed study design.
I don't understand this at all. I get just as much extremist content, but I get it advertised at me as well. I've had Daily Wire and Epoch Times content forced on me at preroll thousands of times now, and before that it was Prager U.
And when the social media Naxalite guerillas mount a decapitation strike on Sacramento, we'll be able to complain about it. There is little symmetry here in terms of actual ideologues, and there is systemic bias in TOS enforcement and in funded outreach, of which the left has very little and the right has an unending flow.
Right wing rabbit holes lead to white supremacist videos and groups planning to overthrow elections. Left wing rabbit holes lead to… what? Queer Eye clips?
Questioning gender identity, abject depression and shaming for who you are, content that encourages property destruction and violence as long as it's for the "right cause," information about how to go about making permanent changes to your body, ideology about and around a system of government and economics responsible for the deaths of ~120+ million people, and things of that nature.
But yes, let's pretend that one side somehow has the high ground.
The misinformation distribution has spread to Telegram and other sources, video sites and livestreams that aren't part of the main HN / SV zeitgeist.
I met a Flat Earther a couple of weeks ago. In the interest of building common ground and understanding, I started talking to him about his sources and his journey into this belief. His journey started with a trusted friend passing on a set of videos presenting this 'alternate' view of reality.
He passed on a few references, names that when searched bring up the usual set of QAnon results and unsavory backgrounds interspersed with links to sites and streams I've never heard of. I stopped at this shallow look at things but an intrepid journalist could map a variety of sources of livestreams and recorded talks that spread this.
I think it's time to acknowledge that "when searched" should really (statistically, if not in this case) mean "when searched on google.com, which happens to share some stuff with YouTube".
The search results on Google have been abysmal for a very long time; they might be harder to study for outsiders (no good indicators like popularity on individual links), but I wouldn't be surprised if they're just as bad an influence as the YouTube algorithms.
Of course they are. The point of Google isn't good search results; it's results that generate ad revenue. Their search algorithm is absolutely going to prioritize the same "engagement"-heavy nonsense that YouTube does.
They were. It was a fun Usenet meme I used to see, much like Trump's earlier runs for president: at some point you noticed that a lot of people weren't laughing.
If the changes are so successful, then why does YouTube keep putting videos like "Canada's woke nightmare - a warning to the west" in my feed, when nothing else in my history would suggest that I gave a damn?
I have disabled watch history and as much personal tailoring of the algorithm as I possibly can, to the point where my youtube main page is now completely empty, with just a prompt to enable watch history. I am now mostly free of those suggestions.
However, I do occasionally get flooded with "rabbit hole" video suggestions underneath a playing video. My personal experience is that it most often happens underneath mainstream news reporting from "reputable" sources (Reuters, etc.).
The most recent example was a news story about wildfires, where the first 20 recommendations underneath were exclusively China hysteria from far less "reputable" sources. Videos predicting imminent Chinese societal/political/economic collapse, or a major police state crackdown, or invasion, or about economic imperialism, etc. Personally, I think there's zero basis for any of that, but even if I had a more "mainstream" view that China is a peer competitor and that we're in a new Cold War, these videos would be radicalizing, and exclusively in a more aggressive direction.
Misinformation videos for a corrupt major political family in the Philippines started popping up early 2010s. I remember watching one as a kid and watching several related videos.
However, I don’t think I truly absorbed them only because I actually knew older people that were detained and tortured during their reign. If I didn’t have first-hand sources and access to books, looking back, I would have definitely believed them.
A decade later and that family is back in power. When I talk to their supporters, they bring up details I remember from those videos as facts. Stuff like hidden gold, royal bloodlines, etc.
I'm happy we have these safeguards now in place. I hope the LLM and generative AI models popping up will have the same safeguards. However, much like YouTube's problem, I fear that we will only be able to put them in place once the damage is done and maybe irreversible.
I can't go a few videos without a famous Romanian house-arrested person saying non-mainstream things about women, or a famous meat-eating Canadian academic deadnaming a trans athlete, so I'm not sure anyone's being saved from going down the rabbit hole yet?
YES, 100% the same experience, at least with the first fella and his brother. I don't know how many more times I have to tell YouTube that I'm not interested.
It's almost like the algorithm is constantly testing the waters just in case I'm suddenly more receptive to being radicalized into that particular world view.
I'm curious about this. I watch youtube a bunch and I never see rabbit hole videos. E.g. I've never seen the videos you mentioned. Is it because I'm signed in? I don't understand why I don't get any shady videos from youtube.
They're in the Shorts section, which seems to have very different recommendation results compared to the regular feed. My regular YouTube video feed is almost exclusively stuff that's similar to what I usually watch, to the point where I need to go hunting for new content. My Shorts feed clearly reacts to what I watch, but I get a lot more content that's well outside my interests, including new uploads by random users and alt-right garbage. The alt-right stuff is usually a reupload by some tiny account, so it seems like there's a concerted effort to keep putting it in the feeds.
Do you have a source for "... and YouTube leadership is known to lean in that direction [left-wing]"? As a socialist, I can't fathom how Google's leadership would be considered left-wing. Liberal on some social issues, sure but not left-wing.
Be glad. What videos "the algorithm" thinks you want to watch depends on many factors: location, videos you already watched, probably Gmail contents, purchase history from Google Pay, music you like... the list is endless, and I am sure not transparent at all even to some Google engineers. And don't you dare click on the wrong video by mistake! A simple dislike isn't gonna get you out of that hole.
I am actually mostly fine because I barely watch YouTube, and ads and suggestions are only one reason.
Yes, being signed in encourages personalization to your profile, so YouTube can suggest more things you have a high chance to watch and fewer random/general things, the latter of which rabbit holes require.
I'm lucky enough to not (yet) see the first person, but I occasionally see Canadian academic guy. Not sure how many times I have to click "Don't recommend this channel" before they get the hint.
Correction: He is under arrest in Romania but is from England and the U.S. I don't think the fine people of Romania would want you mistaking him for one of their own.
Same here, no matter how many times I click dislike on said Canadian academic.
My theory is that the algorithm has figured out that even people who don't like him can still fall down the rabbit hole. And when they do, they become hyper-engaged. Which makes the Hail Mary attempt worth it.
I start questioning my Youtube life choices when I start seeing videos that include the words DESTROYS.
If I want to explore certain videos, I make a habit of copying the link, opening a new window in private mode (in whatever browser), and viewing it through that: no sense 'contaminating' my preferences with exploratory paths. Even for some channels that I like occasionally (reaction stuff, e.g. Blind Wave) I often do it in private mode so that I don't get flooded with the genre.
Every so often I have to declare 'YouTube bankruptcy' by deleting my cookies and then manually going to a few channels that I know gear toward my general preferences (personal finance (not investing), history, IT/tech, urban planning, etc.).
Might just be your personalized experience. I get music, comedy and old film clips recommended to me on YouTube. It's actually gotten really good at predicting what I might enjoy watching.
By contrast, Twitter is always showing me controversial tweets from people I don't follow. But then that's mostly why I use the site, to enjoy the arguments and drama.
The fact that you are bringing them up proves that you engage somewhat with content related to them. There are plenty of right wing people who complain about AOC, and know a lot about her.
Something about know your enemy? Just because I disagree with them doesn't mean I haven't listened to the arguments. But I haven't done it through my YouTube account, explicitly to avoid polluting my recommendations, and yet to no avail.
They track EVERYTHING. They know how long you pause while scrolling when a Tate video comes up. They have cookies from other websites you visit, cross-referenced by source IP.
So they proudly say that YouTube is better at content moderation. Yay! Victory!
Now we are free from any harmful content. Now we are free from the evil of this world thanks to big corporations; they will keep us from harm!
They say they remove conspiracy theories, hate speech, disinformation, misinformation. I think we can lose something more than that.
So there will be another crisis in which politicians could have been involved? Well, that is a conspiracy theory. Certainly YouTube cannot host such videos.
Do you have any controversial views? Well sorry pal, you can have such views outside of our platform.