Ask HN: Any way to hide all posts from specific domains?
Lately the number of high-ranking submissions from sites like economist, nytimes, and even twitter seems to be on the rise. I don't remember ever reading anything interesting from these sources, so I am thinking maybe I can increase the signal by having all posts from those domains hidden automatically. Maybe someone has a solution for this?

Here's a quick Greasemonkey script I just whipped up that you could start from. You would need to adapt the "includes()" call to match any URL you don't care for:
(function(){
    'use strict';
    var links = document.getElementsByClassName("titlelink");
    for (var link of links) {
        if (link.href.includes("twitter")) {
            var owner = link.closest(".athing");
            owner.nextSibling.remove();
            owner.remove();
        }
    }
})();

Thanks! This works with one exception: when two links in a row are from the same domain, only the first one is removed. I don't know any JavaScript, but this is my attempt at a modification:

let domains = /twitter\.com|cnn\.com/;
(function(){
    'use strict';
    var links = document.getElementsByClassName("titlelink");
    // Collect the matching rows first, then remove them in a second pass.
    // Removing rows while iterating the live HTMLCollection is what made
    // the first script skip the second of two consecutive matches.
    var owners = [];
    for (var link of links) {
        if (link.href.match(domains)) {
            owners.push(link.closest(".athing"));
        }
    }
    for (var owner of owners) {
        owner.nextSibling.remove();
        owner.remove();
    }
})();

I use this Chrome plugin (HackerNew [0]) for a few QoL improvements, and it can filter out posts by user/keyword/domain.

[0] https://chrome.google.com/webstore/detail/hackernew/lgoghlnd...

Thanks! I am trying it now. It's quite nice, but it does a bit too much. Also, the hidden posts are still left as empty lines, so it's not a real "hide" as implemented by HN.

Totally fair, and I don't use the hide feature so I can't speak to that. Since HN doesn't offer this kind of automatic hiding, the best you'll probably get is a way to remove those rows and have "$total - $hidden" links show on the page (which is what the Greasemonkey scripts do).

A Greasemonkey/Tampermonkey userscript will make quick work of this: compare each submission's domain to the list you want to ignore. https://www.tampermonkey.net/ Optionally, you can publish the userscript for others.
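
As a concrete starting point, here is a minimal Tampermonkey sketch combining the snippets above. The BLOCKED list is a placeholder, and it assumes HN still marks story links with the "titlelink" class, as the scripts in this thread do:

// ==UserScript==
// @name         HN domain filter (sketch)
// @match        https://news.ycombinator.com/*
// @grant        none
// ==/UserScript==

(function () {
    'use strict';
    // Placeholder list of domains to hide; extend to taste.
    const BLOCKED = /nytimes\.com|economist\.com|twitter\.com/;

    // Collect first, remove second, so removals don't disturb the
    // iteration (the consecutive-rows problem discussed above).
    const doomed = [];
    for (const link of document.querySelectorAll('.titlelink')) {
        if (BLOCKED.test(link.href)) {
            doomed.push(link.closest('.athing'));
        }
    }
    for (const row of doomed) {
        row.nextElementSibling.remove(); // the score/comments row
        row.remove();                    // the title row itself
    }
})();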

Try adding something like this to your uBlock Origin "My filters" (if you add more domains, add them to each line, separated by a "|"). For example, this blocks nytimes.com and economist.com; the three rules hide the title row, the subtext row under it, and the spacer row that follows:

news.ycombinator.com##.itemlist > tbody > tr:has(.sitestr:has-text(/nytimes.com|economist.com/i))
news.ycombinator.com##.itemlist > tbody > tr:has(.sitestr:has-text(/nytimes.com|economist.com/i)) + tr
news.ycombinator.com##.itemlist > tbody > tr:has(.sitestr:has-text(/nytimes.com|economist.com/i)) + tr + tr

I built this thing https://ontology2.com/essays/ClassifyingHackerNewsArticles/ based on these gripes: https://ontology2.com/essays/HackerNewsForHackers/

What I'd say based on that research project is that my intuition that I wanted to block things based on specific rules and keywords wasn't so hot. For instance, I don't feel that Hacker News is crazy for Apple these days; when I was doing that project there was a high rate of Apple articles because Apple had just announced a round of products, and those products were derivatives of the previous year's products and not exciting to me. I still think paywalled articles are worth either blocking or automatically routing to archive.is, and there certainly is a set of bad sites (medium, right-wing substacks, etc.). But the learning approach is highly effective against the Apple obsessions, the dogpill theorists who can't stand that people are talking about anything other than Ivermectin, and most of what is truly annoying.

A problem that still bugs me is what I was going to call "me tooism" when I was writing those articles; thankfully I didn't publish it before the #MeToo hashtag got popular. It isn't that particular phenomenon but a more general one: somebody sees that an article about topic X got 500 votes on the front page of HN, so 10 people quickly write half-baked articles replying to the original because they think that if they write something fast enough it will also get 500 votes. That's closely related to the problem of "news", where an article about something (say, the BBC reports that some people in Ukraine blew up a bridge to slow down Russian invaders) is inevitably followed by a number of other reports by other news agencies about the same thing. Maybe one of those articles is newsworthy, but the rest of them aren't. Really they should be clustered as a single topic and filtered on that basis, but what that means is not so simple: do you show only the first article on the topic, or the "best" article? Does the cluster end up including other bridges getting blown up in Ukraine? Other bridges blown up everywhere, etc.? It's easy to talk about but not so easy to implement.

I want a way to hide all company-specific job postings on the front page, other than whoishiring.
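
The same userscript trick should handle that. A minimal sketch, assuming (as the front page currently renders) that job ads are the only story rows whose subtext carries no score, while the "Who is hiring?" threads are ordinary scored submissions and so survive the filter:

(function () {
    'use strict';
    // Front-page story rows only; comment rows also carry .athing,
    // so skip the .comtr variant in case this runs on an item page.
    for (const row of document.querySelectorAll('tr.athing:not(.comtr)')) {
        const subtext = row.nextElementSibling; // the score/user/comments row
        if (subtext && !subtext.querySelector('.score')) {
            const spacer = subtext.nextElementSibling; // blank spacer row
            if (spacer) spacer.remove();
            subtext.remove();
            row.remove();
        }
    }
})();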