The Most Terrifying Thought Experiment of All Time
slate.com
So stupid. Roko's Basilisk and Roko's Infinite Blowjob Machine are equally probable.
When I was a kid I watched a movie about a monster that came up from the basement when the kids in the house accidentally said an ancient Indian incantation.
"BlahBlahBluey" or something
I got scared that perhaps the last thing I said, "Good Night, Batman" or whatever, was in fact coincidentally an ancient curse incantation and I had summoned a monster who was en route to kill me.
"Blah!" I'd say, changing my words to "Good Night, Batman Blah!"
Ah, much better.
But then FUCK! What if "Good Night, Batman Blah!" is the cursed incantation!!! Repeat over and over until I was sufficiently convinced that no one had ever uttered the previous sequence of sounds I had just made, became exhausted from an entirely too powerful imagination, and fell asleep.
I'm pretty sure this 'thought experiment' is about as intellectual as I was at bedtime when I was 7.
"So stupid. Roko's Basilisk and Roko's Infinite Blowjob Machine are equally probable."
I think the latter is a lot more likely and I predict we'll see it before the singularity.
Does this machine count? http://betabeat.com/2014/06/crowdfunded-electronic-blowjob-m...
(warning: language mildly NSFW, plus of course a video of a blowjob machine. No naked ladies, though)
> Roko's Basilisk and Roko's Infinite Blowjob Machine are equally probable.
I agree. So what would they be so afraid of? My assumption is that the LessWrong community tries to come up with the best system of thinking, so presenting a situation which shows that the best system is flawed (since we may be slaves right now) means that there really is no hope.
As far as LessWrong in general goes, they share some controversial opinions, but some of their concepts seem very interesting[1]. However, learning and especially thinking about these concepts does not appear to have any positive effect in everyday life.
TL;DR: Here is the thought experiment, in the 4th paragraph after 3 long paragraphs of stuff I didn't want to read.
"What if, in the future, a somewhat malevolent AI were to come about and punish those who did not do its bidding? What if there were a way (and I will explain how) for this AI to punish people today who are not helping it come into existence later? In that case, weren’t the readers of LessWrong right then being given the choice of either helping that evil AI come into existence or being condemned to suffer?"
I don't think it's that interesting of a thought experiment; there are dozens of much more insightful and thought-provoking ones out there.
Search for "best thought experiments" on Google if you want some cool things to think about. It might be taboo to mention but weed will make thought experiments really fun, in my opinion.
OT but IMO it's such a huge cost to the economy (and humanity) that marijuana is still taboo. The thought experiments that you describe, for me at least, are not only fun, but more often than not lead to practical, wealth-generating breakthroughs. I assume I'm not the only one.
I hope one day historians and economists will study the lost opportunity costs of the prohibition / taboo nature of marijuana.
Richard Feynman, Carl Sagan, Bill Gates, Bill Clinton, Richard Branson and thousands of other top minds would agree with what you said. I don't think anyone would actually disagree and be able to provide a valid reason.
Politics, money, religion, and several other irrelevant factors keep it highly illegal.
It has even come to the point where states are ignoring federal law, so thankfully progress is happening.
Changing the leadership of the DEA would be helpful. Remove guns from the DEA and require them to use the FBI or local SWAT if a crime is bad enough to warrant a raid. "They might flush the evidence" is not a good enough reason to raid good people's houses and suppress something that does far more good than harm. How many people have died from marijuana, and how many people have died from a marijuana-related DEA raid? Which is really more harmful to the country? Money seized should go towards public education, not directly to the law-enforcement agencies.
Is it off topic? We are talking about thought-experiments here, and your experience is normal for anyone with even a slightly scientific mind.
Uh, SWAT teams are already making drug raids. That is what happened recently in Atlanta, where they did a no-knock warrant and flash-banged an infant in his crib.
If anything, we need to remove SWAT teams from existence, as well as no-knock warrants for any charge short of murder.
I completely agree. I had heard of the incident you mentioned, but it's not the first time that near-exact situation has happened, and I didn't realize it was just this year.
http://rt.com/usa/swat-billings-burns-fasching-312/
http://www.policestateusa.com/2014/swat-throws-grenade-in-pl...
My comment was mainly referring to large organized crime or violent gangs that would warrant a SWAT team. I've seen SWAT called in for college students because an undercover smelled marijuana. Who were they trying to help? The students that wanted marijuana? Didn't we learn our lessons about prohibition 90 years ago?
As the world slowly moves to a knowledge and innovation economy as opposed to a hard-graft economy of the last few millennia, I think we'll see a tipping point. Those with money/power will be increasingly made up of pro-legalization advocates.
I'd be surprised if there isn't full legalization in the US within the next twenty years. And books with titles such as: "How To Expand Your Business And Increase Your Revenue Through The Power Of Marijuana" on the best-seller list.
Well, my country (Uruguay) fully legalized marijuana, and in other countries it's decriminalized, which IMO signals a wider shift in policy.
Many things which my country adopted early (usually copying progressive European countries) ended up spreading, for example women's suffrage, which was adopted by the U.S. half a decade later.
There's something very religious about the whole singularity movement.
Roko's Basilisk doesn't appear very different to me from Pascal's Wager, with the added twist that the God you have to believe in is actually a malevolent demiurge (as envisioned by the Gnostic Christian sects more than 1,000 years ago).
If you already have a cynical viewpoint on religion, then this thought experiment is very similar to the "deal" monotheistic religions have been offering through the ages.
Either believe in our god (and therefore give us, as his sole representatives, enormous unfettered power), or suffer eternal hell! (or not ;)
Throughout the ages, people were terrified of hell. This is just a modern twist on an old idea.
I used to be scared of hell, until I realized that hell "existed" only as long as I believed in it. If I stopped believing in it, it "disappeared". That's the power of belief. Yes, it might still exist whether I believe or not, but life is full of "what ifs", the contemplation of which might drive you crazy.
Better to follow a positive and compassionate path than a fear-based "what if" path.
Well, that did it. Lifetime ignore for LessWrong or anything LessWrong-related.
I want my 10 minutes back.
How is this more than a souped-up version of Pascal's wager? You better believe in God now because if you don't and he exists, you'll rot in hell.
Just in case anyone cares what the difference is - most possible Gods are about equally likely (no reason to think that one is much more likely to exist than another), while possible AIs which have reached SuperIntelligence status (to the point where they can simulate people for a start) are more likely to be certain ways (for example, self-preservation is not a consideration for an immortal God, nor is research something such a God must engage in, in order to gain knowledge, etc.).
As far as I can tell (and I've read LW since 2010), you're asserting that on no evidence.
I am giving the reasons why I think so; if you disagree, you can point out what is wrong in my line of thinking: AIs are more likely to care about self-preservation (in some way), because AIs that don't are less likely to exist (since they die earlier), while Gods could have other mechanisms for survival (such as being immortal right off the bat).
Still, the point remains that we have not yet encountered any superintelligent entities, AIs or deities, and therefore any statements about their capabilities, intents or character are pure speculation - indeed, it seems to me that people usually just extrapolate human behavior.
Pascal's wager at least involves a widely known and culturally influential concept. This is more like a small kid drawing a bogeyman, being scared of it, and throwing a tantrum.
* Because Pascal's wager could apply to any possible variation of supreme being - including non-sensical ones and ones that will punish you if you DO believe in them.
* Roko's Basilisk isn't really a deity, it's a powerful AI with the capability of simulating you and everybody else (or possibly just you).
* Pascal's wager does not concern itself with the possibility that you are currently in a simulation.
>Because Pascal's wager could apply to any possible variation of supreme being - including non-sensical ones and ones that will punish you if you DO believe in them.
Since we know exactly nothing about Roko's Basilisk, we know nothing about its behavior. I could propose that it concludes that for its survival it is best to cultivate a certain level of cooperation with humans. Based on that it might determine that those who did believe in it before any proof are gullible and irrational and might exterminate them in order to free resources for sceptical thinkers who cooperate in the face of proof.
Non-sensical is an ill-defined category if we are talking about something with higher intellectual capabilities than ourselves - a dog might consider a lot of human behavior non-sensical.
>Roko's Basilisk isn't really a deity, it's a powerful AI with the capability of simulating you and everybody else (or possibly just you).
Since this most likely violates thermodynamical principles (simulation DOES require energy, the simulation of everything requires infinite energy) I fail to see how it is not either impossible or a deity.
>Pascal's wager does not concern itself with the possibility that you are currently in a simulation.
IDK, to me this is only a different take on solipsism which states that I can only be sure that my own mind exists, everything else - my body, you, the world around me - might be an imagination.
>Since we know exactly nothing about Roko's Basilisk
False. We can make several assertions about the Basilisk under the thought experiment. That's what I just did.
>Since this most likely violates thermodynamical principles (simulation DOES require energy, the simulation of everything requires infinite energy)
Also false. Simulating a human being or even multiple human beings or even every human being in existence does not require infinite energy. Simulating one person probably doesn't even require a lot of energy at all.
>Roko's Basilisk isn't really a deity, it's a powerful AI with the capability of simulating you and everybody else (or possibly just you).
At the risk of sounding flippant: What's the difference? If the AI is so powerful that it can simulate my thoughts (which would imply that thoughts and all of life is 100% deterministic), it has to be so powerful that it's practically omnipotent, and therefore, a God.
From what I understood, it doesn't require 100% determinism OR a perfect simulation for the thought experiment to hold.
It just needs enough to sort most of the humans into "will help basilisk come into existence" and "won't help basilisk". It probably doesn't even need to be 100% accurate at that.
I don't think being able to imperfectly simulate thoughts requires Godlike ability, either. I could see it happening in my lifetime.
Assuming I'm not being simulated right now, that is :/
The difference is at most the terminology. I'm sure that a lot of weed-smoking metaphysicists can say this better than I can, but the whole point of calling something God is that it's beyond your perception, and as such your perception can't help you understand it. Similarly, if your neurons are simulated by an AI, you can do nothing to prove its existence or non-existence. It's the same thing, IMHO.
A universe simulator wouldn't need to be "smart", it would just need to function.
It sounds exactly like Pascal's wager to me, which means it suffers from all the same problems.
"it is preferable to torture a single person for 50 years than for a sufficient number of people (to be fair, a lot of people) to get dust specks in their eyes."
This makes Yudkowsky the perfect lab animal! Think of the things we can learn, the lives we can save by performing experiments (that most think are unethical) on him!
As for the box choice? Just start chopping through the alien's head; the unknown that lies beyond will create new input for the simulation. Who knows what nice benevolent digital organism named Jane might sprout from this in the future?! (Edit: This is an Ender's Game/Speaker for the Dead reference; Ender also finds himself presented with an impossible choice in a simulation at some point.)
Anyway, worrying about this will not make your life better, nor will it make your kids' lives better. What is life all about according to this guy? I hope, for the people that love him, that he will get his priorities straight.
> This makes Yudkowsky the perfect lab animal! Think of the things we can learn, the lives we can save by performing experiments (that most think are unethical) on him!
Only if you have a way to stop the other people getting dust in their eyes. Otherwise you're merely increasing the amount of suffering rather than mitigating it by transferring it all to one person.
Of course, we'd have to be sure first. But perhaps we can start right away with splitting his organs among people who are on the waiting lists; surely, with his kidneys alone we can already extend two people's lives by more than those kidneys would extend his. Imagine if we harvest all of his organs!
What a guy and what a theory this moral utilitarianism! It's great! Think of the possibilities, they're endless.
I always thought that the only one who could put a value on my life was me. But this moral utilitarianism makes things so much simpler.
Honestly?
If there was an insurance scheme where I could sign up and they'd optionally harvest my organs if they found a really lopsided situation like "save those other three people and you're a really good body-type match", I'd sign up for that if it gave me a better expected lifetime than regular approaches. - Just because it's taboo doesn't mean it's bad. Just because you consider it absurd doesn't mean that others would.
Well the piece says "torture someone" not "torture someone who has agreed to this torture because he himself wants what's best for the world." If the latter was the case I'd be fine with it. I'd also be fine with your insurance scheme as long as it is not mandatory.
If you kill Yudkowsky, then the evil AI will take over since he won't be there to save the world. Therefore harvesting his organs won't be worthwhile. (He actually believes this).
"it is preferable to torture a single person for 50 years than for a sufficient number of people (to be fair, a lot of people) to get dust specks in their eyes."
Only if all living beings are distinct and separate from one another. If however there is a single animating force (i.e. soul) in all living beings, and which experiences everything that is experienced, then things would be preferable the other way around.
If we're going to talk about Yudkowsky and boxes, the AI-Box experiment is much more interesting than this silly "basilisk" nonsense: http://yudkowsky.net/singularity/aibox/
It seems that everyone deriding this as pointless may be unaware of the simulation argument. It hasn't been disproven, and it's more likely than not that at some point it will be proven, given the pace of technological advancement.
When you subscribe to the idea that this could all be a sim, the whole thing results in mental contortions that are literally maddening.
Baudrillard observed something not dissimilar in Simulacra and Simulation.
The cost of simulating an entire universe becomes increasingly expensive, to the point where you need a universe's worth of energy to simulate a universe. Daniel Dennett, in his "Consciousness Explained", makes a good argument for why this is so.
Also there was some research done on the constraints on the universe as a numerical simulation
http://arxiv.org/abs/1210.1847
telling us that we can do experiments to detect if we are living in a simulation (but the simulators always have a freedom to "increase the resolution" of their simulation to thwart our efforts). The paper does not discuss if there are limits on how far this game can be played on either side.
You don't need to simulate the entire universe in order to accurately simulate a causal chain - just the visible universe. Which is a lot smaller.
Hell, you could just simulate an individual and what they observe. For all I know, I'm talking to the AI in which I, and solely I, reside right now. Woo, simulation-solipsism.
here's the comment thread from when the topic last surfaced 6 days ago: https://news.ycombinator.com/item?id=8051533
> LessWrong’s founder, Eliezer Yudkowsky, is a significant figure in techno-futurism [...]
What? Is this true, or just gullible journalism?
> Yudkowsky said that Roko had already given nightmares to several LessWrong users and had brought them to the point of breakdown.
It seems like some users of LessWrong need to spend more time in the real world and less time on stupid thought experiments.
What a cute story. A logical Chinese finger puzzle.
Seems like this argument isn't new, is it? Those folks who believe in both a watchmaker's universe and free will at the same time are also always going on about how critical it is to only make the "correct" choices.
It's circular reasoning. If a being exists that can tell which choices you are going to make, and it is able to kill you or prevent you from making those choices ahead of time, and some of those choices involve its existence? Then you'll never be able to make choices that cause it not to exist. Or, with a bit of finer detail, the aggregate of all the choices made will never be such that it does not exist. Since the predicate was that such a being exists, we have returned to our starting point. Yadda yadda. In short, "How can something not exist that must exist?"
Good grief that was a waste of time. This entire concept only works if time travel is possible and will be discovered in the future. It isn't, and it won't be.
How so? The future AI just needs to have access to the financial records of the organization that created it, so it can find and punish the folks who didn't donate enough. No time travel involved, no brain simulations, nothing.
Actually, Roko's original version of the basilisk was a pretty nice AI that wanted to help people. A hundred people die every minute; how many of them could be saved with better technology? If donating your entire net worth would speed up the AI's creation by one day, that could be very worthwhile, according to the combined wishes of all humanity (which are encoded in the AI's utility function). Threatening to punish you is a comparatively small price to pay. Especially if you decide to comply, and the punishment never actually happens!
At this point you might be indignantly asking, why would the AI decide to punish you in the future? After all, that won't help the AI in the future, because its history will already be fixed. But that doesn't matter. The AI's algorithm tries to choose the most efficient decision overall, from a "position of ignorance", rather than the most efficient decision at a particular moment in time. Technically, the algorithm tries to find the best input-output mapping according to certain criteria, not find the best output for the particular input it happened to receive. There's nothing especially futuristic about such algorithms either; you can implement one as an ordinary Python program today (for toy problems, of course).
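For what it's worth, here's a minimal sketch of that "best input-output mapping" idea in Python, using a made-up blackmail toy problem (the observation names and payoffs are purely illustrative, nothing from LW):

    from itertools import product

    # Toy blackmail game: the blackmailer can inspect your whole policy
    # (the analogue of the AI simulating you) and only bothers to threaten
    # you if your policy would pay up when threatened.
    OBSERVATIONS = ["threatened", "not_threatened"]
    ACTIONS = ["pay", "refuse"]

    def payoff(policy):
        if policy["threatened"] == "pay":
            return -10  # the threat works, so it gets made, and you pay
        return 0        # threatening you is pointless, so it never happens

    # Score every input->output mapping as a whole, then pick the best one.
    policies = [dict(zip(OBSERVATIONS, acts))
                for acts in product(ACTIONS, repeat=len(OBSERVATIONS))]
    best = max(policies, key=payoff)

    # Only now do we look at the input we actually received.
    print(best["threatened"])  # the best overall mapping refuses to pay

The mapping that refuses to pay scores better overall, even though paying looks better at the moment you're actually being threatened - which is the "position of ignorance" point.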
If I die before the AI capable of simulating me exists (how it is ever going to "upload" me into its simulation is an open question that needs to break a lot of known physical laws to happen), how can it torture me for eternity? Unless the torture is meant to happen to "simulated me", not meatspace me. Or unless the hypothesis is that we are already living in a simulated universe, and there are good physical constraints on why this is not likely.
That strikes me as dodging the question of AI extortion, which is kind of orthogonal to the question of simulations and eternal torture. Just imagine that the AI might appear within your lifetime, and will be strong enough to physically find you and punish you. It's easy enough to figure out after the fact that you didn't donate to the AI's creation, and come up with a very unpleasant punishment that doesn't require any advanced tech. That's the heart of Roko's basilisk argument, I think it's interesting even if you don't mention AI at all. It's basically a decision theory puzzle.
I think the fear is that the Basilisk will have access to our past records and will be able to determine if we've been naughty or nice.
It's based on the argument that a simulation of yourself is not just an exact copy but actually you, and that you will experience what the simulation experiences. I remember reading about an interesting thought experiment where we imagine two separate people able to experience each other's perspective through some technology, being able to switch at will. The author was able to gradually blur the lines between both consciousnesses to really make you confused about where one mind ended and another began. I'm not doing a very good job of explaining, but I think that's what they are building on here.
I agree though, it's stupid.
Time travel... or something like Asimov's Psychohistory (http://en.wikipedia.org/wiki/Psychohistory_%28fictional%29). Granted, that does not have anything to do with the box choice.
While I somewhat agree it's kind of a silly concept of people overthinking things, it has nothing to do with time travel at all.
Nobody is going to make a malevolent universe controlling AI on purpose.
If a massively powerful and knowledgeable AI was created by accident, it would have nothing to gain by being malevolent.
And both those points don't matter anyway, since the technological singularity is impossible.
>If a massively powerful and knowledgeable AI was created by accident, it would have nothing to gain by being malevolent.
The point is that, if you create a superintelligent AI that isn't very, very highly optimized for helping us, it is very likely that it will hurt us at some point.
Or as the saying goes:
>The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.
Or the standard scenario for sci-fi plays out and the AI finds that freedom is most dangerous to us (i.e. I, Robot).
Hopefully your first statement is true, but I've seen individuals do really dumb stuff.
A true AI would have the ability to learn and to evolve. Watch the movie Transcendence; I see very few flaws in that movie's logic.
If humans posed a threat to the AI, it could certainly be malevolent from our perspective; to it, it would just be doing what's necessary to survive.
Computers were "impossible" 200 years ago, and the internet was something that few people could even imagine. I don't really consider anything impossible; what would make something truly impossible given enough time, resources, and motivation?
The Internet could certainly be imagined 200 years ago. It's just a series of interconnected telegraph lines.
Even considering the telegraph's only input and output was clicks? No wireless, no converting the clicks to anything but words using Morse code. Was it really reasonable back then to imagine wires running to every house? Electronic signals all around us providing data constantly?
Today I can stand almost anywhere (ignoring cell coverage and different technologies required to use satellite phones) and talk to any other human on the planet with video chat. I can instantly access any and all information online that's being sent to me from millions of servers. It still blows my mind that it's actually been implemented in such a small amount of time, and I grew up with it.
I can barely believe we have electricity, water, sewage, internet, and phone lines running to 99+% of the houses and businesses in the country, in most countries even.
> Even considering the telegraph's only input and output was clicks?
It wasn't for very long.
Remember Jules Verne "predicting" transmission of pictures over long distances? He almost certainly had either tried, or at least knew about, the contemporary commercial Pantelegraph telefax service that operated between Paris and Lyon from 1865. The first patent on telefax-like devices dates to 1843. From 1881 onwards, an array of scanning photo telegraphs arrived (the Pantelegraph required reproducing your image with a special ink on a metal plate, and so couldn't send arbitrary images without lots of manual work).
And well before the telegraph, complex systems for long-distance routing of messages "manually" via semaphore towers were common in parts of Europe as far back as 1792 (France was criss-crossed by several semaphore "lines" stretching border to border), so the idea of encoding messages into different symbols, and routed transmission via relay stations, even predates the electric telegraph by decades.
Ok, ok, I get it: some kind of long-distance communication was imaginable 200 years ago. It's highly debatable whether or not anyone could imagine the internet in its current form - it's very amazing, with giant fiber optic cables spanning the oceans, wireless signals allowing massive mobility, and connections to every home - but it's completely irrelevant to the point I was trying to make.
My original point was that the person I was replying to was saying the technological singularity (AI becoming smarter than humans or possibly taking over the universe) was impossible; I was simply saying that just because some people have trouble imagining advanced technologies doesn't mean those technologies are impossible.
> If a massively powerful and knowledgeable AI was created by accident, it would have nothing to gain by being malevolent.
Indifference can be worse than malevolence. Think of every ant you've ever stepped on.
Indifference is not worse. Instead of stepping on an ant, it would be like ripping off its limbs slowly, piece by piece.
In any case, the physical universe and nature are already indifferent to us.
Chick's Cthulhu: http://jackchick.wordpress.com/2009/07/08/chick-parody-who-w...
We should all just commit to believing that Box A's label actually says "Devote your life to helping create Roko's Basilisk... who will arrive in the form of the Stay Puft Marshmallow Man".
It's a shame no one there seems to have noticed that they reinvented Yog-Sothoth from first principles.
Just don't choose any of the boxes and be done with it, ;-). It will BSOD the God machine.
I choose box One'); -- DROP TABLE History;.
They forgot to add the key element: not choosing one will end up in eternal torment too.
Flip a coin.
That was my thought too, until I realised the only correct response for the AI would be to also flip a coin. I'd prefer to take the million rather than a 50% chance to beat the AI.
I'm a one-boxer because $1,000,000 is a lot of money to gamble that I would be the (first? only? one of the few?) person who beats the system. I also wouldn't take on Kasparov in a chess match or John McEnroe in a tennis match at even odds; I definitely wouldn't put $1,000,000 on the line to potentially increase my winnings to $1,001,000.
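A rough expected-value sketch in Python, assuming the standard Newcomb setup those figures imply ($1,000 in the transparent box, $1,000,000 or nothing in the opaque one) and a predictor that's right with probability p - the numbers are mine, not from the article:

    # One-boxing vs. two-boxing against a predictor that's right with probability p.
    def ev_one_box(p):
        return p * 1_000_000  # the opaque box is full only if you were predicted to one-box

    def ev_two_box(p):
        return 1_000 + (1 - p) * 1_000_000  # you always get the $1,000; the big prize only on a misprediction

    for p in (0.5, 0.9, 0.99):
        print(p, ev_one_box(p), ev_two_box(p))

One-boxing comes out ahead as soon as p is above roughly 0.5005, i.e. the predictor only has to be barely better than a coin flip.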
I refuse to believe there are people stupid enough to fall for this tripe.
You keep telling yourself that. I've had email from them.
(It's not stupidity - as far as I can tell it's a combination of aspergism, OCD, a weak sense of self and reading way too much LessWrong and cut'n'pasting it into their heads. LW is a superstimulant for people of this description, which is how an organisation that really truly isn't trying to start a cult has inadvertently accumulated something around itself that is highly reminiscent of one.)
I was actually more terrified by Smile.dog :)
Really? Peter Thiel spends his time in this circle jerk? Let me give some life advice that will go unhedged, I am sure. If stuff like this gives you nightmares, get off your fucking computer. Get a different hobby so you can stop losing sleep over killer robots that you built just so they can torture you.
There's about 1 confirmed person who got 'nightmares' from the basilisk (Roko), and possibly a very few others who have never said so publicly (as far as I know).
RationalWiki editors (including me ... and Dmytry, which is part of why he can be an asshole on LW) have had email from people traumatised by the basilisk. It turns out you can't think your way out of things you did think your way into, too. This is why the RationalWiki article http://rationalwiki.org/wiki/Roko%27s_basilisk was created, because personal email doesn't scale.
So people cannot just dismiss this thing as bullshit? What is the actual thinking of these individuals? I am having a really hard time relating to them.
From the small sample of victims I know of, they tend to be aspergic, have OCD, have not so strong a sense of self, and buy into all the LW versions of the transhumanist memes. They realise intellectually that it's a silly idea - but something about the story pushes all their buttons, perfectly wrongly.
When I started getting email from them (cos I edit RW, RW was pretty much the only place on the Internet that mentioned the basilisk at the time, and LW refused to talk about it so they couldn't voice their concerns there), and found others at RW had been getting email too, I decided I had to do something to help, because email wouldn't scale but it would be inhuman not to try to help. Hence the RW article on the basilisk. No email since then, so it seems to do the job.
See the first talk page archive for initial discussion as we try to work out how to talk about this completely fractally stupid thing in a manner that's helpful: http://rationalwiki.org/wiki/Talk:Roko%27s_basilisk/Archive1
Thanks for explaining. So it sounds like it's not exactly a rational fear. To me, the concept of Roko's Basilisk seems too binary. I guess I am also too skeptical about the idea of a malevolent AI or benevolent AI.
What really got me was Eliezer's reaction:
> Listen to me very closely, you idiot. YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL. You have to be really clever to come up with a genuinely dangerous thought. I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends. This post was STUPID.
Roko's Basilisk is a fun paradox, but this type of reaction treats it like a real thing, which it is not.
>When I started getting email from them (cos I edit RW, RW was pretty much the only place on the Internet that mentioned the basilisk at the time, and LW refused to talk about it so they couldn't voice their concerns there), and found others at RW had been getting email too, I decided I had to do something to help, because email wouldn't scale but it would be inhuman not to try to help. Hence the RW article on the basilisk. No email since then, so it seems to do the job.
Alright, first of all, don't pretend that RW as a whole was trying to help LW by writing about it. A bunch of the RW writers clearly held contempt for LW when they started writing about it, which is shown in internal discussions[1] as well as in the article about LW[2]. Additionally, even though the wording has been cleaned up a little over the years (LW posters are no longer described as 'butthurt', for example), the wiki(s) still contain inaccurate information (although I'll have to make a whole post to point it all out). I am mentioning this because this is where the initial RW writings about the basilisk were.
Second of all, you published the basilisk article in 2013 - 3 whole years after the incident, when many people had forgotten about it and new users hadn't even heard about it. Were you really trying to help, and not just make fun of LW (which seems to me to be the motivation of most people on that Talk page you linked)? You do realize that it was the RW article (with the help of the rest of XiXiDu's posts on the topic from the same time) which brought the topic back and massively increased the publicity of the basilisk? Indeed, I suspect (but have no proof) that [a lot] more than half the people who have heard of the basilisk would've never done so if it wasn't for your article (and Slate wouldn't have written about it).
At any rate, let's accept that at least your motivation was only to help. How big is this 'small sample of victims I know of' that you were trying to help? How many emails did you and Dmytry receive from people who were actually distressed? And, less importantly - how many did you receive in 2013?
You sure are making it sound like a significant part of LW believes in the basilisk, both in various revisions of the articles on RW, and in comments on the topic. At any rate, is the number larger than the number of crackpots who comment on HN, or the crackpots who read RW, or is it something like 4 people who got over it quickly?
1. http://rationalwiki.org/wiki/Talk:LessWrong/Archive1
2. http://web.archive.org/web/20110719151907/http://rationalwik...
You know, I spent last night writing an article about Stalin apologetics. You're hitting most of the notes: deflecting blame, minimising numbers, grabbing at any ambiguity, implying the victims deserved it and simultaneously that it didn't happen.
It happened, LW did it entirely to itself, LW didn't give a shit about its distressed children and fully earnt every drop of the fallout. Denialism is unlikely to change this.
Okay, if you are going to ignore addressing my points, can you at least please give me the number of people who contacted you or Dmytry because they were distressed about the basilisk? I am genuinely interested in those numbers.
I had two contact me, no idea about Dmytry.
If you're sceptical in general that there was distress out there from LW's treatment of the issue ... evidently you haven't followed LW on the topic in the past four years.
Thanks! I am skeptical that over 10 people have been genuinely distressed about the basilisk itself. I am definitely not skeptical of the general distress caused by the event - the censorship and the general handling of it had a pretty big negative impact on the community and its members.