As the many comments that follow it show, normal people tend to be underwhelmed but amused. It seems that even the originator of the meme denies ever having believed it.
However, I did like the last paragraph:
I worry less about Roko’s Basilisk than about people who believe themselves to have transcended conventional morality. Like his projected Friendly AIs, Yudkowsky is a moral utilitarian: He believes that the greatest good for the greatest number of people is always ethically justified, even if a few people have to die or suffer along the way. He has explicitly argued that, given the choice, it is preferable to torture a single person for 50 years than for a sufficient number of people (to be fair, a lot of people) to get dust specks in their eyes. No one, not even God, is likely to face that choice, but here’s a different case: What if a snarky Slate tech columnist writes about a thought experiment that can destroy people’s minds, thus hurting people and blocking progress toward the singularity and Friendly AI? In that case, any potential good that could come from my life would be far outweighed by the harm I’m causing. And should the cryogenically sustained Eliezer Yudkowsky merge with the singularity and decide to simulate whether or not I write this column … please, Almighty Eliezer, don’t torture me.

There was also a reference to a book or article which I should look up, but later:

(I’ve adopted this example from Gary Drescher’s Good and Real, which uses a variant on TDT to try to show that Kantian ethics is true.)
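Since the torture-versus-dust-specks argument turns on simple aggregation, here is a minimal sketch of the arithmetic behind it, using placeholder symbols of my own choosing (D for the disutility of 50 years of torture, \(\epsilon\) for the tiny but nonzero disutility of one dust speck):

\[
N \cdot \epsilon \;>\; D \quad\text{whenever}\quad N \;>\; \frac{D}{\epsilon},
\]

so as long as \(\epsilon > 0\) and disutilities simply add across people, some finite number N of dust specks outweighs the torture. If I recall correctly, Yudkowsky's original post used the absurdly large \(N = 3\uparrow\uparrow\uparrow 3\) (Knuth's up-arrow notation) precisely so that it would dwarf any plausible \(D/\epsilon\) threshold.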
1 comment:
I read this yesterday and it reminded me both of Anselm's proof for God's existence and Pascal's wager - albeit with a silly supercomputer substituted for God. And as I mentioned yesterday, too, a number of SF stories have been written around this very premise.