Tuesday, September 19, 2017

That's a heck of a lot of S

I clicked onto Nature News and a Springer publications alert suggested I might like to see this article, at arXiv. (Not sure why it would refer me to arXiv, but whatevers.) Anyway, no time to read it yet, but here's the abstract, brought to you by the letter "S":

This paper uses anthropic reasoning to argue for a reduced likelihood that superintelligent AI will come into existence in the future. To make this argument, a new principle is introduced: the Super-Strong Self-Sampling Assumption (SSSSA), building on the Self-Sampling Assumption (SSA) and the Strong Self-Sampling Assumption (SSSA). SSA samples over the relevant observers, whereas SSSA goes further by sampling over observer-moments. SSSSA goes further still, weighting each sample proportionally to the cognitive size of the mind involved. SSSSA is required for human observer-samples to be typical, given how greatly non-human animals outnumber humans. Under SSSSA, the assumption that humans experience typical observer-samples relies on a future in which superintelligent AI does not dominate, which in turn reduces the likelihood of its being created at all.

Sounds rather silly to me, actually, but perhaps I should read it first.
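For the curious, here's a toy numerical sketch of how the weighting idea seems to work. Every count and weight below is invented purely for illustration; none of it comes from the paper.

```python
# Toy numbers only -- nothing here is taken from the paper.
humans = 8e9        # hypothetical count of human observers
animals = 1e12      # hypothetical count of non-human animal observers

# SSA: sample uniformly across all observers.
p_ssa = humans / (humans + animals)

# SSSSA: weight each observer by an assumed "cognitive size".
w_human, w_animal = 1.0, 0.001   # invented relative cognitive weights
p_ssssa = (humans * w_human) / (humans * w_human + animals * w_animal)

# Add a hypothetical future superintelligence whose cognitive weight
# dwarfs everything else; human samples become atypical again.
w_asi = 1e12                     # invented weight for one superintelligence
p_with_asi = (humans * w_human) / (
    humans * w_human + animals * w_animal + w_asi)

print(f"P(sample is human | SSA)            = {p_ssa:.4f}")    # tiny
print(f"P(sample is human | SSSSA)          = {p_ssssa:.4f}")  # near 1
print(f"P(sample is human | SSSSA with ASI) = {p_with_asi:.6f}")  # tiny again
```

If our observer-sample really is typical, that last number counts against futures dominated by superintelligent AI, which, as far as I can tell from the abstract, is the whole trick.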