Monday, January 05, 2015

Catching up with Kruggers

Paul Krugman has been writing good stuff recently, including his post on Reaganolatry; his posts on how the anti-Keynesians keep making claims about inflation and Keynesianism that are wrong; his nominated most important chart of 2014 (although that one is not so easy to understand); and the interview in which he discusses some science fiction-y ideas as well as economics.

From that last link, I liked this discussion about whether super AI is really a threat, or not:
Ezra Klein: But let’s assume it does emerge. A lot of smart people right now seem terrified by it. You've got Elon Musk tweeting, "Hope we're not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable." Google's Larry Page is reading Nick Bostrom’s new book Superintelligence. I wonder, reading this stuff, whether people are overestimating the value of analytical intelligence. It’s just never been my experience that the higher you go up the IQ scale, the better people are at achieving their goals.

Our intelligence is really lashed to a lot of things that aren’t about intelligence, like endless generations of social competition in the evolutionary fight for the best mates. I don’t even know how to think about what a genuinely new, artificial intelligence would believe is important and what it would find interesting. It often seems to me that one of the reasons people get so afraid of AI is you have people who themselves are really bought into intelligence as being the most important of all traits and they underestimate the importance of other motivations and aptitudes. But it seems as likely as not that a superintelligence would be completely hopeless at anything beyond the analysis of really abstract intellectual problems.

Paul Krugman: Yeah, or one thing we might find out if we produce something that is vastly analytically superior is it ends up going all solipsistic and spending all its time solving extremely difficult and pointless math problems. We just don't know. I feel like I was suckered again into getting all excited about self-driving cars, and so on, and now I hear it's actually a lot further from really happening than we thought. Producing artificial intelligence that can cope with the real world is still a much harder problem than people realize.

6 comments:

  1. Anonymous 9:42 am

    You're now using the same abbreviations as Homer Paxon. You should be worried.

    Jason

  2. Heh. Yes, I did wonder whether I should use it or not.

  3. By the way, sorry Homer. :)

  4. I am a trend setter!

    By the way, did you notice that little tidbit I gave you today in a forthcoming book review?

    I liked The Hobbit as well.

  5. This of course means Soony reads Around the Traps, as he should, as well as listening to my, now rare, excellent musical interludes on Sunday.

    Gotcha!!

  6. Homer, I'm not at all convinced that I should be pleased that you think of me when you read about frozen German anuses.

    Even so, I am curious about further details...

    Oh never mind, I've looked it up myself:

    'even the simple act of going to the bathroom could be fatal, as Generaloberst Heinz Guderian, commander of the 2nd Panzer Army Group, recorded in his diary on Dec. 10, 1941, writing that “many men died while performing their natural functions, as a result of a congelation of the anus.” '

    I had to look up "congelation", and it appears to simply be another word for frostbite.

    Well, sort of makes you curious as to how Antarctic explorers handled that problem too!
