From that last link, I liked this discussion about whether super AI is really a threat or not:
Ezra Klein: But let’s assume it does emerge. A lot of smart people right now seem terrified by it. You've got Elon Musk tweeting, "Hope we're not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable." Google's Larry Page is reading Nick Bostrom’s new book Superintelligence. I wonder, reading this stuff, whether people are overestimating the value of analytical intelligence. It’s just never been my experience that the higher you go up the IQ scale, the better people are at achieving their goals.
Our intelligence is really lashed to a lot of things that aren’t about intelligence, like endless generations of social competition in the evolutionary fight for the best mates. I don’t even know how to think about what a genuinely new, artificial intelligence would believe is important and what it would find interesting. It often seems to me that one of the reasons people get so afraid of AI is you have people who themselves are really bought into intelligence as being the most important of all traits, and they underestimate the importance of other motivations and aptitudes. But it seems as likely as not that a superintelligence would be completely hopeless at anything beyond the analysis of really abstract intellectual problems.
Paul Krugman: Yeah, or one thing we might find out if we produce something that is vastly analytically superior is it ends up going all solipsistic and spending all its time solving extremely difficult and pointless math problems. We just don't know. I feel like I was suckered again into getting all excited about self-driving cars, and so on, and now I hear it's actually a lot further from really happening than we thought. Producing artificial intelligence that can cope with the real world is still a much harder problem than people realize.