Thursday, October 18, 2018

Not quite Minority Report, but still sounds potentially open to abuse

Nature has an article that is upbeat about the potential for using AI to predict armed conflict, so as to enable early intervention:

Governments and the international community often have little warning of impending crises. Likely trouble spots can be flagged a few days or sometimes weeks in advance using algorithms that forecast risks, similar to those used for predicting policing needs and extreme weather. For conflict risk prediction, these codes estimate the likelihood of violence by extrapolating from statistical data [4] and analysing text in news reports to detect tensions and military developments (see go.nature.com/2oczqep). Artificial intelligence (AI) is poised to boost the power of these approaches.

Several examples are under way. These include Lockheed Martin’s Integrated Crisis Early Warning System, the Alan Turing Institute’s project on global urban analytics for resilient defence (run by W.G. and A.W.) and the US government’s Political Instability Task Force.
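For the technically curious, here is a minimal sketch of the kind of pipeline the article gestures at: a statistical baseline risk for a region combined with a crude signal pulled from news headlines, pushed through a logistic link to give a probability of violence. Everything in it (the keyword list, the weights, the RegionSnapshot structure) is an illustrative assumption, not how ICEWS or any of the projects named above actually works.

```python
import math
from dataclasses import dataclass

# Hypothetical keyword list; real systems use trained text models, not hand-picked terms.
TENSION_TERMS = {"protest", "mobilisation", "border clash", "curfew", "troop movement"}


@dataclass
class RegionSnapshot:
    """Toy inputs for one region: a structural risk score plus recent news headlines."""
    name: str
    baseline_risk: float   # assumed to come from historical/statistical data, in [0, 1]
    headlines: list[str]


def text_tension_score(headlines: list[str]) -> float:
    """Fraction of headlines mentioning any tension-related term (crude stand-in for NLP)."""
    if not headlines:
        return 0.0
    hits = sum(any(term in h.lower() for term in TENSION_TERMS) for h in headlines)
    return hits / len(headlines)


def conflict_risk(snapshot: RegionSnapshot,
                  w_baseline: float = 2.0,
                  w_text: float = 1.5,
                  bias: float = -2.0) -> float:
    """Combine the statistical baseline and the text signal with a logistic link.

    The weights here are made up for illustration, not fitted to any data.
    """
    z = bias + w_baseline * snapshot.baseline_risk + w_text * text_tension_score(snapshot.headlines)
    return 1.0 / (1.0 + math.exp(-z))


if __name__ == "__main__":
    region = RegionSnapshot(
        name="Region A",
        baseline_risk=0.6,
        headlines=["Troop movement reported near the border", "Markets reopen after holiday"],
    )
    print(f"{region.name}: estimated risk of violence = {conflict_risk(region):.2f}")
```

A real early-warning system would replace the keyword counting with trained language models and fit the weights to historical conflict data, which is exactly where the question of a verifiable chain of reasoning arises.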
No doubt, in keeping with so much of what passes for AI, these models will not be able to provide a verifiable chain of reasoning. At least it's kinder on animals than reading their entrails.