Róisín Read & Bertrand Taithe
On 12 February, at the Dutch Humanitarian Summit held at Humanity House in The Hague (an event run by ten leading Dutch NGOs in preparation for the World Humanitarian Summit), the Dutch minister for foreign trade and development cooperation, Lilianne Ploumen, issued a forceful challenge to the humanitarian sector and pointed to a potential solution. Her central point was that the ‘humanitarian system’ devotes less than 1% of its budget to R&D; the solution, implied rather than explicit, was that the Dutch government would invest heavily in a coalition of academics and NGOs to develop the future of ‘Big Data’ in humanitarian aid. Though the minister immediately qualified the suggestion, a frisson of optimism ran through the room: could big data become the bedrock of intelligence-led and evidence-based humanitarian aid?
In June 2013, OCHA had already produced a corrective to any naïve assumption that data, by itself and for itself, could solve the information deficit in humanitarian decision-making; but in an industry which, like most others, tends to pin its hopes on innovation rather than routine processes, big data remains alluring. In an era dominated by security concerns (see, for instance, Medical Care Under Fire, a campaign run by MSF), the sense that security data in particular might be submitted to the big data treatment, whatever that might be, holds out the prospect of a genuine humanitarian intelligence coming into existence. Humanitarian organisations have typically steered clear of the term ‘intelligence’, its connotations of espionage jarring with humanitarian principles. Yet, in light of a perceived shrinking of humanitarian space, better situational awareness, built on the collection of information about the political and security situation, is central to effective operations. Information sharing for security purposes between NGOs, and with UN bodies, is commonplace in insecure contexts. But thus far big data, despite its widespread use by traditional state intelligence structures, has found only limited uses in humanitarian contexts.
This invites us to ask what big data for security would look like in humanitarian contexts. The use of large-scale data mining for intelligence has been the subject of considerable controversy in recent years, raising ethical questions about privacy and data security. Interestingly, the outcry against PRISM does not seem to have dampened enthusiasm for exploiting big data in other contexts. More generally in the humanitarian sphere, questions of data exploitation are presented as technical challenges to be overcome, with a focus on the use of mobile phones and social media tools to enable real-time monitoring; the aim is not perfect data but data that is ‘good enough’. The potential of this information and its associated technologies is presented as transformative, though its advocates are aware of the limitations in practice. Our goal is not to build a big data straw man; no one, of course, is suggesting that big data will solve all humanitarian problems. It is nonetheless worth reiterating that the collection of information by humanitarian organisations is highly political, as the case of Darfur has repeatedly demonstrated.
From a technical point of view, a number of problems face those who seek to use big data for security in the humanitarian arena. First, it is debatable how ‘big’ the data humanitarians have access to actually is. The Data Wars project, which looks at the use of big data in European security initiatives, puts into perspective the sheer difference in scale between state security institutions’ ‘big data’ and humanitarians’ ‘big data’, which relies predominantly on crowdsourcing and the mining of social media. Moreover, the focus on real-time solutions may obscure historical trends, a problem exacerbated by the rate of technological change, which renders many programmes defunct.
Given the popularity, at least rhetorically, of local, contextually driven humanitarian aid, the question arises as to whether ‘big’ data is appropriate for many of the problems humanitarians face, especially with regard to security. The experience of UN peacekeeping intelligence structures suggests that the majority of ‘actionable intelligence’ is human intelligence: information provided by locals. This humanitarian intelligence has always remained fractured and fragmentary, shaped by gossip and anecdote, informed by drivers and local staff as much as by whatever security forces might be around. Its frame of reference remains at best contextual, at worst a self-intoxicating reflection of a bunkered reality.
Databases and datasets abound that attempt to capture danger and risk, often conflating the two: the risk might be small but the danger absolute. Databases seek to record incidents, with varying degrees of success: insults, menaces, shots, physical violence, even psychological pressure. All of these fragments paint a picture of ongoing or historical dangers, but the aim is to measure risk (the near misses), which can seldom be portrayed accurately for want of a common definition of what a miss might be. Databases of this kind tend to focus on event-based rather than process-based violence, leading to a neglect of the structural factors at play in a given conflict and offering only a very narrow interpretation of violence. They also tend to focus attention on extreme incidents, ignoring the smaller day-to-day experiences of insecurity which more commonly shape security policy and influence programming decisions.
This incomplete and necessarily frustrating humanitarian intelligence does not lend itself well to a big data treatment as defined by the users of major data warehouses. In intelligence terms it risks being perceived as the collection of evidence, of the kind that ends up before the ICC in The Hague, or worse still, as intelligence in the common parlance of spooks worldwide. A clear response to this ‘new’ threatening situation (not, of course, new in any way: the assumption that humanitarians may be spies is at least as old as contemporary humanitarian aid, with recorded cases of arrests and interrogation in 1877, for instance) has been the urge to secrecy. This contradictory urge, which balances the need to pool information to create usable data against the fear of being perceived as informers, especially given existing concerns about humanitarians being targets of violence, paralyses any move towards an increased use of data that would formalise humanitarian intelligence.
Furthermore, as is often the case with new technologies, the capacity to interrogate data lags far behind the technology to accumulate it. What big data promises is therefore an archive rather than intelligence: a vault of loosely connected information where patterns cannot emerge for fear of being intelligible, where debate might be mired in internal and external pressure to keep information secret on a ‘need to know’ basis, and where humanitarian intelligence might remain the dream of the weak. ‘A tale told by an idiot, full of sound and fury, signifying nothing.’