Overarching research questions:
What does digitalised law enforcement mean and how is it practiced in Denmark, Estonia, Latvia, Norway, Sweden and the UK?
How are public participation, transparency and human rights ensured in the procurement, implementation and use of digital policing technologies when public and private actors collaborate on these digital infrastructures?
What values, politics and affordances are embedded in digital law enforcement solutions, and how are these negotiated and transformed before and after implementation?
Researcher: Vasilis Galis and PhD candidate Björn Karlsson (IT University of Copenhagen)
Focus: This case study examines the processes of procurement and implementation, as well as how a data-driven police technology is used in specific encounters with citizens in Denmark.
The WP will focus on the case of the POL-INTEL infrastructure, which Denmark’s national police purchased from Palantir Technologies. POL-INTEL gathers and analyses information from different sources such as police databases, social media, CCTV cameras and automated number-plate recognition (Politiken 2017). The police aim to use POL-INTEL to pave the way for predicting crimes before any offence has been committed (EUobserver 2017).
The research in WP2 will investigate expectations in relation to the procurement of POL-INTEL.
First, the media coverage and public/policy debate around POL-INTEL and data-driven policing in general will be gathered and analyzed in order to identify concerns as well as non-debated issues.
Second, a round of interviews with public servants from the Ministry of Justice and the police will be conducted. Interviews with programmers at Palantir are also planned.
Third, the use of the software will be investigated to the extent possible, including interviews with users and leaders in the police who have been part of the procurement and implementation of POL-INTEL. EU officials responsible for the rights to privacy and data protection will also be interviewed, and policy documents on security and privacy will be analyzed.
WP2 will focus on how institutional values in law enforcement are changing as automated data analytics continues to gain a foothold in the protection and prosecution of citizens. Analyzing changing patterns of authority will be central to this WP as POL-INTEL is likely to affect the ways in which information gets acted on.
Researcher: Anu Masso and Tayfun Kasapoglu (Tallinn University of Technology)
Focus: This study examines possible prejudices among decision-makers developing gene data solutions, including those for predictive policing purposes.
The work package is based on the developments of social datafication (Schäfer & Van Es, 2017), indicating that (a) all human activities are turned into data points, (b) these data are used for diversity management purposes, and (c) the construction, access and use of data (re)produce prejudice.
The following empirical cases are used to examine misconceptions of diversity in predictive policing:
Digital migration control, using the example of Estonian e-residency. This programme facilitates placeless work, and the digital activities of e-residents are traceable through a variety of register data. Although the digital migration enabled by the e-residency programme assumes no physical residency, the selectivity principles of traditional migration policy are still implemented in its digital policy instruments. This selectivity is enforced through datafied control based on fused register data: predictive datafied solutions are used to ‘select out’ applicants deemed not ‘appropriate’ for the country. This study examines the as-yet-unknown mechanisms by which such datafied solutions produce prejudice.
Automated solutions for predictive border control. Facial recognition algorithms are used to identify travellers crossing the Tallinn-Helsinki border: legal agencies compare travellers’ pictures with those in a database in order to identify wanted criminals and to predict crimes. This study examines the prejudices of the human decision-makers who use these predictive border control tools.
Profiling in predictive policing, using the example of gene data. The Estonian gene data bank is one of the world’s largest. Although DNA genealogical databases are a valuable source for the police, there are no rules governing the use of these data in predictive policing, and the potential prejudices in such use remain unknown.
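The matching step behind automated border control tools like the one described above can be sketched in a few lines. This is a purely illustrative example under stated assumptions: the embeddings, identities and similarity threshold below are invented, and nothing here is taken from the actual Tallinn-Helsinki system.

```python
# Illustrative sketch of a watchlist-matching step, as commonly used in
# automated face recognition. All values and names are hypothetical.
import math


def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)


def match_against_watchlist(probe, watchlist, threshold=0.8):
    """Return the best-matching identity above the threshold, or None.
    The threshold choice and the composition of the watchlist are exactly
    the points where human decisions (and potential prejudice) enter."""
    best_id, best_sim = None, threshold
    for identity, embedding in watchlist.items():
        sim = cosine_similarity(probe, embedding)
        if sim >= best_sim:
            best_id, best_sim = identity, sim
    return best_id
```

The sketch makes visible that an "automated" match still rests on two discretionary parameters, which is where this WP's questions about decision-makers' prejudices attach.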
In-depth interviews will be conducted with experts (1) from private institutions, (2) from non-profit organizations, and (3) among policy developers and decision-makers. Possible subjects of these data solutions will also be interviewed. The interviews will be combined with a novel cognitive-experimental testing approach in which perceptions of data solutions are evaluated using Tobii Eye Tracker glasses. Interactions with predictive policing tools, and the cognitive processes underlying these interactions, will be evaluated.
Researchers: Anda Adamsone-Fiskovica and Emils Kilis (Baltic Studies Centre), Irena Nesterova and Lolita Buka (University of Latvia)
Focus: This WP will focus on the use of cameras and visual data in enforcing road traffic regulations as the primary example of digitalization of law enforcement.
The study will be based on four micro-cases involving varying degrees of reliance on technological systems and data gathering as a prerequisite for enabling predictive tools and raising new ethical, legal and social issues regarding surveillance in public spaces and data governance:
(I) Use of the road traffic management solution FITS (Future Intelligent Transport Systems) ITEMS, which aims to “understand” transport flows and events by collecting and managing data provided by various sensors and cameras;
(II) Exploitation of an unmarked police bus with a 360-degree camera for identifying violations of road traffic regulations;
(III) Use of body-worn cameras by police officers to record interactions with civilians;
(IV) Use of a smartphone app through which civilians can notify the police about crimes and violations (including illegally or improperly parked cars) by sending photos and video material.
The study will be conducted using several qualitative methods: desk research, analysis of regulatory frameworks, interviews with officials and police officers, a user focus group, and visual ethnography.
The first two cases involve communication between equipment gathering visual material in the field and databases (e.g. insurance databases, the database of the Road Traffic Safety Directorate of the Republic of Latvia). They are often framed as tools for ensuring the safety of all drivers and vehicles. The other two cases involve cameras facilitating new and imagined relationships between civilians and the police, albeit with contrasting emphases.
Case III is primarily framed as a deterrent against corruption and as a way to protect the interests of individual police officers against unfounded accusations by civilians.
Case IV is a way for civilians to extend the reach of police by implicitly increasing the presence of surveillance in the city.
Researcher: Helene Oppen Ingebrigtsen Gundhus (University of Oslo), Pernille Erichsen Skjevrak (Oslo Metropolitan University)
Focus: The objective of the WP is to explore predictive policing as a tool for reducing uncertainty and risks in the Norwegian police. This topical subject will be investigated through several case studies. In particular, the WP will explore how use differs depending on whether or not AI is integrated into the software (Williams et al. 2017).
In the Norwegian police there are now several projects using risk indicators to predict crime, developing the indicators from life-course theories in criminology and forecasting who is at high, medium or low risk. These practices will be compared with predictive tools using AI in two different contexts.
First, these practices will be compared with the machine learning systems examined in WP X.
Second, they will be compared with the actual use of PredPol by the Danish police (Volquartzen 2018), researched in this project by WP2 in Denmark.
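The contrast at stake — hand-crafted risk indicators versus weights learned from data — can be illustrated with a minimal sketch. All indicator names, weights and cut-offs below are hypothetical and do not describe any actual Norwegian police tool.

```python
# Hypothetical sketch contrasting the two kinds of predictive tools compared
# in this WP. Indicator names, weights and cut-offs are invented.


def rule_based_risk(indicators):
    """Hand-crafted score from life-course indicators, with fixed weights
    and thresholds chosen by analysts rather than learned from data."""
    score = (2 * indicators.get("prior_police_contacts", 0)
             + indicators.get("school_dropout", 0)
             + indicators.get("unstable_housing", 0))
    if score >= 4:
        return "high"
    if score >= 2:
        return "medium"
    return "low"


def learned_risk(weights, indicators, cutoffs=(0.66, 0.33)):
    """Same three-level output, but the weights come from a trained model,
    so the mapping from indicators to risk is no longer set by hand."""
    score = sum(weights.get(k, 0.0) * v for k, v in indicators.items())
    if score >= cutoffs[0]:
        return "high"
    if score >= cutoffs[1]:
        return "medium"
    return "low"
```

Both variants emit the same three-level label; the WP's question of where discretion sits — who sets the weights, and who can contest a "high" label — is located differently in each.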
The aim is to capture the significance of contextual aspects such as education and discretion, as well as the type of data fed into the predictions (automation of data, the ‘Internet of Things’, real-time analytics), and to explore how these aspects influence decision-making processes and discretionary power within the police system and among individual officers. The backdrop is the increasing drive to standardize police data tools, methods and use of databases in order to reduce biases and make police intelligence more neutral and objective.
The WP has an exploratory aim, seeking to conceptualize the making of new knowledge within police/technology dynamics. Policy documents will be analyzed, and interviews will be conducted with decision-makers, software engineers and participants in the different cases in Oslo. Ethnographic observations of police practice in different police districts will also take place. Exploring risk assessment tools comparatively will contribute new insights into the contextual aspects of predictive policing and how it is co-produced.
Researcher: Antonis Vradis (University of St. Andrews)
The collaboration between the Swedish police and the private business-analytics company QlikTech began in 1993 (Sullivan 2013). Initially, the main goal was the digitalization of large volumes of crime reports in order to follow up on crime trends. After 2004, however, police authorities developed applications not only for crime monitoring but also for decision-making processes within the organization at various levels: preventive, operational, administrative and financial (Garpensved 2013). Today, policy goals to link police and intelligence services across different Swedish provinces are imprinted on the development of a national system called Status. This consists of a set of applications developed with the QlikView tool, which allows geographical, temporal and organizational searches and draws on national databases and special collection files from other sources (Simonsson 2012).
WP6 will investigate the modes in which the digitalization of the Swedish police is implemented through Status. The WP will examine the Status system through the study of annual official police reports and crime surveys, and through interviews with key stakeholders at various levels: system developers, criminologists, system administrators, public servants, police directors and officers. An ethnographic study will be conducted on the everyday use of QlikView by police officers. Finally, WP6 will employ narrative analysis as a methodology to interpret findings from media coverage.
Researcher: Antonis Vradis (University of St. Andrews)
The UK is a pioneering state in the integration of digital technologies in surveillance and policing, from smart borders (Amoore and Hall 2010) to monitoring through big data (Amoore and Raley 2017) and all the way to the management of recent security threats (Stevens and Vaughan-Williams 2016). Major cities in the country have seen a sharp rise in the use of facial recognition to monitor and control public spaces (The Independent 2019). Pancras Square in central London is at the forefront of the use of such technology. The Metropolitan Police apologized for secretly sharing images from its own database with the private corporation running the site (Metropolitan Police 2019), while the Mayor of London has written to the project owner (The Guardian 2019) expressing his alarm over the use of facial recognition in the corporation’s CCTV system.
WP7 will use a range of mixed methods, including field observation, semi-structured interviews with users and key stakeholders of the Kings Cross site, and a media discourse analysis of the coverage the site received in the period leading up to, and following, the revelations concerning its use of facial recognition to police the site. Through a series of Freedom of Information (FOI) requests, we will collect visual data from the private corporation running the site and from the police. The aim is to better understand the interplay between public and private actors in the governance of facial recognition technologies and practices, as well as the effects this has on the freedoms and everyday experiences of the site’s users.