Introduction
In spring 2022, the Swiss Federal Police (Fedpol) proposed a law allowing for the collection of Passenger Name Records (PNR) from airline companies for the purpose of combating transnational terrorism and serious crime. While airline companies have historically collected PNR during the booking process for commercial purposes, their transmission, processing and analysis by law enforcement and intelligence agencies have expanded over the past two decades.
Against the backdrop of 9/11 and the more recent wave of terrorist attacks in Europe between 2015 and 2020, proponents argue that PNR is a valuable source for monitoring and identifying terrorists and criminals involved in serious crime. Yet, with the gradual expansion of PNR collection by state authorities in Europe and North America, issues related to biased profiling, data protection and the right to privacy have been raised by watchdogs, the media and courts.
Addressing such issues in this post, I want to discuss the role of PNR in counterterrorism (CT) specifically – the second field of application, serious crime, is not the subject of this post. The widespread and largely uncritical use of PNR by law enforcement and intelligence agencies, which I will henceforth call CT actors, merits a more systematic and critical assessment.
Specifically, I want to offer some preliminary thoughts on a set of interconnected operational and ethical concerns involved when collecting PNR for counterterrorism purposes:
Regarding the former, the historical origins of PNR collection by CT actors raise concerns of biased profiling; that is, a disproportionate focus on one among many strands of terrorism, namely Jihadist terrorism. In what way such bias potentially affects the collection, processing and assessment of PNR data is thus the subject of the first section. Related to this, whether a preoccupation with Jihadist terrorism is warranted depends largely on whether PNR collection is effective in this regard. Does it indeed help CT actors curb terrorist attacks? This is the subject of section two.
Together, these two questions – the impact of bias and whether PNR is effective nonetheless – lead to a third ethical-legal question: Is PNR collection proportionate and necessary? Since this is a guiding legal principle, I will only offer a brief discussion thereof in section three. More on this subject has been written by competent courts and legal experts, which I am not.
After briefly defining PNR and outlining how it is collected by state authorities, I will discuss the issues raised above and end with a brief conclusion.
PNR
Passenger Name Records denote a set of information that passengers, or travel agencies on their behalf, enter when booking a flight. This includes personal and contact details, address, flight itineraries, payment and booking methods, on-board information such as seat number and luggage, as well as information on accompanying passengers and possibly meal preferences.
An important distinction has to be drawn here between PNR and another set of passenger data also forwarded to state authorities: Advanced Passenger Information (API). Unlike PNR, API only contains personal details as stated in official ID documents and is not self-reported. It is therefore less prone to human error or misrepresentation than PNR.
PNR Collection, Transmission and Processing
Today, most EU member states as well as the US and Canada have legislation that obliges airline companies to transmit PNR data to state authorities for all international flights. In fact, inside the Schengen area, Switzerland remains one of a few countries that lack the legal basis for PNR collection.
For this purpose, states have established national Passenger Information Units (PIUs). Usually staffed by law enforcement personnel, PIUs are separate entities and don’t initiate criminal investigations or take counterterrorism measures on their own. Their sole purpose is to collect, process and possibly forward PNR data to competent authorities for further action. The whole process is as follows:
- Collection: PIUs receive the data from air carriers at two points in time: Between 48 and 24 hours before departure and again immediately after boarding is completed.
- Processing: They then process the data in two ways: On the one hand, they match personal details contained in the PNR against international and national databases and watchlists of known suspects; on the other hand, they check PNR indicators (e.g. nationality, place of departure, booking and payment method) against so-called risk profiles with the aim of identifying hitherto unknown persons of concern (more below on how risk profiles are developed; a minimal sketch of this two-pronged matching follows the list).
- Forwarding: If the software highlights a passenger, either due to a match in an existing database or watchlist or because he or she corresponds to a risk profile, PIU staff assesses the case individually before deciding whether to forward it to the competent authorities or not.
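To make the processing step more concrete, here is a minimal sketch of what the two-pronged matching could look like in code. All field names, watchlist entries and rule criteria are invented for illustration; they are assumptions on my part, not the actual data structures or criteria used by any PIU.

```python
# Hypothetical sketch of the two-pronged PIU processing step.
# Field names, watchlist entries and rule criteria are illustrative only.
from dataclasses import dataclass

@dataclass
class PNRRecord:
    full_name: str
    nationality: str
    departure_airport: str
    booking_method: str   # e.g. "travel_agency", "online"
    payment_method: str   # e.g. "cash", "credit_card"

# Prong 1: match personal details against a (toy) watchlist of known suspects.
WATCHLIST = {"JOHN DOE", "JANE ROE"}  # placeholder names

def watchlist_hit(record: PNRRecord) -> bool:
    return record.full_name.strip().upper() in WATCHLIST

# Prong 2: compare PNR indicators against a predefined risk profile,
# here passed in as a simple predicate function.
def flag_for_review(record: PNRRecord, risk_profile) -> bool:
    # A hit is only flagged for individual assessment by PIU staff;
    # it is not forwarded to CT actors automatically.
    return watchlist_hit(record) or risk_profile(record)
```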
In short, PIUs receive vast amounts of PNR data on all inbound international flights from airline companies in order to either identify known suspects or highlight persons of concern based on predetermined risk profiles. To what extent and how these practices are potentially problematic when used for counterterrorism purposes is the subject of the following sections.
Historical Origins and Bias in PNR Collection
To understand the potential pitfalls in PNR collection and processing for counterterrorism purposes, a look at the political climate in which it was introduced is revealing. After Al-Qaeda’s attack on the twin towers on 9/11, the US sought new ways to secure international air travel.
To this end, PNR, already collected by airline companies themselves, was a welcome trove for collecting information on passengers on a large scale. Under the banner of the so-called war on terror, the US, Canada and a number of European states consequently introduced legislation that obliged air carriers to transmit PNR data to competent law enforcement and intelligence services, i.e. CT actors.
While understandable, the political climate at the time suggests that this was done – consciously or unconsciously – with the purpose of combating one specific strand of terrorism, namely of the Jihadist persuasion. This becomes even more obvious when looking at the expansion of PNR legislation since 2015, when a wave of terrorist attacks by the Islamic State (IS) rocked a number of European states. In its aftermath, the European Union (EU) introduced the PNR Directive with the aim of providing a legal basis for, and harmonizing, PNR collection by individual member states. While many EU member states already collected PNR data, following the EU PNR Directive an additional number stated their intention to implement corresponding national legislation.
The evolution of PNR collection by state authorities over the past twenty years thus needs to be understood within the wider discourse on securitization in the face of transnational terrorism and, more specifically, the threat posed by Jihadist terrorists in the West – interestingly, it remains to be seen whether and how the shifting discourse towards the threat posed by right-wing and anti-government terrorism more generally affects CT measures in international aviation.
Bias in PNR Processing and Analysis
In light of the prevailing discourse on transnational Jihadist terrorists that enabled PNR collection over the past two decades, questions surrounding biased profiling and discriminatory practices need to be taken seriously.
A nuanced discussion of the subject matter first and foremost needs to distinguish between the two ways in which PNR is generally used by CT actors: to crosscheck personal details against a number of national and international databases and watchlists in order to identify known suspects; and to compare a host of PNR indicators against risk profiles to possibly detect hitherto unknown persons of concern.
On the first usage, comparing personal details in PNR against databases of known suspects and perpetrators seems largely unproblematic with regard to biased profiling. That is, since PIUs crosscheck every passenger, underlying biases don’t apply. Either national and international databases (e.g. Interpol databases or the Schengen Information System, SIS) and watchlists contain the passenger’s details and a match occurs, or they don’t – the interesting question here is rather, “how do names end up in these databases?”, a discussion beyond my scope here.
The second purpose, however, is more problematic: the use of PNR data to identify persons of concern based on a set of indicators that together make up so-called risk profiles. The issue of bias here thus concerns the way in which risk profiles are developed: This is done either by CT actors and PIU staff drawing on their past experience in dealing with (Jihadist) terrorists and suspects, or in a fully automated fashion by training machine learning algorithms on past PNR data.
Illustration Risk Profiles: CT actors could classify as persons of concern all male passengers below 40 of a specific nationality who booked through a travel agency and took an indirect flight. The software would highlight every passenger who meets these criteria. PIU staff would then decide whether each case merits closer scrutiny by CT actors or not.
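Translated into code, such an illustrative profile is nothing more than a conjunction of indicator checks. The concrete values below mirror the hypothetical example above and have no empirical basis whatsoever:

```python
# The illustrative risk profile as a simple predicate over PNR indicators.
# Values mirror the hypothetical example above; they are not real criteria.
def illustrative_risk_profile(record) -> bool:
    return (
        record.nationality == "XY"            # placeholder nationality code
        and record.gender == "male"
        and record.age < 40
        and record.booking_method == "travel_agency"
        and record.itinerary_is_indirect
    )

# Every passenger on a flight is checked; any match is queued for manual review:
# flagged = [r for r in all_passengers if illustrative_risk_profile(r)]
```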
When CT actors rely on past cases to draw up risk profiles, which are then translated into PNR indicators to identify possibly unknown suspects, the onus lies very much on individual practitioners. Ultimately, it is their experience that determines how and which risk profiles are developed with regard to PNR. Paired with a preoccupation with the Jihadist threat over the last two decades, there is a chance that risk profiles concentrate on one specific actor among many in the terrorist universe – i.e. Jihadist terrorists as compared to, say, right-wing terrorists. As a consequence, a certain subgroup of all passengers is adversely affected by this targeted search: namely passengers from Muslim-majority countries, who hail from places of conflict and who exhibit an additional set of travel behaviors that match these profiles. Regardless of whether such profiling is successful in detecting unknown suspects once in a while, it remains discriminatory for a much larger group of passengers.
With progress in big data mining and machine learning capabilities over recent years, an alternative to human-drawn risk profiles has emerged: the use of large amounts of PNR data to train machine learning algorithms to draw terrorist risk profiles automatically. Put simply, algorithms search for patterns in past PNR data on known terrorists (supervised learning) in order to come up with risk profiles. Used in this way, PNR not only serves as a database to match risk profiles against but is itself used as a training set.
Machine Learning: A subset of the broader field of artificial intelligence, machine learning is used in diverse fields today: from finance and linguistics to advertising, but also law enforcement (see predictive policing). While a complex subject, in simple terms computer algorithms are fed training data in order to detect patterns automatically. The most common method is “supervised machine learning”, in which labelled training data is used to teach the algorithm a specific task that it is then supposed to solve independently in the future. ‘For example, an algorithm would be trained with pictures of dogs and other things, all labeled by humans, and the machine would learn ways to identify pictures of dogs on its own.’
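To illustrate what such automated profile-drawing could look like structurally, here is a toy supervised-learning sketch using scikit-learn. The indicators, records and labels are invented; no real PIU training data or model is implied. The point is purely structural: the learned “risk profile” can only ever reflect the patterns – and biases – present in the labelled training data.

```python
# Toy supervised-learning sketch on PNR-like indicators (illustrative only).
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled training data: past PNR indicators plus a label
# stating whether the passenger was later confirmed as a person of concern.
training_records = [
    {"nationality": "A", "booking": "travel_agency", "payment": "cash", "indirect": True},
    {"nationality": "B", "booking": "online", "payment": "card", "indirect": False},
    # ... in practice, many thousands of records would be needed
]
labels = [1, 0]  # 1 = person of concern, 0 = not

# One-hot encode the categorical indicators, then fit a simple classifier.
model = make_pipeline(DictVectorizer(sparse=False), LogisticRegression())
model.fit(training_records, labels)

# The fitted model scores new passengers; a high score would still only
# trigger an individual review by PIU staff, not an automatic measure.
new_passenger = {"nationality": "A", "booking": "travel_agency",
                 "payment": "cash", "indirect": True}
risk_score = model.predict_proba([new_passenger])[0][1]
```

If the labelled examples overwhelmingly stem from one strand of terrorism, the classifier will reproduce exactly the skew discussed above, only now hidden behind an apparently neutral score.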
Unsurprisingly, the quality of the training data is crucial for this endeavor. If input data is inadequate, false or even biased, automated risk profiles incorporate such flaws by design – a telling example of this with very real consequences is predictive policing in the US, which often compounds racial profiling. For PNR too, issues concerning data quality need to be taken seriously:
Firstly, PNR is self-declared information and therefore prone to human error or even misrepresentation. Thus, PIU staff needs to verify and potentially correct data errors before the records can be used as training data, which involves an extra amount of labor on their part (pp. 133-134).
Secondly, there needs to be further discussion by experts on whether PNR indicators are fit for purpose: Do they hold any explanatory value for drawing risk profiles? Or, put differently, can the indicators realistically capture the underlying concepts (in this case terrorist profiles)? And if so, do they capture terrorists of various couleurs (e.g. Jihadist, right-wing, separatist) or rather the travel behaviour associated with one specific type? Here again, concerns of biased profiling come into play.
Taken together then, issues of bias in PNR processing by state authorities exist, albeit to varying degrees depending on the specific purpose. When PNR is only matched against databases and watchlists, its use for CT purposes is unproblematic as long as all passengers are screened. Yet when comparing PNR data with risk profiles, concerns of biased profiling in identifying previously unknown persons of concern do exist. This is first and foremost the case when CT practitioners draw on their past experience to develop risk profiles, since the prevailing discourse concerning the Jihadist threat in Western Europe and North America cannot be dissociated from individual practice. Likewise, when past PNR data is used to train machine learning algorithms to detect patterns of suspect behavior associated with terrorists, the quality of the data and its adequacy for counterterrorism purposes need to be addressed: On the one hand, the quality of PNR data is up for debate since it is self-declared information; on the other hand, it remains unclear whether, and which, PNR indicators are effective in capturing risk profiles.
Is PNR an Effective Tool in Counterterrorism?
How to assess issues of underlying bias in PNR processing and analysis is directly tied to the question of efficacy. Put simply, whether the preoccupation with Jihadist terrorism is warranted depends on whether PNR collection by CT actors has indeed helped curb terrorist threats. Unfortunately, the covert nature of counterterrorism measures often preempts a broader debate on the subject matter.
This is the case for PNR collection, processing and analysis too. Understandably, CT actors fear that once their methods are public – for example the specific criteria and indicators used to draw risk profiles – terrorists will adapt their travel patterns accordingly and literally fly under the radar.
Yet while fears of a learning effect are warranted, unless the use of PNR by CT actors is intelligible to the public and political stakeholders, we cannot adequately assess whether biased profiling and targeted searches are indeed effective and thus whether they somewhat qualify the negative consequences for passengers at large.
Meanwhile, a look at the draft legislation by Fedpol reveals a surprising lack of reference to empirical evidence that would support the large-scale mining of private passenger information. The Swiss draft simply states that the widespread use of PNR for counterterrorism purposes, in over sixty states and for twenty years, is proof of its efficacy (p. 18). In a similar vein, an evaluation of the Dutch PNR act by the University of Amsterdam and “It’s Public”, a consultancy, suggests that the demand for PNR data ’is an indicator of its use’ albeit not its efficacy, and that CT actors see added value in it (pp. 8-9). Clearly, these anecdotes alone don’t justify the collection of private passenger information on such a large scale.
The issue is not only pertinent when PNR is used for predictive counterterrorism, i.e. comparing individual data against risk profiles to detect unknown persons of concern. The use of PNR personal details for database and watchlist screening on known suspects also needs further explanation. As discussed above, CT actors also receive the more parsimonious API dataset. This not only contains less private information (no address or payment details, for example), but is also more reliable since the information is based on official ID documents and not self-declared. When running API against databases, the chances are therefore lower that a suspect goes undetected due to misspelling or misrepresentation than when using the PNR dataset.
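A small sketch illustrates the point about misspellings. The names, the watchlist entry and the similarity threshold below are invented; the exact and fuzzy matching functions are only meant to show why a self-typed PNR name can slip past a screening that a verified API name would not.

```python
# Illustrative sketch: exact vs. fuzzy name matching against a toy watchlist.
# Names, watchlist content and threshold are hypothetical.
from difflib import SequenceMatcher

WATCHLIST = {"JOHN ALEXANDER DOE"}  # placeholder entry

def exact_match(name: str) -> bool:
    return name.strip().upper() in WATCHLIST

def fuzzy_match(name: str, threshold: float = 0.9) -> bool:
    name = name.strip().upper()
    return any(SequenceMatcher(None, name, entry).ratio() >= threshold
               for entry in WATCHLIST)

api_name = "JOHN ALEXANDER DOE"   # taken from the ID document
pnr_name = "Jon Alexnder Doe"     # self-typed during booking, with typos

print(exact_match(api_name))   # True  – verified spelling matches
print(exact_match(pnr_name))   # False – typos break the exact match
print(fuzzy_match(pnr_name))   # may recover the hit, at the cost of false positives
```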
In light of these reservations, a public debate on the use of PNR in CT urgently needs more expert opinion and empirical evidence on its efficacy or lack thereof. Without a sound knowledge basis, activists, political stakeholders, decision-makers and, most importantly, the public at large cannot judge whether PNR collection, with all its consequences for biased profiling, privacy and the right to data protection, is justified or not. Or, put in legal terms, whether it is necessary and proportionate.
Tradeoff between National Security and Passenger Privacy
The question of whether PNR is an effective tool for counterterrorism directly ties into the tradeoff between national security and passenger privacy. If PNR collection has indeed helped curb terrorist threats, and if this could not have been achieved by other, less intrusive means, then the tradeoff seems warranted. However, unless CT actors release information on how exactly they use PNR data in their fight against terrorism, the protection of individual passenger data and privacy remains just as important.
In fact, watchdogs, the media and courts have repeatedly raised concerns about the proportionality of PNR collection. Already in 2011, the European Data Protection Supervisor (EDPS) commented on an EU draft legislation on PNR collection for combating transnational terrorism and serious crime, stating that the proposal ‘does not meet the requirements of necessity and proportionality’. As we have seen, this did not stop the European Parliament from introducing the PNR Directive in 2016 in the wake of terrorist attacks in Europe. Still, a ruling by the Court of Justice of the European Union (CJEU) in 2022 has curtailed the directive in important respects: It reduced PNR data retention from five years to six months, after which the data is to be deleted; until now, most PIUs have kept PNR data for up to five years in anonymized form. Moreover, unless there are clear indications, PNR is only to be collected for flights from outside the Schengen area and not for intra-European flights. Finally, the ruling prohibits the use of PNR to train self-learning systems (i.e. machine learning, see the discussion above). Clearly, from a legal point of view, the justification for current PNR collection practices in Europe is tenuous.
The recent CJEU ruling also affects the Swiss draft legislation, as the latter foresaw a retention period of five years. The proposal has furthermore met cross-partisan resistance in the Swiss parliament. In its current form, the draft law won’t pass. However, Fedpol has already stated that it will revise the relevant passage accordingly and resubmit it.
Conclusion
Today, it seems that proponents still believe in the added value of PNR for combating transnational terrorism and serious crime. While I leave an assessment of the latter to someone else, the collection, processing and use of PNR data for counterterrorism remains a work in progress. Until a public debate is informed by the necessary empirical evidence, this will not change.
Thankfully, in Switzerland we have the democratic means to accelerate the process: If the draft law passes in parliament, a popular referendum could still challenge the decision. This would be welcome, as it would force a public debate and propel issues of biased profiling, data protection and the privacy of passenger information into the public eye. Maybe this would finally convince CT actors to provide more data and insights on the question of whether PNR is indeed an effective tool for combating transnational terrorism and upholding national security.