Europe’s high-tech arsenal of border technologies is usually narrated as a futuristic story of light, speed and computing power. Identification systems such as the Eurodac database store, process and compare the digitized fingerprints of migrants using near-infrared light, fibre-optic cables and centralized servers. Drones patrol the skies with their unblinking optical sensors. And huge volumes of data are fed to computer programs that predict the next surge in arrivals.
News stories and NGO reports focusing on the high-tech nature of European borders abound. Each details how remote forms of surveillance, deterrence and control increasingly complement and, in certain cases, supersede border fortifications. While this kind of research and advocacy is crucial for holding the EU and tech developers to account for their role in driving asylum seekers towards deadly migration routes, it glosses over the long histories of these technologies and their established role in Western apparatuses of governance. This not only risks amplifying ‘AI hype’ among policymakers and developers, who hail these tools as a means both to create ‘smarter’ borders and to protect the human rights of migrants. More importantly, this kind of historical amnesia can also misread the violence and exclusions enacted by these technologies as a technical issue of ‘bias’ easily corrected by more accurate measurements or larger datasets. Instead, much of the harm caused by these technologies should be understood as inherent in their design.
An inventory of identification
The deployment of advanced technologies to control human mobility is anything but new. Picture an urban European police station in the late nineteenth century. If the municipality had adopted the latest identification technology, suspects would have been subjected to a complex measurement process. Taking down their measurements was a precise and highly specialized procedure, requiring a skilled and experienced technician.
Consider these instructions for measuring an ear:
The operator brings the instrument’s fixed jaw to rest against the upper edge of the ear and immobilizes it, pressing his left thumb fairly firmly on the upper end of the instrument’s jaw, with the other fingers of the hand resting on the top of the skull. With the stem of the calliper parallel to the axis of the ear, he gently pushes the movable jaw until it touches the lower end of the lobe and, before reading the indicated number, makes sure that the pinna [external part of the ear] is in no way depressed by either jaw.
This procedure may sound like a quaint if somewhat curious relic of the fin de siècle, but it is anything but. Bertillonage, the system of measurement, classification and archiving for criminal identification devised in the 1870s by the eponymous French police clerk, was a milestone in the history of surveillance and identification technology. Remarkably, its key tenets underwrite identification technologies to this day, from the database to biometrics and machine learning.
A close and historically established link exists between fears around the uncontrolled circulation of various ‘undesirables’ and technological innovation. Nineteenth-century methods, developed and refined to address concerns around vagrancy, colonial governance, deviance, madness and criminality, are the foundations of today’s high-tech border surveillance apparatus. These methods include quantification, which renders the human body as code, classification, and modern techniques of indexing and archiving.
Modern invasive registration
Smart border systems employ advanced technologies to create ‘modern, effective and efficient’ borders. In this context, advanced technologies are often portrayed as translating border processes such as identification, registration and mobility control into a purely technical procedure, thereby rendering the process fairer and less prone to human fallibility. Algorithmic precision is characterized as a means of avoiding unethical political biases and correcting human error.
As a researcher of the technoscientific underpinnings of the EU’s high-tech border apparatus, I recognize both the growing elasticity of contemporary border practices and the historically established methodology of its tools and practices.
Take the Eurodac database, a cornerstone of EU border management, for example. Established in 2003, the index stores the fingerprints of asylum seekers in order to enforce the Dublin Regulation’s ‘first entry’ rule. Fingerprinting and enrolment in interoperable databases are also central tools in recent approaches to migration management such as the hotspot approach, where the attribution of identity serves as a means to filter ‘deserving’ from ‘undeserving’ migrants.
Over time, both the type of data stored on Eurodac and its uses have expanded: its scope has been broadened to serve ‘wider migration purposes’, storing data not only on asylum seekers but also on irregular migrants to facilitate their deportation. A recently approved proposal has added facial imagery and biographic information, including name, nationality and passport details, to fingerprinting. Moreover, the minimum age of migrants whose data can be stored has been lowered from fourteen to six years old.
Since 2019 Eurodac has been ‘interoperable’ with various other EU databases storing information on wanted persons, foreign nationals, visa holders and other persons of interest to criminal justice, immigration and asylum administrations, effectively linking criminal justice with migration while also vastly expanding access to this data. Eurodac plays a key role for European authorities, demonstrated by efforts to achieve a ‘100% fingerprinting rate’: the European Commission has pushed member states to enrol every newly arrived person in the database, using physical coercion and detention if necessary.
Marking criminality
While nation states have been collecting data on citizens for the purposes of taxation and military recruitment for centuries, its indexing, organization in databases and classification for particular governmental purposes – such as controlling the mobility of ‘undesirable’ populations – is a nineteenth-century invention. The French historian and philosopher Michel Foucault describes how, in the context of growing urbanization and industrialization, states became increasingly preoccupied with the question of ‘circulation’. People and goods, as well as pathogens, circulated further than they had in the early modern period. While states did not seek to suppress or control these movements entirely, they sought means to increase what was seen as ‘positive’ circulation and minimize ‘negative’ circulation. They deployed the novel tools of a positivist social science for this purpose: statistical approaches were used in the field of demography to track and regulate phenomena such as births, accidents, illness and deaths. The emerging managerial nation state addressed the problem of circulation by developing a very particular toolkit: amassing detailed information about the population and devising standardized methods of storage and analysis.
One particularly vexing problem was the circulation of known criminals. In the nineteenth century, it was widely believed that if a person offended once, they would offend again. However, the systems available for criminal identification were woefully inadequate to the task.
As criminologist Simon Cole explains, identifying an unknown person requires a ‘truly unique body mark’. Yet before the advent of modern systems of identification, there were only two ways to do this: branding or personal recognition. While branding had been widely used in Europe and North America on convicts, prisoners and enslaved people, evolving ideas around criminality and punishment largely led to the abolition of physical marking in the early nineteenth century. The criminal record was established in its place: a written document cataloguing the convict’s name and a written description of their person, including identifying marks and scars.
However, identifying a suspect from a written description alone proved difficult. And the system was vulnerable to the use of aliases and different spellings of names: only a person known to their community could be identified with certainty. Early systems of criminal identification were fundamentally vulnerable to mobility. Notably, these problems have continued to haunt contemporary migration management, as databases often contain multiple entries for the same person resulting from different transliterations of names from Arabic to Roman alphabets.
The invention of photography in the 1840s did little to resolve the problem of criminal identification’s reliability. Not only was a photographic record still beholden to personal recognition, it also raised the question of archiving. Criminal records before Bertillonage were stored either as annual compendiums of crimes or as alphabetical lists of offenders. While photographs offered a more accurate representation of the face, there was no way to archive them according to features. If one wanted to search the index for, say, a person with a prominent chin, there was no procedure for doing so. Photographs of convicts were sorted alphabetically according to the name provided by the offender, thereby suffering from the same weakness as other identification systems.
Datafication’s ancestor
Alphonse Bertillon was the first to solve this problem by combining systematic measurements of the human body with archiving and record keeping. The criminologist improved record retrieval by sorting entries numerically rather than alphabetically, creating an indexing system based solely on anthropometric measurements. Index cards were organized according to a hierarchical classificatory system, with records first divided by sex, then head length, head breadth, middle finger length, and so forth. Each set of measurements was divided into groups based on a statistical analysis of their distribution across the population, with averages established by taking measurements from convicts. The Bertillon operator would take a suspect’s profile to the archive and search for a match through a process of elimination: first excluding the sex that did not match, then the head lengths that did not match, and so on. If a tentative match was found, it was confirmed with reference to the bodily marks also listed on the card. Wherever this system was implemented, the recognition rates of ‘recidivists’ soared; Bertillon’s system rapidly spread across the globe.
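To make the logic of elimination concrete, here is a minimal sketch in Python of a Bertillon-style hierarchical index. The bin thresholds, field names and measurements are invented for illustration; only the principle follows the system described above: each measurement is reduced to a coarse class, the classes nest into a drawer-like hierarchy, and a search opens only the single drawer that matches.

```python
# Minimal sketch of Bertillon-style hierarchical indexing (illustrative only;
# the thresholds and measurements below are placeholders, not historical values).
from collections import defaultdict

# Cut-offs separating 'small' / 'medium' / 'large' for each measurement.
# Bertillon derived such cut-offs from the statistical distribution of
# measurements among convicts.
BINS = {
    "head_length": (18.5, 19.5),     # cm
    "head_breadth": (15.0, 15.8),
    "middle_finger": (11.0, 11.7),
}

def classify(name, value):
    lo, hi = BINS[name]
    return "S" if value < lo else ("M" if value <= hi else "L")

def card_key(sex, measurements):
    """The hierarchical 'drawer path': sex first, then one coarse class per measurement."""
    return (sex,) + tuple(classify(k, measurements[k]) for k in BINS)

archive = defaultdict(list)  # drawer path -> index cards filed there

def file_card(card):
    archive[card_key(card["sex"], card["measurements"])].append(card)

def look_up(sex, measurements):
    """Search by elimination: every drawer that does not match is never opened."""
    return archive.get(card_key(sex, measurements), [])

file_card({
    "sex": "M",
    "measurements": {"head_length": 19.1, "head_breadth": 15.4, "middle_finger": 11.3},
    "marks": "scar, left forearm",
})
candidates = look_up("M", {"head_length": 19.0, "head_breadth": 15.5, "middle_finger": 11.4})
# Any candidates would then be confirmed against the bodily marks noted on the card.
```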
With Bertillon, another hallmark of contemporary border and surveillance technology entered the frame: quantification, or what is called ‘datafication’ today. Bertillon not only measured prisoners’ height and head lengths but invented a method to translate distinctive features of the body into code. For instance, if a prisoner had a scar on their forearm, earlier systems of criminal identification would simply have noted this in the record. By contrast, Bertillon measured its distance from a given reference point. These observations were then recorded in a standardized manner using an idiom of abbreviations and symbols that rendered the descriptions in abridged form. The resulting portrait parlé, or spoken portrait, transcribed the physical body into a ‘universal language’ of ‘words, numbers and coded abbreviations’. For the first time in history, a precise description of a subject could be telegraphed.
The translation of the body into code still underwrites contemporary methods of biometric identification. Fingerprint identification systems, first trialled and rolled out in colonial India, convert papillary ridge patterns into a code, which can then be compared with other codes generated in the same way. Facial recognition technology produces schematic representations of the face and assigns numerical values to them, thereby allowing comparison and matching. Other forms of biometric ID, such as voice ID, iris scans and gait recognition, follow the same principle.
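That shared principle is easiest to see in code. The sketch below is a simplification under stated assumptions: the `embed` function is a hypothetical stand-in for whatever feature extractor a real system uses (minutiae detection, a neural network), and the similarity measure and threshold are illustrative choices, not any vendor’s actual method.

```python
# Minimal sketch of the 'body as code' principle behind biometric matching.
import numpy as np

def embed(biometric_sample) -> np.ndarray:
    """Hypothetical encoder: reduces a fingerprint or face image to a fixed-length
    numerical template. Real systems use minutiae extraction or deep networks."""
    raise NotImplementedError  # placeholder for an actual feature extractor

def match(template_a: np.ndarray, template_b: np.ndarray, threshold: float = 0.7) -> bool:
    """Two bodies are declared 'the same' when their codes are close enough.
    Cosine similarity is one common choice; the threshold is a policy decision."""
    similarity = float(np.dot(template_a, template_b) /
                       (np.linalg.norm(template_a) * np.linalg.norm(template_b)))
    return similarity >= threshold
```

Once the body has been rendered as a vector of numbers, ‘identification’ reduces to comparing vectors against a threshold; where that threshold sits is not a mathematical given but a decision.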
From taxonomy to machine learning
Besides quantification, classification – a key instrument of knowledge production and governance for centuries – is another hallmark of modern and contemporary surveillance and identification technologies. As noted by many scholars from Foucault to Zygmunt Bauman and Denise Ferreira da Silva, classification is a central tool of the European Enlightenment, evidenced most iconically by Carl Linnaeus’ taxonomy. In his graduated table, Linnaeus named, classified and hierarchically ordered the natural world from plants to insects to humans, dividing and subdividing each group according to shared characteristics. Classification and taxonomies are widely seen as an expression of the fundamental epistemological shift from a theocentric to a rationalistic epistemology in the early modern era, which enabled scientific breakthroughs but was also tied to colonization and enslavement. In their book on the theme, Geoffrey Bowker and Susan Leigh Star underscore classification’s use as a powerful but often unrecognized instrument of political ordering: ‘Politically and socially charged agendas are often first presented as purely technical and they are difficult even to see. As layers of classification system become enfolded into a working infrastructure, the original political intervention becomes more and more firmly entrenched. In many cases, this leads to a naturalization of the political category, through a process of convergence. It becomes taken for granted.’
Today, classification is central to machine learning, a subfield of artificial intelligence designed to discern patterns in large amounts of data. This allows it not only to categorize vast amounts of information but also to predict and classify new, previously unseen data. In other words, it applies learned knowledge to new situations. While research on machine learning began in the middle of the last century, it has come to unprecedented prominence recently with applications like ChatGPT.
Machine learning is also increasingly used in border work. Rarely deployed as a stand-alone technology, it is widely applied across existing technologies to augment and accelerate long-established forms of surveillance, identification and sorting. For instance, algorithmic prediction, which analyses large amounts of data including patterns of movement, social media posts, political conflict, natural disasters and more, is increasingly replacing statistical migration modelling for the purpose of charting migratory patterns. The European Commission is currently funding research into algorithmic methods that would augment existing forms of risk assessment by drawing on wider data sources to identify novel forms of ‘risky’ behaviour. Machine learning is also being trialled or used in ‘lie detector’ border guards, dialect recognition, the tracking and identification of suspicious vessels, facial recognition at the EU’s internal borders and behavioural analysis of inmates in Greek camps. As this wide range of applications illustrates, there would seem to be no border technology exempt from machine learning, whether assisted image analysis of drone footage or the vetting of asylum claims.
Classification lies at the core of machine learning – or at least the kind of data-driven machine learning that has become dominant today. Individual data points are organized into categories and sub-categories, a process carried out through either supervised or unsupervised learning. In supervised learning, training data is labelled according to a predefined taxonomy. In practice, this usually means that humans assign labels to data, such as ‘dog’ to an image of said dog. The machine learning model learns from this labelled dataset by identifying patterns that correlate with the labels. In unsupervised learning, the data is not labelled by humans. Instead, the algorithm independently identifies patterns and structures within the data. In other words, the algorithm classifies the data by creating its own clusters based on patterns inherent in the dataset. It creates its own taxonomy of categories, which may or may not align with human-created systems.
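A minimal sketch on toy data, assuming scikit-learn and entirely synthetic numbers, makes the contrast concrete: in the supervised case a human-made taxonomy (the labels 0 and 1) is handed to the model in advance, while in the unsupervised case the algorithm invents its own clusters.

```python
# Supervised vs unsupervised classification on synthetic 2-D data (toy values only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two synthetic groups of data points in a two-dimensional feature space.
group_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2))
group_b = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(50, 2))
X = np.vstack([group_a, group_b])

# Supervised: labels are given in advance; the model learns a boundary that reproduces them.
y = np.array([0] * 50 + [1] * 50)
classifier = LogisticRegression().fit(X, y)
print(classifier.predict([[0.2, 0.1], [2.9, 3.2]]))   # -> [0 1]

# Unsupervised: no labels; the algorithm groups the data into clusters of its own making,
# which may or may not correspond to any category a human would recognize.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print(clusters[:5], clusters[-5:])
```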
The supposed criminal type
As the AI and border scholar Louise Amoore points out, casting algorithmic clusters as a representation of inherent, ‘natural’ patterns in data is an ‘extraordinarily powerful political proposition’ because it ‘offers the promise of a neutral, objective and value-free making and bordering of political community’. The idea of the algorithmic cluster as a ‘natural community’ involves a significant racializing move: forms of behaviour associated with irregular migration are consequently labelled as ‘risky’. Because these clusters are formed irrespective of pre-defined criteria, such as ‘classic’ proxies for race like nationality or religion, they are difficult to challenge with existing concepts like protected characteristics or bias. For instance, a migrant might be identified as a security risk by a machine learning algorithm on the basis of an opaque correlation between travel itineraries, social media posts, personal and professional networks, and weather patterns.
The creation of categories according to inherent attributes echoes and extends other nineteenth-century practices: namely, a range of scientific endeavours using measurement and statistics to identify regularities and patterns that would point to criminal behaviour. Like unsupervised machine learning, the fields of craniometry, phrenology and criminal anthropology systematically gathered data on human subjects to glean patterns that could be sorted into categories of criminality.
For instance, phrenologists like Franz Joseph Gall linked particular character traits to the prominence of areas of the skull. In the related field of physiognomy, figures like the Swiss pastor Johann Kaspar Lavater undertook a systematic study of facial features as a guide to criminal behaviour. Fuelled by the development of photography, studies investigating indicators of criminality in the face gained traction, with convicts and inmates of asylums repeatedly subjected to such ‘studies’. The composite photographs of Francis Galton, the founder of the eugenics movement and a pioneer of fingerprint identification, are a case in point: images of convicts were superimposed onto one another to glean regularities as bodily markers of criminality.
Criminal anthropology consolidated these approaches into a coherent attempt to subject the criminal body to scientific scrutiny. Under the leadership of the Italian psychiatrist and anthropologist Cesare Lombroso, criminal anthropologists used a range of anthropometric tools of measurement, from Bertillon’s precise measurements of limbs to craniometric skull measurements, the mapping of facial features, and the noting of distinctive marks like scars and tattoos. On this basis, they enumerated a list of so-called ‘stigmata’, or bodily regularities found in the body of the ‘born criminal’. While this notion is widely discredited today, the underlying method of classification based on massed data characteristics persists.
Trust in conclusions drawn from the quantitative analysis of facial features retains a strong allure. A 2016 paper claimed to have successfully trained a deep neural network to predict criminality from driving licence head shots, while a 2018 study made similar claims about reading sexual orientation from dating website photographs.
When engaging critically with these systems, it is crucial to remain aware of the larger political project they are deployed to uphold. As AI scholar Kate Crawford writes: ‘Correlating cranial morphology with intelligence and claims to legal rights acts as a technical alibi for colonialism and slavery. While there is a tendency to focus on the errors in skull measurements and how to correct for them, the far greater error is in the underlying worldview that animated this methodology. The aim, then, should be not to call for more accurate or “fair” skull measurements to shore up racist models of intelligence but to condemn the approach altogether.’ Put differently, methods of classification and quantification cannot be divorced from the socio-political contexts they are tasked to verify and vouch for. To rephrase International Relations scholar Robert Cox, classification and quantification are always for someone, and for some purpose.
Yet, as Science and Technology Studies scholar Helga Nowotny cautions, if we ‘trust’ the outputs of algorithmic prediction as necessarily true, we misunderstand the logic of deep neural networks. These networks ‘can only detect regularities and identify patterns based on data that comes from the past. No causal reasoning is involved, nor does an AI pretend that it is.’
While these machines may produce ‘practical and measurable predictions’, they have no sense of cause and effect – in short, no ‘understanding’ in the human sense. Moreover, an overreliance on algorithms nudges us towards determinism, aligning our behaviour with machinic prediction in lieu of other paths. This is a problem in political cultures premised on accountability. If we wish to learn from the past to build better futures, we cannot rely on the predictive outputs of a machine learning model.
AI déjà-vu
There are many threads, besides the shared and continued reliance on quantification and classification, that one might pull on to explore the entangled history of surveillance and identification technologies from the nineteenth century to the present. Marginalized, surplus populations like convicts and colonized people have long been used as ‘technological testing grounds’ to hone classificatory systems and train algorithms. A fear of uncontrolled human mobility continues to be leveraged as a driver for research and development, with tech, in turn, deployed to fix problems it has itself created. And positivistic social scientific methods remain instrumental to the task of translating roaring multiplicities into neat, numerical values.
Instead of falling for AI hype, we might instead attune ourselves to a sense of déjà-vu: the unsettling feeling that we have seen all this before. In this way, we might better resist the fantastical claims made by corporate and border actors, and begin uncoupling these technologies from global projects of domination.
This article is based on research conducted during the project ‘Elastic Borders: Rethinking the Borders of the 21st Century’ based at the University of Graz and funded by the NOMIS Foundation.