Security screening plays a critical role in securing airports, relying heavily on human perception, judgment, attention, and decision-making. The screening task can be draining and monotonous because it is both repetitive and constrained by the limitations of the human perceptual system. These limitations “could potentially be exploited by terrorists or criminals seeking to attack the aviation system”.
What makes this screening process so prone to failure? We’ll look at some of the contributing human factors, then discuss how artificial intelligence (AI) can revolutionize checkpoint security.
Divided attention reduces effectiveness
Though X-ray screening is a task that requires dedicated focus, many factors in the busy and noisy checkpoint environment can distract operators. Noise, slow-downs, temperature, screen reflection, and air quality can all affect screener performance. Even an additional monitor on a dual-view machine can increase the operator’s reaction time and inadvertently lead them to miss items.
A long and growing prohibited items list makes screening complex
During the hectic screening process, operators are looking for dozens, if not hundreds, of potential items on the prohibited items list. Research indicates that the more items a person has to search for, the more time each search takes, and that rarely encountered items (like firearms and ammunition) have higher miss rates.
Mental fatigue dramatically worsens performance over time
The life of a security screening officer can be demanding and incredibly tiring. Research shows that the performance of an X-ray screener starts to suffer after only 10 minutes, and that performance declines exponentially with increasing time. During busy periods, operators might be screening for much longer than expected.
Operators only have a few seconds to make a decision
Screeners have roughly 2.5 seconds to observe a bag’s contents as it comes into full view, after which the next bag may immediately begin to appear.
Motivation (or lack thereof) can impact performance
These aspects of the job can have a demotivating effect on screeners, who can suffer from emotional exhaustion, low job satisfaction, and a lack of motivation. Decreased motivation has a detrimental impact on screening ability: unmotivated screeners are more likely to miss threats, and more likely to quit. Some airports analyzed by BNA experienced turnover of 30-80 per cent across five years.
The role of artificial intelligence (AI)
The very aspects that make the job difficult for humans can be handled readily by AI:
- AI can analyze an image in under 0.2 seconds;
- AI doesn’t get tired or distracted;
- AI can get better over time and learn from operators;
- AI can be updated to detect new items;
- AI can adapt to certain conditions like seasons or flight destinations.
Though it wasn’t possible to use AI for security screening five or six years ago, there have been significant advancements in recent years. AI can automate repetitive, well-defined tasks while learning complex rules that previously only humans could grasp. Studies show that some algorithms can recognize your friends’ faces better than you can. In X-ray screening, operators have to look at every bag every time: whether a bag contains an obvious threat or is obviously clear of threats, an operator has to review it, which contributes to boredom. AI can reduce the number of bags the operator is required to review, bringing checkpoint screening closer to how hold-baggage screening works, where a handful of operators can screen thousands of bags per hour.
This transition will happen in a few stages:
Stage 1: “Operator assist”
AI points out potential prohibited items to operators. Operators still look at every image, but the AI reduces their cognitive load and fatigue, increasing detection rates by helping them find more threats and decreasing image review time (improving throughput where X-ray screening is a bottleneck). Systems like Threat Image Projection (TIP) can help prevent operators from becoming dependent on the AI.
Stage 2: Divide and conquer
AI looks for certain items, while the operator looks for other items and verifies the output of the AI.
This is similar to the adaptive cruise control on a car (“automatic following”). It is one less thing you need to worry about, and the car might prevent an accident by reacting to something faster than you could.
Stage 3: Partial Autoclear
At this point, AI could also be trained to “autoclear” bags by combining several algorithms including Explosives Detection Systems, density checks, and threat detection. If AI detects no potential explosive, no gun or knife (or other items, like perhaps batteries or wires), and the bag is low-density, the AI might automatically clear a certain percentage of bags.
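As a rough illustration only, this kind of autoclear rule can be sketched as a conjunction of independent checks, where any single alarm keeps the bag in the operator's queue. The function name, threat labels, and density threshold below are hypothetical assumptions, not any vendor's actual interface:

```python
# Hypothetical sketch of a partial-autoclear decision. A bag is cleared
# automatically only when every independent check passes. All names and
# thresholds here are illustrative assumptions.

def autoclear(eds_alarm: bool, detected_items: list, max_density: float,
              density_limit: float = 1.5) -> bool:
    """Return True if the bag can be cleared without operator review."""
    if eds_alarm:                    # Explosives Detection System fired
        return False
    if detected_items:               # e.g. ["gun"], ["knife"], ["battery"]
        return False
    if max_density > density_limit:  # dense regions could conceal items
        return False
    return True
```

Under this rule a low-density bag with no detections (`autoclear(False, [], 0.8)`) clears automatically, while any one alarm routes the bag to a screener.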
Stage 4: “Image on alarm only”
By this point, the AI is capable of doing a “first pass” on all bags and deciding which need an operator “post-primary” review. The operator has to look only at bags that the AI is unsure about, or thinks are “too close to call”. In our experience, this amounts to less than five per cent of bags. The rest are either automatically diverted to secondary inspection or cleared. This could free up screeners’ time for other high-value activities like assisting passengers or carrying out behavior detection.
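One way to picture this “first pass” is as a three-way split on the AI’s threat confidence, where only the borderline band reaches an operator. The score bands below are illustrative assumptions; a real system would tune them against regulator requirements:

```python
# Hypothetical "image on alarm only" triage. The confidence bands
# (0.05 and 0.95) are illustrative assumptions.

def triage(threat_score: float) -> str:
    """Route a bag based on the AI's threat confidence (0.0-1.0)."""
    if threat_score >= 0.95:
        return "secondary"   # confident alarm: divert for inspection
    if threat_score <= 0.05:
        return "clear"       # confidently benign: no human review
    return "operator"        # too close to call: post-primary review
```

With well-calibrated scores, most bags fall into the two confident bands, leaving only a small fraction for human review.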
Answers to common questions about AI at the checkpoint
AI uses much more than shape to detect threats
Well-developed AI uses all available information, including item shape, material, and density. AI could detect a firearm even if it were partially made of polymer or partially occluded by another object. It can also learn more advanced features than those that are easily defined. In the above image, AI has flagged a knife based not only on the sharp tip and metal content, but also on the tapering metal edge and mounting bracket.
You don’t need a new machine to leverage AI
AI can be retrofitted to add automated detection abilities to existing machines, including older and smaller single-view models.
Modern algorithms can have very low false alarm rates
Some algorithms today can achieve false alarm rates as low as 0.2 per cent on sharp objects – which could mean about 1-3 false alarms per hour. False alarms are not a bad thing, given that they are equivalent to a human zooming into a part of a bag, using a filter, or squinting to get a closer look. Many things can look like threats and should still be looked at closely (conversely, if there are zero false alarms it is probably because some items are being missed).
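The per-hour figure follows from multiplying the false alarm rate by lane throughput. Assuming an illustrative throughput of 500-1,500 bags per hour per lane (our assumption, not a figure from the article), the arithmetic works out as follows:

```python
# Expected false alarms per hour = false alarm rate x bags screened.
# The throughput range is an illustrative assumption.
FALSE_ALARM_RATE = 0.002  # 0.2 per cent on sharp objects

for bags_per_hour in (500, 1000, 1500):
    alarms = FALSE_ALARM_RATE * bags_per_hour
    print(f"{bags_per_hour} bags/hour -> {alarms:.0f} false alarm(s)")
# 500 bags/hour  -> 1 false alarm(s)
# 1500 bags/hour -> 3 false alarm(s)
```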
Algorithms “learn” very deliberately
While an AI can also learn and improve, the process can be highly controlled. An AI can be customized to a specific airport and adjusted for seasonal or other variations, in a process managed by AI engineers who subject each change to rigorous testing. If a new threat emerges (say, a new hard-to-detect gun), the AI can be quickly updated to detect it, making the system more resilient to evolving terrorist tactics.
Overall, when evaluating AI solutions, it is worth asking:
- What have been the observed false alarm rates in the field? Over what time period, and on how many images, were these rates determined? What is the estimated detection rate in the field? And what have been the false alarm rates and detection rates in the lab?
- How many labelled images were used to train the algorithm? How were these labels quality-controlled?
- How does AI improve over time? Is that learning customizable to any airport or checkpoint?
- How quickly can new items be added to the system?
About the author
Bruno Faviero is Co-Founder and Chief Operating Officer of Synapse Technology, which develops automated threat detection algorithms for security screening, and co-founder of the Global Security Innovation Network. Prior to Synapse, Bruno worked as a software engineer, venture capitalist, and financial product manager.