You Don’t Want the New U.S. Customs Robot to Find You Suspicious

U.S. Customs and Border Protection has a new hire on hand at its Nogales, Ariz., border crossing between the United States and Mexico. CBP has installed an avatar kiosk at the checkpoint to move people enrolled in CBP’s Trusted Traveler program through the crossing quickly, analyzing both what they say and the way they say it for suspicious signals.

Trusted Traveler is a program that pre-screens international travelers and deems them low-risk, allowing them to pass through customs and immigration checkpoints at airports and border crossings more easily. Enrollees undergo a full background check and an interview with a CBP agent before being admitted, but once in, they can take advantage of priority lanes at CBP checkpoints that move faster and involve less rigorous entrance interviews.

The new avatar speaks and understands both English and Spanish, and evaluates a traveler’s responses to questions with both speech recognition software and voice anomaly-detection programs. So it’s not just what you say to the avatar that counts, but how you say it. Pause too long while responding or speak in patterns indicative of deception, and the avatar might flag your response. The evaluation of the interview is then beamed to a live agent with a tablet computer, who can decide how to proceed. A flagged response may prompt further probing by live CBP personnel.
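The basic flagging logic described above can be sketched in a few lines of Python. This is purely illustrative; the thresholds, field names, and scoring model are hypothetical and not CBP's actual system, which has not been published.

```python
# Hypothetical sketch of pause- and anomaly-based response flagging.
# All names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Response:
    text: str             # transcript from the speech-recognition step
    pause_seconds: float  # delay before the traveler began answering
    anomaly_score: float  # 0.0-1.0 output of a voice anomaly-detection model

PAUSE_LIMIT = 2.5     # hypothetical: a longer pause looks evasive
ANOMALY_LIMIT = 0.8   # hypothetical: a higher score suggests vocal stress

def flag_response(r: Response) -> bool:
    """Flag the response for a live agent's review if the pause is
    unusually long or the voice pattern scores as anomalous."""
    return r.pause_seconds > PAUSE_LIMIT or r.anomaly_score > ANOMALY_LIMIT
```

In this sketch a flagged response is simply forwarded to a human; the machine never makes the final call, mirroring the article's description of the agent with the tablet.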

This is actually the second-generation avatar. The first, demoed from December 2011 to March of this year, had some shortcomings: it spoke and understood only English, and its conversation wasn’t very natural. If the respondent began speaking before the avatar had finished asking a question, the computer missed the first part of the response and flagged the query, often unnecessarily.

The University of Arizona researchers who developed the avatar have fixed some of those problems and hope that version 2.0 will be more widely deployable. So far, it seems to be working well enough. The researchers report that by giving their computer a face, a collar, and a tie, people tend to treat it like a real human. They even call it “sir.”

Scientific American