In the weeks after the Jan. 6, 2021, insurrection, law enforcement agencies and internet sleuths identified hundreds of people who stormed the U.S. Capitol. Many were later arrested or faced consequences at their jobs or in their communities.
Authorities used a variety of technologies to speed up that process, a necessity given the millions of images, videos, messages, social media posts and bits of location data to parse.
Anjana Susarla is a professor of responsible artificial intelligence and information systems at Michigan State University and has been studying the role that tech, especially image recognition, is playing in the ongoing search for suspects. The following is an edited transcript of our conversation.
Anjana Susarla: I would say that a lot of facial recognition, but more broadly, even image recognition techniques have been deployed. There have been examples where someone’s wearing a T-shirt with a logo, and people were able to trace that logo to, let’s say, a specific coffee shop in, say, North Carolina. So a lot of sophisticated digital fingerprinting analysis of social media posts, even hotel and Airbnb stays. Evidence even came from dating sites like Bumble: when folks posted pictures of themselves during the insurrection, women were able to take screenshots and alert the FBI.
Kimberly Adams: How much do you think AI and this online detective work contributed to these positive identifications and eventually arrests of people?
Susarla: Artificial intelligence helps us filter through, if you will, a whole lot of different images. And then the final step is human intelligence. If you want to see it as a triumph of surveillance techniques, then it absolutely is. Each of our phones is really a very sophisticated tracking device, with records from apps that people have used. And so I think the FBI subpoenaed records from, you know, everyone from Apple to sites like Parler, but there are also mobile phone records. So there is a very intimate portrait we can create of what they did, where they stayed, where they went after they participated in the events of Jan. 6.
Adams: You said it’s a sort of triumph for surveillance technology. Is that a good or a bad thing?
Susarla: It definitely raises all these questions of what happens when police departments everywhere are using these same methods. [Facial recognition technology company] Clearview AI reported a 26% increase in usage among police departments. What it means is that it essentially changes social media and all the online digital fingerprinting that is possible. So that raises the question of, you know, what are the civil rights implications of all these technologies?
Adams: Coming back to that human intelligence part of this, it feels like Jan. 6 happened at something of a unique tech moment where we had this AI available, people on social media and a lot of people at home because of the pandemic who maybe traditionally might not have had that much time on their hands. What did the moment at which Jan. 6 occurred sort of in the arc of technological development mean for how this all played out?
Susarla: You know, it definitely comes at an important moment in our history where we are doing everything online. And we leave traces of our lives, so much of our lives, online. And the digital platforms also control a lot of what we see because of the methods they use to recommend posts, surface trending posts or curate content for us. So, in that sense, I think this is a moment that fundamentally changes maybe our relationship with algorithms and the world we live in. Police departments can use Clearview AI, but I, sitting at home, can also go and use some of the same methods and work with other people using social media. So does that mean we get a little bit more control back as individuals? I’m not quite sure.
Adams: How would you describe the tech legacy of Jan. 6?
Susarla: Well, increased reliance on facial recognition, definitely increased surveillance techniques. And this also raises some issues about how we are monitoring hate speech online and the content filters. Facebook, for example, with some of its civic filters, looks at activity in any group over a seven-day period and will declassify a group as a hate group if no hate speech is posted in that window. But that’s maybe inadequate, because group dynamics may be different and how people communicate may be different.
Related links: More insight from Kimberly Adams
That detail Susarla mentioned about Facebook’s content filters — The Verge wrote about it during coverage of “The Facebook Files” leaked by Frances Haugen.
While many focused on social media platforms like Parler as playing a key role leading up to the attack, The Washington Post reports that Facebook also played a “critical role” in spreading the misinformation that fueled the riot. According to the report, there were at least 650,000 posts on the site between Election Day and Jan. 6 attacking the legitimacy of the 2020 election. That’s about 10,000 a day.
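The Post’s per-day figure is easy to sanity-check: a quick back-of-the-envelope calculation (assuming Election Day, Nov. 3, 2020, as the start date) confirms that 650,000 posts over that stretch works out to roughly 10,000 a day.

```python
# Sanity check of the cited rate: at least 650,000 posts between
# Election Day (Nov. 3, 2020) and Jan. 6, 2021.
from datetime import date

posts = 650_000
days = (date(2021, 1, 6) - date(2020, 11, 3)).days  # 64 days
per_day = posts / days
print(days, round(per_day))  # 64 days, roughly 10,000 posts per day
```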
And like the professor said, tech helped people weed through the millions of images and videos that are helping authorities find and prosecute the people who raided the Capitol, but the final step was human intelligence.
The Huffington Post is among several outlets that have profiled people who’ve come to be known as sedition hunters.