The Threat Analysis of You

Macrobius

Megaphoron

“The government solution to a problem is usually as bad as the problem and very often makes the problem worse.”—Milton Friedman
You’ve been flagged as a threat.

Before long, every household in America will be similarly flagged and assigned a threat score.

Without having ever knowingly committed a crime or been convicted of one, you and your fellow citizens have likely been assessed for behaviors the government might consider devious, dangerous or concerning; assigned a threat score[1] based on your associations, activities and viewpoints; and catalogued in a government database according to how you should be approached by police and other government agencies based on your particular threat level.

[1]: archived version: https://archive.ph/QQBE7

If you’re not unnerved over the ramifications of how such a program could be used and abused, keep reading.

It’s just a matter of time before you find yourself wrongly accused, investigated and confronted by police based on a data-driven algorithm or risk assessment culled together by a computer program run by artificial intelligence.

Consider the case of Michael Williams, who spent almost a year in jail for a crime he didn’t commit. Williams was behind the wheel when a passing car fired at his vehicle, killing his 25-year-old passenger Safarian Herring, who had hitched a ride.

Despite the fact that Williams had no motive, there were no eyewitnesses to the shooting, no gun was found in the car, and Williams himself drove Herring to the hospital, police charged the 65-year-old man with first-degree murder based on ShotSpotter, a gunshot detection program that had picked up a loud bang on its network of surveillance microphones and triangulated the noise to correspond with a noiseless security video showing Williams’ car driving through an intersection. The case was eventually dismissed for lack of evidence.

Although gunshot detection programs like ShotSpotter are gaining popularity with law enforcement agencies, prosecutors and courts alike, they are riddled with flaws, mistaking “dumpsters, trucks, motorcycles, helicopters, fireworks, construction, trash pickup and church bells…for gunshots.”

As an Associated Press investigation found, “the system can miss live gunfire right under its microphones, or misclassify the sounds of fireworks or cars backfiring as gunshots.”

In one community, ShotSpotter worked less than 50% of the time.

Then there’s the human element of corruption which invariably gets added to the mix. In some cases, “employees have changed sounds detected by the system to say that they are gunshots.” Forensic reports prepared by ShotSpotter’s employees have also “been used in court to improperly claim that a defendant shot at police, or provide questionable counts of the number of shots allegedly fired by defendants.”

The same company that owns ShotSpotter also owns a predictive policing program that aims to use gunshot detection data to “predict” crime before it happens. Both Presidents Biden and Trump have pushed for greater use of these predictive programs to combat gun violence in communities, despite the fact that they have not been found to reduce gun violence or increase community safety.

The rationale behind this fusion of widespread surveillance, behavior prediction technologies, data mining, precognitive technology, and neighborhood and family snitch programs is purportedly to enable the government to take preemptive steps to combat crime (or whatever the government has chosen to outlaw at any given time).

This is precrime, straight out of the realm of dystopian science fiction movies such as Minority Report, which aims to prevent crimes before they happen; but in fact, it’s just another means of getting the citizenry into the government’s crosshairs in order to lock down the nation.

Even Social Services is getting in on the action, with computer algorithms attempting to predict which households might be guilty of child abuse and neglect.

All it takes is an AI bot flagging a household for potential neglect for a family to be investigated, found guilty and the children placed in foster care.

Mind you, potential neglect can include everything from inadequate housing to poor hygiene, but is different from physical or sexual abuse.

According to an investigative report by the Associated Press, once incidents of potential neglect are reported to a child protection hotline, the reports are run through a screening process that pulls together “personal data collected from birth, Medicaid, substance abuse, mental health, jail and probation records, among other government data sets.” The algorithm then calculates the child’s potential risk and assigns a score of 1 to 20 to predict the risk that a child will be placed in foster care in the two years after they are investigated. “The higher the number, the greater the risk. Social workers then use their discretion to decide whether to investigate.”
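
Mechanically, a screening score of this kind is just a predictive model whose output is binned into ranks. The sketch below is purely illustrative -- the county tool's actual features, weights and training are not public -- so every field name and weight here is invented; it only shows the shape of the pipeline the AP describes: pooled government records in, a 1-to-20 score out.

```python
import math

# Hypothetical feature weights -- the real screening tool's model, features and
# weights are not public. This only illustrates the mechanics: pooled
# administrative records -> estimated probability -> a 1-20 screening score.
HYPOTHETICAL_WEIGHTS = {
    "prior_hotline_referrals": 0.8,
    "parent_jail_record":      0.6,
    "medicaid_mental_health":  0.4,
    "public_housing":          0.3,  # poverty-correlated proxies like this are how disparity creeps in
}
BIAS = -3.0

def risk_probability(household: dict) -> float:
    """Logistic model: map counts/flags drawn from government data sets to a probability."""
    z = BIAS + sum(w * household.get(k, 0) for k, w in HYPOTHETICAL_WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

def screening_score(household: dict) -> int:
    """Bin the probability into a 1-20 score of the kind the AP report describes."""
    return min(20, 1 + int(risk_probability(household) * 20))

if __name__ == "__main__":
    family = {"prior_hotline_referrals": 2, "public_housing": 1}
    print(screening_score(family))  # the number a screener sees; not a finding of fact
```

Note that nothing in such a pipeline distinguishes "risk" from "density of records in the data sets being mined," which is exactly the disparity problem described below.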

Other predictive models being used across the country strive to “assess a child’s risk for death and severe injury, whether children should be placed in foster care and if so, where.”

Incredibly, there’s no way for a family to know if AI predictive technology was responsible for their being targeted, investigated and separated from their children. As the AP notes, “Families and their attorneys can never be sure of the algorithm’s role in their lives either because they aren’t allowed to know the scores.”

One thing we do know, however, is that the system disproportionately targets poor, black families for intervention, disruption and possibly displacement, because much of the data being used is gleaned from lower income and minority communities.

The technology is also far from infallible. In one county alone, a technical glitch presented social workers with the wrong scores, either underestimating or overestimating a child’s risk.

Yet fallible or not, AI predictive screening programs are being used widely across the country by government agencies to surveil and target families for investigation. The fallout of this over-surveillance, according to Aysha Schomburg, the associate commissioner of the U.S. Children’s Bureau, is “mass family separation.”

The impact of these kinds of AI predictive tools is being felt in almost every area of life.

Under the pretext of helping overwhelmed government agencies work more efficiently, AI predictive and surveillance technologies are being used to classify, segregate and flag the populace with little concern for privacy rights or due process.

All of this sorting, sifting and calculating is being done swiftly, secretly and incessantly with the help of AI technology and a surveillance state that monitors your every move.

Where this becomes particularly dangerous is when the government takes preemptive steps to combat crime or abuse, or whatever the government has chosen to outlaw at any given time.

In this way, government agents—with the help of automated eyes and ears, a growing arsenal of high-tech software, hardware and techniques, government propaganda urging Americans to turn into spies and snitches, as well as social media and behavior sensing software—are spinning a sticky spider-web of threat assessments, behavioral sensing warnings, flagged “words,” and “suspicious” activity reports aimed at snaring potential enemies of the state.

Are you a military veteran suffering from post-traumatic stress disorder? Have you expressed controversial, despondent or angry views on social media? Do you associate with people who have criminal records or subscribe to conspiracy theories? Were you seen looking angry at the grocery store? Is your appearance unkempt in public? Has your driving been erratic? Did the previous occupants of your home have any run-ins with police?

All of these details and more are being used by AI technology to create a profile of you that will impact your dealings with government.

It’s the American police state rolled up into one oppressive pre-crime and pre-thought crime package, and the end result is the death of due process.

In a nutshell, due process was intended as a bulwark against government abuses. Due process prohibits the government from depriving anyone of “Life, Liberty, and Property” without first ensuring that an individual’s rights have been recognized and respected and that they have been given the opportunity to know the charges against them and defend against those charges.

With the advent of government-funded AI predictive policing programs that surveil and flag someone as a potential threat to be investigated and treated as dangerous, there can be no assurance of due process: you have already been turned into a suspect.

To disentangle yourself from the fallout of such a threat assessment, the burden of proof rests on you to prove your innocence.

You see the problem?

It used to be that every person had the right to be assumed innocent until proven guilty, and the burden of proof rested with one’s accusers. That assumption of innocence has since been turned on its head by a surveillance state that renders us all suspects and overcriminalization which renders us all potentially guilty of some wrongdoing or other.

Combine predictive AI technology with surveillance and overcriminalization, then add militarized police crashing through doors in the middle of the night to serve a routine warrant, and you’ll be lucky to escape with your life.

Yet be warned: once you get snagged by a surveillance camera, flagged by an AI predictive screening program, and placed on a government watch list—whether it’s a watch list for child neglect, a mental health watch list, a dissident watch list, a terrorist watch list, or a red flag gun watch list—there’s no clear-cut way to get off, whether or not you should actually be on there.

You will be tracked wherever you go, flagged as a potential threat and dealt with accordingly.

If you’re not scared yet, you should be.

We’ve made it too easy for the government to identify, label, target, defuse and detain anyone it views as a potential threat for a variety of reasons that run the gamut from mental illness to having a military background to challenging its authority to just being on the government’s list of persona non grata.

As I make clear in my book Battlefield America: The War on the American People and in its fictional counterpart The Erik Blair Diaries, you don’t even have to be a dissident to get flagged by the government for surveillance, censorship and detention.

All you really need to be is a citizen of the American police state.
 

Macrobius

Megaphoron
The problem with using AI technology for this sort of application is of course that Machine Learning is inherently discriminatory (the aim is to *build* a 'classifier' or 'discriminator' function) and prejudicial (a model by definition pre-judges the facts).

It hides behind the notion that anything a computer does is 'objective' -- objective only in the sense that it is not subjective, because it is not carried out by a thinking being at all. Dropping a woman in the water to see if she's a witch is 'objective' in this sense. The outcome is determinate and follows physics, not reason.

The word 'algorithm' is of course misused -- Machine Learning models are not 'algorithms' but produced by them. The training algorithm comes with certain guarantees, sometimes, about how well the training data can or will be fitted by the model. The ability to 'generalize' to other (OOS or out-of-sample) data is not guaranteed nor can it be, in any algorithmic sense.[1]

[1]: https://en.wikipedia.org/wiki/Generalization_error - there are *bounds* on generalization error in some cases. The term 'Probably Approximately Correct' (PAC) from SLT (statistical learning theory) should not give you the warm fuzzies.[2]

[2]: https://en.wikipedia.org/wiki/Probably_approximately_correct_learning
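
For reference, the guarantee in [2] is a statement about the learning procedure averaged over random training samples, not about any particular learned model's decisions. In the standard formulation (symbols as usually defined in the linked article):

```latex
% A PAC-learns concept class C if, for every target c in C, every distribution D,
% and every epsilon, delta in (0,1), given polynomially many i.i.d. samples S
% drawn from D, it outputs a hypothesis h_S such that
\Pr_{S \sim D^{m}}\left[\operatorname{err}_{D}(h_S) \le \varepsilon\right] \ge 1 - \delta,
\qquad \text{where } \operatorname{err}_{D}(h) = \Pr_{x \sim D}\left[h(x) \ne c(x)\right].
```

That is a promise about error measured against the same distribution the model was trained on; it says nothing about a household drawn from a population the training data under- or misrepresents.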

'A is a PAC-learning Algorithm for Concept C' does not imply that the output of A (the learned model) is an algorithm for determining membership in C with no risk -- or, in many cases, with even little risk.
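
A minimal, self-contained sketch of that gap, on an invented toy dataset (nothing here models any real policing or screening system): a model that memorises its training sample has zero training error by construction, yet its error on fresh draws -- here from a slightly shifted population -- is noticeably worse.

```python
import random

random.seed(0)

def make_data(n, mean):
    """Toy 1-D data: the true label depends on a latent trait; we only observe a noisy proxy."""
    data = []
    for _ in range(n):
        trait = random.gauss(mean, 1.0)
        label = 1 if trait > 0 else 0
        observed = trait + random.gauss(0, 0.5)  # noisy measurement of the trait
        data.append((observed, label))
    return data

def one_nn_predict(train, x):
    """1-nearest-neighbour 'model': it simply memorises the training sample."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

def error_rate(train, data):
    return sum(one_nn_predict(train, x) != y for x, y in data) / len(data)

train = make_data(200, mean=+0.5)  # the sample the model is fitted to
print("training error      :", error_rate(train, train))                       # 0.0 by construction
print("out-of-sample error :", error_rate(train, make_data(2000, mean=-0.5)))  # noticeably higher
```

The training algorithm kept its promise (the training data are fitted perfectly); the promise just wasn't the one the deployment scenario needed.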

PAW ... Probably Approximately Huwhyte.

Two possibilities with the (mis)classification:

- it might not actually be even 'approximately white' [ our 'algorithm' tried to apply the paper bag test and fails some percent of the time even at that ], and
- it was never really white to begin with, only approximately so [ passes paper bag test but isn't really white ]
- there is also a type-three misclassification (political correctness model error): men cannot be probably approximately preggers.

That was approximately two. ;)
 

Macrobius

Megaphoron
The WP article from seven months ago that was archived in the OP:

FRESNO, Calif. — While officers raced to a recent 911 call about a man threatening his ex-girlfriend, a police operator in headquarters consulted software that scored the suspect’s potential for violence the way a bank might run a credit report.

The program scoured billions of data points, including arrest reports, property records, commercial databases, deep Web searches and the man’s social-media postings. It calculated his threat level as the highest of three color-coded scores: a bright red warning.

The man had a firearm conviction and gang associations, so out of caution police called a negotiator. The suspect surrendered, and police said the intelligence helped them make the right call — it turned out he had a gun.

As a national debate has played out over mass surveillance by the National Security Agency, a new generation of technology such as the Beware software being used in Fresno has given local law enforcement officers unprecedented power to peer into the lives of citizens.

Police officials say such tools can provide critical information that can help uncover terrorists or thwart mass shootings, ensure the safety of officers and the public, find suspects, and crack open cases. They say that last year’s attacks in Paris and San Bernardino, Calif., have only underscored the need for such measures.

But the powerful systems also have become flash points for civil libertarians and activists, who say they represent a troubling intrusion on privacy, have been deployed with little public oversight and have potential for abuse or error. Some say laws are needed to protect the public.

In many instances, people have been unaware that the police around them are sweeping up information, and that has spawned controversy. Planes outfitted with cameras filmed protests and unrest in Baltimore and Ferguson, Mo. For years, dozens of departments used devices that can hoover up all cellphone data in an area without search warrants. Authorities in Oregon are facing an internal investigation after using social media-monitoring software to keep tabs on Black Lives Matter hashtags.

“This is something that’s been building since September 11,” said Jennifer Lynch, a senior staff attorney at the Electronic Frontier Foundation. “First funding went to the military to develop this technology, and now it has come back to domestic law enforcement. It’s the perfect storm of cheaper and easier-to-use technologies and money from state and federal governments to purchase it.”

Few departments will discuss how — or sometimes if — they are using these tools, but the Fresno police offered a rare glimpse inside a cutting-edge $600,000 nerve center, even as a debate raged in the city over its technology.

An arsenal of high-tech tools

Fresno’s Real Time Crime Center is the type of facility that has become the model for high-tech policing nationwide. Similar centers have opened in New York, Houston and Seattle over the past decade.

Fresno’s futuristic control room, which operates around the clock, sits deep in its headquarters and brings together a handful of technologies that allow the department to see, analyze and respond to incidents as they unfold across this city of more than 500,000 in the San Joaquin Valley.

Fresno police are using software that has given law enforcement powers to peer into the lives of citizens. (Nick Otto/For The Washington Post)
On a recent Monday afternoon, the center was a hive of activity. The police radio crackled over loudspeakers — “subject armed with steel rod” — as five operators sat behind banks of screens dialing up a wealth of information to help units respond to the more than 1,200 911 calls the department receives every day.

On 57 monitors that cover the walls of the center, operators zoomed and panned an array of roughly 200 police cameras perched across the city. They could dial up 800 more feeds from the city’s schools and traffic cameras, and they soon hope to add 400 more streams from cameras worn on officers’ bodies and thousands more from local businesses that have surveillance systems.

The cameras were only one tool at the ready. Officers could trawl a private database that has recorded more than 2 billion scans of vehicle license plates and locations nationwide. If gunshots were fired, a system called ShotSpotter could triangulate the location using microphones strung around the city. Another program, called Media Sonar, crawled social media looking for illicit activity. Police used it to monitor individuals, threats to schools and hashtags related to gangs.

Fresno police said having the ability to access all that information in real time is crucial to solving crimes.

Officers with the Fresno Police Department respond to a domestic disturbance call. (Nick Otto/For The Washington Post)

Fresno police officers inside the police department's crime center. (Nick Otto/For The Washington Post)

They recently used the cameras to track a robbery suspect as he fled a business and then jumped into a canal to hide. He was quickly apprehended.

The license plate database was instrumental in solving a September murder case, in which police had a description of a suspect’s vehicle and three numbers from the license plate.

But perhaps the most controversial and revealing technology is the threat-scoring software Beware. Fresno is one of the first departments in the nation to test the program.

As officers respond to calls, Beware automatically runs the address. The searches return the names of residents and scans them against a range of publicly available data to generate a color-coded threat level for each person or address: green, yellow or red.

Exactly how Beware calculates threat scores is something that its maker, Intrado, considers a trade secret, so it is unclear how much weight is given to a misdemeanor, felony or threatening comment on Facebook. However, the program flags issues and provides a report to the user.

In promotional materials, Intrado writes that Beware could reveal that the resident of a particular address was a war veteran suffering from post-traumatic stress disorder, had criminal convictions for assault and had posted worrisome messages about his battle experiences on social media. The “big data” that has transformed marketing and other industries has now come to law enforcement.
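
[Illustration, not from the Post article: since Intrado treats the scoring as a trade secret, any reconstruction is guesswork. The toy rule below only illustrates the mechanics the article describes -- public and commercial records in, a green/yellow/red label out -- and every field name and threshold in it is invented.]

```python
# Invented fields and thresholds -- Beware's actual inputs and weights are a
# trade secret. This only shows the shape of "records in, color code out".
def threat_color(record: dict) -> str:
    points = 0
    points += 3 * record.get("felony_convictions", 0)
    points += 2 * record.get("gang_association_flags", 0)
    points += 1 * record.get("flagged_social_media_posts", 0)  # e.g. tweets about the card game "Rage"
    if points >= 5:
        return "red"
    if points >= 2:
        return "yellow"
    return "green"

# An address query labels everyone associated with the address -- including
# former occupants, which is how a council member's own house can come back yellow.
print(threat_color({"flagged_social_media_posts": 1}))                        # green
print(threat_color({"felony_convictions": 1, "gang_association_flags": 1}))   # red
```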

Fresno Police Chief Jerry Dyer said officers are often working on scant or even inaccurate information when they respond to calls, so Beware and the Real Time Crime Center give them a sense of what may be behind the next door.

Fresno Chief of Police Jerry Dyer inside the Fresno Police Department's crime center. (Nick Otto/For The Washington Post)

“Our officers are expected to know the unknown and see the unseen,” Dyer said. “They are making split-second decisions based on limited facts. The more you can provide in terms of intelligence and video, the more safely you can respond to calls.”

But some in Fresno say the power and the sheer concentration of surveillance in the Real Time Crime Center is troubling. The concerns have been raised elsewhere as well — last year, Oakland city officials scaled back plans for such a center after residents protested, citing privacy concerns.

Rob Nabarro, a Fresno civil rights lawyer, said he is particularly concerned about Beware. He said outsourcing decisions about the threat posed by an individual to software is a problem waiting to happen.

Nabarro said the fact that only Intrado — not the police or the public — knows how Beware tallies its scores is disconcerting. He also worries that the system might mistakenly increase someone’s threat level by misinterpreting innocuous activity on social media, like criticizing the police, and trigger a heavier response by officers.

“It’s a very unrefined, gross technique,” Nabarro said of Beware’s color-coded levels. “A police call is something that can be very dangerous for a citizen.”

Dyer said such concerns are overblown, saying the scores don’t trigger a particular police response. He said operators use them as guides to delve more deeply into someone’s background, looking for information that might be relevant to an officer on scene. He said officers on the street never see the scores.

Lt. Dave Ramos of the Fresno Police Department checks his computer after responding to a disturbance call that came in through the crime center. (Nick Otto/For The Washington Post)

Still, Nabarro is not the only one worried.

The Fresno City Council called a hearing on Beware in November after constituents raised concerns. One council member referred to a local media report saying that a woman’s threat level was elevated because she was tweeting about a card game titled “Rage,” which could be a keyword in Beware’s assessment of social media.

Councilman Clinton J. Olivier, a libertarian-leaning Republican, said Beware was like something out of a dystopian science fiction novel and asked Dyer a simple question: “Could you run my threat level now?”

Dyer agreed. The scan returned Olivier as a green, but his home came back as a yellow, possibly because of someone who previously lived at his address, a police official said.

“Even though it’s not me that’s the yellow guy, your officers are going to treat whoever comes out of that house in his boxer shorts as the yellow guy,” Olivier said. “That may not be fair to me.”

He added later: “[Beware] has failed right here with a council member as the example.”

An Intrado representative responded to an interview request seeking more information about how Beware works by sending a short statement. It read in part: “Beware works to quickly provide [officers] with commercially available, public information that may be relevant to the situation and may give them a greater level of awareness.”

Calls for ‘meaningful debate’

Similar debates over police surveillance have been playing out across the country, as new technologies have proliferated and law enforcement use has exploded.

The number of local police departments that employ some type of technological surveillance increased from 20 percent in 1997 to more than 90 percent in 2013, according to the latest information from the Bureau of Justice Statistics. The most common forms of surveillance are cameras and automated license plate readers, but the use of handheld biometric scanners, social media monitoring software, devices that collect cellphone data and drones is increasing.

Locally, the American Civil Liberties Union reports that police in the District, Baltimore, and Montgomery and Fairfax counties have cellphone-data collectors, called cell site simulators or StingRays. D.C. police are also using ShotSpotter and license plate readers.

The surveillance creates vast amounts of data, which is increasingly pooled in local, regional and national databases. The largest such project is the FBI’s $1 billion Next Generation Identification project, which is creating a trove of fingerprints, iris scans, data from facial recognition software and other sources that aid local departments in identifying suspects.

Law enforcement officials say such tools allow them to do more with less, and they have credited the technology with providing breaks in many cases. Virginia State Police found the man who killed a TV news crew during a live broadcast last year after his license plate was captured by a reader.

Cell site simulators, which mimic a cellphone tower and scoop up data on all cellphones in an area, have been instrumental in finding kidnappers, fugitives and people who are suicidal, law enforcement officials said.

A security camera used by the Fresno Police Department. (Nick Otto/For The Washington Post)

A computer inside a patrol car with a disturbance call on the screen. (Nick Otto/For The Washington Post)

But those benefits have sometimes come with a cost to privacy. Law enforcement used cell site simulators for years without getting a judge’s explicit consent. But following criticism by the ACLU and other groups, the Justice Department announced last September that it would require all federal agencies to get a search warrant.

The fact that public discussion of surveillance technologies is occurring after they are in use is backward, said Matt Cagle, an attorney for the ACLU of Northern California.

“We think that whenever these surveillance technologies are on the table, there needs to be a meaningful debate,” Cagle said. “There needs to be safeguards and oversight.”

After the contentious hearing before the Fresno City Council on Beware, Dyer said he now wants to make changes to address residents’ concerns.

The police chief said he is working with Intrado to turn off Beware’s color-coded rating system and possibly the social media monitoring.

“There’s a balancing act,” Dyer said.

An earlier version of this story incorrectly identified the nature of the probe Oregon authorities are facing for allegedly monitoring Black Lives Matter traffic on Twitter. It is an internal investigation.
 