New York lawmaker questions state’s use of identity-verification vendor

In a letter to the state chief information officer, New York state Sen. Jeremy Cooney raises concerns about the state's use of AI-powered software from the identity-verification firm Socure. The company says many of the claims are false.

In a letter penned this month to Dru Rai, New York state’s chief information officer, state Sen. Jeremy Cooney raised concerns regarding Socure, a fraud prevention and identity verification firm used by the state, citing the vendor’s data practices and how it uses artificial intelligence.

The letter from Cooney, dated July 10, asks Rai how the state’s Office of Information Technology Services has vetted Socure, which provides identity verification services to New York state in addition to more than 20 state government agencies and multiple federal agencies. In an interview with StateScoop, though, Socure executives said many of Cooney’s claims are simply false and that he misunderstands how the company’s technology works.

Socure’s technology relies on AI and machine learning to analyze several thousand data points to predict fraudulent identity activity. For governments, it predicts fraud for resident services, such as by scanning benefits applications. Fraud is a growing concern for state agencies, which since the COVID-19 pandemic have seen heightened levels of fraud across many government functions.
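
Socure has not published the details of its models. As a rough illustration of how fraud-prediction systems of this general kind work, the sketch below trains a simple classifier on labeled application records and flags high-risk applications; the feature names, training data and threshold are hypothetical assumptions, not Socure’s actual system.

```python
# Rough illustration of a machine-learning fraud-prediction step.
# The features, training data and 0.8 threshold are hypothetical,
# not Socure's actual models or data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical signals extracted from a benefits application:
# [email_age_days, phone_matches_carrier_record,
#  address_matches_public_record, ssn_seen_in_prior_fraud]
X_train = np.array([
    [2000, 1, 1, 0],  # long-lived email, consistent records
    [3,    0, 0, 1],  # brand-new email, mismatched records
    [1500, 1, 0, 0],
    [10,   0, 1, 1],
])
y_train = np.array([0, 1, 0, 1])  # 1 = confirmed fraud, 0 = legitimate

model = GradientBoostingClassifier().fit(X_train, y_train)

def fraud_risk(features):
    """Return the model's estimated probability that an application is fraudulent."""
    return model.predict_proba([features])[0][1]

new_application = [5, 0, 0, 1]
score = fraud_risk(new_application)
print(f"estimated fraud risk: {score:.2f}")
if score > 0.8:  # threshold chosen for illustration only
    print("flag application for further review")
```

Production systems would draw on far more signals and training data, but the basic pattern of producing a risk score and acting on a threshold is the same general idea.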

Cooney, who also chairs the Senate Procurement and Contracts Committee, said the way Socure obtains and uses those data points, many of which would be considered personally identifiable information, is concerning.


Referring to the company as a data broker, Cooney wrote that it “collects, purchases and stores billions of data points, including sensitive personal identifiable information, on New Yorkers without their consent to confirm their identities.” While noting that identity verification “is critical for ensuring equitable access to public services,” Cooney said the risks of using AI include preventing people from accessing critical government services.

“Innovation should never come at the cost of good governance and transparency,” Cooney wrote in an email to StateScoop. “Given the widespread concerns around Socure’s business practices and the growing recognition of AI’s risks, it is important to scrutinize any work they are doing for New York state agencies. I deeply appreciate the hard work and ongoing efforts of the State CIO’s office to make sure our state’s digital systems grant every New Yorker secure, equitable access to state services and uphold personal privacy.”

In the letter, Cooney also asked Rai whether the state requires that Socure include a human review of algorithmic output to ensure it’s accurate and not discriminatory, and whether the state has tested Socure’s fraud prediction models for bias.

“Has the state confirmed whether Socure’s practices fully comply with NY state privacy law, specifically related to its mass collection of sensitive PII, partnership with data brokers, and use of social media data?” Cooney asked in the letter.


‘We are not a data broker’

Jordan Burris, vice president of public sector strategy for Socure and the former chief of staff in the White House’s Office of the Federal CIO, told StateScoop that portions of the letter fundamentally misunderstand what the company does, noting that Socure is not a data broker. Additionally, New York state has yet to pass a comprehensive data privacy law that would legally define what constitutes a data broker within the state. Its data privacy act is still in committee for the second year in a row.

“We do not sell data to third parties, we do not use it for marketing. We do not use it to run a marketplace, offering online discounts for e-commerce, like other companies in the space,” Burris told StateScoop. “We are only focused on verifying identity and rooting out fraud, and ultimately, under looking at what is exactly New York State law today, we are not a data broker, and to suggest otherwise is simply false.”

Cooney’s letter follows at least two other instances this year in which New York state leaders have raised concerns about Socure and its data practices. Rep. Ritchie Torres, D-N.Y., in February wrote a letter to Socure CEO Johnny Ayers over concerns that his company’s digital identity verification software might lead to discrimination.

“You claim your product, ‘fuses personal identifiable information (PII) validated by thousands of data sources’ in order to prevent fraud,” Torres’ letter read. “Companies’ abuse of private data can also lead to the unwanted tracking and sale of people’s sensitive health data, genetic information, religious participation, and location. Given the lack of transparency around your services, constituents in my district have expressed legitimate privacy concerns and demand to know how you source their data, how it is used, and whether it is equitable for all American communities.”


While Cooney’s recent letter claims Torres’ letter went unanswered, Socure told StateScoop it met with Torres’ office to review some of its complaints. StateScoop contacted Torres’ office for comment, but did not hear back before publication.

Data sources

In March, Rev. Al Sharpton of the National Action Network wrote a letter to New York State Attorney General Letitia James citing concerns about Socure’s lack of transparency regarding the types of data it uses to perform identity verification.

“Socure also collects data from thousands of data sources, including personally identifiable information (PII), without providing any meaningful transparency regarding how that data is acquired, stored, and used,” Sharpton’s letter read. “Socure scrapes social media, utilizes geolocation technology, and deploys artificial intelligence technology to conduct its business. They have no help line, and people have no recourse should their identity be denied mistakenly. These practices have historically and consistently hurt marginalized communities.”

When asked how Socure obtains data to perform identity verification, Burris said the company buys and otherwise obtains data from a variety of public and private sources to “bring in house.” These sources include public records, mobile network operators — like Verizon and AT&T — and higher education institutions, Burris said. He added that Socure’s data scientists evaluate the “authoritativeness of that data.”


“I’m not looking to buy data for data’s sake. I’m looking at data for the purpose of what we can do with it,” Burris said. “The only purpose for us having it is to help with identity verification in particular. … And then we even have a proprietary database that we’ve built of known fraudulent identities that we’ve identified over our 12-year existence.”

‘Pressure testing’

As for concerns about effects on marginalized communities, Burris said the company is “pressure testing” its AI models by checking for bias across demographics like age, race, gender and other protected classes.
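
Burris did not detail how those checks are performed. One common approach, sketched below with made-up numbers, is to compare error rates across demographic groups, such as how often legitimate applicants are wrongly flagged; the group labels, data and interpretation here are illustrative assumptions, not Socure’s methodology.

```python
# Rough illustration of one common bias check: comparing the rate at which
# legitimate applicants are wrongly flagged across demographic groups.
# All numbers and group labels are made up for illustration.
import pandas as pd

results = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "is_fraud": [0,   0,   0,   1,   0,   0,   0,   1],  # ground truth
    "flagged":  [0,   1,   0,   1,   0,   0,   0,   1],  # model decision
})

# False-positive rate per group: share of legitimate applicants flagged as fraud.
legitimate = results[results["is_fraud"] == 0]
fpr_by_group = legitimate.groupby("group")["flagged"].mean()
print(fpr_by_group)

# A persistent gap between groups would be a signal to re-examine the model,
# its features and its training data.
```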

On the topic of human review in the identity verification process, Burris said “humans are involved all throughout the process.”

“The question of are human reviewers evaluating every identity decision fundamentally misunderstands the challenges that exist with verifying identity today,” Burris said. “We are going backwards if we heavily rely on human reviews to verify identity. The cost is long wait times, backlogs and good people who ultimately will continue to be underserved.”


In an email, a spokesperson for the New York Office of Information Technology Services said: “We take our responsibility to protect the privacy of every single resident accessing state programs or services very seriously, and have implemented the strongest possible security measures to ensure it.”
