U.S. Intelligence Community Helping Build New Digital Tools To Hunt ‘Misinformation’ Online

The U.S. intelligence community is partnering with a company building software that aims to root out misinformation online, raising questions about whether the U.S. government is surveilling speech on the internet.

Tech company Trust Lab said it is working with the intelligence community’s investment fund In-Q-Tel on a “long-term project that will help identify harmful content and actors in order to safeguard the internet.” 

The federal government is set to receive new internet monitoring tools intended to identify harmful foreign content and speakers.

In-Q-Tel is a nonprofit venture capital firm contracted with the CIA that invests taxpayer dollars in private companies to develop solutions to national security problems facing its partners in the intelligence community.

“Our technology platform will allow IQT’s partners to see, on a single dashboard, malicious content that might go viral and gain prominence around the world,” Trust Lab CEO Tom Siegel said in a statement. “It’s a positive step forward in helping safeguard the internet and prevent harmful misinformation from spreading that could influence elections or cause other negative outcomes.”

The software uses artificial intelligence to spot “high-risk” content, people and transactions, according to Trust Lab. The tech pinpoints “toxicity and misinformation” and helps people understand online trends and narratives relevant to national security.

This new tech is foreign-focused and not aimed at collecting Americans’ content, according to a source familiar with the matter. 

Still, government officials’ monitoring of Americans’ speech has previously skirted the law. The U.S. Postal Service’s inspector general said earlier this year that postal employees overstepped their law enforcement authority in the use of an open-source intelligence tool as part of an internet covert operations program (iCOP) analyzing social media platforms. The program’s analysts monitored American protesters. 

Government tools to spot misinformation have faced heavy scrutiny, particularly after the roll-out earlier this year of the Biden administration’s intended disinformation governance board. The plan to detect and counter disinformation within the Department of Homeland Security was paused amid a public outcry about the government serving as an arbiter of fact and fiction.

Trust Lab’s website says its products are intended to help tech platforms determine which users are fake or unsafe and to discover content that the company deems harmful.

“We label content appropriateness based on real people’s feelings so you can intuitively manage the true impact of your content on users, your brand, and regulators,” according to Trust Lab’s website. 

The company said a “majority of the leading social media companies” already use its tools and services and that its products are built by former executives at tech companies such as Google, YouTube, TikTok, and Reddit. 

Precisely how the U.S. intelligence community plans to use the technology is not fully known, but the CIA said it follows laws regarding data collection involving Americans. 

“Without commenting on specific programs or relationships, CIA at all times abides by U.S. laws, regulations, and executive orders that prohibit unlawful collection related to U.S. persons,” a CIA spokesperson said in a statement.

Application of the new tools may mirror how private companies study foreign influence online. For example, cybersecurity firm Mandiant said in an October report that it identified a pro-China influence campaign leveraging Twitter and other platforms with messages intended to discourage Americans from voting, and that the campaign used fake personas to promote its content.

In-Q-Tel said its work with Trust Lab is about safety and declined to directly answer how the technology would be used and who would use it.

In-Q-Tel also has a longstanding interest in people’s online speech that predates the Biden administration. In August 2020, In-Q-Tel investor Morgan Mahlock said her team had worked with an unnamed company that detects toxic content on social media platforms such as Facebook, Reddit, and Twitter. 

In-Q-Tel did not answer whether Trust Lab was the company Ms. Mahlock referenced. Its attention to social media companies also predates the last few years.

In February 2020 testimony to the House Permanent Select Committee on Intelligence, In-Q-Tel CEO Chris Darby said he met with Twitter’s leadership more than a decade earlier in San Francisco. He said one of his fellow venture capitalists explained that In-Q-Tel needed to invest not in Twitter itself but in “all of the analytic engines” that explain what is happening on the platform.

Details on how many taxpayer dollars were spent on In-Q-Tel’s “strategic partnership” with Trust Lab are not known, and neither group answered questions about funding for the project. 

In-Q-Tel received more than $526 million in taxpayer funds during a five-year period ending in 2020, which represented more than 95% of its revenue, according to paperwork filed with the IRS.   

In-Q-Tel spokeswoman Carrie Sessine said in an email her team’s investments range from $500,000 to $3 million and that In-Q-Tel makes about 65 investments annually in areas such as artificial intelligence, cyber, data analytics, autonomous systems, and more.  

Reporting by The Washington Times.