Connecticut will pay a security analyst $150,000 to monitor election memes for misinformation
The state is the latest hoping to quash online rumors before they impact the polls.
Ahead of the upcoming midterm elections, Connecticut is hiring a “security analyst” tasked with monitoring and addressing online misinformation. The New York Times first reported this new position, saying the job description will include spending time on “fringe sites like 4chan, far-right social networks like Gettr and Rumble and mainstream social media sites.” The goal is to identify election-related rumors and attempt to mitigate the damage they might cause by flagging them to platforms that have misinformation policies and promoting educational content that can counter those false narratives.
Connecticut Governor Ned Lamont’s midterm budget, approved in early May, set aside more than $6 million to make improvements to the state’s election system. That includes $4 million to upgrade the infrastructure used for voter registration and election management and $2 million for a “public information campaign” that will provide information on how to vote. The full-time security analyst role is recommended to receive $150,000.
“Over the last few election cycles, malicious foreign actors have demonstrated the motivation and capability to significantly disrupt election activities, thus undermining public confidence in the fairness and accuracy of election results,” the budget states in explaining the funding.
While the role is a first for Connecticut, the NYT noted that it’s part of a growing nationwide trend. Colorado, for example, has a Rapid Response Election Security Cyber Unit tasked with monitoring online misinformation, as well as identifying “cyber-attacks, foreign interference, and disinformation campaigns.” Originally created in anticipation of the 2020 presidential election, which proved to be fruitful ground for misinformation, the NYT says the unit is being “redeployed” this year. Other states, including Arizona, California, Idaho, and Oregon, are similarly funding election information initiatives in an attempt to counter misinformation, provide educational information, or do both.
The federal government also attempted to create its own Disinformation Governance Board in April to “coordinate countering misinformation related to homeland security,” according to The Washington Post. The initiative proved controversial, though, earning the ire of conservatives and free speech advocates alike, and the plan was put on pause just weeks after its initial announcement.
Politics aside, ample data and real-world examples support the notion that misinformation poses a growing threat to the electoral process in the United States. Along with the work being done in government, social media giant Meta—which received significant criticism for its handling of disinformation on Facebook during the 2016 election—released its own plan to tackle the expected influx of such content related to the fall midterms.