America’s potential adversaries have been actively meddling in the 2024 presidential election, and that includes efforts directed from Beijing. One such campaign, known as “Spamouflage,” reportedly utilizes fake social media accounts claiming to be American voters and even U.S. military personnel.
The effort targets both parties, with some attacks directed at President Joe Biden and Vice President Kamala Harris and others aimed at former President Donald Trump. The goal appears to be sowing discord and deepening America’s political divisions.
“Spamouflage is not the most sophisticated influence operation targeting the United States, but it is quite interesting from a strategic perspective. Until the last couple of years, the Chinese Communist Party was mostly focused on ‘positive’ messaging in its propaganda. In other words, rather than trying to denigrate adversaries, their primary—though not sole—strategy was to make China look better both for domestic and international audiences,” explained Dr. Craig Albert, graduate director of the Master of Arts in Intelligence and Security Studies at Augusta University.
A Spamalot Of Effort
As with other misinformation and disinformation campaigns, the aim is to confuse the issues, raise the noise level, and drown out civil discussion. It is hardly original, but China has a long history of copying from the best, and in this case it was Russia that created the template.
“Spamouflage is interesting because it illustrates the Chinese Communist Party’s shift from positive messaging to a narrative mirroring Russia and Iran: traditional information warfare targeting decision-making processes and influencing opinions of U.S. citizens to sow discord and spark potential riots, unrest, and violent protests if not more serious events,” warned Albert.
Much like the efforts directed from the Kremlin, China’s ultimate goal in this campaign, and in its broader influence operations, is to make Americans lose faith in democracy and, as a result, distrust their government overall.
“Their grand idea, akin to Russia’s strategy, is to spark a civil war within the United States by pitting ideological extremes against one another,” said Albert.
Even as Americans remain as politically divided as at any time since the Civil War, the country is not close to armed confrontation. The discourse seems more like an uncivil war conducted on the anti-social networks.
However, Albert further cautioned that Beijing may be testing the waters by interfering in the election, even if it won’t lead to actual confrontation.
“Spamouflage is not there yet, but it is noteworthy in that it shows China is probing the possibilities they could exploit for election interference or for setting the battleground conditions in the case of a pending conflict in Taiwan,” Albert noted. “It also demonstrates that China, in line with the academic research in this manner, is increasing its use of artificial intelligence for rapid messaging through fake accounts and botnets, as well as in creating deep fakes that seek to create division in the US. China has spent a significant amount of money on the use of AI for information warfare, and Spamouflage is likely to be an initial glimpse of what the future holds for information warfare and influence operations.”
Although the campaign has not been especially effective, much like Russian influence operations since the 2016 election, it does suggest that the Chinese Communist Party is now contending in this contest of election interference.
“It’s only a matter of time before what I call the information adversarial axis—Iran, China, and Russia—are successful in influencing how U.S. citizens see the political system, and consequently, how the citizen may vote,” said Albert. “For these reasons, U.S. policymakers and intelligence analysts should pay close attention to Spamouflage as a probing operation and prepare for more sophisticated, Generative AI-based informational attacks, perhaps closer to the election.”
AI Efforts Will Continue
China, along with Russia and Iran—and likely other states—will continue to use the latest technology against Americans. The irony is that the same technology that was developed to bring us closer together is now dividing us politically.
The good news is that the same technology can also be used to counter such disinformation campaigns.
“Technology platforms have made significant strides in detecting and preventing disinformation campaigns. One area of improvement may be the enhancement of artificial intelligence to better identify subtle patterns in language, imagery, or behavior in relation to emerging forms of disinformation such as deepfakes,” said Dr. Masahiro Yamamoto, associate professor and department chair of the Department of Communication at the University at Albany. “Continuously training and updating algorithms on diverse datasets would be crucial. These platforms can also invest in media literacy education to help users develop critical thinking skills and discern misleading content.”
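For readers curious what “continuously training and updating algorithms on diverse datasets” can look like in practice, the Python sketch below shows a toy text classifier that is incrementally updated as newly labeled posts arrive. It is purely illustrative: the example posts and labels are invented placeholders, and real platform systems rely on far richer signals, such as account metadata, posting patterns, network structure, and imagery, than the simple text features shown here.

```python
# Illustrative sketch only: a toy, incrementally updated text classifier.
# All example posts and labels below are invented placeholders.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

# Stateless hashing vectorizer, so new batches need no vocabulary re-fitting.
vectorizer = HashingVectorizer(n_features=2**18, alternate_sign=False)

# Logistic-regression-style model that supports partial_fit, i.e. the
# continual "training and updating" described above.
clf = SGDClassifier(loss="log_loss", random_state=0)

def update(posts, labels, first_batch=False):
    """Fold a newly labeled batch (1 = suspected inauthentic, 0 = organic) into the model."""
    X = vectorizer.transform(posts)
    if first_batch:
        clf.partial_fit(X, labels, classes=[0, 1])
    else:
        clf.partial_fit(X, labels)

def score(posts):
    """Return the model's estimated probability that each post is inauthentic."""
    return clf.predict_proba(vectorizer.transform(posts))[:, 1]

# Hypothetical starter batch; in practice labels would come from human review.
update(
    ["As a proud veteran I say both parties have betrayed us, share if you agree!!!",
     "City council meets Tuesday to discuss the new bike lane proposal."],
    [1, 0],
    first_batch=True,
)

print(score(["Real Americans know the election is already rigged, spread the word!!!"]))
```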
Yet, the situation is likely to get worse, especially in the weeks leading up to the election as Americans remain so addicted to social media.
“Research has shown that well-coordinated disinformation efforts can reach large audiences by infiltrating online communities predisposed to certain beliefs and exploiting algorithms that prioritize content likely to provoke strong reactions and generate high engagement,” added Yamamoto. “Research suggests that disinformation campaigns target swing states where people may be more susceptible to false information that aligns with their views. These efforts can deepen existing divides, sow seeds of distrust, and undermine confidence in democratic institutions.”