
Narrative, Disinformation and Strategic Influence

What we do

The Center on Narrative, Disinformation and Strategic Influence (NDSI) fuses humanities and social science research with state-of-the-art computer science and modeling to better understand how people make sense of the world around them, and thereby support efforts to safeguard the United States, its allies, and democratic principles against malign influence campaigns.


Why study narrative, disinformation and strategic influence?

In the 21st century security environment, information has become a key instrument of national power. Adversaries use propaganda and disinformation to assault political will, manipulate public opinion and erode socio-political institutions, thereby weakening democracies. Expert knowledge and insight about the information environment and its impact on geopolitical events and human behavior are vital to maintaining security and making better policy decisions. By conducting cutting-edge research in the overlapping fields of strategic communication, influence, data analytics and gray zone operations, NDSI generates actionable insights, tools and methodologies that help policymakers and security practitioners maneuver in the information environment.

Frequently asked questions


What is disinformation?

Disinformation is false or misleading information that is spread with a willful intent to deceive. It is important to recognize that the definition covers misleading as well as outright false content: factual information can be taken out of context and redistributed with malicious intent.


What is the difference between disinformation and misinformation?

The main difference between disinformation and misinformation is intent. Disinformation involves a willful intent to deceive. Misinformation, on the other hand, is false or misleading information that is spread without the intent to deceive; for example, it may be spread out of ignorance by the person sending it. Disinformation can also turn into misinformation: when the person sharing a piece of information is unaware that it is inaccurate or misleading and was originally created or shared with malicious intent, they become an unwitting participant in a disinformation campaign.


What is “fake news”?

“Fake news” is a colloquial term for a type of disinformation in which a falsehood, a distortion, or partially incorrect information is presented specifically to look like news reporting. “Fake news” has also become a problematic term, because public figures often use the phrase to describe any news reporting that they don’t find flattering or supportive of their agenda.


Why is disinformation a problem?

Disinformation sows confusion and distrust, diminishing people’s faith and confidence in the institutions that are critical to a functioning, healthy democracy, such as government, news media and science. This undermines citizens’ ability to participate effectively in society through voting and other civic activities; voters require reliable information to make quality decisions at the ballot box, for example. Disinformation can also lead people to make decisions that harm their own health, safety and financial security, and that of their families and communities.


How does disinformation spread?

While disinformation is often spread online, the problem did not originate with the internet. Humans have always sensationalized, distorted or falsified information for a variety of reasons; the internet simply provides an easy way for disinformation to spread quickly and widely. The ease of copying and pasting digital media also means that content such as photos can be shared outside of its original context in misleading ways. Digital media further allows disinformation to be amplified, especially by automated or computational means. Bots, or computer-controlled social media accounts, can make a piece of information appear to be spreading rapidly, and news media may then report on the rapid spread of that fictitious information, further amplifying the deceptive message. Bots are not the only problem, however: some studies have shown that humans share inaccurate information on social media much more rapidly than true news stories. Social media algorithms are generally designed to prioritize content the audience will engage with (via shares, reactions, links, etc.), because engagement is how platforms earn advertising revenue. Emotionally charged posts tend to generate more engagement, so the algorithm prioritizes them, more people see and engage with them, and the cycle accelerates.
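To make that engagement feedback loop concrete, here is a minimal, hypothetical Python sketch (a toy model, not any platform's actual ranking code) in which posts that provoke stronger emotional reactions accumulate engagement faster and therefore climb an engagement-ranked feed:

```python
import random

# Hypothetical posts: each has an "emotional intensity" score (0..1) and an
# engagement counter. This is an illustrative toy model, not a real platform algorithm.
posts = [
    {"id": "calm-explainer", "emotion": 0.2, "engagement": 0},
    {"id": "outrage-meme", "emotion": 0.9, "engagement": 0},
    {"id": "neutral-update", "emotion": 0.4, "engagement": 0},
]

def run_feed(rounds: int = 1000, feed_size: int = 2) -> None:
    for _ in range(rounds):
        # The "algorithm": show the posts with the most engagement so far.
        feed = sorted(posts, key=lambda p: p["engagement"], reverse=True)[:feed_size]
        for post in feed:
            # A user engages with probability proportional to emotional intensity,
            # so emotionally charged content accumulates engagement faster...
            if random.random() < post["emotion"]:
                post["engagement"] += 1
        # ...which pushes it higher in the next round's feed: the feedback loop.

run_feed()
for post in sorted(posts, key=lambda p: p["engagement"], reverse=True):
    print(post["id"], post["engagement"])
```

In this toy model the high-emotion post quickly dominates the feed even though its accuracy is never considered, which is exactly the dynamic described above.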


What forms does disinformation take?

Along with false or misleading statements in any medium, disinformation can take the form of manipulated images, misleading headlines, wrongful attribution of quotes, and past news or photos presented as current events, among others. Disinformation can be perpetrated by provocateurs in online discussion forums, blogs and social media platforms, but it can also be developed and/or distributed by governments, corporations, unethical news outlets and other sources. A key tactic of disinformation is fabrication, in which brand-new false or misleading information is created by the disinformation actor. Another is manipulation, in which existing pieces of information such as images, videos or text documents are altered for malign purposes. There are also rhetorical tactics such as framing, in which words, images and figures of speech are used to shape the interpretation of facts or events; when these elements are misleading, the resulting framing is itself a form of disinformation. Disinformation actors also deceive audiences by placing events and characters into a narrative structure that leads to false or inaccurate conclusions. One emerging tactic is the “deepfake”: photo, video or audio content that has been altered so that the audience believes someone said or did something they did not, or was present somewhere they were not. Typically, this involves overlaying a target face or image onto someone or something in the original video. While the ability to make crude edits has existed for some time, advances in artificial intelligence, and in particular generative adversarial networks (GANs), allow a level of fidelity not previously seen. Such deepfakes can be nearly impossible to distinguish with the unassisted human eye (or ear).
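As a rough illustration of the GAN idea mentioned above, here is a minimal, hypothetical PyTorch sketch using toy one-dimensional data rather than images or audio. Real deepfake pipelines are vastly larger, but they rest on the same adversarial loop: a generator learns to produce samples that a discriminator cannot tell apart from real data.

```python
import torch
from torch import nn

# Toy "real" data: samples from a Gaussian the generator must learn to imitate.
# In an actual deepfake system the real data would be face images or voice audio.
def real_batch(n: int = 64) -> torch.Tensor:
    return torch.randn(n, 1) * 0.5 + 2.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # --- Train the discriminator to label real samples 1 and fakes 0 ---
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # --- Train the generator to fool the discriminator into outputting 1 ---
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated samples should roughly match the real distribution.
samples = generator(torch.randn(1000, 8))
print("generated mean/std:", samples.mean().item(), samples.std().item())
```

The adversarial pressure from the discriminator is what pushes the generator toward outputs that are hard to distinguish from real data, which is why high-fidelity deepfakes are so convincing.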

What can I do to protect myself against disinformation?

  • Read a variety of sources.
  • Be attentive to the difference between news articles, opinion pieces, and editorials; on social media, they are not always properly marked.
  • Be attentive to paid or sponsored links — they are not always deceptive, but they have an additional agenda beyond information.
  • Read the whole article, not just the headline, before sharing a piece of information; headlines are designed to grab your attention — not to inform you with quality information.
  • Be wary of memes, social media posts or articles that trigger an emotional reaction, especially anger or disgust — emotional manipulation is another tactic of disinformation actors.
  • See if you can confirm a piece of information through multiple sources, especially well-known, well-established news or fact-checking organizations.
  • Don’t feed the trolls! Engaging with trolls and bots only serves to amplify their efforts to rack up the engagements – likes, retweets, etc.

The Global Security Initiative is engaged in research spanning the social sciences, the humanities and computer science to better understand manipulative information practices such as disinformation and propaganda.

Capabilities

  • Fusion of humanities, social science and computer science theories, methods and tools to better analyze, share and dissect relevant data.
  • University hub for researchers interested in related topic areas, with 20 affiliated faculty in our Disinformation Working Group.
  • Expertise in narrative analysis applicable to strategic communication and information operations contexts.
  • Leading expertise in the field of counterspeech: one of the most promising methods to counter disinformation and prevent foreign propaganda from controlling the local narrative.
  • Proven, scalable method of detecting adversarial framing in the information environment and discerning an influence signal, developed through collaboration with the Center for Strategic Communication.
  • Experience working with international collaborators.
  • Developing state-of-the-art AI-manipulated text detection techniques.

Featured projects


Semantic information defender

Collaborating with a large, interdisciplinary team composed of academic and commercial research organizations, GSI contributes to “Semantic Information Defender,” a project under the Defense Advanced Research Projects Agency’s SEMAFOR (Semantic Forensics) program. The project will develop a system that detects, characterizes and attributes misinformation and disinformation – whether image, video, audio or text. ASU provides content and narrative analysis, media industry expertise, text detection and characterization methods, and a large dataset of known disinformation and manipulated media objects.


Detecting and tracking adversarial framing

A pilot project with Lockheed Martin ATL created an information operations detection technique based on the principle of adversarial framing – when parties hostile to U.S. interests frame events in the media to justify support for future actions. This research helps planners and decision-makers identify real-time trends that signal changes in information operations strategy and potentially presage imminent actions. A follow-on project funded by the Department of Defense expands techniques developed in the pilot project to additional countries; incorporates blog data into the framing analysis alongside known propaganda outlets; studies the transmediation of these frames to non-Russian, non-propaganda sources; and seeks to develop the ability to automatically detect adversarial framing as it occurs.


Analyzing disinformation and propaganda techniques

A recently completed GSI project sponsored by the U.S. State Department studied ideological techniques (narrative and framing) and operational procedures (mechanisms of amplification) of disinformation and propaganda in Latvia, Sweden and the United Kingdom, providing policymakers with a fuller understanding of the adversarial communication landscape. The team identified adversarial framing around contentious issues, trained a machine classifier to detect such framing at scale, revealed shifts in messaging strategies, and analyzed anti-democracy narratives. The team also developed a new feature-driven approach to identify “Pathogenic Social Media”: malicious accounts that exhibit inauthentic behavior and amplify disinformation frames and topics.
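As a rough illustration of what training a classifier to detect framing at scale can look like, here is a minimal, hypothetical scikit-learn sketch. The example sentences, labels and feature choices are placeholders for illustration only, not the project's actual data or model:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training examples: sentences hand-labeled as carrying an
# adversarial frame (1) or not (0). A real project would use thousands of
# annotated passages from propaganda outlets, blogs and mainstream media.
texts = [
    "The alliance is an aggressive occupying force threatening the region.",
    "Local elections proceeded on schedule with high turnout.",
    "Western institutions are collapsing and cannot protect their citizens.",
    "The city council approved funding for road repairs.",
]
labels = [1, 0, 1, 0]

# TF-IDF features plus a linear classifier: a simple, scalable baseline.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

# Score new passages; high probabilities flag candidates for analyst review.
new_texts = ["Foreign troops are said to be massing to provoke a crisis."]
print(model.predict_proba(new_texts)[:, 1])
```

In practice a classifier like this only flags likely instances of a known frame; analysts still review the results and track how the framing shifts over time.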
