
Narrative, Disinformation and Strategic Influence

What we do

The Center on Narrative, Disinformation and Strategic Influence (NDSI) conducts interdisciplinary research, merging the humanities, social sciences and cutting-edge computer science to develop tools and insights for decision-makers, policymakers, civil society groups and communities. Our work aims to understand how narratives shape reality and how manipulation of the information environment threatens democratic norms and institutions. We support efforts to safeguard the United States, its allies and democratic principles against malign influence campaigns.


Why study narrative, disinformation and strategic influence?

In the 21st century, information is a key instrument of national power. Adversaries use propaganda and disinformation to undermine political will, manipulate public opinion and weaken sociopolitical institutions, thereby threatening democracies worldwide. Expert knowledge of the information environment and its impact on geopolitical events and human behavior is vital for security and informed policy decisions. By conducting cutting-edge research in strategic communication, influence, data analytics and gray zone operations, NDSI generates actionable insights, tools and methodologies for security practitioners, citizens and communities to navigate the information environment effectively.

Frequently asked questions


What is disinformation?

Disinformation is false or misleading information that is spread with a willful intent to deceive. Note that the definition covers misleading as well as outright false material: factual information can be taken out of context and redistributed with malicious intent.


What is strategic influence?

Strategic influence is the use of targeted communication and actions to shape perceptions, attitudes and behaviors in order to achieve specific objectives and advance strategic goals.


What is a narrative?

A narrative is a system of related stories, each of which connects a desire created by conflict or deficiency to an actual or projected resolution via an arc of locations, events, actions, participants and things. It can also refer to the way an individual perceives and understands the world around them.

The Global Security Initiative is engaged in research spanning social sciences, the humanities and computer science to better understand manipulative information practices such as disinformation and propaganda.


Why is disinformation a threat to democracy?

Disinformation sows confusion and distrust, diminishing people’s faith in the institutions that are critical to a functioning, healthy democracy, such as government, news media and science. This undermines citizens’ ability to participate effectively in society through voting and other civic activities; voters require reliable information to make quality decisions at the ballot box, for example. Disinformation can also lead people to make decisions that harm their own health, safety and financial security, as well as that of their families and communities.


How can I work with or learn more about NDSI?

Please send an inquiry using the contact form below. We look forward to speaking with you.


What does disinformation look like?

Along with false or misleading statements in any medium, disinformation can take the form of manipulated images, misleading headlines, wrongly attributed quotes, and old news or photos presented as current events, among others. Disinformation can be perpetrated by provocateurs in online discussion forums, blogs and social media platforms, but it can also be developed and/or distributed by governments, corporations, unethical news outlets and other sources.

A key tactic of disinformation is fabrication, in which the disinformation actor creates brand new false or misleading information. Another is manipulation, in which existing pieces of information such as images, videos or text documents are altered for malign purposes. There are also rhetorical tactics such as framing, where words, images and figures of speech are used to shape the interpretation of facts or events; when these elements are misleading, the resulting framing is itself an example of disinformation. Disinformation actors also deceive audiences by placing events and characters into a narrative structure that leads to false or inaccurate conclusions.

One emerging tactic is the “deepfake”: photo, video or audio content that has been altered to make the audience believe someone said or did something they did not, or was present somewhere they were not. Typically, a target image is overlaid onto someone or something in the original video. While the ability to make crude edits has existed for some time, advances in artificial intelligence, in particular generative adversarial networks (GANs), allow for a level of fidelity not previously seen. Such deepfakes can be nearly impossible to distinguish with the unassisted human eye (or ear).

How can I protect myself from disinformation?

  • Read a variety of sources.
  • Be attentive to the difference between news articles, opinion pieces, and editorials; on social media, they are not always properly marked.
  • Be attentive to paid or sponsored links — they are not always deceptive, but they have an additional agenda beyond information.
  • Read the whole article, not just the headline, before sharing a piece of information; headlines are designed to grab your attention — not to inform you with quality information.
  • Be wary of memes, social media posts or articles that trigger an emotional reaction, especially anger or disgust — emotional manipulation is another tactic of disinformation actors.
  • See if you can confirm a piece of information through multiple sources, especially well-known, well-established news or fact-checking organizations.
  • Don’t feed the trolls! Engaging with trolls and bots only serves to amplify their efforts to rack up the engagements – likes, retweets, etc.


What is the difference between disinformation and misinformation?

The main difference between disinformation and misinformation is intent. Disinformation involves a willful intent to deceive. Misinformation, on the other hand, is false or misleading information that is spread without deceptive intent; it may be passed along, for example, out of simple ignorance. Disinformation can also turn into misinformation when the person sharing it is unaware that the information is inaccurate or misleading and was originally created or shared with malicious intent; they become an unwitting participant in a disinformation campaign.


What is “fake news”?

“Fake news” is a colloquial term for a type of disinformation in which a falsehood, a distortion or partially incorrect information is presented specifically to look like news reporting. “Fake news” has also become a problematic term, because public figures often use the phrase to describe any news reporting they do not find flattering or supportive of their agenda.


Is disinformation a new problem?

While disinformation is often spread online, the problem did not originate with the advent of the internet. Humans have always sensationalized, distorted or falsified information for a variety of reasons. The internet simply provides an easy way for disinformation to spread quickly and widely, and the ease of cutting and pasting digital media means content such as photos can be shared outside its original context in misleading ways.

Digital media also allows for amplification of disinformation, especially by automated or computational means. Bots, or computer-controlled social media accounts, can make a piece of information appear to be spreading rapidly. News media may then report on the rapid spread of that fictitious information, further amplifying the deceptive message. Bots are not the only problem: some studies have shown that humans share inaccurate information on social media much more rapidly than true news stories. Social media algorithms are generally designed to prioritize content the audience will engage with (via shares, reactions, links, etc.), because engagement is how platforms earn advertising revenue. Emotionally charged posts tend to draw more engagement, so the algorithm shows them to more people, who engage in turn, and the cycle accelerates.
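To make that feedback loop concrete, here is a toy Python sketch of an engagement-only feed. It is purely illustrative: the posts, scores and rates are made up, and it does not model any real platform's ranking system.

```python
# Illustrative toy model of engagement-driven ranking; not any real platform's algorithm.
from dataclasses import dataclass


@dataclass
class Post:
    text: str
    emotional_charge: float  # hypothetical score: 0.0 (neutral) to 1.0 (outrage-inducing)
    engagements: int = 0


def rank_feed(posts):
    """Order posts by engagement so far -- the only signal this toy feed uses."""
    return sorted(posts, key=lambda p: p.engagements, reverse=True)


def simulate_round(posts, audience_size=1000):
    """Users engage more with emotionally charged posts, especially near the top of the feed."""
    for position, post in enumerate(rank_feed(posts)):
        visibility = 1.0 / (position + 1)                 # higher-ranked posts are seen more
        engage_rate = 0.02 + 0.2 * post.emotional_charge  # charged posts draw more engagement
        post.engagements += int(audience_size * visibility * engage_rate)


posts = [
    Post("Measured, factual report", emotional_charge=0.1),
    Post("Outrage-bait distortion of the same event", emotional_charge=0.9),
]

for _ in range(5):
    simulate_round(posts)

for post in rank_feed(posts):
    print(post.engagements, post.text)
```

After a few rounds the emotionally charged post dominates the feed even though both posts started with zero engagement, which is the amplification cycle described above.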

Capabilities

  • Expertise in interdisciplinary research, integrating humanities, social science and computer science to understand the sociotechnical problems associated with information warfare.
  • A hub for interdisciplinary collaboration, with over 20 faculty affiliates in our Disinformation Working Group from across the university.
  • Thought leaders in counterspeech, a key citizen-driven method to counter disinformation and prevent foreign propaganda from controlling the local narrative.
  • Extensive experience in collaborating with partners worldwide.
  • Expertise in developing methods for the detection and attribution of AI-manipulated text.
  • Proven, scalable methods for detecting adversarial framing and identifying influence signals, developed in collaboration with the Center for Strategic Communication.
  • Expertise in narrative analysis for strategic communication and information operations.
  • Years of experience in military operations and defense-oriented research.

Disinformation Working Group

The Disinformation Working Group is an interdisciplinary collaborative effort spanning several university departments aimed at researching, understanding, and countering disinformation worldwide. The group meets monthly, providing a forum for like-minded researchers to exchange ideas and develop papers and grants. If you are an ASU faculty member or graduate student interested in joining, please use the contact form below to start the conversation.


Semantic Information Defender

Collaborating with a large, interdisciplinary team of academic and commercial research organizations, GSI contributes to “Semantic Information Defender,” a project under the Defense Advanced Research Projects Agency’s SemaFor (Semantic Forensics) program. The project will develop a system that detects, characterizes and attributes misinformation and disinformation – whether image, video, audio or text. ASU provides content and narrative analysis, media industry expertise, text detection and characterization methods, and a large dataset of known disinformation and manipulated media objects.


Detecting and tracking adversarial framing

A pilot project with Lockheed Martin ATL created an information operations detection technique based on the principle of adversarial framing – when parties hostile to U.S. interests frame events in the media to justify support for future actions. This research helps planners and decision-makers identify real-time trends that signal changes in information operations strategy and, potentially, imminent actions. A follow-on project funded by the Department of Defense expands techniques developed in the pilot project to additional countries; incorporates blog data into the framing analysis alongside known propaganda outlets; studies the transmediation of these frames to non-Russian, non-propaganda sources; and seeks to develop the ability to automatically detect adversarial framing as it occurs.


Analyzing disinformation and propaganda techniques

A recently completed GSI project sponsored by the U.S. State Department studied ideological techniques (narrative and framing) and operational procedures (mechanisms of amplification) of disinformation and propaganda in Latvia, Sweden and the United Kingdom, providing policymakers with a fuller understanding of the adversarial communication landscape. The team identified adversarial framing around contentious issues, trained a machine classifier to detect such framing at scale, revealed shifts in messaging strategies, and analyzed anti-democracy narratives. The team also developed a new feature-driven approach to identify “Pathogenic Social Media” — malicious actors exhibiting inauthentic behavior amplifying disinformation frames and topics.
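As a rough illustration of how a framing classifier of this kind can be trained, the sketch below uses scikit-learn with a handful of hypothetical labeled sentences. It is an assumption for demonstration only and does not reflect the project's actual data, features or model.

```python
# Minimal sketch of a framing classifier: TF-IDF features + logistic regression.
# The labeled examples below are hypothetical stand-ins, not project data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "The alliance's exercises are a rehearsal for invasion.",           # adversarial frame
    "Western sanctions are collective punishment of ordinary people.",  # adversarial frame
    "The ministry announced new infrastructure spending this week.",    # neutral
    "Officials met to discuss cross-border trade agreements.",          # neutral
]
train_labels = [1, 1, 0, 0]  # 1 = adversarial framing, 0 = not

# TF-IDF turns each sentence into word/bigram weights; the classifier learns
# which terms are associated with the labeled frame.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
model.fit(train_texts, train_labels)

new_text = ["Analysts say the drills are preparation for an attack on the region."]
print(model.predict(new_text))         # predicted label
print(model.predict_proba(new_text))   # confidence scores
```

In practice a classifier like this would be trained on thousands of annotated examples and validated against held-out sources before being used to detect framing at scale.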

Our team

News

I want to learn more about NDSI
