Analyzing Breaking Events
By TorchStone VP, Scott Stewart
Over the decades I’ve spent doing analysis and leading analysts, I’ve learned that one of the most difficult tasks an analyst faces is rapidly untangling a breaking event and then contextualizing it in a form a decision maker can easily understand and act upon.
It is very difficult to cut through the confusion caused by the deluge of information that occurs during a breaking incident, especially when much of the information is redundant or inaccurate.
However, there are some tools and techniques that assist in this challenging task.
A Healthy Sense of Skepticism
When I was a young special agent, I had a boss who was not only an experienced agent but also a streetwise former beat cop.
One of the many invaluable lessons he taught me was to always be skeptical about the first report of an incident.
His mantra was: “The first story is never the real story.”
As I’ve progressed through my career as an investigator and analyst, I’ve found his advice has stuck with me and I have frequently benefitted from maintaining a healthy sense of skepticism regarding first reports.
In a recent example, following the May 3 drone attack against the Kremlin, some of the initial media and social media accounts reported that the incident was a Ukrainian military attempt to assassinate Russian President Vladimir Putin.
These reports further noted that the assassination attempt had been thwarted when Russian air defenses shot down two drones.
When we began to examine the incident more closely, however, it became evident that the incident was not an assassination attempt.
First, the two drones were detonated above the dome of the Kremlin’s Senate Palace at 0227 and 0243 local time—an hour when Putin was not even at the Kremlin.
Second, once better video became available, it became apparent that the drones involved in the incident were small, commercially available quadcopters, vehicles that do not have the range to fly the roughly 300 miles from the Ukrainian border to Moscow.
They were not the modified Soviet-era TU-141 reconnaissance drones Ukraine has used to conduct previous long-range attacks against Russian military targets.
Whoever was responsible for launching the drones was therefore likely in the Moscow area at the time, not in Ukraine.
Finally, the better-quality video also allowed an observer to note that the explosive payload carried by each drone was small, and likely composed of a low-explosive compound similar to the perchlorate mixture used in commercial fireworks.
The explosion created a bright flash and cloud of smoke, but it did not create the type of sharp blast wave associated with military-grade high explosives.
These videos also demonstrated that the drones had detonated and had not been shot down as the Russians had reported.
The explosive charges carried by the drones were simply too small and too weak to cause appreciable blast damage to the Kremlin.
Therefore, the incident was clearly not an attempt to assassinate the Russian President.
Furthermore, given the limited impact of the attack, and the commercially available items used to conduct it, it was probably not conducted by Ukrainian government agents.
Some have floated the idea that the attack was a false flag attack conducted by the Russian government to justify further action in Ukraine, but I believe that theory falls flat for two reasons.
First, the incident was a great embarrassment for Russian security forces.
Second, the Russians have already been hitting Ukraine as hard as they can.
They did not require any justification to escalate.
I believe the theory that the attack was carried out by dissidents looking to embarrass the Kremlin is far more plausible.
Dissidents across Russia and Belarus have carried out hundreds of attacks on lower-profile government and military buildings across both countries, including a drone-based attack on Russian aircraft at a base outside Minsk in March.
But the bottom line is that nearly every element of the first reports of this incident was incorrect, other than the fact that there was indeed an incident involving drones exploding over the Kremlin during the early morning hours of May 3.
Some of the confusion, misinformation, and disinformation that can arise during breaking events can be avoided by carefully vetting the news sources and social media accounts you follow. I’ve spent over ten years refining my Twitter follow list, and I find it often helps me get to the heart of an issue.
I also find my colleagues at Samdesk to be a helpful resource for assessing breaking events.
But even with good tools, you must still maintain a healthy sense of skepticism.
Trust Your Eyes—Within Reason
When I was assigned to the counterterrorism investigations branch at the State Department’s Diplomatic Security Service, after we received a cable reporting that an attack had occurred, we would either have to send an agent out to investigate or wait days or weeks for photos to arrive in the diplomatic pouch.
Sometimes we would get lucky and would be able to find a few photos in the newspaper or a brief video on a television news program, but it was often difficult to get a visual representation of the crime scene.
That has all changed today due to technology.
With smartphones, almost everyone has a still and video camera in their pocket.
When combined with high-speed internet and social media applications, photos and videos of an attack can be quickly disseminated across the globe.
This often allows an investigator or analyst to see a variety of photos and videos of an incident taken from different angles and perspectives, which can greatly speed up the assessment and contextualization of an event, as in the Kremlin drone incident discussed above.
However, the ability of almost anyone to “inform” the world about a breaking event can also be problematic.
First, it has dramatically increased the amount of information that must be sifted through while attempting to triage an event.
While some of this material can be unique, insightful, and incredibly helpful, much of it can also be redundant, misinformed, or inaccurate.
In such a free-for-all environment, it is increasingly easy for inaccurate information to be widely circulated as fact.
As I was writing this, tensions between Israel and Palestinian militant groups escalated, resulting in barrages of rockets being fired into Israel from Gaza on May 10.
An image circulated on social media purporting to show a rocket impact in Tel Aviv; however, the photo was actually from a 2021 rocket attack being recirculated as a current one.
One Twitter post claiming the 2021 photo was a 2023 photo was viewed over 68,000 times in just two hours, illustrating how rapidly misleading information can be spread.
Of course, this again illustrates the need to maintain a healthy sense of skepticism when viewing photos and videos.
Another problem is that most bystanders and professional journalists documenting an event simply do not think like analysts, and in many cases do not photograph or videotape the aspects of an attack site that are most important to an analyst attempting to determine what happened.
For example, an analyst attempting to assess a bombing is interested in seeing detailed images of the seat of the blast, the extent of the physical damage caused by the explosion, and the effect of the explosive on different types of material.
However, bystanders and journalists tend to focus much of their attention on the victims and often do not provide images of the things at an incident scene that analysts want to see the most.
Complicating the unintentional spread of inaccurate information—misinformation—is the fact that in many cases actors are intentionally spreading false information, a practice known as disinformation.
Those who employ disinformation are attempting to mislead or deceive to influence how an incident is perceived and interpreted.
The Russian news agency TASS framing the May 3 drone attack on the Kremlin as a Ukrainian assassination plot targeting President Putin is a classic example of disinformation, but not all disinformation comes from obvious government sources.
Intelligence services employ fake social media accounts, as well as vast networks of such accounts called social botnets, to spread disinformation.
At first glance, some may be led to believe a social media user with hundreds of thousands of followers is a reliable source of information, but when those followers are bots, such accounts can purvey all sorts of disinformation.
The same caveat applies to other accounts with legions of followers.
The number of followers is not an indication of the account’s veracity.
For example, look at the number of followers on Q-Anon conspiracy accounts.
As Abraham Lincoln famously noted, don’t trust everything you read—or see—on the Internet.
Frameworks and Tradecraft
Analytical frameworks are also useful tools that can help an analyst quickly place pieces of information into context during a breaking incident.
Examples of frameworks include the attack cycle, the pathway to violence, the cyber attack cycle, and the social media threat continuum.
In the protective intelligence analysis world, it is also helpful to understand how firearms, edged weapons, and explosives function, how protective details operate, and how hostile surveillance is conducted.
I also like to emphasize examining the elements of tradecraft involved in an attack, because looking at the mechanics of how it was conducted often permits an analyst to draw conclusions that are difficult to reach if the analyst is heavily focused on who conducted the attack.
The who of an attack is important and can oftentimes provide some clues as to how the attack was conducted, but if you are attempting to understand an incident to prevent the next attack, the how is critically important—and is unfortunately often overlooked.