A quick thread on intelligence analysis in the context of cyber threat intelligence. I see a number of CTI analysts slip into near analysis paralysis by overthinking their assessments or obsessing over whether they might be wrong. (1/x)

Consider this scenario. A CTI analyst identifies new intrusions and, based on the collection available and their expertise, notes that the victims are all banks. Their consumer wants to know when threats specifically target banks (not just when banks happen to be victims).
The CTI analyst has, from their collection at this time and based on their expertise, enough to make an activity group (leveraging the Diamond Model in this example) that meets their consumer's requirement. So what's the problem?
The CTI analyst begins to overthink it. "What if I had more collection? Would my analysis change? I don't really *know* they aren't also targeting mining companies in Australia, since I don't have collection there."
The analyst knows their analysis is going to be shared. Maybe even public. "What if another team or professional intelligence firm has more collection and ends up noting that it isn't banking-specific at all? Banks are victims, not targets. Will my consumer distrust me later?"
It's a scenario I see often. I see many of my #FOR578 students run into scenarios like this. "What if I say something about ICS, what will Dragos say. What if I say something about this APT, what will FireEye say. What if I say something about $X, what will CrowdStrike say."
All of our assessments are made using our expertise at a point in time with the available collection. One of the values of estimative language is explicitly accounting for the gaps. If you have significant collection gaps, bake that into the assessment, e.g., a Low Confidence assessment.
Too many analysts and consumers look for facts. Intelligence is analysis. It's allowed to evolve and change as collection, time, or other considerations change. We're advising consumers to help them make better choices. Not to know the unknowable.
I would advise that analyst to make the group. Sure, they can always try to coordinate with others, share, and see if other groups/teams see what they see or can expose collection gaps. But that's not always necessary or possible.
One of my favorite articles is the Fifteen Axioms for Intelligence Analysts by Frank Watanabe. It's been in my CTI class for years now as the last slide of the course: https://t.co/45kzw5kLz7
He doesn't start off with anything about being wrong or making mistakes. He starts with "Believe in your own professional judgements." #2 is "Be aggressive, and do not fear being wrong." It's not until #3 that we hear "It is better to be mistaken than to be wrong."
Build confidence with yourself. Then build trust with your consumer. That you are going to deliver the best judgement based on the insights you have at that time. And that if it changes, you'll let them know and admit it. That's really hard to do - but it's vital.
Don't sit on your intelligence and fail to disseminate it, or overthink it to the point of never finishing your work, if it can be valuable to your consumer. It's always a point in time and based on what you know at that time. You'll never have enough to feel completely comfortable.
The balance is in not blasting your consumer with thinly supported guesses though. Go through your processes. Use your team. "Aggressively pursue collection of information you need." But then make the call. If it was easy it wouldn't be intelligence.


These past few days I've been experimenting with something new that I want to use myself.

Interestingly, this thread below was written with it.

Let me show you what it looks like. 👇🏻


When you see localhost up there, you should know that it's truly an experiment! 😀


It's a dead-simple thread writer that will post a series of tweets, a.k.a. a tweetstorm. ⚡️
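For the curious, here's a minimal sketch of how a thread writer like this could chain tweets via the Twitter API v2. This is not the tool's actual code; the token handling and the example tweets are illustrative assumptions only.

```ts
// Hypothetical sketch: chain tweets into a thread via the Twitter API v2.
// Assumes a user-context OAuth access token in TWITTER_TOKEN (illustrative only).

async function postThread(tweets: string[]): Promise<void> {
  let previousId: string | undefined;

  for (const text of tweets) {
    const body: Record<string, unknown> = { text };
    if (previousId) {
      // Each tweet after the first replies to the previous one, forming a thread.
      body.reply = { in_reply_to_tweet_id: previousId };
    }

    const res = await fetch("https://api.twitter.com/2/tweets", {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.TWITTER_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(body),
    });

    if (!res.ok) throw new Error(`Tweet failed: ${res.status}`);
    const json = (await res.json()) as { data: { id: string } };
    previousId = json.data.id;
  }
}

// Illustrative usage: each string becomes one tweet in the thread.
postThread([
  "These past few days I've been experimenting with something new... (1/3)",
  "Interestingly, this thread was written with that tool. (2/3)",
  "Let me show you what it looks like. (3/3)",
]);
```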

I've personally wanted it for a few months now, but intentionally put it off to make sure it's something I genuinely need.

So why is that important for me? 🙂

I'm a believer in stories. I tell them all the time, whether in the real world or online like this. Our society is moved by them.

If you're interested in the stories that move us, read Sapiens!

One of the stories I've told was about the launch of Poster.

It's been launched multiple times this year, and Twitter has been my go-to place to tell the world about that.

Here comes my frustration... 😤
A brief analysis and comparison of the CSS for Twitter's PWA vs Twitter's legacy desktop website. The difference is dramatic, and I'll touch on some reasons why.

Legacy site *downloads* ~630 KB CSS per theme and writing direction.

6,769 rules
9,252 selectors
16.7k declarations
3,370 unique declarations
44 media queries
36 unique colors
50 unique background colors
46 unique font sizes
39 unique z-indices

https://t.co/qyl4Bt1i5x


PWA *incrementally generates* ~30 KB CSS that handles all themes and writing directions.

735 rules
740 selectors
757 declarations
730 unique declarations
0 media queries
11 unique colors
32 unique background colors
15 unique font sizes
7 unique z-indices

https://t.co/w7oNG5KUkJ
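If you want to gather numbers like these yourself, here's a rough sketch using PostCSS to walk a stylesheet and count rules, selectors, declarations, and media queries. It's an approximation of the idea, not necessarily how the screenshots above were produced, and the filename is a placeholder.

```ts
// Rough sketch: compute basic stylesheet stats with PostCSS.
// "twitter.css" is a placeholder for whatever stylesheet you've downloaded.
import fs from "node:fs";
import postcss from "postcss";

const root = postcss.parse(fs.readFileSync("twitter.css", "utf8"));

let rules = 0;
let selectors = 0;
let mediaQueries = 0;
const declarations: string[] = [];

root.walkRules((rule) => {
  rules += 1;
  selectors += rule.selectors.length; // comma-separated selectors counted individually
});

root.walkDecls((decl) => {
  declarations.push(`${decl.prop}:${decl.value}`); // normalize for uniqueness check
});

root.walkAtRules("media", () => {
  mediaQueries += 1;
});

console.log({
  rules,
  selectors,
  declarations: declarations.length,
  uniqueDeclarations: new Set(declarations).size,
  mediaQueries,
});
```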


The legacy site's CSS is what happens when hundreds of people directly write CSS over many years. Specificity wars, redundancy, a house of cards that can't be fixed. The result is extremely inefficient and error-prone styling that punishes users and developers.

The PWA's CSS is generated on-demand by a JS framework that manages styles and outputs "atomic CSS". The framework can enforce strict constraints and perform optimisations, which is why the CSS is so much smaller and safer. Style conflicts and unbounded CSS growth are avoided.
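To make the "atomic CSS" idea concrete, here's a toy sketch of the approach: a style manager maps each declaration to exactly one tiny class and reuses it everywhere. This is not Twitter's actual framework, just an illustration of the principle.

```ts
// Toy model of atomic CSS generation: one declaration -> one reusable class.
const cache = new Map<string, string>(); // "prop:value" -> generated class name
const sheet: string[] = [];              // accumulated atomic rules

function atomic(styles: Record<string, string>): string {
  const classes: string[] = [];
  for (const [prop, value] of Object.entries(styles)) {
    const key = `${prop}:${value}`;
    let cls = cache.get(key);
    if (!cls) {
      cls = `r-${cache.size.toString(36)}`;    // deterministic short class name
      cache.set(key, cls);
      sheet.push(`.${cls}{${prop}:${value}}`); // exactly one declaration per rule
    }
    classes.push(cls);
  }
  return classes.join(" ");
}

// Two "components" sharing declarations reuse the same classes:
const button = atomic({ color: "#fff", padding: "8px", "border-radius": "4px" });
const badge = atomic({ color: "#fff", padding: "8px" });

console.log(button);           // e.g. "r-0 r-1 r-2"
console.log(badge);            // "r-0 r-1" -- nothing new added to the sheet
console.log(sheet.join("\n")); // the generated stylesheet so far
```

The payoff is that output size is bounded by the number of *unique* declarations rather than the number of components or authors, which is why the generated stylesheet stops growing instead of ballooning the way hand-written CSS does.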
The YouTube algorithm that I helped build in 2011 still recommends the flat earth theory by the *hundreds of millions*. This investigation by @RawStory shows some of the real-life consequences of this badly designed AI.


This spring at SxSW, @SusanWojcicki promised "Wikipedia snippets" on debated videos. But they didn't put them on flat earth videos, and instead @YouTube is promoting merchandising such as "NASA lies - Never Trust a Snake". 2/


A few examples of flat earth videos that were promoted by YouTube #today:
https://t.co/TumQiX2tlj 3/

https://t.co/uAORIJ5BYX 4/

https://t.co/yOGZ0pLfHG 5/
There has been a lot of discussion about negative emissions technologies (NETs) lately. While we need to be skeptical of assumed planetary-scale engineering and wary of moral hazard, we also need much greater RD&D funding to keep our options open. A quick thread: 1/10

Energy system models love NETs, particularly for very rapid mitigation scenarios like 1.5C (where the alternative is zero global emissions by 2040)! More problematically, they also like tons of NETs in 2C scenarios where NETs are less essential.
https://t.co/M3ACyD4cv7 2/10


In model world the math is simple: very rapid mitigation is expensive today, particularly once you get outside the power sector, and technological advancement may make later NETs cheaper than near-term mitigation after a point. 3/10

This is, of course, problematic if the aim is to ensure that particular targets (such as well-below 2C) are met; betting that a "backstop" technology that does not exist today at any meaningful scale will save the day is a hell of a moral hazard. 4/10

Many models go completely overboard with CCS, projecting a future resurgence of coal and a large share of global primary energy paired with carbon capture. For example, here is what the MESSAGE SSP2-1.9 scenario shows: 5/10
