Themes, Personal Notes, & Resources From SANS CTI Summit 2016
This year’s SANS CTI Summit was my first security conference ever. And I loved it. It was a chance to meet great people, absorb new ideas, and engage in stimulating discussions–both in and out of the conference hall–about threat intelligence.
This post is divided into two parts. Part I presents my synopsis of the themes and motifs that were echoed throughout the CTI Summit. Part II contains all of my notes from the two-day event. Where some of my original notes were the digital equivalent of chicken scratch, I’ve done my best to revise them into somewhat legible and complete sentences. Part II also has links to slide decks, reports that speakers referenced, and other content.
Finally, I recommend checking out the #CTISummit hashtag on Twitter. Folks such as Kyle Maxwell and Rick Holland were live tweeting sound bites and insights during the event. Kyle shared his Evernote notes here. Other attendees also posted photos of speakers and their slides.
Feel free to use my notes however you wish, just as long as you source/provide attribution.
Can’t wait for next year’s summit! Enjoy!
Part I: CTI Summit Themes
Differentiating commodity indicators and “real” intelligence
Low-level, commodity indicators aren’t cool. After all, they are at the bottom of the Pyramid of Pain! Rohan Amin, CISO at JPMorgan Chase & Co., said in his talk that “indicators do not equal intelligence”–this has practically become a maxim in the threat intelligence community (and is one that I’ve cheered on). Many talks also emphasized the need for strategic intelligence which would presumably be devoid of indicators. But adversary-centric, strategic intelligence isn’t for everyone. In my opinion, Richard Struse of DHS made the most compelling argument: “we need to think about the 99% who have very basic needs and for whom commodity indicators (appropriately sourced and processed) are critically important… we shouldn’t ignore the tactical level because we’ve been there, done that.”
Communicating threat intelligence as business risks
Managers, CISOs, the broader C-Suite, and even The Board want to know how digital threats will affect the business. Unfortunately, security and intelligence analysts haven’t cracked the code on how to communicate up to these levels. Mike Cloppert’s opening talk on day one focused on the strategic level of intelligence. At this level, intelligence communicates to the C-Suite how the threat landscape might change given a business strategy. Rebekah Brown encouraged us to avoid technical jargon, communicate business impacts, and tell a story. I think Nick Albright and Jason Trost’s talk on supply chain intelligence also fits into this theme; strategic levels of intelligence should assess threats not only to your immediate corporate network, but also to tangential, connected assets and vendors. And Adam Meyers similarly proposed that we do a better job of bridging the gap between the decision maker and the “threat intelligence nerdery” (which is full of technical stuff that executives don’t understand).
Exploiting what you’ve already got and doing more with less
The sex-appeal of externally provided threat intelligence–with its promises of protecting us against sophisticated adversaries–is hard to resist. But there’s rich intelligence from our own internal environments that we aren’t taking advantage of. Rick Holland urged us to use data from past intrusions and to enrich threat intelligence with internal assets, ID’s, and vulnerabilities. Rebekah Brown introduced her “frugal girl’s guide to threat intelligence” which reminded us that threat intelligence doesn’t have to blow the budget; there are plenty of free solutions (free, as in puppies–you still need to take care of them, said Brown). Scott Roberts shared with us his purpose-built threat intelligence pipeline architected from open source tools. Holland suggested this very approach, calling it the “Millennium Falcon” approach. Lastly, Rich Barger of ThreatConnect offered some words of wisdom: security is a cost center for the business. To get more resources, we need to show that we are good stewards of and have maximized the resources that we’ve got. Only then can we expect to be rewarded with larger budgets.
Sharing is Caring (Or Not)
Alex Pinto presented a wonderful analysis which concluded that sharing threat intelligence in private communities may not provide as much value as we think. His analysis suggests that data shared in private communities shows the same pattern that he observed when analyzing public indicator feeds: there are minimal overlaps and high uniqueness. So what does this mean? It means that we are getting a similar quality of data using a “good” TIP or sharing community as we would using a free or paid threat feed. On the other hand, Richard Struse reminded us that indicator sharing, whether in public or private circles, is a valuable endeavor for many organizations. By themselves, these two talks have started to re-shape my own perspective on sharing. One last point about sharing: it doesn’t have to be limited to indicators! During a Q&A discussion, Rick Holland and Mike Cloppert put forward that we also share processes; doing so could do more to mature our programs.
Applying and Evaluating Analytic Techniques
We are in the business of analysis. We should therefore have the latest-and-greatest techniques at the ready, while simultaneously considering how our techniques affect the adversary. Paul Vixie discussed passive DNS (pDNS) and warned us that the bad guys are taking notice of this powerful technique. From the use of WHOIS privacy protection to reverse proxies, attackers are obfuscating their infrastructure in ways that reduce the effectiveness of pDNS methods. John Bambenek gave a fascinating talk about conducting malware campaign analysis on a large scale–clustering samples based on shared passwords or other unique configuration values allows the analyst to identify patterns. John Hultquist proposed that we broaden our forecasting methods to account for an adversary’s objectives as doing so may allow us to predict future targets that aren’t immediately obvious. In what is one of the best strategic analyses I’ve seen, Kristen Dennesen discussed how threat actors respond to public disclosure. Some actors shut down completely, others push forward. Dennesen also stressed that we should closely evaluate how threat actors respond to our research and consider the ethical implications of our research. Finally, Richard Bejtlich talked about the revolution in open source intelligence. The open source intelligence revolution is empowering analysts with satellite imagery and powerful software.
Starting With Good Foundations and Requirements
No matter how much “cool” intelligence you’ve purchased, a threat intelligence program is only as good as the foundations it rests upon. On day two, Mark Arena educated us on requirements. Requirements are the key questions that security operators and executives need answered. The requirements drive collection and analytic focus. Arena’s talk triggered a great discussion in the audience about how to build requirements. Has anyone read their company’s 10-K report? Does anyone have documented requirements? If so, how often are they reviewed? It became clear from Arena’s talk and from the group discourse that the community has a gap when it comes to developing requirements. ThreatConnect’s Rich Barger and Rob Simmons also talked about developing processes that let the analysts do analysis. Often, analysts find themselves doing rote, time consuming tasks. Barger and Simmons presented their solution for automating malware hunting in VirusTotal. If the foundations of your program rest on time intensive, manual tasks, take a pointer from these guys!
Part II: Personal Notes & Resources
The following are my notes for each of the CTI Summit talks; some of the titles of the talks are hyperlinked to the slides the speakers made available. Where some of my original notes were the digital equivalent of chicken-scratch, I’ve edited and revised them for (hopefully) easier reading. I did not attend the Lunch and Learn events, nor did I take notes on the Last CTI Vendor Standing panel (which was actually a to-the-death vendor octagon match). If you were a speaker and I appear to have misinterpreted something you said, just let me know!
The Levels of Threat Intelligence, Mike Cloppert, CIRT Chief Research Analyst, Lockheed Martin & Summit Co-Chair
#CTISummit is now underway, @mikecloppert kicking us off pic.twitter.com/gc37RYbBoV
— Rick Holland (@rickhholland) February 3, 2016
Take for instance Von Clausewitz’s theory on war: it has survived centuries and continues to shape our thoughts. Clausewitz introduced the tactical and strategic concepts of war. (And he alluded to the application of strategy as operations.) That old Prussian guy’s concepts are still relevant to us!
These three levels—tactics, operations, and strategy—are reflected in JP 2-0 and are roughly equivalent to data, information, and intelligence.
The three levels provide a model for threat intelligence.
Admittedly, models are imperfect. “Every model is wrong, some are useful.”
So, at the tactical level you have indicators, detection signatures, mitigations, IR, and products that compare the Kill Chains of adversaries.
The operational level involves your investment priorities, capability development, your collection and access to data, and products that analyze adversary campaigns.
The strategic level should be geared towards the C-Suite. This level requires nation-state threat assessment products and analyses that help to answer questions such as, “how will our threat landscape change given new business strategy?” and “what business environments will create opportunities for adversaries?”
An End User’s Perspective on the Threat Intelligence Industry (Hint: We Have Work to Do), Rohan Amin, Ph.D., Global Chief Information Security Officer, JPMorgan Chase & Co.
With four major lines of business (consumer/retail banking, investment banking and asset management, credit card services) the scope and scale of JPMC’s business operation is massive. They process trillions of dollars a year and run on a $500M security budget (!!!).
JPMC internally developed an operational risk framework that incorporates threat intelligence. The framework follows four steps:
Identify the threats. This is where intelligence comes in.
Enumerate the controls that are in place and map controls to the identified threats.
Evaluate the effectiveness of controls in detecting, deterring, preventing the threats.
Analyze how corporate risk can be reduced.
JPMC also adapted the Kill Chain to cybercriminal acts against retail banking assets and customers. This, JPMC calls the Fraud Kill Chain: Recon > Deliver > Exploitation > Positioning > Abuse > Monetization.
When it comes to CTI vendors, Amin wants vendors to “go where we cannot.” From here, he outlined what he believes to be shortcomings of threat intelligence:
Materially, Amin doesn’t see “a whole lot” coming from external threat intelligence vendors.
He says that indicators are not intelligence.
Summaries of daily open source and news reports aren’t intelligence: “Tell me something I don’t know,” says Amin.
"Who here receives a daily email summary of the news?" Everyone sadly raises their hand. #CTISummit pic.twitter.com/YIsVURtjB4
— Jez (@0daypizza) February 3, 2016
He wants vendors to understand specific industries. This is more important than generic threat and vulnerability data.
Requirements are important. He thinks many vendors don’t onboard requirements. Requirements are necessary to drive a formalized collection process.
APT is not everything. Companies, especially banks, face a broader range of threats than just APT. “We are suffering from APT-itis.”
Actionable intelligence entails more than just indicators. Vendors should provide actionable intelligence at the strategic levels (and let customers know what level their intelligence applies to).
Threat Intelligence Awakens, Rick Holland, Summit Co-Chair, SANS Institute
“The vendor landscape has been pretty crazy… it’s been quite chaotic”
Holland lays out four wedges of the intelligence market: providers, platforms, enrichment, integration.
Providers
These are the groups that provide external intelligence (aka CTI vendors).
Generally, it’s easy for providers to deliver low-level indicators. But moving up the Pyramid of Pain isn’t just painful for adversaries: it’s painful for vendors too! It is hard to provide TTPs.
One provider doesn’t fit all and providers may struggle with relevancy to your business.
Don’t think about providers as just commercial providers – look internally as well, or to the community.
Platforms
The TIP is like your quarterback. It helps The Force to flow through your organization.
There is gross misuse of the term “platform.” Many “platforms” aren’t—they aren’t open, and can’t be extended or built on top of.
When exploring platforms, you should ask the vendor, “how can you integrate into my stack?”
The Threat Intelligence Platform should address five functional areas: Ingestion, Enrichment, Analysis/Exploration, Collaboration, and Integration/Orchestration.
Enrichment
Types of enrichment include GeoIP, pDNS, ReverseIP, malware associations, WHOIS.
However, Holland says that we are lacking on internal enrichment sources. “We are not taking internal sources of enrichment: ID, asset, vulnerability, data value.” E.g., tying into your CMDB.
Integration
It is painful to integrate intelligence – available APIs are often weak. Ideally, the GUI will just be the icing on the cake because all data and functionality would be fully programmable via robust APIs.
We DoS our own gear as we dump a million indicators to one box. Holland calls the ream of indicators “indicators of exhaustion.”
As you embark on your threat intelligence journey (and before you invest in a commercial provider), you must maximize intelligence from your own environment such as previous incidents and intrusions. What is in your own environment that you aren’t taking advantage of?
You don’t need expensive technology to operate a successful threat intelligence program. Try taking the “Millennium Falcon” approach: run open source tools!
We need to “spawn camp” in our own environments. Where is the adversary going to be? Segment your network to better predict where the adversary will show up.
We need to find and nurture our threat intel analysts. $10-20k may sound unreasonable, but also consider the time and cost it takes to train a new analyst!
There was a rich Q&A and discussion following Rick’s talk!
One attendee brought up an excellent point about open source: open source solutions don’t scale. More importantly, we don’t yet have many sophisticated open source packages for threat intelligence to really make the FOSS leap.
Finding talent is a big challenge. You have to have a pipeline to groom talent (universities, training programs, internal recruitment).
Are TIPs going down the path of SIEMs? People are disenchanted with their SIEM. Holland says, “it’s incumbent on TIP vendors to learn the lessons of SIEM.”
Plumbing’s Done! Now What Do We Do With All This Water? Richard Struse, Chief Advanced Technology Officer, U.S. Department of Homeland Security’s National Cybersecurity and Communications Integration Center (NCCIC)
Most people aren’t anywhere near the CTI world – we need to think about the 99% who have very basic needs and for whom commodity indicators (appropriately sourced and processed) are critically important.
We shouldn’t assume that all of the foundational stuff is done, works, is consistently applied.
The goal of STIX was to provide an ecosystem where CTI is automatically shared. And STIX is on its way to becoming a true international standard. It’s now in the hands of OASIS. The CTI group in OASIS is nearly 200 strong!
Struse admits that STIX and CybOX complexity is an issue – STIX 2.0 is now the mandatory representation and will move from XML to JSON format. This should make sharing and consumption easier.
With STIX 2.0, there’s also an opportunity to refactor with an eye towards simplification. How are we going to make this easy for people to implement who have day jobs?
On indicator sharing, Struse says, that we shouldn’t ignore the tactical level because “we’ve been there, done that.”
Struse contrasts the “artisanal threat indicators” with the commodity indicators. Artisanal threat indicators are those that you have developed from intrusions and/or internal threat intelligence processes. It is reasonable to expect that organizations share their artisanal indicators. However, we should recognize that for many organizations, there is value in being able to share and collect broad pools of commodity indicators. These can be produced and consumed at scale (aka “indicator sightings” at scale).
“How you form your trust relationships is critically important.”
Struse proposes a grander vision: defining a layer of abstraction (code) that can allow us to structure machine-interpreted courses of action at scale. This of course would require a high degree of confidence/trust in the information source.
He suggests going beyond TLP for sharing and using the layer of abstraction to define who can do what with what data; how the data can be used (e.g., passive monitoring only). Machines would then enforce these policies.
Could TIPs have a policy engine that prevents sharing of certain data depending on confidence, source, etc.?
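Struse’s idea of a machine-enforced handling policy could be sketched very simply. The following is a hypothetical illustration only; the field names, TLP-to-action mapping, and confidence threshold are all invented here for the sake of the example, not part of any actual TIP or STIX specification.

```python
# Hypothetical sketch of a TIP policy engine: each indicator carries
# handling metadata, and the platform checks that metadata before
# permitting an action. All field names and mappings are invented.

ALLOWED_ACTIONS = {
    "TLP:RED":   {"passive_monitoring"},
    "TLP:AMBER": {"passive_monitoring", "internal_blocking"},
    "TLP:GREEN": {"passive_monitoring", "internal_blocking", "community_sharing"},
}

def can_perform(indicator: dict, action: str, min_confidence: int = 50) -> bool:
    """Return True if the handling policy permits `action` on this indicator."""
    if indicator.get("confidence", 0) < min_confidence:
        return False  # too uncertain to act on at all
    return action in ALLOWED_ACTIONS.get(indicator.get("tlp", "TLP:RED"), set())

ioc = {"value": "198.51.100.7", "tlp": "TLP:AMBER", "confidence": 80}
can_perform(ioc, "internal_blocking")   # permitted under TLP:AMBER
can_perform(ioc, "community_sharing")   # blocked under TLP:AMBER
```

The interesting design question Struse raises is that the machine, not the analyst, becomes the enforcement point, which only works if the source and confidence metadata can be trusted.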
Multivariate Solutions to Emerging Passive DNS Challenges, Dr. Paul Vixie, CEO/Co-Founder, Farsight Security
pDNS is a well-known technique to the security and intelligence community. However, the bad guys are starting to take notice of our pDNS capabilities.
pDNS captures domain names, historic IP resolutions, and name servers. With pDNS, we are trying to establish guilt by association. This technique works most of the time – exact matches will typically work (e.g., same IP, name server).
pDNS doesn’t work with “lone wolf” attacks, or when adversaries use shared hosting. Examples include:
When every IP used to send spam is totally unrelated to any other IPs the criminal uses.
When every domain is registered using diverse registrars, name servers, diverse / fictitious POC details, unique / anonymous payment details.
When criminals use WHOIS privacy services. We never really accounted for privacy protection; we thought that people would be responsible on the internet because they would fear retribution.
When criminals hide behind reverse proxy IP services (e.g., from CloudFlare).
So how do analysts get around this? Look for characteristics that may not be obfuscated.
If a domain is involved in spam or other “bad stuff,” sometimes the providers / registrars can strip the privacy protections.
When examining Reverse Proxies, look at non-A records such as MX, TXT.
If criminals are using reverse proxies, examine the historical IP resolutions. The criminals may not have previously used any obfuscation methods.
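The “look at historical resolutions” tactic can be illustrated with a toy example. This is not Farsight’s actual tooling; the record format, the proxy IP range, and the data below are all invented for illustration.

```python
# Toy illustration of mining historical pDNS data: given pDNS-style
# records (domain, ip, first_seen), pull out resolutions that fall
# outside a reverse-proxy provider's address space. These may reveal
# hosting the actor used before moving behind the proxy.

PROXY_RANGES = ("104.16.",)  # stand-in prefix for a reverse-proxy provider

def pre_proxy_ips(records, domain):
    """Historical IPs for `domain` that fall outside known proxy ranges."""
    return sorted(
        r["ip"] for r in records
        if r["domain"] == domain and not r["ip"].startswith(PROXY_RANGES)
    )

pdns = [
    {"domain": "evil.example", "ip": "104.16.1.1",   "first_seen": "2015-09-01"},
    {"domain": "evil.example", "ip": "203.0.113.50", "first_seen": "2014-02-10"},
]
pre_proxy_ips(pdns, "evil.example")  # → ["203.0.113.50"]
```

The 203.0.113.50 resolution predates the move behind the proxy, so it is a candidate for the actor’s real hosting infrastructure.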
Data Mining Malware for Fun and Profit: Building a Historical Encyclopedia of Adversary Information, John Bambenek, Sr. Threat Analyst, Fidelis Cybersecurity; Incident Handler, Internet Storm Center
There’s no good way to get ahead of the malware problem, just look at the VirusTotal statistics.
Bambenek’s data mining efforts are geared towards take-down operations with law enforcement. He is aware of his intelligence collection biases – he doesn’t need to worry about direct operationalization of the data he collects.
You’ve got to go through the critical analysis step to get intel out of data but this is hard to do at a point in time. Point in time data is of limited use – there’s no trending that you can do; there’s no higher level understanding. So, Bambenek’s analysis focuses on tearing apart malware families to identify patterns.
He discusses two case studies: 1) data mining malware using decoders. (He is currently processing about 30 families.) And 2) surveillance of DGA malware.
What can you do with bulk malware configurations?
Sink holing for victim notification.
Mining the data for correlations.
Mining historical databases for indicators that didn’t seem important at the time but became important later. “We don’t really know what we’re looking for until it already happened.”
Kevin Breen’s RAT decoders are valuable for ripping out intelligence from malware.
Free-form text fields are great data points for correlating malware data sets. Criminals fill out these fields when configuring their tools – talk about bad OPSEC! Different malware families have different variables and configuration values / fields. Many times, actors will use the same configurations. This makes it easier to track campaigns and actors.
Bambenek provided the example of his DarkComet Campaign ID analysis. His analysis was based on the passwords the actors were using – they used *strong* passwords allowing for clustering according to the passwords.
The next step was trying to resolve the configured hostnames. Most C2s didn’t resolve. It turns out that sophisticated actors may not resolve their domain names when they aren’t operating – they don’t need to. Thus, we shouldn’t dismiss RFC 1918 (private, internal address space) resolutions which might, at first glance, not seem to matter. Actors are also known to use 8.8.8.8. They may use this strictly for testing purposes when they upload their malware to VT to check detection rates. It’s a simple deception tactic.
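The clustering idea behind the DarkComet analysis can be sketched in a few lines. This is not Bambenek’s actual pipeline; the config records, hashes, and field names below are invented for illustration.

```python
from collections import defaultdict

# Minimal sketch of config-based clustering: group extracted malware
# configurations by a shared value (here, the C2 password) and keep only
# groups with more than one sample, i.e. likely shared operator/campaign.

def cluster_by(configs, field):
    clusters = defaultdict(list)
    for cfg in configs:
        clusters[cfg[field]].append(cfg["sha256"])
    return {k: v for k, v in clusters.items() if len(v) > 1}

configs = [
    {"sha256": "aaa1", "password": "Xk9#mQ2!", "c2": "host-a.example"},
    {"sha256": "bbb2", "password": "Xk9#mQ2!", "c2": "host-b.example"},
    {"sha256": "ccc3", "password": "test123",  "c2": "host-c.example"},
]
cluster_by(configs, "password")  # → {"Xk9#mQ2!": ["aaa1", "bbb2"]}
```

The same grouping works for any unique-enough configuration value: campaign IDs, mutex names, or (as in the talk) strong, reused passwords.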
Many malware families use domain generation algorithms (DGAs). Bambenek outlined three different types of DGAs:
Date-based seed
Static seed – these are easy to reverse engineer and from there to predict new domain creations.
Dynamic seed – but these must be globally consistent which means you can RE the algorithm and then monitor the domain creations.
Interestingly, attackers may not even be able to manage their DGAs because they’ve purchased the algo from another actor.
The Bedep malware uses foreign exchange rates to seed its DGA. Wow!
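To make the defender’s advantage concrete, here is an illustrative date-seeded DGA. The algorithm itself is made up for this example (no real family uses it), but it shows the point: once a static or date-based algorithm is reverse engineered, you can run it forward and pre-register or sinkhole tomorrow’s domains before the malware uses them.

```python
import hashlib
from datetime import date

# Invented date-seeded DGA for illustration: hash the date plus a counter
# and take a prefix of the digest as the domain label.

def dga(day: date, count: int = 5, tld: str = ".net"):
    seed = day.strftime("%Y-%m-%d")
    domains = []
    for i in range(count):
        digest = hashlib.md5(f"{seed}:{i}".encode()).hexdigest()
        domains.append(digest[:12] + tld)
    return domains

# A defender who has recovered the algorithm can enumerate any day's
# domains in advance, e.g. for sinkholing or blocklisting:
tomorrows_domains = dga(date(2016, 2, 4))
```

This is exactly why a static seed is “easy to reverse engineer”: the output is fully determined by the date, so defender and malware compute identical lists.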
Also, some WHOIS privacy providers will still provide unique strings / IDs for each registrant. So, just like a unique email address, these IDs can be used to correlate domains and malware.
Community Intelligence & Open Source Tools: Building an Actionable Pipeline, Scott J. Roberts, Director of Bad Guy Catching, GitHub
Problem: we need to feed new telemetry with intelligence. But the real problem is that we have too much intelligence stuff (i.e. sources): articles, chat, notebooks, Twitter, email lists, feeds, ongoing incidents, manual collection, RSS.
Background: Roberts works at GitHub. Because his job is to protect others’ property (code), he inherits their threat model.
Roberts decided to build his own solution to manage the intelligence. First question he posed: what do we care about, breadth or depth?
Next step: how do we exploit this data? You need to be able to extract technical indicators. This requirement resulted in two tools: Jager and Cacador.
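Jager and Cacador are Roberts’ actual tools; the snippet below is only a toy regex sketch of the underlying task of pulling technical indicators out of free text. Real extractors are far more careful (they handle defanged forms like hxxp:// and [.], validate octet ranges, and so on).

```python
import re

# Toy indicator extractor: scan free text for IOC-shaped strings.
# The patterns are deliberately naive; e.g. the domain pattern will also
# match dotted IPs, and the IPv4 pattern does not validate octet ranges.

PATTERNS = {
    "ipv4":   re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "md5":    re.compile(r"\b[a-fA-F0-9]{32}\b"),
    "domain": re.compile(r"\b[a-z0-9-]+(?:\.[a-z0-9-]+)+\b"),
}

def extract_iocs(text: str) -> dict:
    """Return sorted, de-duplicated matches for each indicator type."""
    return {kind: sorted(set(p.findall(text))) for kind, p in PATTERNS.items()}

report = "The dropper (d41d8cd98f00b204e9800998ecf8427e) beaconed to 198.51.100.7."
extract_iocs(report)["ipv4"]  # → ["198.51.100.7"]
```

Fitting the UNIX philosophy Roberts describes, a small tool like this reads text on one side and emits structured indicators on the other, ready to pipe into the next stage.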
The point isn’t the platform, the point is how it all works together. Threat intel tools work when they are integrated. To do this, Roberts suggests embracing the UNIX philosophy:
Small is beautiful.
Make small things that work together.
Favor portability over efficiency.
Store data in flat files.
This approach will bring greater success than attempts to build monolithic “do everything!” tools. Don’t let the pursuit of perfection be the enemy of good enough.
Last point: Roberts stresses the importance of basic coding skills and becoming “developer analysts.” It’s about identifying what needs to be done and then figuring out how to do it.
Six Years of Threat Intel: Have We Learned Nothing? David J. Bianco, Security Technologist, Sqrrl Data, Inc
The era of modern threat intelligence reporting began with the HB Gary report Operation Aurora.
Question: are we getting better at communicating useful threat information over time?
To answer this question, Bianco relied on reports and data contained in the APTNotes GitHub repository. He analyzed all of the reports from 2010 and then a handful of reports from 2015.
From 2010 to 2015, there was a sharp increase in APT reports published.
According to APT Notes, there has been an exponential increase in reports from 2010-2014. #CTISummit #ThreatIntel pic.twitter.com/t4ptVJPCX1
— Rick Holland (@rickhholland) February 3, 2016
The average number of pages has gone down as well, from 22 in 2010 to 15 in 2015. He attributes this to the theory that vendors used to consider these to be “special event reports.” Today, APT reports are no longer novel.
The next step was to measure indicator usefulness using the Pyramid of Pain (PoP). Where the PoP has gaps, he incorporated Ryan Stillions’ Detection Maturity Model to create a “combined indicator hierarchy.”
Comparing 2010 to 2015, we see some more TTPs in the 2015 than in the 2010 reports, along with more tools (e.g., YARA rules). But the 2015 reports also contained a lot more hashes and IP addresses.
Could the increase in the lower-level indicators also be attributed to an increase in customers who are more sophisticated and say “show me the data?”
Lessons for intelligence producers: keep intel reports brief; include appendices and sort by indicator type; include machine-readable files/formats such as CSV, JSON, and STIX to speed-up consumption.
Breaking down the intelligence by the Kill Chain phases is also helpful for consumers.
Data-Driven Threat Intelligence: Metrics on Indicator Dissemination and Sharing, Alex Pinto, Chief Data Scientist, Niddel
A data scientist’s approach to analyzing community-based / shared threat intelligence.
Pinto’s talk from last year was all about measuring the “value” of public threat intelligence feeds using the TIQ Test. The TIQ Test looks at Overlap and Uniqueness across feeds. So, how similar are the public feeds?
Most existing feeds have minimal overlap and are very unique. Why? Likely because the population of threats is beyond what providers are able to collect and disseminate.
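Pinto’s actual TIQ Test is a separate open source project; the snippet below is only a set-based toy version of the uniqueness measurement, with invented feed contents, to show what “high uniqueness” means mechanically.

```python
# Toy version of the uniqueness metric: for each feed, what fraction of
# its indicators appears in no other feed? Values near 1.0 mean the feeds
# barely overlap, which is the pattern Pinto reports.

def uniqueness(feeds: dict) -> dict:
    """Fraction of each feed's indicators seen in no other feed."""
    scores = {}
    for name, iocs in feeds.items():
        others = set().union(*(v for k, v in feeds.items() if k != name))
        scores[name] = len(iocs - others) / len(iocs) if iocs else 0.0
    return scores

feeds = {
    "feed_a": {"1.2.3.4", "5.6.7.8", "9.9.9.9"},
    "feed_b": {"5.6.7.8", "8.8.4.4"},
}
uniqueness(feeds)  # feed_a: 2/3 unique, feed_b: 1/2 unique
```

Run across many real feeds, scores clustering near 1.0 are what drive Pinto’s conclusion that no collection of feeds gives a complete picture of the threat population.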
This means that even with all of the public feeds, you aren’t protected and have a grossly incomplete picture of what the threat landscape actually looks like. These feeds really don’t do you any good unless you have analysts looking at them. The analyst should be thinking, “I only have IP addresses, but how can I extrapolate more about the attackers and their infrastructure?”
So, will the data shared within threat intelligence communities be any better than the public data feeds that we are consuming?
First, let’s consider the “herd immunity” analogy for threat intelligence. It is a bad analogy. We are able to detect more “virus mutations” (i.e. threats) but we are really bad at inoculation. The things we detect most (low Pyramid of Pain indicators) mutate too quickly. And, those who don’t get immunized still get sick.
Next, the concept of trust. The community mandates that you share. But, you must also be in the “circle of trust” to share. Pinto thinks this doesn’t make sense.
There are two sides to the “trust coin:” 1) do you trust the group enough to share, and 2) do you trust the group enough to consume the data?
So, looking at the closed, private communities, what does the data look like? It looks very similar to the public feeds! We see the same pattern of high Uniqueness and little Overlap.
The conclusion? We are getting similar quality of data using a “good” TIP or sharing community as we would using a free or paid threat feed.
Looking more closely at private feeds:
Activity metric: how much new intelligence gets added? IOC sharing was much higher in the closed communities than in the open ones. This is probably where the “trust factor” comes into play.
Diversity metric: who is sharing? Roughly 10% of organizations, no matter the community, share intelligence. Others are mute. Shouldn’t more people be sharing?
What about measuring feedback? Is anyone using this intelligence? This is important because if you want to encourage collaboration, you need to be able to provide feedback to know if it is working.
Trust metric: private sharing boosts the “trust equation” but damages the “herd immunity” argument when it comes to sharing. Maybe we could use simple reputation systems such as “likes” or “up votes.” AlienVault OTX is good at this! It tracks followers and contributions from each participant.
Pinto proposes that even though we can’t all share IOCs, we can share our “sightings” and observations about threat activity that we are seeing.
Ultimately, we can’t turn an analyst-driven activity into an automated activity.
Takeaways from @alexcpsec #ctisummit pic.twitter.com/L7IQ92D7o4
— Danny Pickens (@dannypickens) February 3, 2016
You Got 99 Problems and a Budget’s One, Rebekah Brown, Threat Intelligence Lead, Rapid7
Why are budgets such a problem? We need to figure out how to do more with less, which in the CTI world is hard: CTI is expensive!
But threat intelligence does not have to be expensive. Follow the “Frugal Girl’s Guide to Threat Intelligence”: have a solid foundation; use open source feeds, platforms, and reports; use community tools; get help from the community and smart colleagues.
There are many free options. “Free as in puppies”—you have to take care of them!
How do we do this? Identify problems > identify solutions > implement solutions > gather feedback > adjust as needed.
Some solutions will work for you, and some won’t. List all the possible solutions that you could use. Once you introduce a solution, gather constant feedback. Remember that you aren’t going to get it right the first time.
What functions do you want to affect with threat intelligence?
Risk and priority assessment: what threats are facing us and what can we do to prevent them? What are the threats facing your organization? What actions do they take? Are we positioned to detect an attack?
Situational awareness: understand ongoing or impending attacks and be prepared to respond.
Tactical intelligence for situational awareness: is this new tactic/tool/method going to impact us? How quickly do we need to respond?
Threat identification and alerting.
Hunting.
Other security functions.
Brown pointed out the Congressional Research Service (CRS) as a good source for finding high-level reports on security and threats. Specifically, she suggested starting with CRS’s report Cybersecurity: Authoritative Reports and Resources, by Topic.
It is important to know what internal resources are available to you, and to understand your own organization.
You also need to be able to communicate to the C-Suite and board by not using technical jargon; by discussing business impact of threats; by telling the story; by keeping it simple; and by being prepared to answer business-relevant questions.
Brown says that in the security world, “we tend not to communicate things appropriately.”
Also, we need to make sure that our reporting actually gets in the hands of decision makers. Make sure to remain objective and provide updates on threats.
One low-cost method of tracking threat activity is to track vulnerabilities that are weaponized as Metasploit modules. Brown references HD Moore’s Law.
The Revolution in Private Sector Intelligence, Richard Bejtlich, Chief Security Strategist, FireEye & Nonresident Senior Fellow, The Brookings Institution
Bejtlich says that we are in the middle of a revolution in private sector intelligence. What is driving this? Imagery from satellites and drones; expertise; collaboration in forums and on social media; private job opportunities; and powerful software.
Starting the day w/ examples of revolution in private sector & insights into its future w/ @taosecurity #CTISummit pic.twitter.com/YBu12To7kT
— SANS DFIR (@sansforensics) February 4, 2016
Hobbyists are playing a big role in this revolution, as is Google Earth for imagery. Imagery is powerful in conveying a story.
CSIS reporting tracks land reclamation activities in the South China Sea (SCS). Pictures say a lot; imagery is powerful.
Imagery is also being used for economic analysis. Take 38 North's use of imagery to track North Korean economic activity. Imagery has shown factories being built in the DPRK, but with power coming from China.
Institute for Study of War provides excellent maps, many of which are used by major news outlets.
Another report: GWU Report on ISIS in America.
Regarding collaboration, the level of collaboration in the open source side on military hardware is remarkable. The detail that users provide suggests that they have direct experience with and knowledge of aircraft, weapons, ships, etc.
Another report demonstrating the revolution in private intelligence is the Atlantic Council’s Hiding in Plain Sight which details Russian activity in Ukraine.
In the threat intelligence world, the revolution began with, and was pushed forward by several seminal reports: GhostNet, ShadyRAT, APT1, and CameraShy.
Bejtlich noted that we are having to turn into academics: we are increasingly citing previous works and bringing in diverse experts.
Hide and Seek: How Threat Actors Respond in the Face of Public Exposure, Kristen Dennesen, Senior Threat Analyst, FireEye
Dennesen poses the question: what are we doing when we release a report or a blog post on threat actors?
Cyber threat intelligence is a very new field. If you look to the field of journalism, they have a code of ethics, they study what it means to report on vulnerable populations, etc. Our field doesn’t have a code of ethics yet.
We regularly see “threat shifting,” or changes in tools and tactics in response to strengthened defenses. For example, we’ve seen banking malware shift from clumsy keyloggers to sophisticated, modularized trojans in response to elevated defenses from banks.
We see threat shifting in four areas: in timing, targets, resources, and planning and methods.
Three activities trigger threat shifting: detection, remediation, and public exposure.
Actors are keenly aware of our research and public reporting on their operations—they are probably the most loyal visitors to our blogs.
As researchers, we should also acknowledge that actors can manipulate information and deceive us. For example, APT28 conducted an attack on TV5Monde and left markers to make it look like IS sympathizers conducted the attack. While the research community eventually figured it out, the false flag was enough to manipulate early news reports.
Public reports can be deeply disruptive to a group’s operations…or not.
Looking at APT28, there were 20 reports on this actor between Oct 2014 – 2015. How did their tactics change over time as these reports came out? Why do they keep pushing forward? In contrast, FIN4 completely shut down their operations following a detailed report from FireEye.
But FIN4 was an opportunistic actor. Their incentives shifted quickly from “get money” to “don’t get caught.”
APT28, on the other hand, is almost certainly requirements-incentivized and has “top cover” from government echelons.
We also know that public reports are a common trigger for re-tooling. As soon as Arbor Networks published its reporting on the APT12 RIPTIDE malware, the group immediately re-tooled and shifted to a new variant called HIGHTIDE.
Operation SMN was quite successful in shutting out HIKIT.
Dennesen hypothesizes that APT28 receives cash injections that fuel zero-day development and usage every 90 days or so.
Analysis of the Cyber Attack on the Ukrainian Power Grid, Robert M. Lee, Author & Instructor, SANS Institute
When we look at causing damage and bringing down power, a few things stand out: most of what has been reported in the news is hype. There is a real threat to infrastructure, but it takes a long time to learn the processes and engineering needed to actually cause damage. The capability barrier is high.
BlackEnergy3 was indeed found on the network. News sources reported this finding as “BlackEnergy malware caused the power outage.”
BlackEnergy played a role, but did not *cause* the power outage.
The adversary used BE3 to get into the environment. They used KillDisk to take out the SCADA servers; they also DDoS’ed phone lines to inhibit swift response.
These actions resulted in a loss of visibility into the environment. This was a very deliberate effect.
The attack was extremely sophisticated in terms of logistics, operations, and the adversary’s understanding of the environment. But the effect only lasted 3-6 hours. The adversary conducted internal reconnaissance for months before launching the attack.
One interesting note: prior to the attack, Ukraine had threatened to nationalize the power companies that were affected. (Note: I haven’t found a source for this reference – I may have misinterpreted what Lee said.)
Lee believes that the Sandworm team is a civilian company, possibly contracted by the Russian government.
Lessons for threat intelligence:
There is a difference between generating and consuming intelligence. Most organizations want to be intelligence consumers.
Looking into the environment and using what was available would have been enough for the defenders to make a difference.
Threat Intelligence will help to make a good security program better—it’s the “5 percent secret sauce.” But threat intelligence will not replace your bad security practices.
How do you move from intel consumption to intel production? Intelligence production is an active defense approach. But to make that move, your intel team needs to address the pain points in your organization. What are your security pain points?
Final point: ICS networks are actually much more defensible than IT environments.
Anticipating Novel Cyber Espionage Threats, John Hultquist, Director of Cyber Espionage Analysis, iSIGHT Partners
Adversaries don’t organize their operations by sector. Only looking at attacks on your sector is a flawed way of examining threats.
Operation Ababil was foreseeable.
There is a lot of evidence that the Sandworm team is interested in targeting SCADA systems. Ukraine has become a canary for attacks against SCADA systems.
Ukraine CERT has been publishing a lot of information on these attacks.
There are at least two RU espionage teams who appear to have a strong interest in SCADA systems. They may be poisoning supply chains and affecting firms that are not in the immediate SCADA sector.
It is important for organizations to consider their geographic presence / location.
It’s not just about the sector. It’s about the adversary’s mission and what they are after. Consider the OPM case.
If we follow the hypothesis that the attack on OPM was part of a broader requirement to gather data for counterintelligence purposes, what else might the adversary be interested in gathering? Perhaps travel information (target: airlines), health care information (target: health care providers), and financial information (target: banks).
So, it’s less about what sectors adversaries are targeting than about the adversary’s objectives, from which we can hypothesize and forecast future targets.
Cyber Threat Intel: Maturity and Metrics, Mark Arena, CEO, Intel471
A basic threat intelligence maturity model: crawl, walk, run, fly.
Crawl: most indicator-based; IP-based blocking; only able to consume tactical products.
Walk: indicators are grouped and have context; some requirements are documented; relevancy still isn’t clarified.
Run: production of unique and relevant products for different internal customers; prioritized requirements; (and probably making significant dollar investment).
Fly: no one flies.
The main intelligence customers are going to be network defenders and executives.
Arena says that your intelligence program’s maturity is based on your organization’s ability to do each part of the intelligence cycle.
Arena highlights two approaches to intelligence: incident-centric and actor-centric.
The incident-centric intelligence process: incident / IOC > TTP/campaign > actor (attribution / motivations).
Pro: immediate relevancy; you produce more IOC.
Con: it’s reactive; business impact may have already occurred; you aren’t proactively collecting against the bad guys.
The actor-centric intelligence process: actor > TTP/campaign > IOC.
Pro: prioritized tracking of threat actors.
Con: relevancy isn’t immediately apparent; it may be tough to flesh out IOC.
Arena distinguishes between intelligence and collection requirements. Intelligence requirements are the key questions that need to be answered. The collection requirements are the documented things that the intel team needs to gather in order to answer the intelligence requirements.
Note: Arena has an excellent blog post that further articulates the intelligence and collection requirements.
Arena also suggested checking out Treverton’s “Real Intelligence Cycle”
Borderless Threat Intelligence: Proactive Monitoring of Your Supply Chain and Customers for Signs of Compromise, Nicholas Albright, VP of Security and Intelligence, ThreatStream, Inc. & Jason Trost, VP of Threat Research, ThreatStream, Inc.
The need for threat intelligence extends beyond just the borders of your organization. We need to look at the organization’s supply chain.
We can define the supply chain as: 1) on premise and 2) zero premise.
On premise: internal feeds; code/library reviews/threat feeds
Zero premise: public credential exposure; threat feeds; Shodan/Censys; Suspicious domain registrations; Social media monitoring.
Supply chain questions to ask: Is the network clean? What is the web footprint? Is the supply chain’s brand being used to phish you?
For suspicious domain name monitoring (i.e., typo-squatted domains), you can use URLcrazy. (Note: I prefer dnstwist.)
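To make the typo-squatting idea concrete, here is a minimal sketch of candidate generation in the spirit of URLcrazy and dnstwist. It is not their actual algorithm: real tools cover many more permutation classes (homoglyphs, bitsquatting, TLD swaps, keyboard-adjacency typos); this shows just three simple ones.

```python
# Generate simple typo-squat permutations of a domain for monitoring.
# Illustrative only; real tools (URLcrazy, dnstwist) are far more thorough.

def typo_candidates(domain: str) -> set[str]:
    """Return omission, transposition, and duplication variants of a domain."""
    name, _, tld = domain.partition(".")
    candidates = set()
    for i in range(len(name)):
        # 1. Character omission, e.g. xample.com
        candidates.add(name[:i] + name[i + 1:] + "." + tld)
        # 2. Character duplication, e.g. eexample.com
        candidates.add(name[:i + 1] + name[i] + name[i + 1:] + "." + tld)
    for i in range(len(name) - 1):
        # 3. Adjacent transposition, e.g. xeample.com
        swapped = name[:i] + name[i + 1] + name[i] + name[i + 2:]
        candidates.add(swapped + "." + tld)
    candidates.discard(domain)  # drop permutations identical to the original
    return candidates

if __name__ == "__main__":
    for c in sorted(typo_candidates("example.com")):
        print(c)
```

In practice you would feed the candidate list to WHOIS/DNS lookups and alert on new registrations.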
Network Cleanliness monitoring
Systems from your IP space or your supply chain showing up as bot IPs, scanning IPs, brute-force IPs, or spam IPs.
Case study: Large hi-tech firm evaluating IT staffing company for outsourcing some development and IT services.
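The network cleanliness check described above is easy to automate. This is a hedged sketch, not anything the speakers showed: the CIDR blocks and feed records below are made-up placeholders, where in practice the networks would come from your (or your supplier's) announced address space and the records from reputation feeds.

```python
# Check whether IPs reported in abuse feeds (bots, scanners, brute-forcers,
# spam) fall inside your own or a supplier's address space.
import ipaddress

# Assumed input: monitored CIDR blocks (documentation ranges used here).
ORG_NETWORKS = [ipaddress.ip_network(c) for c in ("198.51.100.0/24", "203.0.113.0/24")]

# Assumed input: (ip, category) records pulled from reputation feeds.
ABUSE_FEED = [
    ("198.51.100.7", "bot"),
    ("192.0.2.99", "scanner"),       # not in monitored space
    ("203.0.113.42", "brute-force"),
]

def in_org_space(ip: str) -> bool:
    """True if the IP falls inside any monitored network."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in ORG_NETWORKS)

hits = [(ip, cat) for ip, cat in ABUSE_FEED if in_org_space(ip)]
for ip, cat in hits:
    print(f"ALERT: {ip} from monitored space reported as {cat}")
```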
Social Media and Dark Web
Credential Exposure Monitoring: bulk usernames and passwords exposed;
Sources: paste sites; Google dorks, dark web.
So what do you need to do? Inventory email domains, email addresses for key executives, IP address space, brand names, external and internal domain names.
We Can Rebuild Him; We Have the Technology, Rich Barger, CIO, ThreatConnect & Rob Simmons, Sr. Threat Researcher, ThreatConnect
Threat intel sharing is still very indicator based.
How are people prioritizing the process versus the product? Are you putting more emphasis on the creator, or the thing that is created?
Indicators will only give you some utility for a little while. But if you share the recipe that resulted in those indicators, then you can probably get more value.
Threat intelligence is ultimately a business process. Business processes should always be measurable and demonstrate value (or at least save time and money).
We are in the early “paleolithic age” of threat intelligence. We have tools, but they aren’t integrated well. It will take us some time to figure out how these pieces fit together.
Take for instance a malware hunting process in VirusTotal using YARA rules. This can be a manual and time intensive process where the balance of effort is more on the analyst side, rather than the system (i.e. automated) side.
If you are running YARA hunting rules then you are probably getting buried in VT notifications. When combing through those notifications, the analyst has to think “what rule was that?” “what was the quality of the rule?” “which samples do I want to send to the sandbox for further analysis?”
Next, the analyst needs to download the file. But you typically can’t download everything…and you don’t want “extra” VT queries at the end of the month. You want to make sure you’re fully utilizing your VT resource / investment.
Think about the CFO perspective: “if I already have a department that is a cost center, then they better be maximizing their resources!”
Then the analyst needs to submit the file to a sandbox (finally some automation!); copy data from sandbox report; analyze the sandbox report; and finally write an intelligence report.
The good news is that most of this process can be automated! ThreatConnect leverages Git to manage a repository of YARA rules. This allows for change management of YARA rules. The flood of VT notifications can be managed using the VT API. The YARA rules must still be prioritized as some are “wider” and “noisier.” Confidence rating of YARA rules allows for automatic prioritization. This is an important aspect of collection management.
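The confidence-based prioritization step can be sketched as below. This is a hypothetical illustration of the idea, not ThreatConnect's implementation: the notification structure and the per-rule confidence registry are my assumptions, not the VirusTotal API schema.

```python
# Triage YARA hunting notifications by rule confidence so that matches from
# tight, high-confidence rules are sandboxed first, while "wide" and noisy
# rules are deprioritized. All data structures here are assumed/illustrative.

# Analyst-assigned confidence per rule (higher = tighter, less noisy).
RULE_CONFIDENCE = {
    "apt_downloader_v2": 90,
    "generic_packer": 30,       # wide and noisy: deprioritize
    "fin_group_dropper": 75,
}

# Simplified stand-ins for hunting notifications (hashes are placeholders).
notifications = [
    {"sha256": "aaa...", "rule": "generic_packer"},
    {"sha256": "bbb...", "rule": "apt_downloader_v2"},
    {"sha256": "ccc...", "rule": "fin_group_dropper"},
]

# Rank matches by rule confidence and cap the queue to conserve
# download/sandbox quota (the "extra VT queries" problem above).
MAX_SANDBOX_SUBMISSIONS = 2
queue = sorted(notifications,
               key=lambda n: RULE_CONFIDENCE.get(n["rule"], 0),
               reverse=True)[:MAX_SANDBOX_SUBMISSIONS]

for n in queue:
    print(f"submit {n['sha256']} (rule {n['rule']}, "
          f"confidence {RULE_CONFIDENCE[n['rule']]})")
```

Keeping the confidence ratings alongside the rules in version control (as with the Git-managed rule repository) makes this triage auditable collection management.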
Good stewardship of resources demonstrates that you can do a lot with a little, and can do more down the line. To get more resources, you need to be able to articulate what you’ve done, and what your challenges are: “here are our bottle necks, we need to be able to automate X.”
Automate the crappy tasks and leave the analysis to the analysts. Analysts should analyze; they shouldn’t have to pick up heavy things and put them down.
The measurement of growth isn’t what you know; it’s how much you share.
From the Cyber Nerdery to the Board Room, Adam Meyers, VP of Intelligence, CrowdStrike
Problem: the “cyber industrial complex” has shaped the executive view of cyber threat intelligence as a “pew pew” map. This view is easy to conceptualize. But the reality is that threat intelligence looks more like boring IDA Pro and reverse engineering screenshots—it’s highly technical and executives don’t get it.
It’s our own fault that executives think cyber intelligence is the pew pew map. We are not good at communicating threat intelligence.
Today, we use IOCs to communicate intelligence. We also don’t describe CTI very well. We’ve got TTP, the Kill Chain, the Diamond Model, the Pyramid of Pain, and STIX, but these models and formats don’t help executives to understand the business impact of threats.
Executives aren’t nerds (Meyers uses the term lovingly), so our technical stuff doesn’t help them make decisions.
We need to bridge the gap between the decision maker and the “threat intelligence nerdery.”
We should relate intelligence to business risks. We need to understand the business objectives, key metrics, and risks.
Interesting point: executives are used to consuming intelligence—market reports, competitive intelligence, risk reports, etc. We just need to put threat intelligence into a package that they can consume.