Sharing (intel) is caring... or not?

Published: 2016-07-31
Last Updated: 2016-07-31 09:31:47 UTC
by Pasquale Stirparo (Version: 1)
3 comment(s)

I think almost every one of us working in the IR/Threat Intel area has faced this question at least once: shall we share intel information?

Although I have my own opinion on this, I will try to state some of the most common arguments I have heard over the years, both for and against sharing publicly, as objectively as possible so as not to influence the reader.

Why not share publicly?

  • Many organizations do not share because they do not want to give away the fact that they (may) have been attacked or breached. In this regard, there are closed, trusted groups of organizations within the same sector (e.g. ISAC communities), and the willingness to share increases in such closed environments.
  • Trust is an extremely important factor within the intelligence community, and establishing trust is impossible when sharing publicly. Moreover, when people do not know with whom they are sharing, they are inclined to share less or not to share at all.
  • Part of the community suggests that we should “stop providing our adversaries with free audits”[1], since on many occasions a clear change in TTPs has been observed after analysis results were published in blogs or reports.

Why share publicly?

  • Relegating everything to sub-communities risks missing the big picture, since it tends to create silos in the long term, and organizations relying entirely on them may miss the opportunity to correlate information shared by organizations belonging to other sectors.
  • Many small organizations may not be able to afford access to premium intelligence services, nor to join any of these closed sub-communities, for various reasons.
  • Part of the community believes that we should share publicly because the bad guys just don’t care, as also shown by the fact that they often reuse the same infrastructure and modus operandi.
  • By sharing only within closed groups, those most affected would be DFIR practitioners, who use such public information as their source of intel to determine whether or not they have been compromised.


What is your view on this?

Pasquale

[1] – “When Threat Intel met DFIR”, http://archive.hack.lu/2015/When%20threat%20intel%20met%20DFIR.pdf

Comments

Pasquale,

Sharing is definitely caring.

I believe there is a dimension of sharing that does not have to get into all the gory details; rather, it can take the form of "trust me, you should look for this in your environment".

Russell
You can add one more "sharing" fan to your pile.

To me it comes down to 3.5 core problems:
1/ I want to share but would like to keep my name/organization as anonymous/confidential as possible

2/ I want to consume but need to trust the information submitted by someone else
2a/ I would like to be able to analyze the modus operandi by seeing if and how others are impacted (reconstruct their TTPs, see how they change, and compare similarities with the past)

3/ I would like to limit the bad guys' access to what I shared

How I see it:
Subgroups: partially satisfy (1), (2) and (3). Since these groups are not anonymous, the information gathered is limited even between the trusted parties (e.g. an insurance company might not be willing to disclose all details of an attack that led to a breach of several million health-related personal records until it has been fully analyzed and the vulnerability fixed, which might take days or weeks). Also, if several big companies are attacked by the same folks, they might not be part of the same subgroup, so even if they share, it might be difficult to see the big picture in this case, as already pointed out.

A community-owned portal for everyone: satisfies (1) and (2a), and potentially also (2) if a registrant verification process is in place that still allows the entity submitting the info to remain confidential. It doesn't satisfy (3), but I would argue that getting the big picture, and even seeing the changes in modus operandi, would be extremely beneficial, even for future pattern recognition. Yes, the bad guys will follow it as well and might adapt, but in the end this is what they already do; currently it just happens with a delay of a couple of days/weeks/months, once they realize that their existing TTPs no longer work.

Closed communities and paid intelligence services are only a means of earning money. We need to understand how our adversaries are working. Sharing intelligence does not mean you have to share your organisation's name; mostly you are sharing indicators of compromise. If you put your organisation's name on any open threat intelligence platform, you are violating major policies and breach notification requirements. Organisations will need to analyse and disseminate based on their own infrastructure and consumption capabilities.

Example: if the "bad guys" are on an open threat intelligence platform, try to evaluate what the risks actually are. They can get indicators of compromise, or actual code or malware, but that is available for free anyway; when a new piece of malware is designed, it is freely available on forums and sometimes on GitHub. I love being in DFIR, sharing my research via blogs and updating open threat intelligence sources. There are a lot of blogs where DFIR practitioners and other independent security researchers post updates and share.

Our adversaries work as a team, and we should too.
