Manage a Data Breach

The 5 most important things to consider during a data breach


Learning that you have experienced a data breach is an uncomfortable moment in any person’s life, especially if you are a cyber security professional charged with keeping information safe and secure, and more so if a third party is the one to tell you that you have seemingly lost information. Unfortunately, any day involving a data breach will be a bad day. How bad a day will depend on a number of factors, including your level of readiness. The five important things to consider during a data breach presented here aim to help make a bad day just a little bit easier.

Please keep in mind that managing a data breach is complex. There is no substitute for experience and knowledge, as no two data breaches are identical. The caveat I need to provide with this advice is that these five things are not exhaustive, and there are nuances you will need to consider. Please treat the following as general good-practice advice.

Before we dive into the Top 5

There is a propensity to look for blame and jump to conclusions. Keep in mind that if you take your security obligations seriously, respect your role as a custodian of sensitive information, invest appropriately in security and manage your risk appropriately, you are not alone. You are not incompetent, or even special: breaches happen, and they are a part of the world we live in. You are not the first person or organisation to experience a breach and, unfortunately, you won’t be the last. Having said that, if you are guilty of consistently ignoring your security obligations, underinvesting in people, process and technology, overlooking your obligations to protect sensitive information, and trusting to providence that everything will be fine, those feelings of regret, remorse and discomfort are entirely appropriate.

At this stage you haven’t even confirmed whether it really is a security incident; you’ve just received (or uncovered) some information that indicates there might be a breach. So, before we panic, we need to work through the steps and work the facts.

Incident response is a process typically consisting of six main steps:

1.  Preparation

2.  Identification

3.  Containment

4.  Eradication

5.  Recovery

6.  Lessons Learned

Now, if you haven’t actually done step 1, Preparation, then a little bit of panic is probably appropriate at this stage, but all is not lost.

 

#1 – Confirm the breach, work the data.

Before we get carried away, let’s establish whether there has, in fact, been a breach. This is part of the identification stage in the incident response process. You need to look at what has been reported and how. Was it a third-party notification, i.e. someone outside the organisation, perhaps a customer or business partner, told you? Was it an internal staff member who reported something weird, or clicked a link? Was it your bank letting you know that there have been fraudulent transactions on credit cards and the common factor is your organisation? Was there data on Pastebin or similar services that looked like it may have come from your databases? Was there an alert from an IDS/IPS, a SIEM event or another system that indicated there may be a breach? Are files on the network suddenly encrypted?

These notifications all need to be validated and confirmed. It wouldn’t be the first time an incident turned out to be a new feature on a website, a new system or a misconfiguration (which can be a breach as well BTW).

How do you confirm the breach? Simple: assume the information received is correct and form a hypothesis of how it could have occurred. We’re doing a privacy blog here, so let us use the loss of Personally Identifiable Information (PII) as an example. Let’s say data has been identified on Pastebin and it looks like your client records. Some key questions you will need to ask are:

  • Where does this information exist in our organisation?
  • How can it be accessed? Is it internal only or internet facing? Is it perhaps stored by a third party?

Asking these two questions will help you establish whether it is indeed your data and perhaps give a clue as to which controls may have failed. They will also provide guidance as to where you need to look next. Are we looking through web and application logs, or are we digging through internal access logs in Active Directory, proxy logs, email logs, etc.? By just following up on these two questions, the Shearwater team have in the past confirmed incidents where hackers had gained access to systems and were actively retrieving data, but we’ve also identified incidents where a staff member inadvertently mailed out the bulk of a confidential database. In one case, the breach was actually at a third party where the data was stored for other purposes.
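To make that first pass at validation concrete, here is a minimal sketch of how you might trawl a web server access log for crude SQL-injection indicators when your hypothesis is that client records were pulled out through the web application. The log path and the patterns are illustrative assumptions only, not a complete detection rule set; tailor them to your own environment and log format.

```python
# Minimal sketch: scan a web access log for crude SQL-injection indicators.
# The log path and patterns below are illustrative assumptions, not a
# definitive rule set -- tune them to your own applications and log format.
import re
from collections import Counter

LOG_FILE = "access.log"  # hypothetical path to a combined-format access log

SQLI_PATTERNS = [
    r"union(\s|%20)+select",
    r"(\'|%27)(\s|%20)*or(\s|%20)*1(\s|%20)*=(\s|%20)*1",
    r"sleep\(\d+\)",
    r"information_schema",
]
sqli_re = re.compile("|".join(SQLI_PATTERNS), re.IGNORECASE)

hits_per_ip = Counter()
with open(LOG_FILE, encoding="utf-8", errors="replace") as fh:
    for line in fh:
        if sqli_re.search(line):
            source_ip = line.split(" ", 1)[0]  # first field in combined log format
            hits_per_ip[source_ip] += 1
            print(line.rstrip())

print("\nSuspicious requests per source IP:")
for ip, count in hits_per_ip.most_common(10):
    print(f"{ip}: {count}")
```

Even a rough pass like this can tell you quickly whether the “it came through the website” hypothesis is worth pursuing, or whether you should be digging through internal access and email logs instead.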

Now that you have validated that it is indeed a breach, or a suspected breach, we can move on to containment. If you haven’t already done so to help identify the issues, this is a good time to get the incident response team together. It is also a good time to let management know there is a potential breach that needs to be dealt with, and to give the privacy officer a heads up that there may be a notifiable data breach requirement. But this is all in your incident response plan… right?

Having a structured approach to a security incident will help make a bad day, just a little bit easier.

 

#2 – Contain the pain

Containment of the incident is the next step in the process. It is possible that the damage has already been done, true, but you still need to deal with the fact that an attacker may still be in your network and may still have access to the data. There is an argument for allowing the breach to continue, as it may provide you with valuable information that allows you to better prosecute the perpetrator. To be honest, to me this is like saying “let the bank robbers get away with the money because I want to see how they make their getaway”. If you are losing PII, the best response is generally to shut them out. Remember, the attacker doesn’t necessarily know why they lost access; they will often assume they did something wrong.

In the identification stage you will have looked at the various logs and established how the deed occurred, or at least you’ll have a good idea. If the web logs indicate SQL injection, perhaps remove the application, or configure a WAF to drop those requests. Maybe shut the service down whilst you identify the root cause and eradicate the issue (the next step). If it was a mail-out by a staff member, have a chat to the culprit and explain the result of their actions. So, to contain the issue, you may be doing some of the following (a rough scripted example follows the list):

  • Resetting passwords and disabling compromised credentials
  • Addressing known vulnerabilities and bugs via patching
  • Blocking network access
  • Quarantining compromised hosts or applications, or shutting down systems
  • Having some stern discussions on following processes
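As a concrete illustration of the first few items, here is a small containment sketch that turns a list of suspected attacker IP addresses into perimeter block rules and lists the credentials to disable. The IPs and account names are hypothetical placeholders; in practice they come out of your identification work, and anything generated like this should be reviewed and applied through your normal change process.

```python
# Minimal containment sketch: emit firewall block rules for suspected attacker
# IPs and list accounts to disable. The IPs (documentation ranges) and account
# names are hypothetical placeholders for illustration only.
suspected_attacker_ips = ["203.0.113.15", "198.51.100.7"]
compromised_accounts = ["svc_webapp", "j.smith"]

print("# Proposed perimeter blocks (Linux iptables example):")
for ip in suspected_attacker_ips:
    print(f"iptables -I INPUT -s {ip} -j DROP")

print("\n# Accounts to disable / force password reset:")
for account in compromised_accounts:
    print(f"- {account}")
```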

Various business decisions should inform all of the above approaches, weighing up the harm occurring due to the compromise or breach against the harm that could occur from shutting down systems. The decision to shut down systems that effectively shut off business operations should not be taken lightly, but it may be necessary to help prevent a greater harm. Don’t forget that if your systems are being used to attack others, you may be in deeper water than you first realised. Also, don’t forget to communicate to management what has been happening and where things are at.

 

#3 – Fumigate, eradicate, exterminate

Once containment has been accomplished, there is huge pressure to remove the badness immediately. However, you first need to identify the root cause of the issue. During identification you had the first clues; during containment you shut them out and hopefully gained more insight. Now it is time to do some navel gazing and identify exactly the how, what and why of the issue. There really is no substitute for a thorough investigation, and this is no time to take shortcuts. If you do not have the skills, consider bringing some in. Getting this wrong will result in a system that is compromised over and over and over. We see this quite often when organisations miss this step or get the next step (recovery) wrong.

Identifying the root cause of the issue is paramount. Analysis should be undertaken and the path to compromise understood in intimate detail. If you can’t explain the breach in excruciating detail and don’t have a complete timeline of events (within the realms of what is possible), then the investigation is not complete. You will be under pressure to undertake the investigation quickly, but resist the urge to finalise it until you understand the breach and have sufficient input for recovery. Make sure you have your facts and are as certain as you can be. Remember, the issue has already been contained in the previous step, so you are no longer haemorrhaging data.

When looking for the root cause, make sure you manage your evidence, establish your timelines and identify the how and why. Was it missing patches, a misconfigured system, a missing firewall rule, a bad piece of code in an application, or a WAF that was switched off? Creating a timeline is by far the best approach to get clarity on the events that have resulted in the breach.
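A timeline doesn’t need sophisticated tooling to be useful. The sketch below simply merges timestamped findings from different evidence sources into one chronologically sorted view; the entries are illustrative placeholders, and in a real investigation they would come from web logs, authentication logs, EDR alerts, interviews and so on.

```python
# Rough timeline sketch: merge timestamped findings from different evidence
# sources into one chronologically sorted view. The entries are illustrative
# placeholders only.
from datetime import datetime

findings = [
    ("2024-03-04 09:15:00", "external", "Records matching our schema posted to Pastebin"),
    ("2024-03-02 14:07:55", "web log", "First SQL-injection attempt observed"),
    ("2024-03-02 14:21:10", "web log", "Bulk export of customer table via injected query"),
    ("2024-03-04 10:02:30", "helpdesk", "Customer reports suspicious email referencing account data"),
]

timeline = sorted(findings, key=lambda f: datetime.strptime(f[0], "%Y-%m-%d %H:%M:%S"))

for timestamp, source, event in timeline:
    print(f"{timestamp}  [{source:9}] {event}")
```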

Go through all the elements. On servers, perhaps take a forensically sound image or snapshot. Safeguard log files. All of these can be used as evidence and help identify the how. Use the tools you have to identify the vulnerability that was exploited; it could be technical, or it could be procedural. Consider deploying an incident response tool to help identify compromised systems or malware, if present.
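One small habit that pays off later is recording cryptographic hashes of evidence as you collect it, so its integrity can be demonstrated afterwards. Here is a minimal sketch; the file paths are assumptions, and the resulting manifest should be stored somewhere the investigation team, not the compromised host, controls.

```python
# Sketch: record SHA-256 hashes of collected evidence files so their integrity
# can be demonstrated later. The paths are illustrative placeholders.
import hashlib
from pathlib import Path

evidence_files = [Path("evidence/access.log"), Path("evidence/auth.log")]

with open("evidence_manifest.txt", "w", encoding="utf-8") as manifest:
    for path in evidence_files:
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        manifest.write(f"{digest}  {path}\n")
        print(f"{digest}  {path}")
```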

Once you have established the how you can now devise strategies to eradicate the issue.

In the case where you have lost PII, your privacy officer or committee should now have the relevant information they need to complete their analysis of whether the breach needs to be reported. You will have information on:

1.  the timeframe (when did the breach start?)

2.  what systems and information have been disclosed, accessed, modified or lost

3.  who has been impacted, and whether the impact is likely to cause serious harm

4.  whether third parties are involved or impacted

You may have some of the information already from the previous stages, but until the investigation has concluded you may not have certainty.
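If it helps, those facts can be captured in a simple structured record that the privacy officer works from when making the notification assessment. The sketch below is only an illustration; the field names and values are assumptions, and your scheme obligations dictate what must actually be recorded.

```python
# Simple sketch of a record capturing the facts needed for a notifiable data
# breach assessment. Field names and values are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class BreachAssessment:
    breach_start: str
    breach_contained: str
    systems_affected: List[str]
    data_types: List[str]
    individuals_impacted: int
    serious_harm_likely: bool
    third_parties_involved: List[str] = field(default_factory=list)

assessment = BreachAssessment(
    breach_start="2024-03-02",
    breach_contained="2024-03-05",
    systems_affected=["customer web portal", "CRM database"],
    data_types=["names", "email addresses", "dates of birth"],
    individuals_impacted=12500,
    serious_harm_likely=True,
    third_parties_involved=["hosting provider"],
)
print(assessment)
```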

Post-breach clean-up is vital to prevent recurrence.

 

#4 – FIX IT, once, correctly.

This is the recovery stage of the incident response process: rebuilding systems, recovering data, patching systems, and fixing the configuration to make sure the same issue does not reoccur. This step is informed and guided by the output of the previous eradication step. Post-breach clean-up is vital to prevent recurrence. We have seen instances where a breach occurred in 2011 and, every two or three days, the attackers still return to test whether the system is vulnerable again; that is a long-term game. We have seen instances where the system was brought back online prematurely and the attackers took control before all security measures could be implemented. We’ve seen organisations recover corrupted data from backups, only to be breached again because the application was not fixed.

Build it from scratch, patch it, test it, scan it, patch it again, test it again, and make sure you apply all the additional controls you identified that would have helped prevent the issue. Test it again. Only after all of that is complete should the system be put back online.
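As one small piece of that verification, here is a sketch that checks a rebuilt host only exposes the services you expect before it goes back online. The hostname and port list are assumptions for illustration; a real go-live check would also include vulnerability scanning and configuration review.

```python
# Post-rebuild verification sketch: confirm that only expected services are
# reachable before the system goes back online. Hostname and port list are
# illustrative assumptions.
import socket

HOST = "rebuilt-server.example.internal"   # hypothetical hostname
EXPECTED_OPEN = {443}                      # only HTTPS should be exposed
PORTS_TO_CHECK = [21, 22, 23, 80, 443, 1433, 3306, 3389]

unexpected = []
for port in PORTS_TO_CHECK:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1.0)
        try:
            is_open = sock.connect_ex((HOST, port)) == 0
        except OSError:
            is_open = False  # e.g. name resolution failure
    if is_open and port not in EXPECTED_OPEN:
        unexpected.append(port)

if unexpected:
    print(f"Unexpected open ports on {HOST}: {unexpected} -- investigate before go-live")
else:
    print(f"{HOST}: only expected services are reachable")
```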

Keep in mind that, during recovery, your support and administration staff are likely to remain overworked and under pressure. Implement and enforce fatigue management processes to manage workloads, so that silly mistakes don’t creep in at this stage.

Then watch it, because they will be back. Remember, the attackers do not know why the system went away or why they lost access to it.

 

#5 – Notify and Prevent

The Lessons Learned stage, which feeds back into Preparation, is key. Once the incident is over, sit down and debrief. Look at what could have gone better. Review the information from the root cause analysis and determine what is to become business as usual (BAU) and what remains part of incident response. Update documentation, perhaps write a rough post-incident report, and go to sleep. As soon as you are able to, complete the Post Incident Report (PIR). It captures the lessons learned, enables objective review of current processes, and provides opportunities for improvement.

From an NDB notification perspective there is still some work left to do. The NDB scheme provides clear guidelines on how to notify individuals and the OAIC (please see my earlier posts). You should follow their recommendations to the letter and meet all scheme compliance requirements.

If I put myself in the position of an individual affected by a breach, I will evaluate the breach to see if the breached organisation made every effort to secure my personal information and sensitive data prior to the breach, and I am probably going to be understanding up to a point. What will matter most to me, from the point of being notified, is how the organisation manages the breach and the recovery. If the recovery and management are exemplary, I am more likely to give the disclosing firm a degree of understanding and the benefit of the doubt. If the breach management is poor or slipshod, I’m taking my data and my business elsewhere.

Hopefully, you have found this post helpful and the series of blog posts on the data breach topic illuminating. If you have any follow up questions, or would like some further information on related topics, please don’t hesitate to get in contact.

 
