Here you will find Shearwater’s latest security advisories, security updates and expert insights.

5 things to help you prepare for the Notifiable Data Breach scheme


Following on from my last post that covered the 5 things you need to know about the Notifiable Data Breach (NDB) scheme, this post focuses on the 5 things you really must do in order to be prepared for the scheme. As you will remember, the NDB impacts a significant number of organisations and requires specific actions to be followed in the event of a breach. So here is the top 5:

  1. Find out whether you need to comply with the provisions of the NDB.
  2. Determine what sensitive personal information you hold, and make a determination of what the following terms mean to you and your organisation:
    a. likely to ‘occur’
    b. ‘serious harm’.
  3. Prepare a step by step process of what you need to do in the event of a breach.
  4. Educate your stakeholders.
  5. Run a practice drill.

1.  Find out whether you need to comply with the provisions of the NDB Scheme

This task should be the simplest of the 5 things you need to do. A good starting point is provided in my previous blog post, but if you are in any doubt, please refer to the Office of the Australian Information Commissioner’s website.

If you are covered by the scheme and need to comply, and haven’t already started on your NDB compliance journey, I’d suggest you need to initiate some internal conversations. If necessary, engage some external expertise.

Even if you don’t need to comply, the investment you make in preparing a breach process will not be wasted.

2.  Determine what sensitive personal information you hold

For a large number of organisations, this task may sound easier than it actually is. Unfortunately, many organisations have a very poor understanding of their information assets, what is important to them, and what information they need in order to run their business. If sensitive information is not understood, you may be capturing, storing or processing more sensitive information than you need to.

You should also consider where that sensitive information is stored. Long gone are the days when you could safely say that all your data sits on a big file server in your data centre under lock and key. When you really look into where sensitive personal data is stored, you are likely to find that it is located on multiple servers and applications, SAN devices, laptops, iPhones, USB sticks, backup media, SharePoint, OneNote, Dropbox, and in a myriad of other cloud and/or shadow IT environments.

The next consideration is who has access to the sensitive personal information you possess. Questions to consider include: Do you outsource functions, systems or operational tasks? Are you storing data entirely within Australia, or are you working offshore and around the globe? Do your partners know that you have an NDB obligation? What is the state of your information supply chain, and where are you exposed? In fact, the legislation does recognise that organisations can jointly hold personal information, and has made provisions to avoid duplicate obligations.

Only once you have a full appreciation of what information you hold custodial responsibility over, where it is, and who else has access to it, can you make a sound judgement on what is ‘likely to result in serious harm’.

As with most approaches to information security and privacy matters, a solid understanding of risk management in terms of likelihood and consequence should be leveraged to inform the conversation around the serious harm question. The implementation of the NDB scheme effectively raises the bar on expectations from a risk management perspective.
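
To make that likelihood-and-consequence conversation concrete, a simple scoring matrix can frame the ‘serious harm’ discussion. The sketch below is illustrative only: the breach scenarios, ratings and notification threshold are hypothetical and would need to reflect your own risk appetite.

```python
# Illustrative likelihood x consequence lens for the 'serious harm' discussion.
# Breach scenarios, ratings and the notification threshold are hypothetical.

LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3, "almost certain": 4}
CONSEQUENCE = {"negligible": 1, "minor": 2, "serious": 3, "severe": 4}

def risk_score(likelihood: str, consequence: str) -> int:
    """Combine likelihood and consequence into a simple ordinal score."""
    return LIKELIHOOD[likelihood] * CONSEQUENCE[consequence]

scenarios = [
    # (description, likelihood of harm occurring, consequence to individuals)
    ("Encrypted laptop lost in transit", "rare", "minor"),
    ("Customer health records emailed to wrong recipient", "likely", "serious"),
    ("Payroll database exposed to the internet", "almost certain", "severe"),
]

NOTIFY_THRESHOLD = 6  # hypothetical cut-off for 'likely to result in serious harm'

for description, lik, con in scenarios:
    score = risk_score(lik, con)
    flag = "assess for notification" if score >= NOTIFY_THRESHOLD else "monitor and record"
    print(f"{description}: score {score} -> {flag}")
```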

 


 

3. Prepare a step-by-step process of what you need to do in the event of a breach

After you have undertaken an information asset inventory and understood what sensitive personal information you have, where it is and who has access to it, you need to prepare for a breach by developing a breach response framework. The framework should include:

  • A process that provides:
    • Identification, investigation, validation and containment rules
    • Clear authority to initiate an investigation and declare a breach
    • High-level resolution guidelines and plans
    • Permitted timeframes for each phase of the breach
    • Communications protocols internally including a clear RACI model
    • Key contacts both within your organisation and with specialist external parties to assist with investigation and resolution where required
    • Links to your work health and safety policy to help manage fatigue
  • Notification protocol for affected individuals. There are pro formas available and these can be leveraged rather than invented. The information provided about the breach should include:

    • the date, or date range, of the unauthorised access or disclosure
    • the date the data breach was detected
    • the circumstances and/or known causes of the data breach
    • who has obtained or is likely to have obtained access to the information
    • the steps undertaken to contain or remediate the breach
  • Options for notifying individuals include:

    • Notify all individuals impacted
    • Notify only individuals who are at likely risk of serious harm
    • Publish your notification, and publicise it to bring it to the attention of individuals at likely risk of serious harm
  • Notification protocol for the OAIC. Again, pro formas exist that can be used. Items required in the notification to the OAIC include (see the sketch after this list):
    • Contact details for your organisation
    • A description of the data breach
    • The kind of information involved in the data breach
    • The steps you recommend for impacted individuals in response to the breach
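
To keep the content of both notifications consistent and complete, the fields above can be captured in a single structure. The sketch below is a minimal illustration only; the field names are my own and should be checked against the current OAIC pro formas.

```python
# Minimal sketch of the breach notification content described above.
# Field names are illustrative, not the official OAIC pro forma wording.
from dataclasses import dataclass, field
from typing import List

@dataclass
class BreachNotification:
    # Details for notifying affected individuals
    date_or_date_range: str            # when unauthorised access/disclosure occurred
    date_detected: str                 # when the breach was detected
    circumstances: str                 # circumstances and/or known causes
    likely_recipients: str             # who has, or is likely to have, obtained access
    containment_steps: List[str]       # steps taken to contain or remediate
    # Additional items for the OAIC notification
    organisation_contact: str          # contact details for your organisation
    description: str                   # a description of the data breach
    information_kinds: List[str]       # kinds of information involved
    recommended_steps: List[str] = field(default_factory=list)  # steps recommended to individuals

example = BreachNotification(
    date_or_date_range="1-3 March 2018",
    date_detected="5 March 2018",
    circumstances="Misdirected email containing a customer export",
    likely_recipients="A single external recipient",
    containment_steps=["Recall requested", "Recipient confirmed deletion"],
    organisation_contact="privacy@example.com.au",
    description="Accidental disclosure of customer contact details",
    information_kinds=["names", "email addresses"],
    recommended_steps=["Be alert to unsolicited emails referencing your account"],
)
print(example.description)
```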

4. Educate your stakeholders

Without appropriate education and guidance, responsibility for everything during a breach may fall on you – the reader of this blog! Each stakeholder must know their roles and responsibilities and must be able to operate autonomously and as part of a team when it comes to managing a breach. An internal education activity is definitely something that you should undertake as a priority after your preparation activities. But don’t forget step 5. Knowledge helps, but nothing makes that knowledge stick like having stepped through the protocol at least once.

5. Run a practice drill

As the old saying goes, practice makes perfect. Running a breach practice drill doesn’t have to be onerous or take massive amounts of time to prepare, although the more you plan and the more often you practice, the better off you will be. As a first step, prepare some meaningful scenarios, book a meeting with relevant stakeholders, establish some ground rules and run through your established breach process for each of the practice scenarios. Appoint a note taker who will observe and record variations to the process flow. Initially, stick to the process that you have designed, but annotate any issues. Then roll those lessons learned into a second iteration of your NDB process.

Then keep practicing. Perhaps utilise your regular business continuity and disaster recovery drills as a vehicle to test your NDB processes.

The 5 things you need to know about the Notifiable Data Breach scheme


Mandatory Data Breach Disclosure and the Notifiable Data Breach (NDB) scheme are both really hot topics at the moment. There are a number of experts from the legal, cyber security and business communities all providing advice, many in forensic detail, on what should be done to prepare an organisation for this change.

I’m not planning to cover the NDB in detail; the aim of this blog post is to quickly and succinctly outline the 5 most important things you need to know about the NDB scheme within Australia.

Essentially, the why, what, when, who, and which of NDB. I’ll follow with a number of additional posts designed to provide practical guidance for organisations on this topic.

Why NDB?

With the prevalence and increased impact of data breaches on the news and in our lives, there is a greater need than ever for a consistent treatment mechanism. The absence of any industry consensus on data breach notification meant that it was only a matter of time before the Government put in place a scheme to protect the interests of consumers, and individuals.

After extensive industry and professional consultation, the Notifiable Data Breaches (NDB) scheme was established under Part IIIC of the Privacy Act 1988 (Privacy Act).

What is the NDB?

The Notifiable Data Breaches (NDB) scheme establishes a framework governing how data breaches are assessed and responded to, and the obligations of organisations in reporting breaches.

Specifically, the NDB introduces obligations for organisations who experience a data breach that exposes personal information and meets the criteria specified as likely to cause ‘serious harm’. More on what constitutes ‘serious harm’ in a moment.

Any breach notification must include recommendations for impacted individuals on the steps that they should take as a result of the breach.

The NDB also specifies that the Australian Information Commissioner must be notified of eligible data breaches.

When does NDB come into effect?

The NDB comes into effect on the 22nd of February 2018.

Who does the NDB impact?

Unless you live entirely off the grid and share no personal information, ultimately, the NDB affects us all.

Whilst not an exhaustive list, and with some exceptions, the organisations impacted by the NDB include:

  • Australian Government agencies
  • Businesses and not-for-profit organisations with an annual turnover of $3 million or more
  • Credit reporting bodies
  • Credit providers:
    • banks, building societies, credit unions, finance companies
    • retailers who issue credit cards
    • organisations where payment is deferred for at least 7 days – telcos, energy and water utilities
    • organisations that provide credit for hiring, leasing or renting goods
  • Health service providers
  • TFN recipients, which likely impacts State Government entities if they use TFNs

An important thing to note is that the NDB also applies to overseas organisations that have been incorporated or formed in Australia.

Which breaches are covered by the NDB?

In broad terms a data breach is defined as either: unauthorised access; unauthorised disclosure; or loss of personal information. The type of personal information covered includes:

  • An individual’s health information or other ‘sensitive’ information
  • information used as a precursor to identity fraud (Medicare card, driver licence, and passport details)
  • financial information
  • a combination of types of personal information.

As with all legislation, the devil is in the detail. This information does not seek to be exhaustive, and the usual legal disclaimers around seeking professional legal advice do apply.

The Office of the Australian Information Commissioner (OAIC) states:

The NDB scheme only applies to data breaches involving personal information that are likely to result in serious harm to any individual affected. These are referred to as ‘eligible data breaches’. There are a few exceptions which may mean notification is not required for certain eligible data breaches.

What does all this mean? The terms ‘likely’ and ‘serious harm’ are key.

  • ‘Likely to occur’ means more probable than not, rather than merely possible
  • ‘Serious harm’ may include serious physical, psychological, emotional, financial, or reputational harm to an individual

These terms are subjective and require some assessment against the so-called ‘reasonable person’ test. Harm can include: loss of business or employment opportunity; damage to a person’s reputation or relationships; humiliation; identity theft; significant financial loss; threats to physical safety; and workplace or social bullying or marginalisation. The circumstances of the breach are also an important factor.

The stated exceptions are interesting: if an organisation acts quickly to remediate a data breach, and as a result of that quick response the breach is no longer likely to result in serious harm, then there is no requirement to notify any individuals or the Commissioner.

Hopefully you have found this blog useful to set the scene for NDB. I’ll be following up with an additional series of posts on how to prepare for NDB, what is important during a breach and how your organisation can be prepared.

Information Security Report – December 2017


Over the past month, we have seen a number of threats, vulnerabilities, and spear phishing attacks affecting organisations worldwide. Read on for a summary of these events to help you assess their implications for your environment.

Threats and Exploits


Mailsploit

Mailsploit Allows Spoofed Mails to Fool DMARC. Mailsploit is a collection of vulnerabilities in various email clients that allow an attacker to perform code injection attacks, spoof senders and bypass email protection mechanisms such as DMARC (DKIM/SPF). The security researcher who developed Mailsploit described how it allows an attacker to send emails from any address they choose by taking advantage of how servers validate the DKIM signature of the original domain and not the spoofed one. It has been reported that this technique is not currently detected or blocked by the majority of mail clients.

All major email clients and web mail vendors were notified about Mailsploit prior to its public release, however a large number of popular clients still remain vulnerable.

The list of impacted mail clients can be found here >>

It is recommended that users update their email client whenever a software update is available, use end-to-end encrypted messaging for personal and work conversations, and/or use PGP/GPG to verify sender identities and encrypt email contents.
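
One practical check that follows from the Mailsploit discussion is to compare the domain shown in the From header with the domain that actually passed DKIM according to the Authentication-Results header added by your own mail gateway. The sketch below is a simplified illustration only; real-world parsing of Authentication-Results is messier, and the header can itself be spoofed unless your gateway strips inbound copies.

```python
# Simplified alignment check: does the visible From domain match the domain
# that passed DKIM/SPF according to Authentication-Results?
# Illustrative only - assumes Authentication-Results was added by a trusted gateway.
import re
from email import message_from_string
from email.utils import parseaddr

raw = """\
From: "Trusted Brand" <ceo@trusted-brand.example>
To: victim@example.org
Authentication-Results: mx.example.org; dkim=pass header.d=attacker.example; spf=pass smtp.mailfrom=attacker.example
Subject: Urgent payment

Please wire the funds today.
"""

msg = message_from_string(raw)
_, from_addr = parseaddr(msg["From"])
from_domain = from_addr.rpartition("@")[2].lower()

auth_results = msg.get("Authentication-Results", "")
dkim_domains = re.findall(r"dkim=pass\s+header\.d=([\w.-]+)", auth_results)

aligned = any(d.lower() == from_domain for d in dkim_domains)
print(f"From domain: {from_domain}, DKIM-passing domains: {dkim_domains}")
print("aligned" if aligned else "NOT aligned - treat as suspicious")
```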

You can read more about Mailsploit on Infosecurity Magazine and at mailsploit.com

Spear Phishing

Huge Increase in Email Impersonation Attacks: According to the Email Security Risk Assessment (ESRA) report released by Mimecast, although organisations continue to face an ongoing threat from malware, the fastest growing threat is impersonation attacks. An organisation is seven times more likely to be hit by an impersonation attack than by email-borne malware. These attacks are also known as whaling or spear phishing, where attackers trick recipients into wiring money to the fraudster. These scams are highly targeted and often executed after a cybercriminal has gathered enough information to send the right person the right message. These attacks continue to grow faster than malware because it is very hard for traditional defences like email filters to detect them.

Good user training will give an edge in avoiding most of these payment and impersonation scams. A few other tips for security teams to help combat the social engineering threat include:

  • Conduct internal phishing tests against your own employees and share the results with them so that they can learn what to look out for. This should be combined with good training on how users can detect phishing emails.
  • Impersonation attacks often try to mimic emails from C-level executives. Implement a company policy that closes scam avenues for would-be spear phishers (e.g., never request the sharing of sensitive documents via email).
  • Disable links inside email bodies to force users to manually navigate to the site mentioned in the email. It adds extra steps, but it can prevent a user from clicking on a phishing link by accident.
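
A lightweight complement to those tips is to flag inbound mail whose display name matches one of your executives but whose sending address is not on your own domain. The sketch below is hypothetical: the executive names and internal domain are placeholders, and real matching would need to handle nicknames and lookalike domains.

```python
# Hypothetical display-name impersonation check: the sender claims to be an
# executive but the address is not on our domain. Names/domain are placeholders.
from email.utils import parseaddr

EXECUTIVES = {"jane citizen", "john smith"}   # hypothetical executive names
INTERNAL_DOMAIN = "example.com.au"            # hypothetical corporate domain

def looks_like_impersonation(from_header: str) -> bool:
    display_name, address = parseaddr(from_header)
    domain = address.rpartition("@")[2].lower()
    name_matches_exec = display_name.strip().lower() in EXECUTIVES
    return name_matches_exec and domain != INTERNAL_DOMAIN

samples = [
    'Jane Citizen <jane.citizen@example.com.au>',   # legitimate
    '"Jane Citizen" <j.citizen@freemail.example>',  # likely impersonation
]
for header in samples:
    verdict = "flag for review" if looks_like_impersonation(header) else "ok"
    print(f"{header} -> {verdict}")
```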

Read more on Infosecurity Magazine and TechRepublic

 


 

Breaches


Virtual Keyboard App Data Breach

Massive Breach Exposes Keyboard App that Collects Personal Data on its 31 Million Users. A team of security researchers discovered a huge trove of personal data belonging to users of the virtual keyboard app ‘AI.type’ that was accidentally leaked online for anyone to download. The app is a customisable on-screen keyboard for mobile phones and tablets with more than 40 million users worldwide. It is reported that the app requests ‘full access’ to all user data stored on the phone and appears to collect everything from contacts to keystrokes. The leaked data includes full names, phone numbers, email addresses and device information including device name, screen resolution, model details, Android version, mobile network name, country of residence, GPS location and even links and information associated with social media profiles.

Events such as this raise the question of what permissions mobile applications have on our devices (and just how much access these applications NEED). To best protect yourself against this form of application privilege abuse, it is recommended to always review, and be cautious about, what access is granted to applications.

Read more on The Hacker News

Uber Technologies Data Breach

Personal data of 57 million customers and drivers was stolen last year from ride-sharing company Uber, and the breach is revealed to have been concealed by the company for more than a year. It is suggested that the company paid the attackers $100,000. The company, however, advised that no social security numbers, credit card information, trip location details or other data were taken. Uber is being condemned for how it chose to deal with the issue after discovering the attack and has also been sued for negligence over the breach by a customer.

It is reported that two attackers were able to retrieve login credentials from a private GitHub repository, which they used to access Uber data in an Amazon Web Services account where they discovered customer and driver information. Although there are state and federal laws in the United States that require companies to alert people and government agencies when sensitive data breaches occur, Uber failed to comply.

Read more on Bloomberg.com

Breach at PayPal Subsidiary Affects 1.6 Million Customers. PayPal disclosed on 1 December 2017 a data breach at its recently acquired company TIO Networks. Personal information for 1.6 million individuals may have been compromised. TIO is based in Canada and serves some of the largest telecom and utility network operators in North America. PayPal pointed out that the PayPal platform has not been impacted, as the TIO systems have not been integrated into its own platform. PayPal advised that affected companies and individuals would be contacted via mail and email, and offered free credit monitoring services via Experian. The data breach was discovered as part of ongoing investigations to identify vulnerabilities in the processing platform.

Read more on SecurityWeek.com

Other News


Simulated Attacks Uncover Real-World Problems in IT Security. A research report by SafeBreach, a cybersecurity company that has developed a platform that simulates hacker breach methods, reveals that virtual hackers “have a 60% success rate of using malware to infiltrate networks. And once in, the malware could move laterally almost 70% of the time. In half the cases, they could exit networks with data.” The research found that it was not hard to get past the perimeter and once in, it was easy for attackers to move around and exfiltrate data. This is because most organisations overlook concerns over lateral movement as they mostly focus on the perimeter.

According to the report, malware infiltration methods like nesting or “packing” malware executables were effective in bypassing security controls 50% of the time. The success rate of infiltrating a network using packed executables was found to be 55%-61% using JavaScript, VBScript (VBS) over HTTP, and the compiled HTML file format (CHM). It is recommended that network security controls be configured to scan for malicious files and block them before they make their way to the endpoints/hosts for installation to disk. The report further outlines how cybercriminals exfiltrate data using the easiest methods, which are often traditional clear or encrypted web traffic. Ports with the highest exfiltration success rates include Port 443 (HTTPS) and Port 123 (NTP).
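
Given that ordinary web traffic and even NTP can carry exfiltrated data, it is worth periodically summarising outbound volumes by destination port and questioning anything unexpected. The sketch below assumes a hypothetical CSV export of flow logs with dest_port and bytes_out columns; your firewall or NetFlow tooling will have its own export format.

```python
# Summarise outbound bytes per destination port from a hypothetical flow-log CSV.
# Column names (dest_port, bytes_out) are assumptions, not a real product format.
import csv
from collections import defaultdict

WATCH_PORTS = {443: "HTTPS", 123: "NTP"}  # ports called out in the report

def summarise(flow_log_path: str) -> dict:
    totals = defaultdict(int)
    with open(flow_log_path, newline="") as handle:
        for row in csv.DictReader(handle):
            totals[int(row["dest_port"])] += int(row["bytes_out"])
    return totals

if __name__ == "__main__":
    totals = summarise("outbound_flows.csv")  # hypothetical export file
    for port, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
        label = WATCH_PORTS.get(port, "")
        print(f"port {port:<5} {label:<6} {total:>12} bytes out")
```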

It is recommended that, in order to better protect resources, organisations optimise their current security solutions, constantly update configurations as needed, and then test the changes they make.

Read more on DARKReading.com

How to set up the right Vulnerability Management processes


Managing your network vulnerabilities and identifying the right vulnerability management processes can be complex. Whilst finding and prioritising vulnerabilities are the responsibility of the security leader, the speed at which these vulnerabilities are remediated is dependent on other people in your organisation. System architects and administrators, IT managers and system owners all play a part in remediating the issues.

As a security professional, you are acutely aware of the security risks in leaving systems in a vulnerable state. However, addressing the issues does not always align with business priorities or present workloads. So how do you set up a process that addresses the challenges above and keeps you on speaking terms with colleagues?

Here is a 3 part process — Categorise, Prioritise, Bitesize — that can help you streamline your activities. More specifically:

  1. Helps you see patterns before they become an issue
  2. Allows you to narrow down the most important threats, and
  3. Helps you execute resolutions as effectively as possible

1- Categorise


After running your first few scans, the first step to managing vulnerabilities is to categorise them. This helps to indicate potential process issues and highlights common trends and weak areas.

The main categories we come across are:

Missing patches

Many of the issues we see are caused by missing patches. The scans, apart from showing that certain patches are missing, may indicate gaps in the patching process. Perhaps the organisation is patching forwards only and never applies past patches to systems that may have changed over time or changed purpose.

Configuration issues

Vulnerability scans can also show an organisation how effective their build standards are. When scans show many different vulnerabilities on similar devices it can be an indication that build standards or hardening guides are not being adhered to.

I have a colleague who works at a large multinational organisation. We were talking about patching and vulnerability management and I asked him how many servers he looked after. His answer surprised and confused me: he said “One”. In reality, he looked after close to 50,000 servers, but the build was consistent, essentially the same server replicated 50,000 times. So, when he fixes one issue on his single server, he’s actually fixing the same issue on all systems.

Scans can also highlight other configuration issues such as misconfigured devices or services, default passwords being used, and so on. Many of these can be fixed by fixing the process.

Outdated software

Scans will also highlight the use of outdated software. It is also quite common to discover devices that you were not aware of. For example, in one vulnerability assessment we did, the old Windows 2003 servers were known. The multitude of Windows XP devices and a Windows NT server were more of a surprise.

False positives

Every scanner has a particular way to identify issues. For example, in the early 2000s, there was a computer worm called Code Red that attacked Windows IIS servers. To combat this, the vulnerability scanners at the time were primed to spot the product code and version number for IIS. However, not long after Code Red was fixed, Microsoft stopped updating the version number. This meant that vulnerability scanners would still think, based on the version number, that the system was vulnerable to this attack, even though it had long been fixed. So it is important to understand how the scanner you use identifies certain issues. This allows you to identify false positives.

As part of your process, you need to identify and manage false positives and carefully weed out the irrelevant information for your particular environment.

Don’t care/low risk

The final category we use is the ‘Don’t care’ or low-risk category. Whilst scanners assign their own risk ratings, there are always findings that would have no or minimal impact on your environment.

Every environment has low-risk items. One of the most common we see is the ICMP timestamp issue. While timestamping issues should be fixed, for many organisations there are more important tasks that need addressing first.

There are also issues that could almost be considered trivial. For example, if “Last user logged on” is shown then it’s a “We’ll get around to it” fix. I’m fairly safe in saying no organisation was ever compromised through this particular issue.
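
As a starting point for the categorisation step, scanner findings can be bucketed with a few keyword rules before a human reviews the edge cases. The sketch below is a simplified illustration; the keywords and sample findings are made up, and every scanner exports findings in its own format.

```python
# Rough first-pass categorisation of scan findings. Keywords and sample titles
# are illustrative; tune them to your scanner's wording and your environment.
CATEGORY_KEYWORDS = {
    "missing_patch": ["ms17-010", "security update", "patch missing"],
    "configuration": ["default password", "misconfigur", "weak cipher"],
    "outdated_software": ["end of life", "unsupported version", "windows xp"],
    "low_risk": ["icmp timestamp", "last user logged on"],
}

def categorise(finding_title: str) -> str:
    title = finding_title.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(keyword in title for keyword in keywords):
            return category
    return "needs_review"   # a human decides: false positive, or a new category

findings = [
    "MS17-010: Missing security update for Windows SMB",
    "Web server uses default password 'admin'",
    "Unsupported version of PHP detected",
    "ICMP timestamp response enabled",
    "Custom application returns verbose error messages",
]
for finding in findings:
    print(f"{categorise(finding):<18} {finding}")
```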

2- Prioritise


When it comes to vulnerabilities, everyone tends to say that every vulnerability is important and urgent – but in reality, that isn’t the case. Not everything is important or urgent; you need to prioritise and focus on the most important vulnerabilities you’ve identified.

You can create your priority list by considering:

Importance of asset

Start by looking at the criticality of each asset for your organisation. That is, if the system were to go down or be broken into, what is the realistic impact? Would it spell the end of the organisation or just cause a mild inconvenience?

The risks of remediating or not remediating

What is the risk of not fixing the issue? Many organisations deprioritised MS17-010 (EternalBlue). The risk, as many companies found out, was that their environments got infected with ransomware and suffered significant downtime.

The reverse is also true. Applying a Flash patch to a critical server that can’t be used to access the internet can probably be left for a little while, as the risk the change poses to the server is higher than the issue it addresses.

Ease and/or difficulty of remediation

The reality is that some issues are easy to fix, while others are complex and could require extensive testing. As you evaluate the vulnerabilities, identify how difficult or easy each one would be to address, as well as the spread of the issue. A widespread issue, i.e. one that affects a large number of devices, may be addressed before a critical issue identified on only a few devices.

Accuracy of vulnerability

Vulnerability scanners suggest, based on the tests conducted, that a certain vulnerability exists, and whilst in many cases that is true, in your environment the flagged behaviour may simply be how things are meant to work. The tests may also be basic version checks rather than comprehensive tests, so you need to be technically minded to decide whether the vulnerabilities identified are relevant and accurate for your environment. Scanners still require human interpretation to make the right call.

Scanners, like many software tools, provide a suggested value on the vulnerabilities detected within your environment. However, while you can tweak values to better reflect your needs, you can’t always rely on these numbers to make decisions – let me show you why.

Here we have some examples of common vulnerabilities scanners detect. Let’s explore the suggested values:

[Image: Vulnerability Management Processes (sample scanner risk ratings)]

Password that never expires: the scanner has ranked this as ‘severe’. I tend to agree and would recommend addressing this, particularly if the password contains only a handful of characters.

TLS/SSL attacks: Again, I agree with the moderate rating, however, these types of attacks are quite tricky to do as they need very specific information. We could probably leave this one down the list of priorities.

Diffie-Hellman: While this is ranked as moderate, I would categorise this risk as severe if it were an internet-facing service. Interestingly, we have found on many occasions that addressing higher-priority issues like this resolves other lower-priority issues.

Windows display last username enabled: This is ranked moderate, but I know it’s a lineball call as some organisations care more about this than others.
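
Pulling those threads together, a simple local score that blends scanner severity with asset criticality and spread, and leaves room for a manual override, can produce a more defensible order of work than the scanner rating alone. Everything in the sketch below, including the weightings, is illustrative.

```python
# Illustrative priority score: scanner severity adjusted by asset criticality,
# spread across the estate, and a manual override. All weightings are arbitrary.
from typing import Optional

def priority(severity: int, asset_criticality: int, hosts_affected: int,
             override: Optional[float] = None) -> float:
    """severity and asset_criticality on a 1-5 scale; returns a comparable score."""
    if override is not None:
        return float(override)
    spread_factor = 1 + min(hosts_affected, 100) / 100  # caps the spread bonus
    return severity * asset_criticality * spread_factor

findings = [
    # (title, scanner severity, asset criticality, hosts affected, manual override)
    ("Weak Diffie-Hellman on internet-facing VPN", 3, 5, 2, 25),  # bumped up manually
    ("Password never expires on internal file server", 4, 3, 1, None),
    ("TLS renegotiation issue on intranet app", 3, 2, 4, None),
    ("Windows displays last logged-on username", 2, 2, 60, None),
]
for title, *rest in sorted(findings, key=lambda f: priority(*f[1:]), reverse=True):
    print(f"{priority(*rest):6.1f}  {title}")
```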

3- Bitesize


[Image: Vulnerability Scanning Report (a report running to over 11,000 pages)]

As you can see from the image, this scanner has spat out a report over 11,000 pages long. Imagine if someone dropped this on your desk with a “here you go, get cracking”. What are the chances you’ll get stuck into it? What are the chances you’ll stay on speaking terms with that person?

Sadly, it’s this sort of common approach that makes it almost impossible for organisations to tackle vulnerabilities effectively.

So instead, we turn this report into bitesize chunks by:

  • Selecting what aligns with the organisation’s priorities. We want to maximise valuable resources.
  • Checking that the task is achievable. This helps to determine the sort of support you need.
  • Identifying the quick wins and slow burns. Will the completion of one simple task resolve a widespread issue? Or do you need to carry out more testing or request additional help to complete something more complex?

Based on the priorities and the risk to the organisation, liaise with the relevant teams. Provide smaller, achievable tasks and objectives rather than one large bucket of issues. By splitting the tasks into smaller, achievable objectives, the teams will be better able to cope.

Identify:

  • Which vulnerabilities have to be fixed now, and
  • Which ones the business can cope with until later

Once you have your priorities in order, create a task list and work your way from top to bottom. Perhaps start by addressing the easily achieved remediation tasks and build up.
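
One way to keep the chunks genuinely bitesize is to group the prioritised findings by the team that owns the fix and cap the size of each batch. The sketch below is a minimal illustration; the team names, ownership mapping and batch size are hypothetical.

```python
# Split prioritised findings into small batches per owning team.
# Team names, the ownership mapping and the batch size are hypothetical.
from collections import defaultdict
from typing import Dict, List

BATCH_SIZE = 5  # small enough to finish within a normal change window

def bitesize(findings: List[dict]) -> Dict[str, List[List[dict]]]:
    """Group findings (assumed already sorted by priority) by owner, then batch them."""
    by_team = defaultdict(list)
    for finding in findings:
        by_team[finding["owner"]].append(finding)
    return {
        team: [items[i:i + BATCH_SIZE] for i in range(0, len(items), BATCH_SIZE)]
        for team, items in by_team.items()
    }

findings = [
    {"title": "Patch MS17-010 on file servers", "owner": "wintel"},
    {"title": "Disable SMBv1 across the fleet", "owner": "wintel"},
    {"title": "Update OpenSSL on the web tier", "owner": "linux"},
    {"title": "Remove default credentials on printers", "owner": "network"},
]
for team, batches in bitesize(findings).items():
    print(team, "->", [len(batch) for batch in batches], "batch(es)")
```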

We can’t stress enough how successful this approach is; breaking down your tasks into manageable chunks not only makes it easier to visualise results but engages your organisation along the journey.

As you can see, setting up a process for vulnerability management is essential in streamlining what can otherwise be a difficult and lengthy process. The above approach can make huge improvements in your security posture and guide your continuous improvement when it comes to cybersecurity.

 


This helpful advice is Best Practice #2 in our Vulnerability Management 101: 5 Best Practices for Success, where you will find advice on your next steps for improving the categorisation and prioritisation of your scan data, and on selecting and configuring your vulnerability management tools.

Find out more >>

Ten things you should know about ISO/IEC 27001


1.    What is ISO 27001?

ISO 27001 is an international standard for information security management.

2.    Why is ISO 27001 important to me?

Information is the lifeblood of most contemporary organisations. It provides the intelligence, commercial advantage and future plans that drive success. Most organisations store these highly prized information assets electronically. Therefore, protecting these assets from either deliberate or accidental loss, compromise or destruction is increasingly important. ISO 27001 is a risk-based compliance framework designed to help organisations effectively manage information security.

3.    Why are international standards like ISO 27001 important?

Many industries and governments have adopted ISO 27001 as the de facto standard for information security management practices. It is particularly popular at the state government level within Australia, where it is often mandated, and in industries such as ICT and data centre hosting. International standards provide significant benefits to both the domestic and global economy.

For Consumers
Proof of conformity to International Standards helps reassure consumers that products, systems and organisations are safe, reliable and good for the environment.

For Business
International Standards can be a strategic tool to help businesses tackle challenges and compete on a global stage.
Adoption can: open up new markets, improve competitiveness through greater customer satisfaction, reduce costs, streamline systems and processes, and increase productivity.

For Society
Standards improve safety, quality and environmental outcomes as well as encouraging international trade.

4.    Why is ISO 27001 important?

Having an international standard for information security provides a common framework for managing security across businesses and across borders. In an ever more connected world, the security of information is increasing in importance.

Data and information needs to be safe, secure, and accessible. The security of information is important for personal privacy, confidentiality of financial and health information and the smooth functioning of systems and supply chains that we rely on in today’s interconnected world.

ISO 27001 provides a framework for you to effectively manage risk and select security controls, and most importantly, a process to achieve, maintain and prove compliance with the standard.
Adoption of ISO 27001 provides real credibility that you understand security and take it seriously.

5.    What are the elements of ISO 27001?

ISO 27001 is made up of a number of short clauses, and a much longer annex listing 14 security domains and 114 controls. The most important of the short clauses relate to:

  • The organisational context and stakeholders
  • Information security leadership and high-level support
  • Planning of an Information Security Management System (ISMS), including risk assessment and risk treatment
  • Supporting an ISMS
  • Making an ISMS operational
  • Reviewing the system’s performance
  • Adopting an approach for corrective actions

Based on the risk profile of the organisation, controls may be selected to manage identified risks. Within the Annex, the 114 listed controls are broken down into 14 key domains which are listed below:

  • Information security policies
  • Organisation of information security
  • Human resource security
  • Asset management
  • Access control
  • Cryptography
  • Physical and environmental security
  • Operations security
  • Communications security
  • System acquisition, development and maintenance
  • Supplier relationships
  • Information security incident management
  • Information security aspects of business continuity management
  • Compliance

6.    How does it work? – What is a Risk-Based Approach to Compliance?

Unlike other security standards, for example the Payment Card Industry Data Security Standard (PCI-DSS) and Sarbanes-Oxley (SOX), which are highly prescriptive and control driven, ISO 27001 takes a risk-based approach to security compliance. In other words, there is no defined set of security controls that must be implemented regardless of the type of business operation, as is the case with PCI-DSS. Controls are selected based on their ability to mitigate risks to the organisation.

ISO 27001 is concerned with the process of continual improvement and a demonstrated commitment to managing information security based on risks to the organisation’s information assets.
A risk-based approach to managing information security ensures that security risks are appropriately prioritised and cost-effectively managed, and that only those controls necessary to manage these risks are implemented. It is a comply-or-explain approach: based on your organisation’s risks, you can comply with the controls that help manage risk, or simply explain why they aren’t relevant and why you don’t need them. There is no compliance for the sake of compliance with ISO 27001.
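
The comply-or-explain idea is usually recorded in a Statement of Applicability: for each Annex A control you note whether it applies and why, or why not. The sketch below is a minimal illustration with made-up justifications.

```python
# Minimal 'comply or explain' record, in the spirit of a Statement of Applicability.
# Control references are from ISO 27001 Annex A; justifications are made up.
from dataclasses import dataclass

@dataclass
class ControlDecision:
    control: str          # Annex A reference
    title: str
    applicable: bool
    justification: str    # why it is (or is not) applied

soa = [
    ControlDecision("A.9.4.2", "Secure log-on procedures", True,
                    "Mitigates credential-theft risks identified in the risk assessment"),
    ControlDecision("A.14.2.7", "Outsourced development", False,
                    "All software is developed and maintained in-house"),
]
for decision in soa:
    status = "apply" if decision.applicable else "exclude"
    print(f"{decision.control} {decision.title}: {status} - {decision.justification}")
```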

7.    Where should I start?

Before starting out on the path to certification, it may be worthwhile understanding if certification is required, or if compliance will suffice. For many organisations, certification is not a requirement.

For those industries where certification is a requirement, the path to achieving certification should not be treated as a one-off project. Firms that successfully maintain certification over multiple years treat information security as a critical business process and invest time, resources and effort into ongoing compliance. Certification is the logical consequence of compliance, and should be relatively easy if a solid compliance regime is established and maintained.

For most organisations, the logical place to start is to conduct a gap analysis against the requirements of ISO 27001.

8.    The Audit Process

External certification can only be conducted by an Accredited Certification Body (CB). In Australia, Shearwater recommends certification services from reputable CBs only, such as BSI and SAI Global.

The initial audit process is undertaken in two stages:

  • Stage 1 – A Documentation Review that focuses on a desktop review of available ISMS documentation and processes. Sufficient evidence of a functioning ISMS is required in order to progress to the Stage 2 audit.
  • Stage 2 – Focuses on evaluating the implementation and effectiveness of the management system. The audit will assess evidence and will typically require the ISMS to have been running for a period of at least three months.

The certification cycle also requires regular external surveillance audits to be performed and evidence that the management system is being actively maintained. Surveillance audits for ISO 27001 are typically performed every six months; however, mature systems in low-risk industries can be extended to an annual audit cycle in consultation with the certification body. ISMS re-certification occurs every 3 years.

9.    Who wrote ISO 27001? – History

ISO (International Organization for Standardization) is the world’s largest developer of voluntary International Standards. Many countries have their own national standards governing everything from railway gauges, electrical power point specifications, building materials and personal protective equipment to children’s toys, to name just a few. When a standard reaches maturity and has widespread application in more than one jurisdiction, ISO forms a working group and works towards publishing an International Standard.
The original forerunner of ISO 27001 was written by the UK Government’s Department of Trade and Industry (DTI), and then published by the British Standards Institution (BSI) as BS 7799 in 1995.

10.    Tips, tricks and pitfall avoidance

Before Certification
Don’t underestimate the number of stakeholders you will need to consult. In large organisations, stakeholder management can be a large undertaking and a key requirement for a successful compliance activity.

Partner with experienced information security providers who know the implications of their advice, in particular with respect to the selection of information security controls. Many controls sound like a good idea, but the implementation can be much more challenging.
Start with an understanding of risks and development of a management system before jumping into controls and technology. Investing time up front to understand your risk posture will pay long-term benefits.

During Certification

Avoid anybody who guarantees certification within 1 month. They can’t! Certification Bodies require at least 3 months of evidence at the Stage 2 audit to make a recommendation for certification.

Certification Bodies are prevented under another ISO standard (19011) and scheme rules from performing certification and consulting/advisory services due to conflict of interest issues. Some get around this by offering extended pre-assessments or gap analyses. Whilst these may appear cheap, there are limits to the number of actionable recommendations that can be provided.

After Certification
You will be entitled to display an ISO 27001 certification mark. The certification mark is tangible proof that you take care of information, are committed to protecting data entrusted to you, and are fulfilling your commercial, contractual and legal responsibilities with respect to information security. A great idea would be to promote this certification on your marketing collateral and website as a source of differentiation from your competitors.

Need assistance with ISO 27001? Get advice from one of our experienced consultants! We’ll arrange a scoping call, and offer you tips and suggestions for a clear roadmap to achieving and maintaining compliance. Talk to us today!

What should I look for in a Threat Intelligence Solution?


This blog article is part of a series: Part 1 | Part 2 | Part 3

In this final article in this series, I provide some guidance on what to look for in a CTI solution.

The four important questions when assessing CTI should be:

  1. How current is the Threat Intelligence Provided?
  2. How broad is the coverage?
  3. What contextual information is available to help understand the risk?
  4. What integration and automation capabilities are available?

One other consideration when assessing a CTI solution is the importance of attribution. A lot of time and effort is spent arguing over the importance of attribution, and I don’t believe there is a definitive answer. I believe it depends upon your circumstances, resourcing and the sector in which you work. Attribution may not matter at all for certain sectors or companies, but it will certainly be important if you are a specialist manufacturer with process secrets who is being infiltrated by a leading competitor. Similarly, if you are a large government defence agency, it is probably important to understand if a nation state is behind an intrusion. Cybercriminals, issue-motivated groups, hacktivists, disgruntled employees, or some other disenfranchised assortment can certainly cause many problems, but attribution may not be important at all in looking at CTI solutions. If attribution is important to your organisation, then that should be a fifth consideration when assessing CTI solutions.

After going through these questions, you may also find that you have sufficient coverage currently with the Threat feeds you are getting via your existing vendors or via various open source providers.

CTI information currency is all-important. Put simply, the more frequent the updates, the smaller the potential threat window. Frequent, meaningful updates are important to keep your threat intelligence information current over time. Real-time, or near real-time, updates are optimal.

Coverage is the second important assessment criterion. It is impossible to cover all threat sources, and any vendor that promises this should be avoided. Coverage really comes down to being a big data issue. Some useful measures include:

  1. the number of IP addresses monitored.
  2. the number and variety of Threat Intelligence sources. A good cross section is important, and could include: verified existing feeds; anonymised customer data; Internet registries; known botnets; DNS information; geolocation information (down to the country, state, city and ideally GPS coordinates); deployed honeypots; darknet data; deployed crawlers; anonymous proxy information (including TOR); free DNS services; and, wherever financially viable, external networks (although this can be costly).
  3. the volume of traffic monitored on a daily basis.
  4. Catch rate improvements, verified by independent and respected test authorities.
  5. The last consideration may be whether threat information from other customers is used, and whether this data can be broken down by a particular categorisation such as industry.

Contextual data should include all the metadata that relates to the threat intelligence, such as the time the intelligence was collected, the type of threat, the geolocation to enable high-risk geographies to be highlighted, and the source of the intelligence (internal, external, free). Probably the most important piece of contextual information is how the threat intelligence is rated from a risk perspective. This is where it can get a little tricky, as most CTI vendors will promote their own proprietary algorithm or methodology. The only real way to get to grips with this element is to run a proof of concept before purchasing, take up site references, and specifically drill into this element with current clients. Because things change pretty quickly in cyberspace, the currency of this contextual information is also very important.
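
In practice, that contextual metadata tends to travel with each individual indicator. The sketch below shows one minimal way to represent it; the fields mirror the paragraph above and the example values are invented.

```python
# Minimal representation of a threat indicator plus the contextual metadata
# discussed above. Field values are invented for illustration.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ThreatIndicator:
    value: str            # e.g. an IP address, domain or URL
    indicator_type: str   # "ip", "domain", "url", ...
    threat_type: str      # e.g. "botnet C2", "phishing"
    collected_at: datetime
    geolocation: str      # country/state/city, GPS if available
    source: str           # internal, external/commercial, free/open source
    risk_rating: float    # vendor/proprietary score - validate in a proof of concept

indicator = ThreatIndicator(
    value="203.0.113.42",                       # documentation-range IP
    indicator_type="ip",
    threat_type="botnet C2",
    collected_at=datetime(2017, 12, 1, 9, 30, tzinfo=timezone.utc),
    geolocation="AU/NSW/Sydney",
    source="commercial feed",
    risk_rating=8.7,
)
print(f"{indicator.value} ({indicator.threat_type}) risk={indicator.risk_rating}")
```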

Automation and integration is the last important factor in assessing CTI. Automation makes the intelligence actionable from a technology configuration perspective. Integration is important to ensure that automation is possible within your chosen technology stack. Broad support of common technologies is important, as is an accessible or open API.

In summary, the issues to focus on when selecting a CTI solution come down to speed, reach and accuracy across a seemingly infinite data set, together with the ability to integrate and automate.

I hope that you enjoyed this series on cyber threat intelligence. If you would like to learn more about the subject or would like to talk to me, I can be contacted via email at: slane@shearwater.com.au

ASD Essential 8 Summary


So you have mastered the ASD Top 4? What do you need to tame the Essential 8? 

In this ASD Essential 8 Summary, we will answer:

  • What has stayed the same?
  • What has changed?
  • What does this mean?
  • What do I need to do to achieve this baseline standard?
  • When do I need to complete it by?

 

What has stayed the same?

The key thing that has remained constant from the ASD Top 4 to the Essential 8 is the pragmatic, good advice provided by ASD. The focus is still on making systems and information secure in order to safeguard organisational reputations and save time and money. However, unlike a great number of global compliance regimes such as SOX, JSOX, PCI and SSAE, the Essential 8:

  • Helps organisations manage risks that are relevant to their specific context
  • Provides prioritised steps to address relevant threats
  • Represents a baseline for organisations to achieve

The risk-based approach and the prioritised controls are world class and equate to a cost effective and intelligent use of security budgets.

The evolution of the Top 4 to the Essential 8 quite firmly underlines the core message that good security is a process and not a project. Organisations that have conducted a ‘Top 4 project’ and not implemented an ongoing security process, may in fact have missed the point. The Essential 8 is ASD’s reminder to keep improving.

What has changed?

There is one large change and a number of smaller changes. The large change shifts the focus from the Top 4 Strategies to Mitigate Targeted Cyber Intrusions to the Essential 8 Strategies to Mitigate Cyber Security Incidents. The Top 4 was designed to keep the malicious out. The Essential 8 recognises that whilst a lot can be done to keep people out, the reality is that you need to plan and design for when they eventually do get in.

The smaller changes add 4 more controls and shift the initial Top 4 around. You now have two columns:

Prevent malware from running (keep ‘em out):
  • Application whitelisting (Top 4 original)
  • Patch applications (Top 4 original)
  • Disable untrusted Microsoft Office macros (New)
  • User application hardening (New)

Limit the extent of incidents and recover data (plan for when they get in and respond):
  • Restrict administrative privileges (Top 4 original)
  • Patch operating systems (Top 4 original)
  • Multi-factor authentication (New)
  • Daily backup of important data (New)

What does this mean?

The ASD has reinforced that good security is a journey that never ends. In other words, you should expect the Essential 8 to continually change over time. ASD’s subliminal challenge is to think about what will provide you with the best returns for your effort and investment across both prevention and response. ASD wants organisations and security leaders to answer some searching questions:

  1. Do I know what my mission critical assets are and what needs protecting?
  2. Who are my adversaries, or who do I need to guard against?
  3. What is the gap between my current security controls and those outlined in the Essential 8? In other words, what other strategies do I need to implement based on my risks? (See the sketch after this list.)
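
For that third question, even a very small script can make the gap visible. The sketch below is illustrative; the implementation status values are invented, and a real assessment would also rate the maturity of each control, not just its presence.

```python
# Illustrative Essential 8 gap check. The 'implemented' flags are invented;
# a real assessment would also rate maturity, not just presence.
ESSENTIAL_8 = {
    "Application whitelisting": True,
    "Patch applications": True,
    "Disable untrusted Microsoft Office macros": False,
    "User application hardening": False,
    "Restrict administrative privileges": True,
    "Patch operating systems": True,
    "Multi-factor authentication": False,
    "Daily backup of important data": True,
}

gaps = [control for control, implemented in ESSENTIAL_8.items() if not implemented]
print(f"Implemented {8 - len(gaps)} of 8 strategies")
for control in gaps:
    print(f"Gap: {control}")
```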

If your security posture is risk-based, pragmatic, and process- rather than project-driven, adding a few more tasks or re-ordering a few initiatives within your work programme should be straightforward.

When do I need to have done it?

With respect, you are asking the wrong question! The goal of establishing a layered defence to protect against and respond to threats does not have an end date. But if you want to know where to start, Shearwater are the experts who can help you avoid wasting time, effort and money. Engaging our expert team of advisors will allow you to plan at the strategic level whilst executing at the tactical level.

If you don’t know where or how to start with the Essential 8, Shearwater can assist. For expert help, please contact us.

Is Cyber Threat Intelligence worth investing in?


This blog article is part of a series: Part 1 | Part 2 | Part 3

In this blog article, I am seeking to address the question of whether CTI is worth investing in.

Many vendors of web proxies, SIEM solutions, IPS, firewall, UTM and email filtering technologies already provide a threat feed. The question that needs to be asked is how effective these feeds and blacklists are. Can they protect and block threats to your organisation? Can these threat feeds be positioned in the right place to stop threat agents/attackers from doing their dirty work? If you restrict your attention solely to the roughly 4 billion IP addresses within the IPv4 address range, it is estimated that more than 16 million are currently, or have been, put to use for malevolent means. Clearly there are challenges to keeping tabs on all these dubious IP addresses from which threats manifest. I’d challenge you to name more than a handful of organisations globally who have the inclination or capacity to keep track of what is happening within these Internet locations. Sure, vendors and the open source community are trying. However, vendors are somewhat blinkered by the user base they can draw on, and the security function they focus on. At the other extreme, open source offerings are always best effort and, in this space, regrettably slow to react. IP addresses are clearly only one part of the picture; when you include URLs, domain names, known bad hosts and payloads in the items needing to be managed, it is clear that automation and intelligence are required.

The problem with many mainstream, accepted security technologies is that they become less and less effective over time, require superior analytical skills to operate (skills that are hard to find), and can be somewhat reactive. These issues prompt security professionals and business managers to seek out better ways of working and more advanced technologies to increase effectiveness.

Is CTI any different to the traditional security vendors? Unfortunately, only partially. It certainly needs highly skilled people to operate, and it is likely to be less effective over time as hackers develop countermeasures to hide their tracks from specific CTI tool sets. The one ray of light is that CTI does try to avoid the old paradigm of waiting for something known to be bad to arrive and then blocking it. Cyber professionals are trying to get ahead of this preventative mindset and become agile with threat detection and response. Any approach that offers the potential of reaching out into the dark web, blending in, uncovering what is happening in real time and then giving you actionable intelligence, ideally coupled with workflow and automation, is a significant benefit.

The business problem that CTI attempts to solve is still dependent on skilled people. By investing in CTI, you may be able to uplift your internal capability, but to deliver real results you do need a team there to start with. If you do have a specialist team in place, CTI has the potential to act as a force multiplier and save you money. CTI is categorically not an appropriate or intelligent security investment for organisations that do not have adequate skills in place and are looking at new technology as a cure-all. There must also be clarity about what you are seeking to achieve from CTI. Without a clear vision of what it is that you wish to achieve, delivering results may be difficult. This vision may of course change over time as you start to leverage CTI and assess the benefits produced.

As with all security investments, context is all-important in evaluating new technologies. With the right prerequisites, CTI should appear on your investment radar. So, in summary, is CTI worth investing in? Conceptually, yes, provided you have the highly skilled people needed to make it effective. If you don’t have these people, then the answer becomes a very clear no. CTI should not be considered until you have an appropriate internal resource capability available, or a suitable managed service provider capable of bringing to bear the right skills, technology and business insight to effectively manage risk.

I will endeavour to round out this series with a third and final post focusing on what to look for in a Threat Intelligence solution.

What business problem does Cyber Threat Intelligence (promise to) solve?



This blog article is part of a series: Part 1 | Part 2 | Part 3

The cyber industry is certainly excited by CTI, and I don’t want to make any predictions on whether the excitement will blow over any time soon. The Threat Intelligence approach does provide some hope, yes hope, of lessening the really difficult issue of knowing what to trust and what not to trust on the Internet. Even slowing down malevolent Internet-based threats should be treated as a success. Is that the whole picture, though? What business problem does CTI solve?

I’m not planning to run through all of the potential impacts that stem from cybercriminals, hacktivists, nation states, malicious insiders and careless users, other than to say that recent history demonstrates that the impacts from these threat actors can be significant. In fact, they can put businesses out of business. The accessibility and prevalence of hacking tools, malware, bots, darknets and hacking services for hire should help to crystallise these risks.

So CTI provides the promise of:

  1. Prevention – by pre-emptively blocking attacks from hitting and hurting your organisation. Prevention is achieved through the ingestion of CTI feeds within existing security infrastructure such as firewalls, IPS and SIEM, and the configuration of automated responses based on pre-set rules (see the sketch after this list).
  2. Increasing visibility – of emerging threats that could be an issue now or in the future. Increased visibility can be delivered via simple manual searches conducted by an analyst within a CTI platform.
  3. Detection and reaction – to compromises that are happening now. Detection and reaction can be a combination of both methods, coupled with intervention or as part of an integrated incident response process.
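
As an illustration of that prevention path, the sketch below pulls a plain-text feed of known-bad IP addresses and turns it into a set ready to push to a firewall or SIEM blocklist. The feed URL is a placeholder, and a real deployment would add validation, expiry and allow-listing before anything is blocked automatically.

```python
# Sketch of the 'prevention' path: pull a plain-text feed of bad IPs and build a
# blocklist. The feed URL is a placeholder; add validation and expiry in practice.
import ipaddress
import urllib.request

FEED_URL = "https://feeds.example.com/bad-ips.txt"  # placeholder, not a real feed

def fetch_blocklist(url: str) -> set:
    with urllib.request.urlopen(url, timeout=30) as response:
        lines = response.read().decode("utf-8").splitlines()
    blocklist = set()
    for line in lines:
        candidate = line.strip()
        if not candidate or candidate.startswith("#"):
            continue
        try:
            blocklist.add(str(ipaddress.ip_address(candidate)))  # drop malformed entries
        except ValueError:
            continue
    return blocklist

if __name__ == "__main__":
    bad_ips = fetch_blocklist(FEED_URL)
    print(f"{len(bad_ips)} indicators ready to push to the firewall/SIEM blocklist")
```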

CTI can help to more fully inform the risk assessment process by providing real time actionable intelligence about the types of threats that are relevant to an organisation and the frequency and severity of these threats. Information on threat actors, frequency and severity of threats are vital inputs into the risk assessment process.

At a very high level, there are three broad categories of CTI available within the market at the moment. The differences could be the subject of a separate series of articles, so this high-level view is anything but comprehensive. The three broad categories are:

  1. Open Source CTI – provides some pretty handy threat intelligence data, but like all open source efforts, it relies on community involvement and may lack the necessary contextual information that makes CTI actionable for specific organisations and sectors. There may be a lot of noise to be sifted within the data to derive truly useful intelligence.
  2. Vendor Provided CTI – has the advantage of providing more contextual data. Many vendors have sharing arrangements in place and their own research and analysis teams that leverage these sharing arrangements and the open source feeds available. They also draw from their client community. You do need to be a little careful in selecting vendors, as some draw heavily from open source information only. The only real advantage that you get here is the convenience of not having to collect and sift available open source information yourself.
  3. CTI Vendor solutions – have the benefit of generally being the sole commercial focus of these CTI vendors. CTI vendors have their own research and analysis teams, leverage other feeds and often possess big data driven infrastructure to contextualise the intelligence. Such feeds can be very granular and can stem from application intelligence and social media. As a consequence, these vendors can provide flexible and highly customised CTI feeds to clients.

Additionally, CTI feeds can be produced by internal systems within an organisation, by government entities, or by independent groups such as the Internet Storm Center within SANS. Irrespective of whether you choose to deploy open source, vendor-bundled, or stand-alone commercial CTI vendor solutions, other benefits can be delivered by a CTI approach. One important potential improvement delivered by dedicated threat intelligence equipment (a CTI appliance) is the freeing up of other technology resources and traditional tools to operate more efficiently. Reducing the load on your existing security stack, in particular firewalls and IDS/IPS, can potentially extend the working life of your infrastructure and hence save money. For appliance-based CTI that sits in front of existing security infrastructure and identifies threats before they reach firewalls and IDS/IPS, configuration complexity and processing loads on those technologies can be reduced. Dynamic blocking does happen, but the reality is that people need to invest time in supporting CTI. Without smart people constantly tuning it, you run the risk of blocking legitimate traffic or wasting your money on the investment.

The promise of Threat Intelligence is that it will increase your agility of response, guiding your operational security decisions and optimising the efficiency of your existing security stack. The ultimate aim is to reduce the number of annual security incidents.

In the next blog in this series, I will discuss whether CTI is worth investing in.

What is Cyber Threat Intelligence? And when do you need it?


Cyber Threat Intelligence (CTI) appears to be one of the hot topics in information security at the moment. Almost every vendor, as well as the open source community, has their own unique take on what is, and what is not, important in the CTI arena. I have been asked a number of questions by clients and colleagues alike about CTI. Many questions focus on whether threat intelligence is worth investing in right now, or budgeting for. It is a good question, but to be honest I am probably the wrong person to ask. After close to twenty years in the information security industry, I am always a little sceptical of the next big thing, given the long line of next big things I have seen during my career. My scepticism is exacerbated when vendors claim that their method or technology is better or more robust than those of their competitors. My scepticism is magnified when vendors keep their approach secret or don’t provide any data or evidence to back up their claims. A good recent example is that of Norse Corporation, who had a rapid, well-publicised and complete unravelling when it was revealed that their secret CTI methods and products proved to be little more than highly polished marketing claims.

Perhaps a better question would be, ‘what business problem will CTI actually solve for me and my organisation?’ or ‘how long until CTI is mature enough to justify investment?’ or even, ‘What do I need to consider before investing?’

In this post series, I’ll be answering these three questions in turn:

  1. What business problem does CTI actually solve?
  2. Is CTI worth investing in now?
  3. What do I need to consider before investing?