Risk Management Process (The Complete Guide 2019)

 


In this blog, we shall review the underlying principle of cyber security – that of risk management.

 

A GENERAL VIEW OF RISK

Impacts, or consequences, are just one of the elements of risk management. The others are assets – the things we care about; threats – the people or events that might cause us harm; vulnerabilities – those things that weaken our defenses against cyber-attacks; and likelihood or probability – the chance that a threat will be successfully carried out.

 

However, there are two sides to the question of motivation – on one hand, there are attackers who have a strong motive for carrying out an attack, whilst on the other, there are script kiddies who happen upon an exploit and try it out to see what happens. When combined with a vulnerability, either situation can result in the likelihood being high.

 

Occasionally, people confuse ‘threats’ and ‘risks’. They may say that there is the risk of rain when they actually mean there is the threat of rain. The risk is that if it does rain, we might get wet as a result. As we shall see later, the difference is subtle, but important when it comes to risk management.

 

It is also not unusual for people to confuse probability and likelihood. As we shall see later in this blog, there is a considerable difference between them, the probability being an objective assessment with some form of statistical underpinning, and likelihood being subjective, based on emotions and gut feel.

 

There are so-called ‘inherent’ risks in many areas of cybersecurity, the main one being the possibility that despite all efforts to secure the organization, an attacker may still find a way of accessing a system and causing damage.

 

ASSETS


Assets in the wider sense can be almost anything, but in cybersecurity terms, assets include not only the data and information we may be trying to protect, but also the complete technical infrastructure – hardware, software, HVAC and premises.

 

Last, but by no means least, are the staff who have the technical knowledge and skills to design and implement the appropriate security measures, to maintain them and to respond to incidents.

 

Although I have drawn a distinction between data and information, for the purposes of this blog I treat the terms as interchangeable: both are assets that have value for their owners and must be equally protected, even though the owner of the original data and the owner of the resulting information may be entirely different entities.

 

VULNERABILITIES


Vulnerabilities are things that reduce the effectiveness of securing assets and come in two distinct varieties. 

 

Intrinsic vulnerabilities are inherent in the very nature of an asset, such as the ease of erasing information from magnetic media (whether accidental or deliberate), whereas extrinsic vulnerabilities arise from the way an asset is deployed or maintained, such as software that is out of date due to a lack of patching, or vulnerable due to poor coding practices.

 

Threats exploit vulnerabilities in order to cause an impact to an asset, whether it is copied or stolen (confidentiality), changed or damaged (integrity) or access to it is prevented (availability).

 

Vulnerabilities can exist without our knowledge. There may be security issues with an operating system or an application that a hacker has discovered but which are unknown to the software vendor – this type of vulnerability is called a zero-day vulnerability.

 

One of the biggest problems with this kind of vulnerability is that once it becomes known to the hacking community it will be ruthlessly exploited until a fix is developed – and more importantly, applied.

 

Once the software vendor announces the fix, knowledge of the vulnerability spreads even further, often resulting in increased attacks. An added danger is that individuals and organizations will fail to apply the fix, placing themselves at greater risk.

 

An interesting twist on the publication of known vulnerabilities is the situation in which attackers reverse engineer the vulnerabilities in order to design and build dedicated attack tools.

 

Other vulnerabilities are more obvious – the lack of antivirus software, which can allow malware to reach the target through email, or the lack of firewall protection, which can result in the same problems for internet access.

 

Disaffected staff can allow malware through the organization’s defenses either by reconfiguring them or by bypassing them completely, introducing malware on a USB stick for example.

 

Computers without passwords, with default passwords for operating system software and application software, and shared passwords present easy pickings for even the least experienced attacker.

 

QUALITATIVE AND QUANTITATIVE ASSESSMENTS


The problem we face in risk management is deciding which of the two types of measure to use – a subjective assessment of likelihood or an objective assessment of probability.

 

Where boundaries are provided for the levels, there will be a degree of uncertainty about the upper and lower limits of each, but in general the ranges should be sufficient to provide a fairly accurate assessment.

 

Clearly, these ranges will differ from one scenario to another, but they set a common frame of reference when there is a substantial number of assessments to be carried out.

 

THE RISK MANAGEMENT PROCESS

The generic process for managing risk is illustrated in the figure below. Since we are only taking a brief look at risk management, we will focus on context establishment, risk assessment and risk treatment, and omit the communication and consultation, and monitoring and review stages.

RISK MANAGEMENT PROCESS

Context establishment

If we look at just the basic components of risk as described above, we can certainly make some form of assessment, but unless this is placed within the context in which the organization operates, any judgment will have been taken in isolation.

 

The first stage of the risk management process then is to understand the context in which the organization operates – financial, commercial and political – so that the later steps take these into account when making decisions regarding how to treat the risks.

 

Risk assessment

This second stage of the risk management process is broken down into three distinct areas: risk identification, risk analysis, and risk evaluation.

 

Risk identification

Risk management begins by identifying the assets, deciding what value they have for the organization, and therefore what the impact would be if they were damaged or lost. All assets require a single, clearly identified owner who has overall responsibility for the asset, even if the asset is shared between a number of departments in an organization.

 

Some organizations allocate ownership of information assets to the IT department, but (unless it is an IT-specific asset) this is a mistake. The IT department can easily become the unwitting owner of many assets over which it has little or no influence, despite those assets being held on the IT department’s systems, and only the genuine owner of an asset is able to estimate its value to the organization.

 

Once we have established the assets, their ownership, and their value to the organization, we can move on to understand what might threaten these assets and what (if any) vulnerabilities the assets have, which provides us with a basis for deciding on the likelihood or probability.

 

There is an ongoing debate about which aspects of risk identification come in which order. Some people feel that it is easier to identify the impacts if they understand the threats first; others feel that threat assessment can come later. Whichever approach you favor, it is important that you assess:

  • the impact of the loss or degradation of assets;
  • the vulnerabilities that might contribute to this;
  • the threats those assets face;
  • the likelihood or probability that the threats will exploit the vulnerabilities and result in an impact.

 

When assessing the threats, we can make use of a number of models – one of these is referred to by the initial letters D.R.E.A.D. and asks five questions:

  • Damage – how bad would an attack be?
  • Reproducibility – how easy is it to reproduce the attack?
  • Exploitability – how much work is required to launch the attack?
  • Affected users – how many people will be impacted?
  • Discoverability – how easy is it to discover the threat?

 

Although rather subjective, the answer to each question is allocated a value (say between 1 for ‘low’ and 3 for ‘high’), and the sum of the five elements delivers the relative threat level. 
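
As a brief illustration (not part of the original text), the Python sketch below sums five hypothetical ratings on the 1-to-3 scale described above to give a relative threat level; the factor names and scores are illustrative only.

```python
# A minimal sketch of a D.R.E.A.D.-style threat score, assuming each of the
# five questions is rated 1 (low) to 3 (high) as described in the text.
DREAD_FACTORS = ("damage", "reproducibility", "exploitability",
                 "affected_users", "discoverability")

def dread_score(ratings):
    """Sum the five factor ratings to give a relative threat level (5-15)."""
    return sum(ratings[factor] for factor in DREAD_FACTORS)

# Example: easy to reproduce, but affecting relatively few users.
print(dread_score({
    "damage": 2,
    "reproducibility": 3,
    "exploitability": 2,
    "affected_users": 1,
    "discoverability": 2,
}))  # -> 10
```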

 

Impact and likelihood are the two key outputs of this part of the process, and there are two methods of deciding the level of them:

 

  • qualitative impact and likelihood assessment;
  • quantitative impact and likelihood assessment.

 

In the case of the qualitative assessment, the outputs are measured in general subjective terms, such as low, medium and high, whereas in quantitative assessment, objective numerical data is used – for example, financial values for impact and percentages for likelihood.

 

Each method has its own merits – qualitative assessment can be carried out quite quickly and does not require detailed research or investigation, whereas quantitative assessment can be time-consuming but will usually deliver more accurate results.

 

It is for the organization to decide whether such a high degree of accuracy adds value to the assessment exercise – if the resulting risk is very high, the problem will require urgent attention, regardless of whether the risk comes out at 90 percent or 95 percent.

 

As already mentioned, there is, however, a halfway house in which qualitative and quantitative assessments are combined in a ‘semi-quantitative’ assessment. In these, boundaries are set for the values.

 

For example, for impact assessments, ‘low’ might indicate a financial value between zero and one million pounds; ‘medium’ might indicate a financial value between one million and ten million pounds; and ‘high’ might indicate a financial value above ten million pounds.

 

Similarly, for likelihood assessments, ‘low’ might indicate a likelihood between zero and 35 percent; ‘medium’ might indicate a likelihood between 35 percent and 70 percent, and ‘high’ might indicate a likelihood above 70 percent. This provides a more meaningful assessment of risk, especially when presenting a business case to the board for approval.
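
To make the banding concrete, here is a minimal sketch (an addition for illustration) that maps a financial impact and a likelihood percentage onto the example ‘low’, ‘medium’ and ‘high’ bands above; the boundary values are the illustrative ones from the text, not fixed rules.

```python
# Map raw values onto the example semi-quantitative bands described above.
def impact_band(value_pounds):
    if value_pounds < 1_000_000:
        return "low"
    if value_pounds < 10_000_000:
        return "medium"
    return "high"

def likelihood_band(percent):
    if percent < 35:
        return "low"
    if percent < 70:
        return "medium"
    return "high"

print(impact_band(2_500_000), likelihood_band(80))  # -> medium high
```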

 

Risk analysis


Once we have conducted the initial risk identification, we then take the impact and likelihood and combine them in the form of a risk matrix, which will allow us to compare the risk levels.

 

The risk matrix is simply a pictorial representation of the relative levels of all the risks we have identified, which allows us to understand the order in which we wish to treat them, based on some form of priority.

 

Risk matrices most commonly consist of three, four or five ranges of values. Three is often considered to be too few to be meaningful, whilst five allows the possibility of too many results being in the middle. Four is sometimes thought to be a better choice since the assessor must choose some value either side of the middle ground.

 

In conjunction with others, the risk assessor will allocate a risk category to each part of the matrix, in order to assist prioritization. Alternatively, values can be assigned to each cell in the matrix, which enables grouping of risks. Risks measuring 1 to 5 might be graded as trivial; 6 to 10 might be minor; 11 to 15 might be moderate; 16 to 20 might be major; and 21 to 25 might be critical.
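
The sketch below (an addition for illustration, assuming the cell value is simply the product of 1-to-5 impact and likelihood scores) applies the example grading bands just described.

```python
# A minimal sketch of a 5x5 risk matrix cell value and the example grading
# bands from the text; the scoring scheme is illustrative, not prescriptive.
def risk_grade(impact, likelihood):
    score = impact * likelihood           # the value assigned to the matrix cell
    if score <= 5:
        return "trivial"
    if score <= 10:
        return "minor"
    if score <= 15:
        return "moderate"
    if score <= 20:
        return "major"
    return "critical"

print(risk_grade(impact=4, likelihood=5))  # 20 -> major
print(risk_grade(impact=5, likelihood=5))  # 25 -> critical
```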

 

Risk evaluation


Finally, we can decide how we are going to deal with the various risks, usually recording the results in a risk register.
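
By way of illustration only, a risk register entry might capture the elements discussed so far; the field names below are hypothetical rather than taken from any particular template.

```python
# A minimal sketch of one risk register entry, combining the asset, threat,
# vulnerability, scores, grade, chosen treatment and owner.
risk_register = [
    {
        "id": "R-001",
        "asset": "Customer database",
        "threat": "Ransomware delivered by phishing email",
        "vulnerability": "Unpatched mail server",
        "impact": 4,                 # scored 1-5
        "likelihood": 3,             # scored 1-5
        "grade": "moderate",         # taken from the risk matrix
        "treatment": "reduce",       # avoid / transfer / reduce / accept
        "owner": "Head of Customer Services",
        "review_date": "2019-12-01",
    },
]
```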

 

Risk avoidance or termination

In this method of risk treatment, we either stop doing whatever it is that has caused or might cause the risk, or if it is a planned activity, we simply avoid doing it. Whilst this will usually result in the risk being completely eliminated, it may cause the organization other problems.

 

For example, if an organization was planning to build a data center and the risk assessment indicated a high likelihood of flooding in the proposed location, the decision would almost certainly be to avoid the risk by abandoning that location and building elsewhere.

 

However, this might prove problematic, since alternative sites might be difficult to locate, be excessively costly or have other limiting factors. This would result in the organization reviewing all these risks against one another.

 

Risk sharing or transfer

If we find that we cannot avoid the risk, the organization may decide to share it with a third party.

 

This is usually in the form of insurance, but it is important to remember that even though the organization may let someone else share or take the risk, they still own the responsibility for it.

 

However, some insurance companies will refuse to insure certain types of risk, particularly when the full possible impact is unknown, and in such cases, the organization must find an alternative method of dealing with it.

 

Risk reduction or modification


Some people refer to this as risk treatment, although it is actually just one form of risk treatment. In this option, we do something that will reduce either the impact of the risk or its likelihood, which in turn may require that we reduce either the threat or the vulnerability where this is possible.

 

It is often the case that threats cannot be reduced – one cannot, for example, remove the threat of a criminal attempting to hack into an organization’s website, but it may in such cases be possible to reduce the likelihood by applying strict firewall rules or other countermeasures.

 

Risk acceptance or tolerance

The final option is to accept or tolerate the risk, especially if it has either a very low impact or likelihood. 

 

This is not to be confused with ignoring risk – never a sensible option – but is undertaken knowingly and objectively, and is reviewed at intervals or when a component of the risk changes, such as the asset value, the threat level or the vulnerability.

 

Risk acceptance is based largely on the organization’s attitude to risk, known as its risk appetite. 

 

Some organizations have a very low risk appetite – for example, pharmaceutical companies, which understand that failure to keep details of their products secure can mean enormous financial loss if they are stolen, or that patients could die if the manufacturing process is tampered with.

 

On the other hand, organizations like petrochemical companies will have a much higher risk appetite, investing vast sums of money in test drilling for oil reserves, knowing that some attempts will produce no useful results.

 

Residual risk

Whilst some forms of risk treatment will completely remove the risk, others will inevitably leave behind an amount of so-called ‘residual’ risk.

 

This residual risk is either not possible to treat, or, more frequently, too expensive when compared to the cost of the likely impact. Residual risk must be accepted by the organization and will require monitoring and regular reviews to ensure that it does not grow and become a treatable risk.

 

Risk treatment


Risk treatment is also sometimes referred to as risk mitigation, which is generally taken to mean a reduction in the exposure to risk (the impact or consequence) and/or the likelihood of its occurrence.

 

Once we have decided on the most appropriate method of treating risks, we move to the final stage of the risk management process – risk treatment and the use of controls or countermeasures to carry out our decisions.

 

There are four distinct types of controls:

  • detective controls, which allow us to know or be made aware when something has happened or is actually happening;
  • directive controls, which invoke some form of procedure that must be followed;
  • preventative controls, which stop something from happening;
  • corrective controls, which fix a problem after it has happened.

 

Directive and preventative controls are proactive in nature since they are carried out before an attack has occurred in order to reduce its impact or the likelihood of it occurring. 

 

Detective and corrective controls are reactive in nature since they take effect once an attack has actually happened. The four types of control are implemented in one of three ways:

 

Procedural controls, which dictate what actions must be taken in a particular situation. An example of a procedural control would be one in which users are required to change their system access passwords at regular intervals. Such controls might include the vetting of staff by the HR department.

 

Physical controls, which prevent some form of physical activity from taking place, such as fitting locks on computer room doors to prevent unauthorized entry.

Technical controls, which change the way in which some form of hardware or software operates, such as configuring firewall rules in a network.

 

Sometimes, the risk treatment options – avoid/terminate, transfer/share, reduce/modify and accept/tolerate – are referred to as strategic risk treatment controls;

 

the four types of control – detective, directive, preventative and corrective – can be referred to as tactical risk treatment options; and finally, the three methods of implementing the controls – procedural, physical and technical – are sometimes referred to as operational controls.

 

Although it is not strictly speaking an information risk topic, for many years, and for a variety of purposes, organizations have linked the risk management process with a system known as the Plan–Do–Check–Act (PDCA) cycle, otherwise known as the Deming cycle.

 

The PDCA cycle has been widely adopted as a basic reference framework in the cybersecurity, information security, information risk management, and business continuity management disciplines as well as many others. The four stages are described as follows:

 

Plan


In this stage, we establish the objectives and the processes necessary to deliver the required results. In the cybersecurity context, this equates to understanding the organization and its context.

 

Do

The next stage of the process implements the plan, initially as a means of testing that the plan has been successful. In the cybersecurity context, this equates to the implementation of the information risk management framework.

 

Check

In this stage, we examine the results we have achieved by either measurement or observation. In the cybersecurity context, this equates to testing, monitoring, and review of the framework.

 

Act

In the final stage, we put the validated plans into action when an incident occurs and bring lessons learned from incidents into revisions of the plan. In the cybersecurity context, this equates to a continual improvement of the framework.

 

Although the descriptions above relate to the wider area of risk management, in cybersecurity terms any of these methods can be used to treat risk, since cyber-attacks can exploit poor procedures, a lack of good physical security and poor technical security equally easily.

‘Just because the river is quiet does not mean the crocodiles have left.’

Malay proverb

In this blog, we will briefly examine the concepts of business continuity, which looks at the business as a whole, and disaster recovery (DR), which looks at just the IT infrastructure, and which usually forms a component part of an organization’s business continuity (BC) programme.

 

Although business continuity covers a much broader area than just cybersecurity, it is important to understand the underlying principles since it is a means of preparing for possible cybersecurity incidents. Likewise, disaster recovery again is not all about cyber security but can play a major part in recovering from cybersecurity incidents.

 

Both business continuity and disaster recovery have a proactive and a reactive element to their contribution to cybersecurity; the proactive side attempts to reduce the likelihood that a threat or hazard may cause a disruption, and the reactive side takes care of the recovery if one does occur.

 

Generally speaking, the longer a disruption lasts, the greater the impact on the organization, so it helps to clarify the type of disruption, its duration and impact, and how an organization manages the situation.

 

Glitches

These are extremely short occurrences, usually lasting just a few seconds at the most and are generally caused by brief interruptions in power or loss of radio or network signal. Activities usually return to normal following most glitches as equipment self-corrects automatically.

 

Events

Events normally last no more than a few minutes. As with glitches, the equipment they affect is frequently automatically self-correcting, but may on occasion require a degree of manual intervention.

 

Incidents

Incidents are usually viewed as lasting no more than a few hours. Unlike glitches and events, they require operational resolution, normally involving manual intervention that follows some form of process.

 

The methods of dealing with glitches, events, and incidents are mostly proactive in nature.

 

Crises


Crises can often last for several days. Although organizations may have plans, processes, and procedures to deal with them, and although operational staff will carry out any remedial actions, some degree of improvisation may be required.

 

Crises almost invariably require a higher layer of management to take control of the situation, make decisions and communicate with senior management and the media.

 

Disasters

Disasters frequently last for weeks. As with crises, operational staff will carry out remedial actions, although at this stage a degree of ad hoc action may be necessary. A higher management layer will control activities, but the senior management layer will take overall charge of the situation.

 

Catastrophes

Catastrophes are the most serious level, often lasting for months, or in some cases for years.

 

Their scale tends to affect many communities, and so although individual organizations may be operating their own recovery plans, it is likely that local, regional or even national government will oversee the situation and that a complete rebuilding of the infrastructure may be required.

 

Despite any proactive planning or activities to lessen their impact or likelihood, crises, disasters, and catastrophes all require significant reactive activity, and each will demand an increasing amount of incident management capability.

 

It is important for organizations to understand that the more time spent in proactive work, the less time will generally be required in reactive work following a cyber-attack.

 

Business continuity and disaster recovery share the same fundamental Plan–Do–Check–Act cycle, in which, during the ‘Plan’ stage, we carry out the risk assessment (risk identification, risk analysis and risk evaluation);

 

in the ‘Do’ stage, we implement the risk treatment options and assemble the plans; in the ‘Check’ stage, we verify that the plans are fit for purpose by testing and exercising; and finally in the ‘Act’ stage, we put the plans into practice when a disruptive incident occurs.

 

BUSINESS CONTINUITY


Putting business continuity into practice is strongly linked to the process of risk management, in which we identify the organization’s assets, owners and impacts;

 

assess the likelihood of risks happening, and combine the two to provide a perceived level of risk. From this, we are able to propose strategic, tactical and operational controls, one of the main components of which will be the business continuity plan (BCP) itself.

 

The plan should include the actions that will cause it to be triggered; who (or which departments) will be responsible for what actions;

 

how they will be contacted; what actions they will take; how, where and when they will communicate with senior management and other stakeholders; and finally, how they will determine when business has resumed to a pre-determined level of normality.

 

The plan itself may not always contain detailed instructions, as these may change at intervals, but they should be referred to in the plan. 

 

Although cybersecurity covers only a part of the overall business continuity process, there are certain aspects, especially with regard to the ongoing availability of information and resources, that are very much an integral part of cybersecurity.

 

The most obvious of these is that of disaster recovery of information and communications technology (ICT) systems, in which the systems that are likely to be impacted require some form of duplication in order to permit short-term or even immediate recovery.

 

Business continuity is often referred to as a journey rather than a destination. It looks at the organization as a whole as opposed to just the information technology aspects.

 

However, that said, the generic business continuity process applies extremely well to cybersecurity and can be used to help an organization to place itself in a very strong position.

 

The Business Continuity Institute (BCI) describes business continuity as ‘The capability of the organization to continue delivery of its products or services at acceptable pre-defined levels following a disruptive incident.’

 

It provides excellent guidance on the entire process, and its latest Good Practice Guidelines (2013 version) can be purchased for around £24 for a downloaded version or £30 for a printed copy. BCI members enjoy a £10 discount on the printed version, whilst the download version is free of charge to them.

 

Over several years, the BCI has developed a business continuity management lifecycle, having six distinct areas. It is basically a variation on the theme of risk management:

 

1. Business continuity policy and programme management, in which the overall organization’s business strategy is used to develop a programme of work, each component of which is then managed as a project.

 

2. Embedding business continuity into the organization’s culture, which includes training, education, and awareness.

 

3. Analysis, which is all about understanding the organization, its priorities and objectives, its assets, the potential impacts, the threats or hazards and the vulnerabilities it faces.

 

From this, a risk assessment can be undertaken, and the key metrics such as the recovery time objective (RTO), the maximum acceptable outage (MAO) and the maximum tolerable data loss (MTDL) can be derived; a brief sketch of these metrics follows after this list.

 

4. Determining the business continuity management strategy and designing the approach to deliver this can now take place, based on the metrics arrived at in the analysis stage, and decisions can be made regarding what proactive measures should be put in place; how response to an incident will be organized;

 

and how the organization will recover to normal operational levels or to a new, revised level of normality.

 

5. Implementing the business continuity response will require the efforts of people in various parts of the organization to put in place the proactive and reactive measures agreed in the previous stage.

 

6. Validation, which includes exercising, maintaining and reviewing, is a separate activity to embedding the business continuity culture into the organization, since it deals with the inclusion of people who may already have been involved in the previous stages, and who need no introduction to the subject;

 

rather they need to be able to exercise the various response and recovery plans, validate them and fine-tune them where necessary.
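
As mentioned under the analysis stage above, the key metrics might be recorded per business activity along the lines of the sketch below (an addition for illustration); the field names and the RTO-versus-MAO check reflect common practice rather than a specific standard’s wording.

```python
# A minimal sketch of continuity targets for one business activity.
from dataclasses import dataclass

@dataclass
class ContinuityTargets:
    activity: str
    rto_hours: float      # recovery time objective: target time to recover
    mao_hours: float      # maximum acceptable outage before impact becomes unacceptable
    mtdl_hours: float     # maximum tolerable data loss, expressed here as a period of time

    def is_consistent(self) -> bool:
        # Recovery should be achievable before the outage becomes unacceptable.
        return self.rto_hours <= self.mao_hours

targets = ContinuityTargets("Online ordering", rto_hours=4, mao_hours=8, mtdl_hours=1)
print(targets.is_consistent())   # -> True
```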

 

If the organization is well prepared, all six stages of the lifecycle should have been completed before an incident occurs.

 

The first actions will be to respond to the incident itself, bringing together the incident management team, gaining an understanding of the situation and agreeing which aspects of the plan are to be implemented.

 

At this time, it will also be important to consider preparing some form of a statement that can be given to the media, customers, and suppliers so that their expectations are managed.

 

Next, the processes and procedures that have been developed (which may include disaster recovery mechanisms) will be brought into action, depending upon the nature of the situation.

 

Finally, once the situation has been resolved, the business can be returned to normal or, if the impacts have been considerable, to a new level of normality. The international standard ISO 22301:2012 – Societal security – Business continuity management systems – Requirements covers all aspects of business continuity.

 

DISASTER RECOVERY


One of the main features of a business continuity plan is in providing the availability determined by the analysis stage of the business continuity process. Disaster recovery is perhaps a misnomer since it implies that systems, applications and services have failed catastrophically and need to be brought back online.

 

Whilst this might be the case for some services, it is not true for all, since an element of proactive work can (and usually should) be carried out, and it may be the case that just one component in the service has failed, but that this requires a disaster recovery process to be invoked.

 

As with any business continuity work, there are both proactive and reactive sides to disaster recovery, and since there are no ‘one size fits all’ solutions, we’ll discuss some of the options in general terms.

 

Standby systems

Conventionally, there are three basic types of standby system – cold, warm and hot – although there are variants within these. Most well-designed standby operations will ensure that there is an effective physical separation between the ‘active’ and ‘standby’ systems since the loss of a data center or computer room containing both systems would clearly result in no recovery capability.

 

Traditionally, organizations work on the basis that a minimum separation of 30 km is sufficient to guarantee that a major incident affecting one data center will not affect the other.

 

Systems, as we refer to them here, can mean any system that is involved in providing the organization’s service and can include web servers at the front end of the operation as well as back-end servers and support systems and essential parts of the interconnecting networks.

 

Cold standby systems frequently make use of hardware platforms that are shared by a number of organizations. They may have power applied, and may also have an operating system loaded, but they are unlikely to have much, if any, user application software installed, since each organization’s requirements will be subtly different. There will also be no data loaded.

 

This is the least effective method of restoration, since it may take a significant amount of time and effort to load the operating system (if not already done), to load and configure the user applications and to restore the data from backup media.

 

It will, however, invariably be the lowest cost solution for those organizations who are able to tolerate a longer RTO.

 

Another disadvantage of cold standby systems is that if they are shared with other organizations, there may be a conflict of resources if more than one organization declares an incident at, or around, the same time.

 

An example of this was the situation on 11 September 2001, when the attacks on the World Trade Center in New York took place. Most organizations had disaster recovery plans, but a number of them relied on the same providers, which completely overwhelmed their capabilities.

 

Warm standby systems will generally be pre-loaded with operating systems, some or all user applications, and also data up to a certain backup point. This means that the main task is to bring the data fully up to date, which greatly reduces the restoration time required.

 

Warm standby systems are invariably costlier to provide than cold standby, and it is common practice for organizations to use one warm standby system to provide restoration capability for a number of similar systems where this provides an economy of scale.

 

Additionally, those organizations that regularly update their application software may make use of their warm standby systems as training, development and testing platforms before a new or updated application is taken into live service.

 

Hot standby systems come in several flavors, but increasingly, and especially where no outage time can be tolerated at all, high availability systems are becoming the norm.

 

A basic hot standby system will be as similar as possible in design to a warm standby system, except that the data will be fully up to date, requiring a real-time connection between the active and standby systems.

 

Two slightly different methods of synchronizing the systems are in common use – the first (and faster) method is known as asynchronous working, in which the active system simply transmits data to the standby, but continues processing without waiting for confirmation that the data has been written to disk.

 

The second, slightly slower (but more reliable) method is known as synchronous working, in which the active system transmits data to the standby and waits for confirmation that the data has been written to disk before it continues processing.

 

In the first method, there is always the possibility that some data will not be received by the standby system, and in cases where nothing less than 100 percent reliability is required (for example in financial transactions), this will not be sufficiently robust.

 

In the second method, there will always be a slight time lag between transactions, since this method will provide 100 percent reliability at the expense of speed. It will also be costlier to implement since very fast transmission circuits will be required – usually point-to-point optical fiber.
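
The difference between the two modes can be sketched in a few lines of Python (an addition for illustration); the classes below are stand-ins, since real systems replicate at the storage or database layer rather than in application code.

```python
class Standby:
    """A stand-in for a remote standby system."""
    def __init__(self):
        self.blocks = []

    def replicate(self, data):
        self.blocks.append(data)
        return True                      # acknowledgement that the data is stored


def write_asynchronous(data, local, standby):
    local.append(data)
    standby.replicate(data)              # fire and forget: no wait for the ack,
    return "committed"                   # so a failure here could lose this block


def write_synchronous(data, local, standby):
    local.append(data)
    if not standby.replicate(data):      # wait for confirmation before continuing
        raise IOError("standby did not confirm the write")
    return "committed"                   # both copies now hold the data


local_disk, standby = [], Standby()
print(write_synchronous(b"txn-001", local_disk, standby))   # -> committed
```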

 

Networks and communications


Whilst the emphasis tends to be on the recovery of key systems, organizations should not overlook the networks and communications technology that support them. Wherever possible, key elements of the communications network should be duplicated so that the failure of one does not cause a total loss of connectivity.

 

Many organizations now use two different transmission providers to ensure that if one has a major network failure, the other should still be able to provide service.

 

This will, of course, depend on whether one is acting as a carrier for the other, in which case a failure of the main provider’s network could result in the other losing service as well.

 

Larger organizations make use of load balancing systems to ensure that during peak demand on their websites they are able to spread the load across a number of servers, and many also duplicate their firewall infrastructure as added insurance.

 

Separacy is also a wise consideration – the scenario in which a road repair takes out an organization’s communications is all too familiar, and by providing diverse communications cables on routes separated by 30 m or more and using entry points on opposite sides of a building, the likelihood of failure is much reduced.

 

Naturally, all this costs money, but when compared with the potential losses that would be incurred in the event of a total infrastructure failure, it is a vital form of insurance – and one that can reduce the cost of revenue loss insurance premiums.

 

Power


Power is at the heart of everything. Without it, the systems and networks cannot run, and business would grind very quickly to a halt. Those organizations that suffer regular power outages will probably already have invested in a standby generator or at least an uninterruptible power supply (UPS) system that will continue to deliver sufficient power for a defined period of time.

 

More frequently nowadays, the two are combined, so that a UPS system will continue to deliver power and remove any power spikes from the supply, after which the standby generator will cut in and deliver power as long as the fuel supply lasts.

 

The international standard ISO/IEC 27031:2011 – Information technology – Security techniques – Guidelines for information and communication technology readiness for business continuity covers many aspects of disaster recovery.

 

Fire prevention and smoke detection


Whilst this may not immediately appear to be a cybersecurity issue, a fire could affect an organization’s ability to deliver service. No computer room or data center would be complete without smoke detection systems and fire prevention facilities.

 

Systems such as Very Early Smoke Detection Apparatus (VESDA) can identify the release of smoke (and therefore the possibility of fire) before it takes hold and causes real problems.

 

The system works by sucking air from the area through pipes and sampling the quality of air passing through a laser detection chamber.

 

If the quality falls below acceptable levels, a response can be triggered, and this is often as a result of detection by more than one detector. The extinguishing chemical, normally nowadays an inert gas called Inergen, is discharged to the affected area.

 

An interesting example of a problem in this area was highlighted in September 2016, when ING tested the system in their data center in Bucharest.

 

The gas discharge produced sound levels in excess of 130 decibels, which caused excessive vibration and head crashes in disk drives. The entire data center was out of action for an extended period of time, leaving ING’s customers without access.

 

In this blog, we examine steps that can be taken both by individuals and corporate users to improve their cybersecurity.

 

It provides details of the general steps that can be taken by any user – technical or non-technical – and then covers those steps that are of a rather more technical nature. Finally, the blog includes a section on mobile working.

 

As discussed earlier, the response to cyber issues comes in two distinct areas. The first area is that of proactive response, in which we try to either lessen the likelihood of the event happening or if we cannot do this, lessen its impact.

 

The other area is a reactive response, which will include the disaster recovery capabilities described in the previous blog, as well as the hands-on work of changing system configurations to apply corrective controls once a cybersecurity incident has been detected. Either method should reduce the risk, but we may have to accept that there may be some residual loss or damage.

 

When we leave our house, we take care to lock the doors and windows. This might not prevent a burglar from gaining entry, but it does make his job more difficult. Unless the burglar is specifically targeting us, there is a definite chance that he will go elsewhere and try to enter someone else’s property.

 

It is very much the same with cybersecurity. If a determined attacker is sufficiently well motivated, skilled and equipped, he will almost certainly eventually succeed in gaining access to our data.

 

However, financial constraints might make it difficult or impossible to repel him, so the emphasis should not be on making 100 percent sure he is unable to achieve this, since that is an unrealistic expectation. Rather, we should try to make the attacker’s job so difficult that he goes elsewhere.

 

Most of the actions we can take in the world of cybersecurity tend to fall under the third of the strategic risk treatment options described earlier – risk modification or reduction – and it is this area that we shall focus on most. At the next level, there are four general directions we can take. Three of these are proactive in nature:

  • detective, in which we put something in place to detect that an attack is in progress, such as IDSs or antivirus software (which will also react to malware it has detected);
  • preventative, in which we put additional facilities in place in an attempt to stop an attack from being successful, such as firewalls;
  • directive, in which we set out policies, processes, and procedures that people must follow in order to reduce the risk, such as password policies.

 

The fourth direction – corrective action – is reactive in nature.

 

Finally, we reach the point at which we can examine the actual actions known as controls or countermeasures that we can take. There are three options, which we shall examine in greater depth:

  • physical controls, such as access control systems, which prevent intruders from gaining access to equipment or its environment in order to launch a cyber-attack or otherwise cause damage;

 

  • technical controls, such as firewalls, which directly address the security of systems and software that hold our information;
  • procedural controls, which tell people both what not to do and also what they must do before, during or following an attack, and, as mentioned earlier, may include vetting of staff by the HR department.

 

A number of documents providing sound cybersecurity advice are available, and would be especially valuable to SMEs:

The NCSC publishes a number of cybersecurity-related advice documents, including the UK government’s ‘10 Steps to Cyber Security’, ‘Common Cyber Attacks: Reducing the Impact’ and ‘10 Steps: Board Level Responsibility’.

 

For those looking for more specific detail, there are more than 200 additional documents published, dealing with all aspects of cybersecurity.

 

It is worth making a brief examination of the SANS Institute Sliding Scale for Cyber Security, which provides general guidance starting from a proactive position and potentially moving to a highly reactive one.

 

At the proactive level, the scheme begins with security designed and planned into the organization’s information architecture, based on the business objectives.

 

This is often the most difficult to achieve since the security aspects of many systems’ hardware and software are outside our control. This represents both preventative and directive action.

 

It continues proactively, with passive defense, in which additional technology is added to the underlying infrastructure to provide protection against cyber-attacks without the need for human intervention. This represents both preventative and detective action.

 

From this point, we move into the reactive sphere, beginning with active defense, in which security teams respond to events that cannot be completely controlled by passive defense means.

 

This may include gaining a full understanding of the target, the method of attack, and even, if possible, the identity of the attacker. This represents corrective action.

 

An example of this might be the case in which an organization finds itself under a massive DDoS attack. One of the defense mechanisms taken in conjunction with the ISP is to move the company’s internet presence to a different connection and IP address, and the ISP then points the DDoS attack into a ‘sink’ or black hole.

 

Next, we move into the area of intelligence, in which we use the attacker’s identity to discover more detail about them, their motivations, means, and methods, which may enable us to prevent further similar attacks.

 

This part of the process will require tools to capture information about the attacker, and also a means of analyzing this information to produce viable intelligence.

 

However, this could be outside the scope of most organizations, and this sort of investigation could well be undertaken by an outside company offering specialist InfoSec skills assisting the attacked company to restore their service.

 

There are a number of models that enable this work; one such is the Diamond Model of Intrusion Analysis; however, it is rather detailed and falls outside the scope of this blog, so a link is provided in the notes for you to explore if you wish to do so.

 

Finally, we arrive at the reactive point of offense – fighting back. This course of action is not recommended, since it could be fraught with danger, and could constitute a cyber-attack in its own right.

 

Individuals and businesses should be discouraged from any form of retaliation – it’s much more sensible to respond by alerting the appropriate authorities where possible and leaving offensive retaliation to security services and where applicable, military agencies.

 

Physical security


It would appear at first sight that since we’re dealing with cybersecurity, physical security actions might not feature strongly.

 

Whilst there is an element of truth in this, we should not overlook the fact that if an attacker can gain physical access to a key computer system, he can probably achieve anything he wishes just by connecting a USB stick with key-logger software or inserting a CD or DVD loaded with malware or the data of a fake website.

 

Restricting physical access to business-critical systems should always be the first step in any proactive activities. Not only does this mean keeping the bad guys out of the computer room, but also everyday users unless they have a very specific requirement to be there.

 

Access to controlled areas should be the exception rather than the rule, and all permissions for access should be subject to a formal procedure and should be reviewed at regular intervals.

 

It is good practice to ensure that any visitor to a computer or network equipment room should be accompanied by a trusted member of staff, preferably one of the organization's systems administrators. It should also be noted that cleaners are not exempt from this policy.

 

Some simple steps that will make a difference include:

  • Lock electronic devices (smartphones, tablet computers, and laptops) away somewhere secure when you have to leave them; never leave them unattended in a public place.
  • Keep them hidden from view when traveling, especially in crowded places like railway stations and airports.

 

If you’re concerned about your computer camera being accessed by someone, a very simple solution is to place a sticky note over the top of it. If you use a lockable steel security cable to secure a device, make sure that it is fastened to something that cannot easily be removed, and make sure you keep the key with you when you leave.

 

Individual user steps whilst surfing the web


Users should resist the temptation to install or download unknown or unsolicited applications or programs unless they are confident that they are secure and free from malware.

 

In a corporate environment, no privileged user should use an administrative account for downloading unauthorized software. Their day-to-day user account should not have the level of privilege required to do so.

 

When visiting a new website, users should avoid clicking on links to other pages unless they are sure they are valid. Some websites shorten the URL, so the final address is hidden. Let the mouse pointer hover over the link before you click to show the link’s real address.
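
For shortened links from a source you otherwise trust, you can also reveal the final destination before opening it in a browser; the sketch below (an addition for illustration) uses the third-party requests library and a hypothetical URL. Note that this still sends a request to the link’s servers, so it is only appropriate for links you have some reason to trust.

```python
import requests

# Follow the redirect chain of a shortened link and print the final address.
response = requests.head("https://short.example/abc123", allow_redirects=True)
print(response.url)
```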

 

Cookies are essential to many internet activities such as online shopping, but many are irritating and some are harmful by invading our privacy. Users should periodically edit the cookie list and clean out any that are not needed.

 

The Onion Router (TOR) is a browser system that protects users by routing internet traffic through a network of relays run by volunteers all across the globe. It prevents one’s internet activities from being observed and prevents the sites we visit from identifying our physical location.

 

TOR should not be used in a corporate environment, since it is well known for subverting end-user security controls, such as anti-malware products.

 

Online forms frequently ask for information they really don’t require. If you think the question is unnecessary or intrusive, give an answer such as ‘not relevant’. If you don’t think they really need your telephone number, for example, type in something like 01234 000000.

 

Users should always delete browser history on public computers. This prevents the next user discovering personal information they may have inadvertently left behind. It’s also a good idea to periodically delete it on home computers as well since it can eat up valuable disk space.

 

Users should delete temporary internet files on home computers occasionally, and every time after using a public computer, for example in a library or internet café. This is usually achieved through the privacy or security settings in the browser’s preferences, since different browsers store these files in different locations.

 

They take up considerable space on the hard disk, and rarely serve any useful purpose. As with browser history, they can also be used to track one’s web surfing experiences.

 

Internet passwords should be treated in exactly the same way as ordinary system passwords. See the section on user passwords later in this blog.

 

Social engineering


One method by which attackers will attempt to break into a network or system is to use their social engineering skills to talk their way around the organization’s security defenses.

 

Never provide cold callers with your credentials.

If you receive spam text messages on your mobile phone, report these to your network provider. Use the number 7726, which spells SPAM on the keyboard of non-smartphones.

 

Unless you are confident of the originator of a text message that includes ‘Text STOP to unsubscribe’ or similar, never do so, since this may simply be a ruse to discover whether there is a real person behind the number as opposed to a system of some kind. Resist the temptation to reply ‘Go away’ or words to that effect.

 

Email


Once an attacker has acquired (or guessed) your email address, they may send offers of apparently attractive goods or services to tempt you into clicking on a link, which is almost certainly going to cause you problems.

 

At best, it will connect you to a website that offers fake goods; at worst, it will download malware onto your device that will be used to extract further information such as banking details, passwords and so on.

 

If an email looks suspicious, delete it without opening it. To do this, in most email applications you can usually right-click on the message and consign it to the waste bin with no risk at all.

 

Never respond to emails that invite you to enter your credentials such as bank account number and PIN or password. Banks and credit card companies will never ask you to do this, and even if the email appears to be from your own bank, it may well be a scam. It is a sensible idea to check any such emails against one that is known to be legitimate.

 

However, spammers are becoming increasingly professional, and it is often difficult to discern spam from the real thing. If in doubt, allow the mouse to hover over the URL, and check that this has not been obfuscated.

 

Phishing attacks often originate from respectable-looking emails purporting to originate from a reputable financial institution requesting that the user verifies their online identity.

 

These are invariably scams and will take the unsuspecting user to a fake website that is to all intents and purposes an identical copy of the real one. It is essential not to respond to these, and it can be helpful to notify the real institution whose genuine website is being abused.

 

Spam email is a blight. Fortunately, many email providers now have highly tuned filters that detect and delete spam without the user even being aware of it. If spam email does make it through their filter, it may (with luck) wind up in a spam email folder in your mail application, making it simple to identify and delete.

 

Do so. Do not be tempted to reply, since this will merely let the originator know that they have found a working email address, and you may end up receiving even more.

 

Consider using encrypted email to send sensitive information over the internet. We deal with this in greater detail in the section about encryption later on. For organizations with their own email servers, there is the option of turning on ‘opportunistic encryption’, described in RFC 7435: ‘Opportunistic Security: Some Protection Most of the Time’.

 

Backup and restore


It is incredibly easy to accidentally delete something important, but it is just as easy to make sure you don’t. 

 

It is not recommended that you back up your files to the same hard disk drive that the operating system is installed on, so buy a reliable backup disk drive and make use of the inbuilt software in Microsoft Windows and Apple Mac operating systems.

 

As an alternative to a hard disk drive you may consider backing up data to DVD, Blu-ray or memory stick, but always encrypt the data on your backup device. 

 

Always store the media used for backups in a secure location to prevent unauthorized access to your data. A fireproof safe is an ideal storage solution, but keep it separate from your computer.
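
As a simple illustration (the inbuilt tools mentioned above, or dedicated backup software, are usually the better choice), a dated copy of a working folder can be made to a separate drive as sketched below; the paths are illustrative, and encryption of the copy should be added as discussed in the encryption section later.

```python
import shutil
from datetime import date
from pathlib import Path

# Copy the Documents folder to a dated folder on a separate backup drive.
source = Path.home() / "Documents"
destination = Path("/Volumes/BackupDrive") / f"documents-{date.today():%Y%m%d}"

shutil.copytree(source, destination)      # fails if the destination already exists
print(f"Backed up {source} to {destination}")
```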

 

Pirated software

The best advice for pirated material, including films, music, and software is just don’t download it! You don’t know that the material is malware-free, and in any case, much of it is actually illegal, since it usually represents the theft of intellectual property.

 

For a business, legal liability if pirated or illicit material is found on one of its computers lies with the business owner, and not with the user of the computer. If you discover pirated material, the copyright owner may be interested in hearing about it, and in the case of software, the Federation Against Software Theft (FAST) may take an active interest.

 

Personal information

Keeping your personal information secure is one of the main objectives of cybersecurity. If the information is extremely sensitive, consider whether you should be keeping it on a device in the first place. If the answer is ‘yes’, then consider encrypting it.

 

Be extremely careful what information you share, and with whom you share it. Consider where the information might be stored, and where it might end up if the person or organization to whom you are giving it is not as careful about security as you are.

 

File sharing


Many people and organizations now make use of cloud-based services to share information with friends, family, and colleagues.

 

All this is absolutely fine, provided that you have legitimate reasons for sharing information and it does not infringe someone else’s copyright. However, there is increasing use of file sharing mechanisms to distribute material illegally.

 

Corporate staff who access personal cloud-based file sharing services from the workplace pose an additional threat of the possible exfiltration of corporate data/information.

 

Films, audio recordings, blogs, and other material are hosted or ‘seeded’ by individual sharers. The user acquiring the information obtains a ‘torrent’ file from a file sharing service and runs this within file sharing download software.

 

The software links to the individual seed computers and downloads small portions of the file, linking them all together.

 

Only share information with family, friends, and colleagues if it is not someone else's copyright, or if you have the copyright owner's express permission to do so. If you use a file sharing service (such as Dropbox, Amazon Cloud or Microsoft OneDrive), consider encrypting the information, especially if it is in any way sensitive.

 

Social networks


The use of social networks has increased dramatically in recent years. Facebook, Twitter, Flickr, LinkedIn, and Instagram are just a few examples of the most widely used social networking sites. Whilst the idea behind these is to share information between friends, family, and colleagues, there are significant dangers in making use of them.

 

First, you may not know who is reading them if you have not correctly set your access preferences (which may be difficult to identify). Many organizations now examine the social networking site pages of job applicants before deciding whether to invite them for an interview.

 

Second, you do not necessarily know what other people may be posting about you – that embarrassing photograph taken on a recent night out may have been purely in jest but could reveal some aspect of your character that you would prefer to keep to yourself.

 

Third, you do not necessarily know the impact of something you have posted about someone else.

  • Be careful what you post on any social media networking site. It might come back to bite you later on!
  • Be very careful about who you accept as a ‘friend’.
  • Always ensure your information sharing preferences are set to the most appropriate level.

 

‘Free’ USB sticks

Anyone attending a conference these days will probably receive a free USB memory stick containing the presentations and usually some form of advertising or marketing material provided by organizers and sponsors.

 

Most of this is harmless, but there exists the possibility that the memory stick may also contain malware, and it is sound practice to run this through a virus scanner on a stand-alone computer before attempting to make further use of it.

 

It is well worth remembering the phrase ‘There is no such thing as a free lunch’!

 

A scam sometimes used by the hacking community is to load malware onto a USB memory stick – often a high capacity one – and leave it where their target will be likely to find it.

 

Once plugged into the target’s computer, the malware will install itself without the user’s knowledge, and (if the attacker has done his job well) will then delete itself from the memory stick leaving no trace. The malware can then commence its task.

 

Always test a ‘free’ USB memory stick on a stand-alone computer before plugging it into any other. Never use a memory stick you find lying around – it may well be a trap.

 

Banking applications


Banks are increasingly trying to persuade us to use their online banking applications, both from fixed computers and from mobile phones and tablets. The reason is simple – it saves them money.

 

Fortunately, the applications the banks provide and their web interfaces have been thoroughly tested and appear very robust. Back in 2014 it was a very different story, with vulnerabilities found especially in the mobile applications, so we should still remain vigilant.

 

Remember to keep your banking details secure.

Log out of the banking application when you have finished your transactions. If using a public computer, clear the cookies, browser history, and temporary internet files. Be aware of people ‘shoulder surfing’ who may be able to see what you type or what is displayed on the screen.

 

TECHNICAL SECURITY ADVICE


There are many activities covered by technical security, so I have tried to break these down into a few distinct areas.

 

Device locking

Physical locks are fine for preventing theft, but they do not stop someone using the device where it sits. The device should therefore be protected with a password, and a password-protected screensaver should cut in after a suitable interval once the device is left unattended.

 

Further protection can be provided by setting the device to delete its data after a number of incorrect password attempts, but this must take into consideration the need for all the data to be backed up.

 

Encryption


One relatively simple step to prevent unauthorized access to information on a computer, CD/DVD or USB memory stick is to use encryption. There are two distinct methods of achieving this:

 

File encryption – in cases where one or two files are of a confidential nature, it is easy to encrypt the individual files and provide the encryption key securely to those who should have access.

 

Drive encryption – in cases where there are multiple files that require protection, or where access to the computer’s operating system or applications could constitute a significant threat, the entire drive can be encrypted.

 

When the user switches on the machine, a boot-level password must be entered before the computer will even begin to start up.
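As a brief illustration of the file-level approach, the sketch below uses Python with the third-party ‘cryptography’ package (an assumption on my part – any reputable encryption tool does the same job) to encrypt and decrypt a single file with a shared key; the file names are purely illustrative.

```python
# A minimal sketch of file-level encryption, assuming the third-party
# "cryptography" package is installed; file names are illustrative.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # store this key securely and share it only
fernet = Fernet(key)             # with those who should have access

with open("report.docx", "rb") as f:           # read the original file
    ciphertext = fernet.encrypt(f.read())      # encrypt its contents

with open("report.docx.enc", "wb") as f:       # write the encrypted copy
    f.write(ciphertext)

with open("report.docx.enc", "rb") as f:       # decryption needs the same key
    plaintext = fernet.decrypt(f.read())
```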

 

Operating systems and applications


Every computer has a specific operating system, whether it be Linux, Windows or Mac OS X, or indeed a proprietary operating system used by more specialized computer hardware.

 

New or replacement operating systems should only ever be purchased or acquired through a reputable supplier – normally Microsoft and Apple for their operating systems, and a variety of trusted suppliers for Linux.

 

Once installed, it is essential to ensure that these operating systems are kept up to date, and the suppliers will usually provide a free online updating system to allow this to happen – provided of course that the facility has been enabled.

 

The same is true for key applications – for example, computers that run Microsoft Office applications can receive updates at the same time as the Windows operating system updates, and Microsoft Office applications that run on Mac OS X can check automatically for updates.

 

Regular updates contain not only fixes for problems but also from time to time introduce new features. In these cases, larger organizations should always test an updated operating system or application in a sterile environment before introducing it to the user community to ensure that it does not cause any conflict with existing corporate services.

 

Antivirus software should be installed – especially on Windows PCs, which are the most prone to virus attacks, but also on Apple Mac computers, which although considerably less susceptible, are still at risk from malware.

 

Some security specialists claim that antivirus software will only catch around 5 percent of viruses, but it is always wise to have it installed since failure to do so could still result in a successful attack.

 

It is also essential to install regular antivirus updates – most antivirus software will do this automatically – and to perform regular scans of the computer in case a virus was already present on the machine before the antivirus software was brought completely up to date. Ensure that operating systems and key applications are kept fully up to date.

 

Enable automatic updates if at all possible.

Keep antivirus threat databases updated. Even though this doesn’t guarantee 100 percent protection, a good antivirus system will catch the main viruses.

 

User Account Control (UAC)

In recent years, Microsoft Windows introduced the concept of User Account Control or UAC. This facility prevents users with non-administrative privileges from installing software.

 

If several people share the use of a single computer, make sure that all their user accounts are non-administrative, and retain just one master administrative account that is only ever used when required.

 

Even if you are the only user of a computer, it is essential to create a non-administrative account and to use this instead of the master administrative account, since unauthorized access to the administrative account would enable an attacker to take complete control of the computer.

 

Similar constraints apply to Apple Mac computers, in which non-administrative users are automatically unable to install software, and additionally, the system can be set to prevent an administrative user from installing software that does not originate from the Mac App Store or from an accredited developer.

 

Firewalls


If the computer has a built-in firewall capability (for example in Windows versions 7, 8 and 10), this should always be enabled, as it is usually quite reliable.

 

There is no need to buy third party firewall software or enable the firewall that comes with many antivirus products since doing so can cause compatibility issues.

 

The firewall can be configured (using an administrative user account) to prevent or allow access by certain applications, providing an additional layer of security.

 

Windows 10 includes built-in firewall software (Windows Defender Firewall); it is normally enabled by default, but it is worth checking that it has not been turned off.

 

Antivirus software


Although it is claimed that most antivirus software only traps a small proportion of malware, this small proportion may be sufficient to cause damage or allow malware to infect the user’s computer. 

 

Install a reputable antivirus package, such as Norton, AVG, McAfee or Kaspersky. Many of these are free. Windows 10 also includes built-in antivirus protection in the form of Windows Defender.

 

Most antivirus packages offer features in addition to antivirus such as protection when surfing the internet, for example, URL checking.

Enable automatic updating, which will ensure that the latest virus profiles are available. Enable the software to conduct regular scans of the computer, so ensuring that any malware that was present before a new virus was identified can be removed.

 

Java

Although it is an occasionally useful application, Java is known to suffer from a number of vulnerabilities, and unless it is essential that it is used on the computer, it is best turned off, so cutting off another means of attack. It can always be turned back on temporarily or reinstalled if required.

 

Application software updates

Reputable software companies will always provide updates, not only when they have developed new features, but also when they have identified and fixed vulnerabilities in the software. 

 

If a known application, such as Microsoft Office or Adobe Acrobat, flags up that an update is available, it is always best to allow the update to take place.

 

Better still, if the operating system permits automatic updates to take place, this is worth enabling, as it means that your applications are up to date without the need for you to make a decision.

 

Miscellaneous user activities

User-related activities are often the cause of many of the cybersecurity issues we face, including misuse – and occasionally abuse – of networks, systems and services.

 

Keeping users on the straight and narrow is also a management responsibility, and this involves the monitoring of user behavior and occasionally some form of remedial (possibly disciplinary) action in order to resolve matters. There are a number of general guidelines that both individual and company users can and should follow.

 

User passwords


Passwords are like toothbrushes – they should be changed regularly and never shared. Most people (myself included) struggle to keep track of passwords. Whenever you access a new service on the internet, shop for goods or register for information, you are obliged to select a username and password.

 

There is a great deal of common sense in this – it helps the supplier to identify individual users; it (in theory at least) keeps your transactions separate from those of others, and it provides you as a user with a degree of confidence that the website you are using is relatively secure.

 

Unfortunately, this means that we have multiple usernames and passwords, and we have difficulty remembering them all, so we write them down somewhere, which is never a good idea, since the piece of paper is likely either to be found by someone who should not know your passwords or to be lost forever in the recycling bin.

 

The great temptation is to use the same username and password for as many logins as possible, but this is the first step on a slippery slope since if an attacker finds one instance of it, he will have the opportunity to use it elsewhere.

 

An attacker will often be able to guess your username since many websites invite you to use your email address for this, so if you do find yourself in the unfortunate position of having multiple passwords, there are a number of ways in which you can make your life simpler whilst retaining a measure of security.

 

Avoid all passwords that include all or part of your name, the names of family members (especially your mother’s maiden name) or pets. These are usually extremely easy to guess or discover. If you must use simple passwords, use ones that cannot be easily guessed, such as made-up words like ‘gunzles wiped’.

 

Where possible, use a mixture of upper and lower case letters, numbers and other symbols. Longer passwords are always more secure than shorter ones.
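If you would rather not invent passwords yourself, the short sketch below shows one way of generating a long, random password using Python’s standard ‘secrets’ module; the length and character set shown are only illustrative.

```python
# A minimal sketch of generating a long, random password with Python's
# standard "secrets" module; the length and character set are illustrative.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    """Return a random password mixing upper and lower case, digits and symbols."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())   # store the result in a password manager
```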

 

Do not write passwords down where other people can find them. If you find complex passwords difficult to memorize, or if you have a large number of them, use a password management tool such as KeePass for Microsoft Windows or mSecure for Mac OS X.

 

That way, you will only have to remember the one password needed to access the tool itself. There are many such tools available.

 

Screen locking


When moving away from your computer in a location where others could obtain access to it, it is always advisable to engage the screensaver, suitably protected by a password. On corporate user computers, this should be set to happen automatically after a pre-determined period of time.

 

Configure a screensaver with password protection to cut in after no more than five minutes of inactivity. 

 

If possible, configure a shortcut to enable the screensaver – a single keystroke or mouse movement is ideal. Never leave a computer unattended in a public place unless the password-protected screensaver has been enabled and the computer is physically secured.

 

Least privilege

When configuring new users of a system, always follow the rule of least privilege, meaning that they only have the level of access they actually require, as opposed to being made a system administrator.

 

All too often when people buy a new computer, they set their own account as the system administrator. Instead, they should set up the computer using administrative privileges, and then create their own user account without them.

 

If that account’s username and password are obtained by someone else, they will only be able to access a limited set of functions on the system and will not be able to make system-level changes.

 

As mentioned earlier in the blog, organizations with systems administrators must ensure that they have two accounts, one with administrative privileges and one for day-to-day email and office work. It should be a security policy rule that no-one should ever undertake day-to-day activities with an account that has elevated or administrative privileges.

  • Never configure a guest user on a computer to have administrative privileges.
  • Always ensure that guest user accounts have password protection turned on.
  • Always set up the main user of a computer with a non-administrative account.
  • Use the administrative account for essential system changes only.

 

Surfing the internet


There is so much information available on the internet that it’s difficult to do anything these days without downloading photographs or documents. When visiting websites, and downloading from them, users should take care to ensure that they are reaching a legitimate website. 

 

There are proactive preventative steps the user or the organization can take by putting controls into place to reduce the likelihood of a successful attack, and also simple steps that users themselves can take to avoid risks when surfing the web. The latter was covered earlier in the blog so we will focus on the proactive preventative steps here:

 

Internet browsers are able to block pop-up windows that can contain malicious scripts or link to websites hosting malware. Microsoft Internet Explorer, Mozilla Firefox, Apple Safari, and Google Chrome all have this capability, either built in or through freely available add-on software such as AdBlock.

 

The ‘private’ or ‘incognito’ mode on browsers allows a degree of anonymous web surfing. It isn’t guaranteed to be 100 percent effective, but using it should hide your browsing activity from most prying eyes.

 

Parental controls can be set in both Microsoft Windows and Apple Mac operating systems to safeguard underage web surfers. In Windows, they are located within the Control Panel application, and on a Mac they can be found under System Preferences.

 

Adware and spyware are aggravating intrusions that we experience when we surf the internet. Much of this can be disabled within the internet browser, by disabling pop-up windows for example.

 

However, this will only solve part of the problem, so using a browser add-in or extension such as Adblock Plus can block some adware and spyware, and there are also commercial adware blockers available to download.

 

Be cautious though – some of these ‘free’ applications can actually install adware and spyware instead of removing it.

 

Encryption of stored and shared information


Encryption is a method of maintaining confidentiality and integrity by scrambling information, usually referred to as ‘plain text’, so that it cannot be read or changed by unauthorized persons.

 

In order to encrypt information, a ‘key’ – invariably a very large number – is used in conjunction with software known as an encryption algorithm to change the plain text to ‘ciphertext’.

 

The ciphertext can only be decrypted by using the correct key in conjunction with the same algorithm. There are two different flavors of encryption used to ensure confidentiality:

 

Symmetric encryption, in which the sender and recipient of information share an identical key. Symmetric encryption keys are more at risk of being discovered since more than one person has access to them. For this reason, they must be changed at intervals, for example daily, or even changed each time they are used.

 

Asymmetric encryption, also known as public key encryption, in which both sender and recipient each have two keys, one of which is published publicly, and the other of which is kept private. The recipient’s public key is used by the sender to encrypt the information, and the recipient’s private key is used by them to decrypt the information.
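To make the asymmetric idea concrete, here is a minimal sketch using Python with the third-party ‘cryptography’ package (an assumption on my part): the recipient’s public key encrypts a short message and only the matching private key can decrypt it. In practice, asymmetric keys usually protect something small, such as a symmetric session key, rather than whole documents.

```python
# A minimal sketch of asymmetric (public key) encryption, assuming the
# third-party "cryptography" package; the message is illustrative.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# The recipient generates a key pair and publishes only the public key
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# The sender encrypts with the recipient's public key...
ciphertext = public_key.encrypt(b"Meet at 10:00", oaep)

# ...and only the recipient's private key can decrypt it
plaintext = private_key.decrypt(ciphertext, oaep)
assert plaintext == b"Meet at 10:00"
```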

 

Symmetric and asymmetric encryption methods are normally used for the encryption of information being transmitted to others. This can be achieved by using an application such as Pretty Good Privacy (PGP), which not only encrypts the information you wish to send but also allows digital signing of messages, which provides an increased level of trust for the recipient. PGP can also be used to encrypt hard disk drives, but this application of it is less common.

 

To ensure integrity, a one-way method is adopted, in which a so-called ‘hashing’ algorithm (sometimes used together with a key) scrambles the plain text in such a way that the process cannot be reversed.
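A minimal sketch of this one-way property, using Python’s standard ‘hashlib’ module: the digest of a file can be recomputed and compared later, but the original contents cannot be recovered from it. The file name is illustrative.

```python
# A minimal sketch of a one-way hash used for integrity checking: the digest
# can be recomputed and compared later, but the original file cannot be
# recovered from it. The file name is illustrative.
import hashlib

def file_digest(path: str) -> str:
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):   # read in small chunks
            sha.update(chunk)
    return sha.hexdigest()

print(file_digest("backup-2019-01.tar"))   # record this value; recompute later
```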

 

Uses of encryption and hashing more generally include:

Hard disk drive encryption in which either the entire hard disk drive or selected files are encrypted. Microsoft Windows (but not all versions) uses an application called BitLocker, whilst Apple Mac OS X has FileVault built into the operating system to achieve this.

 

There are also a number of third-party and open-source drive encryption products, such as PGPDisk and SecureDoc.

 

The storage of passwords, where the user enters their password, which is then hashed, and the resulting hash value is compared with a previously stored value. Storage of information in the cloud also demands that the information should be encrypted since this is invariably stored in locations over which users have no control.
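For the password storage case, the sketch below (using only Python’s standard library) stores a salted, deliberately slow hash of the password and compares hashes at login; the iteration count and example password are illustrative.

```python
# A minimal sketch of storing and verifying a password as a salted, slow hash
# using only Python's standard library; the iteration count is illustrative.
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple:
    salt = os.urandom(16)                                   # unique per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest                                     # store both values

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(digest, stored)              # constant-time compare

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))   # True
```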

 

Encryption as a technical policy was discussed earlier in the blog.

 

MOBILE WORKING

WiFi service

It is always tempting to use ‘free’ WiFi whenever we have the opportunity, but this brings its own set of threats. An attacker can intercept the data being transmitted between the device and the access point and, if sufficient data can be captured, attempt to recover the encryption key in use (if indeed there is one) and then use the recovered key to gain access to the user’s information.

 

Earlier in the blog, we heard about the company that provided free WiFi in London’s Docklands, but potentially at a terrible cost. This example is extreme, but when we sign up for a free WiFi service, we really have no idea what is happening to our data, since once it has passed through the wireless access point, it is normally completely unencrypted.

 

Out and about

The recommendations for using free WiFi, especially in unknown locations, include:

Not using the service for anything that involves a financial transaction where your bank or credit card details are passed.

 

Not using any service that does not have an encryption key. Most bars and restaurants that provide free WiFi, for example, will make use of an encryption key, since this deters ‘drive-by’ users who are not spending money there.

 

Avoiding those free WiFi services that use an insecure protocol such as WEP or the original WPA. WPA2 (the next generation) is much more secure and resistant to key recovery. Ensure you select WPA2-Personal (also known as WPA2-PSK) with Advanced Encryption Standard (AES) encryption.

 

For corporate network users, if a WiFi hotspot must be used, the connection should always be made using a virtual private network (VPN) back to the corporate network. Additionally, corporate machines should always be configured to prevent a feature known as split tunneling, so that when a VPN is in use all traffic is passed over that VPN.

 

WiFi in the home and the workplace


Most home broadband services nowadays provide the user with a router that contains a wireless access point as well as Ethernet ports, and this is in many ways a much more convenient method of connecting since we can move around the house without the need to cable up in every room.

 

There are some basic rules that should be observed when setting up wireless networks in the home and the office:

 

Begin by changing the SSID name of the router. Preferably avoid calling it something that would identify your property.

 

After setting up the router or wireless access points, change the administration username (if possible), and definitely change the password. See the earlier discussion on user passwords for recommendations.

 

Always enable WPA2-Personal with AES encryption.

Use a long and complex key, which prevents outsiders from making free use of your wireless network, since you never know what they’ll be doing. The router supplier will probably print the default key on the side of the router, and you’ll need to use this in order to set it up, but it’s essential to change it afterward.

 

If the router supports remote administration, turn this off. If you ever need to use it, you can turn it on locally until you have done what you need to do. Again, if your router supports Universal Plug ‘n’ Play, turn it off, as it is a totally insecure protocol.

 

 Unless you need to use WiFi Protected Setup (WPS) in order to connect to a wireless printer, you should consider turning this off, since it provides an additional vulnerability.

 

Bluetooth

The history of Bluetooth vulnerabilities is legendary. There is little that an individual can do to make their Bluetooth devices more secure. In some cases, there are no user settings apart from ‘on’ or ‘off’. Here are a few suggestions that should reduce the likelihood of Bluetooth problems:

 

  • Ensure that the Bluetooth device (for example, a smartphone) is password protected.
  • Refuse all connection requests from devices you don’t recognize.
  • If you lose a Bluetooth device (for example a headset), remove it from the list of paired devices so that it can no longer be used to connect to yours.
  • Switch Bluetooth-enabled devices off when you’re not actually using them.

 

Location services


This feature applies to mobile devices that use GPS to determine their location, for example when using a mapping application to plot a route between two points.

 

Many smartphone and tablet applications turn on location services automatically when you install them, meaning that they can track your movements. This may be essential, as in the mapping example above, but there is no justification for a smartphone game to require it at all.

 

Think carefully about each application on your smartphone or tablet, and make an informed choice about whether location services will enhance your experience, or whether they are simply giving away information to someone about where you are.

 

Turn off location services in the general settings menu on all applications that you think should not be making use of them.

 

If the application does require location services, it will ask for them to be turned on, and it is your decision as to whether or not you do so.

 

SECURITY POLICIES OVERVIEW


Organizations should produce and maintain an overall security policy, which will set the scene for other policies that may be required. In general, security policies need not be lengthy documents, since they do not require a great level of detail – this can be incorporated in lower-level documents such as processes, procedures and work instructions.

 

For ease of use and clarity, a security policy should generally contain no more than eight sections:

  • an overview, stating what aspect of the organization’s operations the policy is intended to address;
  • the actual purpose of the policy;
  • the scope of the policy – both what is within the scope and what is not;

 

  • the policy statements themselves – usually the largest part of the policy document;

 

  • requirements for compliance – including, if appropriate, the penalties for failing to observe the policy, whether these are required by the organization, the sector regulator, national legislation, national or international standards, or whether they are simply good practice;
  • any related standards, policies, and procedures;
  • definitions of terms used within the policy;
  • revision history.

 

The overall security policy would normally contain policy statements along the lines of:

 

The organization’s information must be protected in line with all relevant legislation, sector regulations, business policies, and international standards, in particular, those relating to data protection, human rights and freedom of information.

 

Each of the organization’s information assets will have a nominated information owner who will accept responsibility for defining the appropriate uses of that asset and ensuring that appropriate security measures are in place to protect it.

 

The organization’s information will only be made available to those who have a legitimate business need. All the organization’s information will be classified according to an appropriate level of privacy and sensitivity. 

 

The integrity of the organization’s information assets must be maintained at all times. Individuals who have been granted access to information have the responsibility to handle it in an appropriate manner and according to its classification.

 

The organization’s information must be protected against unauthorized access.

  • Compliance with the organization’s information security policies will be enforced.

 

Organizational security policies fall broadly into four areas:

  • directive policies that state ‘thou shalt’ or ‘thou shalt not’;
  • administrative policies, that is, those that are underpinned by an administrative function;
  • communal policies, with which large parts of the organization must work together to comply;
  • technical policies that require specific hardware, software or both.

 

The following policies and operational controls are likely to be implemented by SMEs and within medium to large organizations.

 

DIRECTIVE POLICIES

Directive policies are concerned with individual behaviors and tell individuals either what they should do or should not do.

 

As with all policies, there should be some mention not only of the consequences of failing to adhere to them but also of the penalties for failing to do so.

 

Acceptable use

Acceptable use policies are those to which all users of the organization’s network and services, whether temporary staff, contractors or permanent members of staff, should adhere. 

 

Acceptable use will normally include such areas as personal access (browsing, shopping, etc.) to the internet and email. It may also cover the use of organizational facilities when posting on blogs and social media.

 

Information retention

This policy determines the duration for which information can be stored, and how it should be disposed of when the end of the retention period is reached. This policy will have strong links with the Information Classification Policy and any data protection legislation requirements.

 

Data and information retention

The organization’s data and information retention policy will link closely with its Information Classification Policy and where appropriate must take into account the requirements of data protection, human rights and freedom of information legislation, since this will impact on the amount of time for which personal information may be stored, for example, as required by Principle 5 of the Data Protection Act.

 

Information classification


The organization is likely to possess many different types of information, including publicly available information; information that should be restricted to staff generally; and information that should be available only to very specific members of staff.

 

The information classification policy should define these levels, avoiding generic terms such as ‘confidential’ or ‘restricted’, since these can have different meanings, not only between the public and private sectors but also between similar organizations.

 

For each type of information, the policy will dictate how and where the information is stored (and in some cases where it may not be stored); its retention period; how it is labeled; the extent to which it may be shared; how and where it must be backed up; how it is transported; and finally, how it is destroyed when no longer required.

 

Peer-to-peer (P2P) networking

One of the simplest methods for distributing malware is by concealing it inside files being shared on peer-to-peer (P2P) networks. 

 

Unless it is a business imperative, organizations should enforce a policy forbidding the use of P2P networking, including P2P on company computers used at home and on individuals’ personal computers used on the organization’s network.

 

ADMINISTRATIVE POLICIES

Administrative policies deal more with the steps that individuals or groups of individuals take in order to protect the wider organization. These policies will determine the capabilities of all users within the organization as opposed to the dos and don’ts of individual users.

 

Access control


This determines how applications and information are accessed and can be achieved in a number of ways, including role-based, time of day or date, level of privilege, and whether access is read-only or read and write.

 

An access control policy can quite reasonably include the requirement for different methods of authentication, such as single sign-on, digital certificates, biometrics, and token-based authentication.

 

Change control

Uncontrolled changes are a frequent cause of problems in systems and services. The change control policy will describe the process for making changes to the systems and their supporting network, including the operating system and applications.

 

This may involve a detailed analysis of the proposals prior to any attempt at implementation, and will usually include functionality and load testing prior to roll out.

 

Hand in hand with the change control function is that of change management, which includes informing users of impending changes and having a back-out process that would be invoked should the change fail for any reason.

 

Termination of access

When employees leave the organization, it is vital that their access permissions are terminated. If an employee transfers to a new department or to a new role within the existing department, then existing permissions should still be terminated (as opposed to being modified), and then reinstated at levels appropriate to the new role.

 

Viruses and malware


Viruses and other malware can infect systems without warning and must be dealt with in a formalized manner rather than an ad hoc approach that may do more harm than good. The policy will define who will address the problem and the procedure they will follow to identify, isolate if possible, and remove or quarantine the virus.

 

Passwords

Password management is a key aspect of information security policy and one that is frequently overlooked.

Users are notoriously bad at password management. They will (when they can get away with it) use passwords they find easy to remember, such as their mother’s maiden name, their birthday or the name of their pet, all of which are relatively simple for an attacker to guess or discover.

 

Users should be warned of the dangers of this practice and advised how to create strong passwords.


In the past, the general advice has always been to recommend a minimum password length; to use a complex combination of letters, numbers and other symbols; and to force the user to change their password at intervals.

 

The USA’s National Institute of Standards and Technology (NIST) has recently changed its view on passwords and has published a draft of a new standard, SP 800-63-3, which deals with digital identity. The draft currently makes three recommendations of things that organizations should do, and four that they should avoid.

 

Things that organizations should do:

Since users are only human, instead of placing the burden on the user, place the burden on the verifier. It is much easier to write one piece of software than it is to force hundreds or thousands of users to conform to a set of rules, and this is also less stressful on the users.

 

Size matters – by all means check for password length, and encourage users to make use of longer passwords. Check the passwords users enter against a dictionary list of known bad passwords, and require the users to try again if the test proves positive.
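A minimal sketch of what these verifier-side checks might look like in Python; the minimum length and the blocklist file name are illustrative assumptions rather than NIST-mandated values.

```python
# A minimal sketch of verifier-side password checks in the spirit of the NIST
# draft guidance: check length and reject passwords on a known-bad list.
# The minimum length and blocklist file name are illustrative assumptions.
MIN_LENGTH = 8

def load_blocklist(path: str = "known_bad_passwords.txt") -> set:
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f}

def is_acceptable(password: str, blocklist: set) -> bool:
    if len(password) < MIN_LENGTH:
        return False                  # too short
    if password.lower() in blocklist:
        return False                  # appears on a common/breached list
    return True                       # no arbitrary composition rules applied
```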

 

Things that organizations should avoid:

Complex rules for composition, such as a combination of upper and lower case letters, numbers and other keyboard symbols. These are almost impossible for users to remember (especially if they are required to have different passwords for each application), and may only result in users writing them down.

 

Password hints can help the users remember their passwords, but they can also provide clues to an attacker. Since the originator of a targeted attack may well have undertaken considerable research into their target, such clues could easily betray the user’s credentials.

 

Credentials chosen from lists are similarly of dubious value. Such choices (mother’s maiden name, town of birth, name of the first school and so on) are just as likely to be known to a serious attacker as the hints described above.

 

Expiration of passwords after a finite period of time does little to improve password security and only serves to complicate matters for the user. Users should have the option to change their password if they feel that it may have been compromised, but forcing them to do it without good cause only adds to their burden.

 

The policy should also include a statement regarding the changing of default passwords, especially those that allow root access to systems and network devices such as firewalls and routers.

 

Occasionally, passwords are embedded within applications, especially in cases where one application must connect and exchange data with another without human intervention.

 

The use of embedded passwords should be avoided wherever possible, since they may be widely known and therefore represent a potential avenue of attack, but if they must be used, they should be changed from the manufacturer’s default.

 

No password is immune from a ‘brute force’ search in which an attacker’s computer tries every combination of characters until it eventually finds the right one. Using long passwords will make this much more complicated, and the attacker may simply give up and move on to another, possibly easier, target.

 

Users also have a habit of using the same password on multiple systems. Attackers know this, and if they discover one of a user’s passwords, it will normally allow them to access other systems as well. Users should have a different password for each system to which they require access.

 

If users must have multiple passwords and have difficulty in remembering them all, a password management tool may well be an appropriate solution as discussed in Blog 8; alternatively, single sign-on is a method that can be used to alleviate multiple password issues.

 

Users should also be discouraged from reusing passwords, and where available, some access control systems, such as Microsoft’s Active Directory, can be configured to forbid reuse within a certain period of time.

 

Removable media

Removable media, including USB memory sticks, DVDs and external disk drives can all be not only a source of malware if they have been infected on another system outside the organization, but also a means of users removing information from the organization without authority.

 

Although not obviously seen as removable media, there are many USB devices that can easily act as removable media and become a source of malware, including so-called smartphones, tablet computers, and even e-cigarettes. System hardware can be easily configured to prevent the use of removable media unless the user has a very specific, authorized need.

 

Shared network resources


Shared network drives are an extremely useful resource, allowing staff to move large-volume files around the organization. However, they suffer from one serious shortcoming: there is usually no audit trail of who copied files onto the shared drive and who subsequently copied them off.

 

Additionally, some forms of malware such as worms can infect multiple shared drives within a network.

If files are to be shared between users within the organization, or with users outside the organization, then a collaborative system such as Microsoft SharePoint should be considered, since this allows the organization to select who can make use of the system to share files, and retain an audit trail of who has done what and when.

 

Segregation of duties

It is all too easy for organizations to allocate people who understand IT to wide-ranging roles, and in some situations, this is a mistake, since it can provide administration-level users with the capability to create and allocate high-level user accounts for people who do not or should not have them.

 

This can lead, for example, to a member of staff being able both to order goods and authorize their purchase, which can lead to fraudulent activities. The correct method of addressing this is to ensure that a particular type of user account cannot carry out both functions – in other words, to completely segregate the duties and access permissions of two account types.

 

Backups and restoral

Organizations should always operate a policy that demands that information is backed up, including the backup intervals (which may differ for different information elements); the backup method (for example, full or incremental);

 

the media upon which backups are stored; whether backup media is kept on the organization’s premises (but not the same location as that of the data being backed up) or at a third party location; the maximum time allowed for recovering the data including transport from third-party sites; and how often backup media is tested for reliable restoral.

 

Most large organizations will have a backup policy, but as with all policies, this should be regularly reviewed to ensure that the correct systems are being backed up to some form of removable (encrypted) media, which is then stored off-site in a secure location.

 

However, that is only half the story, since many organizations have discovered to their cost that after a period of time, some backup tapes or disks cannot be read, and so it is essential to perform a test restoral of data at intervals as a sanity check.
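One way of automating such a sanity check is sketched below in Python: checksums recorded at backup time are recomputed against the restored files and any mismatch is reported. The manifest format and paths are illustrative assumptions.

```python
# A minimal sketch of a restore test: checksums recorded at backup time are
# compared against the restored files. Paths and the manifest format
# ({"relative/path": "sha256 hex digest"}) are illustrative assumptions.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    sha = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            sha.update(chunk)
    return sha.hexdigest()

def verify_restore(restore_dir: str, manifest_file: str) -> bool:
    manifest = json.loads(Path(manifest_file).read_text())
    all_ok = True
    for rel_path, expected in manifest.items():
        restored = Path(restore_dir) / rel_path
        if not restored.exists() or sha256_of(restored) != expected:
            print(f"FAILED: {rel_path}")    # missing or corrupted on restore
            all_ok = False
    return all_ok
```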

 

As an alternative to conventional backups, some organizations rely on the use of cloud services to maintain a long-term store of data, and whilst this might be a cost-effective solution, it does require careful planning and management, since it is often very easy to delete files stored in the cloud, which rather defeats the object of the exercise.

 

Another increasingly popular alternative is where the move to virtualization has occurred and storage area networks (SAN) are becoming widely used, configured with a second SAN for backup. The SAN can be updated daily or by regular snapshots during the day. However, additional backups to other media would normally be recommended.

 

Antivirus software


Some organizations have begun to move away from antivirus software, having been put off by stories in the media about its lack of effectiveness, especially when new malware appears but has not yet been addressed by the antivirus software author. These are called ‘zero-day’ vulnerabilities since once they become known, the author has no time at all in which to provide a fix.

 

However, even if antivirus software does not identify and trap every vulnerability, it will prevent known vulnerabilities from causing problems by neutralizing or quarantining the offending virus, so it is still very much worthwhile maintaining an antivirus capability, and ensuring that it is kept fully up to date.

 

Software updates

Many of the key applications upon which organizations rely – for example, Microsoft Windows, Internet Explorer and Office; Adobe Acrobat Reader, Mozilla Firefox or Google Chrome – are all targets in which attackers find vulnerabilities.

 

The authors of this software will invariably produce updates to fix known vulnerabilities at regular intervals, and it is essential that organizations keep these operating systems and applications fully up to date with the latest patches.

 

Failure to do this can result in an attacker taking advantage of the gap between the vulnerability becoming known and the organization applying the patch to fix it.

 

Where possible and practicable, automatic updating should be applied, since this does not require further manual input from support staff, and reduces the ‘patch gap’ to a minimum.

 

Additionally, any software update that will result in a major change to the operating system or applications should have a back-out plan so that the organization can revert quickly and easily to the original version.

 

Remote access/guest/third-party access

Whether or not an organization makes use of VPNs for network access, it will be necessary to define how staff and third-party contractors are able to access the network and its systems. This policy will also link closely with other policies such as access control, security awareness, and passwords.

 

Wireless/mobile devices


This type of policy will set out the organization’s requirements for implementing wireless access points around its premises; how the wireless infrastructure devices must be configured and secured, including the encryption method; whether the SSID is broadcast; and which bands and channels are to be used.

 

When considering devices that make use of Bluetooth for communications, Bluetooth should only be enabled when it is actually required and turned off afterwards. Once initially configured for use, the organization should ensure that the device’s visibility is set to ‘Hidden’ so that it cannot be scanned by other Bluetooth devices.

 

If device pairing is mandated, all devices must be configured to ‘Unauthorised’, which then requires authorization for each connection request. Applications that are unsigned or sent from unknown sources should be rejected.

 

For mobile devices supplied by the organization, there will also need to be a section of the policy that regulates when and where these may be used over wireless networks that are not owned or provided by the organization, for example, public wireless or third-party networks.

 

This policy will also include a definition of what information may be stored on the device; what applications may be loaded onto it; whether it may be used to gain access to the wider internet; and whether information stored on the device is or becomes the intellectual property of the organization.

 

Bring your own device (BYOD)

This policy will overlap to a certain extent with the mobile device policy described above, but in this case the device – such as a laptop computer, tablet computer or smartphone – will be the personal property of the staff member as opposed to being owned by the organization.

 

The policy may include statements regarding use by friends or members of the user’s family, and may also require separate login procedures for access to the organization’s network and, where necessary, hard disk drive encryption.

 

Peripherals

By default, many operating systems install auxiliary services that are not critical to the operation of the system and which provide avenues of attack.

 

When configuring users’ computers, system administrators can disable or remove unnecessary services and peripherals such as USB ports, SD card slots and CD/DVD drives so that they cannot be enabled or used. This policy may form part of a more general procurement policy covering the organization’s IT infrastructure.

 

Isolation of compromised systems

Organizations that have detected that a system has been compromised would be well advised to isolate it quickly from the network in order to prevent possible malware from spreading to other systems on the network.

 

Once removed, it would be useful to perform a forensic analysis on the system, using a specialist organization if the relevant skills are not available internally, and finally to restore the systems to normal operation using trusted media.

 

Browser add-ons and extensions

Attacks on internet browsers, add-ins and extensions are becoming increasingly prevalent, and it is critical that attackers should not be able to use vulnerabilities in software such as Microsoft’s Internet Explorer, Adobe’s Acrobat Reader or Adobe Flash to gain access to systems. Organizations should make use of the vendor’s automatic update or software distribution facilities to install patches as soon as they become available.

 

AutoRun

AutoRun is a facility provided on Microsoft Windows that permits a command file on media such as a USB memory stick, CD or DVD to execute when it is inserted into the computer.

 

This is an extremely simple way for an attacker to gain access to a system, since the user may be totally unaware that the media is infected and may not notice the program is running.

 

Turning off AutoRun will probably be a minor inconvenience both to users and to system administrators. It is interesting to note that Apple’s OS X operating system does not support this kind of facility.

 

Adobe Acrobat Reader

Adobe’s Portable Document Format (PDF) has become the de facto standard format for sharing information. Almost any file, presentation or document can be exported or converted into PDF format and will look identical on any type of computer, smartphone or tablet that has Acrobat Reader software loaded.

 

However, an increasing number of cyber-attacks are being conducted by inserting malware into PDF documents, which are then transferred to the device.

 

Organizations can protect their machines from attacks hidden inside PDF files by hardening Acrobat Reader; guidance on how to do this can be downloaded from the NSA.

 

Outsourcing

Organizations may find it economically advantageous to outsource certain aspects of their operations. This is increasingly so in the case of the organization’s ICT infrastructure, and outsource service providers may offer to provide not only data storage but also the underlying hardware, operating systems and the application software required for the organization’s operations.

 

In some cases, this will be provided at a dedicated third party site, as is frequently used in DR arrangements; or may be provided in a more virtual environment such as cloud services.

 

In either case, it will be vital that the organization has a clear policy regarding the selection of suppliers for this type of service, which will form the basis of a service level agreement (SLA). It should also include an exit policy should the organization decide to move away from a supplier, especially with regard to ownership of and access to the organization’s information and the subsequent destruction of any of the organization’s information remaining in the cloud.

 

COMMUNAL POLICIES

Communal policies are those that may have an impact not only on individuals within the organization but also on the wider context of the business and the environment in which it exists.

 

Contingency planning

Contingency planning determines how data or access to systems is made available to users during the prescribed hours of operation. The policy will cover what measures are to be put in place to ensure that access is available in the event of failure of either the systems themselves or the means of accessing them such as a web server and the associated supporting network.

 

A contingency planning policy will often link directly to a business continuity or to a disaster recovery policy.

 

Incident response

The organization’s incident response policy will detail how incidents are reported, investigated and how they are resolved. In the event that certain predefined failure thresholds are exceeded, additional measures such as business continuity and disaster recovery plans may need to be invoked.

 

An incident may also require communication regarding the incident to be made available to staff, customers, third-party suppliers, the public at large and, if the organization is part of a highly regulated sector (such as energy, finance or transport), the incident may also require notification to the sector regulator.

 

As with business continuity and disaster recovery plans, incident response plans should be reviewed at regular intervals or when any major aspect of the organization’s business changes, and also tested at regular intervals.

 

User awareness and training


Since many of the cybersecurity issues we experience are caused by users, making them aware of the risks they face – including the major threats, vulnerabilities and potential impacts – is a highly important step to achieving better cybersecurity.

 

Awareness is the first step and introduces users gradually to the things they need to know and understand so that security becomes second nature to them, and they cease to foster bad security habits and move towards a position where they are fully committed to good security practice.

 

This is then supplemented with training for those people who are more actively involved in day-to-day security operations, and who require specialist training courses in order to properly fulfill their role.

 

TECHNICAL POLICIES

Technical policies are those of a purely technical nature. They may be necessary either in order to allow other policies previously described to operate successfully, or may stand on their own.

 

Spam email filtering

Spam email is the bane of most people’s lives. It can range from the simply annoying to the positively alarming. Nowadays, most email service providers check email passing through their systems and filter out those that have been previously flagged as spam.

 

However, this may not remove all spam email, as new spam messages will always arise, and some filters may either never add them to their blacklist or may take time to do so once the spam has been reported. Organizations can make use of their own spam filters, such as SpamAssassin, to prevent unwanted email from entering users’ inboxes and junk mail folders.

 

Alternatively, organizations may outsource email scanning to a specialist organization such as Message Labs. It is also vitally important to instruct users as part of the organization's awareness programme on how to identify spam and junk mail even if it originates from a known and normally trusted source.

 

Audit trails

These allow an organization to follow a sequence of events in cases where security incidents have occurred and, where necessary, to be able to show that a user has or has not carried out a particular action. Such evidence might be required in cases where legal proceedings take place, in which case the audit trail must also be forensically robust.
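To make an audit trail harder to tamper with, each entry can include a hash of the previous entry, forming a simple chain. The Python sketch below illustrates the idea; the log file name and entry format are illustrative assumptions, not a forensic-grade product.

```python
# A minimal sketch of a tamper-evident audit trail: each entry records a hash
# of the previous entry, so later alteration breaks the chain. The log file
# name and entry format are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone

LOG_FILE = "audit.log"

def append_event(user: str, action: str) -> None:
    try:
        with open(LOG_FILE, "rb") as f:
            prev_hash = hashlib.sha256(f.readlines()[-1]).hexdigest()
    except (FileNotFoundError, IndexError):
        prev_hash = "0" * 64                     # first entry in the chain
    entry = {
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "action": action,
        "prev": prev_hash,
    }
    with open(LOG_FILE, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

append_event("alice", "exported customer report")
```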

 

Firewalls

Firewall policies will determine the way in which firewalls are deployed and configured to form an integral part of the network, especially with regard to the rules that must be applied and subsequently maintained.

 

Firewalls should be used to block all incoming connections from the internet to services that the organization does not wish to be available. By default, all incoming connections should be denied, and only allowed for those services that the organization explicitly wishes to offer to the outside world.

 

Good practice also calls for the source IP address of an incoming session to be a valid public IP address and not one associated with the business itself. For example, if the business has a block of 32 public IP addresses, incoming traffic claiming to originate from those addresses should be filtered out.
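The sketch below is a purely conceptual Python illustration of these two rules – default deny plus filtering of traffic that claims to come from the organization’s own address block – not a real firewall; the address range and port numbers are illustrative.

```python
# A purely conceptual sketch (not a real firewall) of a default-deny rule
# base with anti-spoofing: only explicitly offered services are allowed, and
# incoming packets claiming to come from the organization's own address
# block are dropped. Addresses and ports are illustrative.
import ipaddress

ALLOWED_PORTS = {25, 80, 443}                          # services offered externally
OWN_BLOCK = ipaddress.ip_network("203.0.113.0/27")     # a block of 32 public addresses

def allow_incoming(src_ip: str, dst_port: int) -> bool:
    if ipaddress.ip_address(src_ip) in OWN_BLOCK:
        return False                      # spoofed: external traffic from our own range
    return dst_port in ALLOWED_PORTS      # everything else is denied by default

print(allow_incoming("198.51.100.7", 443))   # True  - permitted web traffic
print(allow_incoming("203.0.113.9", 443))    # False - spoofed internal address
```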

 

In addition to firewalls, it may be an advantage to partition the organization’s network into separate areas by splitting them according to their function, such as research and development, operations and finance, making it more difficult for an attacker to reach a particular service (see the later item on VPNs).

 

It is also common practice for organizations to create another barrier between the external and internal networks by introducing a so-called demilitarised zone or DMZ. 

 

Good practice also requires that any outgoing connection from the organization to the internet originates from a specific proxy server or service located on a DMZ and not within the main network.

 

Firewalls come in various shapes and sizes. Many require specialized hardware on which to operate and require well-trained staff to configure and maintain them.

 

The decision on which type of firewall to use and how it should be configured is best left to specialist advice, since it must not only provide protection for the business against unwanted intrusion but also meet the business needs as regards what can and cannot be transmitted through it.

 

Other firewalls come built into desktop operating systems – these are much simpler and require little, if any, configuration. On user computers, they should always be enabled, and users should be prevented from changing this by being given a non-administrative account.

 

Encryption


The information encryption policy will go hand in hand with the information classification policy, in that it will define, for certain levels of information classification (for example, secret or top secret), how sensitive information will be encrypted and how the encryption keys will be managed and exchanged.

 

For example, information classified at a certain level could be exchanged between two people using a straightforward encryption mechanism such as PGP, with each owning their own encryption keys, whilst other information might require the use of a full-blown public key management system, with encryption keys centrally managed and distributed.

 

The policy should additionally make the distinction between information in transit (for example, within emails) and information at rest – that is stored on hard drives or other media, especially if stored in the cloud.

 

For information at rest, encrypting the hard drive of a mobile user’s computer is relatively straightforward, and means that the device cannot be used without the user’s password to decrypt the data, making the information useless to anyone who steals it.

 

On Apple Mac computers, turning on the free built-in FileVault software will encrypt the entire hard drive, whilst for Windows users, there are two options.

 

The first, for Professional or Enterprise versions of Windows, is to enable the inbuilt BitLocker software. The second, for other versions of Windows, is to download and install the free VeraCrypt encryption software.

 

Business data that is being stored in the cloud should always be encrypted, since it is always uncertain in which country or countries the cloud storage is located, and those countries’ jurisdictions may not place a high level of protection on data, even to the extent of intercepting and analyzing it themselves.

 

Sensitive information that is being moved to another location – whether by some form of media like a memory stick or by email – should always be encrypted, so that, again, anyone who is able to intercept the transmission or steal the media will be unable to access the information.
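
As an illustration of encrypting a file before it leaves the organization, the sketch below uses the Python cryptography package’s Fernet recipe (an assumed tool choice, not one prescribed by any particular policy); the key itself must of course travel by a separate, trusted channel.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# Generate and safely store a key; whoever holds the key can decrypt.
key = Fernet.generate_key()
f = Fernet(key)

# Encrypt a sensitive file before copying it to removable media or email.
# (Filenames are illustrative only.)
with open("customer-list.xlsx", "rb") as src:
    ciphertext = f.encrypt(src.read())
with open("customer-list.xlsx.enc", "wb") as dst:
    dst.write(ciphertext)

# The recipient, who has received the key via a separate channel, reverses it.
with open("customer-list.xlsx.enc", "rb") as src:
    plaintext = Fernet(key).decrypt(src.read())
```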

 

The key lengths used in symmetric encryption algorithms such as the Data Encryption Standard (DES), 3DES and AES are typically 56, 112, 128 or 256 bits. The keys used in asymmetric or public key cryptography (typically 2,048 bits in length) are instead used in the initial set-up of an encrypted session, during which the fixed symmetric key that will encrypt the remainder of the session is agreed.

 

These keys are not typically used for the main encryption work because they require too much computation resource.
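
The hybrid pattern described above can be sketched as follows, using the Python cryptography package (an assumed choice): a 2048-bit RSA key pair protects a freshly generated 256-bit AES session key, and the AES key does the bulk encryption.

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Recipient's long-lived asymmetric key pair (2048-bit RSA).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: create a fresh 256-bit symmetric session key and encrypt the bulk data with it.
session_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"the bulk message", None)

# Sender: wrap the small session key with the recipient's public key.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# Recipient: unwrap the session key with the private key, then decrypt the bulk data.
recovered_key = private_key.decrypt(wrapped_key, oaep)
plaintext = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)
```

This is only a sketch of the principle; protocols such as TLS add key agreement, authentication and integrity protection on top of it.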

 

Secure Socket Shell (SSH) and Transport Layer Security (TLS) keys


Secure Socket Shell (SSH) is a network protocol that provides administrators with a secure method of access to remote systems. It provides strong authentication and encrypted communication between two systems over an insecure network, especially the internet.

 

It is widely used by network administrators for the remote management of systems and applications, enabling them to log on to another system, execute commands and move files between systems.

 

The Transport Layer Security (TLS) protocol provides both confidentiality and integrity between two communicating applications exchanging information such as that between a user’s web browser and an internet banking or e-commerce application. TLS is also used in VPN connections, instant messaging services, and Voice-Over IP (VoIP) applications.

 

Both SSH and TLS make use of encryption keys (as described above) to secure these transfers; the symmetric session keys are typically 256 bits in length.

 

Abuse of SSH and TLS keys is not uncommon, and keys that remain valid after an insider leaves the organization leave critical network infrastructure open to malicious access. To reduce this likelihood, it is recommended that organizations rotate SSH and TLS keys at regular intervals.
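
A very simple way to spot overdue key material is to compare the age of key files against the rotation policy. The sketch below does this in Python, using file modification time as a rough proxy for key age; the paths and the 90-day interval are hypothetical examples.

```python
import os
import time

# Hypothetical inventory of key material and a 90-day rotation policy.
KEY_FILES = [
    "/etc/ssh/ssh_host_ed25519_key",
    "/etc/pki/tls/private/www.example.com.key",
]
MAX_AGE_DAYS = 90

def keys_due_for_rotation(paths, max_age_days=MAX_AGE_DAYS):
    """Return the key files whose last modification is older than the policy allows."""
    cutoff = time.time() - max_age_days * 86400
    return [p for p in paths if os.path.exists(p) and os.path.getmtime(p) < cutoff]

for path in keys_due_for_rotation(KEY_FILES):
    print(f"ROTATE: {path} exceeds the {MAX_AGE_DAYS}-day rotation interval")
```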

 

Digital certificates

Digital certificates are widely used to provide authentication of websites, particularly when conducting financial transactions. Digital certificates can be purchased from accredited certification authorities (CAs) both for personal use and by organizations.

 

However, it is important to remember to renew the certificate (normally annually), since failure to do so renders the certificate useless, and users whose web browser detects this will receive a notification that the certificate has expired. This may result in their deciding not to continue with the online transaction.
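
Expiry is easy to monitor automatically. The following Python sketch connects to a site over TLS and reports how many days remain on its certificate, so that a renewal reminder can be raised well before users start seeing browser warnings.

```python
import ssl
import socket
from datetime import datetime, timezone

def days_until_expiry(hostname: str, port: int = 443) -> int:
    """Connect to a TLS server and return the days remaining on its certificate."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # 'notAfter' looks like 'Jun  1 12:00:00 2025 GMT'
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

print(days_until_expiry("example.com"))
```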

 

Email attachments


As an integral part of their awareness training, employees should be instructed that they should not open email attachments unless they are expecting them.

 

Additionally, users should be forbidden to execute software that has been downloaded from the internet unless it has been scanned for viruses and tested for security vulnerabilities. Users who visit a compromised website can unintentionally introduce malware.

 

Organizations should configure email servers to block or remove emails containing the file attachment types that are commonly used to spread malware, such as .vbs, .bat, .exe, .pif, .zip and .scr files.
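
A simplified version of such a filter might look like the Python sketch below, which inspects a message’s attachment filenames against a blocklist of extensions; production mail filters would of course also inspect content, not just filenames.

```python
from email import message_from_bytes

# Extensions commonly abused to deliver malware (as listed above).
BLOCKED_EXTENSIONS = {".vbs", ".bat", ".exe", ".pif", ".zip", ".scr"}

def has_blocked_attachment(raw_message: bytes) -> bool:
    """Return True if any attachment filename ends in a blocked extension."""
    msg = message_from_bytes(raw_message)
    for part in msg.walk():
        filename = part.get_filename()
        if filename and any(filename.lower().endswith(ext) for ext in BLOCKED_EXTENSIONS):
            return True
    return False
```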

 

Network security

Network security policies are very wide-ranging, taking into account how the organization's networks can be secured against intrusion using a combination of firewalls, intrusion detection software, antivirus software, operating system and application patching, and password protection.

 

These should cover fixed and wireless local area networks (LANs), virtual private networks (VPNs), wide area networks (WANs) and storage area networks (SANs).

 

Virtual private networks (VPNs)

The use of virtual private networks is commonplace, especially in larger organizations, and a policy will be required that sets out how and where these are deployed; who may make use of them (for example, for remote access by staff, guests and third-party contractors); and how they are configured and secured.

 

The use of VPNs should be part of the organization’s strategy that includes network segregation and firewall deployment.

 

Physical access


This will define how access to the physical areas of the organization is controlled and may include perimeter fencing and gates with movement detection and/or CCTV systems, electronically controlled gates and physical security guards.

 

Within the organization’s sites, physical access control will normally be governed by electronic door access systems, whether by personal identification number (PIN), wireless proximity card or a combination of both.

 

The supporting system will dictate the levels and locations of access available to individual members of staff, visitors and contractors. Internally, infrared movement detection and CCTV systems are also frequently used, especially in highly sensitive areas.

 

Intrusion detection systems (IDS)

As with many security tools, intrusion detection systems are just one weapon in the security manager’s armory. As the name suggests, their purpose is to try to identify when unauthorized intrusion to a network or computer system is being attempted, and they are available in a variety of forms:

 

Host intrusion detection systems (HIDS) are installed on individual computer systems, and monitor that system’s configuration only. If a HIDS perceives an abnormal change in a system configuration, it will send an alert message to a console for a security operator to examine.
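
The underlying idea can be illustrated with a short Python sketch that records a baseline hash of a few watched configuration files (hypothetical paths) and raises an alert when any of them changes; real HIDS products add scheduling, tamper protection and central reporting.

```python
import hashlib

# Hypothetical set of configuration files a host-based check might watch.
WATCHED_FILES = ["/etc/passwd", "/etc/ssh/sshd_config", "/etc/hosts"]

def snapshot(paths):
    """Record a SHA-256 hash of each watched file."""
    result = {}
    for path in paths:
        with open(path, "rb") as f:
            result[path] = hashlib.sha256(f.read()).hexdigest()
    return result

def detect_changes(baseline, current):
    """Return the files whose contents differ from the stored baseline."""
    return [p for p, digest in current.items() if baseline.get(p) != digest]

baseline = snapshot(WATCHED_FILES)          # taken when the system is known-good
# ... later, perhaps on a schedule ...
changed = detect_changes(baseline, snapshot(WATCHED_FILES))
for path in changed:
    print(f"ALERT: configuration change detected in {path}")
```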

 

Network intrusion detection systems (NIDS) are installed on internal networks and subnetworks in order to detect abnormal network traffic such as attacks on firewalls. They will also report to a console if they detect an attack but additionally can take some form of action, such as to change firewall rules.

 

Under certain circumstances, it may be necessary to undertake such work using forensic techniques and to retain hard drives and data for possible use in legal proceedings.

 

CHANGING PEOPLE’S BEHAVIOR

 

Human beings, who are almost unique in having the ability to learn from the experience of others, are also remarkable for their apparent disinclination to do so.

 

Security liabilities

More often than not, the greatest security liability in any organization is the user. They may not act deliberately, but they will often unintentionally perform acts of cyber vandalism that cause untold problems for the IT and security support staff.

 

Their actions (or inactions) may mean that they behave inappropriately and release information, or allow information to be released, but this is often because they have not been properly trained by the organization to react appropriately to information security events.

 

Some – but not all – of this can be corrected by educating and training the users in good security practice, making them aware of the risks that they will face when using both their own and the organization’s systems.

 

The ‘not all’ referred to above covers two different aspects of human behavior – first, when the user simply forgets or ignores their training, and second, when they are carrying out some action in a very deliberate manner, either to cause loss of the organization’s information (selling it to a competitor for example) or to cause damage or loss as an act of revenge.

 

However, making users aware of the threats, vulnerabilities, and impacts that they may face is an essential precursor to training.

 

There is little that the organization can do to ensure that users never make a mistake, although some organizations, as a means of reducing the likelihood, levy a fine on staff who leave sensitive documents or their computers unattended.

 

Preventing or reducing the likelihood of information theft or damage to systems and information can be achieved to a certain extent by implementing very strict access control mechanisms and introducing monitoring software that looks for anomalies in user behavior and flags up an early warning if something out of character is detected.

 

Banks and credit card companies adopt a similar approach as a means of early detection of fraud, and will often contact a customer immediately if they appear to be making purchases that do not match previous spending patterns.
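
A toy version of such anomaly detection is sketched below in Python: a new value is flagged if it sits more than three standard deviations away from a user’s recent history. Real monitoring products use far richer behavioural models, but the principle is the same.

```python
from statistics import mean, stdev

def is_anomalous(history, new_value, threshold=3.0):
    """Flag a value more than `threshold` standard deviations from the user's history."""
    if len(history) < 2:
        return False               # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_value != mu
    return abs(new_value - mu) / sigma > threshold

# A user who normally downloads a few megabytes per day suddenly pulls 50 GB.
daily_mb_downloaded = [12, 8, 15, 9, 11, 14, 10]
print(is_anomalous(daily_mb_downloaded, 50_000))   # True - worth an early warning
```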

 

Although it may appear obvious, it is worth stating that awareness and training are two different but inter-related concepts. Awareness provides users with the information they need in order to avoid making mistakes, whilst training equips them with the skills they require to deal effectively with challenging situations when they arise. 

 

This blog focuses mainly on changing people’s behavior so that the instances of people-related cyber-attacks can be reduced.

 

AWARENESS


Awareness of cybersecurity issues permits both individuals and an organization’s users to act as a first – or indeed a last – line of defense in combating cyber-attacks. It is never a one-off activity and should be considered to be an integral part of personal development, whilst remaining a rather less formal activity than training.

 

An awareness programme allows people to understand the threats they face whenever they use a computer, the techniques used by social engineers to achieve their goals, the vulnerabilities faced by them or by their organization, and finally the potential impacts of their actions or inactions.

 

This doesn’t imply that it is necessary to turn everybody into cybersecurity experts, but that a basic level of understanding is required, similar to that in driving a car – we need to know how to operate the vehicle, the rules of the road and the dangers we face, but we do not need to understand how the engine management system works.

 

As with any process, there are a number of discrete steps in an awareness programme:

 

Plan and design the programme:

  • select the most appropriate topics for awareness, such as email etiquette, correct handling of information assets or password security;
  • make a business case to justify any expenditure;
  • develop a means of communicating with the users.

Deliver and manage the programme:

  • develop the materials and content;
  • implement the awareness campaign.

Evaluate and modify the programme as necessary:

  • evaluate the campaign’s effectiveness;
  • improve and update the material with new information.

 

Like many other aspects of working life, awareness is a journey, not a destination, since new people will join the organization and need to be included in the programme, and new threats and vulnerabilities will arise.

 

The campaign should also focus on continuous reinforcement through such things as poster campaigns and pop-ups when people access the internet or log on.

 

The general trend of user engagement in the programme should be along the lines of:

  • initial contact with the user community – letting them know that something will be happening in which they will need to become involved and providing a general idea of what the programme will be all about so that their expectations can be managed;
  • further understanding of the programme, so that they appreciate what the implications will be for them;
  • timely engagement, so that they begin to understand that there is a new way of working;
  • acceptance by users, in which the user community begin to work in a new way;
  • full commitment to new ways of working, so that they do not revert to their old ways;
  • evangelism, in which they encourage others to follow their example.

 

Possible obstacles to a successful awareness programme

It is easy to assume that once an awareness programme is underway all will go to plan, and that organizations will only need to react and respond to problems when they arise. However, being forewarned about some of the possible issues allows organizations to have a contingency plan in place so that they can react faster.

 

Some of the issues that organizations may face include:

Initial lack of understanding. When the awareness programme is initiated, it is vital that the communication that goes out to the audience involved explains not just what the organization expects to achieve, but also why it is undertaking the work. This will greatly aid acceptance of the programme.

 

The introduction of new technology, which complicates a programme that is already underway. Changes in the IT infrastructure can either enhance the ability to deliver the message or complicate it; but as long as people from that part of the organization are involved in the awareness programme, the team should be aware of the possibility before it arises and be able to include it in the programme or work around the problem.

 

One size never fits all. Every organization is different, there are no standard methods of operating an awareness programme, and even within one organization different types of audience may have different requirements.

 

Also, there will be a considerable difference in both the size and the scope of an awareness programme between one for a large organization and one for an SME.

 

Trying to deliver too much information. Many users in an organization will be non-technical, and so the focus of the programme must take into account that the more technical aspects of cybersecurity could overwhelm them.

 

It is essential to keep the focus on what the audience needs to know and not try to extend the delivery of information to be too technical. Less is more.

 

Ongoing management of the programme can become a challenge. If this becomes the case, there is a real chance that the programme will flounder through lack of support from those areas of the organization involved in its delivery, and therefore senior management commitment must be assured.

 

Follow-up failure. Failing to follow up can and will cause problems for the programme, since it is vital that the team understand how well the message has been received, understood and acted upon by the target audience. Regular monitoring and reviews are essential to delivering a quality programme.

 

Inappropriate targeting of the subject matter. This can have a negative effect on the programme, since groups within the organization may be receiving some awareness information that has little or no impact on their role, whilst others are not receiving information that would be essential to their daily activities.

 

Ingrained behaviors. These are a constant challenge in this kind of programme. Some people will always challenge the programme saying, ‘We’ve always done it this way and it has always worked, so why should we change?’ Any organization running an awareness programme must expect this kind of response and must develop sound arguments against it.

 

Some people will take the view that security is the responsibility of the IT or security department. It is essential that they are disabused of this notion at an early stage and throughout the ongoing campaign. Cybersecurity is everybody’s problem and is not restricted to one department.

 

Programme planning and design


The process commences with the establishment of a small team who will develop and run the programme. Some of them will naturally have a degree of expertise in information security, whilst others may represent those parts of the organization that might suffer serious impacts in the event of a cyber-attack.

 

It may also be beneficial to involve the internal audit function, who may be able to offer constructive advice, since a programme such as this may well be audited at a later stage, and it is always good to have the auditors on your side.

 

The team’s initial task will be to define the exact goals and objectives of the programme, and this will include whether the target audience is to be the whole organization or just a small part as a pilot project.

 

This latter option may be a much more beneficial approach, since it should be able to achieve its objectives on a small and therefore less costly scale than targeting the whole organization, before widening the programme to include everyone.

 

In the initial part of the programme, the target audience might also be limited to one particular type of user, such as:

  • employees working full-time in the organization’s premises. These are frequently the kind of users who will benefit the most from receiving cybersecurity awareness training;

 

  • home-based users, who will have similar, but slightly more complex needs. Due to the different requirements for connecting into the organization’s network, these users may require a slightly higher level of understanding of the issues at stake;

 

  • third party users, such as contractors, outsourced staff and suppliers who require connections into the organization’s networks in order to undertake their work;

 

  • system administrators and IT support staff, who will already have at least a general appreciation of the issues;
  • management-level users, who may be responsible for in-house employees or home-based users, and who need to understand how cybersecurity issues will affect their departments; 

 

  • senior executive users, who will be responsible for making many of the business decisions that would be impacted by a successful cyber-attack.

 

Alternatively, the organization may decide to target a cross-section of users from different groups so that the overall organizational benefits can be seen, rather than solely those for a particular community.

 

Some topics will have greater relevance to particular target groups, such as the issues of social engineering, which may possibly be more relevant to staff who have regular contact with customers and suppliers than to those who do not.

 

This does not imply that those who do not have as much external contact should not be included in that aspect of awareness, but that they might gain less from it.

 

Next, in the development of the programme, the team must clearly identify the topics that will be covered.

 

It is pointless trying to cover all aspects of cyber awareness since this will simply overwhelm the audience; instead, the programme should focus initially on a very tightly defined subset such as usernames and passwords, spam email or social engineering.

 

The campaign can be widened at a later stage once the results of the earlier work have been examined and the techniques used have been refined where appropriate.

 

The methods of communicating the message to the user community will vary considerably, and may well consist of some or all of the following:

  • posters, which can be placed where the staff can easily engage with the message, such as meeting rooms and other shared areas. Some posters might have a humorous focus in order to lighten the message, whilst others could be somewhat darker;

 

  • newsletters, which can be delivered by desk-drop in office buildings, or by email for offices and home workers alike;
  • give-away items such as coasters, coffee mugs, key fobs and mouse mats, which continue to reinforce the general message for as long as they are used;

 

  • screensavers, which might display a variety of messages, and which could be changed either at regular intervals or when a new message must be given out;
  • intranet websites that provide helpful advice, examples of good and bad cybersecurity behavior and links to additional informative material and training;

 

  • fact sheets and leaflets, which may be particularly relevant to a group within the organization, to the whole organization or to its business sector;

 

  • presentations at team meetings, in which a guest speaker talks for a few minutes on a hot topic and takes questions about the whole awareness programme, keeping the presentation ‘short and sweet’;

 

  • computer-based training (CBT), which delivers a more detailed level of knowledge and may be a mandatory requirement for certain users’ work, for example training on data protection legislation.

 

Once this part of the work is complete, the team may well have to approach the senior management team or board of directors to obtain funding approval, since it is unrealistic to expect that the work can be undertaken at no cost.

 

As with all business cases, the approach should focus on the likely impacts that will occur if the work does not proceed, as well as the benefits that will accrue when it does.

 

This is another reason for keeping the initial part of the campaign to a reduced volume of information, since the costs will be lower and the board will find it easier to give approval.

 

Success at this early stage will then make it much easier to obtain board approval for further expenditure when the campaign moves on to cover more aspects of cybersecurity awareness.

 

The costs can be more easily identified if they are broken down into manageable areas, for example:

  • the hourly costs of staff who are engaged in delivering the awareness campaign, as well as those who will be on the receiving end;
  • development costs, including development and maintenance of any intranet websites or the production of materials such as posters and newsletters;
  • promotional costs, such as give-away items including branded pens, coffee mugs, key fobs, mouse mats and the like;
  • training costs, where external trainers are brought in to deliver all or part of the awareness campaign.

Some of these will be one-off costs, whilst others will be recurring, and the board will expect both to be clearly identified.

 

It should also be possible to attempt to quantify the potential impacts since the directors of organizations will need to be certain that the programme will deliver value for money.

 

Potential impacts can include not only the direct financial losses anticipated if a particular incident occurs, such as the loss of sales revenue and the expenditure that would be incurred in responding to and recovering from the incident, but also the indirect losses such as share value, brand and the organization’s reputation, although these can be rather more subjective in nature.

 

Delivery and management of the programme

Although we have called this an awareness campaign, it actually goes further than this, because awareness is only the first stage in which the target audience is made aware of what they should know and when they are likely to need the information.

 

This may be delivered in a variety of ways, for example by printed material, email, electronic newsletters and intranet portals for those organizations having more sophisticated resources.

 

The campaign then moves up a level so that the target audience gain an understanding of why they need to be involved and how best they can participate. This may include raising awareness topics at team meetings and delivering specific presentations on the subject matter.

 

Evaluation and modification of the programme

Finally, the campaign is ready to see results from the earlier work and to evaluate its effectiveness, and as the campaign develops and widens its scope, the organization will expect to see the benefits in reduced or zero instances of successful cyber-attacks.

 

The team must ensure that the entire exercise has been carefully documented and that they can demonstrate the resulting benefits at the end of the pilot project so that more of the organization and greater areas of cyber security awareness can be addressed.

 

Once presented back to the board, success should breed success, and the team should be better placed to move on to raising awareness for the wider organization or in more topic areas.

 

The presentation should focus on both the financial and non-financial benefits, the value to the business itself and also to its external stakeholders, including suppliers and customers and the sector regulator if applicable, and should be completely honest about both the overall costs and the potential impacts of not progressing with a full rollout.

 

Once the board has given their commitment for this, the pilot user group should be given an acknowledgment for their involvement, as this will not only reinforce the importance of the programme but will encourage others to become actively involved.

 

TRAINING


As mentioned earlier in this blog, awareness and training are two entirely different, but interconnected concepts. Whilst awareness places cybersecurity issues firmly in the minds of the user community in an organization, training will deliver very specific and often highly targeted information to those individuals or groups who have a specific requirement for it.

 

Training, and especially highly technical training, can be costly but, as with awareness, it has a direct payback in terms of reducing the number of incidents and the potential financial impact on the organization.

 

Cybersecurity training falls into two distinct categories:

Generic training, in which the underlying concepts of cybersecurity are explained, and which give a sound appreciation of the issues. This may be required by those managers who are responsible for specialist security design and operational staff.

 

Specialized cybersecurity training, in which very specific skills are taught to a limited audience, such as the security staff who manage the organization’s security infrastructure.

 

A few final points to consider

In the case of product- or technology-specific training, it should be taken into account that technology changes at an alarming rate, so updated courses will undoubtedly become necessary as time progresses.

 

The requirement for ongoing budget allocations for this should be factored into the cost estimates when preparing business cases.

 

One method of reducing training costs is by identifying those staff who already possess training skills, and who can pass on their knowledge to others. This ‘train the trainer’ approach can work well when budgets are limited.

 

The business cases for both generic and specialized cybersecurity training will need to be developed and presented on a case-by-case basis and should be presented in a similar manner to those for the awareness programme.

 

However, instead of being focused solely on benefits to the organization as a whole by targeting all users within the organization, these business cases should also focus on benefits to the organization by addressing the specific training needs of individual specialists.

 

A final note on trust and information sharing:

The world-famous Chatham House Rule may be invoked at meetings to encourage openness and the sharing of information. As far as the classification of information to be shared is concerned, trust works on two levels.

 

First, the originator must ensure that the information has been correctly classified, and must be confident that the recipients will handle the information in line with that classification. 

 

Second, recipients must have sufficient trust in the integrity of the originator so that they can have the same level of confidence in the accuracy and reliability of the information.

 

One final aspect of trust is the ability to have an independent party, trusted by all members of an information sharing community, who can act as a moderator, and can also perform the role of go-between in certain situations as we shall see later. This individual is sometimes known as the Trust Master.

 

INFORMATION CLASSIFICATION


Information to be shared must be classified according to its sensitivity, and whatever method is used, it must be possible for it to be used by both public and private sectors without the need to cross-reference their information classification schemes.

 

As mentioned previously, the Traffic Light Protocol is used by many information sharing initiatives and classifies information as one of four colors:

 

RED – Personal for named recipients only – in the context of a face-to-face meeting, for example, distribution of RED information is limited to those present at the meeting, and in most circumstances, will be passed verbally or in person.

 

AMBER – Limited distribution – recipients may share AMBER information with others within their organization, but only on a ‘need-to-know’ basis. The originator may be expected to specify the intended limits of that sharing.

 

GREEN – Community-wide – information in this category can be circulated widely within a particular community or organization. However, the information may not be published or posted on the internet, nor released outside the community.

 

WHITE – Unlimited – subject to standard copyright rules, WHITE information may be distributed freely and without restriction.

 

This method of information classification is widely used in information sharing communities around the world, since it is very simple to understand and implement, and additionally can be readily understood in other sectors or countries.
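
For systems that handle shared information programmatically, the four classifications can be represented directly in code. The Python sketch below is a simplified illustration of the sharing rules described above, not an official TLP implementation.

```python
from enum import Enum

class TLP(Enum):
    """Traffic Light Protocol classifications and their sharing rules (as described above)."""
    RED = "Personal for named recipients only"
    AMBER = "Limited distribution on a need-to-know basis within recipient organizations"
    GREEN = "May circulate within the community, but not be published outside it"
    WHITE = "Unlimited distribution, subject to standard copyright rules"

def may_forward(classification: TLP, recipient_in_community: bool, named_recipient: bool) -> bool:
    """Very simplified check of whether onward sharing is permitted."""
    if classification is TLP.RED:
        return named_recipient
    if classification in (TLP.AMBER, TLP.GREEN):
        return recipient_in_community
    return True  # WHITE

print(may_forward(TLP.GREEN, recipient_in_community=True, named_recipient=False))  # True
```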

 

Most of the time, the originator of the information to be shared will determine its classification color, but on occasion, Trust Masters may decide to raise it if they feel that it is set too low.

 

PROTECTION OF SHARED INFORMATION


When information is being shared, the originator may consider it necessary to restrict its onward distribution, or to ensure that the information can be revoked or deleted in situations where it is no longer valid, or when its level of sensitivity has changed.

 

This can be achieved by the use of a technique sometimes known as ‘information rights management’, which works by encrypting the information – for example, a text document – and allowing it to be opened by the recipients provided they can identify themselves to the central sharing resource.

 

Further, the document can be given additional protection options, so that it can never be copied (including copying selected parts of the document, which prevents them being pasted into an unprotected document) or printed (preventing its onward distribution in physical or scanned form).

 

If the document is forwarded to another recipient, they in turn will need access rights on the central sharing resource. If the originator decides to remove the original document, any remaining copies can no longer be opened, since the metadata that enables decryption will also be deleted.
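
The mechanism can be illustrated with a toy Python sketch: the central sharing resource holds the per-document key and the list of authorised recipients, so revoking a document is simply a matter of deleting its key. Names and structure here are illustrative only; commercial information rights management products are considerably more sophisticated.

```python
# pip install cryptography
from cryptography.fernet import Fernet

class SharingService:
    """Toy central sharing resource: holds one key per document and checks identity."""
    def __init__(self):
        self._keys = {}          # document id -> key
        self._allowed = {}       # document id -> set of authorised recipients

    def publish(self, doc_id, plaintext, recipients):
        key = Fernet.generate_key()
        self._keys[doc_id] = key
        self._allowed[doc_id] = set(recipients)
        return Fernet(key).encrypt(plaintext)   # ciphertext can be distributed freely

    def open_document(self, doc_id, user, ciphertext):
        if doc_id not in self._keys or user not in self._allowed[doc_id]:
            raise PermissionError("not authorised or document revoked")
        return Fernet(self._keys[doc_id]).decrypt(ciphertext)

    def revoke(self, doc_id):
        self._keys.pop(doc_id, None)            # once the key is gone, copies are unreadable

service = SharingService()
blob = service.publish("doc-42", b"sensitive findings", recipients={"alice", "bob"})
print(service.open_document("doc-42", "alice", blob))   # works
service.revoke("doc-42")
# service.open_document("doc-42", "bob", blob)          # would now raise PermissionError
```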

 

As with information classification, originators must ensure that the information has been appropriately protected, and again, recipients must have sufficient trust in the integrity of the originator so that they can have the same level of confidence in the accuracy and reliability of the information.

 

It makes good business sense in organizations that have a requirement for very strict confidentiality to run all incoming or outgoing emails through a scanning system that is able to detect and isolate any message containing particular words or phrases, or which can direct encrypted messages to a central verification point prior to their release.

 

ANONYMISATION OF SHARED INFORMATION


Situations will inevitably arise when a participating organization does not wish to be identified as having been the victim of an attack (possibly even more so for a successful attack) or other cybersecurity situation in which they have become embroiled.

 

The reasons for this are generally connected with commercial interests, and organizations may be reluctant for a competitor who is part of the same information sharing community to know whom the incident affected since this might place that organization at a competitive disadvantage or have a negative effect on their share price or public reputation.

 

At the same time, however, they might still wish details of the exploit to be made available to the wider community.

 

In face-to-face situations, such an organization might well approach the Trust Master and request that they raise the matter without identifying the originator.

 

The Trust Master will take great pains to ensure that this request for anonymity is respected, ensuring that even having omitted the originator’s identity, the information passed on contains no clues or additional metadata that might reveal, infer, suggest or identify the originator in any way.

 

In the context of a centralized information sharing system, the Trust Master’s role must be performed by the system itself in conjunction with the originator of the information being shared. There are two general courses of action:

 

The originator can select an ‘anonymise’ option on the system’s preferences when setting up the specific information to be shared. This will remove any reference as to who originally submitted the information.

 

However, should the information include other documents, for example, word processed documents, spreadsheets or presentations, the originator will be responsible for completely anonymizing these.
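
A minimal illustration of the first option is sketched below in Python: identifying fields (the field names are hypothetical) are stripped from a submission before it is shared. As noted above, any attached documents would still need to be anonymised separately by the originator.

```python
# Fields that could identify the originator of a shared submission (hypothetical names).
IDENTIFYING_FIELDS = {"submitted_by", "organization", "contact_email", "source_ip_range"}

def anonymise(submission: dict) -> dict:
    """Return a copy of a submission with originator-identifying fields removed."""
    return {k: v for k, v in submission.items() if k not in IDENTIFYING_FIELDS}

report = {
    "title": "Credential-phishing campaign observed",
    "indicators": ["hxxp://example.test/login"],
    "submitted_by": "alice@victim.example",
    "organization": "Victim Ltd",
}
print(anonymise(report))   # identity fields are gone; attached documents still need manual review
```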

 

The originator can select an ‘anonymise via the Trust Master’ option instead. In this situation, the originator openly sends the information to the Trust Master, who then submits it to the community as if it had come from the Trust Master alone.

 

Here, the application of trust works slightly differently. Originators must again ensure that nothing in the information being shared can reveal their identity, nor could their identity be inferred from the content detail.

 

They must also have trust in both the information sharing system and the Trust Master that their identity will not be revealed. No additional trust is required here by the recipient.

 

Organizations, or groups of communities, who wish to provide their own centralised systems for information sharing may later wish to interconnect these so that they can widen the scope of their operations, since some cybersecurity submissions will inevitably be of significant interest to other sectors. Sharing information with them would be highly beneficial, if not essential, and can often avoid duplication of effort.

 

In order to supplement the ISO/IEC 27001 standard, the ISO produced an additional standard, ISO/IEC 27010:2015, that covers the secure exchange of information between centralized systems.

 

 Contact – and therefore trust – may already have been established between these different groups, communities or sectors, in which case information might be freely shared between them, following the same rules as those for sharing within a sector.

 

Alternatively, if no previous contact has been established and therefore no degree of trust exists, the Trust Masters in those sectors wishing to share information can act as intermediaries and initiate a limited degree of information sharing – possibly one-way only in the first instance – and subsequently encourage bilateral information sharing as an increasing level of trust develops.

 

Finally, once trust is fully established between the sectors, the Trust Masters may set preferences in the information sharing system that allow individual sector users to share information – either on a one-to-one basis with a peer in another sector or more widely to a whole sector.

 

Originators of information should have the same degree of trust in users within a different sector as they do for users within their own sector. The information should be classified, protected and anonymized in exactly the same way. 

 

From the recipient’s point of view, the only thing that matters is that they have trust in the originators of the information and therefore in the information itself.

 

ROUTES TO INFORMATION SHARING


There are four major routes to sharing information regarding cybersecurity issues, each of which has its own unique characteristics:

  • warning, advice and reporting points (WARPs);
  • the Cyber Security Information Sharing Partnership (CiSP);
  • computer emergency response teams (CERTs) and computer security incident response teams (CSIRTs);
  • security information exchanges (SIEs) and information sharing and analysis centers (ISACs).

Additionally, an excellent Good Practice Guide to Network Security Information Exchanges has been written by the European Union Agency for Network and Information Security (ENISA).

 

Warning, advice and reporting points (WARPs)

WARPs are a UK initiative that began in 2002 under the auspices of the National Infrastructure Security Coordination Centre (NISCC), which is now known as CPNI. WARPs allow their members to receive and share up-to-date cyber threat information and best practice.

 

WARPs are now provided through CERT-UK’s CiSP. Members of current WARPs tend to be regional government, emergency services or military organizations.

 

Cyber Security Information Sharing Partnership (CiSP)

The CiSP is an initiative set up jointly between UK industry and government in order to share cybersecurity threat and vulnerability information. The objective is to increase situational awareness of cyber threats with a consequent reduction of impact on UK businesses.

 

CiSP membership can only be given to UK-registered companies responsible for the administration of an electronic communications network in the UK, or to organizations that are sponsored by either a government department, an existing CiSP member or a trade body or association.

 

CiSP members are able to exchange cyber threat information in real time, in a secure environment, operating within a framework that protects confidentiality. Information shared includes alerts and advisories, weekly and monthly summaries and trend analysis reporting.

 

Computer emergency response teams (CERTs) and computer security incident response teams (CSIRTs)

 

CERTs have been in existence for some years now – the practice of collecting, analyzing and distributing security advisories, originally begun at Carnegie Mellon University in the US, has been a major influence on all sectors worldwide. CERTs and CSIRTs carry out the same function, and the acronyms are used interchangeably.

 

Many countries now operate a CERT/CSIRT, and even some larger multinational organizations whose enterprises cross traditional national and continental boundaries may do likewise.

 

In the UK, CERT-UK has four main responsibilities that flow from the UK’s Cyber Security Strategy:

  • national cybersecurity incident management;
  • support to critical national infrastructure companies to handle cyber security incidents;
  • promoting cybersecurity situational awareness across industry, academia and the public sector;

 

  • providing the single international point of contact for coordination and collaboration between national CERTs.

 

Subscription to a CERT or CSIRT is possible for almost any individual or organization wishing to receive updates. However, sometimes the volume and frequency of these can be overwhelming.

 

As an example, CERT-UK provides three main work streams:

Alerts – In the exceptional event of a critical national cybersecurity incident, CERT-UK will issue an alert and appropriate guidance.

Advisories – CERT-UK issues advisories that address cybersecurity issues being detected across government, industry or academia, or that offer best practice updates.

 

Best practice guides – Through CiSP, CERT-UK provides regular advice and guidance on a range of cyber issues, with the aims of sharing information and encouraging best practice amongst its partners.

 

Security information exchanges (SIEs) and information sharing and analysis centers (ISACs)

Whereas CERTs and CSIRTs concentrate both on information collection and response to incidents, SIEs and ISACs provide solely a means of exchanging information about threats, vulnerabilities, and incidents. SIEs tend to provide raw data about incidents, whereas ISACs tend to provide a deeper analysis and suggestions for a response.

