Risk Management Process (The Complete Guide 2019)


This guide explores the complete risk management process with examples, explains how to reduce security risks in your organization, and shows how to implement cybersecurity.

 

A GENERAL VIEW OF RISK

Impact, or consequence, is just one of the elements of risk. The others are assets – the things we care about; threats – the things that might harm those assets; vulnerabilities – those things that weaken our defenses against cyber-attacks; and likelihood or probability – the chance that a threat will be successfully carried out.

 

ASSETS


Assets in the wider sense can be almost anything, but in cybersecurity terms, assets include not only the data and information we may be trying to protect, but also the complete technical infrastructure – hardware, software, HVAC and premises.

 

Last, but by no means least, are the staff who have the technical knowledge and skills to design and implement the appropriate security measures, to maintain them and to respond to incidents.

 

VULNERABILITIES


Vulnerabilities are things that reduce the effectiveness of securing assets and come in two distinct varieties.

 

Intrinsic vulnerabilities are inherent in the very nature of an asset, such as the ease of erasing information from magnetic media (whether accidentally or deliberately), whereas extrinsic vulnerabilities arise from things that are poorly applied, such as software that is out of date due to a lack of patching, or vulnerable due to poor coding practices.

 

Threats exploit vulnerabilities in order to cause an impact to an asset, whether it is copied or stolen (confidentiality), changed or damaged (integrity) or access to it is prevented (availability).

 

Vulnerabilities can exist without our knowledge. There may be security issues with an operating system or an application that a hacker has discovered but that are unknown to the software vendor – this type of vulnerability is called a zero-day vulnerability.

 

One of the biggest problems with this kind of vulnerability is that once it becomes known to the hacking community it will be ruthlessly exploited until a fix is developed – and more importantly, applied.

 

Once the software vendor announces the fix, knowledge of the vulnerability becomes even more widespread, which often results in increased attacks. An added danger is that individuals and organizations will fail to apply the fix, placing themselves at greater risk.

 

An interesting twist on the publication of known vulnerabilities is the situation in which attackers reverse engineer the vulnerabilities in order to design and build dedicated attack tools.

 

Other vulnerabilities are more obvious – the lack of antivirus software, which can allow malware to reach the target through email, or the lack of firewall protection, which can result in the same problems for internet access.

 

Disaffected staff can allow malware through the organization’s defenses either by reconfiguring them or by bypassing them completely – introducing malware on a USB stick, for example.

 

Computers without passwords, with default passwords for operating system software and application software, and shared passwords present easy pickings for even the least experienced attacker.

 

QUALITATIVE AND QUANTITATIVE ASSESSMENTS


The problem we face in risk management is deciding which of the two types of measure to use – a subjective assessment of likelihood or an objective assessment of probability.

 

Although we have provided boundaries for the levels, there will be a degree of uncertainty about the upper and lower limits of each, but in general, the ranges should be sufficient to provide a fairly accurate assessment.

 

Clearly, these ranges will differ from one scenario to another but set a common frame of reference when there are a substantial number of assessments to be carried out.

 

THE RISK MANAGEMENT PROCESS

Since we are only taking a brief look at risk management, we will focus on context establishment, risk assessment and risk treatment and omit the communication and consultation, and monitoring and review stages.


Context establishment

If we look at just the basic components of risk as described above, we can certainly make some form of assessment, but unless this is placed within the context in which the organization operates, any judgment will have been taken in isolation.

 

The first stage of the risk management process then is to understand the context in which the organization operates – financial, commercial and political – so that the later steps take these into account when making decisions regarding how to treat the risks.

 

Risk assessment

This second stage of the risk management process is broken down into three distinct areas: risk identification, risk analysis, and risk evaluation.

 

Risk identification

Risk management begins by identifying the assets, deciding what value they have for the organization, and therefore what the impact would be if they were damaged or lost. Every asset requires a single, clearly identified owner who has overall responsibility for it, even if the asset is shared between a number of departments in an organization.

 

Some organizations allocate ownership of information assets to the IT department, but this is a mistake: the IT department can easily become the unwitting owner of many assets over which it has little or no influence, despite those assets being held on its systems, and only the genuine owner of an asset can estimate its value to the organization.

 

Once we have established the assets, their ownership, and their value to the organization, we can move on to understand what might threaten these assets and what (if any) vulnerabilities the assets have, which provides us with a basis for deciding on the likelihood or probability.

 

There is an ongoing debate about which aspects of risk identification come in which order. Some people feel that it is easier to identify the impacts if they understand the threats first; others feel that threat assessment can come later. Whichever approach you favor, it is important that you assess:

  • the impact of the loss or degradation of assets;
  • the vulnerabilities that might contribute to this;
  • the threats those assets face;
  • the likelihood or probability that the threats will exploit the vulnerabilities and result in an impact.

 

When assessing the threats, we can make use of a number of models – one of these is referred to by the initial letters D.R.E.A.D. and asks five questions:

  1. Damage – how bad would an attack be?
  2. Reproducibility – how easy is it to reproduce the attack?
  3. Exploitability – how much work is required to launch the attack?
  4. Affected users – how many people will be impacted?
  5. Discoverability – how easy is it to discover the threat?

 

Although rather subjective, the answer to each question is allocated a value (say between 1 for ‘low’ and 3 for ‘high’), and the sum of the five elements delivers the relative threat level. 
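To make the arithmetic concrete, here is a minimal Python sketch of the D.R.E.A.D. scoring described above, assuming the 1 (low) to 3 (high) scale suggested in the text; the ratings in the example are invented purely for illustration.

```python
# Minimal D.R.E.A.D. scoring sketch: each of the five questions is rated 1 (low)
# to 3 (high) and the sum gives the relative threat level (5 to 15).
DREAD_QUESTIONS = ["damage", "reproducibility", "exploitability",
                   "affected_users", "discoverability"]

def dread_score(ratings: dict) -> int:
    for question in DREAD_QUESTIONS:
        if not 1 <= ratings[question] <= 3:
            raise ValueError(f"{question} must be rated between 1 and 3")
    return sum(ratings[question] for question in DREAD_QUESTIONS)

# Hypothetical threat: easy to reproduce and discover, but affecting few users.
print(dread_score({"damage": 2, "reproducibility": 3, "exploitability": 2,
                   "affected_users": 1, "discoverability": 3}))   # 11 out of a possible 15
```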

 

Impact and likelihood are the two key outputs of this part of the process, and there are two methods of deciding their levels:

  1. qualitative impact and likelihood assessment;
  2. quantitative impact and likelihood assessment.

 

In the case of the qualitative assessment, the outputs are measured in general subjective terms, such as low, medium and high, whereas in quantitative assessment, objective numerical data is used – for example, financial values for impact and percentages for likelihood.

 

Each method has its own merits – qualitative assessment can be carried out quite quickly and does not require detailed research or investigation, whereas quantitative assessment can be time-consuming but will usually deliver more accurate results.

 

It is for the organization to decide whether such a high degree of accuracy adds value to the assessment exercise – if the resulting risk is very high, the problem will require urgent attention, regardless of whether the risk comes out at 90 percent or 95 percent.

 

As already mentioned, there is, however, a halfway house in which qualitative and quantitative assessments are combined in a ‘semi-quantitative’ assessment. In these, boundaries are set for the values.

 

For example, for impact assessments, ‘low’ might indicate a financial value between zero and one million pounds; ‘medium’ might indicate a financial value between one million and ten million pounds; and ‘high’ might indicate a financial value above ten million pounds.

 

Similarly, for likelihood assessments, ‘low’ might indicate a likelihood between zero and 35 percent; ‘medium’ might indicate a likelihood between 35 percent and 70 percent, and ‘high’ might indicate a likelihood above 70 percent. This provides a more meaningful assessment of risk, especially when presenting a business case to the board for approval.
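As an illustration, these semi-quantitative bands can be captured in a couple of simple functions. This is only a sketch using the example boundaries given above; the pound and percentage figures are illustrative, not recommendations.

```python
# Map raw figures onto the illustrative semi-quantitative bands described above.
def impact_band(value_in_pounds: float) -> str:
    if value_in_pounds < 1_000_000:
        return "low"
    if value_in_pounds <= 10_000_000:
        return "medium"
    return "high"

def likelihood_band(percent: float) -> str:
    if percent < 35:
        return "low"
    if percent <= 70:
        return "medium"
    return "high"

print(impact_band(2_500_000), likelihood_band(80))   # medium high
```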

 

Risk analysis


Once we have conducted the initial risk identification, we then take the impact and likelihood and combine them in the form of a risk matrix, which will allow us to compare the risk levels.

 

The risk matrix is simply a pictorial representation of the relative levels of all the risks we have identified, and which will allow us to understand the order in which we wish to treat them, based on some form of priority.

 

Risk matrices most commonly consist of three, four or five ranges of values. Three is often considered to be too few to be meaningful, whilst five allows the possibility of too many results being in the middle. Four is sometimes thought to be a better choice since the assessor must choose some value on either side of the middle ground.

 

In conjunction with others, the risk assessor will allocate a risk category to each part of the matrix, in order to assist prioritization. Alternatively, values can be assigned to each cell in the matrix, which enables the grouping of risks. Risks measuring 1 to 5 might be graded as trivial; 6 to 10 might be minor; 11 to 15 might be moderate; 16 to 20 might be major; and 21 to 25 might be critical.
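A minimal sketch of such a grading scheme, assuming a 5 x 5 matrix in which likelihood and impact are each rated from 1 to 5 and their product is mapped onto the bands just described, might look like this.

```python
# Grade a risk from its likelihood and impact ratings (both 1-5) using the
# product-of-scores bands described above.
def risk_grade(likelihood: int, impact: int) -> str:
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must each be rated 1 to 5")
    score = likelihood * impact
    if score <= 5:
        return "trivial"
    if score <= 10:
        return "minor"
    if score <= 15:
        return "moderate"
    if score <= 20:
        return "major"
    return "critical"

# A likely threat (4) with a severe impact (5) scores 20 and is graded as major.
print(risk_grade(4, 5))   # major
```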

 

Risk evaluation


Finally, we can decide how we are going to deal with the various risks, usually recording the results in a risk register.
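By way of illustration, a risk register entry can be modeled as a simple record. The fields below are a hypothetical selection; real registers vary from one organization to another and often live in a spreadsheet or a GRC tool.

```python
# A hypothetical, simplified risk register entry.
from dataclasses import dataclass

@dataclass
class RiskRegisterEntry:
    asset: str
    owner: str
    threat: str
    vulnerability: str
    impact: str          # e.g. low / medium / high
    likelihood: str      # e.g. low / medium / high
    treatment: str       # avoid, transfer, reduce or accept
    residual_risk: str

entry = RiskRegisterEntry(
    asset="Customer database",
    owner="Head of Sales",
    threat="External attacker exfiltrating records",
    vulnerability="Unpatched database server",
    impact="high",
    likelihood="medium",
    treatment="reduce",
    residual_risk="low",
)
print(entry)
```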

 

Risk avoidance or termination

In this method of risk treatment, we either stop doing whatever it is that has caused or might cause the risk, or if it is a planned activity, we simply avoid doing it. Whilst this will usually result in the risk being completely eliminated, it may cause the organization other problems.

 

For example, if an organization was planning to build a data center and the risk assessment indicated a high likelihood of flooding at the proposed location, the decision would almost certainly be to avoid the risk by abandoning that location and building elsewhere.

 

However, this might prove problematic, since alternative sites might be difficult to locate, be excessively costly or have other limiting factors. This would result in the organization reviewing all these risks against one another.

 

Risk sharing or transfer

If we find that we cannot avoid the risk, an organization may decide to share it with a third party. 

 

This is usually in the form of insurance, but it is important to remember that even though the organization may let someone else share or take the risk, they still own the responsibility for it.

 

However, some insurance companies will refuse to insure certain types of risk, particularly when the full possible impact is unknown, and in such cases the organization must find an alternative method of dealing with it.

 

Risk reduction or modification


Some people refer to this as risk treatment, although it is actually just one form of risk treatment. In this option, we do something that will reduce either the impact of the risk or its likelihood, which in turn may require that we reduce either the threat or the vulnerability where this is possible.

 

It is often the case that threats cannot be reduced – one cannot, for example, remove the threat of a criminal attempting to hack into an organization’s website, but it may in such cases be possible to reduce the likelihood by applying strict firewall rules or other countermeasures.

 

Risk acceptance or tolerance

The final option is to accept or tolerate the risk, especially if it has either a very low impact or likelihood. 

 

This is not to be confused with ignoring risk – never a sensible option – but is undertaken knowingly and objectively, and is reviewed at intervals or when a component of the risk changes, such as the asset value, the threat level or the vulnerability.

 

Risk acceptance is based largely on the organization’s attitude to risk, known as its risk appetite.

 

Some organizations have a very low risk appetite – for example, pharmaceutical companies, which understand that failure to keep details of their products secure can mean enormous financial loss if those details are stolen, or that patients could die if the manufacturing process is tampered with.

 

On the other hand, organizations like petrochemical companies will have a much higher risk appetite, investing vast sums of money in test drilling for oil reserves, knowing that some attempts will produce no useful results.

 

Residual risk

Whilst some forms of risk treatment will completely remove the risk, others will inevitably leave behind an amount of so-called ‘residual’ risk.

 

This residual risk is either not possible to treat, or, more frequently, too expensive when compared to the cost of the likely impact. Residual risk must be accepted by the organization and will require monitoring and regular reviews to ensure that it does not grow and become a treatable risk.

 

Risk treatment


Risk treatment is also sometimes referred to as risk mitigation, which is generally taken to mean a reduction in the exposure to risk (the impact or consequence) and/or the likelihood of its occurrence.

 

Once we have decided on the most appropriate method of treating risks, we move to the final stage of the risk management process – risk treatment and the use of controls or countermeasures to carry out our decisions.

 

There are four distinct types of controls:

  • detective controls, which allow us to know or be made aware when something has happened or is actually happening;
  • directive controls, which invoke some form of procedure that must be followed;
  • preventative controls, which stop something from happening;
  • corrective controls, which fix a problem after it has happened.

 

Directive and preventative controls are proactive in nature since they are carried out before an attack has occurred in order to reduce its impact or the likelihood of it occurring. 

 

Detective and corrective controls are reactive in nature since they take effect once an attack has actually happened. The four types of control are implemented in one of three ways:

 

Procedural controls, which dictate what actions must be taken in a particular situation. An example of a procedural control would be one in which users are required to change their system access passwords at regular intervals. Such controls might include the vetting of staff by the HR department.

 

Physical controls, which prevent some form of physical activity from taking place, such as fitting locks on computer room doors to prevent unauthorized entry.

Technical controls, which change the way in which some form of hardware or software operates, such as configuring firewall rules in a network.

 

Sometimes, the risk treatment options – avoid/terminate, transfer/share, reduce/modify and accept/tolerate – are referred to as strategic risk treatment controls;

 

the four types of control – detective, directive, preventative and corrective – can be referred to as tactical risk treatment options; and finally, the three methods of implementing the controls – procedural, physical and technical – are sometimes referred to as operational controls.
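For quick reference, this three-level view can be summarized in a simple structure; the sketch below is just an aide-memoire of the terms above, not a prescribed format.

```python
# The strategic / tactical / operational control taxonomy summarized above.
CONTROL_TAXONOMY = {
    "strategic":   ["avoid/terminate", "transfer/share", "reduce/modify", "accept/tolerate"],
    "tactical":    ["detective", "directive", "preventative", "corrective"],
    "operational": ["procedural", "physical", "technical"],
}

for level, controls in CONTROL_TAXONOMY.items():
    print(f"{level}: {', '.join(controls)}")
```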

 

Although it is not strictly speaking an information risk topic, for many years, and for a variety of purposes, organizations have linked the risk management process with a system known as the Plan–Do–Check–Act (PDCA) cycle, otherwise known as the Deming cycle.

 

The PDCA cycle has been widely adopted as a basic reference framework in the cybersecurity, information security, information risk management, and business continuity management disciplines as well as many others. The four stages are described as follows:

 

Plan


In this stage, we establish the objectives and the processes necessary to deliver the required results. In the cybersecurity context, this equates to understanding the organization and its context.

 

Do

The next stage of the process implements the plan, initially as a means of testing that the plan has been successful. In the cybersecurity context, this equates to the implementation of the information risk management framework.

 

Check

In this stage, we examine the results we have achieved by either measurement or observation. In the cybersecurity context, this equates to testing, monitoring, and review of the framework.

 

Act

In the final stage, we put the validated plans into action when an incident occurs and bring lessons learned from incidents into revisions of the plan. In the cybersecurity context, this equates to a continual improvement of the framework.

 

Although the descriptions above relate to the wider area of risk management, in cybersecurity terms any of these methods can be used to treat risk, since cyber threats can be used equally easily against poor procedures, a lack of good physical security and poor technical security.

‘Just because the river is quiet does not mean the crocodiles have left.’

 

– Malay proverb

In this blog, we will briefly examine the concepts of business continuity, which looks at the business as a whole, and disaster recovery (DR), which looks at just the IT infrastructure and usually forms a component of an organization’s business continuity (BC) programme.

 

Although business continuity covers a much broader area than just cybersecurity, it is important to understand the underlying principles since it is a means of preparing for possible cybersecurity incidents. Likewise, disaster recovery is not all about cybersecurity, but it can play a major part in recovering from cybersecurity incidents.

 

Both business continuity and disaster recovery have a proactive and a reactive element to their contribution to cybersecurity; the proactive side attempts to reduce the likelihood that a threat or hazard may cause a disruption, and the reactive side takes care of the recovery if one does occur.

 

Generally speaking, the longer a disruption lasts, the greater the impact on the organization, so it helps to clarify the type of disruption, its duration and impact, and how an organization manages the situation.

 

Glitches

These are extremely short occurrences, usually lasting just a few seconds at the most and are generally caused by brief interruptions in power or loss of radio or network signal. Activities usually return to normal following most glitches as equipment self-corrects automatically.

 

Events

Events normally last no more than a few minutes. As with glitches, the equipment they affect is frequently automatically self-correcting, but may on occasion require a degree of manual intervention.

 

Incidents

Incidents are usually viewed as lasting no more than a few hours. Unlike glitches and events, they require operational resolution, normally involving manual intervention that follows some form of process.

 

The methods of dealing with glitches, events, and incidents are mostly proactive in nature.

 

Crises


Crises can often last for several days. Although organizations may have plans, processes, and procedures to deal with them, and although operational staff will carry out any remedial actions, some degree of improvisation may be required.

 

Crises almost invariably require a higher layer of management to take control of the situation, make decisions and communicate with senior management and the media.

 

Disasters

Disasters frequently last for weeks. As with crises, operational staff will carry out remedial actions, although at this stage a degree of ad hoc action may be necessary. A higher management layer will control activities, but the senior management layer will take overall charge of the situation.

 

Catastrophes

Catastrophes are the most serious level, often lasting for months, or in some cases for years.

 

Their scale tends to affect many communities, and so although individual organizations may be operating their own recovery plans, it is likely that local, regional or even national government will oversee the situation and that a complete rebuilding of the infrastructure may be required.

 

Despite any proactive planning or activities to lessen their impact or likelihood, crises, disasters, and catastrophes all require significant reactive activity, and each will demand an increasing amount of incident management capability.

 

It is important for organizations to understand that the more time spent in proactive work, the less time will generally be required in reactive work following a cyber-attack.

 

Business continuity and disaster recovery share the same fundamental Plan–Do–Check–Act cycle, in which during the ‘Plan’ stage we carry out the risk assessment (risk identification, risk analysis, and risk evaluation);

 

in the ‘Do’ stage, we implement the risk treatment options and assemble the plans; in the ‘Check’ stage, we verify that the plans are fit for purpose by testing and exercising; and finally, in the ‘Act’ stage, we put the plans into practice when a disruptive incident occurs.

 

BUSINESS CONTINUITY


Putting business continuity into practice is strongly linked to the process of risk management, in which we identify the organization’s assets, owners and impacts;

 

assess the likelihood of risks happening, and combine the two to provide a perceived level of risk. From this, we are able to propose strategic, tactical and operational controls, one of the main components of which will be the business continuity plan (BCP) itself.

 

The plan should include the events that will cause it to be triggered; who (or which departments) will be responsible for what actions;

 

how they will be contacted; what actions they will take; how, where and when they will communicate with senior management and other stakeholders; and finally, how they will determine when the business has returned to a pre-determined level of normality.

 

The plan itself may not always contain detailed instructions, as these may change at intervals, but they should be referred to in the plan. 

 

Although cybersecurity covers only a part of the overall business continuity process, there are certain aspects, especially with regard to the ongoing availability of information and resources, that are very much an integral part of cybersecurity.

 

The most obvious of these is that of disaster recovery of information and communications technology (ICT) systems, in which the systems that are likely to be impacted require some form of duplication in order to permit short-term or even immediate recovery.

 

Business continuity is often referred to as a journey rather than a destination. It looks at the organization as a whole as opposed to just the information technology aspects.

 

However, that said, the generic business continuity process applies extremely well to cybersecurity and can be used to help an organization to place itself in a very strong position.

 

The Business Continuity Institute (BCI) describes business continuity as ‘The capability of the organization to continue delivery of its products or services at acceptable pre-defined levels following a disruptive incident.’

 

It provides excellent guidance on the entire process, and its latest Good Practice Guidelines (2013 version) can be purchased for around £24 for a downloaded version or £30 for a printed copy. BCI members enjoy a £10 discount on the printed version, whilst the download version is free of charge to them.

 

Over several years, the BCI has developed a business continuity management lifecycle with six distinct areas. It is basically a variation on the theme of risk management:

 

1. Business continuity policy and programme management, in which the overall organization’s business strategy is used to develop a programme of work, each component of which is then managed as a project.

 

2. Embedding business continuity into the organization’s culture, which includes training, education, and awareness.

 

3. Analysis, which is all about understanding the organization, its priorities and objectives, its assets, the potential impacts, the threats or hazards and the vulnerabilities it faces.

 

From this, a risk assessment can be undertaken, and the key metrics such as the recovery time objective (RTO), the maximum acceptable outage (MAO) and the maximum tolerable data loss (MTDL) can be derived.
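As a simple illustration, a proposed recovery solution can be checked against these metrics. The function and figures below are hypothetical; in practice the comparison would be made as part of the analysis and design stages.

```python
# Check a proposed recovery solution against the RTO and MTDL derived in the
# analysis stage (all figures in hours and purely illustrative).
def meets_objectives(estimated_recovery_hours: float, rto_hours: float,
                     estimated_data_loss_hours: float, mtdl_hours: float) -> bool:
    """True if the solution recovers within the RTO and loses no more data than the MTDL allows."""
    return (estimated_recovery_hours <= rto_hours
            and estimated_data_loss_hours <= mtdl_hours)

# A warm standby taking 6 hours to restore with 1 hour of data loss, against an
# RTO of 8 hours and an MTDL of 4 hours.
print(meets_objectives(6, 8, 1, 4))   # True
```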

 

4. Determining the business continuity management strategy and designing the approach to delivering this can now take place, based on the metrics arrived at in the analysis stage, and decisions can be made regarding what proactive measures should be put in place; how the response to an incident will be organized;

 

and how the organization will recover to normal operational levels or to a new, revised level of normality.

 

5. Implementing the business continuity response will require the efforts of people in various parts of the organization to put in place the proactive and reactive measures agreed in the previous stage.

 

6. Validation, which includes exercising, maintaining and reviewing, is a separate activity to embedding the business continuity culture into the organization since it deals with the inclusion of people who may already have been involved in the previous stages, and who need no introduction to the subject;

 

rather they need to be able to exercise the various response and recovery plans, validate them and fine-tune them where necessary.

 

If the organization is well organized, all six stages of the life cycle should have been completed before an incident occurs.

 

The first actions will be to respond to the incident itself, bringing together the incident management team, gaining an understanding of the situation and agreeing which aspects of the plan are to be implemented.

 

At this time, it will also be important to consider preparing some form of a statement that can be given to the media, customers, and suppliers so that their expectations are managed.

 

Next, the processes and procedures that have been developed (which may include disaster recovery mechanisms) will be brought into action, depending upon the nature of the situation.

 

Finally, once the situation has been resolved, the business can be returned to normal or, if the impacts have been considerable, to a new level of normality. The international standard ISO 22301:2012 – Societal security – Business continuity management systems – Requirements covers all aspects of business continuity.

 

DISASTER RECOVERY


One of the main features of a business continuity plan is in providing the availability determined by the analysis stage of the business continuity process. Disaster recovery is perhaps a misnomer since it implies that systems, applications, and services have failed catastrophically and need to be brought back online.

 

Whilst this might be the case for some services, it is not true for all, since an element of proactive work can (and usually should) be carried out, and it may be the case that just one component in the service has failed, but that this requires a disaster recovery process to be invoked.

 

As with any business continuity work, there are both proactive and reactive sides to disaster recovery, and since there are no ‘one size fits all’ solutions, we’ll discuss some of the options in general terms.

 

Standby systems

Conventionally, there are three basic types of standby system – cold, warm and hot – although there are variants within these. Most well-designed standby operations will ensure that there is an effective physical separation between the ‘active’ and ‘standby’ systems since the loss of a data center or computer room containing both systems would clearly result in no recovery capability.

 

Traditionally, organizations work on the basis that a minimum separation of 30 km is sufficient to guarantee that a major incident affecting one data center will not affect the other.

 

Systems, as we refer to them here, can mean any system that is involved in providing the organization’s service and can include web servers at the front end of the operation as well as back-end servers and support systems and essential parts of the interconnecting networks.

 

Cold standby systems frequently make use of hardware platforms that are shared by a number of organizations. They may have power applied, and may also have an OS loaded, but they are unlikely to have much, if any, user application software installed, since each organization’s requirements will be subtly different. There will also be no data loaded.

 

This is the least effective method of restoration, since it may take a significant amount of time and effort to load the operating system (if not already done), to load and configure the user applications and to restore the data from backup media.

 

It will, however, invariably be the lowest cost solution for those organizations who are able to tolerate a longer RTO.

 

Another disadvantage of cold standby systems is that if they are shared with other organizations, there may be a conflict of resources if more than one organization declares an incident at, or around, the same time.

 

An example of this was the situation on 11 September 2001, when the attacks on the World Trade Center in New York took place. Most organizations had disaster recovery plans, but a number of them relied on the same providers, which completely overwhelmed their capabilities.

 

Warm standby systems will generally be pre-loaded with operating systems, some or all user applications, and also data up to a certain backup point. This means that the main task is to bring the data fully up to date, which much reduces the restoration time required.

 

Warm standby systems are invariably costlier to provide than cold standby, and it is common practice for organizations to use one warm standby system to provide restoration capability for a number of similar systems where this provides an economy of scale.

 

Additionally, those organizations that regularly update their application software may make use of their warm standby systems as training, development and testing platforms before a new or updated application is taken into live service.

 

Hot standby systems come in several flavors, but increasingly, and especially where no outage time can be tolerated at all, high availability systems are becoming the norm.

 

A basic hot standby system will be as similar as possible in design to a warm standby system, except that the data will be fully up to date, requiring a real-time connection between the active and standby systems.

 

Two slightly different methods of synchronizing the systems are in common use – the first (and faster) method is known as asynchronous working, in which the active system simply transmits data to the standby, but continues processing without waiting for confirmation that the data has been written to disk.

 

The second, slightly slower (but more reliable) method is known as synchronous working, in which the active system transmits data to the standby and waits for confirmation that the data has been written to disk before it continues processing.

 

In the first method, there is always the possibility that some data will not be received by the standby system, and in cases where nothing less than 100 percent reliability is required (for example in financial transactions), this will not be sufficiently robust.

 

In the second method, there will always be a slight time lag between transactions, but this method will provide 100 percent reliability at the expense of speed. It will also be costlier to implement since very fast transmission circuits will be required – usually point-to-point optical fiber.
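The difference between the two modes can be illustrated with a small Python sketch. The ‘standby’ here is just an in-memory list fed by a background thread; a real standby system would be a remote server reached over a dedicated link.

```python
# Illustrative contrast between asynchronous and synchronous replication.
import queue
import threading

standby_store = []                 # stands in for the standby system's disk
replication_queue = queue.Queue()

def standby_writer():
    # Background thread standing in for the standby system writing data to disk.
    while True:
        record = replication_queue.get()
        standby_store.append(record)
        replication_queue.task_done()

threading.Thread(target=standby_writer, daemon=True).start()

def write_asynchronously(record):
    # Asynchronous working: hand the record over and carry on processing immediately.
    replication_queue.put(record)

def write_synchronously(record):
    # Synchronous working: wait for confirmation that all queued records,
    # including this one, have been written before continuing.
    replication_queue.put(record)
    replication_queue.join()

write_asynchronously({"txn": 1, "amount": 100})
write_synchronously({"txn": 2, "amount": 250})
print(standby_store)   # both records are guaranteed to be present at this point
```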

 

Networks and communications


Whilst the emphasis tends to be on the recovery of key systems, organizations should not overlook the networks and communications technology that support them. Wherever possible, key elements of the communications network should be duplicated so that the failure of one does not cause a total loss of connectivity.

 

Many organizations now use two different transmission providers to ensure that if one has a major network failure, the other should still be able to provide service.

 

This will, of course, depend on whether one is acting as a carrier for the other, in which case a failure of the main provider’s network could result in the other losing service as well.

 

Larger organizations make use of load balancing systems to ensure that at times of peak demand on their websites they are able to spread the load across a number of servers, and many also duplicate their firewall infrastructure as added insurance.

 

Separacy is also a wise consideration – the scenario in which a road repair takes out an organization’s communications is all too familiar, and by providing diverse communications cables on routes separated by 30 m or more and using entry points on opposite sides of a building, the likelihood of failure is much reduced.

 

Naturally, all this costs money, but when compared with the potential losses that would be incurred in the event of total infrastructure failure, it is a vital form of insurance – and one that can reduce the cost of revenue loss insurance premiums.

 

Power


Power is at the heart of everything. Without it, the systems and networks cannot run, and business would grind very quickly to a halt. Those organizations that suffer regular power outages will probably already have invested in a standby generator or at least an uninterruptible power supply (UPS) system that will continue to deliver sufficient power for a defined period of time.

 

More frequently nowadays, the two are combined, so that a UPS system will continue to deliver power and remove any power spikes from the supply, after which the standby generator will cut in and deliver power as long as the fuel supply lasts.

 

The international standard ISO/IEC 27031:2011 – Information technology – Security techniques – Guidelines for information and communication technology readiness for business continuity covers many aspects of disaster recovery.

 

Fire prevention and smoke detection


Whilst this may not immediately appear to be a cybersecurity issue, access to a fire prevention system could affect an organization’s ability to deliver service. No computer room or data center would be complete without having smoke detection systems and fire prevention facilities.

 

Systems such as Very Early Smoke Detection Apparatus (VESDA) can identify the release of smoke (and therefore the possibility of fire) before it takes hold and causes real problems.

 

The system works by sucking air from the area through pipes and sampling the quality of air passing through a laser detection chamber.

 

If the quality falls below acceptable levels, a response can be triggered, and this is often as a result of detection by more than one detector. The extinguishing chemical, normally nowadays an inert gas called Inergen, is discharged to the affected area.

 

An interesting example of a problem in this area was highlighted in September 2016, when ING tested the system in their data center in Bucharest.

 

The gas discharge produced sound levels in excess of 130 decibels, which caused excessive vibrations and head crashes in disk drives. The entire data center was out of action for an extended period of time, resulting in no access by ING’s customers.

 

In this blog, we examine steps that can be taken both by individuals and corporate users to improve their cybersecurity.

 

It provides details of the general steps that can be taken by any user – technical or non-technical – and then covers those steps that are of a rather more technical nature. Finally, the blog includes a section on mobile working.

 

As discussed earlier, the response to cyber issues comes in two distinct areas. The first area is that of proactive response, in which we try to either lessen the likelihood of the event happening or if we cannot do this, lessen its impact.

 

The other area is a reactive response, which will include the disaster recovery capabilities described in the previous blog, as well as the hands-on work of changing system configurations to apply corrective controls once a cybersecurity incident has been detected. Either method should reduce the risk, but we may have to accept that there may be some residual loss or damage.

 

When we leave our house, we take care to lock the doors and windows. This might not prevent a burglar from gaining entry, but it does make his job more difficult. Unless the burglar is specifically targeting us, there is a definite chance that he will go elsewhere and try to enter someone else’s property.

 

It is very much the same with cybersecurity. If a determined attacker is sufficiently well motivated, skilled and equipped, he will almost certainly eventually succeed in gaining access to our data.

 

However, financial constraints might make it difficult or impossible to repel him, so the emphasis should not be on making 100 percent sure he cannot succeed, since this is an unrealistic expectation. Rather, we should try to make the attacker’s job so difficult that he goes elsewhere.

 

Most of the actions we can take in the world of cybersecurity tend to be in the third of these – risk modification or reduction – and it is this area that we shall focus on most. At the next level, there are four general directions we can take. Three of these are proactive in nature:

  • detective, in which we put something in place to detect that an attack is in progress, such as IDSs or antivirus software (which will also react to malware it has detected);
  • preventative, in which we put additional facilities in place in an attempt to stop an attack from being successful, such as firewalls;
  • directive, in which we set out policies, processes, and procedures that people must follow in order to reduce the risk, such as password policies.

 

The fourth direction is reactive in nature: corrective, in which we fix a problem after an attack has happened.

 

Finally, we reach the point at which we can examine the actual actions known as controls or countermeasures that we can take. There are three options, which we shall examine in greater depth:

  • physical controls, such as access control systems, which prevent intruders from gaining access to equipment or its environment in order to launch a cyber-attack or otherwise cause damage;

 

  • technical controls, such as firewalls, which directly address the security of systems and software that hold our information;
  • procedural controls, which tell people both what not to do and also what they must do before, during or following an attack, and, as mentioned earlier, may include vetting of staff by the HR department.

 

A number of documents providing sound cybersecurity advice are available, and would be especially valuable to SMEs:

The NCSC publishes a number of cybersecurity-related advice documents, including the UK government’s ‘10 Steps to Cyber Security’, ‘Common Cyber Attacks: Reducing the Impact’ and ‘10 Steps: Board Level Responsibility’.

 

For those looking for more specific detail, there are more than 200 additional documents published, dealing with all aspects of cybersecurity.

 

It is worth making a brief examination of the SANS Institute Sliding Scale for Cyber Security, which provides general guidance starting from a proactive position and potentially moving to a highly reactive one.

 

At the proactive level, the scheme begins with security designed and planned into the organization’s information architecture, based on the business objectives.

 

This is often the most difficult to achieve since the security aspects of many systems’ hardware and software are outside our control. This represents both preventative and directive action.

 

It continues proactively, with passive defense, in which additional technology is added to the underlying infrastructure to provide protection against cyber-attacks without the need for human intervention. This represents both preventative and detective action.

 

From this point, we move into the reactive sphere, beginning with active defense, in which security teams respond to events that cannot be completely controlled by passive defense means.

 

This may include gaining a full understanding of the target, the method of attack, and even, if possible, the identity of the attacker. This represents a corrective action.

 

An example of this might be the case in which an organization finds itself under a massive DDoS attack. One of the defense mechanisms taken in conjunction with the ISP is to move the company’s internet presence to a different connection and IP address, and the ISP then points the DDoS attack into a ‘sink’ or black hole.

 

Next, we move into the area of intelligence, in which we use the attacker’s identity to discover more detail about them, their motivations, means, and methods, which may enable us to prevent further similar attacks.

 

This part of the process will require tools to capture information about the attacker, and also a means of analyzing this information to produce viable intelligence.

 

However, this could be outside the scope of most organizations, and this sort of investigation could well be undertaken by an outside company offering specialist InfoSec skills assisting the attacked company to restore their service.

 

There are a number of models that enable this work; one such is the Diamond Model of Intrusion Analysis; however, it is rather detailed and falls outside the scope of this blog, so a link is provided in the notes for you to explore if you wish to do so.

 

Finally, we arrive at the reactive point of offense – fighting back. This course of action is not recommended, since it could be fraught with danger, and could constitute a cyber-attack in its own right.

 

Individuals and businesses should be discouraged from any form of retaliation – it’s much more sensible to respond by alerting the appropriate authorities where possible and leaving offensive retaliation to security services and where applicable, military agencies.

 


 

Physical security


It would appear at first sight that since we’re dealing with cybersecurity, physical security actions might not feature strongly.

 

Whilst there is an element of truth in this, we should not overlook the fact that if an attacker can gain physical access to a key computer system, he can probably achieve anything he wishes just by connecting a USB stick with key-logger software or inserting a CD or DVD loaded with malware or the data of a fake website.

 

Restricting physical access to business-critical systems should always be the first step in any proactive activities. Not only does this mean keeping the bad guys out of the computer room, but also everyday users unless they have a very specific requirement to be there.

 

Access to controlled areas should be the exception rather than the rule, and all permissions for access should be subject to a formal procedure and should be reviewed at regular intervals.

 

It is good practice to ensure that any visitor to a computer or network equipment room should be accompanied by a trusted member of staff, preferably one of the organization's systems administrators. It should also be noted that cleaners are not exempt from this policy.

 

Some simple steps that will make a difference include:

Lock electronic devices (smartphones, tablet computers, and laptops) somewhere secure when you have to leave them. Never leave them unattended in a public place, and keep them hidden from view when traveling, especially in crowded places like railway stations and airports.

 

If you’re concerned about your computer camera being accessed by someone, a very simple solution is to place a sticky note over the top of it. If you use a lockable steel security cable to secure a device, make sure that it is fastened to something that cannot easily be removed, and make sure you keep the key with you when you leave.

 

Individual user steps whilst surfing the web


Users should resist the temptation to install or download unknown or unsolicited applications or programs unless they are confident that they are secure and free from malware.

 

In a corporate environment, no privileged user should use an administrative account for downloading unauthorized software. Their day-to-day user account should not have the level of privilege required to do so.

 

When visiting a new website, users should avoid clicking on links to other pages unless they are sure they are valid. Some websites shorten the URL, so the final address is hidden. Let the mouse pointer hover over the link before you click to show the link’s real address.

 

Cookies are essential to many internet activities such as online shopping, but many are irritating and some are harmful by invading our privacy. Users should periodically edit the cookie list and clean out any that are not needed.

 

The Onion Router (TOR) is a browser system that protects users by routing internet traffic through a network of relays run by volunteers all across the globe. It prevents one’s internet activities from being observed and prevents the sites we visit from identifying our physical location.

 

TOR should not be used in a corporate environment, since it is well known for subverting end-user security controls, such as anti-malware products.

 

Online forms frequently ask for information they really don’t require. If you think the question is unnecessary or intrusive, give an answer such as ‘not relevant’. If you don’t think they really need your telephone number, for example, type in something like 01234 000000.

 

Users should always delete browser history on public computers. This prevents the next user discovering personal information they may have inadvertently left behind. It’s also a good idea to periodically delete it on home computers as well since it can eat up valuable disk space.

 

Users should delete temporary internet files on home computers occasionally, and every time after using a public computer, for example in a library or internet café. This is invariably achieved by accessing the security tab in the browser’s preferences since different browsers will store them in different locations.

 

They take up considerable space on the hard disk, and rarely serve any useful purpose. As with browser history, they can also be used to track one’s web surfing experiences.

 

Internet passwords should be treated in exactly the same way as ordinary system passwords. See the section on user passwords later in this blog.

 

Social engineering


One method by which attackers will attempt to break into a network or system is to use their social engineering skills to talk their way around the organization’s security defenses.

 

Never provide cold callers with your credentials.

If you receive spam text messages on your mobile phone, report these to your network provider. Use the number 7726, which spells SPAM on the keyboard of non-smartphones.

 

Unless you are confident of the originator of a text message that includes ‘Text STOP to unsubscribe’ or similar, never do so, since this may be simply a ruse to discover whether there is a real person behind the number as opposed to a system of some kind. Resist the temptation to reply ‘Go away’ or words to that effect.

 

Email


Once an attacker has acquired (or guessed) your email address, they may send offers of apparently attractive goods or services to tempt you into clicking on a link, which is almost certainly going to cause you problems.

 

At best, it will connect you to a website that offers fake goods; at worst, it will download malware onto your device that will be used to extract further information such as banking details, passwords and so on.

 

If an email looks suspicious, delete it without opening it. To do this, in most email applications you can usually right-click on the message and consign it to the waste bin with no risk at all.

 

Never respond to emails that invite you to enter your credentials such as bank account number and PIN or password. Banks and credit card companies will never ask you to do this, and even if the email appears to be from your own bank, it may well be a scam. It is a sensible idea to check any such emails against one that is known to be legitimate.

 

However, spammers are becoming increasingly professional, and it is often difficult to discern spam from the real thing. If in doubt, allow the mouse to hover over the URL, and check that this has not been obfuscated.
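As an illustration of that check in code, the sketch below parses an HTML email body and flags links whose visible text is itself a URL that differs from the real target. It uses only Python’s standard library and a made-up example message; it is a teaching aid, not a complete anti-phishing tool.

```python
# Flag links whose visible text looks like a URL but points somewhere else.
from html.parser import HTMLParser

class LinkChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.current_href = None
        self.findings = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.current_href = dict(attrs).get("href", "")

    def handle_data(self, data):
        text = data.strip()
        if self.current_href is not None and text.startswith("http"):
            if not self.current_href.startswith(text):
                self.findings.append((text, self.current_href))

    def handle_endtag(self, tag):
        if tag == "a":
            self.current_href = None

html_body = ('<p>Verify your account at '
             '<a href="http://evil.example.net/login">http://www.mybank.com</a></p>')
checker = LinkChecker()
checker.feed(html_body)
for shown, actual in checker.findings:
    print(f"Shown as {shown} but actually points to {actual}")
```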

 

Phishing attacks often originate from respectable-looking emails purporting to originate from a reputable financial institution requesting that the user verifies their online identity.

 

These are invariably scams and will take the unsuspecting user to a fake website that is to all intents and purposes an identical copy of the real one. It is essential not to respond to these, and it can be helpful to notify the real institution whose genuine website is being abused.

 

Spam email is a blight. Fortunately, many email providers now have highly tuned filters that detect and delete spam without the user even being aware of it. If spam email does make it through their filter, it may (with luck) wind up in a spam email folder in your mail application, making it simple to identify and delete.

 

Do so. Do not be tempted to reply, since this will merely let the originator know that they have found a working email address, and you may end up receiving even more.

 

Consider using encrypted email to send sensitive information over the internet. We deal with this in greater detail in the section about encryption later on. For organizations with their own email servers, there is the option of turning on ‘opportunistic encryption’ described in RFC 7435.

 

Backup and restore


It is incredibly easy to accidentally delete something important, but it is just as easy to make sure you don’t. 

 

It is not recommended that you back up your files to the same hard disk drive that the operating system is installed on, so buy a reliable backup disk drive and make use of the inbuilt software in Microsoft Windows and Apple Mac operating systems.

 

As an alternative to a hard disk drive you may consider backing up data to DVD, Blu-ray or memory stick, but always encrypt the data on your backup device. 
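As an illustration of encrypting a backup before it is written to removable media, the sketch below uses the third-party Python ‘cryptography’ package (installed with pip install cryptography); the file names are hypothetical, and the key must be stored somewhere safe and separate from the backup itself.

```python
# Encrypt a backup file before copying it to removable media.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # store this securely - without it the backup is unreadable
cipher = Fernet(key)

with open("documents_backup.tar", "rb") as source:
    encrypted = cipher.encrypt(source.read())

with open("documents_backup.tar.enc", "wb") as target:
    target.write(encrypted)
```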

 

Always store the media used for backups in a secure location to prevent unauthorized access to your data. A fireproof safe is an ideal storage solution, but keep it separate from your computer.

 

Pirated software

The best advice for pirated material, including films, music, and software is just don’t download it! You don’t know that the material is malware-free, and in any case, much of it is actually illegal, since it usually represents the theft of intellectual property.

 

For a business, legal liability when pirated or illicit material is found on one of its computers lies with the business owner, and not with the user of the computer. If you discover pirated material, the copyright owner may be interested in hearing about it, and in the case of software, the Federation Against Software Theft (FAST) may take an active interest.

 

Personal information

Keeping your personal information secure is one of the main objectives of cybersecurity. If the information is extremely sensitive, consider whether you should be keeping it on a device in the first place. If the answer is ‘yes’, then consider encrypting it.

 

Be extremely careful what information you share, and with whom you share it. Consider where the information might be stored, and where it might end up if the person or organization to whom you are giving it is not as careful about security as you are.

 

File sharing


Many people and organizations now make use of cloud-based services to share information with friends, family, and colleagues.

 

All this is absolutely fine, provided that you have legitimate reasons for sharing information and it does not infringe someone else’s copyright. However, there is increasing use of file sharing mechanisms to distribute material illegally.

 

Corporate staff who access personal cloud-based file sharing services from the work-place pose an additional threat of the possible exfiltration of corporate data/information.

 

Films, audio recordings, blogs, and other material are hosted or ‘seeded’ by individual sharers. The user acquiring the information obtains a ‘torrent’ file from a file sharing service and runs this within file sharing download software.

 

The software links to the individual seed computers and downloads small portions of the file, linking them all together.

 

Only share information with family, friends, and colleagues if it is not someone else's copyright, or unless you have the copyright owner's express permission to do so. If you use a file sharing service (such as Dropbox, Amazon Cloud or Microsoft OneDrive), consider encrypting the information, especially if it is in any way sensitive.

 

Social networks


The use of social networks has increased dramatically in recent years. Facebook, Twitter, Flickr, LinkedIn, and Instagram are just a few examples of the most widely used social networking sites. Whilst the idea behind these is to share information between friends, family, and colleagues, there are significant dangers in making use of them.

 

First, you may not know who is reading them if you have not correctly set your access preferences (which may be difficult to identify). Many organizations now examine the social networking site pages of job applicants before deciding whether to invite them for an interview.

 

Second, you do not necessarily know what other people may be posting about you – that embarrassing photograph taken on a recent night out may have been purely in jest but could reveal some aspect of your character that you would prefer to keep to yourself.

 

Third, you do not necessarily know the impact of something you have posted about someone else.

  • Be careful what you post on any social media networking site. It might come back to bite you later on!
  • Be very careful about who you accept as a ‘friend’.
  • Always ensure your information sharing preferences are set to the most appropriate level.

 

‘Free’ USB sticks

Anyone attending a conference these days will probably receive a free USB memory stick containing the presentations and usually some form of advertising or marketing material provided by organizers and sponsors.

 

Most of this is harmless, but there exists the possibility that the memory stick may also contain malware, and it is sound practice to run this through a virus scanner on a stand-alone computer before attempting to make further use of it.

 

It is well worth remembering the phrase ‘There is no such thing as a free lunch’!

 

A scam sometimes used by the hacking community is to load malware onto a USB memory stick – often a high capacity one – and leave it where their target will be likely to find it.

 

Once plugged into the target’s computer, the malware will install itself without the user’s knowledge, and (if the attacker has done his job well) will then delete itself from the memory stick leaving no trace. The malware can then commence its task.

 

Always test a ‘free’ USB memory stick on a stand-alone computer before plugging it into any other. Never use a memory stick you find lying around – it may well be a trap.

 

Banking applications


Banks are increasingly trying to persuade us to use their online banking applications, both from fixed computers and from mobile phones and tablets. The reason is simple – it saves them money.

 

Fortunately, the applications banks now provide and their web interfaces have been thoroughly tested and appear very robust. Back in 2014 it was a very different story, with vulnerabilities found especially in mobile applications, so we should still remain vigilant.

 

Remember to keep your banking details secure.

Log out of the banking application when you have finished your transactions. If using a public computer, clear the cookies, browser history, and temporary internet files. Be aware of people ‘shoulder surfing’, who may be able to see what you are typing or what is displayed on the screen.

 

MOBILE WORKING

WiFi service

It is always tempting to use ‘free’ WiFi whenever we have the opportunity, but this brings its own set of threats. For example, an attacker may intercept the data being transmitted between the device and the access point and, if sufficient data can be captured, attempt to recover the encryption key in use and then use it to gain access to the user’s information.

 

Earlier in the blog, we heard about the company that provided free WiFi in London’s Docklands, but potentially at a terrible cost. This example is extreme, but when we sign up for free WiFi service, we really have no idea what is happening to our data, since once it has passed through the wireless access point, it is normally completely unencrypted.

 

Out and about

The recommendations for using free WiFi, especially in unknown locations, include:

Not using the service for anything that involves a financial transaction where your bank or credit card details are passed.

 

Not using any service that does not have an encryption key. Most bars and restaurants that provide free WiFi, for example, will use an encryption key, since this deters ‘drive-by’ users who are not spending money there.

 

Avoiding free WiFi services that use an insecure scheme such as WEP or the original WPA. WPA2 (the next generation) is much more secure and resistant to key recovery. Ensure you select WPA2-Personal (also known as WPA2-PSK) with Advanced Encryption Standard (AES) encryption.
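As a hedged illustration of how you might check what is on offer before connecting, the sketch below assumes a Linux laptop with NetworkManager's nmcli command available and flags nearby networks advertising no encryption, WEP or WPA1 only; the parsing is deliberately simplistic.

```python
# Hedged sketch (assumes Linux with NetworkManager's 'nmcli' available):
# list nearby WiFi networks and flag those advertising no encryption or
# only the weak WEP/WPA1 schemes.
import subprocess

scan = subprocess.run(
    ["nmcli", "-t", "-f", "SSID,SECURITY", "device", "wifi", "list"],
    capture_output=True, text=True, check=True,
).stdout

for line in scan.splitlines():
    ssid, _, security = line.partition(":")
    security = security.strip()
    if security in ("", "--") or "WEP" in security or security == "WPA1":
        print(f"Avoid: {ssid or '<hidden>'} ({security or 'open, no encryption'})")
```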

 

For corporate network users, if a WiFi hotspot must be used, it should always be used over a virtual private network (VPN) back to the corporate network. Additionally, corporate machines should always be configured to prevent a feature known as split tunneling, so that when the VPN is in use all traffic passes over it.
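There is no single way to verify this from the user's side, but as a rough, Linux-only sketch, the script below checks whether the machine's default route goes via a VPN-style interface; the interface names are assumptions and corporate configurations will vary.

```python
# Rough, Linux-only check that the default route goes via the VPN tunnel
# (i.e. split tunneling is not in effect). Interface names are assumptions;
# corporate VPN setups vary.
import subprocess

route = subprocess.run(
    ["ip", "route", "show", "default"],
    capture_output=True, text=True, check=True,
).stdout

if any(dev in route for dev in ("tun", "wg", "ppp")):
    print("Default route appears to use the VPN interface.")
else:
    print("Warning: default route bypasses the VPN - split tunneling may be in effect.")
    print(route.strip())
```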

 

WiFi in the home and the workplace

WiFi in the home

Most home broadband services nowadays provide the user with a router that contains a wireless access point as well as Ethernet ports, and this is in many ways a much more convenient method of connecting since we can move around the house without the need to cable up in every room.

 

There are some basic rules that should be observed when setting up wireless networks in the home and the office:

Begin by changing the router’s SSID. Preferably avoid calling it something that would identify your property.

 

After setting up the router or wireless access points, change the administration username (if possible), and definitely change the password. See the earlier discussion on user passwords for recommendations.

 

Always enable WPA2-Personal with AES encryption.

Use a long and complex key, which prevents outsiders from making free use of your wireless network, since you never know what they’ll be doing. The router supplier will probably print the default key on the side of the router, and you’ll need to use this in order to set it up, but it’s essential to change it afterward.
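If you want something genuinely random rather than a phrase you invent yourself, a small sketch along these lines will do; the length of 24 characters is simply a reasonable assumption for a home network, not a formal requirement.

```python
# Simple sketch: generate a long, random WPA2 pre-shared key to replace the
# default printed on the router. 24 characters is an assumed, comfortable length.
import secrets
import string

alphabet = string.ascii_letters + string.digits
passphrase = "".join(secrets.choice(alphabet) for _ in range(24))
print(passphrase)   # enter this as the new WPA2-Personal key on the router
```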

 

If the router supports remote administration, turn it off. If you ever need to use it, you can turn it on locally until you have done what you need to do. Likewise, if your router supports Universal Plug and Play (UPnP), turn it off, as it is a totally insecure protocol.

 

Unless you need WiFi Protected Setup (WPS) in order to connect to a wireless printer, consider turning it off, since it introduces an additional vulnerability.

 

Bluetooth

The history of Bluetooth vulnerabilities is legendary. There is little that an individual can do to make their Bluetooth devices more secure. In some cases, there are no user settings apart from ‘on’ or ‘off’. Here are a few suggestions that should reduce the likelihood of Bluetooth problems:

  • Ensure that the Bluetooth device (for example, a smartphone) is password protected.
  • Refuse all connection requests from devices you don’t recognize.
  • If you lose a Bluetooth device (for example a headset), remove it from the list of paired devices so that it can no longer be used to connect to yours.
  • Switch Bluetooth-enabled devices off when you’re not actually using them.

 

Location services


This feature applies to mobile devices that use GPS to determine their location, for example when using a mapping application to plot a route between two points.

 

Many smartphone and tablet applications turn on location services automatically when you install them, meaning that they can track your movements. This may be essential, as in the mapping example above, but there is no obvious justification for a smartphone game to require it at all.

 

Think carefully about each application on your smartphone or tablet, and make an informed choice about whether location services will enhance your experience, or whether they are simply giving away information to someone about where you are.

 

Turn off location services in the general settings menu on all applications that you think should not be making use of them.

 

If the application does require location services, it will ask for them to be turned on, and it is your decision whether or not to allow it.

 

AWARENESS

Awareness of cybersecurity

Awareness of cybersecurity issues permits both individuals and an organization’s users to act as a first – or indeed a last – line of defense in combating cyber-attacks. It is never a one-off activity and should be considered to be an integral part of personal development, whilst remaining a rather less formal activity than training.

 

An awareness programme allows people to understand the threats they face whenever they use a computer, the techniques used by social engineers to achieve their goals, the vulnerabilities faced by them or by their organization, and finally the potential impacts of their actions or inactions.

 

This doesn’t imply that it is necessary to turn everybody into cybersecurity experts, but that a basic level of understanding is required, similar to that in driving a car – we need to know how to operate the vehicle, the rules of the road and the dangers we face, but we do not need to understand how the engine management system works.

 

As with any process, there are a number of discrete steps in an awareness programme:

 

Plan and design the programme:

  1. select the most appropriate topics for awareness, such as email etiquette, correct handling of information assets or password security;
  2. make a business case to justify any expenditure;
  3. develop a means of communicating with the users.

Deliver and manage the programme:

  4. develop the materials and content;
  5. implement the awareness campaign.

Evaluate and modify the programme as necessary:

  6. evaluate the campaign’s effectiveness;
  7. improve and update the material with new information.

 

Like many other aspects of working life, awareness is a journey, not a destination, since new people will join the organization and need to be included in the programme, and new threats and vulnerabilities will arise.

 

The campaign should also focus on continuous reinforcement through such things as poster campaigns and pop-ups when people access the internet or log on.

 

The general trend of user engagement in the programme should be along the lines of:

  1. initial contact with the user community – letting them know that something will be happening in which they will need to become involved and providing a general idea of what the programme will be all about so that their expectations can be managed;
  2. further understanding of the programme, so that they appreciate what the implications will be for them;
  3. timely engagement, so that they begin to understand that there is a new way of working;
  4. acceptance by users, in which the user community begin to work in a new way;
  5. full commitment to new ways of working, so that they do not revert to their old ways;
  6. evangelism, in which they encourage others to follow their example.

 

Possible obstacles to a successful awareness programme

It is easy to assume that once an awareness programme is underway, all will go to plan, and that organizations will only need to react and respond to problems as they arise. However, if forewarned about some of the possible issues, organizations can have a contingency plan in place so that a faster reaction is possible.

 

Some of the issues that organizations may face include:

Initial lack of understanding. When the awareness programme is initiated, it is vital that the communication that goes out to the audience involved explains not just what the organization expects to achieve, but also why it is undertaking the work. This will greatly aid the acceptance of the programme.

 

The introduction of new technology, which complicates a programme that is already underway. Changes to an organization’s IT infrastructure can either enhance the ability to deliver the message or complicate it;

 

but as long as people from that part of the organization are involved in the awareness programme, the team should be aware of the possibility before it arises and be able to include it in their programme or work around the problem.

 

One size never fits all. Every organization is different, there are no standard methods of operating an awareness programme, and even within one organization different types of audience may have different requirements.

 

Also, there will be a considerable difference in both the size and the scope of an awareness programme between one for a large organization and one for an SME.

 

Trying to deliver too much information. Many users in an organization will be non-technical, and so the focus of the programme must take into account that the more technical aspects of cybersecurity could overwhelm them.

 

It is essential to keep the focus on what the audience needs to know and not try to extend the delivery of information to be too technical. Less is more.

 

Ongoing management of the programme can become a challenge. If this happens, the programme will probably founder for lack of support from those areas of the organization involved in its delivery, and therefore senior management commitment must be assured.

 

Follow-up failure. This can and will cause problems for the programme, since it is vital that the team understand how well the message has been received, understood and acted upon by the target audience. Regular monitoring and reviews are essential to delivering a quality programme.

 

Inappropriate targeting of the subject matter. This can have a negative effect on the programme, since groups within the organization may be receiving some awareness information that has little or no impact on their role, whilst others are not receiving information that would be essential to their daily activities.

 

Ingrained behaviors. These are a constant challenge in this kind of programme. Some people will always challenge the programme saying, ‘We’ve always done it this way and it has always worked, so why should we change?’ Any organization running an awareness programme must expect this kind of response and must develop sound arguments against it.

 

Some people will take the view that security is the responsibility of the IT or security department alone. It is essential that they are disabused of this notion at an early stage and throughout the ongoing campaign. Cybersecurity is everybody’s problem and is not restricted to one department.

 

Programme planning and design

Programme planning

The process commences with the establishment of a small team who will develop and run the programme. Some of them will naturally have a degree of expertise in information security, whilst others may represent those parts of the organization that might suffer serious impacts in the event of a cyber-attack.

 

It may also be beneficial to involve the internal audit function, who may be able to offer constructive advice, since a programme such as this may well be audited at a later stage, and it’s always good to have audit on your side.

 

The team’s initial task will be to define the exact goals and objectives of the programme, and this will include whether the target audience is to be the whole organization or just a small part as a pilot project.

 

This latter option may be a much more beneficial approach, since it should be able to achieve its objectives on a small and therefore less costly scale than targeting the whole organization, before widening the programme to include everyone.

 

In the initial part of the programme, the target audience might also be limited to one particular type of user, such as:

  • employees working full-time on the organization’s premises. These are frequently the kind of users who will benefit the most from receiving cybersecurity awareness training;

 

  • home-based users, who will have similar, but slightly more complex needs. Due to the different requirements for connecting into the organization’s network, these users may require a slightly higher level of understanding of the issues at stake;

 

  • third party users, such as contractors, outsourced staff and suppliers who require connections into the organization’s networks in order to undertake their work;

 

  • system administrators and IT support staff, who will already have at least a general appreciation of the issues;
  • management-level users, who may be responsible for in-house employees or home-based users, and who need to understand how cybersecurity issues will affect their departments; 

 

  • senior executive users, who will be responsible for making many of the business decisions that would be impacted by a successful cyber-attack.

 

Alternatively, the organization may decide to target a cross-section of users from different groups so that the overall organizational benefits can be seen, rather than solely those for a particular community.

 

Some topics will have greater relevance to particular target groups; social engineering, for example, may be more relevant to staff who have regular contact with customers and suppliers than to those who do not.

 

This does not imply that those who do not have as much external contact should not be included in that aspect of awareness, but that they might gain less from it.

 

Next, in the development of the programme, the team must clearly identify the topics that will be covered.

 

It is pointless trying to cover all aspects of cyber awareness since this will simply overwhelm the audience; instead, the programme should focus initially on a very tightly defined subset such as usernames and passwords, spam email or social engineering.

 

The campaign can be widened at a later stage once the results of the earlier work have been examined and the techniques used have been refined where appropriate.

 

The methods of communicating the message to the user community will vary considerably, and may well consist of some or all of the following:

  • posters, which can be placed where the staff can easily engage with the message, such as meeting rooms and other shared areas. Some posters might have a humorous focus in order to lighten the message, whilst others could be somewhat darker;

 

  • newsletters, which can be delivered by desk-drop in office buildings, or by email for offices and home workers alike;
  • give-away items such as coasters, coffee mugs, key fobs and mouse mats, which continue to reinforce the general message for as long as they are used;

 

  • screensavers, which might display a variety of messages, and which could be changed either at regular intervals or when a new message must be given out;
  • intranet websites that provide helpful advice, examples of good and bad cybersecurity behavior and links to additional informative material and training;

 

  • fact sheets and leaflets, which may be particularly relevant to a group within the organization, to the whole organization or to its business sector;

 

  • presentations at team meetings, in which a guest speaker talks for a few minutes on a hot topic and takes questions about the whole awareness programme, keeping the presentation ‘short and sweet’;

 

  • computer-based training (CBT), which delivers a more detailed level of knowledge and may be a mandatory requirement for certain users’ work. This might include data protection legislation, for example.

 

Once this part of the work is complete, the team may well have to approach the senior management team or board of directors to obtain funding approval, since it is unrealistic to expect that the work can be undertaken at no cost.

 

As with all business cases, the approach should focus on the likely impacts that will occur if the work does not proceed, as well as the benefits that will accrue when it does.

 

This is another reason for keeping the initial part of the campaign to a reduced volume of information since the costs will be lower and the board will find it easier to give approval.

 

Success at this early stage will then make it much easier to obtain board approval for further expenditure when the campaign moves on to cover more aspects of cybersecurity awareness.

 

The costs can be more easily identified if they are broken down into manageable areas, for example:

  • the hourly costs of staff who are engaged in delivering the awareness campaign as well as those who will be on the receiving end;
  • development costs, including the development and maintenance of any intranet websites or the production of materials such as posters and newsletters;
  • promotional costs, such as give-away items including branded pens, coffee mugs, key fobs, mouse mats and the like;
  • training costs, where external trainers are brought in to deliver all or part of the awareness campaign.

Some of these will be one-off costs, whilst others will be recurring, and the board will expect both to be clearly identified.
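Purely as an illustration of how such a breakdown might be presented, the sketch below separates invented one-off and recurring figures so that both totals are visible at a glance; none of the numbers comes from this guide.

```python
# Purely illustrative: split awareness-campaign costs into one-off and
# recurring items so the board can see both totals. All figures are invented.
one_off = {
    "intranet site development": 8_000,
    "poster and newsletter design": 2_500,
    "give-away items (mugs, fobs, mats)": 1_500,
}
recurring_per_year = {
    "staff time (delivery and attendance)": 12_000,
    "content maintenance and updates": 3_000,
    "external trainer sessions": 4_000,
}

print(f"One-off costs:      {sum(one_off.values()):>8,}")
print(f"Recurring per year: {sum(recurring_per_year.values()):>8,}")
```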

 

It should also be possible to attempt to quantify the potential impacts since the directors of organizations will need to be certain that the programme will deliver value for money.

 

Potential impacts include not only the direct financial losses anticipated if a particular incident occurs, such as lost sales revenue and the expenditure incurred in responding to and recovering from the incident, but also indirect losses such as reduced share value and damage to brand and reputation, although these are rather more subjective in nature.
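One common way of putting a figure on such impacts, though not one prescribed by this guide, is the annualized loss expectancy calculation: the expected cost of a single incident multiplied by how often it is expected to occur per year. The figures below are invented examples.

```python
# Annualized loss expectancy (ALE) = single loss expectancy (SLE)
# x annual rate of occurrence (ARO). A common risk-quantification
# approach; the figures below are invented examples.
def annualized_loss_expectancy(single_loss_expectancy: float,
                               annual_rate_of_occurrence: float) -> float:
    return single_loss_expectancy * annual_rate_of_occurrence

# e.g. a phishing-led incident costing roughly 50,000 per occurrence,
# expected about once every two years without an awareness programme:
print(annualized_loss_expectancy(50_000, 0.5))   # 25000.0
```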

 

Delivery and management of the programme

Although we have called this an awareness campaign, it actually goes further than this, because awareness is only the first stage in which the target audience is made aware of what they should know and when they are likely to need the information.

 

This may be delivered in a variety of ways, for example by printed material, email, electronic newsletters and intranet portals for those organizations having more sophisticated resources.

 

The campaign then moves up a level so that the target audience gain an understanding of why they need to be involved and how best they can participate. This may include raising awareness topics at team meetings and delivering specific presentations on the subject matter.

 

Evaluation and modification of the programme

Finally, the campaign is ready to show results from the earlier work and to have its effectiveness evaluated. As the campaign develops and widens its scope, the organization will expect to see the benefit in reduced or zero instances of successful cyber-attacks.

 

The team must ensure that the entire exercise has been carefully documented and that they can demonstrate the resulting benefits at the end of the pilot project so that more of the organization and greater areas of cybersecurity awareness can be addressed.

 

Once presented back to the board, success should breed success, and the team should be better placed to move on to raising awareness for the wider organization or in more topic areas.

 

The presentation should focus on both the financial and non-financial benefits, the value to the business itself and also to its external stakeholders, including suppliers and customers and the sector regulator if applicable, and should be completely honest about both the overall costs and the potential impacts of not progressing with a full rollout.

 

Once the board has given its commitment, the pilot user group should be acknowledged for their involvement, as this will not only reinforce the importance of the programme but also encourage others to become actively involved.

 

TRAINING


As mentioned earlier in this blog, awareness and training are two entirely different, but interconnected concepts. Whilst awareness places cybersecurity issues firmly in the minds of the user community in an organization, training will deliver very specific and often highly targeted information to those individuals or groups who have a specific requirement for it.

 

Training, and especially highly technical training, can be costly but, as with awareness, it has a direct payback in terms of reducing the number of incidents and the potential financial impact on the organization.

 

Cybersecurity training falls into two distinct categories:

Generic training, in which the underlying concepts of cybersecurity are explained and which gives a sound appreciation of the issues. This may be required by managers who are responsible for specialist security design and operational staff.

 

Specialized cybersecurity training, in which very specific skills are taught to a limited audience, such as the security staff who manage the organization’s security infrastructure.

 

A few final points to consider

In the case of product- or technology-specific training, it should be borne in mind that technology changes at an alarming rate, and updated courses will undoubtedly become necessary as time progresses.

 

The requirement for ongoing budget allocations for this should be factored into the cost estimates when preparing business cases.

 

One method of reducing training costs is by identifying those staff who already possess training skills, and who can pass on their knowledge to others. This ‘train the trainer’ approach can work well when budgets are limited.

 

The business cases for both generic and specialized cybersecurity training will need to be developed and presented on a case-by-case basis and should be presented in a similar manner to those for the awareness programme.

 

However, instead of being focused solely on benefits to the organization as a whole by targeting all users within the organization, these business cases should also focus on benefits to the organization by addressing the specific training needs of individual specialists.

 

A further note on trust and information sharing:

The world-famous Chatham House Rule may be invoked at meetings to encourage openness and the sharing of information. As far as the classification of information to be shared is concerned, trust works on two levels.

 

First, the originator must ensure that the information has been correctly classified, and must be confident that the recipients will handle the information in line with that classification. 

 

Second, recipients must have sufficient trust in the integrity of the originator so that they can have the same level of confidence in the accuracy and reliability of the information.

 

One final aspect of trust is the ability to have an independent party, trusted by all members of an information sharing community, who can act as a moderator, and can also perform the role of go-between in certain situations as we shall see later. This individual is sometimes known as the Trust Master.

 

INFORMATION CLASSIFICATION


Information to be shared must be classified according to its sensitivity, and whatever method is used, it must be possible for it to be used by both public and private sectors without the need to cross-reference their information classification schemes.

 

As mentioned previously, the Traffic Light Protocol is used by many information sharing initiatives and classifies information as one of four colors:

 

RED – Personal for named recipients only – in the context of a face-to-face meeting, for example, distribution of RED information is limited to those present at the meeting, and in most circumstances, will be passed verbally or in person.

 

AMBER – Limited distribution – recipients may share AMBER information with others within their organization, but only on a ‘need-to-know’ basis. The originator may be expected to specify the intended limits of that sharing.

 

GREEN – Community-wide – information in this category can be circulated widely within a particular community or organization. However, the information may not be published or posted on the internet, nor released outside the community.

 

WHITE – Unlimited – subject to standard copyright rules, WHITE information may be distributed freely and without restriction.

This method of information classification is widely used in information sharing communities around the world, since it is very simple to understand and implement, and additionally can be readily understood in other sectors or countries.
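Because the scheme is so simple, it is also easy to encode in tooling. The sketch below is just one way of representing the four colours as a lookup, with each sharing rule paraphrased from the descriptions above.

```python
# The four Traffic Light Protocol colours as a simple lookup; each rule is
# paraphrased from the descriptions above.
TLP_SHARING = {
    "RED": "named recipients only; normally passed verbally or in person",
    "AMBER": "within the recipient's organization, strictly need-to-know",
    "GREEN": "within the community, but not published or posted on the internet",
    "WHITE": "unlimited distribution, subject to standard copyright rules",
}

def sharing_rule(colour: str) -> str:
    return f"TLP:{colour.upper()} may be shared: {TLP_SHARING[colour.upper()]}"

print(sharing_rule("amber"))
```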

 

Most of the time, the originator of the information to be shared will determine its classification color, but on occasion, Trust Masters may decide to raise it if they feel that it is set too low.
