Cyber Crime—Studies and Reports
In this blog, we examine the CSI/FBI Survey report on the economic impact of cyber attacks and cybercrime, which gathered its data from computer security practitioners. The report is based on responses from 530 U.S. corporations, financial institutions, government agencies, medical institutions, and universities.
They also reviewed a study focused on worldwide economic damage estimates of all forms of digital attack by the British firm Mi2g. Another study they examined was the Computer Economics Institute (CEI) assessment of the financial impact of major virus attacks from 1995 to 2003.
We will focus only on their comments directed at research methodology, to highlight, in a constructively critical fashion, the areas that future studies would be well advised to consider.
Regarding the CSI/FBI Survey, the criticism was that the respondents were not a representative sample of the business organizations and other entities exposed to cyber risk. Moreover, survey recipients were not randomly chosen but were self-selected from among security professionals.
As a result, there was no rigorous, statistically sound method for extrapolating the cybercrime reports of a group of 530 respondents to the national level.
More significantly, 75% of respondents reported financial losses, yet only 47% could quantify them. Finally, the survey lacked a standardized method for quantifying the costs of cyber attacks.
The Mi2g study on worldwide economic damage estimates for all forms of digital attacks was criticized on the basis that their conclusions were based on the collection of economic information from a variety of open sources and extrapolated to a global level using a proprietary set of algorithms.
Since the model is proprietary, outside researchers cannot evaluate it or its underlying assumptions. The same criticism applies to CEI: its benchmarks and algorithms are the keys to its cost estimates, and because of their proprietary nature, outside evaluators cannot attest to the models or the assumptions behind them.
In 2002, a World Bank study criticized the existing base of information supporting projections about the extent of the electronic security problem as flawed for two reasons.
First, there are strong incentives that discourage the reporting of security breaches. Second, organizations are often not able to quantify the risks of the cyber attacks they face or to establish a dollar value on the costs of the attacks that have already occurred.
It is interesting to note that incentives not to report security breaches remain a problem to this day. The difficulty is that organizations often have real economic incentives not to reveal information about security breaches, because the costs of public disclosure may take several forms, such as the following:
Financial market impacts: The stock and credit markets and bond rating firms may react to security breach announcements. Negative reactions raise the cost of capital to reporting firms.
Even firms that are privately held and not active in public securities markets may be adversely affected if banks and other lenders judge them to be riskier than previously thought.
Reputation or confidence effects: Negative publicity may damage a reporting firm’s reputation or brand or cause customers to lose confidence. These effects may give commercial rivals a competitive advantage.
Litigation concerns: If an organization reports a security breach, investors, customers, or other stakeholders may use the courts to seek recovery of damages. If the organization has been open in the past about previous incidents, plaintiffs may allege a pattern of negligence.
Liability concerns: Officials of a firm or organization may face sanctions under federal laws such as the Health Insurance Portability and Accountability Act of 1996 (HIPAA), the Gramm-Leach-Bliley Act of 1999 (GLBA), or the Sarbanes-Oxley Act of 2003, which require institutions to meet various standards for safeguarding customer and patient records.
Signal to attackers: A public announcement may alert hackers that an organization’s cyber-defenses are weak and inspire further attacks.
Job security: IT personnel may fear for their jobs after an incident and seek to conceal the breach from senior management.
Global Data Breach Study
The 2014 Cost of Data Breach Benchmark Research Study, sponsored by IBM and independently conducted by the Ponemon Institute, was the ninth annual study and included 314 companies from ten countries. The participating nations were the United States, the United Kingdom, Germany, Australia, France, Brazil, Japan, Italy, and, for the first time, the United Arab Emirates and Saudi Arabia.
For the purposes of this research, a data breach was defined as an event in which an individual's name plus a medical record, financial record, or debit card information was put at risk, in either electronic or paper format.
The three main causes of a data breach were a malicious or criminal attack, a system glitch, or human error. The research methodology was quite impressive: the researchers collected in-depth qualitative data through 1,690 interviews with the 314 participating organizations, conducted over a ten-month period.
Those interviewed were IT, compliance, and information security practitioners who were knowledgeable about their organization's data breach and the costs associated with resolving it. It is important to note that the costs presented in the study are from actual data loss incidents, not hypothetical ones.
The methodology used to calculate the cost of a data breach, as well as the study's stated limitations, is worthy of inclusion, as it can serve as a comprehensive guideline for future research projects.
The importance of the Ponemon Institute's field research conclusions is reinforced by its publication of the methodological approach employed to calculate the cost of security breaches in the ten nations studied.
The activity-based cost methodology utilized in their study merits further highlighting, as we believe it will benefit future researchers as additional studies on computer security breaches are pursued:
To calculate the cost of a data breach, we use a costing methodology called activity-based costing (ABC). This methodology identifies activities and assigns a cost according to actual use. Companies participating in this benchmark research are asked to estimate the cost for all the activities they engage in to resolve the data breach.
Typical activities for discovery and the immediate response to the data breach include the following:
Conducting investigations and forensics to determine the root cause of the data breach
Determining the probable victims of the data breach
Organizing the incident response team
Conducting communication and public relations outreach
Preparing notice documents and other required disclosures to data breach victims and regulators
Implementing call center procedures and specialized training
The following are typical activities conducted in the aftermath of discovering the data breach:
Audit and consulting services
Legal services for defense
Legal services for compliance
Free or discounted services to victims of the breach
Identity protection services
Lost customer business based on calculating customer churn or turnover
Customer acquisition and loyalty program costs
Once the company estimates a cost range for these activities, we categorize the costs as direct, indirect and opportunity as defined in the following:
Direct cost—the direct expense outlay to accomplish a given activity.
Indirect cost—the amount of time, effort and other organizational resources spent, but not as a direct cash outlay.
Opportunity cost—the cost resulting from lost business opportunities as a consequence of negative reputation effects after the breach has been reported to victims (and publicly revealed to the media).
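The activity-based costing approach quoted above can be illustrated with a small sketch: each response activity receives an estimated cost, and the totals are grouped into the three categories. The activity names and dollar figures below are hypothetical illustrations, not figures from the Ponemon study.

```python
# Minimal sketch of activity-based costing (ABC) for a data breach.
# Activity names and dollar amounts are hypothetical, not from the study.

breach_activities = [
    # (activity, category, estimated cost in USD)
    ("forensic investigation",      "direct",      120_000),
    ("incident response team time", "indirect",     45_000),
    ("victim notification letters", "direct",       60_000),
    ("call center training",        "direct",       25_000),
    ("lost business from churn",    "opportunity", 300_000),
]

def cost_by_category(activities):
    """Sum estimated activity costs into direct/indirect/opportunity buckets."""
    totals = {"direct": 0, "indirect": 0, "opportunity": 0}
    for _activity, category, cost in activities:
        totals[category] += cost
    return totals

totals = cost_by_category(breach_activities)
print(totals)                 # per-category subtotals
print(sum(totals.values()))   # total estimated breach cost: 550000
```

The point of the categorization is that only the direct bucket is a cash outlay; the indirect and opportunity buckets must be estimated from staff time and customer behavior, which is why the study's interview process matters.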
Our study also looks at the core process-related activities that drive a range of expenditures associated with an organization’s data breach detection, response, containment, and remediation. The costs for each activity are presented in the Key Findings section. The four cost centers are:
Detection or discovery: Activities that enable a company to reasonably detect the breach of personal data either at risk (in storage) or in motion.
Escalation: Activities necessary to report the breach of protected information to appropriate personnel within a specified time period.
Notification: Activities that enable the company to notify data subjects with a letter, outbound telephone call, email or general notice that personal information was lost or stolen.
Post data breach: Activities to help victims of a breach communicate with the company to ask additional questions or obtain recommendations in order to minimize potential harm. Post data breach activities also include credit report monitoring or the reissuing of a new account (or credit card).
In addition to the above process-related activities, most companies experience opportunity costs associated with the breach incident, which result from diminished trust or confidence among present and future customers.
Accordingly, our Institute's research shows that the negative publicity associated with a data breach incident causes reputation effects that may result in abnormal turnover or churn rates as well as a diminished rate of new customer acquisition.
To extrapolate these opportunity costs, we use a cost estimation method that relies on the “lifetime value” of an average customer as defined for each participating organization.
Turnover of existing customers: The estimated number of customers who will most likely terminate their relationship as a result of the breach incident. The incremental loss is abnormal turnover attributable to the breach incident. This number is an annual percentage, which is based on estimates provided by management during the benchmark interview process.
Diminished customer acquisition: The estimated number of target customers who will not have a relationship with the organization as a consequence of the breach. This number is provided as an annual percentage.
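The extrapolation just described reduces to a simple formula: opportunity cost is roughly (customers lost to abnormal churn plus customers never acquired) multiplied by the average customer lifetime value. The following sketch uses entirely hypothetical inputs; the study's actual estimation relies on management interviews.

```python
# Sketch of the opportunity-cost extrapolation: abnormal churn plus
# diminished acquisition, valued at average customer lifetime value.
# All inputs below are hypothetical illustrations.

def opportunity_cost(customer_base, abnormal_churn_rate,
                     expected_new_customers, diminished_acquisition_rate,
                     lifetime_value):
    """Estimate breach opportunity cost from churn and lost acquisitions."""
    lost_existing = customer_base * abnormal_churn_rate
    lost_new = expected_new_customers * diminished_acquisition_rate
    return (lost_existing + lost_new) * lifetime_value

# Example: 100,000 customers with 2% abnormal churn, 10,000 expected new
# customers with a 5% diminished acquisition rate, $300 lifetime value.
print(opportunity_cost(100_000, 0.02, 10_000, 0.05, 300))  # 750000.0
```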
Our study utilizes a confidential and proprietary benchmark method that has been successfully deployed in earlier research. However, there are inherent limitations with this benchmark research that need to be carefully considered before drawing conclusions from findings.
Non-statistical results: Our study draws upon a representative, non-statistical sample of global entities experiencing a breach involving the loss or theft of customer or consumer records during the past 12 months. Statistical inferences, margins of error and confidence intervals cannot be applied to these data given that our sampling methods are not scientific.
Non-response: The current findings are based on a small representative sample of benchmarks. In this global study, 314 companies completed the benchmark process. Non-response bias was not tested so it is always possible companies that did not participate are substantially different in terms of underlying data breach cost.
Sampling-frame bias: Because our sampling frame is judgmental, the quality of results is influenced by the degree to which the frame is representative of the population of companies being studied. It is our belief that the current sampling frame is biased toward companies with more mature privacy or information security programs.
Company-specific information: The benchmark information is sensitive and confidential. Thus, the current instrument does not capture company-identifying information; instead, it allows individuals to use categorical response variables to disclose demographic information about the company and its industry.
Unmeasured factors: To keep the interview script concise and focused, we decided to omit other important variables from our analyses such as leading trends and organizational characteristics. The extent to which omitted variables might explain benchmark results cannot be determined.
Extrapolated cost results: The quality of benchmark research is based on the integrity of confidential responses provided by respondents in participating companies.
While certain checks and balances can be incorporated into the benchmark process, there is always the possibility that respondents did not provide accurate or truthful responses. In addition, the use of cost extrapolation methods rather than actual cost data may inadvertently introduce bias and inaccuracies.
This study reported that the average cost for each lost or stolen record containing sensitive and confidential information was $145. The most expensive breaches occurred in the United States, at $201 per record, and Germany, at $195 per record. The United States also experienced the highest average total cost, at $5.85 million, followed by Germany at $4.74 million.
On average, the United States had 29,087 exposed or compromised records. The two countries that spent the most to notify customers of a data breach were the United States and Germany, which, on average, spent $509,237 and $317,635, respectively.
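The U.S. figures above are internally consistent: multiplying the average number of exposed records by the per-record cost approximates the reported average total cost. A quick check:

```python
# Cross-checking the reported U.S. figures from the 2014 study:
# average records exposed times average per-record cost should
# approximate the reported average total cost.

us_records_exposed = 29_087
us_cost_per_record = 201.00  # USD per lost or stolen record

estimated_total = us_records_exposed * us_cost_per_record
print(round(estimated_total))  # 5846487, close to the reported $5.85 million
```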
Typical notification costs included IT activities associated with the creation of contact databases, determination of satisfying all regulatory requirements, engagement of outside experts, and other related efforts to alert victims to the fact their personal information had been compromised.
The average post-data-breach costs included help desk activities, inbound communications, special investigative activities, remediation, legal expenditures, product discounts, identity protection services, and regulatory interventions. The cost to the United States was $1,599,996; Germany, $1,444,551; France, $1,228,373; and the United Kingdom, $948,161.
Only 32% of the organizations in this research study had a cyber insurance policy to manage the risks of attacks and threats; the remaining 68% had neither a data breach protection clause nor a cyber insurance policy to address the costs identified above.
There is a vast amount of data in the 2014 Cost of Data Breach Study, providing a foundation on which future research can build, not only by the Ponemon Institute but also by other interested parties and researchers.
The increasing cost of data breaches to organizations has resulted in the emergence of the cyber insurance industry, as many businesses simply recognize the need for additional protection.
One advantage of seeking cyber insurance is that qualifying for coverage requires a company to meet the cyber resilience requirements mandated by the insurance carrier's underwriters.
In short, the insurance company is going to issue a cyber insurance policy only if reasonable security programs and policies are in place. This has the advantage of offering greater security to all concerned.
However, establishing a sound cyber resilience program means that an organization protects its networks, computers, and data systems beyond the typical cybersecurity program. So, the more organizations seek cyber insurance, the greater the possibility of improving our nation's overall cybersecurity.
Cyber Resilience Program Policies
It should be mentioned that as organizations seek cyber insurance, they will face increased costs, both in financial outlays for additional personnel and in new security software and other devices.
An example of this increase in cybersecurity requirements can be seen in the following document on securing protected data on university-owned mobile and non-mobile devices and the use of personal devices.
Note the requirements for compliance with important federal and state regulatory agencies that are listed and described within this policy. Compliance with these regulatory agencies and the encryption/password policy as well as the policy on data protection all result in an organization that is more prepared to defend itself against breaches.
Of course, at the same time, this creates the need to employ additional personnel to implement these policies and to assist in monitoring compliance. An example of this policy is presented in the following:
Securing Protected Data on University-Owned Mobile and Non-Mobile Devices Policy/Use of Personal Devices
The number and types of breaches occurring globally can best be ascertained by going directly to the source of those corporations and entities that are offering security services and obtaining their conclusions on the range of current breach activity.
The Symantec Corporation compiled an impressive data report in its "Internet Security Threat Report 2014"; Symantec maintains perhaps the most comprehensive source of Internet threat data in the world.
Their data are captured by the Symantec Global Intelligence Network of over 41.5 million attack sensors, which record thousands of events per second. The Symantec Network monitors threat activity in 157 countries and territories.
In addition to its real-time monitoring of events, Symantec maintains one of the world's most comprehensive vulnerability databases, consisting of more than 60,000 vulnerabilities identified over 20 years, from more than 19,000 vendors representing 54,000 products.
The Symantec Probe Network, a system of more than 5 million decoy accounts, collects data on spam, phishing, and malware.
Symantec’s Skeptic Cloud proprietary system for heuristic technology is designed to detect new sophisticated targeted threats before they reach the networks of their clients.
The scope of this system is impressive, as over 8.4 billion e-mail messages are processed each month along with more than 1.7 billion web requests filtered each day across 14 data centers.
The data collected over 2013 recorded eight mega breaches in which more than 10 million identities each were exposed, and targeted spear-phishing attacks increased by 91%.
In addition, there was a dramatic increase in watering-hole attacks, in which attackers install malware on a legitimate website for the purpose of advancing an advanced persistent threat (APT) campaign.
The increasing frequency of both spear-phishing and watering-hole attacks suggests an increase in APT attacks. Symantec's research indicated that 77% of legitimate websites had exploitable vulnerabilities and that 16% of all websites had a critical vulnerability that an individual or group could exploit to target victims visiting those websites.
There was a 500% increase in ransomware attacks, in which the attacker pretends to be a law enforcement agent and demands $100 to $500 to unlock the victim's computer from encryption planted surreptitiously on it.
This attack evolved into the CryptoLocker attack, in which the user’s files and entire hard drive were encrypted and the attacker would decrypt the files only if a ransom fee was paid.
Other conclusions reached by the extensive Symantec Internet Threat Report were the increase in social media scams and the increase in malware targeting mobile applications and devices.
Also, for the first time, attackers began attacking devices through the Internet of Things (IoT), such as baby monitors, security cameras, smart televisions, automobiles, and even medical equipment.
The IoT will become a prime attack vector for which we are clearly not prepared to provide security. As the volume of data increases with the proliferation of devices connected to the IoT, we will also experience a phenomenal number of new threats and attacks.
Another important source of global data is provided by FireEye and Mandiant, a FireEye company. Their data are gathered from more than 1216 organizations in 63 countries across more than 20 industries.
In addition to collecting autogenerated data, they surveyed 348 organizations and concluded that no nation or corner of the world is free from attack vulnerabilities.
Also, they concluded that the two most vulnerable vertical industries to attack were higher education and financial services.
Higher education is a prime target because of its vast amount of valuable intellectual property and its open-network philosophy, which makes it quite vulnerable and easy to breach.
The financial services industry is vulnerable because of the vast amounts of cash and financial resources it possesses.
The “Verizon 2013 Data Breach Investigations Report” was based on data collected from the following 18 contributors:
Complete List of 2013 DBIR Partners
Australian Federal Police
CERT Insider Threat Center at the Carnegie Mellon University Software Engineering Institute (United States)
Consortium for Cybersecurity Action (United States)
Danish Ministry of Defence, Center for Cybersecurity
Danish National Police, NITES (National IT Investigation Section)
Deloitte (United States)
Dutch Police: National High Tech Crime Unit
Electricity Sector Information Sharing and Analysis Center (United States)
European Cyber Crime Center
G-C Partners, LLC (United States)
Guardia Civil (Cybercrime Central Unit) (Spain)
Industrial Control Systems Cyber Emergency Response Team
Irish Reporting and Information Security Service (IRISS-CERT)
Malaysia Computer Emergency Response Team, CyberSecurity Malaysia
National Cybersecurity and Communications Integration Center (United States)
ThreatSim (United States)
U.S. Computer Emergency Readiness Team
U.S. Secret Service
The Verizon combined data set for 2013 reflected 2012 numbers, in which 47,000 reported security incidents resulted in 621 confirmed data disclosures and 44 million compromised records.
Over the nine-year period during which Verizon has been collecting these data, it has reported on 2,500 data disclosures and over 1.1 billion compromised records.
The impressive amount of data collected by Symantec, FireEye, and Verizon provides an important perspective on the extraordinary challenges confronting computer security professionals. Also, the attacks reported are only those attacks known and discovered.
Many successful attacks remain unknown for long periods of time; in the case of APT attacks, the victim typically remains unaware of the attack for approximately 243 days.
Some attacks have resulted in the attacker’s presence on the targeted system for as long as four years. We have no way of knowing how many systems have been attacked without the knowledge of the victim.
The most sophisticated form of targeted attack in 2013 made use of the watering-hole technique, in which attackers infiltrate a legitimate website, plant malicious code, and then simply wait for their target to access the site, monitoring the logs of the compromised website.
The attacker's reconnaissance of the potential target makes it possible to select a number of legitimate websites that the victim is likely to visit because of the victim's interest in the website's subject matter.
This attack technique is effective because the victim is not suspicious of legitimate websites and is totally unaware that someone may have planted malicious code on the websites.
Another industry targeted in 2013 was healthcare. The motive for these attacks is the enormous number of records containing valid personal identification information, which is valuable to attackers who can use it themselves or sell it to other cyber-criminals.
This tactic will certainly increase in volume as a result of the Affordable Care Act (Obamacare), since millions of people are adding their medical information to databases that operated in a most ineffective fashion during the first four months leading up to full implementation.
Another reason this will become a high-value target is the potential access that attackers could gain to U.S. Treasury databases, since those enrolling in the Affordable Care Act must be qualified by income level.
Therefore, healthcare databases interacting with Treasury and Internal Revenue Service databases present an opportunity for attackers, who are no doubt already developing malicious code and malware targeted at these areas.
The business model for the delivery of exploit toolkits such as the Blackhole, Whitehole, and Magnitude exploit kits, and of ransomware threats such as Reveton, has changed.
The new business model permits the developers of the malware to retain ownership: rather than selling the kits outright, they offer them as a service, maintaining full control of the code and administering the toolkit for a fee to anyone wishing to compromise another person's computer system.
Some attackers now advertise their services on the Silk Road and the Dark Web; some have even been emboldened to offer their services on the open Internet.
FireEye and Mandiant reported on the new generation of attacks, including high-end cybercrime and state-sponsored campaigns known as APT attacks. Common to these attacks is their organizational method, in which multiple teams of people are involved, each with specific assigned tasks.
Another unique facet of an APT attack is that it is not a single, one-step attack but a coordinated campaign with multiple steps. The process of the attack is described as follows:
1. External reconnaissance. Attackers typically seek out and analyze potential targets—anyone from senior leaders to administrative staff—to identify persons of interest and tailor their tactics to gain access to target systems. Attackers can even collect personal information from public websites to write convincing spear-phishing e-mail.
2. Initial compromise. In this stage, the attacker gains access to the system. The attacker can use a variety of methods, including well-crafted spear-phishing e-mails and watering-hole attacks that compromise websites known to draw a sought-after audience.
3. Foothold established. The attackers attempt to obtain domain administrative credentials (usually in encrypted form) from the targeted company and transfer them out of the network.
To strengthen their position in the compromised network, intruders often use stealthy malware that avoids detection by host-based and network-based safeguards.
For example, the malware may install with system-level privileges by injecting itself into legitimate processes, modifying the registry, or hijacking scheduled services.
4. Internal reconnaissance. In this step, attackers collect information on surrounding infrastructure, trust relationships, and the Windows domain structure.
The goal: move laterally within the compromised network to identify valuable data. During this phase, attackers typically deploy additional backdoors so they can regain access to a network if they are detected.
5. Mission completed? Once attackers secure a foothold and locate valuable information, they exfiltrate data such as e-mails, attachments, and files residing on user workstations and file servers.
Attackers typically try to retain control of compromised systems, poised to steal the next set of valuable data they come across. To maintain a presence, they often try to cover their tracks to avoid detection.
In their book Securing the Smart Grid: Next Generation Power Grid Security, Tony Flick and Justin Morehouse discuss what security professionals expect and predict, particularly regarding the emergence of an all-encompassing smart grid.
Clearly, the electrical power grid has received the most attention; in California, PG&E has established a smart grid for customers' use of electricity.
Some areas are moving their natural gas and water systems through this same transformation, so they may also operate within a smart grid.
The creation of a metering infrastructure will require advanced sensor networks to be deployed, and this will enable the utility workers to locate water and gas leaks faster and even remotely.
This system of smart grids will assist customers in regulating their use of these utilities in a more cost-effective manner.
However, security professionals are concerned that these new smart grids and their supporting infrastructure will present security vulnerabilities that could cause a local or even national catastrophe if targeted by cyber-criminals or nation-states focused on harming the United States.
Interestingly, the city of Tallahassee, Florida, is creating a smart grid that includes its electricity, gas, and water utilities. While this will make it more convenient for citizens to see the total cost of their utility services in real time on one system, it also presents a single point of failure through which all utility service could be lost.
The possibility of failure is consistent with our nation’s concern for the safety and reliability of our critical infrastructure.
On May 1, 2013, Bill Gertz reported in The Washington Free Beacon that U.S. intelligence agencies had traced a recent cyber intrusion into a sensitive infrastructure database to the Chinese government or its military cyber warriors.
The compromise of the U.S. Army Corps of Engineers National Inventory of Dams suggests that China might be preparing to conduct a future cyber attack against our electrical power grid, including the electricity produced by our hydroelectric dams.
Evidently, the hacked database contains sensitive information on the vulnerabilities of every major dam in the United States, including the number of people who might be killed if a dam failed; thus it covered significant and high-hazard dams.
General Keith Alexander has repeatedly warned our nation that potential adversaries are increasing their level of sophistication in their offensive cyber capabilities and tactics.
Since cyber warfare is moving well beyond simply the disruption of networks to the era in which malware and malicious code can be planted within computer systems, we now face the enhanced risk of destruction of hydroelectric generators at dams with the potential for cyber attacks on the electrical power controllers as well.
Clearly, the Chinese and the Russians have military cyber capabilities to clandestinely implant malicious code and malware into the U.S. electrical power grid system. We have already noted attempts at the penetration of these critical infrastructures, and we must remain vigilant to protect against further attempts.
Cyber Liability, First-Party, and Third-Party Insurance
The degree of cyber insurance your organization should acquire depends on the sensitivity of the data it is responsible for maintaining. Other issues that may concern an organization and call for cyber insurance include the following cyber liability issues:
Unauthorized access to data
Disclosure of confidential data
Loss of data or digital assets
Introduction of malware, viruses, and worms
Ransomware or cyber extortion
Advanced persistent threat attacks
Invasion of privacy lawsuits
Defamation from an employee’s email
Failure to notify victims of a breach
In addition to cyber liability insurance, there is also optional insurance coverage from some insurance carriers that address first-party cybercrime expenses, which may include the following:
Crisis management expenses, including:
Cost of cybersecurity forensic experts to assist in cyber extortion cases
Public relations consultants to work with local media, providing appropriate information to maintain the goodwill of customers
Insurance carriers may also be prepared to offer additional first-party lines of coverage; the organization’s risk manager can negotiate any number of concerns to create the type of cyber insurance policy that best fits the needs of the organization and the people they serve.
A more critical cyber insurance policy coverage would fall under third-party liability, where the claims of breach arising from cybersecurity failures result in damage to third-party systems.
Typical problems arise in this area when a company’s credit card and point-of-sale systems fall below the industry-based standards that the major credit card companies mandate for compliance. Two recent cases that highlight this problem are the attacks against Schnucks Markets and against Target.
Kavita Kumar reported on the proposed class action settlement stemming from the 2014 Schnucks Markets computer system breach, in which an estimated 2.4 million payment cards were compromised. Under the proposed settlement, Schnucks would pay customers up to $10 for each credit or debit card that was compromised and had fraudulent charges posted on it that were later credited or reversed.
Schnucks also would pay customers for certain unreimbursed out-of-pocket expenses, such as bank overdraft and late fees, as well as up to three hours of documented time spent dealing with the security breach, at a rate of $10 an hour. These expenses would be capped at $175 per class member.
The aggregate cap that Schnucks would pay on the above claims would be $1.6 million. If claims exceed that amount, customers would still be guaranteed at least $5 for each compromised card.
Furthermore, Schnucks would pay up to $10,000 for each related identity theft loss, with the total capped at $300,000; up to $635,000 for the plaintiff and settlement attorney’s fees; and $500 to each of the nine named plaintiffs in the lawsuit.
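Taken together, the settlement terms reduce to a capped per-member payout formula. The sketch below is illustrative only: the function name and example figures are hypothetical, and it omits the aggregate caps and the $5-per-card floor described above.

```python
def schnucks_claim_payout(compromised_cards, out_of_pocket_fees, hours_spent):
    """Estimate one class member's payout under the proposed settlement terms.

    Illustrative assumptions: $10 per compromised card with reversed
    fraudulent charges; reimbursement of fees plus $10/hour for up to
    3 documented hours, with these expenses capped at $175 per member.
    """
    card_payment = 10 * compromised_cards
    time_payment = 10 * min(hours_spent, 3)                        # 3-hour limit
    expense_payment = min(out_of_pocket_fees + time_payment, 175)  # $175 cap
    return card_payment + expense_payment

# A customer with 2 compromised cards, $40 in bank fees, and 4 hours spent:
print(schnucks_claim_payout(2, 40, 4))  # 20 + min(40 + 30, 175) = 90
```

The cap binds only on the expense side, so a member with large unreimbursed fees still collects the full per-card payments.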
While Schnucks denied any wrongdoing, the cost of the litigation was substantial, and they want to bring closure to the case to avoid further expense, business disruption, and reputational loss.
The basis for the class action claim against Schnucks Markets centered on its alleged failure to secure customers’ personal financial data and its failure to notify customers that their personal information had been stolen.
It is interesting to note that little focus was placed on those responsible for the malicious breach; the burden of responsibility was transferred to the victims, whose losses are still being calculated by the credit card companies, which are also suing Schnucks Markets for their third-party losses.
The class action litigation filed against the Target store was based on a breach of security that permitted the attackers to place malicious software on thousands of cash registers in various Target stores and gain access to 70 million records that contained names and e-mail addresses of customers.
In addition to the class action suit by Target customers, the Jefferies investment bank estimates that Target may also face a bill of $1.1 billion to the payment card industry as a result of this breach.
The level of security and its quality will be a key in the Target litigation, as will the timing of its notification of this breach to its customers and to the regulators.
The importance of notification is clearly a critical factor for any organization suffering a security breach. Regulatory agencies at both the federal and state levels have imposed standards that companies must adhere to in reporting security breaches to those whom they suspect might be compromised by the breach.
In 2011, the Securities and Exchange Commission issued guidelines stating that publicly traded companies must report significant instances of cyber theft and cyber attacks and even the material risk of such a security event. California was the first state to require data breach notifications in 2003.
In 2012, companies and governmental agencies were required to notify the California Attorney General’s Office of any data breach that involved more than 500 Californians.
Cyber insurance can be a valuable investment, particularly third-party insurance protection against litigation brought by the payment card industry against businesses that fail to comply with the Payment Card Industry Data Security Standard, which requires that businesses who use online transactions abide by certain procedures.
Today, businesses, organizations, and even universities should examine their business partners to be certain that their respective security processes comply with payment card standards, or they may be left vulnerable by a business partner’s security breach.
Past Computer Crime Studies and Reports
We believe that there is increasing improvement in the efforts of the industry to sharpen their cost assessment of cybersecurity, and one very good example is the work being performed by the Ponemon Institute.
The Ponemon Institute has been commissioned by corporations such as IBM, Hewlett-Packard, and Experian Corporation to focus on a number of studies involving security breaches, as well as a cost-benefit analysis study;
and they have applied a research methodology to control for bias as well as pointed out the limitations of their study, thus providing readers with a clearer report than most previous reports have achieved.
The Ponemon Institute’s studies of cybercrime included six nations: the United States, the United Kingdom, Germany, Australia, Japan, and France. The study of these six nations involved field-based research as opposed to a more traditional survey research methodology.
A total of 234 companies were included in the study, and the sample consisted only of larger organizations with more than 1000 enterprise seats, where an enterprise seat was defined as a direct connection to the network and enterprise systems.
The Cybercrime report stated that ten months of effort was required to recruit the companies and to build an activity-based cost model to analyze the data, collect source information, and complete the analysis.
A total of 1935 interviews were conducted with company personnel, although each nation’s individual study was based on a subset of that total. For example, the Ponemon Institute’s study of the United States was based on 561 interviews drawn from 60 U.S. companies.
A total of 1372 attacks were used to measure cost; however, the number and type of attacks reviewed varied by nation, which produced higher costs in some nations than in others. For instance, in the study of the United States, 488 attacks were recorded at an average annualized cost of $11.56 million.
The above data were collected from the seven studies performed by the Ponemon Institute’s research in each nation. There are a number of very interesting results and important data included in each of the seven reports, and these reports will stimulate a number of questions and, hopefully, additional research.
The focus of these field-based studies was to acquire useful data primarily for the industry and presumably any other interested parties. The data were collected within a cost framework that measured two cost streams, one pertaining to internal security-related activities and the second to the external consequences and costs.
The Ponemon Institute’s Cost of Cyber-Crime Study was unique in addressing the core systems and business-process-related activities that are responsible for creating a range of expenditures associated with a large company's response to cybercrime. The inclusion of direct, indirect, and opportunity costs associated with cyber crimes is a very essential and valuable framework of their seven studies.
The study’s definition of cybercrime was limited to cyber attacks and criminal activity conducted via the Internet. These attacks were defined as including the theft of intellectual property, confiscating online bank accounts, creating and distributing viruses, posting confidential business information on the Internet, and disrupting a country’s critical infrastructure.
It is useful to present the key findings of the Ponemon Institute’s field-based cybercrime studies and to encourage further research in these vital areas of inquiry.
In reviewing the 2013 cost of cybercrime in the United States, the study is based on 60 U.S. companies that are considered large, with over 1000 enterprise seats. The key findings of the 2013 U.S. study reported an average annualized cost of $11.6 million per year with a range of $1.3 million to $58 million.
The 60 companies in the study experienced 122 successful attacks per week, and the most costly cyber crimes were denial-of-service attacks, malicious insiders, and web-based attacks.
The average time to resolve an attack was 32 days, with an average cost to participating organizations of $1,035,769 during this 32-day period. On an annualized basis, the detection and recovery combined for 49% of the total internal activity cost, with cash outlay and labor representing the majority of these costs.
The Ponemon Institute was careful to identify the limitations of their research study by cautioning against extrapolation of their data beyond the field-based survey parameters of the size of the organizations reviewed and the exclusion of small business organizations as well as governmental organizations.
The key findings of the Ponemon Institute’s 2013 Cost of Cyber-Crime Study: Global Cyber crime report, which includes all six nations, reveals that the average annualized cost of cybercrime for the 234 organizations was $7.2 million per year, with a range of $375,387 to $58 million.
The companies experienced successful attacks every week, and the most costly cyber crimes were caused by malicious insiders, denial-of-service attacks, and web-based attacks.
The average time to resolve a cyber attack was 27 days, with an average cost to participating organizations of $509,665 during this 27-day period. On an annualized basis, the detection and recovery combined for 54% of the total internal activity cost, with the productivity loss and direct labor representing the majority of these costs.
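Simple arithmetic on the reported averages makes the U.S. and global burn rates directly comparable; the sketch below just divides each study's resolution-period cost by its reported resolution time:

```python
# Reported Ponemon 2013 averages: cost incurred during the resolution period
studies = {
    "U.S. 2013":   {"days": 32, "cost": 1_035_769},
    "Global 2013": {"days": 27, "cost": 509_665},
}

for name, s in studies.items():
    per_day = s["cost"] / s["days"]  # average cost per day of resolution
    print(f"{name}: ~${per_day:,.0f} per day")
```

The implied U.S. figure of roughly $32,000 per day of unresolved attack is nearly double the six-nation average, consistent with the higher overall U.S. annualized cost.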
In another industry-based study, IBM’s Managed Security Services Division reported that they continuously monitor tens of billions of events per day for their 3700 clients in more than 130 countries for the express purpose of identifying security breaches for interdiction and removal.
In the one-year period from April 1, 2012, through March 31, 2013, normalizing the data to describe an average client organization of between 1000 and 5000 employees, they reported 81,893,882 security events, for an average of 73,400 security attacks against a single organization.
These 73,400 attacks were identified by correlation and analytic tools as malicious activity attempting to collect, disrupt, deny, degrade, or destroy information systems resources or the information. The monthly average to an IBM single organization client amounted to 6100 attacks, with 7.51 security incidents requiring action on a monthly basis.
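The normalization behind these monthly averages can be reproduced from the reported annual figures (a sketch of the arithmetic only; IBM's actual classification pipeline is proprietary):

```python
# IBM Managed Security Services figures for an average client organization
annual_events = 81_893_882   # raw security events logged over the year
annual_attacks = 73_400      # events classified as malicious activity

monthly_attacks = annual_attacks / 12             # ~6,117, reported as 6100
attack_fraction = annual_attacks / annual_events  # well under 0.1% of raw events

print(f"{monthly_attacks:,.0f} attacks/month; "
      f"{attack_fraction:.4%} of raw events are attacks")
```

The striking ratio is the filtering one: fewer than one raw event in a thousand survives correlation and analysis as an actual attack, and only about 7.5 incidents a month require action.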
The two types of incidents that represented the most common attack types were malicious code and sustained probe/scans.
It is interesting to note that 20% of the attacks were considered malicious insider attacks. While this cybercrime report did not note any cost factors, we included it to represent the global nature of the cybersecurity problem and the continuing expansion in the number of incidents that must be monitored to ensure due diligence in protecting an organization.
Another economic cyber risk model reviewed was the annual loss expectancy (ALE) model developed in the late 1970s by the National Institute of Standards and Technology. The ALE model creates a dollar figure, produced by multiplying the cost, or impact, of an incident (in dollars) by the frequency (or probability) of that incident.
So the ALE cost model analyzes security breaches from the perspective of (1) how much the breach would cost and (2) how likely it is to occur. The ALE cost model combines the probability and severity of computer attacks into a single number, which represents the amount that a firm could actually expect to lose in a given year.
While ALE has become a standard unit of measure for talking about the cost of cyber attacks, it has not been used by many to assess cyber risk. One critique of the ALE cost model is the difficulty of establishing cost measurements and the equal difficulty of specifying the likelihood of an attack.
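The ALE calculation itself is a single multiplication; the sketch below applies it to two entirely hypothetical threat scenarios to show how probability and severity combine into one budget figure:

```python
def annual_loss_expectancy(incident_cost, annual_frequency):
    """ALE = single-incident cost (dollars) x expected incidents per year."""
    return incident_cost * annual_frequency

# Hypothetical threat scenarios: (cost per incident, expected frequency/year)
scenarios = {
    "ransomware outbreak": (500_000, 0.25),  # costly but rare
    "phishing-led fraud":  (25_000, 4.0),    # cheaper but frequent
}

total_ale = sum(annual_loss_expectancy(c, f) for c, f in scenarios.values())
print(f"Total ALE: ${total_ale:,.0f}")  # $225,000: a rough ceiling for annual security spend
```

Note how the rare, expensive scenario and the frequent, cheap one contribute comparable amounts, which is exactly the aggregation the ALE critics find hard to ground: both the cost and the frequency inputs are difficult to estimate in practice.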
Developing economic cost models to assess and measure security breaches gives an organization a method for assessing the cyber risks it confronts.
Without these cost models, how can they make rational decisions about the appropriate amount of money and resources they should spend on the security of their information systems and computer networks?
In short, without these cost models, it is difficult, at best, and almost impossible to evaluate the effectiveness of computer security efforts.
Organizations, particularly businesses and corporations, should quantify security breach factors and their frequency so they are capable of assessing the optimal amount to spend on computer security systems and of measuring the effectiveness of this financial investment and their computer security programs.
This study, although limited, is unique because it focuses on covered events and actual claims payouts. We asked the major underwriters of cyber liability to submit claims payout information based on the following criteria:
The incident occurred between 2009 and 2011
The victimized organization had some form of cyber or privacy liability coverage
A legitimate claim was filed
We received claims information for 137 events that fit our selection criteria. Of those, 58 events included a detailed breakout of what was paid on the claim. Many of the events submitted for this year’s study were recent, which means the claims are still being processed and actual costs have not yet been determined.
We used our entire sampling of 137 events to analyze the type of data breached, the cause of data loss and the business sectors affected. We used the smaller sampling (58 events) to evaluate the payouts associated with the events—again based on the type of data breached, the cause of data loss and the business sectors affected.
As a result, readers should keep in mind the following:
Our sampling is a small subset of all breaches
Our numbers are lower than other studies because we focused on claims payouts rather than expenses incurred by the victimized organizations
Our numbers are empirical as they were supplied directly by the underwriters who paid the claims
Most claims were reported as total losses. Of those that mentioned retentions, these ran anywhere from $50,000 to $1 million
For claims dated in 2011, the study reported an average cost per incident of $3.7 million. The average cost of legal defense was $582,000, and the average legal settlement was $2.1 million.
The average cost for crisis services, which included forensics, notification, call center expenses, credit monitoring, and legal guidance, was $983,000, and the business sectors most affected were financial services, health care, and retail stores.
The 2013 Third Annual NetDiligence “Cyber Liability and Data Breach Insurance Claims Study” provided an update on the data from the 2011 figures, and they reported health care as the most frequently breached business sector, followed by the financial industry.
The claims submitted ranged from $2500 to $20 million; however, the most typical range of claims was $25,000 to $400,000. Of the 140 claims submitted, 88 reported a total payout of $84 million;
however, claims not reporting a total payout were still in litigation and a settlement had not yet been reached, so these figures will increase as the settlements are closed.
The objective of both NetDiligence studies was to help risk management professionals and insurance underwriters understand the impact of security breaches.
These two NetDiligence studies consolidated claims from multiple insurance carriers so that the combined pool of claims would reveal real costs and possible future trends.
These insurance industry studies, alongside reports by the Ponemon Institute and several other industry reports, will be necessary to establish more precise actuarial tables.
Challenges to Current Cybersecurity Models
Based on the numerous industry-driven surveys on security breaches, especially the cybersecurity surveys commissioned from the Ponemon Institute throughout the world, coupled with the NetDiligence surveys on actual cyber insurance claims, it is abundantly clear that cybersecurity breaches are a global problem that is growing in both volume and cost.
Financial Services Sector
One of the areas in which growth continues to be targeted by cyber-criminals is the financial services sector. Financial service companies in the United States lost, on average, $23.6 million in 2013, and this represented a 44% increase in average loss from the previous year of 2012.
In fact, financial institutions are experiencing such an increase in cyber threats that most, if not all, of them now assume that their customers’ personal computers (PCs) are infected with viruses.
This assumption is underscored by Internet-based banking systems that are accessed through smartphones, which are typically insecure and open to multiple viruses because they are also used to access social media sites.
Another factor supporting the belief that customers’ PCs are widely infected is the abundance of viruses targeting the financial community, including ZeuS, SpyEye, Conficker, DNS Changer, Gameover Zeus, the Black Hole Exploit Kit, and fake antivirus software.
In a white paper on cyber threats and financial institutions, Josh Bradford reports on the eight cyber threats that the FBI notes are of concern for financial institutions as follows:
Account takeovers
Third-party payment processor breaches
Securities and market trading company breaches
Automated teller machine skimming breaches
Mobile banking breaches
Supply chain infiltration
Telecommunication and network disruption
An important aspect of account takeovers is an emerging trend in which cyber criminals refocus their attacks on customers rather than only on the financial institution. This is accomplished through targeted phishing schemes via e-mail or text messages designed to compromise the customer’s online banking information.
The “high roller” malware is designed to specifically target the PCs of bank customers with high account balances, and the infected PC or smartphone will automatically transfer large sums of money into mule business accounts at the precise moment the customers log into their account.
In addition, the proliferation of relatively cheap “do it yourself virus kits” available through the Internet is creating more problems for the financial services sector.
Additional concerns to the financial services firms throughout the world are the increasing frequency, speed, and sophistication of cyber attacks.
The Deloitte Center for Financial Services analyzed data from Verizon’s annual data breach investigations report and discovered that in 2013, 88% of the attacks initiated against financial service companies succeeded in less than 24 hours.
The speed of the cyber attack, the significant lag time in the discovery of the attack, and the longer restoration of system services highlight the challenges in both the cyber attack detection and response capabilities.
In short, the attackers’ “skill to attack” is outpacing the financial service firms’ “ability to defend,” undermining the rapid discovery and restoration that are so necessary to the continued financial stability and health of financial sector firms.
The increasing sophistication of cyber attacks, which are being directed at many sectors beyond the financial sector, can be seen in the June 2014 PricewaterhouseCoopers survey, “U.S. Cyber Crime: Rising Risks, Reduced Readiness”: “Recently, for instance, hackers engineered a new round of distributed denial-of-service (DDoS) attacks that can generate traffic rated at a staggering 400 gigabits per second, the most powerful DDoS assaults to date.”
Security Breaches, Insurance Claims, and Actuarial Tables
The NetDiligence study “Cyber Liability and Data Breach Insurance Claims” is one of the more comprehensive examinations of the actual insurance pay-outs on claims for data breaches.
The study was interested in comparing the actual cyber payouts to the anecdotal breach information that is reported in the media and industry reports.
This study reported the real costs of cyber insurance payouts from an insurance company’s perspective. Perhaps the most significant contribution of this study was the focus on improving the actuarial tables by encouraging risk managers and those working in the data security field to perform more accurate risk assessment reviews and to implement more effective safeguards to protect their organizations from data breaches.
As the improvement of safeguards and risk assessments make progress in their respective areas, the insurance industry will be in a position to improve the actuarial tables, which will result in more precise price modeling of the cyber insurance policies. The NetDiligence study also compared their results to the work of the Ponemon study:
Major underwriters of cyber liability provided information about 137 events that occurred between 2009 and 2011, which we analyzed for emerging patterns. Among our findings: PII (personally identifiable information) is the most typically exposed data type, followed by PHI (private health information).
Topping the list of the most frequently breached sectors are health care and financial services. The average cost per breach was $3.7 million, with the majority devoted to legal damages.
When compared with the Ponemon Institute’s Seventh Annual U.S. Cost of a Data Breach Study, our figures appear to be extremely low. The institute reported an average cost of $5.5 million per breach and $194 per record.
However, Ponemon differs from our study in two distinct ways: the data they gather is from a consumer perspective and as such, they consider a broader range of cost factors such as detection, investigation and administration expenses, customer defections, opportunity loss, etc.
Our study concentrates strictly on costs from the insurer’s perspective and therefore provides a more focused view of breach costs.
The NetDiligence study also focuses primarily on insured per-breach costs, rather than per-record costs. As explained by Thomas Kang, Senior Claims Specialist at ACE USA, “You have to be careful in correlating too closely the cost of a breach response to the number of records.
Certainly, it will cost more to notify and offer credit monitoring to more people, and there is a greater risk of potential third-party claims for incidents involving a higher number of records.
However, the legal and forensic costs can vary significantly depending on the complexity of the incident and the specific requirements in the policyholder's industry, independent of the number of records.
There appears to be an expectation in the marketplace for a breach to cost a certain amount simply based on the number of records, but our policyholders have been surprised to find that the actual response costs generally will be unique to the specifics of the breach.
For example, we have breach incidents involving fewer than 5,000 records with remediation costs in six figures because of the policyholders’ industry and the complexity of the breach.”
The NetDiligence study described a methodology in which they worked specifically with insurance underwriters and requested information on data breaches and the claim losses sustained.
Survey of Financial Institutions’ Cybersecurity Programs
Concern at the New York State Department of Financial Services over the increasing frequency and sophistication of cyber attacks against financial institutions led it to survey all 154 financial institutions within New York State.
The survey was designed to seek information on each of the 154 institutions’ cybersecurity programs, their costs, and future plans. The objective of the survey research was to obtain a horizontal perspective of the financial services industry’s efforts to prevent cybercrime and to protect consumers and clients in the event of a security breach.
The total of 154 depository institutions that completed the survey included 60 community and regional banks, 12 credit unions, and 82 foreign branches and agencies.
They were asked questions about their information security framework; use and frequency of penetration testing; budget costs associated with cybersecurity; the frequency, nature, cost of, and response to cybersecurity breaches; and future plans on cybersecurity.
Almost 90% of the surveyed institutions reported having an information security framework, in which the key pillars of their information security program included the following:
A written information security plan
Security awareness education and employee training
Risk management of cyber risk including trends
Information security audits
Incident monitoring and reporting
The vast majority of institutions reported utilizing some or all of the following software security tools:
Spyware and malware detection
Server-based access control lists
Intrusion detection tools
Vulnerability scanning tools
Encryption for data in transit
Data loss prevention tools
Also, most of the institutions used penetration testing as an additional important element to the above listing of defenses. However, more than 85% of the institutions participating in penetration testing used third-party consultants to perform the penetration tests.
Another point of importance was the number of institutions participating in the Information Sharing and Analysis Centers (ISACs): participation dropped off among smaller institutions, with a small-institution participation rate of 25% versus a large-institution rate of 60%.
Institutions, particularly the smaller ones, could gain an advantage by participating in the F-ISACs, or Financial ISACs, since the federal government and the Department of Homeland Security share a great deal of their information through the reports sent to the ISACs.
It is interesting to note that virtually all surveyed institutions anticipate budgetary increases for their cybersecurity programs, and the three principal reasons for this are because of (1) compliance and regulatory requirements, (2) business continuity and disaster recovery, and (3) concern for reputational risk.
Despite budgetary increases in their cybersecurity programs, their expressed concerns as to the primary barriers they will encounter in building cybersecurity programs for the future centered on the increasing sophistication of the threats and cyber attacks. They were also concerned about emerging technologies and their ability to keep pace with these new technologies.
New Cybersecurity Models
Despite all the efforts of institutions and organizations across all business sectors and regions, the risk of cyber attacks is a significant issue that could have major strategic implications for the global economy.
McKinsey & Company prepared a report, “Risk and Responsibility in a Hyperconnected World,” with the cooperation of the World Economic Forum as a joint research effort to develop a fact-based view of cyber risks and to assess their economic and strategic implications.
They interviewed executives and reviewed data from more than 200 enterprises, and their main finding was that, despite years of effort and tens of billions of dollars spent annually, the global economy is still not sufficiently protected against cyber attacks, and the risks are growing worse.
They further concluded that the risk of cyber attacks could materially slow the pace of technology and business innovation with as much as $3 trillion in aggregate impact.
While the major transformational advancements of big data and cloud computing are expected to add $10 trillion to the global economy, the potential drag on these technologies will continue to originate from the increasing volume and complexity of cyber attacks.
Also, the introduction of big data offers a vast new opportunity for security breaches, so each of these estimates is subject to major revision; our fear is that losses will increase on a global scale while the anticipated revenue from the new transformational technologies of big data and cloud computing will fall below estimates.
The McKinsey & Company report stated, “The defenders are losing ground to the attackers. Nearly 60% of technology executives said that they cannot keep up with attackers’ increasing sophistication.”
In short, current models of cybersecurity protection across so many business sectors are simply becoming less effective in protecting institutions from cyber attacks. As a result, we need further thought and analysis on building very different cybersecurity operating models.
Current models are very IT-centric, and their complexity deters CEOs and other C-level administrators from more active participation. Therefore, new cybersecurity models should be designed to engage senior business leaders by moving beyond a technology-centric view to treat these breaches as strategic business risks.
The CEOs of the past focused only on “revenue centers” and their quarterly returns. Now that cyber attacks are capable of stealing intellectual property and totally devastating a business organization, boards of directors’ fiduciary responsibilities have resulted in a series of “wake-up calls” to CEOs for full engagement in developing effective cybersecurity programs.
Many boards of directors now expect quarterly progress reports and are holding the CEOs and leading C-level administrators responsible for the development of more effective programs.
In addition to the past nonengagement of senior business executives, the shortcoming of the IT-centric model was its “reactive” posture of audit and compliance; at best, these fragmented approaches simply were not designed to anticipate the increasing sophistication of cyber attacks.
The Deloitte Center for Financial Services offers important suggestions for model development in their report “Transforming Cybersecurity: New Approaches for an Evolving Threat Landscape.”
One suggestion is to enhance security through a “defense-in-depth” strategy that involves a number of mutually reinforcing security layers both to provide redundancy and potentially slow down the progression of attacks in progress, if not prevent them.
The improvement of a cybersecurity model is a three-stage process that includes (1) secure, (2) vigilant, and (3) resilient.
By (1) secure, the focus is on enhanced risk-prioritized controls to protect against known and emerging threats in compliance with industry standards and regulations.
(2) Vigilant is an emphasis on the detection of violations and anomalies through more effective situational awareness across the environment, which implies intelligence activities not only limited to the collection of raw data on known threat indicators but also through the engagement of direct human involvement.
(3) Resilient is the establishment of programs with the ability to rapidly return to normal operations and repair damage done to the business by a cybersecurity breach.
Thus, a well-rounded cybersecurity program model is based on the three components of secure, vigilant, and resilient. However, this model requires actionable threat intelligence premised on experience-based learning and situational awareness.
The final requirement is to model the cybersecurity program with a strategic organizational approach that includes top-level executive sponsorship, a dedicated threat-management team, a renewed focus on analytics rather than automation alone, and a strong emphasis on external collaboration, from the ISACs and FS-ISAC to other important intelligence sources.
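The secure/vigilant/resilient model above can be treated as a program checklist. The sketch below maps each component to example capabilities and reports what an organization still lacks; the capability names are illustrative assumptions, not drawn from the Deloitte report.

```python
# Hypothetical checklist for the secure/vigilant/resilient model.
# Capability names are illustrative examples only.
PROGRAM_MODEL = {
    "secure":    ["risk-prioritized controls", "standards compliance"],
    "vigilant":  ["threat-indicator feeds", "human analyst review"],
    "resilient": ["incident response plan", "recovery playbooks"],
}

def coverage_gaps(implemented):
    """Return, per component, the example capabilities not yet in place."""
    return {component: [c for c in needed if c not in implemented]
            for component, needed in PROGRAM_MODEL.items()}

# An organization with strong controls but weak detection and recovery:
gaps = coverage_gaps({"risk-prioritized controls", "incident response plan"})
```

A report like this makes the chapter's point tangible: a program that is strong on “secure” but empty on “vigilant” is exactly the reactive, compliance-only posture the model is meant to replace.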
WIKILEAKS AND WHISTLEBLOWERS
The internet allows all sorts of operatives and opportunists to get access to treasure troves of data that would have been unimaginable back in the days of Watergate, when actual burglars had to break into a brick-and-mortar hotel to lift a few measly documents. These days, millions of pages of classified records can be liberated with a little stealth and skill.
This means that it’s easier than ever for legitimate government agencies (or of course jack-booted thugs, depending on your perspective) to obtain all kinds of information from voice and written communications surveillance.
This type of intelligence gathering traditionally falls into the realm of what is known as signals intelligence, or SIGINT. Our national intelligence agencies concerned with the capture and analysis of SIGINT have never had such an advantage, and they have risen to the task.
The flip side? What's good for the goose has turned out to be far better for the gander who wants to steal that SIGINT. This affects governments trying to protect information as well as businesses that are vulnerable to corporate and industrial espionage. Ultimately, “information wants to be free,” as the saying goes… but at what price?
THE SECURITY TRIAD
The guiding principle of the security field is that you need three factors to ensure a secure system.
Confidentiality: You simply must know that data stored on your system is protected against unintended or unauthorized access.
That certainty is immensely complex to implement since, for example, Chelsea Manning was authorized to access—but not to copy and share—files.
Integrity: The data's consistency, accuracy, and trustworthiness must be maintained over its entire life cycle, with contingencies for human error, server crashes, and viruses.
Availability: The best data in the world is useless if you can't consistently and reliably access it, no matter what technology you're using.
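The three factors can be illustrated with a toy record store that enforces one mechanism per factor: an access list for confidentiality, a checksum for integrity, and replicated copies for availability. This is a pedagogical sketch under those assumptions, not a real security API.

```python
import hashlib

# Toy record store illustrating the security triad.
# The mechanisms chosen here are illustrative examples only.
class Record:
    def __init__(self, data: bytes, authorized: set):
        self.authorized = authorized                    # confidentiality: who may read
        self.digest = hashlib.sha256(data).hexdigest()  # integrity: known-good checksum
        self.replicas = [data, data]                    # availability: redundant copies

    def read(self, user: str) -> bytes:
        if user not in self.authorized:                 # confidentiality check
            raise PermissionError(user)
        for copy in self.replicas:                      # availability: try each replica
            if hashlib.sha256(copy).hexdigest() == self.digest:  # integrity check
                return copy
        raise ValueError("no intact replica")

rec = Record(b"cable 1234", {"analyst"})
```

Note how the mechanisms interact: even if one replica is corrupted, the checksum lets the store fall back to an intact copy, so integrity and availability reinforce each other.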
WIKILEAKS AND SIMILAR SITES
In the last several years, mentions of WikiLeaks and other associated websites and individuals have become more and more prevalent in media and conversations about security. But some people may still be a bit fuzzy on just who they are and what they do.
What Are They? WikiLeaks (as well as its wiser, more mature, and, from a policy perspective, more lastingly impactful older brother, Cryptome, and the thousands of similar sites that have sprung up around the world from time to time) provides varying levels of anonymity to those willing to disclose to the public information or data that they feel is of interest.
This information can range from government documents (such as intelligence reports, diplomatic communiqués, program outlines, and planning descriptions), to insider stock trading records.
Other examples include logs of computer network breaches, inside corporate policy documents not intended for public consumption (for example, internal pricing or policies on pharmaceutical distribution), customer records, naked photographs of celebrities, and anything else considered interesting or titillating.
Why Do They Exist? Reporters, muckrakers, short-sellers, investigators, opposition researchers, and suspicious spouses have always looked to insiders for these kinds of disclosures. The internet has simply made such disclosures easier to find.
What's Useful About Them? It can be argued quite well, and I do argue it, that our founding fathers had a hearty and healthy distrust of government, and they empowered the people to foment this distrust regularly through a vigorous and free press.
If you believe that assertion, then you simply must believe that anything that empowers such a free press is, by definition, “useful.” Regardless of your position on Edward Snowden, WikiLeaks, Chelsea Manning, or Daniel Ellsberg, having them be part of public debate and discourse is ultimately better than if they were not.
What’s the Downside? It is my personal belief that the kind of leaks inspired by Julian Assange—and committed by Chelsea Manning and especially by Edward Snowden—ultimately do more harm than good.
When leaking is relatively easy (provided of course you have the access and the know-how), the kinds of thought and agonizing put into “what to leak” and “whether to leak it” exhibited by an Ellsberg or a Russo give way to the immediacy and the instant global celebrity attendant with the act of leaking, as we saw with the way Snowden gathered and disseminated some potentially deadly intelligence.
In summary, the increasing risks of cybersecurity attacks and the growing sophistication of these breaches have now reached a point where business executives are acknowledging that their defenses are not keeping pace with those attacking their companies.
The costs of these attacks and the need for cyber insurance have reached a level where security breaches are no longer just a problem for IT departments; these cyber breaches have become a major strategic business risk.
As such, there will be a need for cross-functional teams composed of the CEO and other C-level administrators, including the chief information officer, chief information security officer, chief operating officer, risk manager, compliance officer, and corporate counsel, to develop actionable programs that go beyond the “reactive” and audit-compliance aspects of the more traditional information security programs.
The new cybersecurity and IT models must be guided by a new enriched business-driven risk management approach.
The costs of cyber attacks today are so serious that they are threatening the very sustainability of corporations throughout the world.
In addition to the cyber attacks threatening the sustainability of our corporations and business community, other private and public entities are also being attacked, and their capability to withstand such serious cyber attacks is even less ensured than that of the corporate community.
Hospitals, healthcare facilities, schools, and universities, as well as most municipal and local governmental agencies, simply do not have the personnel or capabilities to withstand the sophisticated level of attacks that could be directed at them, should they become targeted for such security attacks.
Similarly, most states and many federal government agencies have minimal ability to cope with the number of attacks that could be directed at them over extended periods.
While our nation’s military and intelligence community have developed both programs and personnel with new skills to defend against the enormous range of cyber attacks, the sheer number of daily attacks is coming perilously close to overwhelming their defensive capabilities.
Our nation cannot afford for these important and critical agencies to confront security attacks that could potentially result in their loss of sustainable operational capability.