Cyber Security Framework (2019)


This tutorial explores the transformational changes and the cybersecurity framework that will impact the entire field of information assurance and computer security as a result of five major trends throughout the world. These five trends are the following:

  1. Virtualization
  2. Social media
  3. Cloud computing
  4. Internet of Things (IoT)
  5. Big data

 

Each of these trends provides clear enhancements and cost-effective strategies for improving business operations and revenue streams for corporations.

 

However, collectively, these trends present a transformational challenge to computer security professionals, because the field has never before confronted such a fundamental change: providing security for data that is being created at exabyte volumes and velocities.

 

The entire data and computer industry is confronting a change at a level few are prepared to address.

 

The increasing number of breaches throughout our global community of businesses, government, and citizens has occurred within a threat landscape of attack mechanisms far beyond the current computer security defense capabilities.

 

The potential volume of new data in both structured and unstructured formats will introduce new threat attacks and will deeply impact corporations, governments, and military institutions throughout the world.

 

The cost of securing information and the incredible size of databases will increase in both financial terms as well as in risk and vulnerability.

 

New skill sets and the training and education of computer security and data professionals will be required to prepare for the massive changes that will impact virtually all industries, governments, and nations.

 

Threat Landscape


The increasing number of breaches occurring globally as reported by Symantec, FireEye, and Verizon is most alarming as it represents a significant threat to all nations.

 

The loss of intellectual property and damage to information systems is a cost that is causing a great deal of alarm both to the executive suites of major corporations and to governmental leaders throughout the world.

 

The increasing number of breaches is forcing organizations to redirect limited resources from new development projects to shoring up their cybersecurity defense programs.

 

The breaches that have occurred over the years have evolved and increased as a result of the numerous attack tools and exploit techniques that are all too easily available for free on the Internet or for sale by cyber-criminals and hacktivists.

 

Despite the thousands of different computer and network attacks that have been developed and used since the very first computer attack tools were identified in 1981, we believe that analysis of the threat landscape provides an organizational framework of great value.

 

Steve Piper’s Definitive Guide to Next Generation Threat Protection is an excellent resource available from the CyberEdge Group, and we recommend building on his framework as it will be extremely useful in analyzing vulnerabilities and developing defense strategies against breaches.

 

Worm

  1. A stand-alone malware program that replicates itself
  2. Harms networks by consuming bandwidth
  3. A lateral attack vector that can exfiltrate data

Trojan

  1. Typically masquerades as a helpful software application
  2. Can be initiated by spam mail, social media, or a game application

Computer virus

Malicious code that attaches itself to a program or file, enabling it to spread from one computer to another, leaving infections as it travels.

Spyware

Covertly gathers user information without the user’s knowledge, usually for advertising purposes; such software is commonly called “adware.”

Botnet

A collection of compromised Internet-connected computers on which malware is running, controlled through command-and-control servers. Attackers can launch distributed denial-of-service (DDoS) attacks using these botnets.

 

Social engineering attacks


An example is phishing, in which the purpose is to obtain usernames, passwords, credit card information, and social security information.

After clicking on a (seemingly innocent) hyperlink, the user is directed to enter personal details on a fake website that looks almost identical to a legitimate website.
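To make the mechanism concrete, here is a minimal sketch (not a real anti-phishing control) of one heuristic a defender might apply: flagging hostnames that look deceptively similar to trusted domains. The trusted-domain list and the similarity threshold are illustrative assumptions, not part of the original text.

```python
# A minimal sketch, assuming a hypothetical allow-list of trusted domains:
# flag hostnames that look deceptively similar to a trusted domain.
from difflib import SequenceMatcher
from urllib.parse import urlparse

TRUSTED_DOMAINS = ["paypal.com", "bankofamerica.com", "microsoft.com"]  # assumed allow-list

def looks_like_phish(url: str, threshold: float = 0.8) -> bool:
    host = (urlparse(url).hostname or "").lower()
    if host.startswith("www."):
        host = host[4:]
    for legit in TRUSTED_DOMAINS:
        similar = SequenceMatcher(None, host, legit).ratio() >= threshold
        exact = host == legit or host.endswith("." + legit)
        if similar and not exact:        # near-match but not the real domain
            return True
    return False

print(looks_like_phish("http://paypa1.com/login"))    # True  (lookalike domain)
print(looks_like_phish("https://paypal.com/login"))   # False (legitimate domain)
```

Real phishing defenses combine many such signals (reputation feeds, certificate checks, user training); this only illustrates the lookalike-domain idea.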

 

Spear phishing

Targets a specific person within an organization.

Whaling

Is directed specifically toward senior executives and other high-profile targets.

 

Baiting

A criminal casually and purposefully drops a USB thumb drive or CD-ROM in a parking lot or cyber café. The drive is prominently labeled with words such as “Executive Compensation” or “Company Confidential” to pique the interest of whoever finds it. When the victim inserts the media into a computer, it installs the malware.

 

Buffer Overflow and Structured Query Language Injection

 

Buffer overflow

The hacker writes more data into a memory buffer than the buffer is designed to hold. Some of the data spill into adjacent memory, causing the desktop or web-based application to execute arbitrary code with escalated privileges or to crash.

 

SQL injection

Attacks databases through a website or web-based application. The attacker submits SQL statements into a web form in an attempt to get the web application to pass the rogue SQL command to the database.

 

A successful SQL injection attack can reveal database content such as credit card numbers, social security numbers, and passwords.
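The sketch below illustrates the mechanism just described using an in-memory SQLite table as a stand-in database: when attacker input is concatenated into the SQL text, an injected OR clause matches every row, while a parameterized query treats the same input as plain data.

```python
# Minimal sketch of SQL injection versus a parameterized query.
# The table, credentials, and attacker string are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "' OR '1'='1"

# Vulnerable: attacker input becomes part of the SQL text itself.
vulnerable = f"SELECT * FROM users WHERE username = '{attacker_input}'"
print(conn.execute(vulnerable).fetchall())            # -> [('alice', 's3cret')]

# Safe: the driver passes the value as data, never as SQL.
safe = "SELECT * FROM users WHERE username = ?"
print(conn.execute(safe, (attacker_input,)).fetchall())  # -> []
```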

 

Next-Generation Threats


Polymorphic threats

A cyber attack such as a virus, worm, spyware, or Trojan that constantly changes (morphs), making it impossible to detect using signature-based defenses. Vendors who manufacture signature-based security products must constantly create and distribute new threat signatures.

Blended threats

A cyber attack that combines elements of multiple types of malware and usually employs multiple attack vectors (varying paths and targets of attack) to increase the severity of the damage. Examples include Nimda, Code Red, and Conficker.

 

Zero-Day Attack

A zero-day threat is a cyber attack that exploits an application or operating system vulnerability that is not yet publicly known, so named because the attack is launched on or before “day zero” of public awareness of the vulnerability.

 

Advanced Persistent Threats (APTs)

Sophisticated network attacks in which an unauthorized person gains access to a network and stays undetected for a long period of time. The intention of the APT is to exfiltrate data rather than cause damage.

 

The APT attack process is as follows:

  1. Stage 1—Initial intrusion through system exploitation.
  2. Stage 2—Malware is installed on the compromised system.
  3. Stage 3—Outbound connection initiated.
  4. Stage 4—Attack spreads laterally.
  5. Stage 5—Compromised data are extracted via tunneling and encryption.
  6. Stage 6—Attacker covers their tracks—remains undetected.

 

Eric Cole describes APT attacks as being targeted, data-focused, and seeking high-valued information and intellectual property from the victim organization being probed.

 

If the APT attack is successful, the damage to the organization will be very significant. Cole characterizes the APT as a nonstop attack against which signature analysis is ineffective.

 

Attackers, once they obtain access, will not simply get in and then leave; they want long-term access and will remain as long as possible. Researchers have reported a typical dwell time of 243 days before an attack is discovered, with some APT attacks lasting as long as four years before discovery.

 

Cole also reports that the APT attack is not the work of an individual or a small hacker cell but of a well-organized and highly structured organization with a detailed and sophisticated attack protocol and methodology.

 

Cole also indicates that one of the most frightening features of the APT is that it turns our biggest strength into our biggest weakness.

 

Encryption was designed to protect critical information and keep attackers away from it; instead, the attacker uses encryption to establish an outbound tunnel from the targeted victim’s organization to the attacker’s site and exfiltrates data in encrypted form virtually undetected, as most security devices are not capable of reading encrypted packets.
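Because the payload of such a tunnel cannot be inspected, defenders often fall back on metadata such as outbound traffic volume. The sketch below is a minimal illustration of that idea; the flow records, per-host baselines, and the 10x threshold are all assumed values, not a prescribed detection method.

```python
# Minimal sketch, assuming summarized flow records and per-host baselines:
# flag hosts whose outbound byte count is far above their own history.
from collections import defaultdict

# (source_host, outbound_bytes) pairs, e.g. summarized from one day of flow logs
flows = [("10.0.0.5", 2_000_000), ("10.0.0.7", 90_000_000), ("10.0.0.9", 1_500_000)]

baseline = {"10.0.0.5": 1_800_000, "10.0.0.7": 2_100_000, "10.0.0.9": 1_400_000}

def suspicious_hosts(flows, baseline, multiplier=10):
    totals = defaultdict(int)
    for host, sent in flows:
        totals[host] += sent
    return [h for h, total in totals.items()
            if total > multiplier * baseline.get(h, float("inf"))]

print(suspicious_hosts(flows, baseline))   # -> ['10.0.0.7']
```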

 

Additional new threats such as CryptoLocker and other ransomware allow the cyber-criminal to encrypt a victim’s files and block access to them unless the victim pays an extortion fee to have the files decrypted and regain access.

 

Donna Leinwand Leger reports that small groups of anonymous hackers once went after individual victims but have now organized into crime syndicates that launch massive attacks against entire companies.

 

Also, computer threat researchers at Dell SecureWorks estimated that the CryptoLocker virus struck over 250,000 computers in its first 100 days.

 

The virus is distributed through The Onion Router (Tor) network and reaches the victim via an infected e-mail that appears to come from the local police or the Federal Bureau of Investigation, from a package delivery service such as FedEx or UPS, or in PDF attachments.

 

Once the victim’s computer is infected, a pop-up screen appears with instructions to pay the ransom through an anonymous payment system such as Ukash, PaySafe, MoneyPak, or Bitcoin.

 

In some cases, the pop-up screen has a clock running, notifying the victim that the ransom price will increase unless payment is made within so many hours. CryptoLocker is one of the few mainstream attacks for which security companies do not have a method of decrypting the affected files.

 

Kaspersky Lab in North America reported no effective cure for the CryptoLocker virus, at least at the time this tutorial was written. The range of victims includes not only individuals and companies but also police departments. We anticipate that any organization with data may be targeted.

 

Attacker’s Need for Information


Irrespective of the type of computer attack or exploit techniques that an attacker would plan to use, the one item absolutely necessary for the attacker is information.

 

The source of information to the attacker would be the servers at the targeted victim’s organization. To acquire this information, the attacker needs an Internet protocol (IP) address, and since ports are the entry point into a computer system, the attacker will be looking for open ports.

 

Ultimately, for an attacker to compromise a system, there must be a vulnerability present on the system, and the attacker will attempt to discover that vulnerability.

 

To acquire the IP information, the attacker will use a Whois search to find the name servers for the domain. Once the name servers are identified, the attacker will use Nslookup to identify the IP.

 

The Nslookup will identify the organization’s IP address, and if it is a U.S. address, the American Registry for Internet Numbers (ARIN) will provide the address range assigned to the target.

 

Once the attacker knows the IP range, the attacker will scan the range to discover visible IP addresses and open ports, and this process can be accomplished with tools such as Nmap and Zenmap, both software tools used as security scanners to discover hosts or services on a computer network.
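The sketch below shows, in a few lines, the kind of name resolution and port probing that tools such as Nmap automate and greatly extend. The target name and port list are placeholders; only scan systems you are explicitly authorized to test.

```python
# Minimal sketch of the reconnaissance steps described above: resolve a name
# to an IP, then attempt TCP connections on a few common ports.
# "example.com" and the port list are placeholders for illustration only.
import socket

target = "example.com"
ports = [22, 25, 80, 443, 3306]

ip = socket.gethostbyname(target)   # roughly what an nslookup does
print(f"{target} resolves to {ip}")

for port in ports:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        # connect_ex returns 0 when the TCP handshake succeeds (port open)
        state = "open" if s.connect_ex((ip, port)) == 0 else "closed/filtered"
        print(f"  port {port}: {state}")
```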

 

The next step in an attack on a targeted organization is to locate vulnerabilities, and the attacker will use a vulnerability scanner such as OpenVAS to identify vulnerabilities or exposures.

 

The next step the attacker will implement is to use a tool such as Core Impact, as this tool will actually find system vulnerabilities and, if vulnerable, will exploit the service and provide the attacker access to the system.

 

Skoudis and Liston note that most attacks follow a general five-phase approach, which includes reconnaissance, scanning, gaining access, maintaining access, and covering the tracks of the attack. They outline the process as follows:

 

Typical Phases of a Computer Attack

  1. Phase 1—Reconnaissance
  2. Phase 2—Scanning
  3. Phase 3A—Gaining Access at the Operating System and Application Level
  4. Phase 3B—Gaining Access at the Network Level
  5. Phase 3C—Gaining Access and Denial-of-Service Attacks
  6. Phase 4—Maintaining Access
  7. Phase 5—Covering Tracks and Hiding

 

The exceptional contribution of their book centers on the comprehensive description of each attack phase and the tools and techniques used during each stage of the attack.

 

Eric Cole considers APT attacks so significant and so transformational an attack on our traditional cybersecurity products, programs, and systems that he was moved to write his excellent book Advanced Persistent Threat on this subject, because the APT quite simply changed the rules as to how we secure our systems.

 

For example, over the years, worms and viruses adapted and changed, but the fundamental way they worked remained the same.

 

The APT is not merely software programmed to perform a certain function; it is a person, group, or nation-state, an organized adversary that will not give up until it obtains or exfiltrates the information or intellectual property it is seeking.

 

Therefore, no single product will protect your organization against an APT attack. Instead, it will be necessary to develop a strategy that implements a variety of solutions and that can adapt to future changes in the APT threat.

 

This new strategy must be more than the past approach of reactive security, and we now must have a proactive security approach that goes beyond the binary decision of allowed or denied.

 

Today, our cybersecurity environment operates within social media, cloud computing, bring your own device (BYOD), the machine-to-machine IoT (M2M-IoT), and big data, all areas in which different levels of trust and access will be required.

 

Therefore, access has to be based on overall risk and not simply static rules. The overarching reality is quite simple: whether you are an individual, small company, a major corporation, government organization, or a university, you will be targeted and you will be attacked.

 

Cyber Security Framework Trend 1:

Transformational Changes for Cybersecurity


The challenges confronting information assurance and cybersecurity have become greatly pronounced as a result of five major transformational changes in how data are produced, processed, collected, stored, and utilized. These five transformational changes are as follows:

  1. Virtualization
  2. Social media
  3. Cloud computing
  4. M2M-IoT
  5. Big data

 

These five major movements are creating both major advancements and increased productivity in the industries and governmental entities utilizing one or more of them.

 

While the corporate community embraces the increased revenue streams that each may produce, it will also experience increased information technology (IT) and computer security costs created by these transformational movements.

 

In addition to heightened data security problems creating a need for more skilled security personnel, there will also be an increased need for data analytics personnel.

 

The five transformational movements have an interesting relationship in terms of their interdependencies.

 

For example, the virtualization of computer server provisioning has created the need for cloud computing. The explosive growth in social media provided an enhanced need for virtualization and also created a need for cloud computing.

 

The availability of cloud computing as a public cloud, private cloud, community cloud, or hybrid cloud provides a menu of options with reduced cost structures for corporations or governments adopting one of these models.

 

Cloud computing also builds on the advances made in virtualization, and while there are cost savings in computer hardware, securing data in the cloud environment remains a significant challenge.

 

The IoT, which is based upon M2M integration of automatic data streams from one sensor to another (for example, from your home heating and cooling thermostat to your smartphone and other appliances), is representative of the enormous increase in data processing.

 

The IoT will include all forms of digital data, including voice, video, and text, and its growth is exponential. Since these data streams are processed through the Internet, they arrive largely as unstructured data that differ from the structured format of traditional SQL relational databases.

 

So this movement of IoT has created the need for big data and the introduction of Hadoop and NoSQL to process the phenomenal volume and velocity of these new data streams. Big data will also require new personnel in the data analytics field, as well as increased cybersecurity provisioning.

 

The cumulative interdependencies of these five transformational movements have resulted in major advancements for the entire computer industry. We will describe some of the emerging challenges and provide a brief overview of the contributions that each of the five movements has made to the overall computer industry.

 

Cyber Security Framework Trend 2:

Virtualization


Virtualization is best defined as a strategy that permits and enables the provisioning of multiple logical servers on one physical server.

 

In virtualization, you will always require a physical server, but by being able to manage this physical server through a logical process, one can consolidate applications and workloads on one physical server as opposed to requiring multiple physical servers.

 

For example, if your organization has 16 separate computer servers hosting critical infrastructures, the virtualization process would enable all 16 separate servers to be hosted on one physical server.

 

While this process is very cost effective in terms of reducing capital expenditures for multiple types of equipment, it does introduce a vulnerability: should a hardware failure occur on the physical server that contains all the virtual machines (VMs), every hosted workload is affected.

 

Another aspect of virtualization is the need for more memory since the increase in logical connections has increased the volume of data. Also, the number of software licenses may be increased since multiple applications are being delivered through one physical server.
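A rough sizing exercise makes the consolidation and memory points concrete. The per-VM figures, hypervisor overhead, and oversubscription ratio below are assumed numbers for illustration only, not recommendations.

```python
# Back-of-the-envelope sizing sketch for the 16-server consolidation example.
# All inputs are illustrative assumptions.
vm_count = 16
ram_per_vm_gb = 8           # assumed average memory per logical server
vcpu_per_vm = 2             # assumed vCPUs per logical server
hypervisor_overhead = 0.10  # assume ~10% of memory reserved for the hypervisor
vcpu_to_core_ratio = 4      # assume 4 vCPUs per physical core (oversubscription)

ram_needed_gb = vm_count * ram_per_vm_gb * (1 + hypervisor_overhead)
cores_needed = (vm_count * vcpu_per_vm) / vcpu_to_core_ratio

print(f"Host memory needed: ~{ram_needed_gb:.0f} GB")   # ~141 GB
print(f"Physical cores needed: ~{cores_needed:.0f}")    # ~8
```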

 

Virtualization really became mainstream in 2011–2012, despite its early appearance in 1999. Another advantage of virtualization is that it enables IT departments to confront one of their most difficult challenges: infrastructure sprawl, which consumes roughly 70% of the IT budget on maintenance while leaving few resources to focus on building new business innovations.

 

In essence, virtualization is the key technology that enables cloud computing, and both cloud computing and the new “software-defined” data centers are examples of IT assets that have been virtualized. Thus, virtualization, cloud computing, and big data stand in an integral, interdependent relationship.

 

Despite the recent emergence of virtualization, threats to the virtualized infrastructure have already occurred, and since virtualization now occupies such an important role in cloud computing, it is imperative to enhance our management of the security environment in our virtualized infrastructure. Ronald Krutz and Russell Dean Vines’ excellent book, Cloud Security:

 

A Comprehensive Guide to Secure Cloud Computing provides an outstanding framework to understand the security threats to the different types of virtualized environments.

 

Their listing of virtual threats emphasizes the range of vulnerabilities stemming from the fact that a vulnerability in one VM can be exploited to attack other VMs or the host system, since multiple virtual machines share the same physical hardware. Additional important virtual threats they describe are the following:

 

Shared Clipboard—this technology allows data to be transferred between VMs and the host, providing a means of moving data between malicious programs in virtual machines of different security realms.

 

Keystroke Logging—some virtual machine technologies enable the logging of keystrokes and screen updates to be passed across virtual terminals in the virtual machine, writing to host files and permitting the monitoring of encrypted terminal connections inside the virtual machine.

 

VM Monitoring from the Host—since all network packets coming from or going to a VM pass through the host, the host may be able to affect the virtual machine in any number of ways.

 

Virtual Machine Monitoring from Another VM—usually, virtual machines should not be able to directly access one another’s virtual disks on the host.

 

However, if the VM platform uses a virtual hub or switch to connect the VM to the host, then intruders may be able to use a hacker technique known as “ARP poisoning” to redirect packets going to or from the other VM for sniffing.

 

Virtual Machine Backdoors—a backdoor, covert communication channel between the guest and host could allow intruders to perform potentially dangerous operations.

 

Hypervisor Risks—the hypervisor is the part of the virtual machine that allows host resource sharing and enables VM/host isolation. Therefore, the ability of the hypervisor to provide the necessary isolation during an intentional attack determines how well the virtual machine can survive risk.

 

The hypervisor is susceptible to risk because it is a software program, and risk increases as the volume and complexity of application code increase. Rogue hypervisors and rootkits are capable of externally modifying the hypervisor and can create a covert channel to dump unauthorized code into the system.

 

In addition to identifying virtualization risks, Krutz and Vines also provide an extensive list of VM security recommendations and best practice security techniques, which include the following:

  1. Harden the host operating system
  2. Limit physical access to the host
  3. Use encrypted communications
  4. Disable background tasks
  5. Update and patch systems
  6. Enable perimeter defense on the VM
  7. Implement file integrity checks (see the sketch after this list)
  8. Maintain backups
  9. Harden the VM
  10. Harden the hypervisor
  11. Root secure the monitor
  12. Implement only one primary function per VM
  13. Firewall any additional VM ports
  14. Harden the host domain
  15. Use unique NICs for sensitive VMs
  16. Disconnect unused devices
  17. Secure VM remote access
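As a concrete illustration of recommendation 7, the sketch below records SHA-256 hashes of critical files and later detects any modification. The watched paths are placeholders, and a real deployment would also protect the baseline file itself and run the check on a schedule.

```python
# Minimal sketch of a file integrity check: hash watched files once to build
# a baseline, then compare on later runs. Paths are placeholder examples.
import hashlib, json, pathlib

WATCHED = ["/etc/passwd", "/etc/ssh/sshd_config"]   # placeholder file list
BASELINE = pathlib.Path("integrity_baseline.json")

def sha256(path):
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

def build_baseline():
    BASELINE.write_text(json.dumps({p: sha256(p) for p in WATCHED}, indent=2))

def check_baseline():
    baseline = json.loads(BASELINE.read_text())
    for path, expected in baseline.items():
        status = "OK" if sha256(path) == expected else "MODIFIED"
        print(f"{status:8} {path}")

# First run: build_baseline(); later runs: check_baseline()
```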

 

Clearly, virtualization is an enabling and transformational trend that has already impacted many industries, as well as the computer field itself. We can anticipate additional advancements in the virtualization infrastructure, and these will impact each of the five major trends we have identified.

 

Cyber Security Framework Trend 3:

Social Media


In today’s current environment, the number of people using and participating in social media is exploding at a level so intense that businesses and the corporate community are moving headlong into these environments.

 

Business organizations see an opportunity to market their products more effectively, especially given the enormous number of people who are so heavily engaged in social media.

 

Also, the low cost of marketing products or services over social media compared with the more expensive cost of traditional marketing media is another driving force behind the acceptance of social media by corporations and the business communities.

 

One of the major pillars supporting the emergence of social media has been the mobility of computing devices. Thus, mobile telephones, smartphones, and tablets have all provided a means for people to engage in social media wherever they are located.

 

The desktop computer and the laptop, once the primary tools of the individual at home or at work, are now being replaced by smartphones and tablets, and this allows easier and more frequent access to an increasing number of social media sites.

 

While this access has been welcomed by the individual and to a large degree by corporations and the business community, there are many aspects of social media that present a challenge to the security of data that reside in our corporations and businesses.

 

Mobile devices such as smartphones and tablets are increasingly being brought into the workplace, in many cases without the knowledge of the employer, and this has prompted concerns, especially when the individual uses the personal device to access the employer’s websites, databases, and other applications.

 

The concern for the organization, whether it is a business, a governmental, or a nongovernmental organization, centers on the possibility of the individual device introducing malicious code such as a virus or worm into the employer’s data systems. This, in turn, raises the BYOD question: what policies and programs should be developed to respond to this major trend?

 

Business organizations as well as universities, governmental entities, and virtually any organization that employs people will, at some point, have to consider the creation of policies for employees or those who bring their own devices to work.

 

Thus, the creation of a BYOD policy will have to entail not only a policy but also programs for informing and training employees in the safe use of their devices in the employer’s work environment.

 

Obviously, the first decision is whether to permit employees to use their personal devices with the organization’s business applications, data, and other internal digital information.

 

Clearly, some organizations that handle classified information, such as the military, federal law enforcement agencies, and the national laboratories, already have articulated policies in place precluding BYOD in designated areas.

 

Also, some businesses, financial institutions, and healthcare organizations may preclude their employees from bringing or using their personal devices due to stringent legal, regulatory, and compliance rules.

 

Those organizations that are able to consider authorization of their employees’ use of personal devices should develop a BYOD set of policies and programs.

 

Since the major concern of any organization will be on maintaining the security of their data, it will be imperative that such policies and programs are created not simply by top management but through the inclusion of the IT leadership, the legal department, and the human resources department.

 

The creation of such a policy will, by its specific intent, generate programs that will be implemented and will have to be monitored for approved employee usage. In addition, what policies will exist for violations of the approved personal device use?

 

Since businesses must address concerns related to secure access, malware prevention from third-party users, and exfiltration of their intellectual property, it is necessary and incumbent to establish policies that will secure the data of the organization from exploitation or modification.

 

Smartphones, which, in many cases, are equipped with near-field communication (NFC), allow one smartphone to share information with another NFC device and to very easily transfer payment information or photos and other contact information.

 

This is technology that hackers can use to gain access to an employee’s information and entire digital personality, including information as to the employer and the employer’s databases.

 

In addition to NFC technology, recent malware known as ransomware can encrypt an individual’s smartphone and prevent the user from using it unless a ransom is paid.

 

This could also impact the corporation if the employee passes data from the corporate databases. In this case, both the user of the Smartphone and their employer could be susceptible to extortion unless the money is paid to the cyber-criminal.

 

Also, since Smartphone users maintain photos on their device, this becomes another target for extortion, with the attacker threatening either to delete the photos or to post them on various public sites, causing the owner a loss of privacy.

 

One of the difficult issues that confront organizations in creating BYOD policies, whether these are focused on smartphones, tablets, or other devices, is related to the issue of privacy.

 

In essence, how do you maintain a balance between the need to protect your organization’s data and resources and respect for the individual user’s personal data on that same device, which may or may not be owned by the user?

 

In the event the employee visits sites that are blacklisted by the organization, what recourse is open to the human resources department? Indeed, how will this be monitored, and what recourse is open to the organization for the user’s noncompliance with the BYOD policy?

 

Additional issues that must be carefully considered are as follows:


  1. Will employees’ smartphones require some form of mobile security software?
  2. Will encryption be required?
  3. Will phones be containerized to separate the business from personal data?
  4. Will certain “blacklisted” applications be blocked from the user’s phone?
  5. Will monitoring be instituted? If so, by whom?
  6. Will file sync be authorized where documents are uploaded to the cloud? While a convenient application use for the individual, it adds significant vulnerability to the organization's database.
  7. Will e-mail encryption policies be implemented?
  8. Will certain Apps be permitted, and from what devices or operating systems?

 

Eric Cole, in discussing top security trends, reports that the exponential growth of smartphones, tablets, and other mobile devices has opened additional opportunities for cyber attacks as each has created vulnerable access points to networks.

 

This expanding use of social media contributes to cybersecurity vulnerabilities and expanded threats; in particular, when assessing smartphones, it is clear that at least 80% do not have appropriate mobile security in place.

 

If a laptop, a tablet, and a mobile phone all contain the same data, why does one require a fifteen-character password and another only a four-digit PIN?

 

Why does one have endpoint security and patching while the other device has nothing? The policy should be written for the sensitivity of the data, and any device that contains that information should have the same level of protection.

 

What Cole quite astutely points out is that security should be based on the data and not on the type of device.
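One way to express this data-centric principle is to key the required controls to the sensitivity of the data rather than to the device type, as in the minimal sketch below. The sensitivity tiers and control names are assumed examples, not a prescribed standard.

```python
# Minimal sketch of data-centric protection: required controls follow the
# sensitivity of the data, and any device holding that data must meet them.
# Tiers and control names are illustrative assumptions.
CONTROLS_BY_SENSITIVITY = {
    "public":       {"passcode"},
    "internal":     {"passcode", "screen_lock", "patching"},
    "confidential": {"strong_password", "screen_lock", "patching",
                     "encryption", "endpoint_security", "remote_wipe"},
}

def missing_controls(data_sensitivity, device_controls):
    required = CONTROLS_BY_SENSITIVITY[data_sensitivity]
    return sorted(required - set(device_controls))

# A phone and a laptop holding the same confidential data face the same bar.
phone = {"passcode", "screen_lock"}
print(missing_controls("confidential", phone))
# -> ['encryption', 'endpoint_security', 'patching', 'remote_wipe', 'strong_password']
```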

 

In analyzing APT attacks, it is the targeting of humans and the reconnaissance of social media information found on sites such as Facebook, LinkedIn, and others that allow APT attackers to become so successful in their operations.

 

An APT attacker would scan social media sites looking for a list of people who work at a target organization. They would also go to the organization's website and see who is listed on the webpage. Press releases, job vacancy sites, and other open source information are all used to obtain a list of employees.

 

Subcontractors would also be targeted as a potential access point. Once a list of employees is obtained, Google Alerts are set up on those individuals, tracking all postings and any information that is publicly available about those people.

 

Correlation analysis is done to identify managers and the overall structure of the organization. Once a threat actor knows a person’s job, interests, and co-workers, they begin to put together a plan.

 

In essence, the attacker has socially engineered a plan to attack a target organization on the basis of social media and mobility and, in the process, has benefited from numerous attack vectors, which must now be analyzed by cybersecurity professionals to neutralize those weak points and vulnerabilities.

 

Cyber Security Framework Trend 4:

Internet of Things


As a result of the transformational developments in virtualization, social media, and mobility, we now encounter machine-to-machine (M2M) connectivity, and we are entering a new era termed the Internet of Things.

 

The M2M movement made possible by Wi-Fi and sensors has enabled direct connectivity between machine and machine without human interface.

While the M2M movement began in the 1990s, it has expanded dramatically, particularly through connectivity via cellular networks, and projections now estimate that within the next five years there will be 25 billion to 50 billion connected devices, each providing a stream of data that will fuel the IoT era.

 

The cellular network is growing because data exchange from one device to another is accomplished wirelessly and on a mobile basis. The reason these devices are becoming Internet-connected is to improve the homeowner’s convenience and ability to use some devices more economically.

 

For example, the ability of a smartphone to receive data from the homeowner’s heating and cooling units allows the homeowner to raise or lower the thermostat remotely, which lowers utility bills, conserves the limited resources that produce this energy, and provides convenience. This same process can be applied to lighting and home security as well.
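In practice, this kind of M2M telemetry often amounts to a device periodically posting small readings to a cloud endpoint that a phone app reads. The sketch below is a minimal illustration; the URL, payload fields, and reading function are placeholders, not a real vendor API.

```python
# Minimal sketch of thermostat-style M2M telemetry: post a temperature reading
# as JSON to a placeholder cloud endpoint. URL and payload shape are assumed.
import json, time, urllib.request

ENDPOINT = "https://example.com/api/thermostat/readings"   # placeholder URL

def read_temperature_f():
    return 71.5   # stand-in for a real sensor read

def publish_reading():
    payload = json.dumps({
        "device_id": "thermostat-01",
        "temperature_f": read_temperature_f(),
        "timestamp": int(time.time()),
    }).encode()
    req = urllib.request.Request(
        ENDPOINT, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status

# On a real device this would run on a schedule, e.g. every few minutes:
# publish_reading()
```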

 

The growing applications of M2M are driving a shift in business models that now permit companies to sell not just products but also services. An example can be seen in companies that deal with commercial trucking operations.

 

Now, they can sell more than the truck tire; they can provide a service that permits them to dispatch their service vehicles to the truck when the truck tire wear reaches a critical level.

 

Another example is a manufacturing company, a produce shipping company, or a garden supply or florist operation, who can all install devices that not only track the location of the vehicle but also record the inside temperature to guard against spoilage.

 

There are other business sectors such as healthcare, security services, energy companies, construction, automotive, and transportation that are all in the process of connecting M2M devices and creating this incredible expansion of the IoT.

 

The Wall Street Journal reported on an application that even involved a smartphone-controlled Crock-Pot cooker, allowing the heat and cooking time to be adjusted from a remote site.

 

Ironically, the typical selling point of a Crock-Pot is already that it permits the unattended preparation of a meal; so is this M2M connectivity really representative of the type of device that will become an important part of the IoT, or is it simply an application that is more of a gimmick or marketing ploy?

 

A more serious application that has real benefits but also possible downsides is the incorporation of the Livestream video-sharing app into the Google Glass eyepiece.

 

This software application allows Google Glass wearers to share with another exactly what they are seeing and hearing simply by issuing the verbal command, “OK glass, start broadcasting.”

 

The application and use of this technology can be most useful to physicians, especially during a surgical operation, as it can provide incredibly focused instruction to interns and other physicians interested in the particular surgical intervention.

 

On the other hand, there are potential incursions on one’s privacy should you be the target of the particular video broadcast. There are even more serious potential situations that could involve broadcasting obscene, pornographic, or even sexual assaults via this medium.

 

Certainly, both Google as well as Livestream are concerned about potential abuses and should take steps to guard against violations of their licensed application.

 

The range of applications that are proliferating and creating this IoT continues to expand to the point that much of the data being processed is unstructured, creating the need for big data and new methods to store and process this IoT environment.

 

At the same time, the processing of these data as they reach the volume and velocity that billions of these devices are creating has also generated the need for both virtualization and cloud computing.

 

Cyber Security Framework Trend 5:

Cloud Computing


Cloud computing, while a new paradigm shift, originated from the time-sharing model of computing of the 1960s, when IBM developed a four-processor mainframe and software that permitted the time-sharing computing model.

 

The introduction of the personal computer led to the client-server computing model, which was an important facet of the eventual emergence of cloud computing.

 

The major event that really enabled cloud computing was the introduction of the virtualization computing model. These items, plus the addition of the Internet, high-speed networks, Wi-Fi, cellular models, and the smart chips enabling mobility, have all come together to spawn this new transformational change in computing.

 

The attractiveness of cloud computing to organizations, governmental agencies, small businesses, and individuals centers on the fact that the cost of one’s computing is metered, and you pay only for what you actually use. This means one can go to a cloud provider and rely on the cloud provider’s computing infrastructure.

 

The cloud providers already possess the computers, servers, network bandwidth, Internet and network access, storage capability, the facility with cooling and heating, and other related items that permit a service contract that enables the user to acquire computing services without any capital investment of equipment, buildings, heating and cooling, and personnel to operate their computing needs.

 

While there are many excellent attributes to cloud computing, there are also some very negative aspects that must also be reviewed and assessed by those interested in cloud computing.

 

Perhaps the most appropriate manner of presenting our discussion of cloud computing is to present the definition of cloud computing and related cloud models as defined by the U.S. National Institute of Standards and Technology (NIST):

 

As defined by NIST, cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

 

Cloud computing services can be described by their shared characteristics, by the computing resources provided as a service, and by the method of deployment.

 

The generally agreed classification scheme for cloud computing is termed the SPI Framework, which means the Software–Platform–Infrastructure model.

 

This represents the three major services provided through the cloud: SaaS, or Software as a Service; PaaS, referring to Platform as a Service; and IaaS, which is Infrastructure as a Service. The three cloud service delivery models as defined by NIST are as follows:

 

Service Models


Software as a Service (SaaS): The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through either a thin client interface, such as a web browser (e.g., web-based email), or a program interface.

 

The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.

 

Platform as a Service (PaaS): The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages, libraries, services, and tools supported by the provider.

 

The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly configuration settings for the application-hosting environment.

 

Infrastructure as a Service (IaaS): The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications; and possibly limited control of select networking components (e.g., host firewalls).
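A common way to summarize the three NIST service models is by which layers the provider manages versus the consumer. The layer breakdown below is a widely used simplification, not an exhaustive NIST taxonomy.

```python
# Simple summary of the service models above: who manages which layer under
# SaaS, PaaS, and IaaS. The layer list is an assumed simplification.
LAYERS = ["application", "runtime/middleware", "operating system",
          "virtualization", "servers/storage/network"]

MANAGED_BY_PROVIDER = {
    "SaaS": set(LAYERS),                                   # consumer manages essentially nothing
    "PaaS": set(LAYERS) - {"application"},                 # consumer manages the application
    "IaaS": {"virtualization", "servers/storage/network"}, # consumer manages OS and above
}

for model, provider_layers in MANAGED_BY_PROVIDER.items():
    consumer = [layer for layer in LAYERS if layer not in provider_layers]
    print(f"{model}: consumer manages {consumer or ['configuration only']}")
```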

 

Cloud computing offers four major types of cloud models, termed private cloud, public cloud, community cloud, and hybrid cloud. Each of these deployment models provides a range of services and capabilities that have different cost structures as well as different specifications depending upon the needs of the organization seeking a cloud service contract.

 

For example, if security was an issue for the customer, the cloud model of choice would be a private cloud, whereas a customer requiring less security could select a public cloud. The four cloud models as defined by the NIST are as follows:

 

Cloud Models


Private Cloud: The cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units). It may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on or off premises.

 

Public Cloud: The cloud infrastructure is provisioned for open use by the general public. It may be owned, managed, and operated by a business, academic, or government organization, or some combination of them. It exists on the premises of the cloud provider.

 

Community Cloud: The cloud infrastructure is provisioned for exclusive use by a specific community of consumers from organizations that have shared concerns (e.g., mission, security requirements, policy and compliance considerations).

 

It may be owned, managed, and operated by one or more of the organizations in the community, a third party, or some combination of them, and it may exist on or off premises.

 

Hybrid Cloud: The cloud infrastructure is a composition of two or more distinct cloud infrastructures (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).

 

There are a number of benefits provided by the cloud environment, irrespective of which cloud model is selected. Typically, these benefits permit an organization the ability to rapidly deploy business and research applications in a cost-effective manner.

 

Also, the cloud computing model relieves the customer from the concerns about updating servers or having to install the latest software patches, and it enables the customer to acquire increased or additional services on an as-needed basis.

 

The cloud model also permits customers to focus on the innovation of their business computing solutions instead of dealing with the operation and maintenance of their computing infrastructure.

 

In general, the cloud paradigm provides cost savings since the customer is only incrementally paying for the computing services metered or used, and this avoids the large capital investment in equipment and personnel were they to create their own computing infrastructure.

 

While cloud computing offers several attractive reasons for its consideration, there are also some concerns to weigh before deciding on a cloud model, or for that matter, before deciding whether it is appropriate for your organization to move into the cloud paradigm at all.

 

Clearly, the issue of security is a major concern, as well as where your data are being housed and located. Each of these issues might be addressed in a service level agreement with the cloud provider.

 

Perhaps one of the most serious drawbacks centers on the fact that most cloud providers’ standard service level agreements state that the cloud provider takes control of, and has potential ownership of, the information, yet the customer organization still has full liability if proper security is not managed.

 

Since cloud providers seek to retain the customer’s business, the control of the customer’s information is a way to deter the customer from changing cloud providers.

 

In addition to the issue of ownership of the information, liability is another major issue to be aware of or resolve. For example, in many cloud agreements, if the cloud provider does not provide proper security and there is a breach of critical information or regulatory data, the customer is liable and not the cloud provider.

 

Any organization considering a business relationship with a cloud provider should be certain that contractual language specifies and requires the cloud provider to adhere to the legal requirements of public privacy laws and other regulatory issues, including the following:

  1. The Health Insurance Portability and Accountability Act (HIPAA)
  2. The Fair and Accurate Credit Transactions Act of 2003
  3. The Gramm-Leach-Bliley Act of 1999
  4. The Federal Information Security Management Act (FISMA)
  5. The Payment Card Industry Data Security Standard (PCI DSS)
  6. The Red Flags Rule, a mandate by the Federal Trade Commission requiring institutions to develop identity theft prevention programs

 

The contract should also include patent assurance that the cloud provider is the rightful and legal owner of the technologies being provided and that it will indemnify the customer against any patent infringement litigation.

 

Krutz and Vines also suggest that service level agreements be created that acknowledge mutual commitments for both the customer and the cloud provider and that the cloud provider should have a clear understanding of the customer’s expectations and concerns. The following elements are typically included in a service level agreement:

  1. Intellectual property protection
  2. Application security
  3. Termination
  4. Compliance requirements
  5. Customer responsibilities
  6. Performance tracking
  7. Problem resolution
  8. Lead time for implementation

 

Now that we have provided an overview of the cloud computing paradigm, we shall now examine several of the issues that the U.S. Department of Defense (DoD) addressed as it moved its entire information infrastructure into a cloud environment.

 

The scope of any organization moving into a cloud environment entails a number of challenges and the need for a very well-planned program; however, the enormous challenge that confronted the DoD was both unique and without precedent.

 

The DoD had to address the same issue that most organizations confront, namely, their concern about the security of the cloud model. The DoD has a need for world-class security as a result of their military and intelligence missions, as well as its dependence on operations within cyberspace.

 

An example of the DoD’s reliance on cyberspace is documented by the 15,000 networks and 7 million computing devices across hundreds of installations in dozens of countries throughout the world.

 

DoD networks are probed millions of times every day, and successful penetrations have resulted in the loss of thousands of files and important information on our weapons systems.

 

The number of foreign nation attacks and efforts to exploit our DoD unclassified and classified networks has increased not only in number but also in sophistication. Equally of concern are the attacks by nonstate actors who also seek to penetrate and disrupt DoD networks.

 

The global scope of DoD networks offers adversaries numerous targets to attack, and as a result, the DoD must defend against not only external threat actors but also internal threats.

 

In addition, since a great deal of software and hardware products are manufactured and assembled in foreign countries, the DoD must also develop strategies for managing these risks at the design, manufacture, and service distribution points, as they can represent supply chain vulnerabilities and threats to the operational ability of the DoD.

 

In view of these challenges, it was a bold and decisive move on the part of the Joint Chiefs of Staff to authorize the chief information officer of the DoD to develop a cloud computing strategy. This action was designed to re-engineer the DoD information infrastructure and improve its mission effectiveness in cybersecurity.

 

The result of this transformation was to create the Joint Information Environment, known today as the JIE. The DoD cloud computing strategy focused on moving from a duplicative, cumbersome, and expensive set of application silos to a more robust, secure, and cost-effective joint service environment that is capable of fully responding to the changing mission needs of the DoD.

 

The DoD identified a four-step process that guided the movement into the cloud computing infrastructure.


Step 1: Foster Adoption of Cloud Computing

  • Establish a joint governance structure to drive the transition to the DoD Enterprise Cloud environment
  • Adopt an Enterprise First approach that will accomplish a cultural shift to facilitate the adoption and evolution of cloud computing
  • Reform DoD IT financial, acquisition, and contracting policy and practices that will improve agility and reduce costs

 

  • Implement a cloud computing outreach and awareness campaign to gather input from the major stakeholders, expand the base of consumers and providers, and increase the visibility of available cloud services throughout the Federal Government

 

Step 2: Optimize Data Center Consolidation

  • Consolidate and virtualize Legacy applications and data

 

Step 3: Establish the DoD Enterprise Cloud Infrastructure

  • Incorporate core cloud infrastructure into data center consolidation
  • Optimize the delivery of multi-provider cloud services through a Cloud Service Broker
  • Drive continuous service innovation using Agile, a product-focused, iterative development model
  • Drive secure information sharing by exploiting cloud innovation

 

Step 4: Deliver Cloud Services


  • Continue to deliver DoD Enterprise cloud services
  • Leverage externally provided cloud services, i.e., commercial services, to expand cloud offerings beyond those offered within the Department

 

The specific objectives the DoD sought to achieve by moving into the cloud computing infrastructure were designated as follows:

  • Reduced Costs/Increased Operational Efficiencies
  • Consolidating systems, which reduces the physical and energy footprint, operational, maintenance, and management resources, and the number of facilities

 

  • Using a pay-as-you-go pricing model for services on demand rather than procuring entire solutions
  • Leveraging existing DoD cloud computing development environments to reduce software development costs
  • Increased Mission Effectiveness
  • Enabling access to critical information

 

  • Leveraging the high availability and redundancy of cloud computing architectures to improve options for disaster recovery and continuity of operations
  • Enhancing Warfighter mobility and productivity through device and location independence, and provision of on-demand, yet secure, global access to enterprise services

 

  • Increasing, or scaling up, the number of supported users as mission needs surge, optimizing capabilities for the joint force
  • Enabling data to be captured, stored, and published almost simultaneously, decreasing the time necessary to make data available to users
  • Enabling the ability to create and exploit massively large data sets, search large data sets quickly, and combine data sets from different systems to allow cross-system data search and exploitation

 


 

Cybersecurity

The DoD cloud strategy also addressed cybersecurity through the following measures:

  1. Leveraging efforts such as FedRAMP that help standardize and streamline Certification and Accreditation (C&A) processes for commercial and Federal Government cloud providers, allowing approved IT capabilities to be more readily shared across the Department
  2. Moving from a framework of traditional system-focused C&A with periodic assessments to continual reauthorization through the implementation of continuous monitoring
  3. Moving to standardized and simplified identity and access management (IdAM)
  4. Reducing network seams through network and data center consolidation and implementation of a standardized infrastructure

 

The DoD cloud environment had to support Legacy applications as well as develop new applications. The cloud environment also is required to be closely aligned with the initiatives of the intelligence community and support information sharing with the Joint Worldwide Intelligence Communication System (JWICS).

 

The DoD chief information officer will lead the Unclassified but Sensitive Internet Protocol Router Network (NIPRNet) and Secret Internet Protocol Router Network (SIPRNet) efforts.

 

The Director of National Intelligence will designate a chief information officer to lead the Top Secret/Sensitive Compartmented Information (TS/SCI) effort, and both the DoD and the intelligence community will be required to evaluate data and information sensitivity as low risk, moderate risk, or high risk.

 

Cloud model deployment will incorporate data on the basis of risk in which some commercial cloud providers will manage low-risk and, in selected cases, moderate-risk data and information.

 

High-risk data, which if breached would result in having a severe or catastrophic effect on organizational operations, organizational assets, or individuals, will not be placed within a commercial cloud provider that is generally available to the public and will remain within the DoD.
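The sketch below restates this placement rule as simple decision logic. The data set names are invented examples; the mapping just mirrors the text: low-risk data may go to commercial providers, moderate-risk data only in selected cases, and high-risk data stays within DoD-controlled infrastructure.

```python
# Minimal sketch of the risk-based placement rule described above.
# Data set names are illustrative placeholders.
def placement(risk_level: str) -> str:
    if risk_level == "low":
        return "approved commercial cloud provider"
    if risk_level == "moderate":
        return "commercial provider only in selected cases; otherwise DoD enterprise cloud"
    return "DoD-controlled infrastructure only"   # high risk never leaves DoD control

datasets = [("public affairs releases", "low"),
            ("logistics schedules", "moderate"),
            ("weapons system designs", "high")]

for name, risk in datasets:
    print(f"{name} ({risk} risk) -> {placement(risk)}")
```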

 

Protecting mission-critical information and systems requires the most stringent protection measures including highly classified tools, sophisticated cyber analytics, and highly adaptive capabilities that must remain within the physical and operational control of the DoD.

 

The transformation of the DoD to a cloud infrastructure for its information network and cyberspace activities, resulting in the current JIE, has been an incredible journey relying on the expertise of some of our nation’s most professional, knowledgeable, and highly skilled personnel.

 

Cyber Security Framework Trend 6:

Big Data


As pointed out earlier, social media and the enormous number of mobile devices, as well as the M2M connectivity and the IoT with the increasing number of sensors, have created an environment in which we are experiencing an explosion of data. As a result of cloud computing and virtualization, we are now capable of entering the new environment of big data.

 

The data being produced today are so large and complex that they cannot be processed by traditional relational database management programs.

 

The reason new processes are necessary is that much of the data appears in unstructured and semistructured formats, which deviate entirely from the structured data format based on SQL, the international standard for defining and accessing relational databases.

 

Structured and Unstructured Data

Structured data consist of ordinary business records such as customer invoices, billing records, employee pay information, and any number of typical business transactions that have traditionally been managed in spreadsheets and databases.

 

In contrast, unstructured data consist of photographs, videos, social network updates, blog entries, remote sensor logs, and other diverse types of information that are more difficult to process, categorize, and analyze with traditional tools.

 

Naturally, the question that comes to the forefront is this: if big data cannot be processed by traditional relational database management programs, how is this enormous volume of data being processed? The answer typically revolves around two big data components.

 

The first is Hadoop, an open-source technology framework that provides distributed storage for these large unstructured and semi-structured data sets through its shared file system, the Hadoop Distributed File System (HDFS), and, through its MapReduce processing engine, offers parallel analysis capability.

 

Hadoop distributions are available from the Apache Software Foundation as well as through vendors such as IBM, HP, Cisco, and others.

 

The second component is NoSQL, which provides the capability to capture, read, and update, in real time, the large influx of unstructured data and data without schemas; examples include clickstreams, social media, log files, event data, mobility trends, sensor data, and M2M data.
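
The following toy sketch shows the basic idea behind a schema-less document store: records of different shapes can be captured, updated, and queried without a predefined table structure. The class and field names are illustrative assumptions, not the API of any particular NoSQL product.

```python
# A toy, in-memory sketch of the NoSQL idea described above: documents are
# captured and queried without a predefined schema. Class and field names
# are illustrative assumptions, not any particular product's API.

import uuid
from typing import Any, Dict, List


class DocumentStore:
    def __init__(self) -> None:
        self._docs: Dict[str, Dict[str, Any]] = {}

    def insert(self, doc: Dict[str, Any]) -> str:
        """Store any dictionary-shaped document and return its generated id."""
        doc_id = str(uuid.uuid4())
        self._docs[doc_id] = doc
        return doc_id

    def update(self, doc_id: str, changes: Dict[str, Any]) -> None:
        """Merge new fields into an existing document (no schema to enforce)."""
        self._docs[doc_id].update(changes)

    def find(self, **criteria: Any) -> List[Dict[str, Any]]:
        """Return documents whose fields match all given criteria."""
        return [d for d in self._docs.values()
                if all(d.get(k) == v for k, v in criteria.items())]


store = DocumentStore()
store.insert({"source": "clickstream", "user": "u42", "page": "/pricing"})
sensor_id = store.insert({"source": "sensor", "device": "pump-3", "temp_c": 71.4})
store.update(sensor_id, {"alert": True})
print(store.find(source="sensor"))
```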

 

An example of a big data technology ecosystem would include a big data platform that provides storage of the data. The data can include images and videos, social media, weblogs, documents, operational data from legacy systems, and data from a data warehouse.

 

This platform includes the capabilities to integrate, manage, and apply sophisticated computational processing to the data. Hadoop uses a processing engine named MapReduce to both distribute data and process the data in parallel across various nodes.
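
The MapReduce programming model can be illustrated with a small, single-process word-count sketch: a map step emits key-value pairs, a shuffle groups them by key, and a reduce step aggregates each group. Hadoop performs these same steps distributed across many nodes; this local simulation is only meant to show the pattern.

```python
# A minimal, single-process sketch of the MapReduce pattern: a map step emits
# (key, value) pairs, a shuffle groups them by key, and a reduce step
# aggregates each group. Hadoop runs the same steps distributed across many
# nodes; this local simulation only illustrates the programming model.

from collections import defaultdict
from typing import Dict, Iterable, List, Tuple

documents = [
    "malware found in log files",
    "log files show repeated malware alerts",
]


def map_phase(doc: str) -> List[Tuple[str, int]]:
    # Emit a (word, 1) pair for every word in the document.
    return [(word, 1) for word in doc.split()]


def shuffle(pairs: Iterable[Tuple[str, int]]) -> Dict[str, List[int]]:
    # Group all emitted values by key, as the framework would between phases.
    grouped: Dict[str, List[int]] = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped


def reduce_phase(key: str, values: List[int]) -> Tuple[str, int]:
    # Aggregate the values for one key (here: a total word count).
    return key, sum(values)


all_pairs = [pair for doc in documents for pair in map_phase(doc)]
counts = dict(reduce_phase(k, v) for k, v in shuffle(all_pairs).items())
print(counts)  # e.g. {'malware': 2, 'log': 2, 'files': 2, ...}
```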

 

An example of how big data would be used by healthcare providers would entail the use of big data technologies to track the patient’s lifecycle with healthcare management capabilities, including all patient transactions, social media interactions, radiology images, pharmaceutical prescriptions, patient medical history, and any other related information important to the healthcare and lifecycle of the patient.

 

These data are stored and are repopulated into operational systems or prepared for subsequent analytics through the data warehouse.

 

Securing Big Data

Securing Big Data

Obviously, with data as important as a patient’s medical data, there is a need for the assurance of the information and its security. 

 

Since big data consists of data sourced through the Internet, cloud computing, social media, and mobile devices, as well as legacy system data, this commingling of data creates vulnerabilities, and malicious hacking from a remote, unknown source could pose a serious threat.

 

The security of these big data systems is critical and is very much a concern to those considering moving into this environment.

 

One problem that was fairly well resolved by traditional IT systems was securing the “back-end systems,” where the network’s hosts, storage, and applications resided within the enterprise server room or data center. Now, because of virtualization, the IT infrastructure is no longer solely on the premises, since much of it resides in the cloud computing environment.

 

If you are in a public cloud or a community cloud, there is a high probability that you do not even know where your data reside, which means you may not know whether your data are in the same state or, for that matter, in the same country.

 

Another problem concerns endpoints, a term that, in the past, referred only to devices that were centrally procured, provisioned, and managed by the enterprise IT function.

 

This picture is now complicated by BYODs, which are owned not by the organization but by the employee and which are highly susceptible to bringing malware into the data center.

 

Also, user-generated unstructured data are easy to share among many people, which has become a very large problem in managing the data center and protecting it from malicious software and from unpatched, poorly secured mobile devices.

 

The process of providing security to the big data environment involves many facets and responsibilities. Since big data adds substantial complexity to the entire IT infrastructure and is widely distributed, it is important that it be protected appropriately.

 

This means that judgments must be made as to which information should be classified and what level of sensitivity protection it should receive.

 

The information needs to be protected across applications and environments with periodic vulnerability tests. The security measures should also guard against any intrusions that could modify or change the data, and data assigned a higher risk level must be identifiable by their location.

 

Obviously, data located within the IT infrastructure as well as the cloud environment must be protected, and users of the data must be monitored. Thus, the organization must have policies in place to govern how it will protect and assure the big data environment. This means that there should be policies dealing with the security of the following:

 

  1. Structured information
  2. Unstructured data
  3. Device security
  4. Mobile application security
  5. Data transmission security
  6. Device information security
  7. Security monitoring and audit processes

 

New security requirements that might be considered in the protection of information within the big data environment include the following:

  1. Need to encrypt sensitive data on big data platforms (a minimal sketch follows this list)
  2. Need to flag sensitive data files in Hadoop and other NoSQL data stores and to control access to them
  3. Need to control who can access exploratory “sandboxes” built on Hadoop or other analytical database management systems
  4. Need to encrypt and redact sensitive data produced by analysis conducted in Hadoop
  5. Need to protect access to sensitive data in big data platforms from applications and tools using other database management systems
  6. Need to log and report on which users and applications accessed sensitive data on any big data platform
  7. Need to control access to sensitive data from MapReduce applications running on Hadoop
  8. Need to protect both on-premises and cloud data
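
As a concrete illustration of the first requirement, the sketch below encrypts a sensitive field before the record is written to a big data platform and decrypts it only for authorized use. It assumes the widely used third-party Python cryptography package is available; a real deployment would also need proper key management, which is reduced here to a single in-memory key for illustration only.

```python
# A minimal sketch of requirement 1: encrypting a sensitive field before it is
# written to a big data platform, and decrypting it on authorized read.
# Assumes the third-party "cryptography" package is installed; key management
# (arguably the hard part) is reduced here to one in-memory key.

from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, keep this in a key vault
cipher = Fernet(key)

record = {"patient_id": "P-1093", "diagnosis": "confidential finding"}

# Encrypt the sensitive field before the record leaves the application.
record["diagnosis"] = cipher.encrypt(record["diagnosis"].encode()).decode()
print("Stored record:", record)

# Only components holding the key can recover the original value.
plaintext = cipher.decrypt(record["diagnosis"].encode()).decode()
print("Decrypted for authorized use:", plaintext)
```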

 

Security Analytics

Security Analytics

The emerging field of security analytics marks the beginning of a reevaluation of how computer security will grow beyond the simple application of intrusion detection and intrusion prevention tools.

 

Currently, organizations can purchase various security tools such as Security Information and Event Management (SIEM), Data Loss Prevention (DLP), and Network Intrusion Prevention (NIP) systems and can take advantage of the tools’ built-in algorithms. However, this approach is fundamentally reactive, relying on the tools to identify an attack or a similar event.

 

The new approach we hope security analytics will offer is to develop skill sets in computer security personnel that enable them to collect and analyze data logs, network flows, full packet captures, and endpoint execution data, and to extract useful insights by applying both data analysis algorithms and their own security analysis.

 

The value of a well-educated and skillful security analytics expert lies in the ability to explore patterns and to offer correlation analysis of events tied both to anomaly detection and to the prediction of event occurrences.

 

The security analytics practitioner can offer an enriching capability by constructing a new repository of collected log activity and network traffic data, supplemented with Domain Name System (DNS) records, WHOIS information, and threat intelligence alerts from all source sites and agencies, so that this repository can be data mined and analyzed for trends, patterns, and deviations from observed models.
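
As a small, hedged example of the kind of analysis such a repository enables, the sketch below flags days whose failed-login counts deviate sharply from the observed baseline. The counts and the threshold are illustrative assumptions, not a production detection rule.

```python
# A simple anomaly-detection sketch over collected log data: flag days whose
# failed-login counts deviate sharply from the baseline. The counts and the
# z-score threshold are illustrative assumptions.

import statistics

# Daily failed-login counts pulled from collected log data (hypothetical).
daily_failed_logins = [42, 38, 45, 40, 39, 44, 41, 37, 43, 180, 40, 42]

mean = statistics.mean(daily_failed_logins)
stdev = statistics.stdev(daily_failed_logins)

for day, count in enumerate(daily_failed_logins, start=1):
    z_score = (count - mean) / stdev
    if abs(z_score) > 3:  # a common, if simplistic, anomaly threshold
        print(f"Day {day}: {count} failed logins (z = {z_score:.1f}) -- investigate")
```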

 

This new approach provides computer security personnel with the analytical capabilities to detect new attacks, investigate past intrusions, and be better prepared to uncover employee abuse or malicious insider activity.

 

In short, the most important contribution of this new security analytics perspective is that we are now preparing computer security personnel to respond to events in real-time or at least near real-time with greater complexity than what is offered by signature-based intrusion detection tools.

 

Currently, there exists a huge deficit of personnel who are skilled and trained in data analytics, and the field of computer security analytics barely exists at all. Personnel in both of these areas are in extremely high demand, particularly as a result of the emergence of big data.

 

In a 2013 survey focused on real-time big data analytics for detecting security problems, over 40% of the 260 enterprise security professionals surveyed stated that they were challenged by a lack of adequate staffing in their security operations/incident response teams.

 

The Wall Street Journal reported that the McKinsey Global Institute estimated that the demand for employees skilled in data analysis will outstrip supply by 60% by 2018, and this does not even factor in the demand for security analytics personnel who are virtually nonexistent today.

 

Big Data Applications

Big Data Applications

Perhaps the best way to appreciate the transformation that big data is introducing is to provide several examples of programs that have already been institutionalized.

 

At the same time, it is appropriate to also present the amount of data that are being produced and why this challenge will continue to grow as additional programs are developed and institutionalized.

 

The amount of data being created in an unstructured format by social media, mobile devices, the IoT, and M2M sensors is truly remarkable. As of April 2013, IBM estimated that 2.5 quintillion bytes of data are created daily. The average amount of stored data per U.S. company with more than 1000 employees exceeds 200 terabytes.

 

There are 6 billion global cell phone subscriptions beaming location information back to networks. Amazon alone has more than 500,000 computer servers in their Elastic Compute Cloud. There are 4.5 million new URLs appearing on the web each month.

 

There are 170 computing centers across 36 countries analyzing data from the CERN facility, and 25 million gigabytes of data are created annually by the Large Hadron Collider at CERN.

 

This amount of data is precisely why new technologies were created to store and process this information. However, what is missing is the personnel to work in the big data environment; the Gartner research firm estimated that 85% of Fortune 500 firms would be unprepared to leverage big data for competitive advantage by 2015.

 

In fact, estimates of the current shortage of U.S. managers with data analysis skills exceed 1.5 million people.

 

We have already discussed one application of big data that included patient lifecycle applications within the healthcare industry. Another fascinating application has transformed research capabilities in the field of geology through the use of big data.

 

Most geological discoveries have been reported in research journals, and over the history of the field worldwide, those journals have accumulated vast amounts of research data.

 

Some very good research that received little notice was consigned to oblivion and was not accessible to contemporary geology researchers. Additionally, researchers were hampered not only by the volume and inaccessibility of past research but also by the high cost of geological surveys and on-site fieldwork.

 

In 2012, Professor Shanan Peters, a geologist at the University of Wisconsin, teamed up with two computer science professors, Miron Livny and Christopher Re, to build a computer program that scanned pages from pre-Internet science journals, generations of websites, archived spreadsheets, and video clips to create a database comprising as nearly as possible the entire universe of trusted geological data.

 

These massive piles of unstructured and overlooked data are now available to geology professors and students, who can query the database and receive informative replies. The program, called GeoDeepDive, has provided researchers access to a larger collection of geological data than ever before.

 

Another advantage of utilizing a query system is the ability to pose questions to the system that researchers may lack the expertise to answer on their own.

 

This insightful program created by the University of Wisconsin Geology and Computer Science departments is an example of how other academic programs can enrich their fields of research.

 

These gains were made possible by virtualization, cloud computing, and big data, which allow the incorporation of valuable unstructured data ranging from video to voice recordings and many other formats.

 

The Hadoop and NoSQL components of big data permit rather advanced query capabilities resulting in the production of important new insights and directions for further research and knowledge building.

 

Examples of governmental programs that are embracing big data applications can be found in the National Weather Service and the Federal Emergency Management Agency (FEMA), where new data-rich models are being developed to predict weather patterns.

 

Also, the Centers for Medicare and Medicaid Services (CMS) has created a system that permits analysis of the 4 million claims it pays daily in order to search for fraudulent patterns of activity. Since federal requirements impose a 30-day obligation to pay all claims, a system that detects fraudulent behavior quickly is necessary.

 

Perhaps the most important and longest-lasting effects of big data applications are likely to be in the physical sciences, where big data has the capacity to assist researchers in formulating new hypotheses through its query development capability.

 

An example of this type of application is the work of the National Institutes of Health, which has placed more than 1000 individual human genomes inside Amazon’s Elastic Compute Cloud.

 

Amazon is storing this massive amount of non-sensitive government information at no fee for the government. 

 

The information being stored currently amounts to 2000 terabytes of data, and when researchers want to use this database, they are charged only for the computing time required to analyze the cloud-based dataset and meet their research objective.

 

This big data storage model has opened the field to large numbers of health and drug researchers, academics, and graduate students who could never have afforded this research before its inclusion in the cloud and in big data applications. More importantly, it has the potential to accelerate research and shorten the time needed to develop treatments for diseases.

 

The cost factor is quite astonishing, because research such as this would previously have entailed the use of a supercomputer and cost over $500,000.

 

In less than seven years, the cost of sequencing an individual human genome dropped to roughly $8,000 by 2012, and sequencing performed as part of a medical diagnosis is approaching a cost of less than $1,000.

 

As costs continue to fall and researchers gain ever greater opportunities to study the more than 1000 human genomes stored within the Amazon Elastic Compute Cloud, we anticipate new discoveries and improved abilities to treat diseases.

Another interesting application of big data is found in some of the research in Canada, where researchers are interested in the identification of infections in premature babies before the appearance of overt symptoms.

 

The research protocol is to convert 16 vital signs, including heartbeat, blood pressure, respiration, and blood-oxygen levels, into an information flow of more than 1000 data points per second in order to ascertain correlations between very minor changes and more serious problems.
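
The underlying idea can be shown with a hedged sketch that computes the Pearson correlation between two simultaneously sampled vital-sign streams. The readings below are synthetic, illustrative values, not clinical data.

```python
# A hedged sketch of the correlation idea described above: compute the Pearson
# correlation between two simultaneously sampled vital-sign streams. The
# readings are synthetic, illustrative values, not clinical data.

import math

heart_rate = [142, 145, 150, 155, 161, 166, 170, 176]            # beats per minute
blood_oxygen = [97.0, 96.5, 96.1, 95.4, 94.8, 94.1, 93.5, 92.8]  # SpO2 percent


def pearson(x, y):
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / math.sqrt(var_x * var_y)


r = pearson(heart_rate, blood_oxygen)
print(f"Correlation between heart rate and blood oxygen: {r:.3f}")
# A strong negative correlation like this could be one of many weak signals
# combined to flag a developing problem before overt symptoms appear.
```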

 

Over an extended period of time, as the database grows, it is projected that this approach will provide physicians with a deeper comprehension of the etiology of such problems.

 

One of the major changes in processing big data research questions centers on the issue of inference. The enormous volume of data being processed is being probed for inferential relationships and correlations.

 

This approach is totally at variance with traditional research methodologies, in which statistical samples of small amounts of data representing a larger population were analyzed to draw predictive and causal conclusions.

 

The significance of this major methodological change is a caution to big data researchers: any causal conclusion they offer must be carefully reviewed and analyzed, because the data sets included in their research are drawn from highly unstructured data and are open to concerns and checks regarding scientific validity.

 

However, if the results are framed within the perspective of correlation analysis, they will provide a rich set of previously unobserved opportunities to correlate event X with event Y or Z, and may even suggest multiple correlative lines of research inquiry that can later be subjected to more traditional causal analysis.

 

The University of California, Berkeley’s Simons Institute for the Theory of Computing held its Fall 2013 program on the theoretical foundations of big data analysis, and its comments on big data are very instructive:

 

We live in an era of “Big Data”: science, engineering, and technology are producing increasingly large data streams, with petabyte and exabyte scales becoming increasingly common.

 

In scientific fields such data arise in part because tests of standard theories increasingly focus on extreme physical conditions (cf., particle physics) and in part because science has become increasingly exploratory (cf., astronomy and genomics).

 

In commerce, massive data arise because so much of human activity is now online and because business models aim to provide services that are increasingly personalized.

 

Clearly, we are living in the era of big data, and data streams of petabyte and exabyte scales are increasingly becoming quite common.

 

As organizations move to embrace and create more big data applications, it is important that the science surrounding these applications be more firmly based on the theories of computation, statistics, and related disciplines, where continuing research into dimension reduction, distributed optimization, Monte Carlo sampling, compressed sensing, and low-rank matrix factorization is still needed.
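
To make at least one of these techniques concrete, the sketch below uses Monte Carlo sampling to estimate the value of pi from random samples; the same idea of replacing exact computation with repeated random sampling underlies many large-scale analysis methods. The sample size and seed are arbitrary choices for the illustration.

```python
# A minimal illustration of Monte Carlo sampling: estimate pi by sampling
# random points in the unit square and measuring the fraction that fall
# inside the quarter circle.

import random

random.seed(0)          # fixed seed so the sketch is reproducible
samples = 1_000_000
inside = 0

for _ in range(samples):
    x, y = random.random(), random.random()
    if x * x + y * y <= 1.0:
        inside += 1

pi_estimate = 4 * inside / samples
print(f"Monte Carlo estimate of pi from {samples:,} samples: {pi_estimate:.4f}")
```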

 

The major transformational changes that big data is introducing to our society require a firmer application of science to guard against any latent, unanticipated, and dysfunctional consequences of this big data movement.

 

Preparing Future Generations for Cybersecurity Transformational Challenges

The challenges for cybersecurity professionals are both deep and longitudinal, as the era of big data, cloud computing, and the IoT has introduced so many fundamental security vulnerabilities. The threat landscape continues to grow, and both preventing and stopping breaches in real-time or near real-time is difficult at best.

 

The emergence of big data has spawned a need for increased research into the theoretical foundations for big data. The fields of engineering, computer science, and statistics will have to address the research challenges that confront inferential algorithms, while also providing additional research into the field of correlation analysis.

 

Our universities will be facing a need and challenge to locate and employ faculty and researchers who will provide the foundations for creating the academic instructional areas in security analytics, data analytics, decision science, predictive analytics, and correlation analysis.

 

The role of the university and its research collaborations with governmental agencies and the DoD will continue to grow in importance, both in providing skilled and educated next-generation workers and in sustaining a vigorous research program.

 

The fundamental role of defending our nation has dramatically changed as a result of the activities within the cyberspace environment. War as we once knew it is forever changed due to the digital advancements that continue to be made.

 

Cyber weapons now exist and have the capability of decimating even the most prepared nations. The ability to design and prepare cyber weapons exceeds the current defense strategies of most nations.

 

The challenges in international law and in the area of individual privacy issues will continue to increase and require patient and sound educated judgments to guide both governments and nations.

 

Greater cooperation will be required between our universities, research institutes, and our industries as we prepare for the development of new advancements in science and the generation of new inventions.

 

Finally, our nation’s commitment to an educational system that seeks to expand the boundaries of science, technology, and the advancement of knowledge is a strength that provides an environment for our children with unrivaled opportunities for growth and achievement.

 

The dedication of teachers at our elementary and secondary school systems as well as the faculty of our colleges and universities all work in an effort to provide our nation with the next generation of citizen leaders and innovators.

 

As we prepare our youth for the future and the transformational challenges they will encounter, our nation will be well advised to continue its investment in our education systems at all levels of society. The continuity and sustainability of our nation’s commitment to these ideals, goals, and the highest of standards are fundamental parts of our heritage.

 

Economic Cost of Cybersecurity

Cost of Cybersecurity

Calculating the cost of cybersecurity is a very complex problem since there are a number of variables that must be included in any economic assessment. Another facet of the problem is to define what is being measured in calculating the economic cost.

 

In addition, what economic model will be applied, and will it control for the statistical requirements of sampling and other research methodology requirements?

 

How completely and accurately are computer breaches and computer crimes being reported, and what is the variability among corporations, governmental agencies, and individual citizens?

 

Further difficulties emerge when the public media report the “cost of computer crime” from various sources which, in many instances, are nonscientific, rely on undocumented data, and may present inflated cost estimates.

 

Factors that are important in determining the economic cost of cybersecurity include the following (a simple cost-aggregation sketch follows the list):

  1. Financial losses to business organizations (small businesses and corporations)
  2. Nongovernmental organizations (NGOs) and charitable organizations
  3. Individuals
  4. Governmental organizations
  5. Costs expended to protect against loss (e.g., antivirus software)
  6. Insurance costs
  7. Macroeconomic costs
  8. Cybersecurity and information technology (IT) information security personnel
  9. Cyber intelligence/counterintelligence (military and corporate)
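
One simple way to structure such an estimate is to combine expected annual incident losses with annual prevention and defense spending, as in the sketch below. All figures are hypothetical placeholders; the point is the structure of the model, not the numbers, and a rigorous study would require far better inputs.

```python
# A hedged sketch of how the cost categories above might be combined into a
# single annual estimate. All figures are hypothetical placeholders.

# Expected annual incident losses: probability of at least one incident per
# year multiplied by the average loss per incident, for each victim category.
expected_incident_losses = {
    "small_businesses":   {"annual_probability": 0.30, "avg_loss": 120_000},
    "corporations":       {"annual_probability": 0.60, "avg_loss": 3_500_000},
    "ngos_and_charities": {"annual_probability": 0.20, "avg_loss": 80_000},
    "individuals":        {"annual_probability": 0.10, "avg_loss": 1_500},
    "government":         {"annual_probability": 0.50, "avg_loss": 5_000_000},
}

# Annual prevention and defense spending (antivirus, insurance, personnel, ...).
annual_protection_costs = {
    "antivirus_and_tools": 250_000,
    "cyber_insurance": 90_000,
    "security_personnel": 600_000,
}

expected_loss = sum(c["annual_probability"] * c["avg_loss"]
                    for c in expected_incident_losses.values())
protection = sum(annual_protection_costs.values())

print(f"Expected annual incident losses: ${expected_loss:,.0f}")
print(f"Annual protection spending:      ${protection:,.0f}")
print(f"Total modeled annual cost:       ${expected_loss + protection:,.0f}")
```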

 

It is critical to assess and measure the cost of cybersecurity and the range of issues that are required to prepare an adequate defense and prevention strategy for the security of information assets and intellectual property.

 

An understanding of the scope of cybercrime, when expressed in financial terms, provides policymakers with a perspective as to how serious the cybercrime problem really is and what degree of investment of resources will be required to realistically address and defend against this growing problem.

 

Since cyber activities have expanded beyond cybercrime to include cyber espionage and cyber warfare, society’s vulnerability has substantially increased, along with its financial burden for defense and for the prevention of security breaches and attacks.

 

The economic analysis of cybersecurity costs now has to assess organizations and entities at the federal, state, and municipal levels; corporate businesses; small businesses; NGOs; and individuals.

 

As each of these entities has a vast array of organizations and individuals who may be victimized, a careful and sound research-based scientific study must be performed to determine the costs of victimization losses.

 

Also, the cost of prevention and defense must be factored into the true cybersecurity costs. These costs will entail antivirus software, firewalls, intrusion prevention software, and a range of additional security devices and network software programs and services at a global level.

 

In addition, the cost of cybersecurity professionals, managers, and executives at the C-level, including chief information security officers, will also need to be included in the economic cost modeling.

 

The cost of cybercrime is quite complicated, as all costs cannot be neatly summed up by reporting and totaling actual financial losses. It becomes very difficult to measure actual financial losses since the loss of intellectual property has both immediate and long-term costs.

 

The closure and bankruptcies of some businesses have been reported due to their loss of critical intellectual property.

 

Another dimension of the difficulty in assessing the economic costs of cybersecurity arises when a security breach diminishes a business’s reputation and, in some cases, results in the loss of customers or of opportunities to serve other business partners.

 

It is difficult to ensure an accurate assessment of the financial costs of a computer security breach, especially when there are interdependencies between a breach’s initial impact and the costs incurred several weeks or months afterward.

 

Equally difficult is the cost accounting assessment of the financial loss incurred by the lost opportunity of serving business customers who fear to return to an organization that has suffered a major breach.

 

Computer security breaches occurring or targeted against our nation’s military and our governmental agencies create additional cost factors that are incurred in the defense of our nation. Another important aspect of analyzing the cost of cybersecurity occurs in terms of the transformational costs involved in securing the defense of our nation.

 

The increase in cyber espionage by nation-states as well as terrorist organizations has resulted in our military investing billions of dollars to provide a protective defense for our nation.

 

In addition to cyber espionage, the threat of cyber warfare has created a need for cyber weapons and the defense against opposing offensive cyber weapons.

 

There is a range of very complex costs for personnel, equipment, hardware, and software, in addition to long-range responsibilities and requirements, that must also be considered in any true assessment of the cost of cybersecurity.

 

Many researchers and economists have expressed concern over the inflated estimates of the costs of cybersecurity and have even noted that many officials in the federal government are commenting on financial costs in the trillions of dollars without providing a basis for these cost figures.

 

Interestingly, most of the economic research has been done by those corporations involved with the field of cybersecurity, and the criticism has been raised as to whether the effort is more of a “marketing” focus as opposed to a science-based approach committed to economic analysis.

 

In fairness to past industry-based economic assessments, we can be grateful for their interest in pursuing this important information. It should also be noted that little focus on determining the economic costs of security breaches and computer crime was being provided by our nation’s research universities, and even less effort was being expended by our governmental agencies.

 

Cost of Cybersecurity—Studies and Reports

computer crime

There exists little consensus, and even less satisfaction, regarding current knowledge of the accurate cost of cybersecurity within our nation. There is little agreement as to the real cost of computer crime, and while great improvement is being made in determining the cost of security breaches, much work remains to be completed.

 

One of the problems is the absence of standard research methodologies for cost measurement and modeling.

 

Another problem stems from the lack of a standardized protocol and requirement for reporting security breaches; in fact, business organizations are often greatly reluctant even to report computer crime and breach activity.

 

As a result, we have very spotty empirical data on costs that are attributable to computer crime, security breaches, viruses, worms, and other attack mechanisms. Without solid empirical data, the challenge of calculating the cost of cybersecurity becomes speculative at best.

 

Cybersecurity as a Business Risk

Cybersecurity Business Risk

An important study sponsored by Experian Data Breach Resolution and independently conducted by the Ponemon Institute surveyed risk management professionals who either considered or adopted cyber insurance policies.

 

According to survey question responses, many risk managers understand that security is a clear and present risk, and a majority of the surveyed companies now rank cyber security risks as greater than natural disasters and other major business risks.

 

The increasing cost and number of data breaches are forcing business executives to reconsider cybersecurity, treating it not as a purely technical issue but as a complex and major business risk.

 

Corporate boards of directors and trustees are also expecting their chief executive officers (CEOs) to become more fully engaged in this new and potentially devastating risk.

 

The noteworthy findings of the study, titled “Managing Cyber Security as a Business Risk: Cyber Insurance in the Digital Age,” revealed that concerns about cyber risks are moving beyond corporate IT teams and are increasingly being taken up by risk managers.

 

As a result of risk managers becoming more engaged in cybersecurity issues and data breaches, there has been an increased interest in corporations acquiring cyber insurance policies.

 

Of those participating in the study’s survey, 31% currently have a cyber insurance policy and another 39% stated that their organizations plan to purchase a cyber insurance policy.

 

Despite increasing interest in acquiring cyber insurance policies, the study did identify the main reasons respondents gave for not purchasing cybersecurity insurance, and those reasons, in order of frequency of response, were as follows:

  1. Premiums are too expensive.
  2. There are too many exclusions, restrictions, and uninsurable risks.
  3. Property and casualty policies are sufficient.
  4. They are unable to get insurance underwritten because of the current risk profile.
  5. Coverage is inadequate based on exposure.
  6. Risk does not warrant insurance.
  7. Executive management does not see the value of this insurance.

 

Of those respondents who stated that their company did have cyber insurance, 40% stated that risk management was most responsible for evaluating and selecting the insurance provider.

 

Interestingly, the study reported that the chief information officer and the chief information security officer had little involvement and influence in the purchase decision and policy coverage, even though one would naturally assume that their views and input would be seriously considered.

 

For those companies that did report having cyber insurance coverage, their policies covered the following types of incidents:

  1. Human error, mistakes, and negligence
  2. External attacks by cyber-criminals
  3. System or business process failures
  4. Malicious or criminal insiders
  5. Attacks against business partners, vendors, or other third parties that had access to the company’s information assets

 

The study also reported the following protections or benefits covered by the cyber insurance policies; again, the responses are ranked by frequency of respondent answers, from highest to lowest:

  1. Notification costs to data breach victims
  2. Legal defense costs
  3. Forensic and investigative costs
  4. Replacement of lost or damaged equipment
  5. Regulatory penalties and fines
  6. Revenue loss
  7. Third-party liability
  8. Communication costs to regulators
  9. Employee productivity losses
  10. Brand damage

 

The above listing of areas to seek cyber insurance protection is consistent with most companies’ concerns after experiencing a breach.

 

One very interesting result of this study revealed that companies rarely use formal risk assessments by in-house staff to determine how much coverage should be purchased. Instead, companies rely on the insurer to do a formal risk assessment.

 

What we find most striking about this situation is the fact that insurance carriers are only recently becoming involved with cybersecurity issues, so their level of experience and knowledge is probably not much deeper than that of the company’s risk managers.

 

Clearly, both groups will need further training and education as the field of cyber insurance develops. While corporations view the cybersecurity issues in terms of breaches, the insurance carriers view cybersecurity issues in terms of claims. Settled claims are determined by the company’s cyber resilience defense against security breaches, as well as a host of other factors.

 

So both groups must begin to learn a great deal more about cybersecurity, since security breaches are increasing in both frequency and cost, with individual breaches measured in the millions of dollars and insurance premiums measured in the billions of dollars.
