Penetration Testing


Defining Penetration Testing and What a Penetration Tester Does

A penetration tester, or pentester, is employed by an organization either as an internal employee or as an external entity such as a contractor hired on a per-job or per-project basis. In either case, pentesters conduct a penetration test, meaning they survey, assess, and test the security of a given organization by using the same techniques, tactics, and tools that a malicious hacker would use.

 

The main differences between a malicious hacker and a pentester are intent and the permission that they get, both legal and otherwise, from the owner of the system that will be evaluated. Additionally, pentesters are never to reveal the results of a test to anyone except those designated by the client.

 

As a safeguard for both parties, a nondisclosure agreement (NDA) is usually signed by both the hiring firm and the pentester. This protects company property and allows the pentester access to internal resources.

 

Finally, the pentester works under contract for a company, and the contract specifies what is off-limits and what the pentester is expected to deliver at the end of the test. All of the contractual details depend on the specific needs of a given organization.

 

Other commonly encountered terms for a pentester are penetration tester, ethical hacker, and white-hat hacker. All three terms are correct and describe the same type of individual (though some may debate these apparent similarities in some cases). Pentester is the most commonly used name.

 

Before going further, keep the following key points about penetration testing in mind:

1. A penetration testing methodology ensures that a process is followed and that certain tasks get completed. A methodology also ensures that a test meets regulatory or other legal requirements when compliance testing is done.

 

2. If a penetration test is undertaken or requested as part of a regulatory audit or compliance test, the law can play a big role. Failure to follow specific processes, or to follow them on a regular schedule, can lead to civil and regulatory penalties.

 

3. Different methodologies may have more or fewer steps based on their goals and what they were designed for. For example, a pentest performed for HIPAA compliance has specific goals in mind, and the process may need to be adjusted to account for them.

 

4. Scoping a penetration test is important since it allows the client and penetration tester to understand the test objectives. The scoping process should seek to clearly define all the goals and objectives of the test and what the expected deliverables at the end of the test will be.

 

5. Without written permission, a penetration tester entering a network or other system is not viewed any differently than a black-hat hacker. Written permission should always be obtained if test goals are expanded, changed, or otherwise differ from the original objectives. Never substitute verbal approvals or requests to perform a task for written permission.

 

Being a pentester has become more important in today’s world as organizations have had to take a more serious look at their security posture and how to improve it. Several high-profile incidents such as the ones involving retail giant Target and entertainment juggernaut Sony have drawn attention to the need for better trained and more skilled security professionals who understand the weaknesses in systems and how to locate them.

 

Through a program that combines technological, administrative, and physical measures, many organizations have learned to fend off attacks and address their vulnerabilities.

 

Technology controls such as virtual private networks (VPNs), cryptographic protocols, intrusion detection systems (IDSs), intrusion prevention systems (IPSs), access control lists (ACLs), biometrics, smart cards, and other devices have helped improve security.

 

Administrative controls such as policies, procedures, and other rules have also been strengthened and implemented over the past decade. Physical controls include devices such as cable locks, device locks, alarm systems, and other similar devices.

 

As a pentester, you must be prepared to test environments that include any or all of the technologies listed here as well as an almost endless number of other types. So, what is a penetration tester anyway?

 

EC-Council uses ethical hacker when referencing its own credential, the Certified Ethical Hacker.

 

In some situations, what constitutes a hacker is a topic ripe for argument. I have had many interesting conversations over the years addressing the question of whether the term hacker is good or bad. Many hackers are simply bad news all-around and have no useful function, and that’s how hackers are usually portrayed in movies, TV, blogs, and other media.

 

However, hackers have evolved, and the term can no longer be applied to just those who engage in criminal actions. In fact, many hackers have shown that while they have the skill to commit crimes and wreak havoc, they are more interested in engaging with clients and others to improve security or perform research.

 

Recognizing Your Opponents


In the real world, you can categorize hackers to differentiate their skills and intent.

 

Script Kiddies These hackers have limited or no training and know how to use only basic tools or techniques. They may not even fully understand what they are doing.

 

White-Hat Hackers These hackers think like the attacking party but work for the good guys. They typically are characterized by having what is commonly considered to be a code of ethics that says they will cause no harm. This group is also known as pentesters.

 

Gray-Hat Hackers These hackers straddle the line between the good and bad sides; some have decided to reform and join the good side. Even once they are reformed, however, they may not be fully trusted. Additionally, in the modern era of security, these types of individuals also find and exploit vulnerabilities and provide their results to the vendor, either for free or for some form of payment.

 

Black-Hat Hackers These hackers are the bad guys who operate on the wrong side of the law. They may have an agenda or no agenda at all. In most cases, black-hat hackers and outright criminal activity are not too far removed from one another.

 

Cyber terrorists are a newer type of attacker who tries to knock out a target without regard for stealth. To prove a point, such an attacker essentially is not worried about getting caught or doing prison time.

 

Preserving Confidentiality, Integrity, and Availability


Any security-minded organization tries to maintain the CIA triad, that is, the core principles of confidentiality, integrity, and availability. The following list describes these core concepts. You should keep them in mind when performing the tasks and responsibilities of a pentester.

 

Confidentiality This refers to the safeguarding of information, keeping it away from those not otherwise authorized to possess it. Examples of controls that preserve confidentiality are permissions and encryption.

 

Integrity This deals with keeping information in a format that retains its original purpose, meaning that the data the receiver opens is the same data the creator intended.

 

Availability This deals with keeping information and resources available to those who need to use it. Simply put, information or resources, no matter how safe, are not useful unless they are ready and available when called upon.
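To make the integrity principle concrete, here is a minimal Python sketch that records a SHA-256 digest of a file and recomputes it later to detect tampering; the filename is a hypothetical placeholder, not anything prescribed by this methodology.

    import hashlib

    def sha256_of(path: str) -> str:
        """Return the SHA-256 digest of a file, read in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Record a baseline digest; any later mismatch signals the file was altered.
    baseline = sha256_of("config.ini")  # hypothetical file
    if sha256_of("config.ini") != baseline:
        print("Integrity check failed: file was modified.")

A mismatch between the stored and recomputed digests is exactly the kind of unauthorized alteration the integrity principle guards against.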

 

CIA is one of the most important, if not the most important, sets of goals to preserve when assessing and planning security for a system. An aggressor will attempt to break or disrupt these goals when targeting a system.

 

Why is the CIA triad so important? Well, consider what could result if an investment firm or defense contractor suffered a disclosure incident at the hands of a malicious party. The results would be catastrophic, not to mention that they could put either organization at serious risk of civil and criminal actions. As a pentester, you will be working toward finding holes in the client’s environment that would disrupt the CIA triad and how it functions. Another way of looking at this is through the use of something I call the anti-CIA triad.

 

Improper Disclosure This is the inadvertent, accidental, or malicious disclosure of information or resources to an outside party. Simply put, if you are not someone who is supposed to have access to an object, you should never have access to it.

 

Unauthorized Alteration This is the counter to integrity, as it deals with unauthorized or otherwise improper modification of information. The modification can be corruption, accidental alteration, or malicious in nature.

 

Disruption (aka Loss) This means that access to information or resources has been lost when it otherwise should not have been. Essentially, information is useless if it is not there when it is needed. While information or other resources can never be 100 percent available, some organizations spend the time and money to achieve 99.999 percent uptime, which allows only about five minutes of downtime per year.
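The arithmetic behind that "five nines" figure is easy to check with a quick Python calculation:

    # Downtime permitted per year at a "five nines" availability level.
    availability = 0.99999
    minutes_per_year = 365 * 24 * 60            # 525,600 minutes
    downtime = (1 - availability) * minutes_per_year
    print(f"{downtime:.2f} minutes of downtime per year")  # ~5.26 minutes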

 

Appreciating the Evolution of Hacking


The role of the pentester tends to be one of the more misunderstood positions in the IT security industry. To understand the role of this individual, let’s first look back at the evolution of the hacker from which the pentester evolved.

 

Hacker has a double meaning within the technology industry in that it has been known to describe both software programmers and those who break into computers and networks uninvited. The former meaning tends to be the more positive of the two, with the latter being the more negative connotation.

 

The news media adds to the confusion by using the term liberally whenever a computer or other piece of technology is involved. Essentially the news media, movies, and TV consider anyone who alters technology or has a high level of knowledge to be a hacker.

 

The Role of the Internet


Hackers became more prolific and more dangerous not long after the Internet became available to the general public. At first, many of the attacks carried out on the Internet were mischievous in nature, such as the defacing of web pages. Later attacks became much more malicious.

 

In fact, attacks perpetrated since the year 2000 have become increasingly sophisticated and aggressive, as well as more publicized. One example from August 2014 is the massive data breach against Apple’s iCloud, which resulted in the public disclosure of hundreds of celebrity pictures in various intimate moments. Unfortunately for those affected, Apple’s terms and conditions for customers using iCloud prevent Apple from being held accountable for data breaches and other issues.

 

This breach has so far resulted in lawsuits by many of those who had their pictures stolen as well as a lot of negative publicity for Apple. The photos that were stolen as a result of this breach can be found all over the Internet and have spread like wildfire much to the chagrin of those in the photos.

 

Another example of the harm malicious hackers have caused is the Home Depot data breach of September 2014, which was responsible for the disclosure of an estimated 56 million credit card accounts. This single breach took place less than a year after the much-publicized Target data breach, which itself was responsible for 40 million customer accounts being compromised.

 

A final example comes from information provided by the U.S. government in March 2016. It was revealed that in the 18-month period ending in March 2015 there were a reported 316 cybersecurity incidents of varying levels of seriousness against the Obamacare website. This website is used by millions of Americans to search for and acquire health care coverage and serves all but 12 states and Washington, DC.

 

While the extensive analysis of the incidents did not reveal any theft of personal information such as Social Security numbers or home addresses, it did show that the site is likely considered a valid target for stealing this information. Somewhat disconcerting is the fact that there are thought to be numerous other serious issues, such as unpatched and poorly integrated systems.

 

All of these attacks are examples of the types of malicious attacks that are occurring and of how the general public is victimized in such attacks. Many factors have contributed to the increase in hacking and cybercrime, with the amount of data available on the Internet and the spread of new technology and gadgets being two of the leading causes.

 

Since the year 2000, more and more portable devices have appeared on the market with increasing amounts of power and functionality. Devices such as smartphones, tablets, wearable computing, and similar items have become very open and networkable, allowing for the easy sharing of information.

 

I could also point to the number of Internet-connected devices such as smartphones, tablets, and other gadgets that individuals carry around in increasing numbers. Each of these examples has attracted the attention of criminals, many of whom have the intention of stealing money, data, and other resources.

 

Many of the attacks that have taken place over the last decade have been perpetrated not by the curious hackers of the past but rather by other groups. The groups that have entered the picture include those who are politically motivated, activist groups, and criminals. While there are still plenty of cases of cyber attacks being carried out by the curious or by pranksters, the attacks that tend to get reported and have the greatest impact are these more maliciously motivated ones.

 

The Hacker Hall of Fame (or Shame)


Many hackers and criminals have chosen to stay hidden behind aliases or, in many cases, have never been caught, but that doesn’t mean there haven’t been some noticeable faces and incidents. Here’s a look at some famous hacks over time:

 

In 2005, Cameron Lacroix hacked into the phone of celebrity Paris Hilton and also participated in an attack against the site LexisNexis, an online public record aggregator, ultimately exposing thousands of personal records.

 

In 2009, Kristina Vladimirovna Svechinskaya, a young Russian hacker, got involved in several plots to defraud some of the largest banks in the United States and Great Britain. She used a Trojan horse to attack and open thousands of bank accounts in Bank of America, through which she was able to skim around $3 million in total.

 

In an interesting footnote to this story, Ms. Svechinskaya was named World’s Sexiest Hacker at one point due to her good looks. I mention this point to illustrate the fact that the image of a hacker living in a basement, being socially awkward, or being really nerdy looking is gone. In this case, the hacker in question was not only very skilled and dangerous, but she also did not fit the stereotype of what a hacker looks like.

 

From 2010 through the present day, the hacking group Anonymous has attacked multiple targets, including local government networks, news agencies, and others. The group is still active and has committed several other high-profile attacks, including the recent targeting of individuals such as Donald Trump and his 2016 presidential campaign.

 

While many attacks and the hackers who perpetrate them make the news in some way, shape, or form, many don’t. In fact, many high-value, complicated, and dangerous attacks occur on a regular basis and are never reported or, even worse, never detected. Of the attacks that are detected, only a small number of hackers ever see the inside of a courtroom, much less a prison cell. Caught or not, however, hacking is still a crime and can be prosecuted under an ever-developing body of laws.

 

Recognizing How Hacking Is Categorized Under the Law


Over the past two decades, crimes associated with hacking have evolved tremendously, but these are some broad categories of cybercrime:

Identity Theft This is the stealing of information that would allow someone to assume the identity of another party for illegal purposes. Typically this type of activity is done for financial gain, such as opening credit card or bank accounts, or in extreme cases to commit other crimes such as obtaining rental properties or other services.

 

Theft of Service Examples are the use of phone, Internet, or similar services without express or implied permission. Crimes or acts that fall under this category include stealing passwords and exploiting vulnerabilities in a system. Interestingly enough, in some situations just the theft of an item such as a password is enough to have committed a crime of this sort. In some states, sharing an account on services such as Netflix with friends and family members can be considered theft of service and can be prosecuted.

 

Network Intrusions or Unauthorized Access This is one of the oldest and more common types of attacks. It is not unheard of for this type of attack to lead to other attacks, such as identity theft, theft of service, or any one of countless other possibilities. In theory, any access to a network that one has not been granted is enough to be considered a network intrusion; this would include using a Wi-Fi network or even logging into a guest account without permission.

 

Posting and/or Transmitting Illegal Material This has gotten to be a difficult problem to solve and deal with over the last decade. Material that is considered illegal to distribute includes copyrighted materials, pirated software, and child pornography, to name a few. The accessibility of technologies such as encryption, file sharing services, and ways to keep oneself anonymous has made these activities hard to stop.

 

Fraud This is the deception of another party or parties to elicit information or access typically for financial gain or to cause damage.

Embezzlement This is one form of financial fraud that involves theft or redirection of funds as a result of violating a position of trust. The task has been made easier through the use of modern technology.

 

Dumpster Diving This is the oldest and simplest way to gather material that has been discarded or left in unsecured or unguarded receptacles. Often, discarded data can be pieced together to reconstruct sensitive information. While going through trash itself is not illegal, going through trash on private property is, and it could be prosecuted under trespassing laws as well as other portions of the law.

 

Writing Malicious Code This refers to items such as viruses, worms, spyware, adware, rootkits, and other types of malware. Essentially this crime covers a type of software deliberately written to wreak havoc and destruction or disruption.

 

Unauthorized Destruction or Alteration of Information This covers the modifying, destroying, or tampering with information without appropriate permission.


Denial-of-Service (DoS) and Distributed Denial-of-Service (DDoS) Attacks These are both ways to overload a system’s resources so it cannot provide the required services to legitimate users. While the goals are the same, the terms DoS and DDoS describe two different forms of the attack: DoS attacks are small-scale, one-on-one attacks, whereas DDoS attacks are much larger in scale, with thousands of systems attacking a target.

 

Cyberstalking This is a relatively new crime on this list. The attacker in this type of crime uses online resources and other means to gather information about an individual and uses it to track the person and, in some cases, try to meet the person in real life.

 

While some states, such as California, have put laws in place against stalking, which also covers crimes of the cyber variety, they are far from being universal. In many cases, when the stalker crosses state lines during the commission of their crime, it becomes a question of which state or jurisdiction can prosecute.

 

Cyberbullying This is much like cyberstalking except in this activity individuals use technologies such as social media and other techniques to harass a victim. While this type of crime may not seem like a big deal, it has been known to cause some individuals to commit suicide as a result of being bullied.

 

Cyberterrorism This, unfortunately, is a reality in today’s world, as hostile parties have realized that conventional warfare does not give them the same power as waging a battle in cyberspace. It is worth noting that a perpetrator conducting terrorism through cyberspace runs the very real risk of being extradited to the targeted country.

 

To help understand the nature of cybercrime, it is first important to understand the three core forces that must be present for a crime, any crime, to be committed. These three items are:

 

  • Means, or the ability to carry out the goal, which in essence means having the skills and abilities needed to complete the job
  • Motive, or the reason for pursuing the given goal
  • Opportunity, or the opening or weakness needed to carry out the threat at a given time

 

As we will explore in this blog, many of these attack types started very simply but rapidly moved to more and more advanced forms. Attackers have quickly upgraded their methods and adopted more advanced strategies, making their attacks much more effective than in the past. While they already knew how to harass and irritate the public, they have also caused ever bolder disruptions by preying on our “connected” lifestyle.

 

Attacks mentioned here will only increase as newer technologies such as smartphones and social networking integrate even more into our daily lives. The large volumes of information gathered, tracked, and processed by these devices and technologies are staggering. It is estimated by some sources that information on location, app usage, web browsing, and other data is collected on most individuals every three minutes. With this amount of information being collected, it is easy to envision scenarios where abuse could occur.

 

What has been behind a lot of the attacks of the past decade or more is greed. Hackers have realized that their skills are no longer just an outlet for curiosity; they are something that can be used for monetary gain. A common example is the malware that has appeared over this time period.

 

Not only can malware infect a system, but in many cases it has been used to generate revenue for its creators. For example, malware can redirect a user’s browser to a specific site with the purpose of making the user click or view ads.

 

Outlining the Pen Testing Methodology


This section explains the methodology you’ll use to conduct your penetration test. Typically, the process kicks off with some planning, such as determining why the test is necessary and choosing the type of test. Once this planning is completed, you’ll get permission in written form, and the test can then proceed; it usually starts with gathering information that can be used for later network scanning and more aggressive actions.

 

Once all the penetration testing is complete and information about vulnerabilities and exploits has been obtained, you create a risk mitigation plan (RMP). The RMP should clearly document all the actions that took place, including the results, interpretations, and recommendations where appropriate. Finally, you’ll need to clean up any changes made during the test.

 

In this blog, you’ll learn to:

  • Determine why the test is necessary
  • Choose the type of test to perform
  • Get permission and create a contract
  • Follow the law while testing

Determining the Objective and Scope of the Job

 

We’ve all heard the saying that you have to plan for success; well, the same is true with any penetration test you are tasked with performing. To ensure success, you’ll need to do a great deal of planning.

 

You and the client will have a kickoff meeting to discuss the course of the test. The meeting will cover a lot of different issues; specifically, look for information relating to the scope, the objective, and the parties involved, as well as other concerns. Before the meeting is finished, you must have a clear idea of the objective of the test because, without that, the test cannot be effective, and it would be difficult if not impossible to determine whether a satisfactory outcome has been reached.

 

The test should ultimately be focused on uncovering and determining the extent of vulnerabilities on the target network. In addition, the scope should determine what is and isn’t included in the test. Essentially, you are looking to establish boundaries that you will be required to stay within. The scope must also be tangible with actual success criteria factored into it.

 

These are some other questions to ask:

  • Why is a penetration test necessary?
  • What is the function or mission of the organization to be tested?
  • What will be the constraints or rules of engagement for the test?
  • What data and services will be included as part of the test?
  • Who is the data owner?
  • What results are expected at the conclusion of the test?
  • What will be done with the results when presented?
  • What is the budget?
  • What are the expected costs?
  • What resources will be made available?
  • What actions will be allowed as part of the test?
  • When will the test be performed?
  • Will insiders be notified?
  • Will the test be performed as a black or white box?
  • What conditions will determine the success of the test?
  • Who will be the emergency contacts?

 

You should also consider if any of the following attacks need to be performed to obtain the results that a client is seeking. (Make sure that the client approves each category of attack and what it includes.)

 

Social Engineering The weakest security element in just about any system is the human element. Technology is able to assist and strengthen the human component, but still, a large number of the weaknesses present here must be addressed through training and practice, which is sorely lacking in many cases. Testing the security of an organization via its human element should be something that is considered for every potential penetration test.

 

Application Security Testing This form of test focuses specifically on locating and identifying the nature of flaws in software applications. It can be performed as an independent test or as part of a complete testing suite. It may be requested in situations where custom applications or environments exist and closer scrutiny is needed.

 

Physical Penetration Test Strong physical security methods are applied to protect sensitive data; this is especially common in military and government facilities. All physical network devices and access points are tested for the possibility of any security breach. This test may seek to gather information from devices or other assets that are unsecured, and it can be considered part of a social engineering test in some cases.

 

It should come as no surprise that determining the goals of the test is one of the more difficult items to nail down. Many clients will look to you, as the penetration tester, to help them arrive at a goal for the exercise. Conduct interviews with your clients; the kickoff meeting is your chance to do so. Put your answers in clear, understandable language when clarifying the objectives. Make sure the meeting isn’t over until you have a good understanding of the goals of the test.

 

It is highly recommended that, before entering into this type of meeting with a client, you have a checklist of prepared questions and an agenda in order to make sure that all issues are addressed and time is not lost.

 

Another item to discuss and refine during the meeting is the timing and overall duration of the test. This is an extremely important detail because some clients may want you to conduct the test only during specific hours to avoid disruptions to their infrastructure and business processes.

 

This need will have to be balanced against the need to evaluate an organization as it works or is under stress, something that an after-hours test will not provide. No organization of any sort is willing to have its operations affected as the result of a penetration test, so performing aggressive tests such as a denial-of-service attack may be frowned upon. In short, be aware of any limitations, and if you need to deviate from them, check with the client.

 

Another choice that will need to be made during this meeting is who will and won’t be informed about the test. Some part of the staff will always be aware of the test, both to verify that you are supporting the goals of the organization and to provide support in the event you are confronted by those who don’t know about the test. Informing too many of the staff, however, can skew the results, because personnel will adjust their work habits, either consciously or unconsciously, when they know a test is ongoing.

 

Choosing the Type of Test to Perform


A penetration test is considered part of a normal IT security risk management process that may be driven by internal or external requirements as the individual situation merits.

 

Whether it is part of an internal or external risk assessment, it is important to remember that a penetration test is only one component in evaluating an environment’s security, though it is frequently the most important part because it can provide real evidence of security problems. Still, the test should be part of a comprehensive review of the security of the organization.

 

The following are the items that are expected to be tested during a penetration test:

  • Applications
  • IT infrastructure
  • Network devices
  • Communication links
  • Physical security and measures
  • Psychological issues
  • Policy issues

In many cases, a penetration test represents the most aggressive type of test that can be performed in an organization. Whereas other tests yield information about the strengths and weaknesses of an organization, only a penetration test runs the real risk of causing a disruption to a production environment.

 

Clients quite frequently do not fully grasp that even though the pen test is being done by a benevolent party, it still involves some level of risk, including actually crashing systems or causing damage. Make sure the client is always aware of the potential risks to their business, and make sure they have made backups and put other measures in place in case a catastrophic failure occurs.

 

When a penetration test is performed, it typically takes one of the following forms:

 

Black-Box Testing Black-box testing is the type of test that most closely resembles an outside attack; it is sometimes known as an external test. To perform this type of test, you will execute the test from a remote location, much like a real attacker. You will be extremely limited in your information and will typically have only the name of a company to go on, with little else.

 

By using many of the techniques mentioned in this blog, you will gain an increasing amount of information about the target in preparation for your eventual penetration into the company. Along the way, you will log and track the vulnerabilities on a system and report these to the client in the test documentation.

 

You will also attempt to use your knowledge to quantify the impact any loss would have to an organization. Once the test process is completed, a report is generated with all the necessary information regarding the target security assessment, categorizing and translating the identified risks into business context.

 

Gray-Box Testing In this type of test, you are given limited knowledge that may amount to all the information in a black-box test plus additional details such as the operating system in use. It is not unheard of for this type of test to provide you with information on some critical, but untouchable, resources ahead of time.

 

The idea with this practice is that if you have knowledge of some key resources ahead of time, you can look for and target them. However, once one of these targets is found, you are told to stop the test and report your findings to the client.

 

White-Box Testing A white-box test gives the testing party full knowledge of the structure and makeup of the target environment; hence, this type of test is also sometimes known as an internal test. This type of test allows for closer and more in-depth analysis than a black or gray box would.

 

White-box tests are commonly performed by internal teams or personnel within an organization as a means to quickly detect problems and fix them before an external party locates and exploits them. The time and cost required to find and resolve security vulnerabilities are comparably lower than with the black-box approach.

 

Gaining Permission via a Contract


Remember that one of the key tenets of performing a penetration test on an organization is to get clear and unambiguous permission to conduct the test. Although getting sponsorship and such to perform the test is important, it is vital to have permission documented. Get the person authorizing the test to sign off on the project and the plan, and have their contact information on hand just in case. Without such authorization, the test can run into one of many snags, including a claim that the test was never authorized.

 

What form can this authorization take? Well, a verbal authorization is not desirable, but other forms are acceptable. If you are an outside contractor, a signed contract is enough to convey and enforce permission for the action. Internal tests can be justified with an email, signed paperwork, or both.

 

Without this paperwork or permission in place, it would be unwise to proceed. The permission not only gives you the authorization to conduct the test but also serves as your “Get out of Jail Free” card if you are challenged as to whether you should be testing.

 

Don’t underestimate the importance of having permission to do a test as well as having it in writing. Charges have been filed and successfully pursued against those who have not had such permission or documentation. After the initial meeting is conducted, a contract will be generated outlining the objectives and parameters of the test. The following are some of the items that may be included:

 

Systems to Be Evaluated or Targets of Evaluation (TOE) You will work together with the client to determine which systems require evaluation during the penetration test. These can be any systems that are considered to be of value to the organization or that need to be tested for compliance reasons.

 

Perceived Risks In any penetration test, something can and will happen that is not planned. During testing, despite your best-laid plans and preparations, the unexpected will occur; by informing the client ahead of time, you decrease the surprise of downtime and allow preparations to be made to lessen any impact.

 

Timeframe Set a realistic timeframe during which the tests are to be conducted. Ensure that enough time is allocated to perform the test, check and verify the results, and catch any problems. Scheduling the test also includes choosing the times of day and days of the week on which to perform it, because results and responses to an attack will vary depending on when it takes place.

 

Systems Knowledge Remember, you don’t necessarily need to have extensive knowledge of every system you are testing, but you should at least possess some basic level of understanding of the environment. This basic understanding helps protect you and the tested systems. Understanding the systems you’re testing shouldn’t be difficult if you’re testing internal systems.

 

Actions to Be Performed When a Serious Problem Is Discovered Don’t stop after you find one security hole. Keep going to see what else may possibly be discovered. Although you shouldn’t continue until all the systems have been disabled and/or crashed, you should pursue testing until you have exhausted your options.

 

If you haven’t found any vulnerabilities, you haven’t looked hard enough. If you uncover something big, you must share that information with the key players as soon as possible so the hole can be plugged before it is exploited. Also, ask the client to define the criteria for a “wake-up call”: if your team finds something that poses a grave threat to the network, it must stop the test and notify the client right away. This will prevent the team from stopping at every vulnerability it finds and guessing whether to continue or to contact the client.

 

Deliverables This includes vulnerability scanner reports and a higher-level report outlining the important vulnerabilities to address, along with countermeasures to implement.

 

As a rule of thumb, include any information in the contract that clarifies expectations, rules, responsibilities, and deliverables. The more clarifying information you include in a contract, the better for you and your client, as doing so eliminates confusion later on.

 

Gathering Intelligence


After a plan is in place and proper preparation has been completed, the information gathering process can begin. This phase represents the start of the actual test, even though you will not be engaging your target directly as of yet. At this step, a wealth of information can be obtained.

 

Sometimes this step is known as footprinting instead of reconnaissance or information gathering. All these terms are correct. In any case, the process is intended to be methodical. A careless or haphazard process of collecting information in this step can waste time later or, in a worst-case scenario, can cause the attack to fail outright. A smart and careful tester will spend a good amount of time in this phase gathering and confirming information.

 

How do you gain information? Well, there is an endless sea of resources available to do this, and it is up to you to determine which are useful and which are less so. Look for tools that can gain information that will help you build a picture of a target that will allow you to refine later attacks. Information can come from anywhere, including search engines, financial disclosures, websites, job sites, and even social engineering (I’ll define all of these methods a little later, so don’t worry).

 

What you want to have when leaving this phase is a comprehensive list of information that can be exploited later. To give you some idea of what information is available, look at the following list:

 

Public Information Collect all the information that may be publicly available about a target, such as host and network information, from places like job boards.

 

Sector-Specific Commonalities Ascertain the operating system or systems in use in a particular environment, including web servers and web application data where possible.

 

DNS Information Perform Whois, DNS, network, and organizational queries.

 

Common Industry System Weaknesses Locate existing or potential vulnerabilities or exploits in the current infrastructure that may be conducive to launching later attacks.
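As a small taste of what this phase looks like in practice, here is a minimal Python sketch of basic lookups. The target name is a hypothetical placeholder, and the whois call assumes the system whois client is installed; only run such lookups against organizations you are authorized to test.

    import socket
    import subprocess

    target = "example.com"  # hypothetical target

    # Forward DNS lookup: hostname -> IP address
    ip = socket.gethostbyname(target)
    print(f"{target} resolves to {ip}")

    # Reverse DNS lookup: IP -> hostname (fails if no PTR record exists)
    try:
        print(socket.gethostbyaddr(ip)[0])
    except socket.herror:
        print("No PTR record found")

    # Registration data via the system whois client, if available
    result = subprocess.run(["whois", target], capture_output=True, text=True)
    print(result.stdout[:500])  # first part of the registration record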

 

A tip I give to those coming into the field of ethical hacking and pen testing is to try to think “outside the lines” that they may have been traditionally taught.

 

When acquiring a new piece of technology, try to think of new ways that it could be used. For example, could you wipe a device and install Linux on it? Could you circumvent safety mechanisms on the device to force it to allow the installation and configuration of additional software and hardware? Try to train yourself to think like someone who is trying to cause harm or get away with something. As a penetration tester, you will be expected to think like a bad guy but act in a benevolent manner.

 

Scanning and Enumeration

Once you have gathered information about your target, it is time to move on to the next step: scanning and enumeration. While you hope that you have gathered a good amount of useful information, you may find that what you have is lacking. If that’s the case, you may have to go back and dig a little more for information.

 

Or you may also decide that instead of going back to fill in gaps in your knowledge you want to continue with the scanning process. You will find yourself developing an eye for things as you practice your skills and gain more experience.

 

Scanning includes ping sweeping, port scanning, and vulnerability scanning. Enumeration is the process of extracting meaningful information from the openings and information you found during scanning, such as usernames, share data, group information, and much more.
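To illustrate the port-scanning piece, here is a minimal TCP connect scanner sketched in Python; the host and port range are hypothetical, and again, scan only hosts you have written permission to test.

    import socket

    def scan(host: str, ports: range, timeout: float = 0.5) -> list:
        """Attempt a TCP connect() to each port; open ports accept the connection."""
        open_ports = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                    open_ports.append(port)
        return open_ports

    # Scan the well-known port range on a host you are authorized to test.
    print(scan("127.0.0.1", range(1, 1025)))

Real engagements typically rely on a dedicated scanner such as Nmap, but the underlying idea is the same: an accepted connection reveals a listening service.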

 

Penetrating the Target


Once a target has been scanned and openings and vulnerabilities determined, the actual penetration of the target can proceed. This step is done to exploit the weaknesses found in the system with the intention of compromising the system and gaining some level of access.

 

You should expect to take the results from the previous step of gathering intelligence to carefully identify a suitable target for penetration. Keep in mind that during the previous step a good number of vulnerable systems may be uncovered, so the challenge is now to locate a system or systems that can be exploited or are valuable targets.

 

For example, when scanning a network, you may locate 100 systems, with four being servers and the rest desktop systems. Although the desktop systems may be interesting targets, you will probably focus your attention, at least initially, on the servers, with the desktops a possible secondary target.

 

After selecting suitable or likely targets, you will attempt to use your skills and knowledge to break into them. Many different attacks may be tried before one is actually successful, if one is successful at all. Remember that scanning and assessing a system as having a vulnerability in no way means it is actually capable of being exploited. You should consider which types of attacks may be successful, and in what order you will attempt them, before actually employing them against a target.

 

Attacks that may appear during this phase can include the following (a minimal sketch of one of these, password cracking, appears after the list):

  • Password cracking
  • Traffic sniffing
  • Session hijacking
  • Brute-force attacks
  • Man-in-the-middle attacks
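Password cracking in its simplest offline form is a dictionary attack: hash each candidate word and compare it against a captured hash. Here is a minimal Python sketch; the hash and wordlist are hypothetical illustrations (the hash shown is simply the MD5 of the word "password").

    import hashlib

    captured = "5f4dcc3b5aa765d61d8327deb882cf99"  # MD5 of "password"
    wordlist = ["letmein", "123456", "password", "qwerty"]

    for candidate in wordlist:
        # Hash each candidate and compare against the captured hash.
        if hashlib.md5(candidate.encode()).hexdigest() == captured:
            print(f"Match found: {candidate}")
            break
    else:
        print("No match in wordlist")

Real password crackers add salts, rule-based mutations, and GPU acceleration, but this comparison loop is the core idea.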

 

These attacks are covered in this blog, so you will have some familiarity with each and how to use them. Be aware, however, that there are many potential attacks and tricks that can be performed, many of which you will learn over your career and as experience grows.

 

Automated tools can be used to identify many of the more common, well-known weaknesses that may be present in an environment. These tools typically receive regular updates that ensure the latest weaknesses are caught.

 

Here’s how to select a good penetration tool:

  • It should be easy to deploy, configure, and use.
  • It should scan your systems easily.
  • It should categorize vulnerabilities based on severity so that those needing immediate fixes stand out.
  • It should be able to automate verification of vulnerabilities.
  • It should re-verify exploits found previously.
  • It should generate detailed vulnerability reports and logs.

 

However, automated tools present some limitations, such as producing false positives and missing known weaknesses. They can also be loud on the network and even provide a false sense of confidence in the results.

 

Since automated tools cannot locate every potential weakness, the need for manual testing becomes apparent. A human, with the right training and knowledge, can locate a wide range of weaknesses that may not be located through automated means. However, the downside of performing the test manually is that it is time-consuming and it is just not possible for a human being to check every potential vulnerability in a reasonable amount of time.

 

So what is the best approach? Well, the best approach for many penetration testers is to combine the two into a hybrid approach. The automated tests can be used to look for vulnerabilities, and the manual ones can focus on specific issues and do further investigation on specific weaknesses.

Some other actions that happen after breaking into a system are maintaining some sort of access and covering your tracks.

 

Maintaining Access


Maintaining access is a step that is used to preserve the opening that you have made on a system as a result of gaining access. This step assumes that you will want to continue going further with the attack or come back later to perform additional actions. Remember that the owner of the targeted system is, or at least should be, attempting to stop or prevent your access to the system and as such will try to terminate your access. 

 

Covering Your Tracks

Covering your tracks is also an important part of this step because it helps conceal evidence of your actions and will ward off detection and removal actions on the part of the system owner. The less evidence you leave behind or the more you conceal it, the harder it will be for the defending party to thwart your actions.

 

Documenting the Findings of the Test

After conducting all the previous tasks, the next step is to generate a report for the client. This document is called your risk mitigation plan. Although the report can take many different forms depending on the specific situation and client needs, there are some essential pieces of information and a format that you can follow.

 

The report should start with a brief overview of the penetration testing process. This overview should seek to neatly encapsulate what occurred during the test without going into too many technical details. This section will be followed by an analysis of what vulnerabilities were uncovered during the test.

 

Vulnerabilities should be organized in a way that draws attention to their respective severity levels, such as critical, important, or low. The better you separate the vulnerabilities, the easier it will be for the client to determine where to dedicate time and effort toward addressing each one.
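One simple way to keep that ordering consistent across a report is to rank findings programmatically; here is a small Python sketch using hypothetical findings and severity labels:

    # Rank findings so critical items lead the report (hypothetical data).
    SEVERITY_RANK = {"critical": 0, "important": 1, "low": 2}

    findings = [
        {"id": "VULN-003", "title": "Default credentials on admin portal", "severity": "critical"},
        {"id": "VULN-007", "title": "Verbose server banner", "severity": "low"},
        {"id": "VULN-001", "title": "Missing OS patches", "severity": "important"},
    ]

    for f in sorted(findings, key=lambda f: SEVERITY_RANK[f["severity"]]):
        print(f"[{f['severity'].upper():9}] {f['id']}: {f['title']}")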

 

The other contents of the report should be as follows:

  • Summary of any successful penetration scenarios
  • Detailed listing of all information gathered during penetration testing
  • Detailed listing of all vulnerabilities found
  • Description of all vulnerabilities found
  • Suggestions and techniques to resolve the vulnerabilities found

 

I additionally try to separate my reports for clients into a less technical summary and report up front. I then attach the hard technical data as an appendix to the report for the client to review as needed.

 

In some cases, clients may request a certain format, either directly or indirectly, as a condition of the test. For example, for tests performed to satisfy Payment Card Industry (PCI) requirements, a report format that conforms to specific standards may be required.

 

The same might be said for requirements pertaining to HIPAA standards and others. Always ask your client if any specific format is needed or if it is up to your own discretion. To make the reporting and documentation process easier, I strongly recommend that during your process of penetration testing you make a concerted effort to maintain clear and consistent notes.

 

If this is not your forte, I strongly recommend you develop these skills, along with purchasing or developing a good reporting system (which we will discuss more fully elsewhere in this blog), to ease some of the load of this process. A lack of documentation not only makes things harder for you, but it can also leave conspicuous holes in your test data.

 

After all is said and done, there may be some degree of cleaning up to do as a result of the actions taken during the penetration test. You will want to go through all of the actions recorded in your documentation and double-check whether anything you performed needs to be undone or remediated.

 

You are seeking to make sure that no weakened or compromised hosts remain on the network that could adversely affect security. In addition, any actions you take to clean up the network or hosts should be verified by the organization’s own IT staff to ensure that they are satisfactory and correct.

 

Typical cleanup actions include removing malware from systems, removing test user accounts, restoring changed configurations, and fixing anything else that may have been altered or impacted during the test.

 

Exploring the Process According to EC-Council


There are many ways to perform the ethical hacking process, and another well-known one comes from EC-Council’s Certified Ethical Hacker credential. That process is arranged a little differently, but overall it is the same. I am documenting it here because I strongly feel that being aware of your options is essential to being successful as a pentester.

 

The following are the phases of the EC-Council process for your reference:

 

Footprinting This means that the attacking party is using primarily passive methods to gain information about a target prior to performing later, active methods. Typically, interaction with the target is kept to a minimum to avoid detection and alerting the target that something is coming their way. A number of methods are available to perform this task, including Whois queries, Google searches, job board searches, discussion groups, and other means.

 

Scanning In this second phase, the attacking party takes the information gleaned from the footprinting phase and uses it to target the attack much more precisely. The idea here is to act on the information from the prior phase so as not to blunder around without purpose, “bull in a china shop” style, and set off alarms. Scanning means performing tasks like ping sweeps, port scans, observations of facilities, and other similar activities.

 

Enumeration This is the next phase, where you extract much more detailed information about what you uncovered in the scanning phase to determine its usefulness. Think of the scanning phase as walking down a hallway and rattling the doorknobs, taking note of which ones turn and which ones do not.

 

Just because a door is unlocked doesn’t mean anything of use is behind it. In this phase, you are actually looking behind the door to see whether anything of value is there. Results of this step can include a list of usernames, groups, applications, banner settings, auditing information, and similar data.

 

System Hacking Following enumeration, you can now plan and execute an attack based on the information uncovered. You could, for example, start choosing user accounts to attack based on the ones uncovered in the enumeration phase. You could also start crafting an attack based on service information uncovered by retrieving banners from applications or services.
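Banner grabbing itself is simple enough to sketch in a few lines of Python. The host and port below are hypothetical placeholders; many services (FTP, SMTP, SSH) announce a version string as soon as you connect.

    import socket

    def grab_banner(host: str, port: int, timeout: float = 2.0) -> str:
        """Connect to a service and read whatever banner it volunteers."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            s.connect((host, port))
            # Note: some services (e.g., HTTP) wait for the client to speak
            # first; this sketch targets services that greet on connect.
            return s.recv(1024).decode(errors="replace").strip()

    # Example: an SSH server typically replies with something like "SSH-2.0-OpenSSH_..."
    print(grab_banner("127.0.0.1", 22))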

 

Escalation of Privilege If the hacking phase was successful, an attacker could then start to obtain privileges belonging to accounts more privileged than the one originally compromised. Executed by a skilled attacker, this can mean moving from a low-level account such as a guest account all the way up to administrator or system-level access.

 

Covering Tracks This is where the attacker makes all attempts to remove evidence of their being in a system. This includes purging, altering, or other actions involving log files; removing files; and destroying other evidence that might give away the valuable clues needed for the system owner to easily or otherwise determine an attack happened.

 

Think of it this way: if someone were to pick a lock to get into your house rather than throwing a brick through the window, the clues would be far more subtle. By the time you noticed what the visitor took, the trail would probably have gone cold.

 

Maintaining Access Planting backdoors means that you, as the attacker, leave something behind that would enable you to come back later if you wanted. Items such as special accounts and Trojans come to mind, along with many others. In essence, you do this to retain the gains you made earlier in the process in the event you want to make another visit later.

 

Following the Law While Testing


You need to also be familiar with the law and how it affects the actions you will undertake. Ignorance or lack of understanding of the law is not only a bad idea, but it can quickly put you out of business or even in prison. In fact, under some situations, the crime may even be enough to get you prosecuted in several jurisdictions in different states, counties, or even countries due to the highly distributed nature of the Internet.

 

Therefore, you need to always ensure that the utmost care and concern is exercised at all times to ensure that the proper safety is observed to avoid legal issues. The following is a summary of laws, regulations, and directives that you should have a basic knowledge of:

 

1974 U.S. Privacy Act This governs the handling of personal information by the U.S. government.

1984 U.S. Medical Computer Crime Act This addresses illegally accessing or altering medication data.

 

1986 (Amended in 1996) U.S. Computer Fraud and Abuse Act This includes issues such as altering, damaging, or destroying information in a federal computer and trafficking in computer passwords if it affects interstate or foreign commerce or permits unauthorized access to government computers.

 

1986 U.S. Electronic Communications Privacy Act This prohibits eavesdropping or the interception of message contents without distinguishing between private and public systems.

 

1994 U.S. Communications Assistance for Law Enforcement Act This requires all communications carriers to make wiretaps possible.

1996 U.S. Kennedy–Kassebaum Health Insurance Portability and Accountability Act (HIPAA) (Additional requirements were added in December 2000.) This addresses the issues of personal healthcare information privacy and health plan portability in the United States.

 

1996 U.S. National Information Infrastructure Protection Act This was enacted in October 1996 as part of Public Law 104-294; it amended the Computer Fraud and Abuse Act, which is codified in 18 U.S.C. § 1030. This act addresses the protection of the confidentiality, integrity, and availability of data and systems. The act is intended to encourage other countries to adopt a similar framework, thus creating a more uniform approach to addressing computer crime in the existing global information infrastructure.

 

Sarbanes–Oxley (SOX) In the shadow of corporate scandals such as the ones that brought down Enron and MCI WorldCom, new federal standards were put forth in the United States to combat such fraud, in the form of SOX.

 

Federal Information Security Management Act (FISMA) This law, passed in the United States, requires each federal agency to create, document, and implement information security policies.

 

Hardening a Host System


The computer systems of an organization are vital to its ability to function. Systems typically perform tasks such as processing data, hosting services, and hosting or storing data.

 

As you know, these systems are also tempting targets for attackers. Being aware of the threats and vulnerabilities that could weaken an organization is important and is one of the main motivations for being a pentester, but knowing how to be proactive and deal with issues before an attack is also important.

 

We all know that stopping a problem before it starts can save a tremendous amount of work. This is where the process of hardening begins. The process is ongoing as threats change and so do vulnerabilities, meaning that the organization must adapt accordingly. The process will have several phases consisting of various assessments, reassessments, and remediation as necessary.

 

In this blog, you’ll learn to:

  • Understand why a system should be hardened
  • Understand defense in depth, implicit deny, and least privilege
  • Use Microsoft Baseline Security Analyzer
  • Harden your desktop
  • Back up your system

 

Introduction to Hardening


While it is true that most system, hardware, and software vendors offer a number of built-in security features in their respective products, these features do not offer total protection. The security features present in any system can limit access only in a one-size-fits-all approach, meaning that they don’t take specific situations into account.

 

As a pentester, you should recognize that computer systems are still rife with vulnerabilities that can be exploited. Mitigating this situation requires a process known as system hardening, which is intended to lower the risks and minimize the security vulnerabilities as much as possible. The process can be undertaken by IT staff or even pentesters if so contracted.

 

System hardening is a process designed to secure a system as much as possible through the elimination of security risks. The process typically involves defining the role of a system (e.g., web server or desktop) and then removing anything that is not required to perform that role.

 

If this process is strictly enforced and adhered to, the system will have all nonessential software packages removed and other features disabled in order to reduce the attack surface. This will decrease the number of vulnerabilities as well as the number of potential backdoors.

 

Note the step of defining a system role; this is absolutely essential before going further into hardening a system, because it is impossible to effectively remove nonessential services until you know what is essential.

 

If this process is taken to a serious level, more extreme measures can be taken, including the following:

  • Reformatting and wiping a hard drive before reinstalling the operating system
  • Changing the boot order in BIOS so the system does not boot from removable devices
  • Setting a BIOS password
  • Patching the operating system
  • Patching applications
  • Removing or disabling user accounts that are not used
  • Setting strong passwords
  • Removing unnecessary network protocols
  • Removing default shares
  • Disabling default services

 

The steps involved in hardening are very much a moving target, with the process varying widely from company to company. This is why securing a system requires a high level of knowledge of how the system works, what features are available, and where its vulnerabilities lie.

 

Of course, system administrators should always remember that there are many different computing systems and services running on any given network, but every device has an operating system, whether it is a mobile system, laptop, desktop, or server. On the technology side, increasing security at the operating system level is an important first step toward a more secure environment.

 

In fact, attackers are well aware that operating systems are the common denominator in all of the stated environments, and as such they are a good place to start an attack. That’s why the operating system represents a good place to start a defense.

 

In addition, operating systems are very complex, and all are subject to defects of all types, some of which can lead to security issues, no matter who the creator of the system may be. Some in the technology field believe that some systems are more secure than others and that’s “just the way things are.”

 

The reality is that any operating system can be made secure or less secure based on who uses it and how it is set up. Operating systems are quite frequently misconfigured or even mismanaged by those who use and support them, meaning that they are targets for attacks for this reason alone.

 

Three Tenets of Defense

The following are three ways to approach hardening a system.

 

Following a Defense-in-Depth Approach


Defense in depth is a powerful and essential concept in information security that describes the coordinated use of multiple, complementary security countermeasures to protect assets in an enterprise.

 

The strategy is based on the military or "castle strategy" principle: it is more difficult for an enemy to defeat a complex, multilayered defense system than to penetrate a single barrier. Think of a castle with all of its defenses, which usually include moats, walls, archers, catapults, and in some cases hot lead. Once attackers get past one layer of security, they have to contend with another.

 

Defense in depth serves as a way to reduce the probability that an attack will ultimately succeed. If an attacker goes after a system, the layers of defense typically stop the assault in one of three ways:

 

Providing Safeguards Against Failure If only a single security measure is in place, the danger if it fails is much more serious: were it to fail, even briefly, the system would be totally defenseless. For example, if a network is protected solely by a firewall and the firewall failed, then an attacker could access the network easily.

 

Slowing Down an Attacker If multiple defenses are in place, an attacker must successfully breach several countermeasures, and one of the purposes this serves is to give the defender time to detect and stop the attack.

 

Serving as an Imposing Obstacle While no defense will ever stop those who truly want to gain access to a system, multiple layers will serve as a deterrent to many. The truth is that there are fewer highly skilled hackers than there are script kiddies and beginners. Good defenses will serve as an imposing obstacle for many, meaning that a true attack will not happen in many cases.

 

Basically, never put all your eggs in one basket. Depending on a single security mechanism is a perfect recipe for disaster, because any technology or procedure can and will fail. And when that single mechanism fails, there will be no other security mechanism in place to protect the organization against attackers. Of course, the defensive layering must not go overboard; too many layers can make the system unmanageable.

 

Implementing Implicit Deny


One of the most important concepts in security is that of implicit deny. Simply put, implicit deny states that if an action is not explicitly allowed, then it is denied by default. To be secure, a party, whether a user or a piece of software, should be allowed to access only the data and resources, and perform only the actions, that have been explicitly granted to it. When implicit deny is implemented correctly, actions that have not been specifically called out will not be allowed.

 

Implicit deny is present in a number of situations, including many locations in software where it makes the difference between secure and insecure environments. One example of implicit deny is in firewalls, where the system is locked down and doesn’t allow any traffic whatsoever to pass until the system owner configures the system to allow specific traffic.
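
To make this concrete, here is a minimal sketch in Python of a firewall-style implicit deny; the rules and traffic are made up for illustration and are not taken from any real product:

    # Implicit deny: traffic passes only if a rule explicitly allows it.
    # The rule data here is hypothetical.
    ALLOW_RULES = [
        {"proto": "tcp", "port": 80},   # allow web traffic
        {"proto": "tcp", "port": 443},  # allow TLS traffic
    ]

    def is_allowed(proto, port):
        for rule in ALLOW_RULES:
            if rule["proto"] == proto and rule["port"] == port:
                return True
        return False  # no matching rule means denied by default

    print(is_allowed("tcp", 443))  # True: explicitly allowed
    print(is_allowed("udp", 53))   # False: never explicitly allowed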

 

In the real world, not every piece of hardware and software adheres to this rule, as good as it is. In many modern operating systems, the tendency is to make the system as usable as possible, which means that many actions are allowed by default. This can be thought of as implicit allow, since many actions are permitted that, for security reasons, should not be. It also means that many devices and software packages will need to be reconfigured so that they no longer allow every operation without question.

 

Why is this done? Simply put, if an operating system allows everything to occur without question, it is much more usable for the end user; in other words, it’s more convenient to use—at the expense of security, of course. What is the result of this policy of implicit allow? Many users install, configure, or do things to a system that they are not qualified to do or don’t understand and end up causing a security issue or incident within an organization.

 

Implementing Least Privilege


Another core element of a robust security program is least privilege. This concept dictates that the users of a system should only ever have the level of access necessary to carry out the tasks required to do their jobs. The concept can apply to access to facilities, hardware, data, software, personnel, or any number of elements. When implemented and enforced properly, a user or system is given only the level of access required to perform the necessary task, and no more.

 

At any point, any given program and every user of the system should operate using the least set of privileges necessary to complete the job, with no more, no less. When implemented as described, the principle limits the damage that can result from an accident or error. It also serves to reduce the number of potentially harmful interactions among privileged programs to the minimum needed for correct operation, so that unintentional, unwanted, or improper uses of privilege are much less likely to occur and cause harm.
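
As a small illustration, the following Python sketch expresses least privilege as a role-to-privilege mapping in which a request is granted only if the role explicitly needs that privilege; the role and privilege names are hypothetical:

    # Each role carries only the privileges its job requires (hypothetical).
    ROLE_PRIVILEGES = {
        "accounts_clerk":  {"read_ledger", "post_invoice"},
        "backup_operator": {"read_files", "write_backup_media"},
    }

    def authorize(role, privilege):
        # Anything not explicitly granted to the role is refused.
        return privilege in ROLE_PRIVILEGES.get(role, set())

    print(authorize("accounts_clerk", "post_invoice"))   # True
    print(authorize("accounts_clerk", "delete_ledger"))  # False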

 

If a question arises related to misuse of a privilege, the number of programs that must be audited is minimized. A related example of least privilege is "need-to-know," which describes the same type of arrangement found in military and defense-contractor environments.

 

In Windows 10 (actually starting with Windows Vista), many sensitive system operations display a colored shield icon next to them in the interface. This shield informs the observant user that the chosen operation requires elevated privileges and will therefore prompt for approval.

 

If a user is not logged in as an administrator, they have to provide credentials to prove they may perform the operation. If the user is logged in as an administrator, they are prompted to confirm that they requested the operation and, if so, to approve its continuing.

 

Least privilege is an effective defense against many types of attacks and accidents, but only if it is implemented and adhered to; otherwise it loses its effectiveness. Because least privilege can be time-consuming and potentially tedious to implement as well as maintain, a system admin could very easily become lazy and neglect to stick to the concept.

 

Consider the problems that could arise if a person changes positions or jobs within an organization; logically their responsibilities would change, which means their privileges should change accordingly. Note the phrase "only if it is implemented and adhered to"; this is perhaps the trickiest part.

 

In many companies least privilege procedures were implemented only to have a higher-up in the company get angry that they couldn’t do something they did before. Because the upset individual was high up in the company, they would be able to request/demand that the restriction be lifted.

 

Even though these individuals did not need the extra privileges, they got them. The end results in many cases would be lowered security or, much worse, a security incident.

 

A system admin needs to keep track of the necessary privileges so that a person doesn’t change job positions and end up with more privileges than they need, opening the door for an accident to cause substantial damage.

 

Creating a Security Baseline


One of the first steps in hardening a system is determining where the system needs to be security-wise in regard to its specific role. This is where a baseline comes in. A baseline provides a useful metric against which a system can be measured in regard to its expected and defined role.

 

Simply put, a security baseline is a detailed listing of the desired configuration settings that need to be applied to a particular system within the organization. Once a baseline is established, it becomes the benchmark against which a system will be compared.

 

Systems that are not found to meet or exceed the requirements specified in the baseline will either need to have remedial action taken to bring them into compliance or need to be removed from the environment (barring other actions as allowed by company security policy). When generating a baseline for any given system, the settings that are eventually arrived at will depend on the operating system that is in place on the system as well as its assigned role within the organization.
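
To illustrate how a baseline drives that comparison, here is a minimal Python sketch that checks a system's current settings against the desired values and reports deviations; the setting names and values are hypothetical:

    # Desired configuration for a given system role (hypothetical values).
    BASELINE = {
        "password_min_length": 12,
        "guest_account_enabled": False,
        "default_shares_removed": True,
    }

    def audit(current):
        # Return the settings that do not meet the baseline.
        return [name for name, wanted in BASELINE.items()
                if current.get(name) != wanted]

    system = {
        "password_min_length": 8,
        "guest_account_enabled": False,
        "default_shares_removed": False,
    }

    for finding in audit(system):
        print("Out of compliance:", finding)

Any system reporting findings would then be remediated or removed, as described above.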

 

Baselines do not stay static; they will change over time. Factors that will cause a baseline to change include operating system upgrades, changing roles, data processing requirements, and new hardware.

 

The first step in creating a baseline against which to measure a given system is to define the system role. On the surface, it may look as if only one or two baselines will be needed—with the knee-jerk response being that only one is needed for desktops and one for servers—but more are typically required.

 

Roles should be identified by examining the computing and data processing systems in an environment and identifying which have common requirements. These common requirements will collectively define a role to which a common set of configuration options can be applied.

 

For example, baselines should include the minimum software deployed to workstations, basic network configuration and access settings, and the latest service pack installed.

 

While it is true that in many organizations a common set of settings will be applied across all systems, there still will be identifiable groups that will have their own unique requirements. Typically an organization will define those settings that are common among all systems and then customize further by adding additional settings and configuration options to enhance as necessary.

 

Creating a security baseline is a daunting task at best, but many tools exist to make the process much easier to complete and much more efficient. Additionally, manufacturers of operating systems generally also publish guidance that can be used to fine-tune a system even further. Using a software tool can make it easier and quicker to scan for and detect a broad range of potential issues by automating the process. Some of the common tools for hardening systems and creating baselines include the following:

 

Bastille This Linux- or Unix-based tool is used to scan and harden a system so it is more secure than it would be otherwise. It is important to note, however, that Bastille has not been updated in some time, but it may still be used as a hardening tool in some cases.

 

Microsoft Baseline Security Analyzer (MBSA) This tool has been available from Microsoft for a long time and has evolved over the years. It is designed to scan a system and compare it against a list of commonly misconfigured settings and other issues.

 

Security Configuration Wizard (SCW) Originally introduced in Windows Server 2003, the SCW has become a useful tool for improving system security. The wizard guides you through the process of creating, editing, applying, or rolling back a security policy as customized by the system owner.

 

The Microsoft Baseline Security Analyzer (MBSA) is probably the most well-known tool. When this tool was originally released in 2004, it was quickly adopted by many in the IT and security fields as a quick and dirty way of assessing the security of systems by determining what was missing from the system and which configuration options were impacting security.

 

The tool is able to provide a reasonably basic, but thorough, assessment of Windows, SQL Server, and Office. During the assessment process, the tool will also scan its host system to determine which patches are missing and inform users of what they need to do to remedy the situation.

 

As opposed to many other tools on the market, the MBSA does not provide any ability to customize the scan beyond a few basic options. Essentially, the tool can scan a system using predefined groups of settings that Microsoft has determined to be the ones that most impact system security.

 

MBSA includes support for the following operating systems and applications:

  • Windows 2000 through Windows 10, as well as Server versions from 2000 through Windows Server 2012
  • Internet Information Server 5 through 8
  • Office 2000 through 2016
  • Internet Explorer 5 and higher

 

Additionally, MBSA supports both 32- and 64-bit platforms and can perform accurate security assessments on both platforms with context-sensitive assistance. MBSA is a useful tool, but care should be taken to avoid becoming too reliant on its output. While the tool provides a great foundation for performing assessments and saving results for later comparison, it is not an end-all, do-all solution.

 

The MBSA is available only for the Windows platform. Additionally, the tool is only capable of assessing a fixed portfolio of applications, and as such, any application that it is not hardcoded to check for will not be assessed.

 

You may see the phrase penetration test used interchangeably with the term security audit, but they are not the same thing. Penetration testers may be analyzing one service on a network resource. They usually operate from outside the firewall, with minimal inside information, in order to more realistically simulate the means by which a hacker would attack a target.

 

An audit is an assessment of how the organization’s security policy is employed and operating at a specific site. Computer security auditors work out in the open with the full knowledge of the organization, at times with considerable inside information, in order to understand the resources to be audited. Computer security auditors perform their work through personal interviews, vulnerability scans, examination of operating system settings, analyses of network shares, and historical data.

 

Hardening with Group Policy


Using tools to analyze and configure basic settings for a computer system is only the start of locking down a computer, as many more tools are available to provide security. One of the most popular is Group Policy in the Windows family of operating systems.

 

In its simplest form, Group Policy is nothing more than a centralized mechanism for configuring multiple systems at once. In the hands of a skilled administrator who is guided by proper planning and assessment, the technology can be used to configure just about every option on a system, including such items as

 

  • Whether or not a user can install devices
  • Whether or not a user can install software
  • What printers the user can connect to
  • What settings the user can change
  • Where patches may be downloaded from
  • How auditing is configured
  • Permissions on the registry
  • Restricted groups
  • Permissions on the filesystem

 

Group Policy in Windows Active Directory has more than 1,000 settings, but this in no way implies that every setting needs to be configured—indeed, no administrator should ever attempt to do so. Only those settings that are required to attain a certain level of security dictated by company policy should ever be configured.

 

Hardening Desktop Security


The home and business computer system at the desktop level is a popular and tempting target for attackers. Even beginner attackers know that the average computer user has a wealth of information stored there. Consider that the average home user accumulates a large amount of information on their drive year after year, frequently migrating it to new systems; the amount of information grows like a snowball rolling downhill.

 

The average user stores everything from bank information, credit card information, and photos to chat logs and many other items. With enough information in hand, an attacker could easily steal your identity and use your good name and credit to buy whatever they want. If it is a business computer, the stakes are different, if not higher, with company information ripe for the picking on a user's hard drive.

 

Intruders want a computer’s resources such as hard disk space, fast processors, and an Internet connection. They can use these resources to attack other targets on the Internet. In fact, the more computers an intruder uses, the harder it is for law enforcement to figure out where the attack is ultimately coming from. If intruders can’t be found, they can’t be stopped, and they can’t be prosecuted.

 

Why do intruders target desktops? Typically because they are the weak link: home computers are generally not very secure and are easy to break into. Corporate computers, on the other hand, may be a different story, but desktop systems are typically the softer targets to compromise and may provide a starting point for getting to juicier assets within the company.

 

When combined with high-speed Internet connections that are always on, intruders can quickly find and then attack home computers. While intruders also attack computers connected to the Internet through dial-in connections, high-speed connections are a favorite target.

 

How do intruders break into a computer? In some cases, they send an email with a virus. Reading that email will activate the virus, creating an opening through which intruders will enter or access the computer. In other cases, they take advantage of a flaw or weakness in one of a computer’s programs to gain access.

 

Once they’re on the computer, they often install new programs that let them continue to use the computer, even after the owner plugs the holes they used to get onto the computer in the first place. These backdoors are usually cleverly disguised so that they blend in with the other programs running on the computer.

 

Managing Patches


One of the ways to deal with vulnerabilities on a system is by patching and applying updates, and this is something that you should be prepared to recommend to a client. Just a few years ago the prevailing wisdom was to build a system from scratch, install all applications along with updates and patches during initial setup, deploy it, and then install additional updates infrequently or never.

 

From the year 2000 forward, this approach has largely changed: many organizations became victims of malware and other types of mischief, which prompted a reevaluation of the prevailing approach. The downtime and loss of production that could have been prevented through the application of regular patches was a huge reason for this shift.

 

Along with the increased threats, there has been increasing concern about governance and regulatory compliance (e.g., HIPAA, Sarbanes–Oxley, FISMA) to gain better control and oversight of information. Factor in the rise of increasingly interconnected partners and customers as well as higher speed connections, and the need for better patching and maintenance becomes even greater.

 

It is easy to see why proper patch management has become not just an important issue, but a critical issue as time has moved on.

The goal of a patch management program is to design and deploy a consistently configured environment that is secure against known vulnerabilities. Managing updates for all the software present in even a small organization is complicated, and it becomes more complex when additional platforms, availability requirements, and remote offices and workers are factored in.

 

Accordingly, as each environment has unique technology needs, successful patch management programs will vary dramatically in design and implementation. However, there are some issues that should be addressed and included in all patch management efforts.

 

Researching Information Sources

A critical component of patch management is the researching and verification of information. Every organization should designate a person or team to be in charge of keeping track of updates and security issues for the applications and operating systems in use. This team should take the lead in alerting administrators to security issues or updates for the applications and systems they support.

 

A comprehensive and accurate asset management system can help determine whether all existing systems are accounted for when researching and processing information on patches and updates.
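
As a minimal illustration of that cross-check, the following Python sketch compares an asset inventory against the hosts recorded as having received a patch; the hostnames and patch identifier are made up:

    # All known assets, taken from the asset management system (hypothetical).
    inventory = {"web01", "db01", "hr-laptop-07"}

    # Hosts recorded as having received a given patch (hypothetical).
    patched = {"PATCH-2024-001": {"web01", "db01"}}

    for patch_id, done in patched.items():
        missing = inventory - done
        if missing:
            print(patch_id, "still needed on:", sorted(missing))

Without an accurate inventory, a host like the laptop in this example would simply never show up as missing a patch.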

 

Scheduling and Prioritizing Patches


When developing a patch management process, you should consider several factors in order to put the most effective and optimized process possible in place. The more research and time you invest in developing your patch management process, the more likely it is to stop, or at least blunt, the impact of various security threats and vulnerabilities as they appear, or even before they become a problem.

 

The first factor to consider is that a patch management process needs to guide and shape the management and application of patches and updates to the systems in any given environment. Generally, at the most basic level, you need to have a patch management process that is concerned only with the normal task of applying patches and updates as they become available to ensure that regular maintenance is done and not overlooked.

 

You never want to have a situation where patches are getting applied or updates are getting applied only in response to a problem or threat. Essentially what you are trying to avoid is having a reactive process as much as possible and be on a proactive footing instead. How often the process of applying patches takes place as part of normal maintenance is something that each organization will need to consider for themselves; for example, some organizations might decide to have patches applied every month and so may decide to delay major patches to every quarter.

 

Or they may decide to go to the other end of the spectrum and apply patches every couple of weeks or so as part of normal maintenance. You may read this and think that three months (every quarter) is too long to wait, but there is no one-size-fits-all solution. Also remember that the patches we’re talking about here are not being applied specifically in response to a security issue, though they could be addressing an issue, just not a critical one. For critical issues, you will have a different plan in place to deal with those situations as they arise.

 

Speaking of critical updates in the form of patches, service packs, or even hotfixes, there needs to be a plan in place to meet the needs of these particular software items. In addition to regular maintenance, it is expected that from time to time high-priority security issues will arise. Security researchers or vendors find and identify them and decide that they are indeed critical issues that must be addressed as soon as possible.

 

When these situations occur, organizations want to have a process in place that deals with these off-cycle situations that cannot wait for the normal maintenance cycle. In these situations, the patches must be deployed immediately and installed on the systems to avoid a security problem getting out of control or emerging and leading to more serious problems.

 

Typically what starts off this process of patching is that a vendor will identify an issue as being crucial to their customers’ stability and well-being. So they will distribute information stating that there’s an issue with the software package and that certain updates will address that issue. Since these situations can appear at any time, not on a set schedule, an organization has to evaluate the seriousness of the situation and decide how best to employ the patch to its greatest effect.

 

What makes this process a little tougher is that you cannot schedule these types of situations to occur; they just appear as things are found that need to be addressed immediately. Regular maintenance updates can be scheduled so they are deployed when the systems are not being utilized for normal business operations.

 

That way, if a serious problem arises during the patching process, it can be handled without affecting business operations adversely. These types of updates and patches can be applied off hours on a weekend or evening when the systems are not being used. In the event a problem arises, time can be built into the schedule so there’s enough time to fix it before the systems are needed again.

 

If the problem is serious enough, the update must be deployed immediately, even if it is in the middle of the day. Fortunately, these issues do not appear all that often, but they do appear from time to time, and you must be ready to apply patches as quickly as possible, with the goal of reducing the risk of your environment becoming destabilized.

 

Testing and Validating a Patch


Murphy's Law essentially says that anything that can go wrong will go wrong. IT and security folks quickly learn that Murphy's Law applies to our field as well and will quickly throw a monkey wrench into all our best-laid plans. In order to avoid problems that might occur during the deployment of a patch, it's a good idea to include a mandatory testing phase.

 

During this phase, you check to make sure a patch works as advertised and will not have any adverse effects on the environment in which it will be deployed. Do not underestimate the potential for something to go wrong when a patch is deployed. Just because a patch is supposed to fix an issue doesn’t mean it won’t cause problems when it is deployed into your environment.

 

A patch may cause numerous other problems to pop up after it is deployed. The unexpected can happen, and that’s why we implement a testing process with the intention of lowering the possibility of this situation as much as possible.

 

The testing process should begin after a patch is acquired and before it is deployed into a production environment. Ideally the patch should be deployed to a test system or even a lab system and given a test drive or evaluation both before and after it’s applied. Remember that just because a patch is made available does not mean that it always has to be deployed; in some cases, the best action is no action at all. But you should arrive at the decision after evaluating and testing.

 

Also, do not underestimate the value of doing your research through Google or other sources to see if other people are encountering issues with the patch or update. Take care to ensure that the patches you will be deploying are obtained from a legitimate source and can be checked out to determine that they have not been altered or corrupted in any way.

 

Upon completion of testing and validation of a patch, you still have other steps to take. You must decide on a deployment schedule. Ideally, any updates that are required, even critical ones, will be applied outside of normal business hours. In some situations, however, waiting is simply not an option.

 

For example, there have been cases where a piece of malware such as a worm spread rapidly across the Internet and affected an untold number of hosts all around the world. In many of these cases, it was found that the application of a patch to squash a vulnerability that the worm had exploited not only would keep the system from becoming infected but would also have the effect of eliminating one more host that could be used to infect numerous other hosts.

 

In these cases, it just was not worthwhile to wait any appreciable length of time to apply the patch. The worm was still spreading and systems that were cleaned but still vulnerable would still run the risk of becoming a problem if they were infected again.

 

Although organizations don't use any one fixed method to apply their updates, at a conceptual level the methods are all pretty much the same in how they progress and function. Most patches and updates will involve a medium to high use of system resources. Typically, a system reboot (in some cases, numerous reboots) will occur during the application of a patch, and during this time the system is essentially unusable for its normal purpose.

 

This is why testing is critical; in addition to showing whether the patch is beneficial and addresses a problem, testing gives the organization a good look at how the process will take place. In that way, the organization can determine the best way to deploy the patch or update and achieve minimal disruption and downtime.

 

There's a saying that no good plan stays intact after it is put into action, and the more complex the environment or the more critical the situation, the greater the chance that saying will apply (or at least that's the way it seems). It is not unheard of for IT to install a piece of software or apply a patch numerous times without incident and then hit a failure even though they did everything the same way.

 

When these situations occur, it is important to have a rollback plan. With a rollback plan, when a patch or update doesn’t go as planned and causes more problems than it’s worth, you have a way to get out of it gracefully with minimal disruption. In some cases, this may mean simply uninstalling a patch or update and then rebooting the system, and you’re back to where you were before the issue.

 

In other cases, you may have to rebuild the system (though this may be extreme), in which case you’ve hopefully planned ahead and have images that you can deploy to the system rapidly to get it back up and running. The lesson to be learned here is to always have a backup plan in the event that things don’t go the way they’re supposed to—in other words, hope for the best but plan for the worst.

 

Managing Change


Something that has to be addressed when discussing patch management is the issue of change. Change management is a process that provides a mechanism to approve, track, and implement changes. For security reasons, you always want a clear picture of what is occurring on your systems, and you want to be able to access that information and review it at any time for auditing or compliance reasons.

 

The change management process by design should include all the plans to perform the process of getting a patch into an environment. This includes testing, deployment, and rollback plans, as well as anything else that’s needed to ensure that things happen from beginning to end in a clear and documented way.

 

In some cases, the change management process should also include documentation on the risks and how a given change or update affected those risks. Finally, in numerous cases, there will be benchmarks set that a change is expected to meet in order to be considered successful.

 

Installing and Deploying Patches

The deployment phase of the patch management process is where administrators have the most experience. Deployment is where the work of applying patches and updates to systems occurs. While this stage is the most visible to the organization, the effort expended throughout the entire patch management process is what dictates the overall success of a given deployment and the patch management program in total.

 

Auditing and Assessing

Regular audit and assessment help measure the ongoing success and scope of patch management. In this phase of the patch management program, you need to answer the following questions:

 

  • What systems need to be patched for any given vulnerability or bug?
  • Are the systems that are supposed to be updated actually patched?
  • What legacy systems are excluded from patch management, and what measures are in place to offset the risk?

 

The audit and assessment component will help answer these questions, but there are dependencies. Two critical success factors are accurate and effective asset management and cost management.

 

Performing Compliance

While the audit and assessment element of your patch management program will help identify systems that are out of compliance, additional work is required to reduce the number of noncompliant hosts. Your audit and assessment efforts can be considered "after the fact" evaluations of compliance, since the systems being evaluated will typically already be deployed into production. To supplement post-implementation assessment, controls should be in place to ensure that newly deployed and rebuilt systems are up to spec with regard to patch levels.

 

Hardening Passwords


Passwords form one of the primary methods of barring unauthorized users from a system. A password acts much like the key for a house or car, allowing only those who hold the correct key to get in. Passwords serve the vital purpose of allowing only authorized users to access a system or service.

 

One of the biggest problems with passwords is that they are frequently rendered useless through carelessness or recklessness, two things that are addressed in this section through the proper use of passwords.
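
As one small example of what proper use looks like in practice, here is a minimal Python sketch of a basic password-policy check; the specific thresholds are illustrative only, not a recommendation:

    import re

    def meets_policy(password):
        # Require a minimum length plus a mix of character classes.
        return (len(password) >= 12
                and re.search(r"[a-z]", password) is not None
                and re.search(r"[A-Z]", password) is not None
                and re.search(r"\d", password) is not None
                and re.search(r"[^\w\s]", password) is not None)

    print(meets_policy("S3cure!Passphrase"))  # True
    print(meets_policy("password"))           # False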

 

Being Careful When Installing Software


Software is what you use to do whatever it is you are doing with a computer. Software includes all the applications, services, and the operating system itself, so there is a lot happening on even the most basic of systems. The problem is that software runs however its designer intended, which can mean that it could potentially cause harm. It is with this in mind that you must carefully consider the applications you download and what they may be doing on your computer.

 

When talking about software, consider just how much a software application can possibly do. Consider that any operation you can do—including deleting files, making system configuration changes, uninstalling applications, or disabling features—the application can do as well. Keep in mind that what you download may not have your best interests at heart.

 

Consider that some applications, when downloaded, may include no documentation, or only scant guidance, on all the things they do, leaving you to fend for yourself. Even worse, the software may not even have an author you can contact when you need help. You may be left to decide whether the application is going to help you or whether it might do something more sinister.

 

By applying the following set of guidelines, you can avoid some of the issues associated with untrusted or unknown software:

  • Learn as much as you can about the product and what it does before you purchase it.
  • Understand the refund/return policy before you make your purchase.

 

  • Buy from a local store that you already know or a national chain with an established reputation.
  • If downloading a piece of software, get it from a reputable source.

 

  • Never install untrusted software on a secure system; if it needs to be installed, put it on an isolated test system first to test what it does.
  • Scan all downloaded content with an antivirus and antispyware application.
  • Ensure that the hash value matches what the vendor has published to verify the integrity of the software (see the sketch after this list).
  • Do not download software from file sharing systems such as BitTorrent.
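
The hash check mentioned in the list can be done with a few lines of Python; the filename and published value below are placeholders for whatever the vendor actually publishes:

    import hashlib

    PUBLISHED_SHA256 = "0" * 64  # placeholder for the vendor's value

    def sha256_of(path):
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if sha256_of("installer.exe") == PUBLISHED_SHA256:
        print("Hash matches; integrity check passed.")
    else:
        print("Hash mismatch; do not install.")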

 

Note the presence of downloaded applications on the list. In today’s world, many of the applications you use are available in digital format only online. A multitude of free programs is available for all types of systems, with more available each day. The challenge is to decide which programs deserve your confidence and are, therefore, worth the risk of installing and running on your home computer.

 

So with a huge amount of software being available for download only, what can you do to be safe? Consider the following as a guide:

 

What does the program do? You should be able to read a clear description of what the program does. This description could be on the website where you can download it or on the CD you use to install it. You need to realize that if the program was written with malicious intent, the author/intruder isn’t going to tell you that the program will harm your system. They will probably try to mislead you. So, learn what you can, but consider the source and consider whether you can trust that information.

 

What files are installed and what other changes are made on your system when you install and run the program? Again, to do this test, you may have to ask the author/intruder how their program changes your system. Consider the source.

 

Can you use email, telephone, letter, or some other means to contact the software developer? Once you get this information, use it to try to contact them to verify that the contact information works. Your interactions with them may give you more clues about the program and its potential effects on your computer and you.

 

Has anybody else used this program, and what can you learn from him or her? Try some Internet searches using your web browser. Somebody has probably used this program before you, so learn what you can before you install it.

 

If you can't answer these questions with certainty, then strongly consider whether installing the program is worth the risk. Only you can decide what's best. Whatever you do, be prepared to rebuild your computer from scratch in case the program goes awry and destroys it.

 

Remember that an antivirus program prevents some of the problems caused by downloading and installing programs. However, there is a lag between when a new virus is recognized and when your computer receives the signatures to detect it. Even if that nifty program you've just downloaded doesn't contain a virus, it may behave in an unexpected way. You should continue to exercise care and do your homework when downloading, installing, and running new programs.

 

Using Antivirus Packages


One of the dangers of modern computing with networking and shared media is malware in the form of viruses and worms. Although some systems are more vulnerable than others, all systems are vulnerable, whether they are based on Windows, Mac, or Linux. Each has malware targeted at it; it's just a question of how much. Whereas some viruses are merely annoying, others can cause severe damage to a computer and may even corrupt data beyond repair or recovery.

 

In order to protect a system from viruses, there are a few simple and necessary steps that can be taken, with the installation and maintenance of antivirus software at the top of the list. Consider it a full-time job to protect your systems from viruses; your computer is never truly safe unless it is disconnected from the Internet and you never insert disks or software from unreliable sources.

 

Backing Up a System


Everything on a computer typically falls into one of two categories: items you can replace and items you can't. What have you done about the items you can't replace on the computer you use, such as project files, photographs, applications, and financial statements? What happens if your computer malfunctions or is destroyed by a successful attacker? Are those files gone forever?

 

Do you have a backup or a way to recover information when you have a loss caused by a malfunction or an intruder? Do you back up your files on to some other media so that you can recover them if you need to?

 

When deciding what to do about backing up files on your computer, ask these questions:

What files should you back up? The files you select are those that you can neither easily re-create nor reinstall from somewhere else, such as the CDs or the floppy disks that came with your computer.

 

That check register you printed does not constitute a backup from which you can easily re-create the files needed by your checking account program. You’re probably not going to re-enter all that data if the files are destroyed. Just as you protect your irreplaceable valuables, back up the files you cannot replace, easily or otherwise.

 

How often should you back them up? In the best of all cases, you should back up a file every time it changes. If you don’t, you’ll have to reintroduce all the changes that happened since your last backup. Just as you store your precious jewelry in a lockbox at the local bank, you need to store your files safely (back them up) after every use (change in the file) lest an intruder destroys the file or there’s a system catastrophe.
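
Backing up after every change does not have to be laborious. As a minimal sketch (with hypothetical paths), the following Python script copies only the files that are new or have changed since the last run:

    import os
    import shutil

    SRC = "/home/user/documents"   # files to protect (hypothetical)
    DST = "/mnt/backup/documents"  # backup destination (hypothetical)

    for root, _dirs, files in os.walk(SRC):
        for name in files:
            src = os.path.join(root, name)
            dst = os.path.join(DST, os.path.relpath(src, SRC))
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            # Copy when the backup is missing or older than the source.
            if (not os.path.exists(dst)
                    or os.path.getmtime(src) > os.path.getmtime(dst)):
                shutil.copy2(src, dst)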

 

Where should you back them up to—that is, what media should you use to hold backed-up files? The answer is: whatever you have. It’s a question of how many of that media you have to use and how convenient it is. Larger capacity removable disk drives and writable CDs as well as external hard drives also work well and take less time.

 

Where should you store that media once it contains backed-up files? No matter how you back up your files, you need to be concerned about where those backed-up copies live, which includes potential storage locations such as the cloud.

 

A robber can gain access to the same information by stealing your backups. It is more difficult, though, since the robber must know where your backups are, whereas an intruder can access your home computer from literally anywhere in the world. The key is to know where the media is that contains your backed-up files.

 

Just like important papers stored in a fireproof container at your house, you need to be concerned about your backups being destroyed if your living space is destroyed or damaged. This means that you should always keep a copy of all backed-up files in a fireproof container or somewhere where they are out of harm’s way.

 

Hardening Your Network


So far we have discussed network-level and application attacks, but those are only part of the equation for a pentester. A pentester must not only know about systems and how to improvise and find ways to identify weaknesses that compromise security; they must also know how to address any issues they locate and recommend fixes to the customer.

In this blog, you’ll learn to:

  • Define network hardening
  • Understand why you want to do it
  • Recognize how hardened systems are by default

 

Introduction to Network Hardening

In the previous blog we talked about hardening from the perspective of individual hosts and devices on a network, but not how to harden the network and the services. Much like with hosts, a network has to be evaluated to determine where it’s currently vulnerable, the types of vulnerabilities and their seriousness, and where each vulnerability is located, as well as how they relate to one another. The end result of this process should be that the network becomes much more resilient and resistant to attack or compromise and therefore should be in a more secure state.

 

As you can imagine, given the complexity, coverage, diverse range of services, and potential size of the user base, network hardening is going to be much tougher and more challenging, but it is definitely doable. As with anything of this scope and size, careful planning is required to get the best results.

 

In fact, if you’ve been doing your job with the same level of care and consideration, then you should have thorough documentation and results from your pentest that will simply require you to do some research, take some time to figure out the best way to deal with what you found, and then make those recommendations to the customer.

 

So with our existing knowledge of the process of hardening hosts in hand, we are now going to discuss how to secure a network and some of the various items, tasks, and devices that you can make use of to make this happen.

 

What Is Hardening a Network?


When you undertake the process of hardening a network, much as you would with a host, it can involve technical, administrative, and physical measures that together make up your final secure solution. It's important to understand that no single technical, administrative, or physical control is going to protect you entirely on its own; a combination of them will get you the most bang for your buck:

 

Technical controls, or anything that is going to be based in the world of technology, such as servers, authentication systems, or even items like firewalls (which we’ll explore in just a moment)

 

Administrative controls, or a series of policies and procedures that dictate how to secure an environment as well as how to react within that environment

 

Physical controls, or anything that protects any component or area on the network from being physically accessed and touched by someone who is not authorized to do so

We will be focusing mostly on the technical controls in this blog.

Now let's talk about some of the things that you will run into when you try to harden and defend a network.

 

Intrusion Detection Systems


As you recall, an intrusion detection system (IDS) acts as a burglar alarm that provides some of the earliest indications of an attack or other suspicious activity. While an IDS will not stop the activity from taking place, it will provide notice of it. Remember, these devices are positioned to monitor a network or host. While many notifications that come from the system may be innocuous, those responsible for detecting potential misuse or attacks must be able to respond based on the alert that is provided.

 

An IDS is a safeguard that can take one of two forms: a software version, which is an application that can be configured to the consumer's needs, or a hardware version, which is a physical and likely higher-performing device. Both are valid ways to monitor a system; in either form, the IDS gathers and analyzes information generated by the computer, network, or appliance.

 

A network-based intrusion detection system (NIDS) is an IDS that fits into this category. It can detect suspicious activity on a network, such as misuse, SYN floods, MAC floods, or similar types of behavior, and it would be the most advantageous type for deployment onto a network.

 

A NIDS is capable of detecting a great number of different activities, both suspicious and malicious in nature, which makes it a great candidate for monitoring the network. It can detect the following:

  • Repeated probes of the available services on your machines
  • Connections from unusual locations
  • Repeated log-in attempts from remote hosts
  • Arbitrary data in log files, indicating an attempt at creating either a denial of service or a crashed service
  • Changes in traffic patterns
  • Use of unusual protocols
  • Application traffic

 

Putting It Together

The intrusion detection process combines information gathered from several processes and is designed to respond to packets that are sniffed and then analyzed. In this example, the information is sniffed from the network by a host or device running the network sensor, which captures and analyzes packets off a local segment.

  1. A host creates a network packet.
  2. The sensor sniffs the packet off the network segment.
  3. The IDS and the sensor match the packet against known signatures of misuse (see the sketch after this list).
  4. The command console receives and displays the alert, which notifies the security administrator or system owner of the intrusion.
  5. The response is tailored to respond to the incident as desired by the system owner.
  6. The alert is logged for future analysis and reference.
  7. A report is created with the incident detailed.
  8. The alert is compared with other data to determine if a pattern exists or if there is an indication of an extended attack.
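
As a minimal illustration of step 3, the matching of a sniffed packet against known signatures can be sketched in Python as follows; the signatures and packet contents are made up:

    # Hypothetical misuse signatures mapped to alert descriptions.
    SIGNATURES = {
        b"/etc/passwd": "possible path-traversal attempt",
        b"' OR '1'='1": "possible SQL injection",
    }

    def match(payload):
        return [desc for sig, desc in SIGNATURES.items() if sig in payload]

    packet = b"GET /page.php?id=' OR '1'='1 HTTP/1.1"
    for alert in match(packet):
        print("ALERT:", alert)  # would be forwarded to the command console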

 

Components of HIDS


A host-based IDS (HIDS) is another type of IDS that makes an appearance in large network environments; it is responsible for monitoring activity of many different types on an individual system rather than a network. Host-based IDSs can get confusing as far as what features they are supposed to have: so many vendors offer so many different types, and they have been around for so long, that the feature sets vary widely from one product to the next.

 

Much like a network-based IDS, a host-based IDS has a command console where all the monitoring and management of the system takes place. This piece of software is the component that is used to make any changes or updates to the system as required. This management point can be placed on another system where the system admin can access it remotely through specialized software or through a web browser.

 

In some cases the console may be accessible only on the local system; in that case, the admin needs to go to that system to manage it or find another way to access it remotely.

 

The second component in the HIDS is known as an agent. Much like a network sensor, an agent is responsible for monitoring and reporting any activities on the system that are out of the ordinary or suspect. The agent is deployed to the target system, where it monitors activities such as permission usage, changes to system settings, file modifications, and other suspicious activity.
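
One of the agent's duties, detecting file modifications, can be sketched minimally in Python by hashing watched files and comparing against a stored snapshot; the paths here are hypothetical, and a real agent monitors far more than this:

    import hashlib
    import os

    WATCHED = ["/etc/hosts", "/etc/ssh/sshd_config"]  # hypothetical paths

    def snapshot(paths):
        # Hash each watched file so later changes can be detected.
        result = {}
        for path in paths:
            if os.path.isfile(path):
                with open(path, "rb") as f:
                    result[path] = hashlib.sha256(f.read()).hexdigest()
        return result

    baseline = snapshot(WATCHED)

    # ...later, on the agent's next monitoring pass...
    for path, digest in snapshot(WATCHED).items():
        if baseline.get(path) != digest:
            print("Modification detected:", path)  # report to the console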

 

Limitations of IDS


An IDS is capable of monitoring and alerting system administrators to what is happening on their network, but it does have its limitations as well as situations it’s just not suitable for. To ensure that you work with these systems in a way that will get you the most return on your investment, you should understand their benefits as well as their limitations.

 

When you've identified problems in a client's environment and decided that your strategy is going to include one or more IDSs, think about the monitoring goals you're trying to address. Remember that even though IDSs are great systems that can help you tighten up and harden your network, using them incorrectly can give you a false sense of security; you may think that they're doing their job when in reality they are incapable of doing what you need them to do.

 

For example, a network IDS is great at detecting traffic and malicious activity on a network, but it's not so good when you try to monitor activities such as changes in files or system configuration settings on individual hosts. Or the IDS may chatter away about problems that it perceives but that don't actually exist; something triggered the system to fire an alert that the IDS mistook for an attack.

 

Also, do not make the mistake that many new security professionals make: thinking that an IDS is capable of responding to and stopping a threat. Remember the D in IDS stands for detection, and detection means just that—it will detect the issue but it doesn’t react or respond.

 

In fact, this last point illustrates, indirectly, the reason why you should always implement security in layers and not as a standalone component: a standalone component would, in the case of an IDS, tell you attacks are happening though it won’t do anything about it.

 

Never expect an IDS to be able to detect and notify you of every event on your network that is suspicious; it will only detect and report what you tell it to. Also, consider the fact that an IDS is programmed to detect specific types of attacks, and since attacks evolve rapidly, an IDS will not detect attacks it is not programmed or designed to do. Remember, an IDS is a tool that is designed to assist you and is not a substitute for good security skills or due diligence.

 

Investigation of an Event

An IDS provides a way of detecting an attack, but not dealing with it. An IDS is limited in the actions it can take when an attack or some sort of activity occurs: it observes, compares, detects the intrusion, and reports it. The system or network administrator has to follow up. All the system can do is notify you that something isn't right; it can't list the individual reasons why.

 

Information gathered from an IDS can be generated quite rapidly, and this data requires careful analysis in order to ensure that every potential activity that may be harmful is caught. You will have the task of developing and implementing a plan to analyze the sea of data that will be generated and ensuring that any questionable activity is caught.

 

Firewalls


Located and working right alongside IDSs in many situations is a class of devices collectively known as firewalls. In simple terms, a firewall is a device used to control the access to and from or in and out of a network. Since their initial introduction many years ago, firewalls have undergone tremendous changes to better protect the networks they are placed on. Because of their capabilities, firewalls have become an increasingly important component of network security, and you must have a firm command of the technology.

 

In most cases, a firewall will be located on the perimeter of a network, where it can best block or control the flow of traffic into and out of the client’s network. It is because of this ideal placement that firewalls are able to fully regulate and control the types of traffic that enter and leave the network.

 

The flow of traffic through a firewall is determined by a series of rules that the system owner configures based on their particular needs. For example, a system owner could choose to allow web traffic to pass but block other types of traffic, such as file sharing protocols, after deciding they are unnecessary and present a security risk.

 

With the earliest types of firewalls, the process of allowing or disallowing access was fairly easy to configure relative to today’s standards. Older devices only required the configuration of rules designed to look at some of the information included in the header of a packet.

 

While these types of firewalls still exist and modern firewalls incorporate the same rule system, firewalls have since evolved to thwart seemingly endless and ever more complex forms of attack. With the rapid increase in the number and creativity of attacks, the firewalls of the past have had to evolve or become unable to counter the threats facing them.

 

To counter the threats that have emerged, firewalls have added new features in order to be better prepared for what they will face when deployed. The result has been firewalls that are much better prepared than at any point in the past to deal with and control unauthorized and undesirable behavior.

 

Firewall Methodologies


If you were to look up firewalls using a simple Google search, you would undoubtedly get numerous results, many of those linking back to the various vendors of firewall software and hardware. You’d also quickly find that each and every vendor has their own way of describing a firewall. However, when you review this information, be aware that vendors have found creative ways to describe their products in an effort to sound compelling to potential customers.

 

If you boil away all the marketing spin, glossy ads, and flowery language, you’ll find that firewalls generally work very similarly at some level.

Firewalls can operate in one of two basic modes:

 

Packet filtering

Proxy servers or application gateways

Packet filtering represents what could be thought of as the first generation of firewalls. The firewalls that would be classified as packet filtering firewalls may seem primitive by the standards of later generations of firewalls, but they did have their place and they still are used quite effectively in numerous deployments.

 

To understand why packet filtering firewalls are still in use, let’s look at the operation of a packet filtering firewall. For a firewall to be a true packet filtering device or system, it has to be looking at each and every packet at a very basic level—which means it will look at where a piece of information (packet) is coming from, where it’s going to, and the port or protocol that it is using.

 

To properly filter the desired and undesired traffic, the system or network administrator configures the firewall with the rules needed to perform the appropriate action on a packet when it meets the criteria in a given rule.

 

When we look closely at a packet filtering firewall, it’s quite easy to see that it is very limited in what it can do. It is looking at only a very limited amount of information in regard to a packet. As was previously mentioned, a packet filtering firewall only looks at where a packet is coming from, where it’s going to, and the port or protocol that it is using; anything else that may be present in that packet cannot be analyzed by this type of firewall.

 

Implementation of a packet filtering firewall is quite simple, and it does exactly what it’s been designed to do. But because it can look at only a limited amount of information in a packet, anything that falls outside of those fields is essentially invisible to this type of firewall. In practice, this means that while a packet filtering firewall can control the flow of traffic, there is still the potential for attacks to succeed.

 

This type of firewall is still in use, which raises the question of why, given how simple its operation is. The simplicity of the design offers benefits in the form of performance, and this type of firewall is effective when you know you will not be using a certain protocol on your network at all; you can simply block that protocol so it can’t come on or off the network.

 

For example, if you know FTP is a security risk and you decide not to use FTP on your network, you can use a packet filtering firewall to block it from even coming onto the network in the first place. There’s no need to filter what is inside a packet using FTP when you know you don’t need it anyway, so a packet filtering firewall can just drop the packets outright instead of passing them on for further analysis.
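To illustrate just how little a packet filter consults, here is a minimal sketch in Python of first-match rule evaluation over header fields only. The rule list, packet layout, and field names are illustrative assumptions, not any vendor’s configuration format; the first rule drops FTP (TCP port 21) outright, as in the example above.

# Minimal sketch of packet filtering logic: only header fields
# (protocol and destination port here) are consulted. The rule set
# and packet layout are illustrative, not any vendor's format.
RULES = [
    # (protocol, port, action) - first matching rule wins
    ("tcp", 21, "drop"),    # block FTP outright, as described above
    ("tcp", 80, "allow"),   # permit web traffic
]
DEFAULT_ACTION = "allow"    # an implicit-allow posture

def filter_packet(packet):
    for proto, port, action in RULES:
        if packet["protocol"] == proto and packet["dst_port"] == port:
            return action
    return DEFAULT_ACTION

print(filter_packet({"protocol": "tcp", "dst_port": 21}))  # drop
print(filter_packet({"protocol": "udp", "dst_port": 53}))  # allow (default)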

 

A later generation of firewalls is known as proxy servers or, as they are sometimes called, application gateways. With a proxy server added to the mix, the firewall now has the native ability to do a more detailed inspection or analysis of a packet, in addition to, or instead of, examining the packet header.

 

In short, this means that this type of firewall has the ability to start looking within a packet. To relate this type of firewall to a packet filtering firewall, think of a packet filtering firewall only analyzing the address label on an envelope. On the other hand, a proxy or application-level firewall is going to take a closer look at what is inside the envelope and how it’s laid out and packaged before making a determination as to what to do.

 

With the ability to look deeper into traffic as it moves back and forth across the firewall, system admins are able to fine-tune to a greater degree the types of traffic that are allowed or blocked.

 

In practice, proxy servers are pieces of software that are designed and placed based on the idea that they will intercept communications content. The proxy will observe and recognize incoming requests and, on behalf of the client, make a request to the server. The net result is that no client ever makes direct contact with the server and the proxy acts as the go-between or man-in-the-middle.

 

As was stated previously, this setup with a proxy server can allow or deny traffic based on the actual information within the packet. The downside is that more analysis means more overhead, so a price is paid in performance.
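To show the go-between role in code, the following is a minimal sketch, in Python, of a TCP proxy: the client connects to the proxy, which opens its own connection to the real server and relays bytes in both directions, so the client never touches the server directly. The addresses are placeholders, and a real application gateway would add content inspection at the point marked in the comments.

# Minimal sketch of a proxy in the go-between role described above.
# The listener and server addresses are placeholders for illustration.
import socket
import threading

LISTEN_ADDR = ("127.0.0.1", 8080)
SERVER_ADDR = ("192.0.2.10", 80)   # example address (TEST-NET-1)

def relay(src, dst):
    while True:
        data = src.recv(4096)
        if not data:
            break
        # A real application gateway would inspect `data` here and
        # could refuse to forward disallowed content.
        dst.sendall(data)

def main():
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(LISTEN_ADDR)
    listener.listen(5)
    while True:
        client, _ = listener.accept()
        server = socket.create_connection(SERVER_ADDR)
        threading.Thread(target=relay, args=(client, server)).start()
        threading.Thread(target=relay, args=(server, client)).start()

if __name__ == "__main__":
    main()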

 

Limitations of a Firewall


Even with this cursory examination of firewalls, it seems as if they have a lot of power and can go a long way toward protecting a network. However, there are limitations on this technology and there are some things firewalls are just not suited for. Having an understanding of what they can and can’t help you with is essential to the proper and effective use of firewalls.

 

Before you decide to purchase or otherwise acquire a firewall technology, ensure that the specific issue you’re trying to address can actually be handled by a firewall, and determine which type of firewall you need to properly address it.

 

Always know what your goals are when you build out a design intended to make the network environment more secure. Unfortunately, many companies acquire firewalls, as well as other security devices, without a clear goal or path in mind as to what they are going to address and how.

 

Simply put, know where you’re going before you turn the key and hit the gas. In our case, choosing the wrong firewall for a job allows for the possibility of malicious or accidental harm, and it may even give you a false sense of security because you think it is working when in reality it is ill-suited to the way you’ve deployed it.

 

The following areas represent the types of actions and events that a firewall will provide little or no value in stopping:


Viruses While some firewalls do include the ability to scan for and block viruses, this is not an inherent ability of a firewall and should not be relied on. Also consider that as viruses evolve and take on new forms, firewalls will most likely lose their ability to detect them easily and will need to be updated. In most cases, antivirus software in a firewall is not, and should not be treated as, a replacement for host-resident antivirus.

 

Misuse This is another hard issue for a firewall to address, as employees already have a higher level of access to the system than outsiders do. Put this fact together with the ability of an employee to ignore mandates not to bring in their own software or download software from the Internet and you have a recipe for disaster. Firewalls can do little against insider intent.

 

Secondary Connections In some situations, secondary access is present and presents a major problem. For example, a firewall may be in place, but an employee can plug a phone line into their computer’s modem while the computer is still connected to the network, opening a backdoor that circumvents the firewall entirely.

 

Social Engineering If a network administrator gives out firewall information to someone claiming to be calling from your ISP, without any verification, there is a serious problem.

 

Poor Design If a firewall design has not been well thought out or implemented, the net result is a firewall that is less like a wall and more like Swiss cheese. Always ensure that proper security policy and practices are followed.

 

Implementing a Firewall


As with many things, firewalls can be deployed in many different ways, and there is no one standard way of deploying these key components of network security. However, we can discuss the basic configurations available, and then you can decide whether they need to be enhanced or modified in any way to get a result more suited to your needs. Let’s take a look at some of these options:

 

One way of implementing a firewall is the use of what is known as a multihomed device. A multihomed device is identified as a device that has three or more network adapters within it. Each one of the network adapters will typically be connected to a different network, and then the firewall administrator will be tasked with configuring rules to determine how packets will be forwarded or denied between the different interfaces. This type of device and setup is not uncommon and is observed quite a bit out in the wild.

 

However, there are some key points to remember when discussing this type of device. As far as benefits go, this type of configuration offers the ability to set up a perimeter network or DMZ (which we’ll talk about in a moment) using just one device to do so.

 

This setup also has the benefit of simplicity, since it is one device rather than a set of multiple devices, thus reducing administrative overhead and maintenance. As far as disadvantages go, this device represents a potential single point of failure, which means that if the device is compromised or configured improperly, it could allow blanket access, or at least unwanted access, to different parts of the operating environment.

 

Making things a little more interesting is the configuration known as a screened host. This type of setup combines a packet filtering firewall with a proxy server to achieve a faster and more efficient setup, but at the cost of somewhat decreased security. This type of setup is easily recognizable just by analyzing devices in place. In this setup, as traffic attempts to enter a protected network it will first encounter a router that will do packet filtering on the traffic.

 

Then, if packet filtering allows it to pass, it will encounter a proxy, which will, in turn, do its own filtering, such as looking for restricted content or disallowed types of traffic. This type of setup is often used to set up a perimeter network also known as a DMZ (demilitarized zone).

 

DMZs are an important part of network security. To keep things simple, a DMZ can be visualized as a small, limited network sandwiched between two firewalls; outside one firewall is the outside world (the Internet), and beyond the other is the intranet, the client’s protected network.

 

The idea behind this type of deployment is that publicly accessible or available services such as web servers can be hosted in the DMZ. For example, if a client wants to host their own web server and make the content available to the public, they could create a DMZ and place the web server within this zone.

 

Without a DMZ and just a single firewall, you would have a choice to make: you would have to put the web server either on the Internet side or on the intranet side of the firewall.

 

Neither one of these options is practical. If the server were placed on the Internet side, it would be completely exposed with no protection, and if it were placed on the client’s own network, you would have to grant access to the client’s network from the outside world, which opens the door to a lot of potential mischief.

 

However, by using the DMZ you avoid both of these issues by having only selected traffic come past the Internet-facing firewall to access the web server whereas no traffic will be allowed to pass from the outside through the inner firewall that separates a DMZ from the client’s network. Of course, there are different restrictions on traffic leaving from the client’s network.
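One way to make the DMZ traffic rules concrete is to write them out as a table of permitted zone-to-zone flows. The sketch below, in Python, encodes the flows described above; the zone names, services, and rules are illustrative assumptions rather than any product’s syntax.

# Illustrative sketch of the traffic flows a two-firewall DMZ allows;
# the zones and rules mirror the description above, not any product.
ALLOWED_FLOWS = {
    ("internet", "dmz"):      ["tcp/80", "tcp/443"],  # public web access only
    ("internet", "intranet"): [],                     # outside never reaches inside
    ("dmz", "intranet"):      [],                     # inner firewall blocks DMZ-to-LAN
    ("intranet", "dmz"):      ["tcp/80", "tcp/443"],  # staff may reach the web server
    ("intranet", "internet"): ["tcp/80", "tcp/443"],  # outbound browsing, per policy
}

def flow_permitted(src_zone, dst_zone, service):
    return service in ALLOWED_FLOWS.get((src_zone, dst_zone), [])

print(flow_permitted("internet", "dmz", "tcp/80"))       # True
print(flow_permitted("internet", "intranet", "tcp/80"))  # False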

 

Authoring a Firewall Policy


Before you place a firewall, you need a plan, one that defines how you will configure the firewall and what is expected; this is the role of policy. The policy will be the blueprint that dictates how the firewall is installed, configured, and managed. It will make sure that the solution is addressing the correct issues in the desired way and reduces the chances of anything undesired occurring.

 

For a firewall to be correctly designed and implemented, the firewall policy must be in place ahead of time. The firewall policy will represent a small subset of the overall organizational security policy. The firewall policy will fit into the overall company security policy in some fashion and uphold the organization’s security goals, but enforce and support those goals with the firewall device.

 

The firewall policy you create will usually approach the problem of controlling traffic in and out of an organization in one of two ways. The first option is to implicitly allow everything and only explicitly deny those things that you do not want. The other option is to implicitly deny everything and only allow those things you know you need.

 

The two options represent drastically different methods in configuring the firewall. In the first option, you are allowing everything unless you say otherwise, whereas with the second you will not allow anything unless you explicitly say otherwise. Obviously one is much more secure by default than the other.

 

Consider the option of implicit deny, which is the viewpoint that assumes all traffic is denied, except that which has been identified as explicitly being allowed. Usually, this turns out to be much easier in the long run for the network/security administrator.

 

For example, visualize creating a list of all the ports Trojans use, plus every other port your applications are not authorized to use, and then creating rules to block each of them. Contrast that with creating a list of what the users are permitted to use and granting them access to those services and applications explicitly.
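A short sketch makes the contrast plain: under implicit deny, the administrator maintains only the small allowlist of permitted services, while implicit allow would require enumerating an open-ended blocklist. The service names below are illustrative.

# Minimal sketch contrasting the two postures described above. Under
# implicit deny, only the short list of permitted services is kept;
# everything else needs no rule at all.
PERMITTED = {"tcp/80", "tcp/443", "tcp/25"}  # illustrative allowlist

def decide(service, implicit_deny=True):
    if implicit_deny:
        return "allow" if service in PERMITTED else "deny"
    # Implicit allow would instead require enumerating every port to
    # block - Trojan ports, unused services, and so on - an open-ended list.
    return "allow"

print(decide("tcp/443"))   # allow - explicitly permitted
print(decide("tcp/6667"))  # deny - never listed, denied by default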

 

Network Connection Policy


This portion of the policy covers the types of devices and connections that will be permitted to connect to the company-owned network. You can expect to find information relating to the network operating system, types of devices, device configuration, and communication types.

 

Physical Security Controls


Physical security controls represent one of the most visible forms of security controls. Controls in this category include such items as barriers, guards, cameras, locks, and other types of measures. Ultimately, physical controls are designed to protect people, facilities, and equipment more directly than the other types of controls do.

 

Some of the preventive security controls include the following:

  • Alternate power sources
  • Flood management
  • Data backup
  • Fences
  • Human guards
  • Locks
  • Fire-suppression systems
  • Biometrics
  • Location

 

Generally, you can rely on your power company to provide your organization with power that is clean, consistent, and adequate, but this isn’t always the case: anyone who has worked in an office building or similar setting has noticed at the very least a light flicker, if not a complete blackout. Alternate power sources safeguard against these problems to varying degrees.

 

Hurricane Katrina showed us how devastating a natural disaster can be, but the disaster wasn’t just the hurricane—it was the flood that came with it. You can’t necessarily stop a flood, but you can exercise flood management strategies to soften the impact. Choosing a facility in a location that is not prone to flooding is one option. Having adequate drainage and similar measures can also be of assistance. Finally, mounting items such as servers several inches off the floor can be a help as well.

 

Data backup is another form of physical control that is commonly used to safeguard assets. Never underestimate the fact that backing up critical systems is one of the most important tools that you have at your disposal. Such procedures provide a vital protection against hardware failure and other types of system failure.

 

Not all backups are created equal, and the right backup makes all the difference:

Full backups are the complete backing up of all data on a volume; these types of backups typically take the longest to run.

 

Incremental backups copy only those files and other data that have changed since the last backup of any type. The advantage here is that each backup takes much less time to run.

 

The disadvantage is that these backups take more time when rebuilding a system, since the last full backup and every incremental backup taken since must be restored in order.

 

Differential backups strike a balance, reducing backup time while also speeding up restoration. A differential backup copies everything that has changed since the last full backup, so a restore requires only the last full backup plus the most recent differential.
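The selection logic behind the three backup types can be sketched with file modification times. The Python below is illustrative only; real backup software also tracks archive bits or backup catalogs rather than relying on timestamps alone.

# Minimal sketch of how the three backup types choose files, using
# file modification times. Paths and times are illustrative.
import os
import time

def files_to_copy(root, backup_type, last_full, last_backup):
    selected = []
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            mtime = os.path.getmtime(path)
            if backup_type == "full":
                selected.append(path)   # everything, every time
            elif backup_type == "incremental" and mtime > last_backup:
                selected.append(path)   # changed since last backup of any kind
            elif backup_type == "differential" and mtime > last_full:
                selected.append(path)   # changed since the last full only
    return selected

now = time.time()
# Example: incremental run, last full a week ago, last backup yesterday.
print(len(files_to_copy(".", "incremental", now - 7 * 86400, now - 86400)))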

 

Fences are a physical control that represents a barrier that deters casual trespassers. While some organizations are willing to install tall fences with barbed wire and other features, that is not always the case. Typically the fence will be designed to meet the security profile of the organization, so if your company is a bakery instead of performing duties vital to national security, the fence design will be different because there are different items to protect.

 

Guards provide a security measure that can react to the unexpected as the human element is uniquely able to do. When it comes down to it, technology can do quite a bit, but it cannot replace the human element and brain. Additionally, once an intruder makes the decision to breach security, guards are a quick responding defense against them actually reaching critical assets.

 

The most common form of physical control is the ever-popular lock. Locks can take many forms, including key locks, cipher locks, warded locks, and other types of locks, all designed to secure assets.

 

Navigating the Path to Job Success


Penetration testing can be both an exciting and a rewarding job and career. With the rapid changes in technology and the ever-increasing number of threats and instability in the world, your life will never be boring. As hackers ratchet up the number and ferocity of their attacks and gain ever more sensitive information with increasing regularity, pentesters who are able to identify flaws, understand each of them, and demonstrate their business impact through mindful exploitation are an important piece of the defensive puzzle for many organizations.

 

This blog will highlight some nontechnical tips as you start down the path to becoming a pentester.

In this blog, you’ll learn to:

  • Choose a career path
  • Build a reference library
  • Pick some tools to practice with
  • Practice your technical writing skills

Choosing Your Career Path

 

Over the many years that I have worked with clients and students, a question I often encounter is “How do I get into the field of penetration testing?” Unfortunately, the answer is not as straightforward as you may think.

 

There are many paths to becoming a pentester, and this section will examine just a few of the potential paths that you may take. Remember that your own individual journey may be different from the ones outlined here. In fact, you may find that your journey can change paths several times and still get you to your goal.

 

For me, my journey into the world of pentesting started with tinkering with technology at a very young age. I always loved taking hardware apart and trying different things. I also wanted to know what every feature in a piece of software was supposed to do, and I wanted to find out how I could make software do things it wasn’t supposed to be able to do. My formal education and experience came some years later, after I had done my tinkering, read numerous blogs, and done a lot of hands-on work.

 

These are some of the possible paths you may choose to take on your way to becoming a pentester:

 

Security or IT Person Moving into Pentesting This is a common path where someone starts in the IT area, then trains and transitions into a position as a pentester. This is popular in enterprise environments and other large organizations where plenty of opportunities exist to cross-train into other positions and possibly shadow current personnel.

 

This approach, however, has its downsides. In order to transition roles, you may need to invest your own time and money for a while. Typically this means learning some of the basics on your own and being willing to work outside of your current position before you can formally transition.

 

This extra time and effort not only shows that you are willing to commit to and invest in yourself, but it also demonstrates to management that you are prepared to make the jump from one job to another. In the case of pentesting, you may even be able to participate in or observe a test and participate in the analysis of data and results with experienced pentesters.

 

People with existing IT skills will have an advantage, as many of those skills, such as networking, operating systems, and management principles, will be used in the process of testing.

 

Working for a Security Company Doing Pentesting This type of path is best suited to those who already have existing skills that have been developed over many years. The individuals taking this route will already have strong general IT experience as well as some degree of pentesting experience. Some security companies will hire these individuals and finish off their training by working with current teams.

 

Those who do not have prior experience at any level doing testing of this sort will find this path somewhat tough to follow. Although some security companies may be willing to hire inexperienced testers and just train them as needed, many companies will not want to assume this burden in both cost and the time it takes to get the individual proficient enough to do a live test.

 

Flying Solo For those who are more ambitious and adventurous, the option to start their own small business that specializes in pentesting may be an option. In this path, an individual starts their own business doing testing for local businesses and builds a name and experience at the same time. This may be ideal for those who need flexibility, are self-starters, and are OK being responsible for both testing and business operations.

 

This path is perhaps the toughest one to take but can allow for a lot of possibilities for those self-starters who are disciplined and curious. This path will require that you put in your own time studying and researching to find answers and ideas. My opinion is that this is a great path to take if you can handle it because you have more opportunities to explore the field of pentesting. Of course, it is not for everyone and still can be helped along with extra formal training and structure. In any case, you’ll want to refer to the “Display Your Skills” section later in this blog.

 

No matter which path you decide to pursue, always remember that you must build your reputation and trustworthiness in the field of security. When testing your skills, make sure you consider that testing against anything that you don’t own or have permission to work with can get you into trouble, possibly of the legal variety. Such an outcome can seriously impact your career options in this field as well as your freedom in some cases.

 

Build a Library


I strongly recommend to anyone interested in the field of pentesting that they build a library of resources they can call upon if needed. You should consider adding blogs or manuals of the following types:

 

Web Applications and Web Application Security Blogs Considering that many of the environments you will be assessing will have not only web servers but also applications of various types running on those servers, you will need experience and/or reference material on these environments. Since web applications are one of the easiest and quickest ways for skilled attackers to enter an organization, having information about and experience with these environments is a must.

 

A Reference Guide or Material on Tools Such as NMAP and Wireshark Many of the tools discussed in this blog are complex and have numerous options. Be sure to have manuals and guides on these tools.

 

Web Server Guides When performing pentesting, you will encounter many environments containing web servers that will need to be evaluated. While you can find information on the whole universe of web servers, I would at least include information on web servers such as Microsoft’s Internet Information Services (IIS), Apache, and perhaps nginx. While there are other web servers, they are less likely to be encountered and are not essential in many cases.

 

Operating System Guides Let’s face it: you will encounter a small number of operating systems in your testing. As such, you should include reference guides on Microsoft’s Windows, Linux, Unix, and Mac OS. Additionally, you will need to include reference material on mobile operating systems such as Android, iOS, and maybe Windows Mobile.

 

Infrastructure Guides You will need to have material on networking hardware such as Cisco devices, including routers, switches, and the like.

  • Wireless Guides With wireless present in so many different environments, you should include materials that cover wireless technologies.
  • Firewall Guides Firewall guides may be necessary for reference purposes.
  • A TCP/IP Guide This should be obvious considering that you will be working with the IPv4 and IPv6 protocols in most environments.
  • Kali Linux Reference Guide Since you will at some point be using Kali Linux in your pentesting career, a reference on this is a must.

 

There are many more that could be included on this list, and you undoubtedly will find a lot of possibilities to include in your own personal library. I also suggest obtaining guides and manuals on various hardware and equipment that you may encounter.

 

You’ll have to decide for yourself whether you should go with printed or digital guides. Personally, I find digital versions of most of my blogs and reference guides the best way to go because of the smaller size, which is less stressful on my back when I travel.

 

In fact, currently I carry a Google Nexus 7 (an older device, I know), but it is loaded not only with tools but also with other items such as Amazon’s Kindle app with my titles on it, as well as PDF manuals, reference apps, dictionaries, and whatever I find helpful. I love the device because it is small and powerful enough for my needs, and I can even add a case with a keyboard on it if I want to take notes (though the keyboard is small).

 

Practice Technical Writing


Since at the end of a test you will have to write reports and organize your findings, you must have well-developed skills in both. I recommend picking up a blog or taking a class on technical writing and report writing. Also, learn how to be organized and to document thoroughly; many IT and security professionals lack both skills.

 

Finally, since you will be doing a fair bit of writing as part of this career field, you need to bump up your spelling and grammar skills to be top notch. Use the tools in your favorite word processing program to analyze both your spelling and your grammar prior to giving your report to your client. Simple misspellings and poor grammar reflect upon you no matter how good your work may be otherwise.

 

Keep in mind that good technical writing is an acquired skill, and even mine (yes, as a published author) can still use practice to get better. In fact, if it wasn’t for my exceptionally talented developmental editor fixing my wording here and there, I would look a lot less talented (thanks again, Kim!).

 

Display Your Skills


In the world of pentesting, a lack of schooling is not something that will necessarily cause you to be unsuccessful. However, a lack of formal training may require you to prove yourself and your skills. Fortunately, there are many different ways for you to do this:

 

Consider starting a blog where you can share your knowledge, give advice, or show your research and ideas.

Open up a Twitter account where you can post links and information about different topics that may be of use to other individuals.

Look for magazines that publish security and pentesting articles. You may have to start with smaller sites and magazines and work your way up to larger publications or sites.

 

If you have the skills, participate in bug bounty programs that are sponsored by different developers. These projects seek to locate defects or flaws in software and provide information to software developers about the issue so they can address it as needed.

 

Create white papers for software or hardware vendors if the opportunity is available.

 

Consider presenting at a security conference or group. Major conferences such as DefCon and Black Hat have these opportunities. However, before doing such a presentation, make sure you have both the technical and the presentation skills, as they will be needed in equal measure. Consider attending these conferences before you attempt to present so you can accurately assess whether you are ready.

 

Remember, having a lack of schooling will not typically hinder your progress if you have quality skills that can be demonstrated. However, you will need to prove this to a greater extent than perhaps a more traditional student would have to. Bug bounties are a great way to prove your prowess but will require time and effort, not to mention skills.

 

It’s worth experimenting with exploitation frameworks such as Metasploit. Consider developing skills with a scripting language such as Python or Ruby so you can start automating various aspects of your tasks and even extend the capabilities of tools such as the Metasploit Framework.
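As a small example of the kind of automation worth practicing, here is a sketch of a TCP connect scan in plain Python. The target and port list are placeholders; run it only against hosts you own or have written permission to test.

# A small automation sketch: a TCP connect scan of a few common
# ports. Target and ports are placeholders for illustration only.
import socket

def connect_scan(host, ports, timeout=0.5):
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    print(connect_scan("127.0.0.1", [22, 80, 443, 3389]))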

 

Building a Test Lab for Penetration Testing


Let’s finish our voyage of discovery of the pentesting field by talking about how you can continue to develop your skills. The best way to gain experience is to get your hands dirty. Unfortunately, you can easily get into trouble if you’re not careful, because you cannot just choose some random targets and attack them with the various hacking tools and techniques discussed in this blog. Not only is doing so ethically wrong, but it is also illegal.

 

Therefore, the best way for you to practice what’s covered in this blog is to build your own lab environment. With it, you can practice with tools without finding yourself on the wrong end of the law.

 

Deciding to Build a Lab

When you’re a pentester, you can’t practice your skills in the open since attacking targets is illegal when you don’t have permission to do so. Therefore, you’ll need to have a lab environment that you own where you can test software and practice attacks without getting into trouble. When you have your own lab, you can practice to your heart’s content with a seemingly endless range of configurations and environments.

 

This is a huge advantage to you as a pentester because you will encounter many variations of environments in the field, and being able to customize the environment to more closely emulate what you see in the field will reap immediate benefits in your work.

 

Another advantage to testing within your own environment is that you can feel more comfortable about trying all the tools and techniques that you want to experiment with. You don’t have to worry if one of these tools or techniques happens to have catastrophic results, such as crashing or destroying your target (it will happen).

 

Because you’re working in a lab environment, it’s a simple matter for you to restore and rebuild your environment and try again with a different approach. This is not something you can do as easily if you don’t own the environment, not to mention the trouble you could get into if you crash someone else’s environment that you don’t have permission to interact with in the first place.

 

Finally, when you’re testing in an unknown environment, you don’t have an immediate way to confirm whether the results match reality. Setting up your own lab environment means that you know exactly what’s in place, so the results from your scans and exploration can be checked against what you expect. Examining known-good results in this way means you’ll have an easier time interpreting unfamiliar results accurately later.

 

All lab environments will be different, and numerous approaches are valid. Most importantly, you’ll want to build an environment that best fits your needs, so here are some questions to ask yourself:

 

  • What operating systems will you most likely encounter?
  • What operating system versions are needed?
  • What tools do you want to use?
  • What hardware are you most likely to encounter?
  • What configurations do you need to gain more experience with?
  • What should the network look like?
  • What does the server environment need to look like?
  • Do you need mobile operating systems?
  • Do you need to experiment with technologies like Active Directory?
  • Do you need to understand certain vulnerabilities that exist?
  • Are the tools you intend to use usable within a virtual environment?
  • Do you need any specialized applications to be present?
  • Do you need to emulate a client environment to experiment with different approaches to your test?

 

Answering these questions will help you start envisioning a design. Remember, you’ll have to meet specific hardware and software requirements in terms of memory, processing, or network access, to name a few, to get your systems up and running. To get a better handle on the requirements for your intended environment, you may need to refer to different vendor websites to see what the system requirements are and how you can deploy things within certain parameters. Then you’ll need to put all these requirements together to make everything work.

 

Considering Virtualization

One of the most common ways to create a lab environment is to use a technique referred to as virtualization. Virtualization is an extremely common technique in the IT field that is used to consolidate multiple machines to fewer machines and to isolate systems for better stability and security while doing development and testing.

 

Virtualization is great for setting up a lab because it allows for the rapid deployment and reconfiguration of systems; it also allows you to have multiple configurations available without having multiple physical machines lying around your house, each with its own custom environment. Instead, virtualization allows you to have a single laptop with several virtual environments hosted on it that you can test against, meaning everything is consolidated on one portable system, a standard that multiple physical machines could never meet.

 

Just about any environment that you’re going to encounter can be deployed into a virtual environment, with a few exceptions. Common operating systems such as Windows and Linux as well as Android can all be quickly and easily hosted within a virtual environment, along with all the various tools that we have talked about in this blog.

 

Here is how virtualization works: The host system is the physical system, including its operating system, on which the virtualization software is installed. Once you have the host in place and have installed the virtualization software on top of it, you can install your virtualized environments on top of all that.

 

These environments hosted on top of virtualization or within virtualization are the guests. Guests will have an operating system installed on the virtual system and will include the applications and tools all bundled to run on top of the virtualized environment.

 

In practice, a system will have one physical host with the potential to run multiple guests. In most cases, the only limitation on the number of guests that can be hosted on top of a given host is the amount of memory and other resources that are available to split among all the various guests as well as the host and have them all run at an acceptable performance level (which is a little trickier than it sounds).

 

When hosting guests, one of the questions that comes up is how much network access you need. In practice, virtualization software allows networks to be private, meaning that they are limited to an individual computer: all the guests on the computer can communicate among themselves and with the physical host, but not outside of that computer. (The host, however, can communicate with the network just as it would if the virtualization layer didn’t exist.)

 

A network can also be configured when using virtualization to have full access to the network resources both on and off the system. In that case, the guests will act like any other physical hosts on the network. Unless careful examination is made, a client or server anywhere else in the network will not see any difference between the virtual system and a physical system existing somewhere else.

 

There are other options for configuring the network, but keeping the network private might be a good idea for some of your testing as you get started. If you were to type in the wrong IP address or destination, or generate a lot of traffic as a result of your testing, the effects would be limited to that one system and would not impact anything else around you or cause potentially negative results.

 

Keep in mind that network access is something you can change on any guest at any time; you just have to consult with your virtualization software of choice to see how that’s done.
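As an example of how that change might be scripted, the sketch below assumes Oracle VirtualBox and a guest named labvm (both assumptions on my part) and uses the VBoxManage modifyvm command to switch the guest’s first network adapter to host-only mode. Other virtualization packages expose equivalent settings under different names.

# Hedged sketch: switching a guest's network mode from Python,
# assuming Oracle VirtualBox and a guest named "labvm". VBoxManage's
# modifyvm --nic1 switch accepts modes such as hostonly, intnet,
# nat, and bridged; consult your own software's documentation.
import subprocess

def set_guest_network(vm_name, mode):
    # The VM must be powered off before its NIC mode can be changed.
    subprocess.run(["VBoxManage", "modifyvm", vm_name, "--nic1", mode],
                   check=True)

set_guest_network("labvm", "hostonly")  # keep lab traffic off the real network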

 

Advantages of Virtualization

These are the advantages that the virtual machine model offers you as a pentester:

Testing malware in a virtual environment is highly recommended because it can greatly limit the potential damages that might happen if the malware is released to a live environment.

 

Testing different servers, applications, and configurations is a highly attractive option and is the reason you are building a lab with virtualization. Multiple configurations can be easily tested just by shutting down a guest and moving files from one system to another or one location to another and then just restarting the guest with the new configuration.

 

If during your testing and experimentation you happen to adversely damage or impact a guest, things can be easily repaired. In fact, in most cases, simply backing up your virtual machines prior to experimentation allows you to shut down a damaged virtual system and copy the backups over the damaged files and then restart the damaged system. That’s all it takes to get you back up and running.

 

You can set restore points or snapshots that are available in most virtualization packages prior to installing and testing new tools. In the event that something doesn’t go as expected, all you have to do is roll that guest back to a point in time prior to the changes being made, and once again you’re free to continue with your testing and try a different set of operations or procedures.

 

One of the biggest advantages of virtualization is that it’s much cheaper than having multiple physical systems. In addition, the lower power requirements, maintenance requirements, and portability make it a much more efficient way to go.

 

Disadvantages of Virtualization

Virtualization is an attractive option in just about every case in the IT field; however, nothing comes without its disadvantages, and virtualization is not exempt from this rule. In fact, virtualization, though an effective solution for many problems, should never be treated as a magic wand that can address any potential problem. The following are some situations where virtualization is just not a good candidate:

 

In most cases, the software that you choose to run in a virtual environment will run without any major issues. However, there are cases where software that needs direct access to hardware will fail in a virtual environment. Do your research before you commit to virtualization entirely.

 

Just as some software won’t work in virtualized environments, some hardware won’t work properly, or at all, in this environment either. For example, some wireless or Bluetooth adapters will not work properly in a virtual environment. If you need to work with such devices, you will probably need to stay with a physical system.

 

Though not necessarily a barrier against using virtualization, it is worth noting that the hardware requirements on the physical host will be greater than they would be if you hosted one environment on a physical system. How much greater the hardware requirements will be in terms of memory and processor is not something I can answer here because the requirements vary depending on what you choose to host on top of a given physical system.

 

What I can say is that the hardware requirements will be greater than they would be if you had a one-to-one relationship between the operating system, applications, and hardware.

 

The lists presented here are not meant to be exhaustive by any means. You should simply evaluate these issues for your own work given your choice of hardware and software as well as applications and virtualization packages because each combination can alter the results you might achieve.

 

Three popular virtualization packages are Microsoft Hyper-V, Oracle’s VirtualBox, and EMC’s VMware. There’s really no single way to create a lab based around virtualization; it is just a matter of figuring out your own requirements and what your pocketbook can handle. Be prepared to do a lot of reading and evaluating before you find the environment that fits you.

 

Getting Started and What You Will Need

When you build your lab, you can create a list of must-haves and a list of nice-to-haves. However, no matter what your lists look like, you must establish a foundation before building your lab on top of it.

 

I recommend that you go back and review the questions that you asked yourself early on when you were establishing your motivations for building your lab. Then look at the virtualization software packages that are attractive to you and try to nail down the specific one that is right for you. Then you can start figuring out what your foundation looks like in terms of operating system, hardware requirements, and network access.

 

Remember that you can choose from numerous approaches when creating your lab. There is no one-size-fits-all approach that everyone can work with. However, you can establish some expected minimums that you will have to consider as starting points.

 

The basic requirements that you should consider sticking to are as follows:

 

In terms of memory, the more, the merrier. Ideally, any system on which you’ll install your tools and testing environment should have no less than 8 GB of RAM; otherwise, you’ll sacrifice performance and in some cases won’t be able to run the tools you need to perform the test. While you can run virtualization with less RAM, 32 GB of RAM is recommended to support virtualization and obtain acceptable performance.

 

Keep a close eye on the amount of hard drive space that you have available. You can quickly consume all the available drive space with just operating systems, without any applications or data. So plan for the appropriate amount of drive space as well as free space for applications and data in paging files and temporary files. Plan on having at least 1 TB of space available.

 

Consider using a solid-state drive (SSD) instead of a traditional drive (which has spinning discs inside). An SSD will give you much better performance than a traditional drive, a difference that becomes much more noticeable when you’re running many things that hit the disk at once.

 

Start thinking about your host operating system. Any of the major operating systems is suitable, but keep in mind that not every virtualization package is available for every operating system. For guests, you can use intentionally vulnerable virtual machines such as Metasploitable, a Linux OS that is designed to be attacked for pentesting practice and should never be used in a production environment.

Check to see if your hardware of choice supports monitor mode with respect to wireless adapters.

 

Installing Software

After you’ve set up your environment, you’ll need to determine which tools to use. We’ve discussed many different types of tools that you can use during a pentest, and there are plenty of others that are not discussed in this blog.

 

The following lists contain tools that are must-haves for a pentester. Consider them something to get you started, but don’t feel that you have to stick with these tools exclusively. You should always be on the lookout for tools that complement the ones listed here.

 

The following are scanners:

NMAP NMAP can be acquired at nmap.org, the website of the developer. Since this tool is such a flexible and powerful piece of cross-platform software, you should seriously consider making it part of your toolkit.

 

Angry IP Available at angryip.org, this piece of software is a simple way of locating which hosts are up or down on a network. While this tool’s functionality can be replicated with a few switches in NMAP, it may still prove a good fit for your toolkit.

 

The following are password-cracking tools:

L0phtCrack This can be obtained from l0phtcrack.com.

John the Ripper This can be obtained from openwall.com/john.

 

Trinity Rescue Kit Another multipurpose tool, this is useful for performing password resets on a local computer. It can be downloaded from trinityhome.org.

 

The following are sniffers:

Wireshark The most popular packet sniffer in the IT industry, Wireshark is available from wireshark.org. It’s fully customizable and feature packed, with plenty of documentation and help to be found online and in print. Wireshark boasts cross-platform support and consistency across those platforms.

 

Tcpdump This is a popular command-line sniffer available on both the Unix and Linux platforms. See www.tcpdump.org.

Windump This is a version of tcpdump but ported to the Windows platform.
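If you want to drive packet capture from your own scripts, Scapy offers a Python interface over the same Berkeley Packet Filter syntax tcpdump uses. A minimal sketch, assuming Scapy is installed and you have the capture privileges your OS requires:

# Print a one-line summary of ten HTTP packets as they are captured.
from scapy.all import sniff  # pip install scapy

sniff(filter="tcp port 80",            # BPF filter, same syntax as tcpdump
      prn=lambda pkt: print(pkt.summary()),
      count=10)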

 

The following are wireless tools:

  • inSSIDer This is a wireless network detection and location tool from MetaGeek. See metageek.com.
  • Bluesnarfer This can be obtained from the repositories of any Linux distribution.
  • Aircrack-ng This is a suite of tools used to target and assess wireless networks. See www.aircrack-ng.org.