Intrusion Detection Systems
As you recall, intrusion detection systems (IDSs) act as a burglar alarm that provides some of the earliest indications of an attack or other suspicious activity. While they will not stop the activity from taking place, they will provide notice of it. Remember, these devices are positioned to monitor a network or host.
While many notifications that come from the system may be innocuous, those detecting and responding to potential misuse or attacks must be able to act on the alert that is provided. This blog explains what an intrusion detection system (IDS) is and how it works, with examples.
An IDS is a safeguard that can take one of two different forms: a software version, which is an application that can be configured to the consumer’s needs, or a hardware version, which is a physical device and likely a higher performing one.
Both are valid ways to monitor your system. In either form, the IDS gathers and analyzes information generated by the computer, network, or appliance.
A network-based intrusion detection system (NIDS) is an IDS that fits into this category. It can detect suspicious activity on a network, such as SYN floods, MAC floods, or similar types of behavior, making it the most advantageous option for deployment onto a network.
A NIDS is capable of detecting a great number of different activities, both suspicious and malicious in nature, which makes it a great candidate for monitoring the network. It can detect the following:
Repeated probes of the available services on your machines
Connections from unusual locations
Repeated log-in attempts from remote hosts
Arbitrary data in log files, indicating an attempt at creating either a denial of service or a crashed service
Changes in traffic patterns
Use of unusual protocols
Application traffic
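To make one of these detections concrete, here is a minimal sketch of how a NIDS might flag a possible SYN flood. Everything here is illustrative: the packet dictionaries stand in for real sniffed traffic, and the threshold and window values are invented, not taken from any particular product.

```python
from collections import defaultdict

# Illustrative thresholds -- a real NIDS would tune these per network.
SYN_THRESHOLD = 100   # SYNs from one source per window before alerting
WINDOW_SECONDS = 10

def detect_syn_flood(packets):
    """packets: iterable of dicts with 'src', 'flags', 'time' keys."""
    counts = defaultdict(list)
    alerts = []
    for pkt in packets:
        if pkt.get("flags") != "S":      # only count bare SYN packets
            continue
        times = counts[pkt["src"]]
        times.append(pkt["time"])
        # drop timestamps that have fallen out of the sliding window
        while times and pkt["time"] - times[0] > WINDOW_SECONDS:
            times.pop(0)
        if len(times) > SYN_THRESHOLD:
            alerts.append(pkt["src"])
    return sorted(set(alerts))
```

The same counting approach generalizes to other rate-based detections in the list above, such as repeated probes or repeated log-in attempts.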
The intrusion detection process
The intrusion detection process combines information gathered from several processes. The process is designed to respond to packets that are sniffed and then analyzed. In this example, the information is sniffed from a network by a host or device running the network sensor, which sniffs and analyzes packets off a local segment.
1. A host creates a network packet.
2. The sensor sniffs the packet off the network segment.
3. The sensor and the IDS match the packet against known signatures of misuse.
4. The command console receives and displays the alert, which notifies the security administrator or system owner of the intrusion.
5. The response is tailored to respond to the incident as desired by the system owner.
6. The alert is logged for future analysis and reference.
7. A report is created with the incident detailed.
8. The alert is compared with other data to determine if a pattern exists or if there is an indication of an extended attack.
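The core of steps 2 through 6 can be sketched in a few lines. This is a toy model, not a real sensor: the signature list, packet fields, and alert format are all invented for illustration.

```python
# Invented misuse signatures: each pairs a name with a match test.
SIGNATURES = [
    {"name": "Telnet probe", "match": lambda p: p["dst_port"] in (23, 2323)},
    {"name": "Suspicious payload", "match": lambda p: b"/etc/passwd" in p["payload"]},
]

def inspect(packet, log):
    """Match one sniffed packet against known signatures (step 3),
    produce alerts (step 4), and record them for later analysis (step 6)."""
    alerts = []
    for sig in SIGNATURES:
        if sig["match"](packet):
            alert = f"ALERT: {sig['name']} from {packet['src']}"
            alerts.append(alert)   # the console would display this
            log.append(alert)      # retained for future reference
    return alerts
```

Steps 5, 7, and 8 (response, reporting, and correlation) would build on the log this loop accumulates.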
Components of Host-based IDSs
A host-based IDS (HIDS) is another type of IDS that makes an appearance in large network environments, and it is solely responsible for monitoring activity of many different types on an individual system rather than a network.
Host-based IDSs can get kind of confusing as far as what features they are supposed to have. There are so many vendors offering so many different types of host-based IDSs, and they’ve been around for so long that the feature sets vary widely from one to the next.
Much like a network-based IDS, a host-based IDS has a command console where all the monitoring and management of the system takes place. This piece of software is the component that is used to make any changes or updates to the system as required.
This management point can be placed on another system where the system admin can access it remotely through specialized software or through a web browser.
In some cases the console may be accessible only on the local system; in that case, the admin needs to go to that system to manage it or find another way to access it remotely.
The second component in the HIDS is known as an agent. Much like a network sensor, an agent is responsible for monitoring and reporting any activities that occur on the system that are out of the ordinary or suspect.
The agent will be deployed to the target system and monitor activities such as permission usage, changes to system settings, file modifications, and other suspicious activity on the system.
Limitations of IDS
An IDS is capable of monitoring and alerting system administrators to what is happening on their network, but it does have its limitations as well as situations it’s just not suitable for.
To ensure that you work with these systems in a way that will get you the most return on your investment, you should understand their benefits as well as their limitations.
When you’ve identified problems in a client’s environment and decided that your strategy is going to include one or more IDSs, think about the monitoring goals you’re trying to address.
Remember that even though IDSs are great systems that can help you tighten up and harden your network, using them incorrectly can give you a false sense of security—you may think that they’re doing their job but in reality, they are incapable of doing what you need them to do.
For example, a network IDS is great at detecting traffic and malicious activity on a network, but it’s not so good when you try to monitor activities such as changes in files or system configuration settings on individual hosts.
Or, the IDS may chatter away about problems that it perceives but that don’t actually exist: something benign triggered the system to fire an alert that the IDS mistook for an attack. These false alarms are known as false positives.
Also, do not make the mistake that many new security professionals make: thinking that an IDS is capable of responding to and stopping a threat. Remember the D in IDS stands for detection, and detection means just that—it will detect the issue but it doesn’t react or respond.
In fact, this last point illustrates, indirectly, the reason why you should always implement security in layers and not as a standalone component: a standalone component would, in the case of an IDS, tell you attacks are happening though it won’t do anything about it.
Never expect an IDS to be able to detect and notify you of every event on your network that is suspicious; it will only detect and report what you tell it to.
Also, consider the fact that an IDS is programmed to detect specific types of attacks, and since attacks evolve rapidly, an IDS will not detect attacks it is not programmed or designed to recognize. Remember, an IDS is a tool that is designed to assist you; it is not a substitute for good security skills or due diligence.
Investigation of an Event
An IDS provides a way of detecting an attack, but not dealing with it. An IDS is limited as to the potential actions it can take when an attack or some sort of activity occurs.
An IDS observes, compares, and detects the intrusion and will report it. The system or network administrator has to follow up. All the system can do is notify you if something isn’t right; it can’t list the individual reasons why.
Information gathered from an IDS can be generated quite rapidly, and this data requires careful analysis in order to ensure that every potential activity that may be harmful is caught.
You will have the task of developing and implementing a plan to analyze the sea of data that will be generated and ensuring that any questionable activity is caught.
Located and working right alongside IDSs in many situations is a class of devices collectively known as firewalls. In simple terms, a firewall is a device used to control the access to and from or in and out of a network.
Since their initial introduction many years ago, firewalls have undergone tremendous changes to better protect the networks they are placed on. Because of their capabilities, firewalls have become an increasingly important component of network security, and you must have a firm command of the technology.
In most cases, a firewall will be located on the perimeter areas of a network, where it can best block or control the flow of traffic into and out of the client’s network. It is because of this ideal placement that firewalls are able to fully regulate and control the types of traffic.
The flow of traffic across a firewall is determined by a series of rules that the system owner configures based on their particular needs. For example, a system owner could choose to allow web traffic to pass but block other types of traffic, such as file sharing protocols, if they decided those were unnecessary and presented a security risk.
With the earliest types of firewalls, the process of allowing or disallowing access was fairly easy to configure relative to today’s standards. Older devices only required the configuration of rules designed to look at some of the information included in the header of a packet.
While these types of firewalls still exist and modern firewalls incorporate the same rule system, nowadays firewalls have evolved to thwart and deal with seemingly endless and more complex forms of attack.
With the rapid increase and creativity of attacks, the firewalls of the past have had to evolve or face the fact that they would not be able to counter the problems.
To counter the threats that have emerged, firewalls have added new features in order to be better prepared for what they will face when deployed. The result has been firewalls that are much better prepared than at any point in the past to deal with and control unauthorized and undesirable behavior.
If you were to look up firewalls using a simple Google search, you would undoubtedly get numerous results, many of those linking back to the various vendors of firewall software and hardware. You’d also quickly find that each and every vendor has their own way of describing a firewall.
However, when you review this information, be aware that vendors have found creative ways to describe their products in an effort to sound compelling to potential customers.
If we boil away all the marketing spin and glossy ads and flowery language, you’ll find that firewalls generally work very similarly at some level.
Firewalls can operate in one of two basic modes:
Packet filtering
Proxy servers or application gateways
Packet filtering represents what could be thought of as the first generation of firewalls. The firewalls that would be classified as packet filtering firewalls may seem primitive by the standards of later generations of firewalls, but they did have their place and they still are used quite effectively in numerous deployments.
To understand why packet filtering firewalls are still in use, let’s look at the operation of a packet filtering firewall. For a firewall to be a true packet filtering device or system, it has to be looking at each and every packet at a very basic level—which means it will look at where a piece of information (packet) is coming from, where it’s going to, and the port or protocol that it is using.
To properly filter the desired and undesired traffic, the system or network administrator configures the firewall with the rules needed to perform the appropriate action on a packet when it meets the criteria in a given rule.
When we look closely at a packet filtering firewall, it’s quite easy to see that it is very limited in what it can do. It is looking at only a very limited amount of information in regard to a packet.
As was previously mentioned, a packet filtering firewall only looks at where a packet is coming from, where it’s going to, and the port or protocol that it is using; anything else that may be present in that packet cannot be analyzed by this type of firewall.
Implementation of a packet filtering firewall is quite simple and it does exactly what it’s been designed to do, but because of the fact that it’s only able to look at limited amounts of information on a packet, anything that falls outside of these items is essentially invisible to this type of firewall.
In practice, this means that while the packet filtering firewall can control the flow of traffic, there still is the potential for attacks to be performed successfully.
This type of firewall is still in use, which raises the question of why, if they are so simple in what they do. While the simplicity of design does offer benefits in the form of performance, this type of firewall looks at only the most basic pieces of information and does not look any deeper.
This type of firewall is effective when you know that you will not be using a certain protocol on your network at all; you could simply block it so it can’t come on or off the network.
For example, if you know FTP is a security risk and you decide not to use FTP on your network, you can use a packet filtering firewall to block it from even coming onto the network in the first place.
There’s no need to filter what is inside a packet using FTP when you know you don’t need it anyway, so a packet filtering firewall can just drop the packets outright instead of passing them on for further analysis.
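The packet filtering behavior described above, including the FTP example, can be sketched as a first-match rule list over header fields only. The rule format, ports, and default action here are invented for illustration; real firewalls express this in their own rule syntax.

```python
# Each rule examines only header fields: protocol and destination port.
# Nothing inside the packet payload is ever inspected.
RULES = [
    ("tcp", 21, "deny"),    # FTP control channel, blocked outright
    ("tcp", 20, "deny"),    # FTP data channel
    ("tcp", 80, "allow"),   # web traffic permitted
    ("tcp", 443, "allow"),
]
DEFAULT_ACTION = "deny"     # anything unmatched is dropped

def filter_packet(protocol, dst_port):
    """Return the action for the first rule that matches, else the default."""
    for proto, port, action in RULES:
        if proto == protocol and port == dst_port:
            return action
    return DEFAULT_ACTION
```

Note that nothing in this logic can see a packet’s contents, which is exactly the limitation discussed above.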
A later generation of firewalls is known as proxy servers or, as they are sometimes known, application gateways. With a proxy server added to the mix, the firewall now has the built-in or native ability to do a more detailed inspection or analysis of a packet’s contents in addition to, or instead of, what is in the packet header.
In short, this means that this type of firewall has the ability to start looking within a packet. To relate this type of firewall to a packet filtering firewall, think of a packet filtering firewall only analyzing the address label on an envelope.
On the other hand, a proxy or application-level firewall is going to take a closer look at what is inside the envelope and how it’s laid out and packaged before making a determination as to what to do.
With the ability to look deeper into traffic as it moves back and forth across the firewall, system admins are able to fine-tune to a greater degree the types of traffic that are allowed or blocked.
In practice, proxy servers are pieces of software that are designed and placed based on the idea that they will intercept communications content. The proxy will observe and recognize incoming requests and, on behalf of the client, make a request to the server.
The net result is that no client ever makes direct contact with the server and the proxy acts as the go-between or man-in-the-middle.
As was stated previously, this setup with a proxy server can allow or deny traffic based on actual information within the packet. The downside is more analysis means more overhead, so a price in performance is paid.
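The mediation just described can be compressed into a short sketch: the client hands its request to the proxy, the proxy inspects the content, and only the proxy ever contacts the server. The blocked-word list and the `fetch_from_server` placeholder are invented for illustration; a real proxy would perform a genuine upstream request and far richer content analysis.

```python
# Invented content policy for illustration only.
BLOCKED_WORDS = {"confidential", "exploit"}

def fetch_from_server(url):
    """Placeholder for the proxy's real upstream request to the server."""
    return f"response for {url}"

def proxy_request(url, body=""):
    # Inspection looks inside the payload, not just the header,
    # which is what distinguishes this from packet filtering.
    if any(word in body.lower() for word in BLOCKED_WORDS):
        return "403 blocked by proxy policy"
    # The proxy, never the client, talks to the server.
    return fetch_from_server(url)
```

The extra inspection step is also where the performance cost mentioned above comes from: every request is opened and examined before it is forwarded.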
Limitations of a Firewall
Even with this cursory examination of firewalls, it seems as if they have a lot of power and can go a long way toward protecting a network. However, there are limitations on this technology and there are some things firewalls are just not suited for.
Having an understanding of what they can and can’t help you with is essential to the proper and effective use of firewalls.
Before you decide to purchase or otherwise acquire a firewall technology, ensure that it can handle the specific issue you’re trying to address and determine which type of firewall you need to address that issue properly.
Always know what your goals are when you build out a design intended to make the network environment more secure. Unfortunately, many companies acquire firewalls, as well as other devices that matter, and don’t have a clear goal or path in mind as to what they are going to address and how.
Simply put, know where you’re going before you turn the key and hit the gas. In our case, choosing the wrong firewall for a job will allow for the possibility of malicious or accidental things happening, and it may even give you a false sense of security because you think it is working when in reality it is ill-suited for the way you’ve deployed it.
The following areas represent the types of actions and events that a firewall will provide little or no value in stopping:
Viruses While some firewalls do include the ability to scan for and block viruses, this is not defined as an inherent ability of a firewall and should not be relied on.
Also consider the fact that as viruses evolve and take on new forms, firewalls will most likely lose their ability to detect them easily and will need to be updated. In most cases, antivirus software in the firewalls is not and should not be a replacement for system-resident antivirus.
Misuse This is another hard issue for a firewall to address as employees already have a higher level of access to the system. Put this fact together with the ability of an employee to ignore mandates to not bring in their own software or download software from the Internet and you have a recipe for disaster. Firewalls cannot perform well against intent.
Secondary Connections In some situations secondary access is present, and this presents a major problem. For example, a firewall may be in place, but an employee can unplug the phone line, plug it into their computer, and then connect that computer to the network with the modem running, thus opening a backdoor and circumventing the firewall.
Social Engineering If a network administrator gives out firewall information to someone claiming to be calling from your ISP, with no verification, there is a serious problem.
Poor Design If a firewall design has not been well thought out or implemented, the net result is a firewall that is less like a wall and more like Swiss cheese. Always ensure that proper security policy and practices are followed.
Implementing a Firewall
As with many things, firewalls have many different ways of being deployed, and there is no one standard way for deploying these key components of network security.
However, we can discuss the basic configurations that can be used in the options that are available, and then you can decide if these need to be enhanced or modified in any way to get a result more suited to your needs. Let’s take a look at some of these options:
One way of implementing a firewall is the use of what is known as a multihomed device. A multihomed device is identified as a device that has three or more network adapters within it.
Each one of the network adapters will typically be connected to a different network, and then the firewall administrator will be tasked with configuring rules to determine how packets will be forwarded or denied between the different interfaces. This type of device and setup is not uncommon and is observed quite a bit out in the wild.
However, there are some key points to remember when discussing this type of device. As far as benefits go, this type of configuration offers the ability to set up a perimeter network or DMZ (which we’ll talk about in a moment) using just one device to do so.
This setup also has the benefit of simplicity due to the fact that it is one device rather than a set of multiple devices, thus reducing administrative overhead and maintenance.
As far as disadvantages go, this device represents a potential single point of failure, which means that if the device is compromised or configured improperly, it could allow blanket access or at least unwanted access to different parts of the operating environment.
Making things a little more interesting is the configuration known as a screened host. This type of setup combines a packet filtering firewall with a proxy server to achieve a faster and more efficient setup, but at the cost of somewhat decreased security.
This type of setup is easily recognizable just by analyzing devices in place. In this setup, as traffic attempts to enter a protected network it will first encounter a router that will do packet filtering on the traffic.
Then, if packet filtering allows it to pass, it will encounter a proxy, which will, in turn, do its own filtering, such as looking for restricted content or disallowed types of traffic. This type of setup is often used to set up a perimeter network also known as a DMZ (demilitarized zone).
DMZs are an important part of network security. To make things simple a DMZ can be visualized as a limited or small network sandwiched between two firewalls; outside these firewalls, you’ll have the outside world (Internet) and on the other extreme you’ll have the intranet, which is the client’s protected network.
The idea behind this type of deployment is that publicly accessible or available services such as web servers can be hosted in the DMZ. For example, if a client wants to host their own web server and make the content available to the public, they could create a DMZ and place the web server within this zone.
Without a DMZ and just a single firewall, you would have a choice to make: you would have to put the web server either on the Internet side or on the intranet side of the firewall.
Neither of these options is practical. If the server were placed on the Internet side, it would be completely exposed with no protection; if it were placed on the client’s own network, you would have to give the outside world access to the client’s network, and that opens the door to a lot of potential mischief.
However, by using the DMZ you avoid both of these issues by having only selected traffic come past the Internet-facing firewall to access the web server whereas no traffic will be allowed to pass from the outside through the inner firewall that separates a DMZ from the client’s network. Of course, there are different restrictions on traffic leaving from the client’s network.
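The two-firewall arrangement just described can be modeled as two independent rule sets. The zone names and ports below are invented for illustration: the outer firewall admits only web traffic into the DMZ, and the inner firewall admits nothing from outside into the intranet.

```python
# Outer firewall: internet -> DMZ, web traffic only.
OUTER_FIREWALL = {
    ("internet", "dmz", 80): "allow",
    ("internet", "dmz", 443): "allow",
}

# Inner firewall: no externally originated traffic reaches the intranet.
# (Traffic originating inside the intranet would be covered by other rules.)
INNER_FIREWALL = {}

def allowed(firewall, src_zone, dst_zone, port):
    """Default deny: anything not explicitly listed is blocked."""
    return firewall.get((src_zone, dst_zone, port), "deny") == "allow"
```

The key property is that even an attacker who compromises the web server in the DMZ still faces a second, stricter firewall before reaching the client’s network.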
Authoring a Firewall Policy
Before you place a firewall, you need a plan, one that defines how you will configure the firewall and what is expected; this is the role of policy.
The policy will be the blueprint that dictates how the firewall is installed, configured, and managed. It will make sure that the solution is addressing the correct issues in the desired way and reduces the chances of anything undesired occurring.
For a firewall to be correctly designed and implemented, the firewall policy must be in place ahead of time. The firewall policy will represent a small subset of the overall organizational security policy.
The firewall policy will fit into the overall company security policy in some fashion and uphold the organization’s security goals, but enforce and support those goals with the firewall device.
The firewall policy you create will usually approach the problem of controlling traffic in and out of an organization in one of two ways. The first option is to implicitly allow everything and only explicitly deny those things that you do not want. The other option is to implicitly deny everything and only allow those things you know you need.
The two options represent drastically different methods in configuring the firewall. In the first option, you are allowing everything unless you say otherwise, whereas with the second you will not allow anything unless you explicitly say otherwise. Obviously one is much more secure by default than the other.
Consider the option of implicit deny, which is the viewpoint that assumes all traffic is denied, except that which has been identified as explicitly being allowed. Usually, this turns out to be much easier in the long run for the network/security administrator.
For example, visualize creating a list of all the ports Trojans use plus all the ports your applications are authorized to use and then creating rules to block each of them. Contrast that with creating a list of what the users are permitted to use and granting them access to those services and applications explicitly.
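The contrast between the two stances can be shown in a few lines. The port numbers and rule entries are invented; the only thing that differs between the two policies is the default action applied when no explicit rule matches.

```python
def evaluate(packet_port, explicit_rules, default_action):
    """Return the action for a port: an explicit rule wins, else the default."""
    return explicit_rules.get(packet_port, default_action)

# Implicit allow: you must enumerate everything bad, a list that is
# never complete as new malware ports appear.
implicit_allow = ({6667: "deny", 31337: "deny"}, "allow")

# Implicit deny: you enumerate only what users legitimately need.
implicit_deny = ({80: "allow", 443: "allow", 25: "allow"}, "deny")
```

An unlisted Trojan port slips straight through the implicit-allow policy but is stopped by the implicit-deny policy, which is why the latter is usually easier to keep secure over time.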
Network Connection Policy
This portion of the policy covers the types of devices and connections that are permitted to connect to the company-owned network. You can expect to find information relating to the network operating system, types of devices, device configuration, and communication types.
Physical Security Controls
Physical security controls represent one of the most visible forms of security controls. Controls in this category include such items as barriers, guards, cameras, locks, and other types of measures.
Ultimately, physical controls are designed to protect people, facilities, and equipment more directly than the other types of controls do.
Some of the preventive security controls include the following:
Alternate power sources
Flood management
Data backup
Fences
Human guards
Locks
Fire-suppression systems
Biometrics
Generally, you can rely on your power company to provide your organization with power that is clean, consistent, and adequate, but this isn’t always the case. Anyone who has worked in an office building or another type of setting has noticed at the very least a light flicker if not a complete blackout. Alternate power sources safeguard against these problems to various degrees.
Hurricane Katrina showed us how devastating a natural disaster can be, but the disaster wasn’t just the hurricane—it was the flood that came with it. You can’t necessarily stop a flood, but you can exercise flood management strategies to soften the impact.
Choosing a facility in a location that is not prone to flooding is one option. Having adequate drainage and similar measures can also be of assistance. Finally, mounting items such as servers several inches off the floor can be a help as well.
Data backup is another form of physical control that is commonly used to safeguard assets. Never underestimate the fact that backing up critical systems is one of the most important tools that you have at your disposal. Such procedures provide vital protection against hardware failure and other types of system failure.
Not all backups are created equal, and the right backup makes all the difference:
Full backups are the complete backing up of all data on a volume; these types of backups typically take the longest to run.
Incremental backups copy only those files and other data that have changed since the last backup of any type. The advantage is that much less time is required to run the backup.
The disadvantage is that restoring a system takes more time, since the last full backup and every subsequent incremental backup must be applied.
Differential backups strike a balance between backup time and restoration speed. A differential backup copies everything on a volume that has changed since the last full backup.
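The selection logic behind the three backup types can be sketched using file modification times. The file records and timestamps below are invented; real backup software tracks changes with archive bits, snapshots, or change journals rather than raw mtimes.

```python
def select_files(files, backup_type, last_full, last_backup):
    """files: dict of name -> modification time. Returns the names to copy.
    last_full: time of the last full backup.
    last_backup: time of the most recent backup of any type."""
    if backup_type == "full":
        return set(files)                                        # everything
    if backup_type == "incremental":
        return {f for f, m in files.items() if m > last_backup}  # since any backup
    if backup_type == "differential":
        return {f for f, m in files.items() if m > last_full}    # since last full
    raise ValueError(f"unknown backup type: {backup_type}")
```

Because a differential always reaches back to the last full backup, a restore needs only two backup sets, whereas an incremental restore needs the full backup plus every incremental since.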
Fences are a physical control that represents a barrier that deters casual trespassers. While some organizations are willing to install tall fences with barbed wire and other features, that is not always the case.
Typically the fence will be designed to meet the security profile of the organization, so if your company is a bakery instead of performing duties vital to national security, the fence design will be different because there are different items to protect.
Guards provide a security measure that can react to the unexpected as the human element is uniquely able to do. When it comes down to it, technology can do quite a bit, but it cannot replace the human element and brain.
Additionally, once an intruder makes the decision to breach security, guards are a quick responding defense against them actually reaching critical assets.
The most common form of physical control is the ever-popular lock. Locks can take many forms, including key locks, cipher locks, warded locks, and other types of locks, all designed to secure assets.
Navigating the Path to Job Success
Penetration testing can be both an exciting and rewarding job and career. With the rapid changes in technology and the ever-increasing number of threats and instability in the world, your life will never be boring.
As hackers ratchet up the number and ferocity of their attacks and gain ever more sensitive information with increasing regularity, pentesters who are able to identify flaws, understand each of them, and demonstrate their business impact through mindful exploitation are an important piece of the defensive puzzle for many organizations.
This blog will highlight some nontechnical tips as you start down the path to becoming a pentester.
In this blog, you’ll learn to:
Choose a career path
Build a reference library
Pick some tools to practice with
Practice your technical writing skills
Choosing Your Career Path
Over the many years that I have worked with clients and students, a question I often encounter is “How do I get into the field of penetration testing?” Unfortunately, the answer is not as straightforward as you may think.
There are many paths to becoming a pentester, and this section will examine just a few of the potential paths that you may take. Remember that your own individual journey may be different from the ones outlined here. In fact, you may find that your journey can change paths several times and still get you to your goal.
For me, my journey into the world of pentesting started with me tinkering with technology at a very young age. I always loved taking hardware apart and trying different things.
I also wanted to know what every feature on a piece of software was supposed to do, and I wanted to find out how I could make software do things it wasn’t supposed to be able to do. My formal education and experience came some years later, after I had done my tinkering, read numerous blogs, and done a lot of hands-on work.
These are some of the possible paths you may choose to take on your way to becoming a pentester:
Security or IT Person Moving into Pentesting This is a common path where someone starts in the IT area, then trains and transitions to a position as a pentester.
This is popular in enterprise environments and other large organizations where plenty of opportunities exist to cross-train into other positions and possibly shadow current personnel.
This approach, however, has its downsides. In order to transition roles, you may need to put in your own time and money for a period of time. Typically this means you may need to learn some of the basics on your own and be willing to work outside of your current position prior to being able to formally transition.
This extra time and effort not only shows that you are willing to commit to and invest in yourself, but it also demonstrates to management that you are prepared to make the jump from one job to another. In the case of pentesting, you may even be able to participate in or observe a test and participate in the analysis of data and results with experienced pentesters.
People with existing IT skills will have an advantage, as many of these skills—such as networking, operating systems, and management principles—will be used in the process of testing.
Working for a Security Company Doing Pentesting This type of path is best suited to those who already have existing skills that have been developed over many years.
The individuals taking this route will already have strong general IT experience as well as some degree of pentesting experience. Some security companies will hire these individuals and finish off their training by working with current teams.
Those who do not have prior experience at any level doing testing of this sort will find this path somewhat tough to follow. Although some security companies may be willing to hire inexperienced testers and just train them as needed, many companies will not want to assume this burden in both cost and the time it takes to get the individual proficient enough to do a live test.
Flying Solo For those who are more ambitious and adventurous, the option to start their own small business that specializes in pentesting may be an option.
In this path, an individual starts their own business doing testing for local businesses and builds a name and experience at the same time. This may be ideal for those who need flexibility, are self-starters, and are OK being responsible for both testing and business operations.
This path is perhaps the toughest one to take but can allow for a lot of possibilities for those self-starters who are disciplined and curious.
This path will require that you put in your own time studying and researching to find answers and ideas. My opinion is that this is a great path to take if you can handle it because you have more opportunities to explore the field of pentesting.
Of course, it is not for everyone and still can be helped along with extra formal training and structure. In any case, you’ll want to refer to the “Display Your Skills” section later in this blog.
No matter which path you decide to pursue, always remember that you must build your reputation and trustworthiness in the field of security.
When testing your skills, make sure you consider that testing against anything that you don’t own or have permission to work with can get you into trouble, possibly of the legal variety.
Such an outcome can seriously impact your career options in this field as well as your freedom in some cases.
Build a Library
I strongly recommend to anyone interested in the field of pentesting that they build a library of resources they can call upon if needed. You should consider adding blogs or manuals of the following types:
Web Applications and Web Application Security Blogs Considering that many of the environments you will be assessing will have not only web servers but also applications of various types running on those web servers, you will need experience and/or reference material on these environments.
Since web applications are one of the easiest and quickest ways for skilled attackers to enter an organization, having information about and experience with these environments is a must.
A Reference Guide or Material on Tools Such as NMAP and Wireshark Many of the tools discussed in this blog are complex and have numerous options. Be sure to have manuals and guides on these tools.
Web Server Guides When performing pentesting, you will encounter many environments that have web servers in them that will need to be evaluated.
While you can find information on the whole universe of web servers, I would at least include information on web servers such as Microsoft’s Internet Information Services (IIS), Apache, and perhaps nginx.
While there are other web servers, they are less likely to be encountered and are not essential in many cases.
Operating System Guides Let’s face it: you will encounter a small number of operating systems in your testing. As such, you should include reference guides on Microsoft’s Windows, Linux, Unix, and Mac OS. Additionally, you will need to include reference material on mobile operating systems such as Android, iOS, and maybe Windows Mobile.
Infrastructure Guides You will need to have material on networking hardware such as Cisco devices, including routers, switches, and the like.
Wireless Guides With a wireless present in so many different environments, you should include materials that cover wireless technologies.
Firewall Guides Firewall guides may be necessary for reference purposes.
A TCP/IP Guide This should be obvious considering that you will be working with the IPv4 and IPv6 protocols in most environments.
Kali Linux Reference Guide Since you will at some point be using Kali Linux in your pentesting career, a reference on this is a must.
There are many more that could be included on this list, and you undoubtedly will find a lot of possibilities to include in your own personal library. I also suggest obtaining guides and manuals on various hardware and equipment that you may encounter.
You’ll have to decide for yourself whether you should go with printed or digital guides. Personally, I find digital versions of most of my blogs and reference guides the best way to go because of the smaller size, which is less stressful on my back when I travel.
In fact, currently I carry a Google Nexus 7 (an older device, I know), but it is loaded not only with tools but also with other items such as Amazon’s Kindle app with my titles on it, as well as PDF manuals, reference apps, dictionaries, and whatever I find helpful.
I love the device because it is small and powerful enough for my needs, and I can even add a case with a keyboard on it if I want to take notes (though the keyboard is small).
Practice Technical Writing
Since at the end of a test you will have to write reports and organize your findings, you must have well-developed skills in both. I recommend picking up a blog or taking a class on how to do technical writing and report writing. Also, learn how to be organized and document thoroughly; many IT and security professionals lack both skills.
Finally, since you will be doing a fair bit of writing as part of this career field, you need to bump up your spelling and grammar skills to be top notch.
Use the tools in your favorite word processing program to analyze both your spelling and your grammar prior to giving your report to your client. Simple misspellings and poor grammar reflect upon you no matter how good your work may be otherwise.
Keep in mind that good technical writing is an acquired skill, and even mine (yes, as a published author) can still use practice to get better. In fact, if it wasn’t for my exceptionally talented developmental editor fixing my wording here and there, I would look a lot less talented (thanks again, Kim!).
Display Your Skills
In the world of pentesting, a lack of formal schooling will not by itself make you unsuccessful. However, without formal training you may need to prove yourself and your skills. Fortunately, there are many different ways for you to do this:
Consider starting a blog where you can share your knowledge, give advice, or show your research and ideas.
Open up a Twitter account where you can post links and information about different topics that may be of use to other individuals.
Look for magazines that publish security and pentesting articles. You may have to start with smaller sites and magazines and work your way up to larger publications or sites.
If you have the skills, participate in bug bounty programs that are sponsored by different developers. These projects seek to locate defects or flaws in software and provide information to software developers about the issue so they can address it as needed.
Create white papers for software or hardware vendors if the opportunity is available.
Consider presenting at a security conference or to a security group. Major conferences such as DEF CON and Black Hat offer these opportunities. However, before giving such a presentation, make sure you have both the technical and the presentation skills, as they will be needed in equal measure.
Consider attending these conferences before you attempt to present so you can accurately assess if you are ready to do one.
Remember, having a lack of schooling will not typically hinder your progress if you have quality skills that can be demonstrated. However, you will need to prove this to a greater extent than perhaps a more traditional student would have to. Bug bounties are a great way to prove your prowess but will require time and effort, not to mention skills.
It’s worth experimenting with frameworks such as Metasploit. Consider developing skills with a scripting language such as Python or Ruby so you can start automating various aspects of your tasks and even extend the capabilities of tools such as the Metasploit Framework.
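As a small taste of that kind of automation, here is a minimal Python sketch that pulls a host and its open ports out of a line shaped like Nmap’s grepable (`-oG`) output. The sample line, address, and hostname are made up for illustration; in practice you would read lines from a real `-oG` file:

```python
import re

def parse_grepable_line(line):
    """Extract (host, [open ports]) from one line of nmap -oG style output."""
    host_match = re.search(r"Host:\s+(\S+)", line)
    ports_match = re.search(r"Ports:\s+(.*)", line)
    if not host_match or not ports_match:
        return None
    open_ports = []
    # Each entry looks like "22/open/tcp//ssh///"; field 1 is the state.
    for entry in ports_match.group(1).split(","):
        fields = entry.strip().split("/")
        if len(fields) >= 2 and fields[1] == "open":
            open_ports.append(int(fields[0]))
    return host_match.group(1), open_ports

sample = ("Host: 192.168.56.101 (lab-target)\t"
          "Ports: 22/open/tcp//ssh///, 80/open/tcp//http///, 443/closed/tcp//https///")
print(parse_grepable_line(sample))  # → ('192.168.56.101', [22, 80])
```

From a sketch like this, it is a short step to feeding the open ports into other tools or into your report notes.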
Building a Test Lab for Penetration Testing
Let’s finish our voyage of discovery of the pentesting field by talking about how you can continue to develop your skills. The best way to gain experience is to get your hands dirty.
Unfortunately, you can easily get into trouble if you’re not careful because you cannot just choose some random targets and attack them with the various hacking tools and techniques discussed in this blog. Not only is doing so ethically wrong, but it is also illegal.
Therefore, the best way for you to practice what’s covered in this blog is to build your own lab environment. With it, you can practice with tools without finding yourself on the wrong end of the law.
Deciding to Build a Lab
When you’re a pentester, you can’t practice your skills in the open since attacking targets is illegal when you don’t have permission to do so.
Therefore, you’ll need to have a lab environment that you own where you can test software and practice attacks without getting into trouble. When you have your own lab, you can practice to your heart’s content with a seemingly endless range of configurations and environments.
This is a huge advantage to you as a pentester because you will encounter many variations of environments in the field, and being able to customize the environment to more closely emulate what you see in the field will reap immediate benefits in your work.
Another advantage of testing within your own environment is that you can feel more comfortable about trying all the tools and techniques that you want to experiment with. You don’t have to worry if one of these tools or techniques happens to have catastrophic results, such as crashing or destroying your target (it will happen).
Because you’re working in a lab environment, it’s a simple matter for you to restore and rebuild your environment and try again with a different approach.
This is not something you can do as easily if you don’t own the environment, not to mention the trouble you could get into if you crash someone else’s environment that you don’t have permission to interact with in the first place.
Finally, when you’re testing in an unknown environment, you don’t have an immediate way to confirm whether the results match reality.
Setting up your own lab environment means that you know what’s being put in place, and therefore the results that you get from your scans and your exploration can be verified to see whether you’re getting the expected results.
Examining the results means you’ll have an easier time interpreting other results later more accurately.
All lab environments will be different; you can take numerous approaches, all of which are valid. Most importantly, you’ll want to build an environment that best fits your needs, so here are some questions to ask yourself:
What operating systems will you most likely encounter?
What operating system versions are needed?
What tools do you want to use?
What hardware are you most likely to encounter?
What configurations do you need to gain more experience with?
What should the network look like?
What does the server environment need to look like?
Do you need mobile operating systems?
Do you need to experiment with technologies like Active Directory?
Do you need to understand certain vulnerabilities that exist?
Are the tools you’re using or at least intend to use usable within a virtual environment?
Do you need any specialized applications to be present?
Do you need to emulate a client environment to experiment with different approaches to your test?
Answering these questions will help you start envisioning a design. Remember, you’ll have to meet specific hardware and software requirements in terms of memory, processing, or network access, to name a few, to get your systems up and running.
To get a better handle on the requirements needed to implement your intended environment, you may need to refer to different vendor websites to see what the system requirements are and how you can deploy things within certain parameters. Then you’ll need to put all these requirements together to make the stuff work.
One of the most common ways to create a lab environment is to use a technique referred to as virtualization. Virtualization is an extremely common technique in the IT field that is used to consolidate multiple machines to fewer machines and to isolate systems for better stability and security while doing development and testing.
Virtualization is great for setting up a lab because it allows for the rapid deployment and reconfiguration of a system; it also allows you to have multiple configurations available without having multiple physical machines lying around your house, each with its own custom environment.
Instead, virtualization allows you to have a single laptop hosting several virtual environments that you can test against, meaning that everything is consolidated on one portable system—something multiple physical machines cannot match.
Just about any environment that you’re going to encounter can be deployed into a virtual environment, with a few exceptions. Common operating systems such as Windows and Linux as well as Android can all be quickly and easily hosted within a virtual environment, along with all the various tools that we have talked about in this blog.
Here is how virtualization works: The host, or host system, is the physical system, including its operating system, upon which the virtualization software is installed.
Once you have the host in place and the virtualization software installed, you can install your virtualized environments on top of that.
These environments hosted on top of virtualization or within virtualization are the guests. Guests will have an operating system installed on the virtual system and will include the applications and tools all bundled to run on top of the virtualized environment.
In practice, a system will have one physical host with the potential to run multiple guests.
In most cases, the only limitation on the number of guests that can be hosted on top of a given host is the amount of memory and other resources that are available to split among all the various guests as well as the host and have them all run at an acceptable performance level (which is a little trickier than it sounds).
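As a rough way to sanity-check that resource split before you build anything, here is a hypothetical Python sketch; the 4 GB reserve for the host OS and the guest sizes in the example are assumptions for illustration, not vendor requirements:

```python
def fits_in_host(host_ram_gb, guest_ram_gb, host_reserve_gb=4):
    """Return True if the planned guests' RAM allocations, plus a reserve
    kept back for the host OS itself, fit in the host's physical RAM."""
    return sum(guest_ram_gb) + host_reserve_gb <= host_ram_gb

# A 32 GB host running a Kali guest (4 GB), a Windows target (4 GB),
# and a Linux server target (2 GB), keeping 4 GB back for the host OS.
print(fits_in_host(32, [4, 4, 2]))  # → True
print(fits_in_host(8, [4, 4, 2]))   # → False (an 8 GB host is too small)
```

A check like this only covers memory; in practice CPU cores, disk throughput, and network configuration impose their own limits.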
When hosting guests, one of the questions that comes up is how much network access you need.
In practice, virtualization software allows for networks to be private, meaning that they are limited to an individual computer so all the guests on the computer can communicate among themselves and with the physical host, but not outside of that computer. (However, the host will be allowed to communicate with the network just like they would if the virtualization portion didn’t exist.)
A network can also be configured when using virtualization to have full access to the network resources both on and off the system. In that case, the guests will act like any other physical hosts on the network.
Without careful examination, a client or server elsewhere on the network will not see any difference between the virtual system and a physical system.
There are other options as far as configuring the network, but keeping a network private might be a good idea for some of your testing as you get started.
If you were to type in the wrong IP address or the wrong destination, or to generate a lot of traffic as a result of your testing, the effects would be limited to that one system and would not impact anything else around you or cause potentially negative results.
Keep in mind that network access is something you can change on any guest at any time; you just have to consult with your virtualization software of choice to see how that’s done.
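One quick way to reason about private versus bridged networking is to check whether an address falls in a private (RFC 1918) range, since host-only virtual networks typically hand guests private addresses (VirtualBox’s host-only default, for example, is commonly a 192.168.56.0/24 subnet—an assumption you should verify against your own software). A small sketch using Python’s standard `ipaddress` module:

```python
import ipaddress

def is_private(addr):
    """True if addr is in a private, loopback, or link-local range,
    i.e., not routable on the public internet."""
    return ipaddress.ip_address(addr).is_private

print(is_private("192.168.56.101"))  # → True  (typical host-only lab address)
print(is_private("8.8.8.8"))         # → False (routable public address)
```

If your scan results ever show public addresses when you expected a private lab subnet, that is a sign your guests may be bridged to the real network.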
Advantages of Virtualization
These are the advantages that the virtual machine model offers you as a pentester:
Testing malware in a virtual environment is highly recommended because it can greatly limit the potential damages that might happen if the malware is released to a live environment.
Testing different servers, applications, and configurations is a highly attractive option and is the reason you are building a lab with virtualization. Multiple configurations can be easily tested just by shutting down a guest and moving files from one system to another or one location to another and then just restarting the guest with the new configuration.
If during your testing and experimentation you happen to adversely damage or impact a guest, things can be easily repaired. In fact, in most cases, simply backing up your virtual machines prior to experimentation allows you to shut down a damaged virtual system and copy the backups over the damaged files and then restart the damaged system. That’s all it takes to get you back up and running.
You can set restore points or snapshots that are available in most virtualization packages prior to installing and testing new tools.
In the event that something doesn’t go as expected, all you have to do is roll that guest back to a point in time prior to the changes being made, and once again you’re free to continue with your testing and try a different set of operations or procedures.
One of the biggest advantages of virtualization is that it’s much cheaper than having multiple physical systems. In addition, the lower power requirements, maintenance requirements, and portability make it a much more efficient way to go.
Disadvantages of Virtualization
Virtualization is an attractive option in just about every case in the IT field; however, nothing comes without its disadvantages, and virtualization is not exempt from this rule.
In fact, virtualization, though an effective solution for many problems, is not something that should ever be treated as a magic wand that can be used to address any potential problem. The following are some situations where virtualization is just not a good candidate:
In most cases, the software that you choose to run in a virtual environment should run without any major issues. However, there are some cases where software that needs direct access to hardware will fail in a virtual environment. Do your research before you commit to virtualization entirely.
Just as some software won’t work in a virtualized environment, some hardware will not work properly, or at all, in such an environment.
For example, some wireless adapters or Bluetooth adapters will not work properly in a virtual environment. Thus, if you need to work with these tools, you probably need to stay with a physical system.
Though not necessarily a barrier against using virtualization, it is worth noting that the hardware requirements on the physical host will be greater than they would be if you hosted one environment on a physical system.
How much greater the hardware requirements will be in terms of memory and processor is not something I can answer here because the requirements vary depending on what you choose to host on top of a given physical system.
What I can say is that the hardware requirements will be greater than they would be if you had a one-to-one relationship between operating system applications and hardware.
The lists presented here are not meant to be exhaustive by any means. You should simply evaluate these issues for your own work given your choice of hardware and software as well as applications and virtualization packages because each combination can alter the results you might achieve.
Three popular virtualization packages are Microsoft Hyper-V, Oracle’s VirtualBox, and EMC’s VMware. There’s really not one way to create a lab based around virtualization; it is just a matter of figuring out your own requirements and what your pocketbook can handle. Be prepared to do a lot of reading and evaluating before you find the environment that fits you.
Getting Started and What You Will Need
When you build your lab, you can create a list of must-haves and a list of nice-to-haves. However, no matter what your lists look like, you must establish a foundation before building your lab on top of it all.
I recommend that you go back and review the questions that you asked yourself early on when you were establishing your motivations for building your lab.
Then look at the virtualization software packages that are attractive to you and try to nail down a specific one that is right for you. Then you can start figuring out what your foundation looks like in terms of operating system, hardware requirements, and network access.
Remember that you can choose from numerous approaches when creating your lab. There is no one-size-fits-all approach that everyone can work with. However, you can establish some expected minimums that you will have to consider as starting points.
The basic requirements that you should consider sticking to are as follows:
In terms of memory, the more, the merrier. Ideally, any system on which you’ll install your tools and testing environment should never have less than 8 GB of RAM; otherwise, you’ll sacrifice performance and in some cases won’t be able to run the tools you need to perform the test.
While you can run virtualization with less RAM, 32 GB is recommended to support virtualization and obtain acceptable performance.
Keep a close eye on the amount of hard drive space that you have available. You can quickly consume all the available drive space with just operating systems, without any applications or data.
So plan for the appropriate amount of drive space as well as free space for applications and data in paging files and temporary files. Plan on having at least 1 TB of space available.
Consider using a solid-state drive (SSD) instead of a traditional drive (which has spinning discs inside). An SSD will give you much better performance than a traditional drive—a fact that becomes much more noticeable when you’re running a lot of things that are hitting the drive at once.
Start thinking about your host operating system. Any of the major players and operating systems are suitable, but keep in mind that not every virtualization package is available for every operating system.
You can use intentionally vulnerable virtual machines such as Metasploitable, a Linux OS that is designed for pentesting but not for use in a non-testing production environment.
Check to see if your hardware of choice supports monitor mode with respect to wireless adapters.
After you’ve set up your environment, you’ll need to determine which tools to use. We’ve discussed many different types of tools that you can use during a pentest.
The following lists are tools that are must-haves for a pentester. Consider them as something to get you started, but don’t feel that you have to stick with these tools exclusively. You should always be on the lookout for tools that may be complements to the ones listed here.
The following are scanners:
NMAP NMAP can be acquired from the developer’s website, nmap.org. Since this tool is such a flexible and powerful piece of software and is cross-platform, you should seriously consider making it part of your toolkit.
Angry IP Available from the developer’s website, angryip.org, this piece of software is a simple way of locating which hosts are up or down on a network. While the functionality of this tool can be replicated with a few switches in NMAP, it may still prove a good fit for your toolkit.
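To see in miniature what these scanners are doing, here is a hedged Python sketch of a plain TCP connect check—the same basic idea behind Nmap’s `-sT` connect scan. The host and port numbers are placeholders, and you should only ever point this at systems you own or have explicit permission to test:

```python
import socket

def tcp_port_open(host, port, timeout=0.5):
    """Attempt a full TCP connection; True if the port accepts it.
    A refused or timed-out connection is reported as closed/filtered."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Sweep a few common ports on the local machine (a target you own).
for port in (22, 80, 443):
    state = "open" if tcp_port_open("127.0.0.1", port) else "closed/filtered"
    print(f"{port}/tcp {state}")
```

Real scanners add far more—timing control, service detection, half-open scans—but this is the core transaction underneath.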
The following are password-cracking tools:
L0phtCrack This can be obtained from the developer’s website, l0phtcrack.com.
John the Ripper This can be obtained from the Openwall website, openwall.com/john.
Trinity Rescue Kit Another multipurpose tool, this is useful for performing password resets on a local computer. It can be downloaded from the project’s website, trinityhome.org.
The following are sniffers:
Wireshark The most popular packet sniffer in the IT industry, Wireshark is available from wireshark.org.
It’s fully customizable and feature-packed, with plenty of documentation and help to be found online and in print. Wireshark boasts cross-platform support and consistency across those platforms.
Tcpdump This is a popular command-line sniffer available on both the Unix and Linux platforms.
WinDump This is a version of tcpdump ported to the Windows platform.
The following are wireless tools:
inSSIDer This is a wireless network detection and location tool, available from MetaGeek (metageek.com).
Bluesnarfer This can be obtained from the repositories of any Linux distribution.
Aircrack-ng This is a suite of tools used to target and assess wireless networks.