50+ Social Engineering Attacks (2019)
Social engineering attacks can be accomplished in many ways, including over a computer, using a phone call, in-person, or using traditional postal mail. When social engineering originates on the computer, it’s usually done using email or over the web.
This blog explains more than 50 social engineering attacks seen in 2019, along with mechanisms for defending against them, and also covers the best anti–social engineering techniques and technologies.
A common social engineering goal is to capture a user’s login credentials using what is called phishing. Phishing emails or websites attempt to trick the user into supplying their legitimate login credentials by posing as a legitimate website or administrator that the end-user is familiar with.
The most common phishing ploy is an email purporting to be from a website administrator, claiming that the user’s password must be verified or else their access to the site will be cut off.
Spearphishing is a type of phishing attempt that is particularly targeted against a specific person or group using non-public information that the targets would be familiar with.
An example of spear phishing is a project manager being emailed a document, supposedly from a project member, related to a project they are working on; when they open the document, it executes malicious commands.
Spear phishing is often involved in many of the most high-profile corporate compromises.
Trojan Horse Execution
Another equally popular social engineering ploy gets the unsuspecting end-user to execute a Trojan Horse program. It can be delivered via email, either as a file attachment or as an embedded URL.
It is done on websites just as frequently. Often a legitimate website is compromised, and when a visiting trusting user loads the web page, the user is instructed to execute a file. The file can be a “needed” third-party add-on, a fake antivirus detector, or a “needed” patch.
The legitimate website can be directly compromised, or the compromise can occur in another, independently involved element, such as a third-party banner ad service. Either way, the user, who often trusts the legitimate website after years of visiting it without a problem, has no reason to suspect that the trusted website has been compromised.
Over the Phone
Scammers can also call users purporting to be either technical support, a popular vendor, or from a government agency.
One of the most popular scams is a call from someone claiming to be from tech support, saying that a malware program has been detected on the user’s computer.
They then request that the user download an “anti-malware” program, which proceeds, unsurprisingly, to detect many, many malware programs.
Next, they get the user to download and execute a remote access program, which the fake tech support person then uses to log on to the victim’s computer and plant more malicious software. The bogus tech support scam culminates when the victim buys a fake anti-malware program with their credit card.
Over-the-phone scammers can also purport to be from tax collection services, law enforcement, or other government agencies, looking to get paid so that the end-user will avoid stiff penalties or jail.
Another very popular scam is carried out against people buying or selling goods on websites, such as auction sites or Craigslist-like websites. The innocent victim is either buying or selling something.
In buying scams, the scam buyer replies quickly, usually offering to pay the full purchase price plus shipping, and asks the seller to use the buyer’s “trusted” escrow agent. The scammer then sends the victim a fake check for more than the agreed-upon purchase amount, which the victim deposits into their bank account.
The buyer then asks the victim seller to return the “extra” money to their shipper or escrow agent. Once the fake check bounces, the seller victim is usually out at least that amount.
In selling scams, the victim buyer sends the funds but never receives the goods. The average selling scam costs the victim at least a thousand dollars; the average buying scam can cost tens of thousands.
In Person
Some of the most notorious social engineering scams are those that have been accomplished in person by the hackers themselves. Physical social engineers are well known for walking into banks and installing keylogging devices on employee terminals while posing as computer repair people.
As naturally distrusting of strangers as people are, they are surprisingly disarmed if the stranger happens to be a repair person, especially if that service person says something like, “I hear your computer has been acting slow lately.”
Who can refute that statement? The repair person obviously knows about the ongoing problem and is finally here to fix it.
Carrot or Stick
The end-user is often either threatened with a penalty for not doing something or promised a reward for doing something. The ruse begins by putting the victim under duress, as people don’t weigh risk as carefully during stress events. They have to either pay a fine or go to jail.
They have to run the program or risk having their computer stay infected and their bank account emptied. They have to send money or someone they care about will remain in a foreign jail. They have to change the boss’s password or else get in trouble with the boss.
One of my favorite social engineering ruses, when I’m pen testing, is to send an email out to a company’s employees purporting to be from the CEO or CFO and announcing that the employee’s company is merging with their next biggest rival.
I tell them to click on the attached, booby-trapped document to see if their jobs are affected by the merger.
Or I send a legal-looking email to the male employees purporting to be from their ex-wife’s lawyer asking for more child support. You’d be amazed at how successful these two trick emails are.
Social Engineering Defenses
Defending against social engineering attacks takes a combination of training and technology.
Anti-social engineering training is one of the best, most essential defenses against social engineering. The training must include examples of the most common types of social engineering and how potential victims can spot the signs of illegitimacy.
At my current company, each employee is required to watch an anti-social engineering video each year and take a short test.
The most successful training sessions have included other very smart, trusted, and well-liked employees sharing their personal experience of having been successfully tricked by a particular type of common social engineering ploy.
I think every company should run fake phishing campaigns, in which employees are sent phishing-style emails asking for their credentials. Employees who provide their credentials should be given additional training.
There are a variety of resources, both free and commercial, for doing fake phishing campaigns, with the commercial ones obviously offering easier use and sophistication.
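As a sketch of what a homegrown campaign could look like, the snippet below builds a simulated phishing message with Python’s standard email library. The sender, subject, and landing URL are illustrative placeholders, and actually sending such mail should only ever happen as part of a sanctioned, authorized internal test:

```python
from email.message import EmailMessage


def build_training_phish(recipient: str, landing_url: str) -> EmailMessage:
    """Build a simulated phishing email for an authorized internal awareness test."""
    msg = EmailMessage()
    msg["From"] = "it-support@example.com"  # placeholder spoof-style sender
    msg["To"] = recipient
    msg["Subject"] = "Action required: verify your password"
    msg.set_content(
        "Your mailbox will be locked in 24 hours.\n"
        f"Verify your credentials here: {landing_url}\n"
    )
    return msg


msg = build_training_phish("employee@example.com",
                           "https://training.example.com/landing")
# A real campaign platform would now send the message and record who clicked:
# smtplib.SMTP("mail.example.com").send_message(msg)  # only with authorization
```

The landing URL would point at an internal page that records the click and immediately walks the employee through the warning signs they missed.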
All computer users need to be taught about social engineering tactics. People buying and selling goods on the Internet need to be educated about purchase scams. They should only use legitimate escrow services and follow the website’s recommendations for a safe transaction.
Be Careful of Installing Software from Third-Party Websites
Users should be taught never to install any software program directly from a website they are visiting unless it is the website of the legitimate vendor who created the software.
If a website says you need to install some piece of third-party software to continue to view it, and you think it is a legitimate request, leave the website and go to the software vendor’s website to install it.
Never install another vendor’s software from someone else’s website. It might actually be legitimate software, but the risk is too great.
EV Digital Certificates
Web surfers should be taught to look for the “extended validation” (EV) digital certificates on many of the most popular websites. EV websites are often highlighted in some way (usually a green address bar or a highlighted green name) to confirm to the user that the website’s URL and identity have been verified by a trusted third party.
Get Rid of Passwords
Credential phishing can’t work if the employee can’t give away their login credential. Simple login names with passwords are going away in favor of two-factor authentication (2FA), digital certificates, login devices, out-of-band authentication, and other login methods that cannot be phished.
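Many 2FA schemes rest on one-time codes that expire in seconds, so a phished code is nearly worthless. As an illustration, here is a minimal sketch of the standard TOTP algorithm (RFC 6238, built on HOTP from RFC 4226), using only the Python standard library:

```python
import hashlib
import hmac
import struct
import time


def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC-SHA1 over a counter, dynamically truncated to N digits."""
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def totp(key: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """TOTP (RFC 6238): HOTP driven by the current 30-second time step."""
    t = int((time.time() if for_time is None else for_time) // step)
    return hotp(key, t, digits)


# RFC 6238 test vector: ASCII key "12345678901234567890" at Unix time 59
print(totp(b"12345678901234567890", for_time=59, digits=8))  # 94287082
```

Because the code is derived from a shared secret plus the clock, an attacker who phishes one code has only seconds to use it, and never learns the underlying secret.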
Anti–Social Engineering Technologies
Most anti-malware, web-filtering, and email anti-spam solutions try to minimize social engineering done using computers. Anti-malware software will try to detect the execution of malicious files.
Web filtering software will try to identify malicious websites as the visitor’s browser tries to load the page.
And email anti-spam solutions often filter out social engineering emails. However, technology will never be completely successful, so end-user training and other methods must be used in conjunction.
Social engineering is a very successful hacking method. Some computer security experts will tell you that you cannot do enough training to successfully make all employees aware of social engineering tactics. They are wrong. A combination of enough training and the right technologies can significantly diminish the risk of social engineering.
Software vulnerabilities are exploitable weaknesses (i.e., “bugs”) in software, often stemming from flaws written by the developer or inherent in the programming language. Not every software bug is a security vulnerability; the bug must be exploitable by an attacker to become a threat or risk.
Most software bugs cause an operational issue (which may not even directly manifest itself to the operator) or even a fatal interruption to a process, but cannot be leveraged by an attacker to gain unauthorized system access.
Exploitable software vulnerabilities are responsible for a large (if not the largest) percentage of hacking in a given time period, even though other hacking methods (such as Trojan horse programs and social engineering) are often very competitive.
Some computer security experts think most computer security issues would go away if all software was bug-free, although this isn’t true or possible. Still, even if not a panacea, more secure code with fewer vulnerabilities would wipe a significant category of hacking issues out and make our computing environment appreciably safer.
Number of Software Vulnerabilities
There are many sources for tracking public software vulnerabilities, although the bugs listed for each may vary significantly.
On average, each year, major software developers and bug finders publicly announce 5000–6000 new software vulnerabilities. That’s about 15 bugs per day, day after day.
Many other vendors track either their own vulnerabilities or all known vulnerabilities as well. Check out any issue of Microsoft’s Security Intelligence Report to get the latest known figures and great analysis.
Of course, these are just the bugs the public gets to know about. Many vendors don’t publicly announce every bug. Many don’t announce bugs found by internal resources or bugs fixed in pre-production–released software.
Although there is no way to confirm this, most experts think the “real” number of bugs is significantly higher than the publicly known numbers.
NOTE The number of software vulnerabilities is just one measure, and it is not the complete picture of the overall security of a program or system. The only measure that truly matters is how much damage the software vulnerabilities were responsible for.
The number of software vulnerabilities might conceivably go down as the amount of damage goes up, although in general, having more secure programs is better for everyone.
Why Are Software Vulnerabilities Still a Big Problem?
These days, vendors often patch most critical exploits in a matter of hours to days. That being the case, why are software vulnerabilities still a significant problem, especially when most vendors have auto-updating mechanisms for faster patching?
The answer is that a significant portion of computing devices is patched slowly, and in a significant portion of cases, never patched at all. And each patch has the potential to cause an unexpected operational issue, sometimes creating more frustration for the end-user than the bug itself would have.
The number of overall exploits is fairly overwhelming and constant. A significant portion of computer administration is spent addressing and applying patches. It’s an incredible waste of time, money, and other resources that could be better spent on more productive things.
Even when users and administrators have patching down to an efficient science, the window between when the vendor releases the patch and when the user or admin applies it is a hacker’s opportunity against a given system.
If I’m a patient, persistent hacker against a particular target, I can just wait until a vendor announces a new patch and use it to exploit my objective.
When vendors release patches, both whitehat and blackhat resources immediately analyze the patch to locate the targeted vulnerability.
They then create exploits that can take advantage of the bug. There are dozens of commercial companies, a few free services, and an unnamed number of hackers that do this every day.
You can purchase and/or download vulnerability scanners that will scan each targeted device and report back on unpatched vulnerabilities. These vulnerability scanners often have thousands and thousands of exploits built in.
There are many hacker websites around the world with thousands of individual exploits that you can download to exploit a particular vulnerability. One of the most popular free tools used by blackhats and whitehats alike is Metasploit.
Defenses Against Software Vulnerabilities
The number one defense against software vulnerabilities is better-trained software developers and more-secure-by-default programming languages.
Security Development Lifecycle
The process of trying to reduce the number of software vulnerabilities is now commonly known as the Security Development Lifecycle (SDL). The SDL focuses on every component in the lifecycle of a software program, from its initial creation to its patching of newly found vulnerabilities, in order to make more secure software.
Although the SDL was not invented at Microsoft, Microsoft Corporation has probably done more work in the area and released more free information and tools (https://www.microsoft.com/sdl) than any other single source.
The fallibility of humans ensures that software code will always have exploitable bugs, but by following the SDL, we can have fewer of them (per the same number of lines of code).
More Secure Programming Languages
More secure programs cannot begin without more secure programming languages. Over the years, most programming languages have strived to create more secure default versions.
These languages try to reduce or eliminate common causes of exploits. To this end, they have been fairly successful, and programs written in them are significantly harder to exploit than those built using more insecure languages.
Code and Program Analysis
After a program version is written, it should always be analyzed for known and recognizable bugs.
This can be done using human analysis or software tools. Human analysis tends to be the least efficient, finding the fewest bugs per hour spent, but it can find significantly exploitable bugs that the tools are not coded to find.
Software bug-finding tools are often classified as “static analyzers” or “fuzzers.” Static analyzers inspect the source code (or compiled programs), checking for known kinds of software bugs in the code itself. Fuzzers feed unexpected data inputs to the running program, looking for vulnerabilities at runtime.
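To make the fuzzing idea concrete, here is a toy fuzzer in Python. The `parse_record` function is a made-up parser with a deliberately planted bug (it trusts an attacker-supplied length field), which is exactly the kind of flaw random inputs surface quickly:

```python
import random


def parse_record(data: bytes) -> tuple:
    """Toy parser with a planted bug: it trusts the declared length field."""
    if len(data) < 2:
        raise ValueError("too short")  # a handled, expected error
    declared_len = data[0]
    payload = data[1:1 + declared_len]
    checksum = data[1 + declared_len]  # IndexError if declared_len lies!
    return payload, checksum


def fuzz(target, trials: int = 1000, max_len: int = 8, seed: int = 42):
    """Feed random byte blobs to `target`, collecting any unexpected crashes."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(max_len)))
        try:
            target(blob)
        except ValueError:
            pass  # expected, gracefully handled input error
        except Exception as exc:  # unexpected crash = potential vulnerability
            crashes.append((blob, repr(exc)))
    return crashes


crashes = fuzz(parse_record)
print(f"{len(crashes)} crashing inputs found")
```

Real fuzzers such as coverage-guided ones are far smarter about choosing inputs, but the principle is the same: unexpected input that crashes a program often marks an exploitable bug.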
More Secure Operating Systems
Most operating systems are not only coded by programmers steeped in the SDL and using more-secure-by-default programming languages, but also include built-in defenses against common exploit vectors.
Most of today’s popular operating systems include specially designed memory defenses and protect the operating system’s most critical areas. Some even include built-in anti-buffer overflow, anti-malware, and firewall software, all of which help to limit exploitable bugs or their subsequent damage.
Third-Party Protections and Vendor Add-Ons
There are thousands of programs that will defend a computer system against previously unknown software vulnerabilities, with at least some success.
Some are offered as free or paid add-ons by the vendor, and others are from independent third parties. Programs that promise to detect and stop new exploits are very common, and although never perfect, they can significantly reduce the risk of new threats.
One of my favorite types of defensive software is “application control,” or “whitelisting,” programs. These programs won’t stop the initial exploit, but they can stop, or at least hinder, the hacker or malware program from doing further damage.
Perfect Software Won’t Cure All Ills
No defense beats software that is more securely coded, with fewer exploitable bugs, from the start. However, perfect, bug-free software is impossible, and it would not cure all hacking even if it were possible. Unfortunately, software vulnerabilities aren’t our only problem. Trojan horse programs work by simply getting the user to run a purely malicious program.
Many hackers and malware programs exploit the inherent, otherwise legitimate, capabilities of data, scripting languages, and other components to do bad things. And social engineering can accomplish what software cannot.
The traditional types of malware are a virus, worm, and Trojan horse. A computer virus is a self-replicating program that when executed goes around looking for other programs (or sometimes, as with macro viruses, data) to infect.
A computer worm is a replicating program that doesn’t usually modify other programs or data. It simply travels around on devices and networks using its own coded instructions, often exploiting one or more software vulnerabilities.
A Trojan horse program masquerades as some other legitimate program that the device or user is tricked into executing. Today’s malware is often a combination of two or more of these traditional types.
For example, it may initially be spread as a Trojan to gain the initial foothold and then use its own code to replicate to spread further.
Malware can be quite effective. Thousands of different malware programs have successfully infected entire networks across the globe in hours. Hundreds of malware programs have infected a significant portion of the computers attached to the Internet in a day.
The speed record still belongs to the 2003 SQL Slammer worm, which infected most Internet-accessible, vulnerable SQL servers in around 10 minutes.
The related patch had been out for five months, but back then almost no one patched in a timely manner.
Today, most malware programs are Trojans and require an end-user action (such as opening a web link or file attachment) to initiate the malware, although in some scenarios the involved device or user has no hand in (accidentally) executing the program at all. It depends on the malware scenario and how it was spread.
Number of Malware Programs
There are literally hundreds of millions of different malware programs on the planet today and an immeasurable number of new ones being made each year. Most malware programs are slight, customized variants derived from just a few thousand different base malware programs.
Still, each variant must be detected by anti-malware programs, which often use a combination of digital signatures (a unique set of bytes for each malware program or family of programs) and behavioral detection.
An anti-malware program must be able to quickly scan tens of millions of files against hundreds of millions of malware programs and do it without significantly slowing down the device it is installed on.
It’s a tough thing to do, and even if done with the maximum degree of accuracy, it can be defeated by one new malware program with a single changed byte.
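A toy illustration of the signature half of that job in Python follows. The patterns and names are hypothetical stand-ins (the first is a truncated form of the harmless EICAR test string), and real engines use far more efficient multi-pattern matching than this naive substring scan:

```python
# Hypothetical byte-pattern signatures; real products carry millions of these
# plus behavioral rules. The first is a truncated EICAR test string, and the
# second pattern and both names are made up for illustration.
SIGNATURES = {
    b"X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR": "EICAR-Test-File",
    b"EVIL_DEMO_PAYLOAD": "Demo.Trojan.A",
}


def scan_bytes(data: bytes, signatures=SIGNATURES):
    """Return the names of any signatures found in data (naive substring scan)."""
    return [name for pattern, name in signatures.items() if pattern in data]


print(scan_bytes(b"completely benign content"))  # []
```

A single changed byte inside a signature region defeats this kind of matching, which is exactly why vendors pair signatures with behavioral detection.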
NOTE Anti-malware programs are more often called antivirus programs, even though they detect and remove multiple malware classes, because most malware programs were computer viruses back when this type of scanning program first became popular.
Defenses Against Malware
There are many defenses against malware exploitation, most of which are good against several other forms of hacking as well.
Fully Patched Software
A fully patched system is far more difficult for malware to exploit than one that is not. These days “exploit kits” are hosted on compromised websites, and when a user visits, the exploit kit will look for one or more unpatched vulnerabilities before attempting to trick the user into running a Trojan horse program.
If the system is unpatched, often the malicious program can be secretly executed without the user being aware of anything.
A fully patched system is difficult for malware to compromise without involving the end-user. In cases where the malware or exploit kit doesn’t find an unpatched vulnerability, it will usually resort to some sort of social engineering trick.
Usually, it involves telling the end-user they need to run or open something in order to satisfy some beneficial outcome. Training users about common social engineering techniques is a great way to reduce the success of malware.
Anti-Malware Software
Anti-malware (frequently referred to as antivirus) software is a necessity on almost every computer system.
Even the best anti-malware software can miss a malware program, and no program is 100% perfect at blocking all malware, but running a computer system without such a program is like driving with leaky brakes.
You may get away with it for a while, but eventually, disaster will strike. At the same time, never believe any antivirus vendor’s claim of 100% detection. That is always a lie.
Application Control Programs
Application control programs (also known as “whitelisting” or “blacklisting” programs) are great at stopping malware when used in whitelisting mode. In whitelisting mode, only predefined and authorized programs are allowed to run.
This stops most viruses, worms, and Trojans. Application control programs can be operationally difficult to implement because by their very nature, every program and executable must be pre-approved to run.
And not every malware program type or hacker can be prevented, especially those that use built-in, legitimate programs and scripting tools.
That said, application control programs are an effective tool and are getting better all the time. Personally, I think for any system to be considered “very secure,” it must have an actively enforced whitelisting program.
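A minimal sketch of the whitelisting idea in Python is shown below. The hash value is an illustrative stand-in, and a real product would hook program execution inside the operating system rather than merely answer queries, but the default-deny logic is the heart of it:

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist: SHA-256 digests of the only executables approved to
# run. In whitelisting mode, anything NOT on this list is denied by default.
APPROVED_SHA256 = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # sha256 of b"test"
}


def is_approved(path) -> bool:
    """Default-deny check: allow a program only if its hash is pre-approved."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest in APPROVED_SHA256
```

Note how this inverts the signature-scanning model: instead of trying to enumerate the hundreds of millions of bad programs, it enumerates the comparatively few good ones.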
Firewalls and Intrusion Detection
Firewalls and other types of local and network security boundaries are good at keeping malware from ever being able to exploit a computer device.
Most operating systems come with built-in, local firewalls, but most are not configured and enabled by default. Implementing a firewall can significantly reduce malicious risk, especially if there is an unpatched vulnerability present.
Network intrusion detection/prevention (NID/P) and host intrusion detection/prevention (HID/P) software and devices can be used to recognize and stop malware on the network or localhost.
But like traditional anti-malware programs, NIDs and HIDs are not 100% reliable and should not be trusted alone to detect and stop malware.
Malware has long been a part of computer security and will always remain a top threat. I once thought malware could eventually be stamped out, but that was back when we had just hundreds of malware programs. Now, with hundreds of millions of distinct malware variants, I realize how overly hopeful (and innocent) I had been.
How Do Hackers Hack IoT?
The same way they do regular computers, by picking one or more vulnerabilities along the Open Systems Interconnection (OSI) model layers (Physical, Data-Link, Network, Transport, Session, Presentation, and Application).
The only difference is that the IoT device may not use traditional hardware or a well-known operating system (or it might not even have a traditional operating system at all).
Hackers have to learn as much as they can about the device, research its components and operations, and look for vulnerabilities.
For example, suppose a hacker wants to see if they can hack an IoT toaster. The first order of business is to get one and study all the accompanying documentation. They then attempt to determine how it connects to a network and what it sends over the network by enabling a network sniffer and turning on the device.
You can learn an incredible amount about a device by listening to what it does or tries when it starts up. They might port scan it, looking for listening ports and trying to enumerate what operating system and services are running.
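The port-scanning step can be sketched in a few lines of Python using the standard socket module. This naive TCP connect scan is only for devices you own or are explicitly authorized to test:

```python
import socket


def scan_ports(host: str, ports, timeout: float = 0.5):
    """Report which TCP ports on `host` accept a connection (naive connect scan)."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports


# Example: check a few ports an IoT device commonly exposes
print(scan_ports("127.0.0.1", [22, 80, 443, 8080]))
```

Dedicated tools like Nmap add service and operating-system fingerprinting on top of this basic idea, which is how a hacker enumerates what is actually running on the device.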
If there is an admin console, they try to connect to it. They try to find out what language the device’s code was written in and look for application programming interfaces (APIs).
Physical hacking is also a common method of IoT hacking. Hackers take the device apart and see what components it has, noting individual chips and chip numbers.
Most devices use common chips, and those common chips are often well-documented. Sometimes the chip’s vulnerabilities are well-known and can be similarly exploited across a range of devices.
Hardware hackers cross wires, join chip pins, and even create their own custom chips to get around the device’s authentication and access control blockers. They pay special attention to looking for input and output ports and see whether they can connect some sort of debugger to the device.
They use man-in-the-middle (MitM) attacks on communications to try to see information being transmitted and whether they can change those values and what happens.
They often share the information they learn in general IoT forums or even device-specific IoT forums. They even create virtual groups dedicated to a particular device, synergizing the expertise of the different members.
In general, if you can penetration test regular computers, you can penetration test IoT devices, except IoT devices sometimes take a bit more creativity and research, especially if you aren’t familiar with the operating system or chips.
But not only can they be hacked, but they are also far more likely to be hackable because most IoT vendors don’t understand the risk and don’t allocate enough defensive resources—at least for now.
It’s not that people aren’t working on better securing IoT devices. Most vendors at least think they are addressing it appropriately.
Dozens of independent groups like the IoT Village are working to help vendors better secure their devices. Unfortunately, hacker forums like San Francisco IOT Hacking Meetup are just as active and are having more success.
When an IoT vendor tells you that their device is very secure, they are probably wrong. Most likely very wrong.
So what can an IoT vendor do to better secure their device? Well, treat it like you are defending a regular computer. Threat model the device from the very beginning, and make sure the programming includes Security Development Lifecycle (SDL) considerations from the very start to the end of the product’s life.
Make sure the device uses the most up-to-date software with the latest patches applied, and make the device self-updating. Remove any unnecessary software, services, and scripts.
Close any unneeded ports. Use good cryptography in a reliable way. Ensure customer privacy. Don’t collect information you don’t need. Securely store customer information you do need.
Require strong authentication to access and conduct multiple penetration tests during the product’s creation and beta testing.
Offer bug bounties. Don’t punish hackers for reporting bugs. Essentially take all the computer security lessons learned in the world of computers over multiple decades and apply them to IoT devices.
Unfortunately, most IoT vendors aren’t doing this, and we are probably doomed to have decades of very hacked IoT devices.
What Is Cryptography?
In the digital world, cryptography is the use of a series of binary 1s and 0s to encrypt or verify other digital content.
Cryptography involves using mathematical formulas (called ciphers) along with those 1s and 0s (called cryptographic keys) to prevent unauthorized people from seeing private content or to prove the identity or validity of another person or some unadulterated content.
The simplest encryption example I can think of is where some plaintext (non-encrypted) content is converted to an encrypted representation by moving the alphabet of each involved character by one place.
Thus, the word FROG would become GSPH. The decryptor could reverse the process to reveal the original plaintext content. In this example, the cipher (it’s almost silly to call it one) is the math, which in this case is + or – (addition or subtraction), and the key is 1.
As simple as this example is, hundreds of years of secret messages (and cereal box decoder rings) have used it, although they were not always successful at keeping the message secret from unintended readers.
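The shift cipher above can be written out in a few lines of Python; the key is the shift amount, and decryption is simply the same operation with the negated key:

```python
def caesar(text: str, key: int) -> str:
    """Shift each letter `key` places through the alphabet, wrapping around."""
    shifted = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            shifted.append(chr(base + (ord(ch) - base + key) % 26))
        else:
            shifted.append(ch)  # leave spaces and punctuation alone
    return "".join(shifted)


print(caesar("FROG", 1))   # GSPH
print(caesar("GSPH", -1))  # FROG
```

With only 25 meaningful shift values, this cipher falls to a few seconds of trial and error, which is precisely why modern keys are hundreds or thousands of bits long.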
In today’s digital world, cryptographic keys are usually at least 128 bits (128 1s or 0s) long. Depending on the cipher, keys may be longer, although if the math is resistant to attack, key sizes usually top out at 4,096 bits.
If you see longer key sizes, that is usually indicative of weak math or someone who does not know cryptography very well.
Why Can’t Attackers Just Guess All the Possible Keys?
People new to cryptography often wonder why attackers can’t simply try every possible combination of 1s and 0s for a particular key size. Couldn’t someone with a very fast computer guess all the possible combinations? In short, no. Even a modern key size of 2,000 bits is resistant to “brute-force guessing.”
No single computer is powerful enough, and even if you harnessed every computer in the world, not only today’s but the foreseeable future’s, there still wouldn’t be enough computing power. Hence, all (pure) cryptographic breaks rely on hints in the content or weaknesses in the math.
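A quick back-of-the-envelope calculation shows why, even for a far smaller 128-bit keyspace and an absurdly optimistic attacker:

```python
# How long would it take to brute-force a mere 128-bit key, even granting the
# attacker a wildly generous amount of hardware?
keyspace = 2 ** 128                      # number of possible 128-bit keys
guesses_per_second = 10 ** 9 * 10 ** 12  # a billion machines, a trillion guesses/s each
seconds = keyspace / guesses_per_second
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:.1e} years")              # on the order of ten billion years
```

And every additional key bit doubles that figure, so the thousands-of-bits keys used in practice are utterly beyond exhaustive search.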
Cryptographic math is tricky (to say the least) to get right, and what might initially look like undefeatable math is often found to be full of flaws that allow significantly fast breaking. That’s why encryption standards and key sizes change all the time, as old ciphers get weakened and new, more resistant, ciphers emerge.
Symmetric Versus Asymmetric Keys
If the encryption key that is used to encrypt something is the same as what is used to decrypt it later on (such as in our simple example above, the 1), then the key is called “symmetric.”
If the key used to encrypt something is different than what is used to decrypt it, it’s called “asymmetric.”
Asymmetric ciphers are also known as public key encryption: one party holds a private key that only they know, while everyone else can have the “public” key, and as long as no one else learns the private key, it all works securely. However, symmetric encryption is usually faster and stronger for a given key size.
These days many encryption ciphers are well known and well tested, becoming industry, if not worldwide, standards. Popular symmetric ciphers include the Data Encryption Standard (DES), Triple DES (3DES), and the Advanced Encryption Standard (AES). The first two are early examples and are no longer considered secure.
The latter, AES, is considered strong and is the most popular symmetric cipher used today. Symmetric key sizes usually range from 128 bits to 256 bits, but they gain length over time. Every single bit increase, say from 128 bits to 129 bits, usually doubles the strength of the key within the same cipher.
Popular asymmetric ciphers include Diffie-Hellman (the Hellman in Diffie-Hellman is profiled in the next blog), Rivest-Shamir-Adleman (RSA), and Elliptic Curve Cryptography (ECC). ECC is the new kid on the block and is just starting to be widely used.
Asymmetric key sizes typically range from 1024 bits to 4096 bits, although 2048 bits is considered the bare minimum for Diffie-Hellman and RSA today. ECC uses smaller key sizes, starting at 256 bits.
384 bits is considered sufficiently strong today. In general, asymmetric ciphers are used to securely transmit symmetric keys, which do the majority of the encryption, between a source and destination.
Cryptography is also used for verifying identities and content. Both instances use cipher algorithms known as cryptographic hashes. With this approach, the plaintext content to be verified is mathematically applied to a key (again just a series of 1s and 0s) to get a unique output (called the hash result or hash).
An identity or content can be hashed at any point in time and be re-hashed again, and the two hashes can be compared to confirm that the hashed content has not changed since its original hashing.
Common hashes are Secure Hash Algorithm-1 (SHA-1), SHA-2, and SHA-3. SHA-1 was found to have some cryptographic weaknesses (also shared with SHA-2), and so SHA-1 is being retired. SHA-2 is becoming the most popular hash, but cipher experts are already recommending that SHA-3 be used.
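The hash-and-compare verification flow described above can be demonstrated with Python's standard `hashlib` module, which implements SHA-2 and SHA-3:

```python
import hashlib

# Hash some content; re-hashing it later and comparing digests
# confirms the content is unchanged since the original hashing.
content = b"quarterly-report-v1"
original_digest = hashlib.sha256(content).hexdigest()

# Later verification: re-hash and compare.
assert hashlib.sha256(content).hexdigest() == original_digest

# Any change to the content, even one character, yields a
# completely different digest.
assert hashlib.sha256(b"quarterly-report-v2").hexdigest() != original_digest

# SHA-3 is available in the same module for newer deployments.
sha3_digest = hashlib.sha3_256(content).hexdigest()
assert len(sha3_digest) == 64  # 256 bits = 64 hex characters
```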
Most cryptographic solutions use symmetric, asymmetric, and hashing algorithms to produce the desired protection. Many countries, like the US, have a standards body that analyzes and approves various ciphers for government use. The officially approved ciphers often become used around the world.
In the US, the National Institute of Standards and Technology (http://www.nist.gov), in conjunction with the National Security Agency (http://www.nsa.gov), holds public contests in which cryptographers around the world are invited to submit their own ciphers for analysis and selection.
It’s conducted in a fairly public way and often even the losers agree on the final selections. Unfortunately, the NSA and NIST have also been accused, at least twice, of intentionally weakening official standards.
This has created tension, and many people no longer trust what NIST and NSA declare as good cryptography.
Cryptography underlies much of our online digital world. Cryptography protects our passwords and biometric identities, and it is used in digital certificates.
Cryptography is used every time we log on to our computer and connect to an HTTPS-protected website. It is used to verify downloaded software, to secure email, and verify computers to each other.
Encryption is used to protect hard drives and portable media against unauthorized viewing, to prevent OS boot sector corruption, and to protect wireless networks.
It is used to sign programming, scripts, and documents. It allows us to have private connections over the public Internet to our companies and computers, and it is behind almost all credit card and financial transactions in the world.
Good cryptography is the enemy of spies, tyrants, and authoritarian regimes. It is not hyperbole to say that without cryptography, the Internet would not be the Internet and our computers would not ever be under our control.
There are a host of cryptographic attacks. The following sections will explore a few of the more prominent ones.
Many attacks are simply theoretical attacks that find mathematical weaknesses. Without a mathematical weakness, a given cipher can usually withstand a brute force attack of about 2^(n-1) guesses, where n is the number of bits in the key.
Thus, a 128-bit key (2^128 possible combinations) should be capable of withstanding 2^127 guesses on average before it falls. Attackers have now successfully weakened SHA-1, using math to find and prove flaws, to an effective strength of something like 2^57.
Although 2^127 is considered unbreakable (at least for now), 2^57 is either breakable today or will be readily breakable in the near future, without the hacker needing a tremendous amount of computing power.
Many attacks are successful because they have a hint (also known as a crib). The crib is usually in the form of a known set of bits or bytes, in either the ciphertext, plaintext content, or private key. A crib has the effect of shortening the possible number of bits in the protective cipher key.
Side Channel Attacks
Side channel attacks exploit an unforeseen implementation artifact to more easily determine the secret keys. A common example is when a computer's CPU emits a different sound or electromagnetic signature when processing a 0 versus a 1.
Thus, someone with a very sensitive listening device might be able to determine what 0s and 1s a computer is processing when accessing a private key. Another related example is an attacker being able to determine which keyboard keys you are typing simply because they record the sound of you typing.
The vast majority of successful attacks against cryptography in the real world do not attack the cipher’s math or keys. Instead, attackers find implementation flaws that are the equivalent of placing a locked door’s key under the doormat. Even strong math cannot save a weak implementation.
There are many other types of cryptographic attacks, although those listed above are the most common. The only defense against cryptographic attacks is good, proven math, secure implementations, and invisible or easy-to-understand end-user interfaces. Nothing else matters.
While computer systems have always been good at generating lots of events, humans and their alerting systems haven't been as good at making sense of them. To most computer users, event log files are full of thousands of events that muddy up any chance for true maliciousness to be detected.
The average time from an initial compromise by a hacker to the exfiltration of private data or login credentials is usually measured in minutes to a few days.
Most attackers (70% to 80%) remain in the system for long periods of time (months) before discovery, and breaches are discovered by internal resources only about 10% of the time.
This is despite the evidence that most breaches are in the event logs and would have likely been detected if only the logs were looked at or managed correctly. To be clear, I’m talking about computer system event logs and also the log files of computer security defense devices (e.g. firewalls, intrusion detection systems, and so on).
Traits of a Good Security Event Message
Unfortunately, most computer security defenses generate thousands, if not billions, of event log messages that do not indicate maliciousness. Or if they do indicate actual maliciousness, they document an event that has very, very low risk to an environment (like when a firewall logs a blocked packet).
The end result is that most security event logs are very “noisy,” meaning full of more useless information than useful. With that in mind, a good computer security event message should have these traits:
Low false positives and low false negatives, meaning that an occurrence likely indicates true maliciousness
Readily understood description of the event
As much surrounding detail as can be captured and useful to investigators
Generation of the event always triggers an incident response investigation
These traits are the Holy Grail of intrusion detection.
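As an illustration, the traits above can be sketched as a structured event record. This is a hypothetical format of my own devising, not any product's schema; the field names are assumptions chosen to map onto the four traits:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of a high-value event record: a readily
# understood description, rich surrounding detail, and a flag that
# forces an incident response investigation whenever it fires.
@dataclass
class SecurityEvent:
    description: str                  # readily understood summary
    source_host: str
    details: dict = field(default_factory=dict)   # surrounding detail
    triggers_investigation: bool = True           # always investigate
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

event = SecurityEvent(
    description="Logon to honeypot account 'svc_backup' from workstation",
    source_host="ws-0142",
    details={"account": "svc_backup", "protocol": "SMB"},
)
assert event.triggers_investigation
```

An event like this one, which comes from a source designed to never generate false positives (a honeypot), is exactly the kind of low-noise, high-value message the traits describe.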
Advanced Persistent Threats (APTs)
Advanced persistent threat (APT) attacks are conducted by professional criminal groups and have been responsible for compromising a large majority of businesses, military systems, and other entities over the last decade.
In fact, most security experts believe that all Internet-connected entities have been successfully compromised by APTs, or at the very least, could easily, at will, be compromised by an APT.
APTs are run by full-time, professional hackers who are different from traditional hackers in the following ways:
They intend to remain permanently engaged after the initial compromise.
They do not “run” when discovered.
They have dozens to hundreds of compromises and exploits they can use, including zero-days.
They always get total ownership of the environment.
Their objective is often stealing the victim's intellectual property (IP) over the long term.
Their origination is often a “safe harbor” country that will never prosecute them for their activities. (Indeed, they are often state-sponsored and celebrated.)
The reason why APTs are covered in this blog is that they are more difficult to detect using traditional intrusion detection—not impossible, just difficult without preparing and adjusting traditional intrusion detection methods. Some of the newer methods covered in this blog are becoming quite accurate at detecting and preventing APTs.
Types of Intrusion Detection
There are two basic types of intrusion detection: behavior-based and signature-based. Many intrusion detection systems are a combination of both methods.
Also known as anomaly detection, behavior-based intrusion detection looks for behaviors that indicate maliciousness.
For example: a file trying to copy itself into another file (for example, a computer virus), a program surreptitiously redirecting a browser away from its user-intended URL (for example, adware or a MitM attack), an unexpected connection to a honeypot, or a person copying all the contents of an authentication database (for example, credential theft).
The basic idea behind behavior detection is that there are too many bad things to individually identify, so look for their malicious behavior instead. It makes great sense.
Signature-based intrusion detection systems take the opposite approach. The premise is that malicious behaviors change too often, or that legitimate programs create too many false-positive indications, for behavior detection to be reliable.
Antivirus scanners are the perfect example of signature-based programs. They contain millions of unique bytes (signatures), which if detected will indicate maliciousness.
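A signature scan can be sketched in a few lines. The signatures below are made-up placeholders; real antivirus engines hold millions of signatures and add wildcards, heuristics, and emulation on top of simple byte matching:

```python
# Minimal sketch of signature-based detection: look for known
# malicious byte sequences inside content. Signature bytes and
# names here are entirely invented for illustration.
SIGNATURES = {
    b"\xde\xad\xbe\xef": "Example.Dropper.A",
    b"EVILPAYLOAD":      "Example.Trojan.B",
}

def scan(content: bytes):
    """Return the names of all signatures found in the content."""
    return [name for sig, name in SIGNATURES.items() if sig in content]

assert scan(b"hello world") == []                     # clean file
assert scan(b"xxEVILPAYLOADxx") == ["Example.Trojan.B"]
```

The obvious weakness, and the reason behavior-based detection exists, is that an attacker who changes even one signature byte evades the scan.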
Intrusion Detection Tools and Services
In a general sense, any computer defense hardware or software that looks for and indicates maliciousness is an intrusion detection program. This includes firewalls, honeypots, anti-malware programs, and general event log management systems. Some experts only like to include solutions with “intrusion detection” in their name.
Intrusion Detection/Prevention Systems
Intrusion detection systems (IDSs) are purposefully built to detect maliciousness, usually using a combination of behavior- and signature-based methods.
Intrusion prevention systems (IPSs) detect and prevent maliciousness. Many IDSs come with IPS preventive mitigations, so the term IDS can easily mean IPS as well. Few hold to the strict definition.
Some defenders are hesitant to activate automatic preventative mitigations even if they are available because of the frequent false positives many IDSs/IPSs have. Other times, for lower risk IPS systems like anti-malware solutions, defenders want the automatic prevention enabled.
An IDS/IPS is further classified as being a host-based IDS/IPS (HIDS/HIPS) or a network-based IDS/IPS (NIDS/NIPS), depending on whether the defense protects an individual host system or analyzes packets running across the network for maliciousness.
Event Log Management Systems
Behind every successful intrusion detection or event log solution is a system that detects and collects events from one or more “sensors.”
In any enterprise with more than a few computer devices, it becomes essential to collect and analyze these events as a whole to get the best benefit. Event log management systems are responsible for collecting these events, analyzing them, and generating alerts.
How well and accurately these systems do their job determines the effectiveness or ineffectiveness of the overall system. There are a lot of components and considerations for any event log management system.
Detecting Advanced Persistent Threats (APTs)
Professional APT hackers are very skilled at infiltrating a company with a minimum of detectable malicious activity. For many years it was considered difficult, if not impossible, to detect them.
But the field of intrusion detection eventually rose up to the challenge, and now there are several products, services, and companies that are very good at detecting APTs and APT-like activities.
Operating system vendors are building in features and services that are significantly better at detecting these sorts of online criminals. Examples of this are Microsoft's Advanced Threat Analytics and Advanced Threat Protection services.
Many companies now routinely follow dozens of different APT groups, readily detecting where they go and what they are doing. Many companies offer services that can quickly detect APTs in your environment and alert you to their presence.
Probably the biggest difference between traditional intrusion detection and the newer forms is the ability to collate data across many, many companies distributed across the Internet.
Firewalls are a great example of technology being a victim of its own success.
Firewalls have worked so well at defending computers for three decades that the threats they were created to prevent have almost stopped even being tried. The bad guys are giving up! At least on those types of threats.
Some experts have argued about whether firewalls are even necessary anymore, but most believe that firewalls, like anti-malware scanners, are an essential item in anyone's computer security base configuration.
What Is a Firewall?
In a nutshell, a firewall is a software or hardware component designed to prevent unauthorized access between two or more security boundaries.
This access control is traditionally accomplished by protocol name or port number, usually at the network level using packet filtering.
Many firewalls can also allow or deny traffic based on user names, device names, group membership, and information found in the upper levels of the application traffic.
They often offer additional and advanced features such as high-level packet analysis, intrusion detection/prevention, malware detection, and VPN services. Most firewalls come with detailed log files. Turning on any firewall will usually result in a log file full of entries.
All firewalls have rules (or policies). The most common default firewall rule is this: Allow anything to go out by default, but deny any undefined inbound connections that were not previously created by an outbound connection.
Very secure firewalls also restrict any previously undefined outbound traffic. Unfortunately, when very strict rules are implemented, it often causes too much operational interruption or management, and so most implementers go with the most common default rule.
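The common default rule can be sketched as a tiny stateful filter. This is a conceptual illustration only, with invented field names; real firewalls track full connection tuples, flags, and timeouts:

```python
# Sketch of the most common default firewall rule: allow all outbound
# traffic, and allow inbound traffic only if it belongs to a
# connection that an inside host opened first (stateful matching).
established = set()  # (local, remote) pairs opened from the inside

def allow(direction: str, local: str, remote: str) -> bool:
    if direction == "outbound":
        established.add((local, remote))   # remember the connection
        return True                        # outbound allowed by default
    # Inbound: permitted only as a reply to an established connection.
    return (local, remote) in established

assert allow("outbound", "10.0.0.5:50000", "93.184.216.34:443")
assert allow("inbound", "10.0.0.5:50000", "93.184.216.34:443")   # reply
assert not allow("inbound", "10.0.0.5:22", "203.0.113.9:40000")  # unsolicited
```

A "very secure" firewall in the text's sense would also require the outbound connection itself to match an explicit rule before being allowed and remembered.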
Where Are Firewalls?
Firewalls can be placed at the network level or directly on computer hosts.
Traditionally, most firewalls are located as network devices between two or more network segments. The only thing that has changed is that the number of managed segments has increased to the point that some firewalls can manage dozens of segments at once.
Today’s newly emerging software-defined networks (SDNs) contain some packet-forwarding components that can directly trace their origins back to traditional firewalls.
Many people believe that even a firewall-protected network cannot be trusted. Bill Cheswick famously described the inside of a network firewall's perimeter as a "soft, chewy center."
Cheswick was stating that we must make sure all our hosts are securely configured and hardened to help defend against things that get past the network firewall perimeter.
Host-based firewalls help with that process. They normally still work at the network and packet level, but they often have additional capabilities because they are integrated with the host and its operating system.
For example, the Windows Firewall can be easily configured on a per-service basis and by user and group.
Windows comes built-in with nearly 100 firewall rules that are enabled by the operating system even if you disable the user-controllable software application.
Many computer security boundary purists believe that every host should only be able to communicate with other explicitly defined hosts, following essentially very secure, strict firewall rules that define exactly what traffic is and isn’t allowed into and between all hosts.
This sort of ultra-granular control is considered to be the Holy Grail of firewalls. Unfortunately, the complexity and management of such firewalls make them unlikely to be widely scaled beyond some small, super-secure scenarios.
Advanced firewalls have been around for decades; the term usually refers to features that a traditional packet-filtering-only firewall doesn't typically offer.
A traditional firewall may be able to block by protocol (by name or number), but an advanced firewall can often block by almost any detailed, individual component of the protocol (sometimes called “deep-packet inspection”). Or it can aggregate multiple packets to identify specific attacks.
A traditional firewall may drop a particular number of packets, but only an advanced firewall can say you’re under a distributed denial-of-service attack. Application-level firewalls can look at the application layers of the network and detect maliciousness or prevent it from reaching the host.
For example, an advanced firewall could drop a buffer overflow sequence from reaching a web server. Advanced firewalls are so common that most firewalls are advanced to some degree.
What Firewalls Protect Against
Firewalls prevent malicious attacks originating from unauthorized network traffic. Traditionally, remote buffer overflow attacks against vulnerable services were the number one threat that firewalls prevented.
But over time, services became more robust (often because their underlying operating systems became more secure by default), and firewalls made it harder for attackers to be successful using these types of attacks.
Accordingly, due to the way they are implemented, few of today's attacks would be prevented by a firewall. For example, if an end-user can be tricked into running a Trojan horse program arriving in an email, there isn't much a firewall can do to prevent the subsequent maliciousness.
Still, because firewalls are readily available (often free and enabled by default) and can stop certain types of attacks, most people believe every network and computing device should have one activated. Whether you choose to implement a firewall or not, the fact that the choice is even debatable essentially points to the firewall's great success.
What Is a Honeypot?
A “honeypot” is any system set up for the express purpose of being a “fake” system to detect unauthorized activity. A honeypot can be a computer system, a device, a network router, a wireless access point, a printer, or anything else the honeypot administrator wishes to deploy.
A “honeynet” is a collection of honeypots. A honeypot can be created by deploying a real but otherwise unused system or by deploying specialized honeypot software that emulates systems.
The emulation can be anywhere along the Open Systems Interconnection (OSI) model layers—Physical, Data-Link, Network, Transport, Session, Presentation, or Application—or any combination of these layers.
There are many open-source and commercial honeypot options, each offering various features and realism.
The buyer must beware, though. Some honeypot products have been around for longer than a decade, but the vast majority of honeypot offerings, free or commercial, come and go within a few years, so be aware of longevity issues.
How well a honeypot system emulates or works at a particular layer determines its “interaction.” A “low-interaction” honeypot only mimics very simplistic port connections and logs them. The connecting user may or may not be offered a login screen, but usually, a successful login isn’t allowed.
A “medium-interaction” honeypot allows the user to log on and tries to offer up a moderate but realistic experience. If it is emulating a website, often it tries to emulate a decent but fairly static web site.
If it does FTP emulation, the FTP site allows logins, has files that can be downloaded, and supports multiple FTP commands. “High-interaction” honeypots mimic a real production system to the point that a hacker interacting with one should not be able to tell the difference between it and a real production asset.
If it is emulating a website, the website is broad and realistic-looking with frequently updated content.
Lower emulation is far easier to maintain, but sometimes the goal of the honeypot requires higher interaction. Of course, a real system offers the best emulation but can be more difficult to configure and manage over the long-term.
Why Use a Honeypot?
There are many reasons to have a honeypot, including:
As an early warning system to detect malware and hackers
To determine the intent of a hacker
For hacker and malware research
To practice forensic analysis
A honeypot, when appropriately fine-tuned, is incredibly low-noise and high-value, especially when analyzing logs or generating alerts.
For example, firewall logs are always full of tens of thousands of dropped packet events every day, most of which have nothing to do with maliciousness.
And even if there was a malicious probe, good luck in deciphering which one of those packets among the multitude is the one you are supposed to generate an alert on and respond to.
A honeypot is a fake system, and by design, no one (or thing) should be attempting to connect to it.
You have to spend a little time filtering out the normal broadcast traffic and legitimate connection attempts (for example from your antivirus updating programs, patch management and other system management tools, and so on).
But once that is done (which usually takes anywhere from two hours to two days), any other connection attempt is, by definition, malicious.
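The post-tuning alert logic is trivially simple, which is exactly the point. Here is a hypothetical sketch (the addresses are illustrative placeholders): anything not on the short allowlist of known-legitimate management traffic is, by definition, worth an alert:

```python
# Sketch of honeypot alerting after the initial tuning pass:
# connections from known management sources are filtered out, and
# everything else is treated as malicious by definition.
# These IP addresses are invented for illustration.
LEGITIMATE_SOURCES = {
    "10.0.0.10",   # patch management server
    "10.0.0.11",   # antivirus update / vulnerability scanner
}

def classify(source_ip: str) -> str:
    return "expected" if source_ip in LEGITIMATE_SOURCES else "ALERT"

assert classify("10.0.0.10") == "expected"
assert classify("10.0.9.77") == "ALERT"  # nothing should touch a honeypot
```

Compare this with the firewall-log problem described above: no statistical analysis or correlation is needed, because the honeypot's baseline of legitimate traffic is nearly empty.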
A honeypot is absolutely the best way to catch an intruder who has bypassed all other defenses. It sits there waiting for any unexpected connection attempt.
I’ve met and tracked a lot of hackers and pen testers in my decades of experience, and one fact that is true is that they search and move around a network once they have gained initial access.
Few hackers know which systems are or aren’t honeypots, and when they move around and simply “touch” the honeypot, you’ve got them.
Case in point: One of the most common attack worries is advanced persistent threats (APTs). They move laterally through the network with ease, usually without detection.
But place one or more honeypots as fake web servers, database servers, and application servers, and you’ll be hard-pressed not to detect an APT.
Sure, you’ve got hackers who will simply go from their first internal compromise to a specific asset or set of assets, but that is rarely the case. Usually, even after compromising an intended primary target, they will look around.
And when they look around and touch a honeypot, boom, you’ve got them. Or at least you know about them. I’m a big fan of placing low- to medium-interaction honeypots around the internal environment to give early warning of a successful compromise.
Catching My Own Russian Spy
I’ve deployed dozens of honeypot systems over the years, but one of my favorite stories is when I was deploying a honeynet at a defense contractor. The contractor was concerned about external hacking, but our honeypots quickly turned up an unauthorized insider attack.
We tracked it back to a Russian data entry person in the payroll department. We already had a camera installed in the department so we could see everything that she was doing.
She had inserted an unauthorized wireless card in her PC to “bridge” two air-gapped networks, and she was exfiltrating large amounts of private data to an external party.
After two days of watching and determining her intent (she was definitely going after top secret projects), we walked into the room to confront her along with corporate security. She immediately broke into tears and was such a good actress that had we not been already watching her for days I would have believed her.
She was an uber-hacker, but the payroll department had thought she was so computer illiterate that they had sent her to keyboarding school to learn how to type better.
She was just one of many Russian employees that had been hired as part of a temp agency contract. In the end, all were found to be spying and dealt with accordingly.
Honeypot Resources to Explore
The Honeynet Project (http://www.honeynet.org) is the single best place for honeypot information and forensics. Its Honeywall CD-ROM image is a great, free, all-in-one honeynet software for users not scared of a little Linux configuration.
It is menu-driven, full of functionality, and easier to get up and running than a brand-new Honeyd install.
Honeyd (http://www.honeyd.org) is a flexible, free, open-source, feature-rich honeypot software program, but it requires solid Linux and network skills to deploy and operate.
It performs excellent, broad emulation of over 100 operating systems, and it can be easily linked with other products and scripts. On the downside, it hasn’t been updated in years. I think it’s a good first-time honeypot for those who want to see everything that is possible.
My favorite honeypot software is KFSensor (http://www.keyfocus.net). It’s a commercial product that only works on Windows computers, but the maintainer is constantly updating and improving the product. KFSensor has its flaws, but it’s feature-rich and fairly easy to set up.
It has hundreds of options and customizations, and it allows logging and alerting to a variety of databases and logs. Free trial versions are available.
There are many (more than a hundred) honeypot products out in the world. A few new ones appear on the Internet every year. If you’re interested in honeypots, give some of them a try.
There is no doubt that every corporate entity should be running a honeynet of honeypots if they are interested in the earliest warning possible of a successful hacker or malware infiltration.
Hacking passwords has always been a popular activity for cyber attackers, although the newer methods have evolved from simple password guessing.
The Hollywood notion of the hacker is someone who sits in front of a login screen and simply guesses the correct password out of thin air. Although this does happen, it is fairly rare. Real password hacking usually involves a lot more guesses or no guessing at all.
To understand passwords, you really have to understand authentication systems in general. The user (or device), also known as the security principal or subject, must submit something (such as a text label, a certificate, and so on) that uniquely identifies them and their login to the authentication service. For most traditional password scenarios, this is a label known as a username.
The subject must then be able to prove ownership of the label, which is usually done by submitting another bit of information tied to the label that only the subject and the authentication system know and agree upon.
This is what the password is. When the user submits the correct password associated with the username, that proves the subject controls the username, and the system allows them access (in other words, they are authenticated) and may track them while they are accessing the system (which is called accounting or auditing).
Most operating systems also ensure that the subject is supposed to access the objects they are trying to access (a process called access control). Thus, you might hear the entire authentication process known as the four As (authentication, access, auditing, and accounting). They are related but usually evaluated separately.
A password can be any set of characters that the authentication system accepts.
For example, on a Microsoft Windows system, the local Security Accounts Manager (SAM) database or the networked Active Directory authentication database (NTDS) can accept thousands of different characters, many of which require special keystroke combinations (for example, Alt+0128) to create.
Passwords are stored in a local and/or networked database known as the authentication database.
The authentication database is usually protected or encrypted, and it is rarely directly accessible by non-privileged users. Passwords are also often stored in local and/or remote memory (if networked) while the user or device is active.
Most typed passwords are converted into some other intermediary form for security reasons. In most traditional operating systems, passwords get converted into a cryptographic hash.
The hash can be used in the authentication sequence itself or simply stored for later authentication purposes. Common password hashes on Windows systems are LAN Manager (LM), NT LAN Manager (NT), and PBKDF2 for local password cache storage.
Linux systems often use MD5, Blowfish, SHA-256, or SHA-512. The best hashes create and use a random value (called the “salt”) during the creation and storage of the password hash. This makes it harder for a hacker who obtains the password hash to convert it back to its original plaintext value.
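Salted hashing is available directly in Python's standard library via PBKDF2, which makes the mechanism easy to demonstrate. Note how the random salt means two identical passwords produce different stored hashes:

```python
import hashlib
import os

# Sketch of salted password hashing using PBKDF2 from the standard
# library. The random per-password salt defeats precomputed tables
# and forces attackers to crack each stolen hash individually.
def hash_password(password: str, salt: bytes = None) -> tuple:
    salt = salt or os.urandom(16)          # random 16-byte salt
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), salt, 100_000)  # 100k iterations
    return salt, digest

salt1, hash1 = hash_password("correct horse")
salt2, hash2 = hash_password("correct horse")
assert hash1 != hash2                      # different salts, different hashes

# Verification re-uses the stored salt alongside the stored hash.
assert hash_password("correct horse", salt1)[1] == hash1
```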
Secure network authentication scenarios do not pass the password or the password hash across a network link. Instead, an authentication challenge is performed.
Usually, the remote server, which already knows the client’s password or password hash, creates a random value and performs a cryptographic operation that only the legitimate client, with the same legitimate password or hash, can also correctly perform.
The server sends the random value to the client, and the client uses the password (or intermediate representation) to perform the expected calculations and sends the result back to the server.
The server compares the result sent by the client to its own internally expected result for the client, and if the two agree, the client is successfully authenticated.
This way, if an intruder captures the packets used in network authentication, they will not immediately have the password or the password hash, although with cryptographic analysis it is sometimes possible to work back to one or the other over time.
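The challenge-response flow just described can be sketched with an HMAC, using only the standard library. This is a simplified model of the general technique, not any specific protocol such as NTLM or Kerberos; note that only the random challenge and the computed response ever "cross the wire":

```python
import hashlib
import hmac
import os

# Both sides already share a secret (here, a hash of the password);
# the password itself never travels over the network.
shared_secret = hashlib.sha256(b"hunter2").digest()

# Server: create a random challenge and send it to the client.
challenge = os.urandom(16)

# Client: prove knowledge of the secret by computing an HMAC over
# the challenge and sending back only the result.
client_response = hmac.new(shared_secret, challenge, hashlib.sha256).digest()

# Server: compute the expected value independently and compare
# (in constant time, to avoid timing side channels).
expected = hmac.new(shared_secret, challenge, hashlib.sha256).digest()
assert hmac.compare_digest(client_response, expected)  # authenticated
```

An eavesdropper sees only the challenge and the HMAC result; because the challenge is random each time, a captured response cannot simply be replayed later.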
Because passwords can easily be stolen (and sometimes guessed), authentication systems are increasingly asking for additional “factors” for a subject to prove ownership of a logon label.
There are three basic types of factors: something you know (such as a password, PIN, passphrase, or screen pattern), something you have (such as a security token, cell phone, or smart card), or something you are (such as a biometric identifier, like a fingerprint, retina print, or hand geometry).
In general, the more factors required to authenticate, the better. The idea is that it is harder for an attacker to steal two or more factors than it is to steal just one factor.
Using two factors is known as two-factor authentication (or 2FA), and using more is known as multi-factor authentication (or MFA). Using two or more of the same factor is not as strong as using different types of factors.
There are many ways to hack passwords, including the methods described in the following sections.
Just like in the movies, hackers can simply guess a person’s password. If the password is simple and the hacker knows something about the person, they can try guessing a password based on the person’s interests. It’s well known that users often create passwords named after themselves, loved ones, or their favorite hobbies.
The hacker can manually try to guess a person’s password at a login screen or use one of the many hacker tools for automating password guessing. If the automated password-guesser blindly tries every possible password combination, it is known as a “brute force” guessing attack.
If it uses a predefined set of possible password values, which is often a dictionary of words, then the password guessing tool is known as a “dictionary” password guessing attack.
Most password guessers use a tool that begins with a dictionary set of words that then supplements the plaintext words with different combinations of numbers and special characters to guess at more complex passwords.
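The dictionary-plus-mangling approach just described can be sketched in a few lines. The word list, mangling rules, and target password below are all illustrative; real tools ship with far larger dictionaries and rule sets.

```python
# A minimal sketch of dictionary-based password guessing with common
# "mangling" rules: capitalization, leetspeak substitutions, and
# appended digits/symbols. All values here are hypothetical examples.
WORDLIST = ["rosebud", "password", "dragon"]
SUFFIXES = ["", "1", "123", "!", "2019"]
SUBSTITUTIONS = str.maketrans({"a": "@", "o": "0", "s": "$"})

def candidates(words):
    """Yield each dictionary word plus its mangled variants."""
    for word in words:
        for base in (word, word.capitalize(), word.translate(SUBSTITUTIONS)):
            for suffix in SUFFIXES:
                yield base + suffix

def guess(target, words=WORDLIST):
    """Return the matching candidate, or None if the target resists."""
    for cand in candidates(words):
        if cand == target:
            return cand
    return None

print(guess("Rosebud123"))  # -> Rosebud123 (a mangled dictionary word falls quickly)
```

This is why a password like “Rosebud123” is barely stronger than “rosebud”: the mangling rules cover exactly the tweaks users habitually make.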
NOTE Once in my life I literally randomly guessed a password for a user I knew nothing about and had it work on my first try.
The password I guessed was “Rosebud,” because I had just finished watching the famous Orson Welles movie, Citizen Kane, in which discovering the meaning of that previously unknown word drives the plot of the whole movie. But that was the one time in my career that this happened.
Phishing
The hacker can also use a realistic-looking but fraudulent online request (via website or email) to trick the user into revealing their password. This is known as “phishing.”
If the phishing attempt uses what was previously private or internal information, it’s known as “spearphishing.” Hackers can also use a phone or show up in person to attempt to trick users out of their passwords. It works far more often than you would think.
Keylogging
If the hacker already has elevated access to the victim’s computer, they can install a program called a “keylogger,” which captures typed keystrokes. Keyloggers are great for capturing passwords, and they don’t care if the password is long or complex.
Hash Cracking
If the hacker can access the victim’s authentication database, they can access the stored passwords or, more likely, password hashes. Strong hashes are cryptographically resistant to being converted back to their original plaintext forms. Weaker hashes, unsalted hashes, and even strong hashes of short passwords are subject to “hash cracking.”
A hash cracker generates candidate passwords (using either brute-force or dictionary methods), converts each one to a hash, and then compares the newly created hash to the stolen hash.
If they match, then the hacker has the plaintext password. “Rainbow tables” are related to traditional hash crackers, except that they store precomputed intermediate forms of the password-to-hash computation, which significantly speeds up the cracking.
There are many free password guessing and cracking programs available on the Internet. If you’re interested in trying a password hash cracker, the open source John the Ripper (http://www.openwall.com/john/) is a great one to learn with.
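The core hash-cracking loop described above fits in a few lines. MD5 is used here only because it is a classic example of a fast, weak hash; the stolen hash and word list are made up for the example.

```python
import hashlib

def crack(stolen_hash_hex, wordlist):
    """Dictionary hash cracking: hash each candidate word and compare
    it to the stolen hash. Real tools like John the Ripper add mangling
    rules, optimized hashing, and support for many hash formats."""
    for word in wordlist:
        if hashlib.md5(word.encode()).hexdigest() == stolen_hash_hex:
            return word  # plaintext recovered
    return None

# Simulate a hash stolen from an authentication database:
stolen = hashlib.md5(b"dragon").hexdigest()
print(crack(stolen, ["letmein", "monkey", "dragon"]))  # -> dragon
```

Notice that nothing is “decrypted”: the attacker only hashes guesses forward and compares, which is why fast unsalted hashes like MD5 fall so quickly and why slow, salted hashes (covered in the defenses section) matter.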
Credential Reuse
If the hacker already has elevated access, they can steal the user’s password hash or other credential representation from computer memory or the stored authentication database, and then replay it to other computers that accept authentication using the stolen credentials.
This type of attack, and in particular one known as “Pass-the-Hash” (or PTH), has become quite popular over the last decade.
In a traditional PTH scenario, the attacker first breaks into one or more regular end-user computers, locates the locally elevated account hashes, and then uses that access to eventually access the computer’s or network’s storage of all credentials, which essentially compromises the whole IT environment.
PTH attacks have happened to nearly every company and entity connected to the Internet over the last decade.
Hacking Password Reset Portals
Many times, the quickest way to hack a password is to hack the password’s related reset portal. Many authentication systems, especially the big, online systems, allow the end-user to answer a series of predefined questions to reset their password.
Hackers have found that it is far easier to guess or research the answer to a particular victim’s reset questions (such as “What is your mother’s maiden name?” “What is the first elementary school you went to?”) than it is to guess at their password. Many big celebrity hacks have occurred using this method.
Password Defenses
There are just as many ways to defend against password hacks as there are ways to attack them.
Complexity and Length
Long and complex passwords make it significantly harder for password-guessing and -cracking tools to succeed. Length helps more than complexity (unless you can guarantee truly random, high-entropy characters). Today most password experts recommend passwords of 12 characters or longer, and that’s just for regular users.
Privileged user accounts should use 16 characters or more. The recommended minimum password length increases over time. However, length and complexity have no effect on credential reuse attacks, like PTH attacks.
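A quick back-of-the-envelope comparison shows why length beats per-character complexity. Treating each character as uniformly random (real user-chosen passwords have far less entropy than this), the search space is charset-size raised to the length, i.e. length × log2(charset size) bits:

```python
import math

def entropy_bits(length, charset_size):
    """Bits of entropy for a password of uniformly random characters."""
    return length * math.log2(charset_size)

# 8 characters drawn from all 95 printable keyboard symbols
# vs. 16 characters drawn only from the 26 lowercase letters:
print(round(entropy_bits(8, 95), 1))   # -> 52.6
print(round(entropy_bits(16, 26), 1))  # -> 75.2
```

A 16-character all-lowercase password presents a vastly larger search space than a short “complex” one, which is why modern guidance emphasizes length first.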
Frequent Changes with No Repeating
Enforcing a maximum number of days that a particular password can be used (usually 90 days or less) with no repeating is a common password defense recommendation/requirement.
The thinking is that it usually takes a password guesser a long time to guess or crack a long and complex password, but that it can eventually be done with enough time and computing power. Forcing periodic changes reduces the chance that the hacker will succeed while the password is still in use.
Not Sharing Passwords Between Systems
This is one of the best defenses, but very hard (if not impossible) to enforce. Users should never use the same password between any system that has a different authentication database.
Re-using credentials between different systems increases the risk that a hacker will compromise one of the systems, capture your shared login credentials, and then use them to attack another.
Account Lockout
This is a frequent password-guessing defense. For systems where hackers try to guess against active login screens (that is, interactively), the authentication system should lock out or freeze the account after a set number of incorrect password guesses.
The lockout can be temporary or can require that the end user call the help desk to get the account reactivated or reset it at a password reset portal. This defensive measure defeats many password-guessing hackers and tools, but it has its own risks: the lockout feature can be abused by a hacker to create a widespread denial-of-service lockout attack.
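A minimal sketch of the lockout logic just described, with a temporary cooldown after too many consecutive failures. The thresholds are illustrative; real systems persist this state and often add per-IP throttling as well.

```python
import time

MAX_ATTEMPTS = 5               # illustrative threshold
LOCKOUT_SECONDS = 15 * 60      # illustrative cooldown

# username -> (consecutive_failures, locked_until_timestamp)
failures = {}

def check_attempt(username, password_ok, now=None):
    """Return 'ok', 'retry', or 'locked' for a login attempt."""
    now = time.time() if now is None else now
    count, locked_until = failures.get(username, (0, 0.0))
    if now < locked_until:
        return "locked"               # still in the cooldown window
    if password_ok:
        failures[username] = (0, 0.0)  # success resets the counter
        return "ok"
    count += 1
    locked_until = now + LOCKOUT_SECONDS if count >= MAX_ATTEMPTS else 0.0
    failures[username] = (count, locked_until)
    return "locked" if count >= MAX_ATTEMPTS else "retry"
```

The denial-of-service risk mentioned above is visible here: anyone who can submit five bad guesses for a victim’s username locks the victim out too.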
Strong Password Hashes
Authentication systems should always use strong hashes and prevent the use of weak, vulnerable hashes. Most operating systems default to strong hashes, but some allow weak hashes to still be used for backward compatibility purposes.
In Microsoft Windows, LM hashes are considered weak and shouldn’t be used. In Linux, MD5 and SHA-1 hashes are considered weak.
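For application developers storing their own credentials, a strong scheme means a slow, salted hash. A minimal sketch using PBKDF2 from the Python standard library; the iteration count here is illustrative, so follow current guidance when choosing one:

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative; tune to current guidance and hardware

def hash_password(password):
    """Salted, iterated hashing: a unique per-user salt defeats rainbow
    tables, and the iteration count slows down hash crackers."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

Contrast this with the fast, unsalted MD5 in the hash-cracking example earlier: here every guess costs the attacker 200,000 hash iterations, and the per-user salt means each stolen hash must be attacked separately.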
Don’t Use Passwords
These days the conventional wisdom is that password requirements are getting so long and complex that most users might be better off not using a password at all. Instead, users should use 2FA, biometrics, security tokens, digital certificates, and anything other than a simple login name and password combination.
This has been the recommendation for decades, but it is now becoming fairly common both in company networks and on popular online systems. If a website allows you to use something better than a password, use it.
Credential Theft Defenses
Because credential theft attacks such as PTH have become so popular lately, many operating systems come with built-in anti–credential-theft defenses.
Most of these focus on making sure the passwords or password hashes aren’t available in memory to easily steal, or they don’t share the password or hash across network connections.