5 Most Common Firewall Configuration Mistakes
A misconfigured firewall can damage your organization in more ways than you think. Here’s where to look for the holes.
As security threats become increasingly advanced, managing your firewall configurations has never been more important. IT professionals spend much of their time worrying about flaws and vulnerabilities, yet according to Gartner research, 95% of all firewall breaches are caused by misconfiguration, not flaws.
Firewalls are an essential part of your network security, and a misconfigured firewall can damage your organization and give attackers easy access. Yet misconfigurations are alarmingly common. In my work I come across many mistakes in firewall configurations. Below are five of the most common types I encounter, along with advice on how to avoid them.
1. Broad policy configurations
Firewalls are often set up with an open policy of allowing traffic from any source to any destination. This is because IT teams don’t know exactly what they will need at the outset, so they start with broad rules and plan to tighten them later. In reality, due to time pressures or simply because it isn’t treated as a priority, they never get around to defining tighter firewall policies. This leaves the network in a perpetually exposed state.
Organizations should follow the principle of least privilege – that is, granting the minimum level of access that a user or service needs to function normally, thereby limiting the potential damage caused by a breach. It’s also a good idea to revisit your firewall policies regularly, review application usage trends, and identify new applications on the network and the connectivity they actually require.
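To make that kind of review concrete, here is a minimal sketch of the sort of check you might run against an exported rule list. The rule format and field names are illustrative assumptions, not tied to any vendor; the point is simply to flag allow rules whose source, destination, or service is "any".

```python
# Minimal sketch: flag overly broad "any-to-any" allow rules in a firewall rule export.
# The Rule structure and example rules below are illustrative, not vendor-specific.

from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    source: str       # e.g. "10.0.1.0/24" or "any"
    destination: str  # e.g. "10.0.2.15/32" or "any"
    service: str      # e.g. "tcp/443" or "any"
    action: str       # "allow" or "deny"

def overly_broad(rule: Rule) -> bool:
    """Flag allow rules where source, destination, or service is 'any'."""
    if rule.action != "allow":
        return False
    return "any" in (rule.source, rule.destination, rule.service)

rules = [
    Rule("temp-any", "any", "any", "any", "allow"),          # the "temporary" rule that never got removed
    Rule("web-to-db", "10.0.1.0/24", "10.0.2.15/32", "tcp/5432", "allow"),
]

for rule in rules:
    if overly_broad(rule):
        print(f"Review rule '{rule.name}': broader than least privilege allows")
```

A check like this won't tell you what the rule should be, but it gives you a shortlist of rules to question during each policy review.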
2. Risky rogue services and management services
Leaving services running on the firewall that don’t need to be there is another mistake I often find. Two of the main culprits are dynamic routing, which as a best practice should not typically be enabled on security devices, and “rogue” DHCP servers on the network distributing IP addresses, which can lead to availability issues caused by IP conflicts. I’m also surprised by the number of devices that are still managed using unencrypted protocols like telnet, despite the protocol being over 30 years old.
The answer to this problem is to harden devices and ensure that configurations are compliant before a device goes into production. This is something a lot of enterprises struggle with. But by configuring your devices based on the function you actually want them to fulfill and following the principle of least privilege, you will improve security and reduce the chances of accidentally leaving a risky service running on your firewall.
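As a rough illustration of a pre-production check, the sketch below probes a list of management addresses for services that shouldn’t be reachable, such as telnet. The device addresses and port list are placeholder assumptions; a real hardening review goes far beyond a simple port check.

```python
# Minimal sketch: check whether device management addresses still answer on
# risky or unnecessary ports (telnet, unencrypted HTTP). Addresses and ports
# here are illustrative assumptions, not a complete hardening baseline.

import socket

RISKY_PORTS = {23: "telnet (unencrypted management)", 80: "http (unencrypted management)"}
DEVICES = ["192.0.2.1", "192.0.2.2"]  # placeholder management IPs (TEST-NET range)

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in DEVICES:
    for port, description in RISKY_PORTS.items():
        if port_open(host, port):
            print(f"{host}: {description} is reachable on port {port}; harden before production")
```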
3. Non-standard authentication mechanisms
During my work, I often find organizations using routers that don’t follow the enterprise standard for authentication. For example, a large bank I worked with had all the devices in its primary data center controlled by a central authentication mechanism, but did not use the same mechanism at its remote office. Because corporate authentication standards were not enforced there, staff in the remote branch could use local accounts with weak passwords, and the branch had a different limit on login failures before account lockout.
This scenario reduces security and creates more vectors for attackers, as it’s easier for them to access the corporate network via the remote office. Organizations should ensure that all remote offices follow the same central authentication mechanism as the rest of the company.
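One way to catch this kind of drift is to scan device configurations for local-only authentication. The sketch below assumes a Cisco-IOS-like syntax and uses made-up config snippets; the keywords will differ by platform, so treat it as a starting point rather than a definitive audit.

```python
# Minimal sketch: flag device configurations that define local user accounts but
# no central authentication (TACACS+/RADIUS) server. The keywords follow a
# Cisco-IOS-like syntax and are assumptions; adjust them for your platform.

CENTRAL_AUTH_MARKERS = ("tacacs-server host", "tacacs server", "radius-server host", "radius server")

def uses_central_auth(config_text: str) -> bool:
    lowered = config_text.lower()
    return any(marker in lowered for marker in CENTRAL_AUTH_MARKERS)

def has_local_accounts(config_text: str) -> bool:
    return any(line.strip().lower().startswith("username ") for line in config_text.splitlines())

# Illustrative snippets standing in for configs pulled from each device.
configs = {
    "dc-core-fw": "tacacs server DC1\nusername breakglass privilege 15 secret 9 ...\n",
    "branch-fw":  "username admin privilege 15 password 0 branch123\n",
}

for device, config in configs.items():
    if has_local_accounts(config) and not uses_central_auth(config):
        print(f"{device}: local accounts only; no central authentication configured")
```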
4. Test systems using production data
Companies tend to have good governance policies requiring that test systems not connect to production systems or collect production data. In practice, this is often not enforced, because the people working in testing see production data as the most accurate way to test. The problem is that when you allow test systems to pull data from production, you bring that data into an environment with a lower level of security. The data could be highly sensitive, and it may also be subject to regulatory compliance. So if you do use production data in a test environment, make sure you apply security controls appropriate to the classification of that data.
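If production data must flow into test, masking it on the way is one practical control. The sketch below is a minimal illustration with assumed field names and a simple salted hash; real masking rules should follow your data classification and compliance requirements.

```python
# Minimal sketch: mask sensitive fields before production records reach a test
# environment. Field names and the hashing choice are illustrative assumptions.

import hashlib

SENSITIVE_FIELDS = {"email", "ssn", "card_number"}

def mask_record(record: dict) -> dict:
    """Replace sensitive values with a truncated salted hash so joins still work in test."""
    masked = {}
    for key, value in record.items():
        if key in SENSITIVE_FIELDS:
            digest = hashlib.sha256(f"test-salt:{value}".encode()).hexdigest()
            masked[key] = digest[:12]
        else:
            masked[key] = value
    return masked

production_row = {"customer_id": 1042, "email": "jane@example.com", "ssn": "123-45-6789", "plan": "gold"}
print(mask_record(production_row))
```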
5. Log outputs from security devices
An issue I see more often than I should is organizations not analyzing the log output from their security devices, or not logging with enough granularity. This is one of the biggest mistakes you can make in network security: not only will you not be alerted when you’re under attack, you’ll also have little or no traceability when investigating after a breach.
The excuse I often hear for not logging properly is that logging infrastructure is expensive, and hard to deploy, analyze, and maintain. However, the costs of being breached without being alerted or being able to trace the attack are surely far higher.
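Even a small amount of analysis beats none. The sketch below illustrates the idea with assumed, made-up log lines and a simplified format: count denied connections per source and flag the noisy ones. In practice you would feed firewall syslog into a SIEM, but the principle is the same.

```python
# Minimal sketch: scan firewall log lines for denied connections and flag noisy
# sources. The log format and threshold are illustrative assumptions.

from collections import Counter

DENY_THRESHOLD = 3  # alert when a single source exceeds this many denies

log_lines = [
    "2024-05-01T10:00:01 fw1 DENY tcp 203.0.113.7:51514 -> 10.0.2.15:22",
    "2024-05-01T10:00:02 fw1 DENY tcp 203.0.113.7:51515 -> 10.0.2.15:22",
    "2024-05-01T10:00:03 fw1 ALLOW tcp 10.0.1.5:44321 -> 10.0.2.15:443",
    "2024-05-01T10:00:04 fw1 DENY tcp 203.0.113.7:51516 -> 10.0.2.15:3389",
    "2024-05-01T10:00:05 fw1 DENY tcp 203.0.113.7:51517 -> 10.0.2.15:23",
]

denies = Counter()
for line in log_lines:
    fields = line.split()
    if len(fields) >= 5 and fields[2] == "DENY":
        source_ip = fields[4].split(":")[0]
        denies[source_ip] += 1

for source, count in denies.items():
    if count > DENY_THRESHOLD:
        print(f"Possible scan or attack: {source} generated {count} denied connections")
```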
Enterprises need to look at the state of their firewall security and identify where holes might exist. By addressing these misconfiguration issues, organizations can quickly improve their overall security posture and dramatically reduce their risk of a breach.