High Availability Firewall

Norgegatan 2, SE-164 32 Kista, Tel: +46 (0)8 58 83 01 00, Fax: +46 (0)8 23 02 31, www.op5.com

High availability firewall
Creating a high availability firewall cluster with automated fail-over, state-synchronization and LSB
by Jonathan Petersson, jpetersson@op5.com
Copyright (C) op5 AB 2010
Last updated on Friday, September 17, 2010


Table of Contents

Introduction
Software
Disclaimers
Server installation
Upgrading to Squeeze
Software installation
Network
Layout
Configuration
IP-forwarding
Heartbeat
CRM
Custom OCF resources
conntrackd
iptables
Shared IPs
Conntrackd
Iptables
LSB-services
OpenVPN
IPsec
Final configuration
Monitoring
NRPE

Introduction
The purpose of this document is to outline how to set up two firewalls in a redundant active/passive configuration, utilizing open-source software in conjunction with custom heartbeat modules developed by op5 AB.

Software
The installation has been made in a virtual environment using the following software:

● Oracle® VirtualBox 3.2.8 r64453 (www.virtualbox.org)
● Debian Squeeze/sid with Linux 2.6.32-5 i586 (www.debian.org)

To create our cluster we'll be using:

● Heartbeat 3.0.3-2 (www.linux-ha.org)
● Pacemaker 1.0.9.1 (www.clusterlabs.org)
● Conntrack-tools 0.9.14-2 (conntrack-tools.netfilter.org)

In addition the following services will be used:

● OpenVPN 2.1.0-3 (www.openvpn.org)
● Strongswan 4.3.2-1.3 (www.strongswan.org)

Disclaimers
Please be aware that op5 AB does not accept any responsibility for any information, code or guidelines provided in this document. Please refer to each respective project for support and documentation. Notice that the majority of the configuration examples have been taken from the first node in the cluster; you will need to modify most IP and DNS parameters to make them work in your environment.

To utilize the information in this document you're expected to have two Debian Squeeze servers pre-installed with the software mentioned above and its respective dependencies. Installation of the server software will not be covered in detail, and configuration of the VPN services is left out entirely, as no special hooks into heartbeat are needed to run them.

Server installation
We won't cover the details of installing the server, but here are some general pointers. The Debian project has compiled a well-written manual of the Debian installation which is available at http://www.debian.org/releases/stable/installmanual. Unless you're planning to run any special software on the server, keep the installation as small as possible, as we want to minimize additional services running on the servers. As far as the installation of a high availability cluster goes there's really no difference in what architecture you use, as long as it's supported by the Squeeze release. Given that the current stable version of Debian is version 5 (Lenny) we'll briefly cover how to upgrade to version 6 (Squeeze), as this is a requirement to get the newer packages needed for this guide. If you want to use the stable version of Debian you'll have to compile the packages manually.

Upgrading to Squeeze
Once you've installed your two servers you need to make a few changes in the apt-repository sources. Start off by deleting or moving "/etc/apt/sources.list", then recreate it with the following content.

deb http://ftp.se.debian.org/debian/ squeeze main contrib non-free
deb-src http://ftp.se.debian.org/debian/ squeeze main contrib non-free
deb http://security.debian.org/ squeeze/updates main contrib non-free
deb-src http://security.debian.org/ squeeze/updates main contrib non-free

Once finished, run "aptitude update" to update your local repository database with the packages needed. When that has completed, run "aptitude safe-upgrade -y" and go through any questions asked by the system. When all of this is done you'll have two Squeeze servers ready for configuration.

Software installation
To make things easy we'll install all the necessary software directly from the Squeeze repository using apt-get.

deb-squeeze:~# apt-get install conntrackd iptables-persistent heartbeat pacemaker -y
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  cluster-agents cluster-glue fancontrol gawk libcluster-glue libcorosync4 libesmtp5
  libglib2.0-0 libglib2.0-data libheartbeat2 libltdl7 libnet1 libnetfilter-conntrack3
  libnspr4-0d libnss3-1d libopenhpi2 libopenipmi0 libperl5.10 libsensors4 libsnmp-base
  libsnmp15 libtimedate-perl libxml2-utils libxslt1.1 lm-sensors openhpid psmisc
  shared-mime-info
Suggested packages:
  snmp-mibs-downloader sensord read-edid i2c-tools
The following NEW packages will be installed:
  cluster-agents cluster-glue conntrackd fancontrol gawk heartbeat iptables-persistent
  libcluster-glue libcorosync4 libesmtp5 libglib2.0-0 libglib2.0-data libheartbeat2
  libltdl7 libnet1 libnetfilter-conntrack3 libnspr4-0d libnss3-1d libopenhpi2
  libopenipmi0 libperl5.10 libsensors4 libsnmp-base libsnmp15 libtimedate-perl
  libxml2-utils libxslt1.1 lm-sensors openhpid pacemaker psmisc shared-mime-info
0 upgraded, 32 newly installed, 0 to remove and 6 not upgraded.
Need to get 13.0MB of archives.
After this operation, 36.8MB of additional disk space will be used.

In addition to this you want to have ntpd installed and configured; time-drift will cause your cluster to fail, rendering it unusable. For details on how to configure OpenVPN and StrongSwan please refer to their respective web-sites.

OpenVPN: http://openvpn.net/index.php/open-source/documentation.html
StrongSwan: http://wiki.strongswan.org/projects/strongswan/wiki/UserDocumentation

Network

Layout
In this example our firewalls are equipped with 4 network interfaces, assigned as follows:

● eth0: WAN/External interface
● eth1: LAN/Internal interface
● eth2: Synchronization interface
● eth3: Management interface

Configuration
For the configuration we'll utilize the standard Debian network configuration present in "/etc/network/interfaces".

deb-fw1:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 130.131.132.131
    netmask 255.255.255.128

auto eth1
iface eth1 inet static
    address 120.121.122.2
    netmask 255.255.255.128

auto eth2
iface eth2 inet static
    address 130.131.132.253
    netmask 255.255.255.252

auto eth3
iface eth3 inet dhcp

deb-fw2:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 130.131.132.132
    netmask 255.255.255.128

auto eth1
iface eth1 inet static
    address 120.121.122.3
    netmask 255.255.255.128

auto eth2
iface eth2 inet static
    address 130.131.132.254
    netmask 255.255.255.252

auto eth3
iface eth3 inet dhcp

As described above each individual interface has its own purpose; you may want to divide them differently, or add redundancy with bonding or bridging. In our example we have a cross-over link for the synchronization on interface eth2. This is not necessary but highly recommended, to avoid the synchronization traffic being seen by other hosts on the network, which may put you at risk. Once you've configured the synchronization interface, verify that you can communicate between the two nodes.

deb-fw1:~# ping -c1 130.131.132.254
PING 130.131.132.254 (130.131.132.254) 56(84) bytes of data.
64 bytes from 130.131.132.254: icmp_req=1 ttl=64 time=1.48 ms

--- 130.131.132.254 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.485/1.485/1.485/0.000 ms

IP-forwarding
To allow network traffic to flow through the server, IP forwarding must be enabled. To enable it on startup, set it to 1 in /etc/sysctl.conf:

net.ipv4.ip_forward=1
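The persistent setting can be applied idempotently from a script. The sketch below operates on a temporary copy of the file so it is safe to run anywhere; on the firewalls themselves CONF would be /etc/sysctl.conf, and the commented sysctl line (which requires root) applies the change immediately without a reboot.

```shell
#!/bin/sh
# Ensure net.ipv4.ip_forward=1 is present exactly once in a sysctl
# file. CONF points at a temp copy here; use /etc/sysctl.conf on the
# real firewalls.
CONF=$(mktemp)
printf '# sample sysctl.conf\n#net.ipv4.ip_forward=0\n' > "$CONF"

# Append the setting only if an active (uncommented) line is missing
grep -q '^net.ipv4.ip_forward=1' "$CONF" || echo 'net.ipv4.ip_forward=1' >> "$CONF"

# Apply immediately at runtime (requires root):
# sysctl -w net.ipv4.ip_forward=1

grep '^net.ipv4.ip_forward' "$CONF"
```

Running the snippet twice leaves only one active line, since the grep guard skips the append when the setting is already there.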

Firewall
To ensure that the firewalls are properly locked down we'll utilize the iptables-script below, which sets up standard lockdown rules and opens up the necessary services such as heartbeat, conntrackd, OpenVPN and IPsec. You'll need to modify this for the services you find necessary in your environment. The script automatically collects IP-addresses from the node it's initiated from and sets peer-IPs in local variables where applicable. In addition to this, custom rules for hosts behind the firewall can be configured in separate files placed in the folder hosts. The script requires open communication between the two nodes; due to this you must create
SSH-keys without passphrases which can be used for the file-exchange.

deb-fw1:~# ssh-keygen -t dsa -N "" -f /root/.ssh/id_dsa && cat .ssh/id_dsa.pub >> .ssh/authorized_keys && scp -r .ssh 130.131.132.254:~/
Generating public/private dsa key pair.
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
82:e8:dc:92:1f:5a:0e:19:dc:f6:fc:16:60:9c:1c:25 [email protected]
The key's randomart image is:
+--[ DSA 1024]----+
| E.. |
| .. |
| o o |
| . o .* |
| + +...S |
| o * o .. |
| B + o . |
| B . .. |
| . o .. |
+-----------------+
[email protected]'s password:
id_dsa            100%  668   0.7KB/s   00:00
authorized_keys   100% 1432   1.4KB/s   00:00
id_dsa.pub        100%  610   0.6KB/s   00:00

Make sure that you’ve each nodes respective SSH host-keys present in known_hosts otherwise the script will fail upon synchronization. The script directory structure is as follows:

● /root/scripts/iptables
  ○ iptables.sh
  ○ hosts
    ■ ns.op5.se

Install "iptables-persistent" to load the rules on boot.

deb-fw1:~# apt-get install iptables-persistent
deb-fw1:~# update-rc.d iptables-persistent defaults

The script is later called by heartbeat upon node fail-over. Upon updates it also triggers the slave to update itself based on the data available at the master.

deb-fw1:~# cat /root/scripts/iptables/iptables.sh
#!/bin/bash
#
# iptables synchronization script for keeping the rulesets of two
# firewall nodes identical.
#
# Copyright (c) op5 AB, Jonathan Petersson <[email protected]>
# All Rights Reserved.
#
# This software has only been tested on Debian, modifications
# may be needed for other distributions and operating-systems.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of version 2 of the GNU General Public License as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it would be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Further, this software is distributed without any warranty that it is
# free of the rightful claim of any third person regarding infringement
# or the like. Any license provided herein, whether implied or
# otherwise, applies only to this software file. Patent licenses, if
# any, provided herein do not apply to combinations of this program with
# other software, or any other product whatsoever.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write the Free Software Foundation,
# Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA.
#

ETH0IP=`ip addr show dev eth0 | grep inet | head -n 1 | awk '{ print $2 }' | cut -d "/" -f1`
ETH1IP=`ip addr show dev eth1 | grep inet | head -n 1 | awk '{ print $2 }' | cut -d "/" -f1`
ETH2IP=`ip addr show dev eth2 | grep inet | head -n 1 | awk '{ print $2 }' | cut -d "/" -f1`

if [ $ETH0IP == "130.131.132.131" ]; then
    ETH0PEER="130.131.132.132"
else
    ETH0PEER="130.131.132.131"
fi

if [ $ETH1IP == "120.121.122.2" ]; then
    ETH1PEER="120.121.122.3"
else
    ETH1PEER="120.121.122.2"
fi

if [ $ETH2IP == "130.131.132.253" ]; then
    ETH2PEER="130.131.132.254"
else
    ETH2PEER="130.131.132.253"
fi

master() {
    IPTABLES="/sbin/iptables"

    $IPTABLES --flush
    $IPTABLES -t nat --flush
    $IPTABLES -P INPUT DROP
    $IPTABLES -P FORWARD DROP
    $IPTABLES -P OUTPUT ACCEPT

    ETH0SHARED='130.131.132.130'
    ETH1SHARED='120.121.122.1'
    INTNETS='10.0.123.0/24 172.27.76.0/24 172.27.86.0/24 192.168.1.0/24'
    EXTNETS='120.121.122.0/24 130.131.132.128/25'

    # Global input rules
    $IPTABLES -I INPUT -s 127.0.0.0/8 -d 127.0.0.0/8 -m state --state NEW -j ACCEPT
    $IPTABLES -I INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    $IPTABLES -I INPUT -p icmp --icmp-type echo-request -j ACCEPT

    # Local services
    $IPTABLES -I INPUT -p tcp --dport ssh -m state --state NEW -j ACCEPT
    $IPTABLES -I INPUT -p udp --dport snmp -d $ETH1IP -m state --state NEW -s 120.121.122.27 -j ACCEPT
    $IPTABLES -I INPUT -p tcp --dport 5666 -d $ETH1IP -m state --state NEW -s 120.121.122.27 -j ACCEPT

    # Shared services
    $IPTABLES -I INPUT -p tcp -d $ETH0SHARED --dport https -m state --state NEW -j ACCEPT

    ### Sync services
    # heartbeat
    $IPTABLES -I INPUT -p udp -s $ETH2PEER -d 224.0.10.100 --dport 694 -j ACCEPT
    # conntrackd
    $IPTABLES -I INPUT -p udp -s $ETH2PEER -d 225.0.0.50 --dport 3780 -j ACCEPT

    ### FORWARD rules
    $IPTABLES -I FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
    # icmp
    $IPTABLES -I FORWARD -p icmp --icmp-type time-exceeded -j ACCEPT
    $IPTABLES -I FORWARD -p icmp --icmp-type echo-reply -j ACCEPT
    $IPTABLES -I FORWARD -p icmp --icmp-type destination-unreachable -j ACCEPT
    $IPTABLES -I FORWARD -p icmp --icmp-type echo-request -j ACCEPT

    # Permit traffic between subnets
    for NET1 in $INTNETS; do
        for NET2 in $INTNETS; do
            if [ $NET1 != $NET2 ]; then
                $IPTABLES -I FORWARD -s $NET1 -d $NET2 -m state --state NEW -j ACCEPT
            fi
        done
        for NET3 in $EXTNETS; do
            $IPTABLES -I FORWARD -s $NET1 -d $NET3 -m state --state NEW -j ACCEPT
        done
    done

    ### NAT rules
    for NET in $INTNETS; do
        $IPTABLES -t nat -A POSTROUTING -o eth1 -s $NET -j SNAT --to-source $ETH0SHARED
    done

    for dr in `ls /root/scripts/iptables/hosts/ | grep -v '~'`
    do
        source /root/scripts/iptables/hosts/$dr
    done

    ssh $ETH2PEER 'bash /root/scripts/iptables/iptables.sh slave'

    writeRules
    exit 0
}

slave() {
    diff <(iptables-save | grep -v -E \[[0-9]+:[0-9]+\] | egrep -v "(#)|($ETH0IP)|($ETH1IP)|($ETH2IP)|($ETH0PEER)|($ETH1PEER)|($ETH2PEER)|(\*)" | sort) <(ssh $ETH2PEER 'cat /etc/iptables/rules' | grep -v -E \[[0-9]+:[0-9]+\] | egrep -v "(#)|($ETH0IP)|($ETH1IP)|($ETH2IP)|($ETH0PEER)|($ETH1PEER)|($ETH2PEER)|(\*)" | sort)
    if [ $? -eq 1 ]; then
        scp -r $ETH2PEER:/root/scripts/iptables /root/scripts/
        master
    fi

    writeRules
    exit 0
}

writeRules() {
    diff <(iptables-save | grep -v -E \[[0-9]+:[0-9]+\] | grep -v "#") <(cat /etc/iptables/rules | grep -v -E \[[0-9]+:[0-9]+\] | grep -v "#")
    if [ $? -eq 1 ]; then
        rm /etc/iptables/rules
        iptables-save > /etc/iptables/rules
    fi
}

usage() {
    echo "$0 {master|slave}"
}

case "$1" in
    master) master;;
    slave) slave;;
    *) usage
       exit 1 ;;
esac

To set rules for a specific host, create a file with the applicable rules in the hosts folder.

deb-fw1:~# cat /root/scripts/iptables/hosts/ns.example.com
$IPTABLES -I FORWARD -p tcp --dport 53 -d 120.121.122.2 -m state --state NEW -j ACCEPT
$IPTABLES -I FORWARD -p udp --dport 53 -d 120.121.122.2 -m state --state NEW -j ACCEPT
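Both slave() and writeRules() above hinge on diff's exit status: 0 when the two rule listings match, 1 when they differ. A standalone sketch of that decision, using plain temp files in place of the iptables-save output and /etc/iptables/rules:

```shell
#!/bin/sh
# Demonstrates the diff-driven update used by writeRules(): the saved
# ruleset is only rewritten when it differs from the live one.
LIVE=$(mktemp)   # stands in for `iptables-save` output
SAVED=$(mktemp)  # stands in for /etc/iptables/rules
printf -- '-A INPUT -p tcp --dport 22 -j ACCEPT\n' > "$LIVE"
printf -- '-A INPUT -p tcp --dport 22 -j ACCEPT\n' > "$SAVED"

diff "$LIVE" "$SAVED" > /dev/null
if [ $? -eq 1 ]; then
    cp "$LIVE" "$SAVED"      # rulesets differ: refresh the saved copy
    echo "rules updated"
else
    echo "rules unchanged"   # identical: leave the saved copy alone
fi
```

Note that diff reserves exit status 2 for errors (e.g. a missing file), which the real script silently treats the same as "no difference"; checking for that case explicitly would make the sync more robust.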

This is the final ruleset pulled from the first node in the cluster. The slave should look identical with the exception of its peer-IPs.

deb-fw1:~# cat /etc/iptables/rules
# Generated by iptables-save v1.4.8 on Thu Sep 9 14:18:26 2010
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [106:17718]
-A INPUT -s 130.131.132.254/32 -d 225.0.0.50/32 -p udp -m udp --dport 3780 -j ACCEPT
-A INPUT -s 130.131.132.254/32 -d 224.0.10.100/32 -p udp -m udp --dport 694 -j ACCEPT
-A INPUT -d 130.131.132.130/32 -p udp -m udp --dport 5002 -j ACCEPT
-A INPUT -d 130.131.132.130/32 -p udp -m udp --dport 5001 -j ACCEPT
-A INPUT -d 130.131.132.130/32 -p udp -m udp --dport 1194 -j ACCEPT
-A INPUT -d 130.131.132.130/32 -p tcp -m tcp --dport 443 -m state --state NEW -j ACCEPT
-A INPUT -s 120.121.122.27/32 -d 120.121.122.2/32 -p tcp -m tcp --dport 5666 -m state --state NEW -j ACCEPT
-A INPUT -s 120.121.122.27/32 -d 120.121.122.2/32 -p udp -m udp --dport 161 -m state --state NEW -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -s 127.0.0.0/8 -d 127.0.0.0/8 -m state --state NEW -j ACCEPT
-A FORWARD -d 120.121.122.2/32 -p udp -m udp --dport 53 -m state --state NEW -j ACCEPT
-A FORWARD -d 120.121.122.2/32 -p tcp -m tcp --dport 53 -m state --state NEW -j ACCEPT
COMMIT
# Completed on Thu Sep 9 14:18:26 2010
# Generated by iptables-save v1.4.8 on Thu Sep 9 14:18:26 2010
*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A POSTROUTING -s 10.0.123.0/24 -o eth1 -j SNAT --to-source 130.131.132.130
-A POSTROUTING -s 172.27.76.0/24 -o eth1 -j SNAT --to-source 130.131.132.130
-A POSTROUTING -s 172.27.86.0/24 -o eth1 -j SNAT --to-source 130.131.132.130
-A POSTROUTING -s 192.168.1.0/24 -o eth1 -j SNAT --to-source 130.131.132.130
COMMIT
# Completed on Thu Sep 9 14:18:26 2010

Conntrack
The conntrack configuration is a plain copy of the FTFW example packaged with conntrackd, modified with the appropriate peer-addresses. Notice that you'll need to update these to the IPs you're planning to use. Make sure that you populate "Address Ignore" with the fixed IPs of the firewalls, as there is no use in synchronizing state for these IPs. It's essential that you do not put the shared IPs in the ignore-section, as any traffic being NAT'ed over these will lose its state upon fail-over. If you're running OpenVPN or a similar service you want to add the tunnel IP to this section as well. Notice that the UDP and ICMP support is relatively new and may be unstable; remove support for these if you notice any issues.

deb-fw1:~# cat /etc/conntrackd/conntrackd.conf
Sync {
    Mode FTFW {
    }

    Multicast {
        IPv4_address 225.0.0.50
        Group 3780
        IPv4_interface 130.131.132.253
        Interface eth2
        SndSocketBuffer 1249280
        RcvSocketBuffer 1249280
        Checksum on
    }
}

General {
    Nice -20
    HashSize 32768
    HashLimit 131072
    LogFile on
    LockFile /var/lock/conntrack.lock

    UNIX {
        Path /var/run/conntrackd.ctl
        Backlog 20
    }

    NetlinkBufferSize 2097152
    NetlinkBufferSizeMaxGrowth 8388608

    Filter From Userspace {
        Protocol Accept {
            TCP
            UDP
            ICMP
        }

        Address Ignore {
            IPv4_address 127.0.0.1       # loopback
            IPv4_address 130.131.132.132 # WAN interface
            IPv4_address 120.121.122.3   # LAN interface
            IPv4_address 130.131.132.254 # Sync interface
        }
    }
}

Heartbeat
Heartbeat is configured to use multicast on the synchronization interface to communicate with its neighbor node. Heartbeat relies on the node-names being resolvable; it is therefore recommended to set static entries in /etc/hosts in case the DNS-server for some reason becomes unavailable, and this will also decrease the lookup-time.

deb-fw1:~# cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 deb-fw1.example.com deb-fw1

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

130.131.132.253 deb-fw1.example.com deb-fw1
130.131.132.254 deb-fw2.example.com deb-fw2

Heartbeat will communicate using multicast over UDP on port 694 with destination 224.0.10.100, which we opened up in the firewall. If you have multiple clusters within the same broadcast network, make sure to differentiate the multicast address and port used, so that you don't get conflicting communication making your systems unreliable; using a dedicated link eliminates this potential issue. For this setup we've enabled debug mode to be able to troubleshoot the setup. In a live environment you want to have this turned off, as it generates a lot of log-data; if it's essential for you to keep this information, consider sending it to a syslog server. We've used the recommended keepalive, warntime, deadtime and initdead values; you may want to modify these depending on the connectivity between your two nodes to avoid split-brain situations.

deb-fw1:~# cat /etc/ha.d/ha.cf
crm respawn
debug 1
use_logd false
logfacility daemon
mcast eth2 224.0.10.100 694 1 0
node deb-fw1.example.com
node deb-fw2.example.com
autojoin none
udpport 694
keepalive 1
warntime 5
deadtime 10
initdead 20
debugfile /var/log/ha-debug

In addition to this we must configure the authentication-keys used between the two nodes. Use the snippet below to generate the file, making sure to replace "yoursecret" with the string you want to use. Copy this file to the secondary node, as the key needs to match on both ends.

echo "auth 1
1 sha1 `echo 'yoursecret' | sha1sum | awk '{ print $1 }'`" >> /etc/ha.d/authkeys ; chmod 600 /etc/ha.d/authkeys

Confirm that the content of the file is correct.

deb-fw1:~# cat /etc/ha.d/authkeys
auth 1
1 sha1 ee60909613cba07967d32e602dd98641f21fd111
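The same key construction can be exercised safely before touching the real file; a small sketch where "yoursecret" is the placeholder from the text and a temp path stands in for /etc/ha.d/authkeys:

```shell
#!/bin/sh
# Build an authkeys-style file the same way as above, into a temp file.
KEYFILE=$(mktemp)
SECRET='yoursecret'   # replace with your own secret
DIGEST=$(echo "$SECRET" | sha1sum | awk '{ print $1 }')
printf 'auth 1\n1 sha1 %s\n' "$DIGEST" > "$KEYFILE"
chmod 600 "$KEYFILE"  # heartbeat refuses an authkeys file readable by others
cat "$KEYFILE"
```

The digest is a 40-character hexadecimal SHA-1 of the secret; the "auth 1" line selects which numbered key below it is active.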

Once you’re finished you can go ahead and start heartbeat. deb-fw1:~# /etc/init.d/heartbeat startStarting High-Availability services: Done.

If you encounter any issues have a look in /var/log/ha-debug for pointers.

CRM
Once heartbeat is started you'll be able to interact with it using crm, a CLI-tool used to manage your nodes and resources. Notice that it may take a while before you can interact with heartbeat using the crm command. First off we want to disable stonith; you want to have this configured eventually, but we'll not cover its configuration in this document, and during setup it's beneficial to have it turned off to ease any potential troubleshooting.

deb-fw1:~# crm configure property stonith-enabled false

Once disabled, have a quick look at the configuration; you should see the same data on both nodes if they've established a connection successfully.

deb-fw1:~# crm configure show

node $id="56654cb6-9f5b-442d-9367-f8dc4136c6e4" deb-fw2.example.com \
    attributes standby="off"
node $id="e2dbecb7-3568-4c34-a000-87dcf4df82d2" deb-fw1.example.com \
    attributes standby="off"
property $id="cib-bootstrap-options" \
    dc-version="1.0.9-74392a28b7f31d7ddc86689598bd23114f58978b" \
    cluster-infrastructure="Heartbeat" \
    stonith-enabled="false"

Now let's have a look at the status of the nodes.

deb-fw1:~/scripts# crm_mon -1
============
Last updated: Thu Sep 9 16:49:36 2010
Stack: Heartbeat
Current DC: deb-fw1.example.com (e2dbecb7-3568-4c34-a000-87dcf4df82d2) - partition with quorum
Version: 1.0.9-74392a28b7f31d7ddc86689598bd23114f58978b
2 Nodes configured, unknown expected votes
0 Resources configured.
============

Online: [ deb-fw2.example.com deb-fw1.example.com ]

Custom OCF resources
Prior to configuring heartbeat resources we need to add support for conntrackd and our iptables-script. These are handled using OCF resources. There are multiple modules pre-installed for other types of services, and in addition to these, LSB resources can be used as well. LSB is used to call regular init.d-scripts; this requires that the script can handle start, stop and status. As conntrackd and iptables are services that should run at all times, they have to be handled a bit differently, since we don't want to simply turn them on or off upon fail-over.
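For reference, the start/stop/status contract an init.d-script must honor for heartbeat's LSB resource class can be sketched as a few shell functions; "myservice" and the pid-file path are illustrative placeholders, not part of the original setup:

```shell
#!/bin/sh
# Minimal shape of an init script heartbeat can drive as an LSB
# resource: start, stop, and a status action that exits 0 when the
# service is running and non-zero when it is not.
PIDFILE=$(mktemp -u)   # -u: generate a path without creating the file

myservice() {
    case "$1" in
        start)  echo $$ > "$PIDFILE" ;;   # record a pid as "running"
        stop)   rm -f "$PIDFILE" ;;
        status) [ -f "$PIDFILE" ] ;;      # exit 0 = running
        *)      echo "usage: myservice {start|stop|status}" >&2
                return 1 ;;
    esac
}

myservice start
myservice status && echo "running"
myservice stop
myservice status || echo "stopped"
```

Heartbeat polls the status action to decide whether a resource needs recovery, which is exactly why an always-on service like conntrackd cannot simply be wrapped this way.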

conntrackd
Create a file called conntrackd in /usr/lib/ocf/resource.d/ and populate it with the following script.

#!/bin/sh
#
# Conntrackd for initiating active/backup state sync between two nodes
# code based on Dummy template and primary-backup.sh
#
# Copyright (C) 2010 op5 AB, Jonathan Petersson <[email protected]>
# All Rights Reserved.
# Copyright (C) 2008 by Pablo Neira Ayuso <[email protected]>
# All Rights Reserved.
# Copyright (C) 2004 SUSE LINUX AG, Lars Marowsky-Brée
# All Rights Reserved.
#
# Disclaimer:
# This software has only been tested on Debian Lenny, modifications
# may be needed for other distributions and operating-systems.
#
# Conntrackd will get started automatically if it's not already
# running. However there's no active error-handling for startup errors,
# please refer to conntrackd's regular error-logs for troubleshooting.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of version 2 of the GNU General Public License as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it would be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Further, this software is distributed without any warranty that it is
# free of the rightful claim of any third person regarding infringement
# or the like. Any license provided herein, whether implied or
# otherwise, applies only to this software file. Patent licenses, if
# any, provided herein do not apply to combinations of this program with
# other software, or any other product whatsoever.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write the Free Software Foundation,
# Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA.
#

#######################################################################
# Initialization:

: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/resource.d/heartbeat}
. ${OCF_FUNCTIONS_DIR}/.ocf-shellfuncs

#######################################################################

# Fill in some defaults if no values are specified
OCF_RESKEY_bin_default="/usr/sbin/conntrackd"
OCF_RESKEY_cfg_default="/etc/conntrackd/conntrackd.conf"
OCF_RESKEY_lck_default="/var/lock/conntrack.lock"

: ${OCF_RESKEY_bin=${OCF_RESKEY_bin_default}}
: ${OCF_RESKEY_cfg=${OCF_RESKEY_cfg_default}}
: ${OCF_RESKEY_lck=${OCF_RESKEY_lck_default}}

# Defined after the defaults above so the parameters are always populated
CONNTRACKD="${OCF_RESKEY_bin} -C ${OCF_RESKEY_cfg}"

meta_data() {
    cat <<END
<?xml version="1.0"?>
<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
<resource-agent name="Conntrackd" version="0.5">
<version>1.0</version>

<longdesc lang="en">
This is a Conntrackd resource to manage primary and secondary state
between two firewalls in a cluster.
</longdesc>
<shortdesc lang="en">Manages primary/backup conntrackd state</shortdesc>

<parameters>
<parameter name="bin" unique="1">
<longdesc lang="en">Location of conntrackd binary.</longdesc>
<shortdesc lang="en">Conntrackd bin</shortdesc>
<content type="string" default="/usr/sbin/conntrackd"/>
</parameter>

<parameter name="cfg" unique="1">
<longdesc lang="en">Location of conntrackd configuration file.</longdesc>
<shortdesc lang="en">Conntrackd config</shortdesc>
<content type="string" default="/etc/conntrackd/conntrackd.conf"/>
</parameter>

<parameter name="lck" unique="1">
<longdesc lang="en">Location of conntrackd lock-file.</longdesc>
<shortdesc lang="en">Conntrackd lock-file</shortdesc>
<content type="string" default="/var/lock/conntrack.lock"/>
</parameter>
</parameters>

<actions>
<action name="start" timeout="20" />
<action name="stop" timeout="20" />
<action name="monitor" timeout="20" interval="10" depth="0" />
<action name="reload" timeout="20" />
<action name="migrate_to" timeout="20" />
<action name="migrate_from" timeout="20" />
<action name="meta-data" timeout="5" />
<action name="validate-all" timeout="20" />
</actions>
</resource-agent>
END
}

#######################################################################

conntrackd_usage() {
    cat <<END
usage: $0 {start|stop|monitor|migrate_to|migrate_from|validate-all|meta-data}

Expects to have a fully populated OCF RA-compliant environment set.
END
}

conntrackd_start() {
    # Call monitor to verify that conntrackd is running
    conntrackd_monitor
    if [ $? = $OCF_SUCCESS ]; then
        # commit the external cache into the kernel table
        $CONNTRACKD -c
        if [ $? -eq 1 ]; then
            return $OCF_ERR_GENERIC
        fi

        # flush the internal and the external caches
        $CONNTRACKD -f
        if [ $? -eq 1 ]; then
            return $OCF_ERR_GENERIC
        fi

        # resynchronize my internal cache to the kernel table
        $CONNTRACKD -R
        if [ $? -eq 1 ]; then
            return $OCF_ERR_GENERIC
        fi

        # send a bulk update to backups
        $CONNTRACKD -B
        if [ $? -eq 1 ]; then
            return $OCF_ERR_GENERIC
        fi

        return $OCF_SUCCESS
    fi
}

conntrackd_stop() {
    # Call monitor to verify that conntrackd is running
    conntrackd_monitor
    if [ $? = $OCF_SUCCESS ]; then
        # shorten kernel conntrack timers to remove the zombie entries.
        $CONNTRACKD -t
        if [ $? -eq 1 ]; then
            return $OCF_ERR_GENERIC
        fi

        # request resynchronization with the master firewall replica
        $CONNTRACKD -n
        if [ $? -eq 1 ]; then
            return $OCF_ERR_GENERIC
        fi
    fi
    return $OCF_SUCCESS
}

conntrackd_monitor() {
    # Define conntrackd_pid variable
    local conntrackd_pid=`pidof ${OCF_RESKEY_bin}`

    # Check for conntrackd lock-file
    if [ -f $OCF_RESKEY_lck ]; then
        # Check for conntrackd pid
        if [ $conntrackd_pid ]; then
            # Successful if lock and pid exist
            return $OCF_SUCCESS
        else
            # Error if the lock exists but the pid isn't running
            return $OCF_ERR_GENERIC
        fi
    else
        # Not running if lock and pid are missing,
        # start the conntrackd daemon
        $CONNTRACKD -d
        return $OCF_NOT_RUNNING
    fi
}

conntrackd_validate() {
    # Check if conntrackd binary exists
    check_binary ${OCF_RESKEY_bin}
    if [ $? != 0 ]; then
        return $OCF_ERR_ARGS
    fi

    # Check if conntrackd config exists
    if [ ! -f ${OCF_RESKEY_cfg} ]; then
        return $OCF_ERR_ARGS
    fi

    return $OCF_SUCCESS
}

case $__OCF_ACTION in
meta-data)      meta_data
                exit $OCF_SUCCESS
                ;;
start)          conntrackd_start;;
stop)           conntrackd_stop;;
monitor)        conntrackd_monitor;;
migrate_to)     ocf_log info "Migrating ${OCF_RESOURCE_INSTANCE} to ${OCF_RESKEY_CRM_meta_migrate_to}."
                conntrackd_stop
                ;;
migrate_from)   ocf_log info "Migrating ${OCF_RESOURCE_INSTANCE} from ${OCF_RESKEY_CRM_meta_migrated_from}."
                conntrackd_start
                ;;
reload)         ocf_log err "Reloading..."
                conntrackd_start
                ;;
validate-all)   conntrackd_validate;;
usage|help)     conntrackd_usage
                exit $OCF_SUCCESS
                ;;
*)              conntrackd_usage
                exit $OCF_ERR_UNIMPLEMENTED
                ;;
esac
rc=$?
ocf_log debug "${OCF_RESOURCE_INSTANCE} $__OCF_ACTION : $rc"
exit $rc

iptablesCreate a file called iptables in /usr/lib/ocf/resource.d/ and populate it with the following script. #!/bin/sh## iptables sync wrapper## Copyright (C) 2010 op5 AB, Jonathan Petersson <[email protected]># All Rights Reserved.# Copyright (C) 2004 SUSE LINUX AG, Lars Marowsky-Brée# All Rights Reserved.## Disclamer:## This program is free software; you can redistribute it and/or modify# it under the terms of version 2 of the GNU General Public License as# published by the Free Software Foundation.## This program is distributed in the hope that it would be useful, but# WITHOUT ANY WARRANTY; without even the implied warranty of# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.## Further, this software is distributed without any warranty that it is# free of the rightful claim of any third person regarding infringement# or the like. Any license provided herein, whether implied or# otherwise, applies only to this software file. Patent licenses, if# any, provided herein do not apply to combinations of this program with# other software, or any other product whatsoever.## You should have received a copy of the GNU General Public License# along with this program; if not, write the Free Software Foundation,# Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA.# ######################################################################## Initialization: : ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/resource.d/heartbeat}. ${OCF_FUNCTIONS_DIR}/.ocf-shellfuncs ####################################################################### # Fill in some defaults if no values are specifiedOCF_RESKEY_bin_default="/root/scripts/iptables/iptables.sh"

Page 19: High Availability Firewall

: ${OCF_RESKEY_bin=${OCF_RESKEY_bin_default}}

meta_data() {
	cat <<END
<?xml version="1.0"?>
<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
<resource-agent name="iptables" version="0.5">
<version>1.0</version>

<longdesc lang="en">
This is an iptables resource to manage synchronization between two nodes
</longdesc>
<shortdesc lang="en">Manages iptables sync</shortdesc>

<parameters>
<parameter name="bin" unique="1">
<longdesc lang="en">Location of iptables binary.</longdesc>
<shortdesc lang="en">iptables bin</shortdesc>
<content type="string" default="/root/scripts/iptables/iptables.sh"/>
</parameter>
</parameters>

<actions>
<action name="start" timeout="20" />
<action name="stop" timeout="20" />
<action name="monitor" timeout="20" interval="10" depth="0" />
<action name="reload" timeout="20" />
<action name="migrate_to" timeout="20" />
<action name="migrate_from" timeout="20" />
<action name="meta-data" timeout="5" />
<action name="validate-all" timeout="20" />
</actions>
</resource-agent>
END
}

#######################################################################

iptables_usage() {
	cat <<END
usage: $0 {start|stop|monitor|migrate_to|migrate_from|validate-all|meta-data}

Expects to have a fully populated OCF RA-compliant environment set.
END
}

iptables_start() {
	bash $OCF_RESKEY_bin slave
	if [ $? -eq 0 ]; then
		return $OCF_SUCCESS
	else
		return $OCF_ERR_GENERIC
	fi
}

iptables_stop() {
	bash $OCF_RESKEY_bin master
	if [ $? -eq 0 ]; then
		return $OCF_SUCCESS
	else
		return $OCF_ERR_GENERIC
	fi
}

iptables_monitor() {
	return $OCF_SUCCESS
}

iptables_validate() {
	# Check if the iptables script exists
	check_binary ${OCF_RESKEY_bin}
	if [ $? != 0 ]; then
		return $OCF_ERR_ARGS
	fi
	return $OCF_SUCCESS
}

case $__OCF_ACTION in
meta-data)	meta_data
		exit $OCF_SUCCESS
		;;
start)		iptables_start;;
stop)		iptables_stop;;
monitor)	iptables_monitor;;
migrate_to)	ocf_log info "Migrating ${OCF_RESOURCE_INSTANCE} to ${OCF_RESKEY_CRM_meta_migrate_to}."
		iptables_stop
		;;
migrate_from)	ocf_log info "Migrating ${OCF_RESOURCE_INSTANCE} from ${OCF_RESKEY_CRM_meta_migrated_from}."
		iptables_start
		;;
reload)		ocf_log err "Reloading..."
		iptables_start
		;;
validate-all)	iptables_validate;;
usage|help)	iptables_usage
		exit $OCF_SUCCESS
		;;
*)		iptables_usage
		exit $OCF_ERR_UNIMPLEMENTED
		;;
esac
rc=$?
ocf_log debug "${OCF_RESOURCE_INSTANCE} $__OCF_ACTION : $rc"
exit $rc

Once you’ve added both scripts to your resource directory, restart heartbeat on both nodes to make them available.
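The copy-and-restart step can be sketched as a small helper. The source and destination directories below are assumptions; adjust them to wherever you keep the scripts and to your distribution's OCF resource directory.

```shell
#!/bin/sh
# Sketch: install the two custom agents, then restart heartbeat so the
# CRM picks them up. Paths passed to install_agents are assumptions.

install_agents() {
	src=$1 dst=$2
	for ra in conntrackd iptables; do
		# copy each agent into place and mark it executable
		install -m 0755 "$src/$ra" "$dst/$ra" || return 1
	done
}

# On each node, something like:
#   install_agents /root/agents /usr/lib/ocf/resource.d
#   /etc/init.d/heartbeat restart
```

Run the helper on both nodes before restarting heartbeat.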

Shared IPs

The essence of an HA environment is shared resources. The primary resources we’ll share are the routable IPs, which will also serve as service IPs for VPN. In this setup we have two of them: one for the WAN side and one for the LAN side. We’ll configure them using the crm CLI tool. Notice that you can launch crm by itself and run “help” to get a list of available commands.

deb-fw1:~# crm
crm(live)# help


This is the CRM command line interface program. Available commands:

cib              manage shadow CIBs
resource         resources management
node             nodes management
options          user preferences
configure        CRM cluster configuration
ra               resource agents information center
status           show cluster status
quit,bye,exit    exit the program
help             show help
end,cd,up        go back one level

crm(live)#

When setting the IPs it’s important that you have the NICs connected identically on both firewalls, since heartbeat is told to allocate a certain IP to a certain NIC. Assigning the IP to the wrong NIC will leave you with a broken installation. If you have multiple networks or NICs you can add these the same way.

deb-fw1:~# crm configure primitive ExtIP ocf:heartbeat:IPaddr2 \
    params ip="130.131.132.130" cidr_netmask="25" nic="eth0" \
    op monitor interval="30s"
deb-fw1:~# crm configure primitive IntIP ocf:heartbeat:IPaddr2 \
    params ip="120.121.122.1" cidr_netmask="25" nic="eth1" \
    op monitor interval="30s"

In addition to adding the IPs we want to group them to make sure that they’re both present on the same node. Unless this is done there’s a risk that the two nodes take one IP each.

deb-fw1:~# crm configure group IPs ExtIP IntIP \
    meta target-role="Started"

Further, we want to assign the resource to a primary node; in our case we want deb-fw1 to be the primary node.

deb-fw1:~# crm resource migrate IPs deb-fw1.example.com

If everything is configured correctly deb-fw1 should now own the shared IPs on NIC eth0 and eth1. Verify this by checking that the resource is started with crm_mon.

deb-fw1:~# crm_mon -1
============
Last updated: Fri Sep 10 13:21:26 2010
Stack: Heartbeat
Current DC: deb-fw2.example.com (56654cb6-9f5b-442d-9367-f8dc4136c6e4) - partition with quorum
Version: 1.0.9-74392a28b7f31d7ddc86689598bd23114f58978b
2 Nodes configured, unknown expected votes
2 Resources configured.
============


Online: [ deb-fw2.example.com deb-fw1.example.com ]

 Resource Group: IPs
     ExtIP      (ocf::heartbeat:IPaddr2):       Started deb-fw1.example.com
     IntIP      (ocf::heartbeat:IPaddr2):       Started deb-fw1.example.com

Verify that the IPs have been set.

deb-fw1:~# ip addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:a7:25:0f brd ff:ff:ff:ff:ff:ff
    inet 130.131.132.131/25 brd 130.131.132.255 scope global eth0
    inet 130.131.132.130/25 brd 130.131.132.255 scope global secondary eth0
    inet6 fe80::a00:27ff:fea7:250f/64 scope link
       valid_lft forever preferred_lft forever
deb-fw1:~# ip addr show dev eth1
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:8b:a1:3a brd ff:ff:ff:ff:ff:ff
    inet 120.121.122.2/25 brd 120.121.122.127 scope global eth1
    inet 120.121.122.1/25 brd 120.121.122.127 scope global secondary eth1
    inet6 fe80::a00:27ff:fe8b:a13a/64 scope link
       valid_lft forever preferred_lft forever

Conntrackd

To ensure that the firewall state is properly synced on fail-over we’ll use the custom OCF resource we added above. If your system doesn’t have the binaries or configuration files in their default locations, you can override these by adding parameters to the resource.

deb-fw1:~# crm configure primitive conntrackd ocf:heartbeat:conntrackd \
    params bin="/usr/sbin/conntrackd" cfg="/etc/conntrackd/conntrackd.conf" \
    lck="/var/lock/conntrack.lock" \
    op monitor interval="30s"

If conntrackd is available in the standard locations used by Debian you can simply add the resource the following way.

deb-fw1:~# crm configure primitive conntrackd ocf:heartbeat:conntrackd \
    op monitor interval="30s"

Iptables

To keep the firewall rules synchronized we’ll use our custom iptables OCF resource, which triggers the iptables script to activate and sync all rules. As a node takes ownership of the iptables resource it will make sure that all rules are consistent on both ends. Notice that you need to run “bash iptables.sh master” after modifying any rules to apply them and sync between the servers; otherwise the rules will get wiped upon fail-over.
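The wrapper at /root/scripts/iptables/iptables.sh isn’t reproduced in this section; as a purely hypothetical sketch of the master/slave behaviour described above, assuming the rule set lives in /etc/iptables/rules and the peer is reachable over SSH:

```shell
#!/bin/sh
# Hypothetical sketch of iptables.sh's two modes (the real script may
# differ): "master" snapshots the active rules and pushes them to the
# peer; "slave" loads the last-synced rule set. RULES and PEER are
# assumptions.

RULES=${RULES:-/etc/iptables/rules}
PEER=${PEER:-deb-fw2.example.com}

iptables_sync() {
	case $1 in
	master)
		iptables-save > "$RULES"          # snapshot the active rule set
		scp "$RULES" "root@$PEER:$RULES"  # push rules to the passive node
		;;
	slave)
		iptables-restore < "$RULES"       # load the last-synced rules
		;;
	*)
		echo "usage: iptables.sh {master|slave}" >&2
		return 1
		;;
	esac
}

# The OCF agent would invoke this as "bash iptables.sh master" (stop)
# or "bash iptables.sh slave" (start):
# iptables_sync "$1"
```

Writing the snapshot to /etc/iptables/rules also keeps the rule set in the place iptables-persistent loads from at boot.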


Upon adding the resource we’ll set the state parameter to “master”.

deb-fw1:~# crm configure primitive iptables ocf:heartbeat:iptables \
    params state="master" \
    op monitor interval="30s"

In addition to this we’ve installed iptables-persistent, which loads the rules upon boot based on the rule set written to “/etc/iptables/rules” by the script.

LSB-services

There are a couple of services we only want active on the active node; in our case these are OpenVPN and IPsec. Because of this there’s no major need for OCF resources, and we’ll simply call them using LSB, which will trigger the init.d scripts for each respective service. It’s important to remember to have the services turned off during startup so they won’t cause interference once heartbeat tries to start them.

deb-fw1:~# update-rc.d -f openvpn remove
deb-fw1:~# update-rc.d -f ipsec remove

OpenVPN

Once you’ve configured OpenVPN with the appropriate tunnels, we’ll call it with heartbeat using LSB. Once a node takes ownership of the service it will automatically get started using the init.d scripts.

deb-fw1:~# crm configure primitive openvpn lsb:openvpn \
    op monitor interval="30s"

It’s important that the OpenVPN service isn’t started prior to setting the shared IPs used by OpenVPN. If this happens the service won’t start and you’ll be left with a broken VPN. To ensure that OpenVPN isn’t started before the IPs are assigned, set the following order rule.

deb-fw1:~# crm configure order IP_before_openvpn inf: IPs openvpn

IPsec

IPsec is configured in the same way as OpenVPN, utilizing LSB.

deb-fw1:~# crm configure primitive ipsec lsb:ipsec \
    op monitor interval="30s"

Like OpenVPN, it’s important to ensure that IPsec isn’t started until the IPs have been assigned on the node it is to run on.

deb-fw1:~# crm configure order IP_before_ipsec inf: IPs ipsec

Final configuration

node $id="56654cb6-9f5b-442d-9367-f8dc4136c6e4" deb-fw2.example.com \
	attributes standby="off"
node $id="e2dbecb7-3568-4c34-a000-87dcf4df82d2" deb-fw1.example.com \
	attributes standby="off"
primitive ExtIP ocf:heartbeat:IPaddr2 \
	params ip="130.131.132.130" cidr_netmask="25" nic="eth0" \
	op monitor interval="30s"
primitive IntIP ocf:heartbeat:IPaddr2 \
	params ip="120.121.122.1" cidr_netmask="25" nic="eth1" \
	op monitor interval="30s"
primitive conntrackd ocf:heartbeat:conntrackd \
	op monitor interval="30s"
primitive ipsec lsb:ipsec \
	op monitor interval="30s" \
	meta target-role="Started"
primitive iptables ocf:heartbeat:iptables \
	op monitor interval="30s" \
	meta target-role="Started"
primitive openvpn lsb:openvpn \
	op monitor interval="30s" \
	meta target-role="Started"
group IPs ExtIP IntIP \
	meta target-role="Started"
location cli-prefer-IPs IPs \
	rule $id="cli-prefer-rule-IPs" inf: #uname eq deb-fw1.example.com
location cli-prefer-conntrackd conntrackd \
	rule $id="cli-prefer-rule-conntrackd" inf: #uname eq deb-fw1.example.com
location cli-prefer-ipsec ipsec \
	rule $id="cli-prefer-rule-ipsec" inf: #uname eq deb-fw1.example.com
location cli-prefer-iptables iptables \
	rule $id="cli-prefer-rule-iptables" inf: #uname eq deb-fw1.example.com
location cli-prefer-openvpn openvpn \
	rule $id="cli-prefer-rule-openvpn" inf: #uname eq deb-fw1.example.com
order IP_before_ipsec inf: IPs ipsec
order IP_before_openvpn inf: IPs openvpn
property $id="cib-bootstrap-options" \
	dc-version="1.0.9-74392a28b7f31d7ddc86689598bd23114f58978b" \
	cluster-infrastructure="Heartbeat" \
	stonith-enabled="false"

Monitoring

As we want to monitor the services running our installation, we’ll briefly cover some available options. It’s expected that you have a Nagios or op5 Monitor installation which allows for NRPE checks. We won’t cover the installation or configuration of the monitoring software; please refer to each respective project for details on this.


NRPE

NRPE allows the monitoring server to trigger a binary to execute on the monitored server, responding with the current status of the system. In our case we want to make sure that all daemons we’re relying on are running properly. First we’ll install NRPE.

deb-fw1:~# apt-get install nagios-nrpe-server nagios-nrpe-plugin -y

We’ll need to open up NRPE to allow the monitoring host: modify “allowed_hosts” in /etc/nrpe.conf and add the IP. Remember to modify the firewall script to allow port 5666/TCP. Further, we’ll add some proc-checks to the configuration.

command[proc_heartbeat]=/opt/plugins/check_procs -w 4: -c 4:5 -C heartbeat
command[proc_conntrackd]=/opt/plugins/check_procs -w 1: -c 1:2 -C conntrackd
command[proc_crm]=/opt/plugins/check_procs -w 1: -c 1:2 -C crmd
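For illustration, the two access changes might look like the fragment below; 192.0.2.10 is only a placeholder for your monitoring server’s address, and the configuration path can vary between packages.

```
# /etc/nrpe.conf -- allow the monitoring server to connect:
allowed_hosts=127.0.0.1,192.0.2.10

# Rule for the iptables script so NRPE (5666/TCP) survives fail-over:
iptables -A INPUT -p tcp -s 192.0.2.10 --dport 5666 -j ACCEPT
```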

Once finished, restart NRPE and you should be able to monitor the services above.
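On Debian the restart goes through the init script shipped with the nagios-nrpe-server package:

```
deb-fw1:~# /etc/init.d/nagios-nrpe-server restart
```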