Scenario / Questions
TL;DR version: Turns out this was a deep Broadcom networking bug in Windows Server 2008 R2. Replacing with Intel hardware fixed it. We don’t use Broadcom hardware any more. Ever.
We have been using HAProxy along with heartbeat from the Linux-HA project. We are using two Linux instances to provide failover. Each server has its own public IP, plus a single IP shared between the two via a virtual interface (eth1:1) at IP: 184.108.40.206
The virtual interface (eth1:1) IP 220.127.116.11 is configured as the gateway for the Windows servers behind them, and we use ip_forwarding to route traffic.
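For reference, the setup described above can be sketched roughly as follows on the active Linux node. This is an assumed reconstruction, not our exact configuration; the interface name and shared IP come from the question, and the netmask is a placeholder. In practice heartbeat brings the virtual interface up and down on failover rather than it being configured by hand:

```shell
# Enable forwarding so the Linux box routes for the Windows servers behind it:
sysctl -w net.ipv4.ip_forward=1

# Bring up the shared virtual interface on the active node
# (heartbeat normally manages this; netmask is a placeholder):
ifconfig eth1:1 220.127.116.11 netmask 255.255.255.0 up
```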
We are experiencing an occasional network outage on one of our windows servers behind our linux gateways. HAProxy will detect the server is offline which we can verify by remoting to the failed server and attempting to ping the gateway:
Pinging 18.104.22.168 with 32 bytes of data:
Reply from 22.214.171.124: Destination host unreachable.
arp -a on this failed server shows that there is no entry for the gateway address (126.96.36.199):
Interface: 188.8.131.52 --- 0xa
  Internet Address      Physical Address      Type
  184.108.40.206        00-26-88-63-c7-80     dynamic
  220.127.116.11        00-15-5d-0a-3e-0e     dynamic
  18.104.22.168         00-21-5e-4d-45-c9     dynamic
  22.214.171.124        00-15-5d-00-b2-0d     dynamic
  126.96.36.199         00-21-5e-4d-61-1a     dynamic
  188.8.131.52          00-21-5e-4d-2c-e8     dynamic
  184.108.40.206        00-21-5e-4d-38-e5     dynamic
  220.127.116.11        00-15-5d-00-b2-0d     dynamic
  18.104.22.168         00-15-5d-0a-3e-09     dynamic
  22.214.171.124        ff-ff-ff-ff-ff-ff     static
  126.96.36.199         01-00-5e-00-00-16     static
  188.8.131.52          01-00-5e-00-00-fc     static
  184.108.40.206        01-00-5e-00-00-01     static
On our Linux gateway instances, arp -a shows:
peak-colo-196-220.peak.org (220.127.116.11) at <incomplete> on eth1
stackoverflow.com (18.104.22.168) at 00:21:5e:4d:45:c9 [ether] on eth1
peak-colo-196-215.peak.org (22.214.171.124) at 00:21:5e:4d:61:1a [ether] on eth1
peak-colo-196-219.peak.org (126.96.36.199) at 00:21:5e:4d:38:e5 [ether] on eth1
peak-colo-196-222.peak.org (188.8.131.52) at 00:15:5d:0a:3e:09 [ether] on eth1
peak-colo-196-209.peak.org (184.108.40.206) at 00:26:88:63:c7:80 [ether] on eth1
peak-colo-196-217.peak.org (220.127.116.11) at 00:21:5e:4d:2c:e8 [ether] on eth1
Why would ARP occasionally set the entry for this failed server as <incomplete>? Should we be defining our ARP entries statically? I’ve always left ARP alone since it works 99% of the time, but in this one instance it appears to be failing. Are there any additional troubleshooting steps we can take to help resolve this issue?
THINGS WE HAVE TRIED
I added a static arp entry for testing on one of the linux gateways which still didn’t help.
root@haproxy2:~# arp -a
peak-colo-196-215.peak.org (18.104.22.168) at 00:21:5e:4d:61:1a [ether] on eth1
peak-colo-196-221.peak.org (22.214.171.124) at 00:15:5d:00:b2:0d [ether] on eth1
stackoverflow.com (126.96.36.199) at 00:21:5e:4d:45:c9 [ether] on eth1
peak-colo-196-219.peak.org (188.8.131.52) at 00:21:5e:4d:38:e5 [ether] on eth1
peak-colo-196-209.peak.org (184.108.40.206) at 00:26:88:63:c7:80 [ether] on eth1
peak-colo-196-217.peak.org (220.127.116.11) at 00:21:5e:4d:2c:e8 [ether] on eth1
peak-colo-196-220.peak.org (18.104.22.168) at 00:21:5e:4d:30:8d [ether] PERM on eth1
root@haproxy2:~# arp -i eth1 -s 22.214.171.124 00:21:5e:4d:30:8d
root@haproxy2:~# ping 126.96.36.199
PING 188.8.131.52 (184.108.40.206) 56(84) bytes of data.
--- 220.127.116.11 ping statistics ---
7 packets transmitted, 0 received, 100% packet loss, time 6006ms
Rebooting the Windows web server solves the issue temporarily with no other changes to the network, but our experience shows the issue will come back.
Swapping network cards and switches
I noticed the link light on the port of the switch for the failed windows server was running at 100Mb instead of 1Gb on the failed interface. I moved the cable to several other open ports and the link indicated 100Mb for each port that I tried. I also swapped the cable with the same result. I tried changing the properties of the network card in windows and the server locked up and required a hard reset after clicking apply. This windows server has two physical network interfaces so I have swapped the cables and network settings on the two interfaces to see if the problem follows the interface. If the public interface goes down again we will know that it is not an issue with the network card.
(We also tried another switch we have on hand, no change)
Changing network hardware driver versions
We’ve had the same problem with the latest Broadcom driver, as well as the built-in driver that ships in Windows Server 2008 R2.
Replacing network cables
As a last-ditch effort we remembered another change that occurred: the replacement of all of the patch cords between our servers and switch. We had purchased two sets, one of green cables (1 ft – 3 ft) for the private interfaces and another set of red cables for the public interfaces. We swapped out all of the public interface patch cables with a different brand and ran our servers without issue for a full week … aaaaaand then the problem recurred.
Disable checksum offload, remove TProxy
We also tried disabling TCP/IP checksum offload in the driver; no change. We’re now pulling out TProxy and moving to a more traditional x-forwarded-for network arrangement without any fancy IP address rewriting. We’ll see if that helps.
Switch Virtualization providers
On the off chance this was related to Hyper-V in some way (we do host Linux VMs on it), we switched to VMware Server. No change.
Switch host model
We’ve reached the end of our troubleshooting rope and are now formally involving Microsoft support. They recommended changing the host model.
We did that, and we also got some unpublished kernel hotfixes which were presumably rolled into 2008 R2 SP1. No fix.
Replacing network card hardware
Ultimately, replacing the Broadcom network hardware with Intel network hardware fixed this issue for us. So I am inclined to think that the Broadcom Windows Server 2008 R2 drivers are at fault!
ANSWERS / SUGGESTIONS
If no ARP cache entry exists for a requested destination IP, the kernel will generate mcast_solicit ARP requests until receiving an answer. During this discovery period, the ARP cache entry will be listed in an incomplete state. If the lookup does not succeed after the specified number of ARP requests, the ARP cache entry will be listed in a failed state. If the lookup does succeed, the kernel enters the response into the ARP cache and resets the confirmation and update timers.
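The states and timers described above can be observed directly on the Linux gateway. The commands below are a sketch assuming the eth1 interface from the question; `ip -s neigh` shows each neighbour entry's state (REACHABLE, STALE, DELAY, PROBE, INCOMPLETE, FAILED) along with its counters, and the solicit/reachable timers are exposed as per-interface sysctls:

```shell
# Show neighbour cache entries, their states, and request/failure stats:
ip -s neigh show dev eth1

# Timers governing re-confirmation and discovery (eth1 assumed):
sysctl net.ipv4.neigh.eth1.base_reachable_time_ms
sysctl net.ipv4.neigh.eth1.mcast_solicit   # broadcast solicitations before giving up
sysctl net.ipv4.neigh.eth1.ucast_solicit   # unicast probes when refreshing an entry
```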
It looks like your Windows server is not responding (or responding too slowly) to ARP requests from your gateway box. Does that <incomplete> eventually switch to <failed>? What network hardware do you have between the server and the gateway? Is it possible broadcast ARP requests are being filtered or blocked somewhere between the two hosts?
It means that you pinged the address, the IP has a PTR record (hence the name) but nothing responded from the machine in question. When we see this it’s most commonly due to a subnet mask being set incorrectly – or in the case of IPs bound to a loopback interface that were accidentally bound to the eth interface instead.
What is 196.220? What is its relationship with 196.211? I’m assuming that .220 is one of the HAProxy hosts. When you run ifconfig -a and arp -a on it, what does it show?
As Max Clark says, the <incomplete> just means that 18.104.22.168 has put out an ARP request for 22.214.171.124 and hasn’t received a response yet. (In Windows-land you’ll see this as an ARP mapping to "00-00-00-00-00-00"… It seems odd to me, BTW, that you’re not seeing such an ARP mapping on 126.96.36.199 for 188.8.131.52.)
I tend not to like to use static ARP entries because, in my experience, ARP has generally done its job all the time.
If it were me, I’d sniff the appropriate Ethernet interface on the “failing” Windows machine (184.108.40.206) to observe it ARP’ing for 220.127.116.11, and to observe how / if it’s responding to ARP requests from 18.104.22.168. I’d also consider sniffing on the gateway machine for ARP only (tcpdump -i interface-name arp) to see what the ARP traffic looks like from the side of the Linux machine.
I know, from the blog, that you’ve got a back-end network and a front-end network. During these outages, does the “failing” Windows server (22.214.171.124) have any problems communicating to other machines in the front-end network, or is it just having problems talking to its gateway? I’m curious if you’re coming at the failing machine thru the front-end or back-end network when you’re catching it in the act.
What are you doing to “resolve” the issue when it occurs?
I see from your update that you’re rebooting the “failing” Windows machine to resolve the issue. Before you do that next time, can you verify that the Windows machine is able to “talk” on its front-end interface at all? Also, grab a copy of the routing table from the Windows machine (route print) during a failure, too. (I’m trying to ascertain if the NIC / driver is going bonkers on the Windows machine, basically.)
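A one-shot snapshot like the following (cmd.exe syntax) could capture that state before the reboot wipes it. This is a suggested sketch, not something from the original thread; the output paths and the gateway placeholder are assumptions to fill in for your environment:

```shell
:: Run on the failing Windows box during an outage, before rebooting.
:: C:\temp paths and <gateway-ip> are placeholders.
route print   > C:\temp\failure-route.txt
arp -a        > C:\temp\failure-arp.txt
ipconfig /all > C:\temp\failure-ipconfig.txt
netsh interface ipv4 show neighbors >> C:\temp\failure-arp.txt
ping -n 4 <gateway-ip> > C:\temp\failure-ping.txt
```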
This document shows the different states (table 2.1). Incomplete would mean that it has sent a first ARP request (presumably after a stale, delay, probe) but hasn’t yet received a response.
The reason the static ARP on the haproxy node doesn’t help is that your web server still can’t figure out how to get back to the gateway.
Static ARP on the web server breaks the ability of your web servers to switch gateways when one of the HAProxy nodes fails — I’m guessing the virtual interface shares the same MAC address as the HAProxy node’s eth1, so you’d have to hard-code one of the two gateways into each web server.
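For completeness, pinning the gateway's MAC on a Server 2008 R2 box would look roughly like this (a sketch; the interface name, IP, and MAC below are placeholders, not values confirmed by the thread). As noted above, doing this defeats gateway failover, so it is only useful as a diagnostic:

```shell
:: Pin a static neighbour entry for the gateway (cmd.exe, elevated).
:: Interface name, IP, and MAC are placeholders.
netsh interface ipv4 add neighbors "Local Area Connection" 220.127.116.11 00-15-5d-00-b2-0d

:: Remove it again once testing is done:
netsh interface ipv4 delete neighbors "Local Area Connection" 220.127.116.11
```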
Do you have any kind of security software installed on the failing web server? I spent a long night with a Windows 2008 server that had Symantec Endpoint Security on it — it installs some filtering code in the networking stack that prevented it from seeing the gateway’s ARP packets at all. The fix for that (as provided by Microsoft) was to remove the registry entry that loaded the DLL.
The other time this problem occurred, removing the whole network adapter from device manager and reinstalling seemed to help.
Since you’ve statically set your arp entry, your servers know where to find the gateway. However, if your switch doesn’t know where the gateway is, it won’t forward your packets.
Sounds like you’ve got a bad (or confused) switch between your HAProxy nodes and your web servers. Reboot it.
Either that, or your HAProxy servers disagree about which one is in control, and both are answering ARP lookups for .211.
Along the same lines, if your switch is overloaded, your HAProxy nodes might be unable to communicate with each other fast enough, and are failing over.
The next time this problem occurs, I would suggest running some packet captures on the two hosts in question, to determine what ARP traffic each of them is observing.
In fact, thinking about it, as the problem appears to be with ARP specifically, you could potentially just continuously record all ARP traffic on the HAproxy machine and the Windows machine in question, with a rolling capture file of (for argument’s sake) 10MB. That should be large enough such that by the time you’ve detected a failure, the capture file will still contain the ARP traffic from before the failure. (It’s worth experimenting by running the capture for an hour or so, to see how much data it generates).
Example capture syntax for Linux tcpdump (note, I don’t have a Linux box handy to test this on; please test the behaviour of -C and -W before using in production!):
tcpdump -C 10 -i eth1 -w /var/tmp/arp.cap -W 1 arp
This should hopefully give you some indication of what precisely is failing. When an ARP entry expires (and according to this article, newer versions of Windows appear to age out ‘inactive’ entries very aggressively), I would expect the following to happen:
- The source host will send an ARP request to the target host. ARP requests are generally broadcast, but in the case where a host is refreshing an existing entry, the ARP may be sent unicast.
- The target host will respond with an ARP reply. 99% of the time this will be unicast, but the RFC permits broadcast responses. (See also the RFC regarding IPv4 Address Collision Detection for more detail).
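You can watch both sides of that exchange with tcpdump. The commands below are a sketch assuming the eth1 interface from the question; the -e flag prints link-level headers so you can see whether each request or reply went out broadcast or unicast, and the opcode filter isolates replies:

```shell
# Print ARP traffic with Ethernet headers (-e) and numeric addresses (-nn),
# so broadcast vs. unicast is visible per frame:
tcpdump -i eth1 -e -nn arp

# ARP replies only (opcode 2), to confirm the target host is answering:
tcpdump -i eth1 -e -nn 'arp[6:2] = 2'
```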
Simple as it sounds, there are a bunch of other things that may interfere with this process:
- The original request may not be arriving at the target.
- The request may be arriving at target, but the response may not be reaching the source.
- Some sort of high availability mechanism may be interfering with the ‘normal’ behaviour of ARP:
- How does failover between the HAProxy nodes work? Does it use a shared MAC address, or does it use gratuitous ARP to fail an IP address over between nodes?
- A lot of the MAC addresses in the ARP tables above begin with 00-15-5D, which is apparently registered to Microsoft. Are you using any form of clustering or other HA on the Windows machine in question? Are these 00-15-5D MAC addresses the same ones you see associated with the hardware NICs when you do an ‘ipconfig /all’ on the Windows server?
Things to check if/when this happens again:
- Look at the packet captures of ARP traffic; has any part of the conversation obviously not occurred?
- Check the switch’s bridging/CAM tables; do all the MAC addresses in question map to the ports you expect them to?
- Do other hosts on the subnet have valid ARP entries for the IP addresses of both the Windows and HAProxy hosts?
- Do ARP entries for the same target IP on multiple different source machines resolve to the same MAC address? i.e. log on to a couple of other hosts on the subnet and verify that 196.211 resolves to the same MAC address on both.
We had a similar issue with one of our 2008 R2 terminal servers where all traffic on the NIC would stop even though the link stayed up and the NIC LEDs still showed activity. This was an ongoing issue that kept cropping up 2–3 times a week, but only after around 12–13 hours of uptime (the server is rebooted nightly).
I found Seriousbit Netbalancer was the cause, after I tried (out of curiosity) terminating the NetbalancerService service. Traffic then started moving across the interface. I’ve since uninstalled Netbalancer.
I had the same problem with an Asus mainboard’s onboard LAN. It was fixed by installing the latest driver from the Realtek website.