Incoming bandwidth limitation on OpenVZ with CentOS 6.x running on VMware ESX(i)



Mattias Geniar, February 26, 2012


I must agree, it’s a bit of a weird combination, but I’m running an OpenVZ host on a CentOS 6 machine that has been created as a virtual machine on a VMware vSphere ESXi 4.1 host. Why? It’s a simple way to create extra containers with a low-memory footprint. It’s useful.

The problem I was having, however, was that while the incoming bandwidth on my OpenVZ host system (ct0) could utilize the full line speed, the incoming bandwidth inside a container was experiencing "spikes". Downloads of files would start at full line speed inside a container, but would drop to 5-10 KB/s after a few seconds. This kept happening every single time.
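
Reproducing it was as simple as entering a container and starting any large download; the CTID of 101 and the URL below are just placeholders:

root@host:~# vzctl enter 101
root@ct101:~# wget -O /dev/null http://mirror.example.com/some-large-file.iso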

My first reaction was to check for any kind of incoming bandwidth shaping that could be limiting it.

root@host:~# tc qdisc show dev venet0
root@host:~# tc class show dev venet0
root@host:~# tc filter show dev venet0
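
If you want to rule out shaping inside the container as well, the same commands can be run through vzctl (the CTID of 101 is just an example):

root@host:~# vzctl exec 101 tc qdisc show dev venet0
root@host:~# vzctl exec 101 tc class show dev venet0
root@host:~# vzctl exec 101 tc filter show dev venet0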

But that wasn't the case. Next up was a possible set of iptables rules, but both the host and the container had an empty ruleset.

root@host:~# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination   

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination   

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
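
If you're debugging something similar, the nat and mangle tables are worth a quick look as well, since tc filters sometimes act on packet marks set there:

root@host:~# iptables -t nat -L
root@host:~# iptables -t mangle -L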

So I was stuck: I could replicate the problem consistently, it wasn't a one-time thing, but I couldn't find the exact root cause. It turns out other people on the OpenVZ forum had a similar issue, but they didn't mention whether they were running inside a VMware virtual machine.

A lot of debugging later, with great responses from the dev team at OpenVZ, the issue was identified as a likely problem with the VMXNET3 network interface card that VMware adds by default when you create a CentOS 6/RHEL 6 virtual machine. Switching the NIC from VMXNET3 to e1000 solved the slow network performance inside the containers. The e1000 driver appears to be better supported here and gives us full line speed inside a container.
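
To confirm which driver the VM is actually using, before and after swapping the adapter, ethtool does the trick (the interface name may differ on your machine); the "driver:" line should read e1000 instead of vmxnet3 once the new adapter is in place:

root@host:~# ethtool -i eth0
root@host:~# lspci | grep -i ethernet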

If you change the NIC this way (remove the network adapter and add a new one), your eth device naming may change because the MAC address changes. If you want to change it back to eth0, have a look at the blog post "changing interface back to eth0 from eth1 in linux (centos/rhel)".
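
In short, on CentOS 6 that comes down to fixing the MAC address in two places and rebooting: remove the old vmxnet3 entry from the udev rules and rename the new adapter's entry to eth0, then update HWADDR in the ifcfg file to match. The MAC below is only a placeholder, and your ifcfg file will likely contain more settings than shown here:

root@host:~# vi /etc/udev/rules.d/70-persistent-net.rules
root@host:~# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=00:0C:29:AA:BB:CC
ONBOOT=yes
root@host:~# reboot

Hope this helps you somehow!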


