Increase the number of open files for jobs managed by supervisord

Mattias Geniar, Thursday, November 29, 2018

In Linux, a process run by a non-privileged user can by default open only 1024 files. This includes handles to log files, but also local sockets, TCP connections, ... everything is a file, and its usage is limited as a system protection.

Normally, we can increase the number of files a particular user can open by raising the system limits. These are configured in /etc/security/limits.d/.

For instance, this allows the user john to open up to 10,000 files.

$ cat /etc/security/limits.d/john.conf
john		soft		nofile		10000
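Once that file is in place, log out and back in as john (pam_limits applies these settings at session start) and check the limit:

```shell
# Print the soft limit on open file descriptors for the current session.
# After the limits.d change above, a fresh login as john should show 10000.
ulimit -Sn
```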

You would assume that once configured, this would apply to all the commands that run as the user john. Alas, that's not the case if you use supervisord to run a process.

Take the following supervisor job for instance:

$ cat /etc/supervisord.d/john.ini
[program:john]
command=/usr/bin/php /path/to/script.php
user=john

This adds a job to supervisor that keeps the task /usr/bin/php /path/to/script.php running as the user john; if it crashes or stops, supervisor automatically restarts it.

However, if we were to inspect the actual limits being enforced on that process, we'd find the following.

$ cat /proc/19153/limits
Limit                     Soft Limit           Hard Limit           Units
Max open files            1024                 4096                 files

The process has a soft limit of 1024 files and a hard limit of 4096, despite the higher limit we configured in the limits.d directory.

The reason is that supervisord has a setting of its own, minfds, which controls the number of file descriptors it can open. That limit is inherited by every child process supervisord spawns, so it overrides anything you set in limits.d.
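You can see this inheritance mechanism in action with any shell: lower the soft limit in a parent process and every child starts with the lowered value, regardless of what limits.d says (512 here is just an arbitrary example value):

```shell
# Lower the soft limit on open files in the current shell...
ulimit -Sn 512

# ...and any child process inherits it, just like supervisord's children do.
sh -c 'ulimit -Sn'
# prints 512
```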

Its default value is 1024, and it can be increased to anything you'd like (or need).

$ cat /etc/supervisord.conf
[supervisord]
minfds=10000

You'll find this file at /etc/supervisor/supervisord.conf on Debian or Ubuntu systems. Either add or modify the minfds parameter and restart supervisord (which will restart all your spawned jobs, too); you'll notice the limits have actually been increased.



Arthur Pewty Friday, December 28, 2018 at 19:49 -

Why wouldn’t you just use systemd for this these days, in the event you have poor-quality code which needs restarting unattended? Limits can be set per process in the service unit files.
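Indeed, with systemd the limit can be set per service via the LimitNOFILE directive. A minimal sketch of an equivalent unit for the job above (the unit name and paths are illustrative):

```ini
# /etc/systemd/system/john-script.service  (illustrative name)
[Unit]
Description=Keep script.php running

[Service]
User=john
ExecStart=/usr/bin/php /path/to/script.php
Restart=always
LimitNOFILE=10000

[Install]
WantedBy=multi-user.target
```

After a systemctl daemon-reload, systemctl enable --now john-script starts the process with the raised limit, no supervisord needed.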
