This is an annoying problem on a lot of Linux distributions, and it can have several causes.
Caused by open file descriptors
If you delete files from the filesystem, the command “df -h” might not show the freed space as available. This is because the deleted files can still be held open by running processes: as long as a process keeps a file descriptor open on a file, the kernel does not release its blocks, even after the file has been unlinked. As a result, df still counts that space as used.
Here are some ways you can track which processes still refer to the deleted files.
# lsof | grep '(deleted)'
# ls -l /proc/*/fd 2>/dev/null | grep '(deleted)'
The solution is to either stop or restart the offending process (kill the PID that lsof reports), or, if the process must keep running, truncate the deleted file through its entry under /proc so the space is released immediately.
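The whole life cycle can be reproduced in a few lines. A minimal sketch, assuming a Linux system with /proc mounted (the file itself is just a temporary file created for the demonstration):

```shell
# Show that an unlinked file keeps occupying space while a descriptor is open.
tmpfile=$(mktemp)
exec 3> "$tmpfile"        # open file descriptor 3 on the file and keep it open
echo "some log data" >&3
rm "$tmpfile"             # the name is gone, but the blocks are not freed yet
ls -l "/proc/$$/fd/3"     # the symlink target is now marked "(deleted)"
: > "/proc/$$/fd/3"       # truncating via /proc releases the space right away
exec 3>&-                 # closing the descriptor finally drops the inode
```

The ": >" truncation is the trick to use on a long-running daemon whose huge deleted log file you cannot afford to release by restarting the daemon.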
Reserved space for journaling
Alternatively, if you’re using a journaling filesystem (such as ext3), keep in mind that df also counts the space used by the journal in its output.
Default block reservation for super-user
Also keep in mind that, by default, 5% of the blocks are reserved for the super-user on every block device (in short: on every separate partition of every hard disk in your system). You can check the amount of reserved space by running the tune2fs -l command.
# tune2fs -l /dev/sda2 | grep -i reserved
Reserved block count:     208242
Reserved GDT blocks:      1016
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
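To see how much space that reservation actually holds back, multiply the reserved block count by the filesystem’s block size (tune2fs -l also reports a “Block size” line). A small sketch using the numbers above; the 4096-byte block size is an assumption, so check your own tune2fs output:

```shell
# Convert the reserved block count into the space held back for root.
# Values taken from the example tune2fs output; the block size is assumed.
reserved_blocks=208242
block_size=4096           # bytes; see "Block size" in tune2fs -l
echo "$(( reserved_blocks * block_size / 1024 / 1024 )) MiB reserved for root"
# prints "813 MiB reserved for root"
```

So with these example numbers, roughly 813 MiB of the partition is invisible to non-root users even though df reports it as free only for root.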
As described in the mkfs.ext3 manual, for the -m parameter:
-m: Specify the percentage of the filesystem blocks reserved for the super-user. This avoids fragmentation, and allows root-owned daemons, such as syslogd(8), to continue to function correctly after non-privileged processes are prevented from writing to the filesystem. The default percentage is 5%.
On an existing filesystem, the same percentage can be changed afterwards with tune2fs -m.