On Linux, the number of files you can have open at one time is limited. This note explains how to determine the number of open files on Ubuntu and how to increase the limit.
To determine how many open files a process has, find its process id and then run the lsof command.
To find the process id, use the ps command. Since ps produces a lot of output, it is usually easiest to grep for the process you are interested in. For example, to find the tomcat process:
ps aux | grep tomcat
steve 21405 0.0 3.3 428148 69060 ? Sl 10:07 0:00 /usr/lib/jvm/java-6-sun/bin/java …
Here the process id is 21405.
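If pgrep is available (it ships with Ubuntu as part of the procps package), it finds the process id in one step:

```shell
# print the pids of processes whose command line matches "tomcat"
pgrep -f tomcat
```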
Once you have the process id, you can see the open files using the lsof command. For example:
lsof -p 21405
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
java 21405 steve cwd DIR 8,1 4096 407944 /home/steve/apache
java 21405 steve rtd DIR 8,1 4096 2 /
java 21405 steve txt REG 8,1 47308 400238 /usr/lib/jvm/java-6-sun/jre/bin/java
To determine the number of open files, count the number of lines output by the lsof command, for example:
lsof -p 21405 | wc -l
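Note that lsof prints a header line, so this count is one too high. Counting the entries in /proc is another option; those are only the numbered descriptors. In this sketch I use the shell's own pid ($$) as a stand-in for 21405:

```shell
# subtract the lsof header line from the count
lsof -p $$ | tail -n +2 | wc -l

# or count the entries in /proc, which are only the numbered fds
ls /proc/$$/fd | wc -l
```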
An alternate way to see the open files is to list the process's file descriptor directory under /proc.
To see the list sorted numerically by file descriptor:
ls -l /proc/21405/fd | sort -g -k 9,9
To sort by the Name column:
lsof -p 21405 | sort -k 9,9 >~/tmp/open.txt
The FD column is the file descriptor column. Its value is either a file descriptor number or one of the following:
- cwd current working directory
- Lnn library references (AIX)
- err FD information error (see NAME column)
- jld jail directory (FreeBSD)
- ltx shared library text (code and data)
- Mxx hex memory-mapped type number xx
- m86 DOS Merge mapped file
- mem memory-mapped file
- mmap memory-mapped device
- pd parent directory
- rtd root directory
- tr kernel trace file (OpenBSD)
- txt program text (code and data)
- v86 VP/ix mapped file
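The non-numeric FD entries (cwd, rtd, txt, mem, and so on) are not descriptors the process opened itself, so when comparing usage against the open-file limit it can help to count only the numeric ones. A rough sketch (the awk pattern is my own, matching a leading digit in the FD column):

```shell
# count only entries whose FD column (field 4) starts with a digit,
# e.g. "0u", "3r", "255u" -- these are real file descriptors
lsof -p 21405 | awk '$4 ~ /^[0-9]/' | wc -l
```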
Determine the Limit
Each user has a limit for the number of open files. This limit applies to each process run by the user. For example, if the limit is 1024 and the user has three processes running, each process can open 1024 files, for a total of 3072.
To determine the soft limit:
ulimit -Sn
To determine the hard limit:
ulimit -Hn
ulimit -n also reports the soft limit. The soft limit is the one actually enforced when opening files; the hard limit is the ceiling up to which the soft limit can be raised.
Increase the Limit
To increase the limit to 1080 use the following command:
ulimit -Sn 1080
You can change the hard limit too: ulimit -Hn 2040. Running ulimit -n 2040 sets both the soft and hard limits to the same value. Note that as a normal user, once you lower the hard limit you cannot raise it back in the same shell; you have to start a new shell (for example, by logging in again) to get the original hard limit back.
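To experiment without disturbing your current shell, run the commands in a throwaway child shell; the changed limits die with it. For example (1080 assumes your hard limit is at least that high):

```shell
# lower the soft limit in a child bash and show both values;
# the parent shell's limits are unchanged
bash -c 'ulimit -Sn 1080; echo "soft: $(ulimit -Sn)  hard: $(ulimit -Hn)"'
ulimit -Sn   # still the original value here
```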
If you try to set the soft limit above the hard limit you get the following message:
ulimit -Sn 3000
bash: ulimit: open files: cannot modify limit: Invalid argument
Note: limits set with ulimit apply only to the current shell and its children, so they are gone after you log out or reboot.
You cannot determine the limit of the root user by running ulimit under sudo. For example:
sudo ulimit -n
sudo: ulimit: command not found
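sudo fails here because ulimit is a shell builtin, not an executable on the PATH. The trick is to have sudo start a shell and run the builtin inside it:

```shell
# run the ulimit builtin inside a root shell
sudo sh -c 'ulimit -n'
```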
To make the limits bigger and to make the change permanent, edit the limits configuration file and reboot. On Ubuntu the file is /etc/security/limits.conf:
sudo nano /etc/security/limits.conf
Add lines like these:
steve soft nofile 4000
steve hard nofile 5000
You can use * in the limits.conf file instead of a user name to specify all users; however, * does not apply to root:
* soft nofile 20000
* hard nofile 30000
The limits.conf file is applied during the boot process. If a process is started during boot before the limits are applied, it gets the default value of 1024. To check, record the limit in a file right before starting your process, then verify it is the expected value:
ulimit -n >mylimit.txt
You cannot start a process late enough in the boot sequence for the limits.conf values to be in effect. For example, "sudo update-rc.d tomcat defaults 99 01" puts the script at the very end of the boot order, and the limits are still not applied.
The workaround is to force the limit before starting the process: put "ulimit -n 4000" in the startup script, just before the line that launches your process.
Testing the Limit
I wrote a program called openmany that I use to test the open file limit. It creates a bunch of files in a folder then opens them.
java -jar openmany.jar
Usage: openmany [-c] number
c Continue to run holding on to the open files.
number The number of files to open.
java -jar openmany.jar 100
Creating 100 files in folder openmany.
Opening the files.
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99
To remove the directory of files created by the program:
rm -r openmany/
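The core of what openmany does can be sketched in a few lines of bash; this is my own approximation, not the program's actual source:

```shell
#!/bin/bash
# create N files, then open them all and hold the descriptors open
n=${1:-100}
mkdir -p openmany
for i in $(seq 0 $((n - 1))); do
    : > "openmany/$i.txt"            # create the file
    exec {fd}< "openmany/$i.txt"     # open it; bash picks a free fd number
    printf '%s ' "$i"
done
echo
echo "holding $n files open; fd count: $(ls /proc/$$/fd | wc -l)"
```

Run it with a number above the soft limit and the open at that point fails with "Too many open files", just like the Java program.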
When I set the limit to 60,000, the program ran out of memory at about 30,000 open files, so the effective limit also depends on the memory allocated to the Java program.
Here is an example of trying to open more than the limit:
java -jar openmany.jar 1050
1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 1017 1018 Exception in thread "main" java.io.FileNotFoundException: openmany/1019.txt (Too many open files)
    at java.io.FileInputStream.<init>(FileInputStream.java:106)
    at java.io.FileReader.<init>(FileReader.java:55)
    at openmany.Main.main(Main.java:49)
You can run the program as root and test its limits too:
sudo java -jar openmany.jar 1050
Set System Wide Limits
There is another file limit in the system, the total number of files that can be opened by all processes.
To see the file max value:
sysctl -a | grep fs.file-max
fs.file-max = 170469
Since the default is so large, there is usually no reason to change it.
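A related read-only view is /proc/sys/fs/file-nr, which shows how many file handles are actually in use system-wide against that maximum:

```shell
# three numbers: allocated handles, allocated-but-unused handles, and the maximum
cat /proc/sys/fs/file-nr
```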