This week I faced a rather peculiar issue while trying to set up clustering in CrateDB, which works very much like clustering in Elasticsearch. As soon as I did the basic configuration (which you can find here), starting the process failed with the following kinds of exceptions:
ERROR: bootstrap checks failed
max file descriptors [16384] for elasticsearch process is too low, increase to at least [65536]
max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
java.lang.RuntimeException: bootstrap checks failed
initial heap size ... not equal to maximum heap size ...; this can cause resize pauses and prevents mlockall from locking the entire heap
please set [discovery.zen.minimum_master_nodes] to a majority of the number of master eligible nodes in your cluster
    at org.elasticsearch.bootstrap.BootstrapCheck.check(BootstrapCheck.java:132)
[WARN ][o.e.b.BootstrapChecks ] [node-1] max file descriptors [...] for elasticsearch process is too low, increase to at least [65536]
[WARN ][env ] max file descriptors [65535] for elasticsearch process likely too low, consider increasing to at least [65536]
[WARN ][o.e.b.BootstrapChecks ] [node-1] max number of threads [1024] for user [] is too low, increase to at least [2048]
[WARN ][o.e.b.BootstrapChecks ] [node-1] system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk
The second one, which is related to heap size, can be solved very easily by specifying the heap parameters in your system's environment variables for CrateDB or Elasticsearch. You need to set the same heap size for -Xms and -Xmx. Something like:
CRATE_HEAP_SIZE=2g
CRATE_JAVA_OPTS=-Xmx2g -Xms2g
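For example, if you set these in the shell before launching CrateDB (packaged installs usually read them from a service environment file instead), you can confirm after startup that the JVM really picked up matching -Xms/-Xmx flags. This is just a minimal sketch; the ps filter is only an illustration:
# Pin minimum and maximum heap to the same value:
$ export CRATE_HEAP_SIZE=2g
$ export CRATE_JAVA_OPTS="-Xms2g -Xmx2g"
# After starting CrateDB, the launch command should show matching -Xms/-Xmx values:
$ ps aux | grep -i '[c]rate'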
The other one, max file descriptors, is related to the ulimit set on your system. To see your existing hard and soft limits, use these commands in the terminal:
$ ulimit -Hn
$ ulimit -Sn
In my case, the hard limit was set to 4096 and the soft limit to 1024 by default.
Also, raising the limit directly with this command doesn't work in Ubuntu, at least not in 18.04:
$ ulimit -n 70000
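For context, here is why the direct call fails: a non-root shell can lower its limits or raise the soft limit, but only up to the current hard limit; anything beyond that is refused, which is why the persistent configuration below is needed. A minimal sketch of the behaviour:
# Raising the soft limit up to the current hard limit (4096 in my case) works:
$ ulimit -Sn 4096
# Going past the hard limit in a non-root shell is refused:
$ ulimit -n 70000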
To solve the remaining issue and increase the limit on open file descriptors, just follow these steps and you will be good to go:
- Go to the following file:
$ sudo vim /etc/security/limits.conf
- Add the following lines to the file:
* soft nofile 80000
* hard nofile 80000
* soft nproc 80000
* hard nproc 80000
root soft nofile 80000
root hard nofile 80000
- Save the changes.
- Next, you need to go to the following file:
$ sudo vim /etc/systemd/system.conf
- Add this to the file and save the changes:
DefaultLimitNOFILE=80000
- Do the same in the following file:
$ sudo vim /etc/systemd/user.conf
DefaultLimitNOFILE=80000
These changes make sure that the ulimit is also increased for GUI-based logins.
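After a reboot (or logging out and back in), you can sanity-check that the new limits are actually in effect. A minimal sketch, assuming your CrateDB systemd unit is called crate; adjust the unit name to whatever your install uses:
# Per-session limits picked up from /etc/security/limits.conf:
$ ulimit -Hn
$ ulimit -Sn
# Default limit the systemd manager now applies to services:
$ systemctl show --property DefaultLimitNOFILE
# Limit actually applied to the CrateDB service (the unit name 'crate' is an assumption):
$ systemctl show crate --property LimitNOFILE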
If you're on a non-Ubuntu Linux distribution and the above steps don't help, you can try the following instead:
First check the following values:
$ cat /proc/sys/fs/file-max
$ cat /proc/sys/vm/max_map_count
If fs.file-max is below 65536 or vm.max_map_count is below 262144, then do this:
- Edit this file:
$ sudo vim /etc/sysctl.conf
with the following changes:
fs.file-max = 500000
vm.max_map_count = 262144
- Next, edit the following files:
$ sudo vim /etc/pam.d/common-session
$ sudo vim /etc/pam.d/common-session-noninteractive
Add the following line to both of these files:
session required pam_limits.so
P.S. You will need to reboot the system for the changes to take effect.
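If you would rather not reboot just for the kernel parameters, the sysctl part can be applied and verified immediately; only the PAM and limits changes really need a fresh login or reboot. A minimal sketch:
# Reload everything from /etc/sysctl.conf:
$ sudo sysctl -p
# Or set a single value on the fly (this alone does not persist across reboots):
$ sudo sysctl -w vm.max_map_count=262144
# Verify:
$ cat /proc/sys/fs/file-max
$ cat /proc/sys/vm/max_map_count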
Let me know via the comments below if you face any problems with any of the above steps. Adios till my next post!