If you're using Redis, you may find that your application logs start to show the following error message:
$ tail -f error.log
OOM command not allowed when used memory > 'maxmemory'
This can happen whenever a WRITE operation is sent to Redis to store new data.
What does it mean?
The OOM command not allowed when used memory > 'maxmemory' error means that Redis was configured with a memory limit and that particular limit was reached. In other words: its memory is full, it can't store any new data.
You can see the memory values by using the redis-cli tool.
$ redis-cli -p 6903
127.0.0.1:6903> info memory
# Memory
used_memory:3221293632
used_memory_human:3.00G
used_memory_rss:3244535808
used_memory_peak:3222595224
If you run a Redis instance with a password on it, change the redis-cli command to this:
$ redis-cli -p 6903 -a your_secret_pass
The info memory command remains the same.
The example above shows a Redis instance configured to run with a maximum of 3GB of memory and consuming all of it.
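To double-check those numbers yourself, the byte value reported by info memory converts to the human-readable figure with some quick shell arithmetic (using the used_memory value from the example above):

```shell
# used_memory from "info memory" is reported in bytes.
used_memory=3221293632

# Integer GiB (floor division):
echo $((used_memory / 1024 / 1024 / 1024))

# Two-decimal GiB, matching the "3.00G" shown by used_memory_human:
awk -v bytes="$used_memory" 'BEGIN { printf "%.2f\n", bytes / (1024 ^ 3) }'
```

3221293632 bytes is just over 3 GiB, which is why used_memory_human reports 3.00G.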
Fixing the OOM command problem
There are 3 potential fixes.
1. Increase Redis memory
Probably the easiest to do, but it has its limits. Find the Redis config (usually somewhere in /etc/redis/*) and increase the memory limit.
$ vim /etc/redis/6903.conf
maxmemory 3gb
Somewhere in that config file, you'll find the maxmemory parameter. Modify it to your needs and restart the Redis instance afterwards.
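If you'd rather not restart, Redis also lets you change the limit on a running instance with the CONFIG SET command (the port and the 4gb value here are just illustrative; adjust them to your setup):

```
$ redis-cli -p 6903 config set maxmemory 4gb

# Optionally persist the runtime change back to the config file
# (CONFIG REWRITE requires Redis 2.8+ and a config file to rewrite):
$ redis-cli -p 6903 config rewrite
```

Without the rewrite step, a runtime change is lost on the next restart, so keep the config file in sync either way.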
2. Change the cache invalidation settings
Redis is throwing the error because it can't store new items in memory. By default, the eviction setting ("maxmemory-policy") is set pretty conservatively, to volatile-lru. This means it'll only remove a key that has an expire set, using an LRU algorithm.
This can cause items without an expire to be kept in memory even when new items need to be stored. In other words, if your Redis instance is full, it won't just throw away the oldest items (like Memcached would).
You can change this to one of several alternatives:
# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached? You can select among five behaviors:
#
# volatile-lru -> remove the key with an expire set using an LRU algorithm
# allkeys-lru -> remove any key accordingly to the LRU algorithm
# volatile-random -> remove a random key with an expire set
# allkeys-random -> remove a random key, any key
# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
# noeviction -> don't expire at all, just return an error on write operations
#
# Note: with all the kind of policies, Redis will return an error on write
# operations, when there are not suitable keys for eviction.
#
# At the date of writing this commands are: set setnx setex append
# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
# getset mset msetnx exec sort
In the very same Redis config file (somewhere in /etc/redis/*) where you found the maxmemory directive, there's also an option called maxmemory-policy. The default is:
$ grep maxmemory-policy /etc/redis/*
maxmemory-policy volatile-lru
If you don't really care about the data in memory, you can change it to something more aggressive, like allkeys-lru.
$ vim /etc/redis/6903.conf
maxmemory-policy allkeys-lru
Afterwards, restart your Redis again.
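As with maxmemory, the eviction policy can also be changed on a live instance without a restart (same illustrative port as above):

```
$ redis-cli -p 6903 config set maxmemory-policy allkeys-lru
$ redis-cli -p 6903 config get maxmemory-policy
1) "maxmemory-policy"
2) "allkeys-lru"
```

This is handy for testing a policy change before committing it to the config file.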
Keep in mind though that this can mean Redis removes items from its memory that haven't been persisted to disk just yet. This is configured with the save parameter, so make sure you look at these values too to determine a correct "max memory" policy. Here are the defaults:
# In the example below the behaviour will be to save:
# after 900 sec (15 min) if at least 1 key changed
# after 300 sec (5 min) if at least 10 keys changed
# after 60 sec if at least 10000 keys changed
#
# Note: you can disable saving at all commenting all the save lines.
save 900 1
save 300 10
save 60 10000
With the above in mind, setting a different maxmemory-policy could mean data loss in your Redis instance!
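If you're using Redis purely as a cache (everything in it can be rebuilt from your primary datastore), one way to make an aggressive eviction policy safe is to disable snapshotting entirely. Per the comments in redis.conf itself, you can comment out the save lines, or pass an empty string argument:

```
# In /etc/redis/6903.conf: disable RDB snapshotting explicitly.
save ""
```

With no persistence, there's no persisted-vs-in-memory mismatch to worry about; evicted keys were never going to disk anyway.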
3. Store less data in Redis
I know, stupid 'solution', right? But ask yourself this: is everything you're storing in Redis really needed? Or are you using Redis as a caching solution and just storing too much data in it?
If your SQL queries return 10 columns but realistically you only need 3 of those on a regular basis, just store those 3 values -- not all 10.
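As a sketch of what that looks like in practice (the key and field names here are made up for illustration), store a Redis hash with just the fields you read, instead of caching the whole serialized row:

```
# Instead of caching the full 10-column row as one serialized blob,
# store only the 3 fields you actually read, as a hash
# (multiple field/value pairs in one HSET requires Redis 4.0+):
$ redis-cli -p 6903 hset user:1542 name "..." email "..." plan "..."

# Fetch a single field when you need it:
$ redis-cli -p 6903 hget user:1542 email
```

Smaller values mean more keys fit under the same maxmemory, which may make the OOM error disappear without touching the limit at all.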