Cache::FastMmap - Uses an mmap'ed file to act as a shared memory interprocess cache
use Cache::FastMmap;
# Uses vaguely sane defaults
$Cache = Cache::FastMmap->new();

# $Value must be a reference...
$Cache->set($Key, $Value);
$Value = $Cache->get($Key);
$Cache = Cache::FastMmap->new(raw_values => 1);
# $Value can't be a reference...
$Cache->set($Key, $Value);
$Value = $Cache->get($Key);
A shared memory cache through an mmap'ed file. Its core is written in C for performance. It uses fcntl locking to ensure multiple processes can safely access the cache at the same time. It uses a basic LRU algorithm to keep the most used entries in the cache.
In multi-process environments (eg mod_perl, forking daemons, etc), it's common to want to cache information, but have that cache shared between processes. Many solutions already exist, and may suit your situation better:
In the case I was working on, I needed:
Which is why I developed this module. It tries to be quite efficient through a number of means:
get() calls are O(1) and fast.

On each set(), if there are slots and page space available, only the slot has to be updated and the data written at the end of the used data space. If either runs out, a re-organisation of the page is performed to create new slots/space, which is done in an efficient way.
The class also supports read-through, and write-back or write-through callbacks to access the real data if it's not in the cache, meaning that code like this:
my $Value = $Cache->get($Key);
if (!defined $Value) {
  $Value = $RealDataSource->get($Key);
  $Cache->set($Key, $Value);
}
Isn't required; instead, you specify in the constructor:
Cache::FastMmap->new(
  ...
  context  => $RealDataSourceHandle,
  read_cb  => sub { $_[0]->get($_[1]) },
  write_cb => sub { $_[0]->set($_[1], $_[2]) },
);
And then:
my $Value = $Cache->get($Key);
$Cache->set($Key, $NewValue);
Will just work and will be read/written to the underlying data source as needed automatically.
If you're storing relatively large and complex structures in the cache, then you're limited by the speed of the Storable module. If you're storing simple structures or raw data, then Cache::FastMmap offers noticeable performance improvements.
See http://cpan.robm.fastmail.fm/cache_perf.html for some comparisons to other modules.
Cache::FastMmap uses mmap to map a file as the shared cache space, and fcntl to do page locking. This means it should work on most UNIX like operating systems, but will not work on Windows or Win32 like environments.
Because Cache::FastMmap mmap's a shared file into your processes memory space, this can make each process look quite large, even though it's just mmap'd memory that's shared between all processes that use the cache, and may even be swapped out if the cache is getting low usage.
However, the OS will think your process is quite large, which might mean you hit some BSD::Resource or 'ulimits' you set previously that you thought were sane, but aren't anymore, so be aware.
Because Cache::FastMmap uses an mmap'ed file, when you put values into the cache, you are actually "dirtying" pages in memory that belong to the cache file. Your OS will want to write those dirty pages back to the file on the actual physical disk, but the rate it does that at is very OS dependent.
In Linux, you have some control over how the OS writes those pages back using a number of parameters in /proc/sys/vm:

dirty_background_ratio
dirty_expire_centisecs
dirty_ratio
dirty_writeback_centisecs
How you tune these depends heavily on your setup.
As an interesting point, at least on our setup, we have observed a significant change in behaviour somewhere between Linux 2.6.16 and 2.6.20. We found that certain machines that were fine under 2.6.16 were suddenly experiencing a lot more IO under 2.6.20 that we were able to attribute to the Cache::FastMmap files, even though we hadn't changed any kernel parameters.
In most cases, people are not actually concerned about the persistence of data in the cache, and so are happy to disable writing of any cache data back to disk at all. Basically what they want is an in-memory-only shared cache. The best way to do that is to use a "tmpfs" filesystem and put all cache files on there.
For instance, all our machines have a /tmpfs mount point that we create in /etc/fstab as:
none /tmpfs tmpfs defaults,noatime,size=1000M 0 0
And we put all our cache files on there. The tmpfs filesystem is smart enough to only use memory as required by files actually on the tmpfs, so making it 1G in size doesn't actually use 1G of memory, it only uses as much as the cache files we put on it. In all cases, we ensure that we never run out of real memory, so the cache files effectively act just as named access points to shared memory.
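For example, creating a cache backed by a file on that tmpfs mount might look like the sketch below (the path and sizes are only illustrations):

use Cache::FastMmap;

# Keep the cache file on the tmpfs mount so dirty pages never cause
# real disk IO (the path is an example - use whatever mount you created)
my $Cache = Cache::FastMmap->new(
  share_file => '/tmpfs/myapp_cache',
  cache_size => '16m',
);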
To reduce lock contention, Cache::FastMmap breaks up the file into pages. When you get/set a value, it hashes the key to get a page, then locks that page, and uses a hash table within the page to get/store the actual key/value pair.
One consequence of this is that you cannot store values larger than a page in the cache at all. Attempting to store values larger than a page size will fail (the set() function will return false).
Also keep in mind that each page has its own hash table, and that we store the key and value data of each item. So if you are expecting to store large values and/or keys in the cache, you should use page sizes that are definitely larger than your largest key + value size + a few kbytes for the overhead.
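As a rough sketch (the sizes here are only examples), if your largest items are around 100k you might size pages generously and check the result of set():

my $Cache = Cache::FastMmap->new(
  page_size => '256k',   # comfortably bigger than largest key + value + overhead
  num_pages => 89,
);

my $Stored = $Cache->set($Key, $BigValue);
warn "value too large for a cache page, not stored\n" if !$Stored;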
Because the cache uses shared memory through an mmap'd file, you have to make sure each process connects up to the file. There are probably two main ways to do this:
The first way is usually the easiest. If you're using the cache in a Net::Server based module, you'll want to open the cache in the pre_loop_hook, because that's executed before the fork, but after the process ownership has changed and any chroot has been done.
In mod_perl, just open the cache at the global level in the appropriate module, which is executed as the server is starting and before it starts forking children, but you'll probably want to chmod or chown the file to the permissions of the apache process.
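For example, a module loaded from startup.pl might do something along these lines; the path and the 'apache' user name are assumptions for illustration:

package MyApp::Cache;
use Cache::FastMmap;

# Created while the server is starting, before Apache forks its children,
# so every child ends up sharing the same mmap'ed file
our $Cache = Cache::FastMmap->new(
  share_file => '/tmpfs/myapp_cache',
  init_file  => 1,
);

# Let the unprivileged apache user open the file after the fork
my ($uid, $gid) = (getpwnam('apache'))[2, 3];
chown $uid, $gid, '/tmpfs/myapp_cache' if defined $uid;

1;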
Basic global parameters are:
Note: This is quite important to do in the parent to ensure a consistent file structure. The shared file is not perfectly transaction safe, and so if a child is killed at the wrong instant, it might leave the cache file in an inconsistent state.
You may specify the cache size as:
Or specify explicit page size/page count values. If none of these are specified, the values page_size = 64k and num_pages = 89 are used.
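A short sketch of the two approaches (the values are only examples, assuming the cache_size option name):

# Give a total size and let the module derive the page geometry...
my $Cache = Cache::FastMmap->new(cache_size => '5m');

# ...or spell out page size and page count explicitly
my $Cache2 = Cache::FastMmap->new(page_size => '64k', num_pages => 89);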
The cache allows the use of callbacks for reading/writing data to an underlying data store.
$read_cb->($context, $Key)

Should return the value to use. This value will be saved in the cache for future retrievals. Return undef if there is no value for the given key
$write_cb->($context, $Key, $Value, $ExpiryTime)

In 'write_through' mode, it's always called as soon as a set(...) is called on the Cache::FastMmap class. In 'write_back' mode, it's called when a value is expunged from the cache if it's been changed by a set(...) rather than read from the underlying store with the read_cb above.
Note: Expired items do result in the write_cb being called if 'write_back' caching is enabled and the item has been changed. You can check the $ExpiryTime against time() if you only want to write back values which aren't expired.
Also remember that write_cb may be called in a different process to the one that placed the data in the cache in the first place
$delete_cb->($context, $Key)
Called as soon as remove(...) is called on the Cache::FastMmap class
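Putting the callbacks together, a sketch of a constructor that uses them in 'write_back' mode; the $Db handle and its get/set/delete methods are purely illustrative, and the write_action option name for selecting the mode is as I recall it, so check the constructor options:

my $Cache = Cache::FastMmap->new(
  context      => $Db,                              # hypothetical handle to the real store
  read_cb      => sub { $_[0]->get($_[1]) },        # fill cache misses from the store
  write_cb     => sub { $_[0]->set($_[1], $_[2]) }, # write dirty items back when expunged
  delete_cb    => sub { $_[0]->delete($_[1]) },     # mirror remove() to the store
  write_action => 'write_back',                     # assumed option name for the mode
);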
The trick is that we only want to do this in the parent process, we don't want any child processes to empty the cache when they exit. So if you set this, it takes the PID via $$, and only calls empty in the DESTROY method if $$ matches the pid we captured at the start. (default: 0)
As with empty_on_exit, this will only unlink the file if the DESTROY occurs in the same PID that the cache was created in so that any forked children don't unlink the file.
This value defaults to 1 if the share_file specified does not already exist. If the share_file specified does already exist, it defaults to 0.
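For instance, a parent daemon that wants the cache file cleaned up when it exits (but not when its children exit) might use something like this sketch (the path is an example):

my $Cache = Cache::FastMmap->new(
  share_file     => '/tmpfs/mydaemon_cache',  # example path
  empty_on_exit  => 1,   # parent empties the cache when it exits
  unlink_on_exit => 1,   # ...and removes the file; children leave it alone
);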
%Options is optional, and is used by get_and_set() to control the locking behaviour. For now, you should probably ignore it unless you read the code to understand how it works
%Options is optional, and is used by get_and_set() to control the locking behaviour. For now, you should probably ignore it unless you read the code to understand how it works
This method returns true if the value was stored in the cache, false otherwise. See the PAGE SIZE AND KEY/VALUE LIMITS section for more details.
The page is locked while retrieving the $Key and is unlocked only after the value is set, thus guaranteeing the value does not change between the get and set operations.
$Sub is a reference to a subroutine that is called to calculate the new value to store. $Sub gets $Key and the current value as parameters, and should return the new value to set in the cache for the given $Key.
For example, to atomically increment a value in the cache, you can just use:
$Cache->get_and_set($Key, sub { return ++$_[1]; });
The return value from this function is the new value stored back into the cache.
Notes:
%Options is optional, and is used by get_and_remove() to control the locking behaviour. For now, you should probably ignore it unless you read the code to understand how it works
The page is locked while retrieving the $Key and is unlocked only after the value is removed, thus guaranteeing the value stored by someone else isn't removed by us.
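As a small sketch, this makes it safe to "pop" an item so that only one process ends up with it (process_job() is hypothetical, and I'm assuming get_and_remove() returns the removed value):

# Atomically take the value out of the cache; nobody else can grab the
# same item between our get and the remove
my $Job = $Cache->get_and_remove($JobKey);
process_job($Job) if defined $Job;   # process_job() is hypothetical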
Note: If you're using callbacks, this has no effect on items in the underlying data store. No delete callbacks are made
Note: If you're using callbacks, this has no effect on items in the underlying data store. No delete callbacks are made, and no write callbacks are made for the expired data
Note: If 'write_back' mode is enabled, any changed items are written back to the underlying store. Expired items are written back to the underlying store as well.
If $Mode == 0, an array of keys is returned
If $Mode == 1, then an array of hashrefs, with 'key', 'last_access', 'expire_time' and 'flags' keys is returned
If $Mode == 2, then hashrefs also contain 'value' key
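For example (assuming get_keys() returns a plain list in each mode):

# Mode 0: just the keys
my @Keys = $Cache->get_keys(0);

# Mode 1: hashrefs with key, last_access, expire_time and flags
for my $Item ($Cache->get_keys(1)) {
  printf "%s expires at %s\n", $Item->{key}, scalar localtime($Item->{expire_time});
}

# Mode 2: hashrefs also carry a 'value' key
my ($First) = $Cache->get_keys(2);
print $First->{value} if $First;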
The main advantage of this is just a speed one, if you happen to need to search for a lot of items on each call.
For instance, say you have users and a bunch of pieces of separate information for each user. On a particular run, you need to retrieve a sub-set of that information for a user. You could do lots of get() calls, or you could use the 'username' as the page key, and just use one multi_get() and multi_set() call instead.
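A sketch of that pattern, assuming the multi_set($PageKey, \%Items) / multi_get($PageKey, \@Keys) calling convention with multi_get() returning a hash ref of the found items:

# Store several pieces of per-user data on the page chosen by $Username
$Cache->multi_set($Username, {
  prefs => $Prefs,
  quota => $Quota,
});

# Later, fetch just the pieces we need in a single locked operation
my $Vals = $Cache->multi_get($Username, [ 'prefs', 'quota' ]);
print $Vals->{quota};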
A couple of things to note:
multi_get()/multi_set() and get()/set() are incompatible. Don't mix calls to the two, because you won't find the data you're expecting
The writeback and callback modes of operation do not work with multi_get()/multi_set(). Don't attempt to use them together.
Expunged items (that have not expired) are written back to the underlying store if write_back is enabled
Expunged items (that have not expired) are written back to the underlying store if write_back is enabled
Otherwise the defaults seem sensible, to clean up unneeded share files rather than leaving them around to accumulate.
MLDBM::Sync, IPC::MM, Cache::FileCache, Cache::SharedMemoryCache, DBI, Cache::Mmap, BerkeleyDB
Latest news/details can also be found at:
http://cpan.robm.fastmail.fm/cachefastmmap/
Rob Mueller mailto:cpan@robm.fastmail.fm
Copyright (C) 2003-2007 by FastMail IP Partners
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.