========================
Django's cache framework
========================

So, you got slashdotted. Now what?

Django's cache framework gives you three methods of caching dynamic pages in
memory or in a database. You can cache the output of entire pages, you can
cache only the pieces that are difficult to produce, or you can cache your
entire site.

Setting up the cache
====================

The cache framework is split into a set of "backends" that provide different
methods of caching data. There's a simple single-process memory cache (mostly
useful as a fallback) and a memcached_ backend (the fastest option, by far, if
you've got the RAM).

Before using the cache, you'll need to tell Django which cache backend you'd
like to use. Do this by setting ``CACHE_BACKEND`` in your settings file.

The ``CACHE_BACKEND`` setting is a "fake" URI (really an unregistered scheme).
Examples:

==============================  ===========================================
CACHE_BACKEND                   Explanation
==============================  ===========================================
memcached://127.0.0.1:11211/    A memcached backend; the server is running
                                on localhost port 11211.

db://tablename/                 A database backend in a table named
                                "tablename". This table should be created
                                with "django-admin createcachetable".

file:///var/tmp/django_cache/   A file-based cache stored in the directory
                                /var/tmp/django_cache/.

simple:///                      A simple single-process memory cache; you
                                probably don't want to use this except for
                                testing. Note that this cache backend is
                                NOT thread-safe!

locmem:///                      A more sophisticated local memory cache;
                                this is multi-process- and thread-safe.
==============================  ===========================================

All caches may take arguments -- they're given in query-string style. Valid
arguments are:

timeout
    Default timeout, in seconds, to use for the cache. Defaults to 5
    minutes (300 seconds).

max_entries
    For the simple and database backends, the maximum number of entries
    allowed in the cache before it is cleaned. Defaults to 300.

cull_percentage
    The fraction of entries that is culled when max_entries is reached.
    The actual ratio is 1/cull_percentage, so set cull_percentage=3 to
    cull 1/3 of the entries when max_entries is reached.

    A value of 0 for cull_percentage means that the entire cache will be
    dumped when max_entries is reached. This makes culling *much* faster
    at the expense of more cache misses.

For example::

    CACHE_BACKEND = "memcached://127.0.0.1:11211/?timeout=60"

Invalid arguments are silently ignored, as are invalid values of known
arguments.

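The documented arguments combine in the same query-string style. As a sketch
(the table name ``my_cache_table`` is just an example), a database-backed
cache might be configured like this::

    # Hypothetical example: cache entries live in the database table
    # "my_cache_table", expire after 10 minutes, hold at most 600 entries,
    # and 1/3 of the entries are culled once that limit is reached.
    CACHE_BACKEND = "db://my_cache_table/?timeout=600&max_entries=600&cull_percentage=3"
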
The per-site cache
==================

Once the cache is set up, the simplest way to use it is to cache your entire
site. Just add ``django.middleware.cache.CacheMiddleware`` to your
``MIDDLEWARE_CLASSES`` setting, as in this example::

    MIDDLEWARE_CLASSES = (
        "django.middleware.cache.CacheMiddleware",
        "django.middleware.common.CommonMiddleware",
    )

Make sure it's the first entry in ``MIDDLEWARE_CLASSES``. (The order of
``MIDDLEWARE_CLASSES`` matters.)

Then, add the following three required settings (an example settings snippet
appears after this list):

* ``CACHE_MIDDLEWARE_SECONDS`` -- The number of seconds each page should be
  cached.

* ``CACHE_MIDDLEWARE_KEY_PREFIX`` -- If the cache is shared across multiple
  sites using the same Django installation, set this to the name of the site,
  or some other string that is unique to this Django instance, to prevent key
  collisions. Use an empty string if you don't care.

* ``CACHE_MIDDLEWARE_GZIP`` -- Either ``True`` or ``False``. If this is
  enabled, Django will gzip all content for users whose browsers support gzip
  encoding. Using gzip adds a level of overhead to page requests, but the
  overhead is generally cancelled out by the fact that gzipped pages are
  stored in the cache. That means subsequent requests won't have the overhead
  of zipping, and the cache will hold more pages because each one is smaller.

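For example, a minimal sketch of those settings (the values and the key prefix
are only placeholders, not recommendations)::

    CACHE_MIDDLEWARE_SECONDS = 600          # cache each page for 10 minutes
    CACHE_MIDDLEWARE_KEY_PREFIX = "mysite"  # any string unique to this Django instance
    CACHE_MIDDLEWARE_GZIP = True            # gzip cached pages for capable browsers
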
Pages with GET or POST parameters won't be cached.

The cache middleware also makes a few more optimizations:

* Sets and deals with ``ETag`` headers.
* Sets the ``Content-Length`` header.
* Sets the ``Last-Modified`` header to the current date/time when a fresh
  (uncached) version of the page is requested.

Again, ``CacheMiddleware`` should be the first entry in ``MIDDLEWARE_CLASSES``,
so that requests can be answered from the cache as early as possible.

The per-page cache
==================

A more granular way to use the caching framework is by caching the output of
individual views. ``django.views.decorators.cache`` defines a ``cache_page``
decorator that will automatically cache the view's response for you. It's easy
to use::

    from django.views.decorators.cache import cache_page

    def slashdot_this(request):
        ...

    slashdot_this = cache_page(slashdot_this, 60 * 15)

Or, using Python 2.4's decorator syntax::

    @cache_page(60 * 15)
    def slashdot_this(request):
        ...

This will cache the result of that view for 15 minutes. (The cache timeout is
in seconds.)

The low-level cache API
=======================

There are times, however, when caching an entire rendered page doesn't gain
you very much. The Django developers have found, for example, that it's often
enough to cache just the list of object IDs produced by an intensive database
query. In cases like these, you can use the cache API to store objects in the
cache with any level of granularity you like.

The cache API is simple::

    # the cache module exports a cache object that's automatically
    # created from the CACHE_BACKEND setting
    >>> from django.core.cache import cache

    # The basic interface is set(key, value, timeout_seconds) and get(key)
    >>> cache.set('my_key', 'hello, world!', 30)
    >>> cache.get('my_key')
    'hello, world!'

    # (Wait 30 seconds...)
    >>> cache.get('my_key')
    None

    # get() can take a default argument
    >>> cache.get('my_key', 'has_expired')
    'has_expired'

    # There's also a get_many() interface that only hits the cache once.
    # Also, note that the timeout argument is optional and defaults to what
    # you've given in the settings file.
    >>> cache.set('a', 1)
    >>> cache.set('b', 2)
    >>> cache.set('c', 3)

    # get_many() returns a dictionary with all the keys you asked for that
    # actually exist in the cache (and haven't expired).
    >>> cache.get_many(['a', 'b', 'c'])
    {'a': 1, 'b': 2, 'c': 3}

    # There's also a way to delete keys explicitly.
    >>> cache.delete('a')

That's it. The cache has very few restrictions: You can cache any object that
can be pickled safely, although keys must be strings.

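Tying this back to the object-ID example above, a helper might look something
like this sketch (``get_latest_poll_ids()`` is a hypothetical stand-in for
whatever expensive database query your application runs)::

    from django.core.cache import cache

    def cached_poll_ids():
        # Try the cache first; fall back to the expensive query on a miss.
        ids = cache.get('latest_poll_ids')
        if ids is None:
            ids = get_latest_poll_ids()  # hypothetical expensive query
            cache.set('latest_poll_ids', ids, 60 * 5)  # cache for five minutes
        return ids
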
.. _memcached: http://www.danga.com/memcached/