
Added missing pycon directives in various docs.

Mariusz Felisiak 2023-10-25 12:27:27 +02:00 committed by GitHub
parent ee104251c4
commit 718b32c691
11 changed files with 295 additions and 130 deletions

View File

@@ -1246,25 +1246,31 @@ blue.
sources (using the sample data from the GeoDjango tests; see also the
:ref:`gdal_sample_data` section).
.. code-block:: pycon
>>> from django.contrib.gis.gdal import GDALRaster
>>> rst = GDALRaster('/path/to/your/raster.tif', write=False)
>>> rst = GDALRaster("/path/to/your/raster.tif", write=False)
>>> rst.name
'/path/to/your/raster.tif'
>>> rst.width, rst.height # This file has 163 x 174 pixels
(163, 174)
>>> rst = GDALRaster({ # Creates an in-memory raster
... 'srid': 4326,
... 'width': 4,
... 'height': 4,
... 'datatype': 1,
... 'bands': [{
... 'data': (2, 3),
... 'offset': (1, 1),
... 'size': (2, 2),
... 'shape': (2, 1),
... 'nodata_value': 5,
... }]
... })
>>> rst = GDALRaster(
... { # Creates an in-memory raster
... "srid": 4326,
... "width": 4,
... "height": 4,
... "datatype": 1,
... "bands": [
... {
... "data": (2, 3),
... "offset": (1, 1),
... "size": (2, 2),
... "shape": (2, 1),
... "nodata_value": 5,
... }
... ],
... }
... )
>>> rst.srs.srid
4326
>>> rst.width, rst.height
@@ -1274,7 +1280,7 @@ blue.
[5, 2, 3, 5],
[5, 2, 3, 5],
[5, 5, 5, 5]], dtype=uint8)
>>> rst_file = open('/path/to/your/raster.tif', 'rb')
>>> rst_file = open("/path/to/your/raster.tif", "rb")
>>> rst_bytes = rst_file.read()
>>> rst = GDALRaster(rst_bytes)
>>> rst.is_vsi_based
@@ -1287,7 +1293,9 @@ blue.
The name of the source, which is equivalent to the input file path or the name
provided upon instantiation.
>>> GDALRaster({'width': 10, 'height': 10, 'name': 'myraster', 'srid': 4326}).name
.. code-block:: pycon
>>> GDALRaster({"width": 10, "height": 10, "name": "myraster", "srid": 4326}).name
'myraster'
.. attribute:: driver
@@ -1302,15 +1310,27 @@ blue.
An in-memory raster is created through the following example:
>>> GDALRaster({'width': 10, 'height': 10, 'srid': 4326}).driver.name
.. code-block:: pycon
>>> GDALRaster({"width": 10, "height": 10, "srid": 4326}).driver.name
'MEM'
A file based GeoTiff raster is created through the following example:
.. code-block:: pycon
>>> import tempfile
>>> rstfile = tempfile.NamedTemporaryFile(suffix='.tif')
>>> rst = GDALRaster({'driver': 'GTiff', 'name': rstfile.name, 'srid': 4326,
... 'width': 255, 'height': 255, 'nr_of_bands': 1})
>>> rstfile = tempfile.NamedTemporaryFile(suffix=".tif")
>>> rst = GDALRaster(
... {
... "driver": "GTiff",
... "name": rstfile.name,
... "srid": 4326,
... "width": 255,
... "height": 255,
... "nr_of_bands": 1,
... }
... )
>>> rst.name
'/tmp/tmp7x9H4J.tif' # The exact filename will be different on your computer
>>> rst.driver.name
@@ -1320,14 +1340,18 @@ blue.
The width of the source in pixels (X-axis).
>>> GDALRaster({'width': 10, 'height': 20, 'srid': 4326}).width
.. code-block:: pycon
>>> GDALRaster({"width": 10, "height": 20, "srid": 4326}).width
10
.. attribute:: height
The height of the source in pixels (Y-axis).
>>> GDALRaster({'width': 10, 'height': 20, 'srid': 4326}).height
.. code-block:: pycon
>>> GDALRaster({"width": 10, "height": 20, "srid": 4326}).height
20
.. attribute:: srs
@@ -1337,7 +1361,9 @@ blue.
setting it to another :class:`SpatialReference` or providing any input
that is accepted by the :class:`SpatialReference` constructor.
>>> rst = GDALRaster({'width': 10, 'height': 20, 'srid': 4326})
.. code-block:: pycon
>>> rst = GDALRaster({"width": 10, "height": 20, "srid": 4326})
>>> rst.srs.srid
4326
>>> rst.srs = 3086
@@ -1350,7 +1376,9 @@ blue.
property is a shortcut to getting or setting the SRID through the
:attr:`srs` attribute.
>>> rst = GDALRaster({'width': 10, 'height': 20, 'srid': 4326})
.. code-block:: pycon
>>> rst = GDALRaster({"width": 10, "height": 20, "srid": 4326})
>>> rst.srid
4326
>>> rst.srid = 3086
@@ -1374,7 +1402,9 @@ blue.
The default is ``[0.0, 1.0, 0.0, 0.0, 0.0, -1.0]``.
>>> rst = GDALRaster({'width': 10, 'height': 20, 'srid': 4326})
.. code-block:: pycon
>>> rst = GDALRaster({"width": 10, "height": 20, "srid": 4326})
>>> rst.geotransform
[0.0, 1.0, 0.0, 0.0, 0.0, -1.0]
@@ -1384,7 +1414,9 @@ blue.
reference system of the source, as a point object with ``x`` and ``y``
members.
>>> rst = GDALRaster({'width': 10, 'height': 20, 'srid': 4326})
.. code-block:: pycon
>>> rst = GDALRaster({"width": 10, "height": 20, "srid": 4326})
>>> rst.origin
[0.0, 0.0]
>>> rst.origin.x = 1
@@ -1397,7 +1429,9 @@ blue.
object with ``x`` and ``y`` members. See :attr:`geotransform` for more
information.
>>> rst = GDALRaster({'width': 10, 'height': 20, 'srid': 4326})
.. code-block:: pycon
>>> rst = GDALRaster({"width": 10, "height": 20, "srid": 4326})
>>> rst.scale
[1.0, -1.0]
>>> rst.scale.x = 2
@@ -1410,7 +1444,9 @@ blue.
with ``x`` and ``y`` members. In the case of north-up images, these
coefficients are both ``0``.
>>> rst = GDALRaster({'width': 10, 'height': 20, 'srid': 4326})
.. code-block:: pycon
>>> rst = GDALRaster({"width": 10, "height": 20, "srid": 4326})
>>> rst.skew
[0.0, 0.0]
>>> rst.skew.x = 3
@@ -1423,7 +1459,9 @@ blue.
``(xmin, ymin, xmax, ymax)`` in the spatial reference system of the
source.
>>> rst = GDALRaster({'width': 10, 'height': 20, 'srid': 4326})
.. code-block:: pycon
>>> rst = GDALRaster({"width": 10, "height": 20, "srid": 4326})
>>> rst.extent
(0.0, -20.0, 10.0, 0.0)
>>> rst.origin.x = 100
@@ -1434,8 +1472,16 @@ blue.
List of all bands of the source, as :class:`GDALBand` instances.
>>> rst = GDALRaster({"width": 1, "height": 2, 'srid': 4326,
... "bands": [{"data": [0, 1]}, {"data": [2, 3]}]})
.. code-block:: pycon
>>> rst = GDALRaster(
... {
... "width": 1,
... "height": 2,
... "srid": 4326,
... "bands": [{"data": [0, 1]}, {"data": [2, 3]}],
... }
... )
>>> len(rst.bands)
2
>>> rst.bands[1].data()
@@ -1478,12 +1524,18 @@ blue.
For example, the warp function can be used for aggregating a raster to
the double of its original pixel scale:
>>> rst = GDALRaster({
... "width": 6, "height": 6, "srid": 3086,
.. code-block:: pycon
>>> rst = GDALRaster(
... {
... "width": 6,
... "height": 6,
... "srid": 3086,
... "origin": [500000, 400000],
... "scale": [100, -100],
... "bands": [{"data": range(36), "nodata_value": 99}]
... })
... "bands": [{"data": range(36), "nodata_value": 99}],
... }
... )
>>> target = rst.warp({"scale": [200, -200], "width": 3, "height": 3})
>>> target.bands[0].data()
array([[ 7., 9., 11.],
@@ -1512,12 +1564,18 @@ blue.
argument. Consult the :attr:`~GDALRaster.warp` documentation for detail
on those arguments.
>>> rst = GDALRaster({
... "width": 6, "height": 6, "srid": 3086,
.. code-block:: pycon
>>> rst = GDALRaster(
... {
... "width": 6,
... "height": 6,
... "srid": 3086,
... "origin": [500000, 400000],
... "scale": [100, -100],
... "bands": [{"data": range(36), "nodata_value": 99}]
... })
... "bands": [{"data": range(36), "nodata_value": 99}],
... }
... )
>>> target_srs = SpatialReference(4326)
>>> target = rst.transform(target_srs)
>>> target.origin
@@ -1543,13 +1601,15 @@ blue.
To remove a metadata item, use ``None`` as the metadata value.
>>> rst = GDALRaster({'width': 10, 'height': 20, 'srid': 4326})
.. code-block:: pycon
>>> rst = GDALRaster({"width": 10, "height": 20, "srid": 4326})
>>> rst.metadata
{}
>>> rst.metadata = {'DEFAULT': {'OWNER': 'Django', 'VERSION': '1.0'}}
>>> rst.metadata = {"DEFAULT": {"OWNER": "Django", "VERSION": "1.0"}}
>>> rst.metadata
{'DEFAULT': {'OWNER': 'Django', 'VERSION': '1.0'}}
>>> rst.metadata = {'DEFAULT': {'OWNER': None, 'VERSION': '2.0'}}
>>> rst.metadata = {"DEFAULT": {"OWNER": None, "VERSION": "2.0"}}
>>> rst.metadata
{'DEFAULT': {'VERSION': '2.0'}}
@@ -1687,7 +1747,11 @@ blue.
For example:
>>> rst = GDALRaster({'width': 4, 'height': 4, 'srid': 4326, 'datatype': 1, 'nr_of_bands': 1})
.. code-block:: pycon
>>> rst = GDALRaster(
... {"width": 4, "height": 4, "srid": 4326, "datatype": 1, "nr_of_bands": 1}
... )
>>> bnd = rst.bands[0]
>>> bnd.data(range(16))
>>> bnd.data()
@@ -1704,7 +1768,7 @@ blue.
[ 4, -1, -2, 7],
[ 8, -3, -4, 11],
[12, 13, 14, 15]], dtype=int8)
>>> bnd.data(data='\x9d\xa8\xb3\xbe', offset=(1, 1), size=(2, 2))
>>> bnd.data(data="\x9d\xa8\xb3\xbe", offset=(1, 1), size=(2, 2))
>>> bnd.data()
array([[ 0, 1, 2, 3],
[ 4, -99, -88, 7],

View File

@@ -94,13 +94,17 @@ Examples:
.. _Full Text Search docs: https://www.postgresql.org/docs/current/textsearch-controls.html#TEXTSEARCH-PARSING-QUERIES
.. code-block:: pycon
>>> from django.contrib.postgres.search import SearchQuery
>>> SearchQuery('red tomato') # two keywords
>>> SearchQuery('tomato red') # same results as above
>>> SearchQuery('red tomato', search_type='phrase') # a phrase
>>> SearchQuery('tomato red', search_type='phrase') # a different phrase
>>> SearchQuery("'tomato' & ('red' | 'green')", search_type='raw') # boolean operators
>>> SearchQuery("'tomato' ('red' OR 'green')", search_type='websearch') # websearch operators
>>> SearchQuery("red tomato") # two keywords
>>> SearchQuery("tomato red") # same results as above
>>> SearchQuery("red tomato", search_type="phrase") # a phrase
>>> SearchQuery("tomato red", search_type="phrase") # a different phrase
>>> SearchQuery("'tomato' & ('red' | 'green')", search_type="raw") # boolean operators
>>> SearchQuery(
... "'tomato' ('red' OR 'green')", search_type="websearch"
... ) # websearch operators
``SearchQuery`` terms can be combined logically to provide more flexibility:
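For illustration, a minimal sketch of such combinations using the ``&``, ``|``, and ``~`` operators (the search terms are placeholders):
.. code-block:: pycon
>>> SearchQuery("meat") & SearchQuery("cheese")  # both terms must match
>>> SearchQuery("meat") | SearchQuery("cheese")  # either term may match
>>> ~SearchQuery("meat")  # exclude a term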

View File

@@ -148,6 +148,8 @@ if validation has side effects, those side effects will only be triggered once.
Returns a ``dict`` that maps fields to their original ``ValidationError``
instances.
.. code-block:: pycon
>>> f.errors.as_data()
{'sender': [ValidationError(['Enter a valid email address.'])],
'subject': [ValidationError(['This field is required.'])]}
@@ -170,6 +172,8 @@ messages in ``Form.errors``.
Returns the errors serialized as JSON.
.. code-block:: pycon
>>> f.errors.as_json()
{"sender": [{"message": "Enter a valid email address.", "code": "invalid"}],
"subject": [{"message": "This field is required.", "code": "required"}]}
@@ -325,10 +329,14 @@ Checking which form data has changed
Use the ``has_changed()`` method on your ``Form`` when you need to check if the
form data has been changed from the initial data.
>>> data = {'subject': 'hello',
... 'message': 'Hi there',
... 'sender': 'foo@example.com',
... 'cc_myself': True}
.. code-block:: pycon
>>> data = {
... "subject": "hello",
... "message": "Hi there",
... "sender": "foo@example.com",
... "cc_myself": True,
... }
>>> f = ContactForm(data, initial=data)
>>> f.has_changed()
False
@@ -336,6 +344,8 @@ form data has been changed from the initial data.
When the form is submitted, we reconstruct it and provide the original data
so that the comparison can be done:
.. code-block:: pycon
>>> f = ContactForm(request.POST, initial=data)
>>> f.has_changed()
@@ -350,9 +360,12 @@ The ``changed_data`` attribute returns a list of the names of the fields whose
values in the form's bound data (usually ``request.POST``) differ from what was
provided in :attr:`~Form.initial`. It returns an empty list if no data differs.
.. code-block:: pycon
>>> f = ContactForm(request.POST, initial=data)
>>> if f.has_changed():
... print("The following fields changed: %s" % ", ".join(f.changed_data))
...
>>> f.changed_data
['subject', 'message']

View File

@@ -1183,11 +1183,13 @@ Slightly complex built-in ``Field`` classes
The list of fields that should be used to validate the field's value (in
the order in which they are provided).
.. code-block:: pycon
>>> from django.forms import ComboField
>>> f = ComboField(fields=[CharField(max_length=20), EmailField()])
>>> f.clean('test@example.com')
>>> f.clean("test@example.com")
'test@example.com'
>>> f.clean('longemailaddress@example.com')
>>> f.clean("longemailaddress@example.com")
Traceback (most recent call last):
...
ValidationError: ['Ensure this value has at most 20 characters (it has 28).']

View File

@@ -89,11 +89,13 @@ Usage examples:
A Python value passed to ``Coalesce`` on MySQL may be converted to an
incorrect type unless explicitly cast to the correct database type:
.. code-block:: pycon
>>> from django.db.models import DateTimeField
>>> from django.db.models.functions import Cast, Coalesce
>>> from django.utils import timezone
>>> now = timezone.now()
>>> Coalesce('updated', Cast(now, DateTimeField()))
>>> Coalesce("updated", Cast(now, DateTimeField()))
``Collate``
-----------

View File

@@ -997,6 +997,8 @@ databases don't allow ``LIMIT`` or ``OFFSET`` in the combined queries.
Uses SQL's ``INTERSECT`` operator to return the shared elements of two or more
``QuerySet``\s. For example:
.. code-block:: pycon
>>> qs1.intersection(qs2, qs3)
See :meth:`union` for some restrictions.
@@ -1197,7 +1199,9 @@ item in the Pizza ``QuerySet``.
We can reduce to just two queries using ``prefetch_related``:
>>> Pizza.objects.prefetch_related('toppings')
.. code-block:: pycon
>>> Pizza.objects.prefetch_related("toppings")
This implies a ``self.toppings.all()`` for each ``Pizza``; now each time
``self.toppings.all()`` is called, instead of having to go to the database for
@@ -1241,7 +1245,9 @@ database.
results, and retrieve data using a fresh database query. So, if you write
the following:
>>> pizzas = Pizza.objects.prefetch_related('toppings')
.. code-block:: pycon
>>> pizzas = Pizza.objects.prefetch_related("toppings")
>>> [list(pizza.toppings.filter(spicy=True)) for pizza in pizzas]
...then the fact that ``pizza.toppings.all()`` has been prefetched will not
@@ -1301,6 +1307,8 @@ Chaining ``prefetch_related`` calls will accumulate the lookups that are
prefetched. To clear any ``prefetch_related`` behavior, pass ``None`` as a
parameter:
.. code-block:: pycon
>>> non_prefetched = qs.prefetch_related(None)
One difference to note when using ``prefetch_related`` is that objects created
@@ -1332,20 +1340,28 @@ the prefetch operation.
In its simplest form ``Prefetch`` is equivalent to the traditional string based
lookups:
.. code-block:: pycon
>>> from django.db.models import Prefetch
>>> Restaurant.objects.prefetch_related(Prefetch('pizzas__toppings'))
>>> Restaurant.objects.prefetch_related(Prefetch("pizzas__toppings"))
You can provide a custom queryset with the optional ``queryset`` argument.
This can be used to change the default ordering of the queryset:
.. code-block:: pycon
>>> Restaurant.objects.prefetch_related(
... Prefetch('pizzas__toppings', queryset=Toppings.objects.order_by('name')))
... Prefetch("pizzas__toppings", queryset=Toppings.objects.order_by("name"))
... )
Or to call :meth:`~django.db.models.query.QuerySet.select_related()` when
applicable to reduce the number of queries even further:
.. code-block:: pycon
>>> Pizza.objects.prefetch_related(
... Prefetch('restaurants', queryset=Restaurant.objects.select_related('best_pizza')))
... Prefetch("restaurants", queryset=Restaurant.objects.select_related("best_pizza"))
... )
You can also assign the prefetched result to a custom attribute with the optional
``to_attr`` argument. The result will be stored directly in a list.
@@ -1353,32 +1369,42 @@ You can also assign the prefetched result to a custom attribute with the optiona
This allows prefetching the same relation multiple times with a different
``QuerySet``; for instance:
.. code-block:: pycon
>>> vegetarian_pizzas = Pizza.objects.filter(vegetarian=True)
>>> Restaurant.objects.prefetch_related(
... Prefetch('pizzas', to_attr='menu'),
... Prefetch('pizzas', queryset=vegetarian_pizzas, to_attr='vegetarian_menu'))
... Prefetch("pizzas", to_attr="menu"),
... Prefetch("pizzas", queryset=vegetarian_pizzas, to_attr="vegetarian_menu"),
... )
Lookups created with custom ``to_attr`` can still be traversed as usual by other
lookups:
.. code-block:: pycon
>>> vegetarian_pizzas = Pizza.objects.filter(vegetarian=True)
>>> Restaurant.objects.prefetch_related(
... Prefetch('pizzas', queryset=vegetarian_pizzas, to_attr='vegetarian_menu'),
... 'vegetarian_menu__toppings')
... Prefetch("pizzas", queryset=vegetarian_pizzas, to_attr="vegetarian_menu"),
... "vegetarian_menu__toppings",
... )
Using ``to_attr`` is recommended when filtering down the prefetch result as it is
less ambiguous than storing a filtered result in the related manager's cache:
.. code-block:: pycon
>>> queryset = Pizza.objects.filter(vegetarian=True)
>>>
>>> # Recommended:
>>> restaurants = Restaurant.objects.prefetch_related(
... Prefetch('pizzas', queryset=queryset, to_attr='vegetarian_pizzas'))
... Prefetch("pizzas", queryset=queryset, to_attr="vegetarian_pizzas")
... )
>>> vegetarian_pizzas = restaurants[0].vegetarian_pizzas
>>>
>>> # Not recommended:
>>> restaurants = Restaurant.objects.prefetch_related(
... Prefetch('pizzas', queryset=queryset))
... Prefetch("pizzas", queryset=queryset),
... )
>>> vegetarian_pizzas = restaurants[0].pizzas.all()
Custom prefetching also works with single related relations like
@@ -1394,10 +1420,13 @@ where prefetching with a custom ``QuerySet`` is useful:
* You want to use performance optimization techniques like
:meth:`deferred fields <defer()>`:
>>> queryset = Pizza.objects.only('name')
.. code-block:: pycon
>>> queryset = Pizza.objects.only("name")
>>>
>>> restaurants = Restaurant.objects.prefetch_related(
... Prefetch('best_pizza', queryset=queryset))
... Prefetch("best_pizza", queryset=queryset)
... )
When using multiple databases, ``Prefetch`` will respect your choice of
database. If the inner query does not specify a database, it will use the
@@ -1427,20 +1456,26 @@ database selected by the outer query. All of the following are valid:
Take the following examples:
>>> prefetch_related('pizzas__toppings', 'pizzas')
.. code-block:: pycon
>>> prefetch_related("pizzas__toppings", "pizzas")
This works even though it's unordered because ``'pizzas__toppings'``
already contains all the needed information; therefore, the second argument
``'pizzas'`` is actually redundant.
>>> prefetch_related('pizzas__toppings', Prefetch('pizzas', queryset=Pizza.objects.all()))
.. code-block:: pycon
>>> prefetch_related("pizzas__toppings", Prefetch("pizzas", queryset=Pizza.objects.all()))
This will raise a ``ValueError`` because of the attempt to redefine the
queryset of a previously seen lookup. Note that an implicit queryset was
created to traverse ``'pizzas'`` as part of the ``'pizzas__toppings'``
lookup.
>>> prefetch_related('pizza_list__toppings', Prefetch('pizzas', to_attr='pizza_list'))
.. code-block:: pycon
>>> prefetch_related("pizza_list__toppings", Prefetch("pizzas", to_attr="pizza_list"))
This will trigger an ``AttributeError`` because ``'pizza_list'`` doesn't exist yet
when ``'pizza_list__toppings'`` is being processed.
@@ -4058,12 +4093,14 @@ The ``lookup`` argument describes the relations to follow and works the same
as the string based lookups passed to
:meth:`~django.db.models.query.QuerySet.prefetch_related()`. For example:
.. code-block:: pycon
>>> from django.db.models import Prefetch
>>> Question.objects.prefetch_related(Prefetch('choice_set')).get().choice_set.all()
>>> Question.objects.prefetch_related(Prefetch("choice_set")).get().choice_set.all()
<QuerySet [<Choice: Not much>, <Choice: The sky>, <Choice: Just hacking again>]>
# This will only execute two queries regardless of the number of Question
# and Choice objects.
>>> Question.objects.prefetch_related(Prefetch('choice_set'))
>>> Question.objects.prefetch_related(Prefetch("choice_set"))
<QuerySet [<Question: What's up?>]>
The ``queryset`` argument supplies a base ``QuerySet`` for the given lookup.
@@ -4071,17 +4108,21 @@ This is useful to further filter down the prefetch operation, or to call
:meth:`~django.db.models.query.QuerySet.select_related()` from the prefetched
relation, hence reducing the number of queries even further:
.. code-block:: pycon
>>> voted_choices = Choice.objects.filter(votes__gt=0)
>>> voted_choices
<QuerySet [<Choice: The sky>]>
>>> prefetch = Prefetch('choice_set', queryset=voted_choices)
>>> prefetch = Prefetch("choice_set", queryset=voted_choices)
>>> Question.objects.prefetch_related(prefetch).get().choice_set.all()
<QuerySet [<Choice: The sky>]>
The ``to_attr`` argument sets the result of the prefetch operation to a custom
attribute:
>>> prefetch = Prefetch('choice_set', queryset=voted_choices, to_attr='voted_choices')
.. code-block:: pycon
>>> prefetch = Prefetch("choice_set", queryset=voted_choices, to_attr="voted_choices")
>>> Question.objects.prefetch_related(prefetch).get().voted_choices
[<Choice: The sky>]
>>> Question.objects.prefetch_related(prefetch).get().choice_set.all()

View File

@@ -371,11 +371,13 @@ Methods
Otherwise the absolute URI is built using the server variables available in
this request. For example:
.. code-block:: pycon
>>> request.build_absolute_uri()
'https://example.com/music/bands/the_beatles/?print=true'
>>> request.build_absolute_uri('/bands/')
>>> request.build_absolute_uri("/bands/")
'https://example.com/bands/'
>>> request.build_absolute_uri('https://example2.com/bands/')
>>> request.build_absolute_uri("https://example2.com/bands/")
'https://example2.com/bands/'
.. note::
@@ -494,7 +496,9 @@ a subclass of dictionary. Exceptions are outlined here:
Instantiates a ``QueryDict`` object based on ``query_string``.
>>> QueryDict('a=1&a=2&c=3')
.. code-block:: pycon
>>> QueryDict("a=1&a=2&c=3")
<QueryDict: {'a': ['1', '2'], 'c': ['3']}>
If ``query_string`` is not passed in, the resulting ``QueryDict`` will be

View File

@@ -498,24 +498,30 @@ If you ``pop()`` too much, it'll raise
You can also use ``push()`` as a context manager to ensure a matching ``pop()``
is called.
.. code-block:: pycon
>>> c = Context()
>>> c['foo'] = 'first level'
>>> c["foo"] = "first level"
>>> with c.push():
... c['foo'] = 'second level'
... c['foo']
... c["foo"] = "second level"
... c["foo"]
...
'second level'
>>> c['foo']
>>> c["foo"]
'first level'
All arguments passed to ``push()`` will be passed to the ``dict`` constructor
used to build the new context level.
.. code-block:: pycon
>>> c = Context()
>>> c['foo'] = 'first level'
>>> with c.push(foo='second level'):
... c['foo']
>>> c["foo"] = "first level"
>>> with c.push(foo="second level"):
... c["foo"]
...
'second level'
>>> c['foo']
>>> c["foo"]
'first level'
.. method:: Context.update(other_dict)
@@ -525,26 +531,31 @@ object also defines an ``update()`` method. This works like ``push()``
but takes a dictionary as an argument and pushes that dictionary onto
the stack instead of an empty one.
.. code-block:: pycon
>>> c = Context()
>>> c['foo'] = 'first level'
>>> c.update({'foo': 'updated'})
>>> c["foo"] = "first level"
>>> c.update({"foo": "updated"})
{'foo': 'updated'}
>>> c['foo']
>>> c["foo"]
'updated'
>>> c.pop()
{'foo': 'updated'}
>>> c['foo']
>>> c["foo"]
'first level'
Like ``push()``, you can use ``update()`` as a context manager to ensure a
matching ``pop()`` is called.
.. code-block:: pycon
>>> c = Context()
>>> c['foo'] = 'first level'
>>> with c.update({'foo': 'second level'}):
... c['foo']
>>> c["foo"] = "first level"
>>> with c.update({"foo": "second level"}):
... c["foo"]
...
'second level'
>>> c['foo']
>>> c["foo"]
'first level'
Using a ``Context`` as a stack comes in handy in :ref:`some custom template
@@ -555,20 +566,24 @@ tags <howto-writing-custom-template-tags>`.
Using the ``flatten()`` method you can get the whole ``Context`` stack as one
dictionary, including builtin variables.
.. code-block:: pycon
>>> c = Context()
>>> c['foo'] = 'first level'
>>> c.update({'bar': 'second level'})
>>> c["foo"] = "first level"
>>> c.update({"bar": "second level"})
{'bar': 'second level'}
>>> c.flatten()
{'True': True, 'None': None, 'foo': 'first level', 'False': False, 'bar': 'second level'}
A ``flatten()`` method is also internally used to make ``Context`` objects comparable.
.. code-block:: pycon
>>> c1 = Context()
>>> c1['foo'] = 'first level'
>>> c1['bar'] = 'second level'
>>> c1["foo"] = "first level"
>>> c1["bar"] = "second level"
>>> c2 = Context()
>>> c2.update({'bar': 'second level', 'foo': 'first level'})
>>> c2.update({"bar": "second level", "foo": "first level"})
{'foo': 'first level', 'bar': 'second level'}
>>> c1 == c2
True

View File

@@ -872,9 +872,11 @@ Accessing the cache
requests for the same alias in the same thread will return the same
object.
.. code-block:: pycon
>>> from django.core.cache import caches
>>> cache1 = caches['myalias']
>>> cache2 = caches['myalias']
>>> cache1 = caches["myalias"]
>>> cache2 = caches["myalias"]
>>> cache1 is cache2
True
@@ -906,11 +908,15 @@ The basic interface is:
.. method:: cache.set(key, value, timeout=DEFAULT_TIMEOUT, version=None)
>>> cache.set('my_key', 'hello, world!', 30)
.. code-block:: pycon
>>> cache.set("my_key", "hello, world!", 30)
.. method:: cache.get(key, default=None, version=None)
>>> cache.get('my_key')
.. code-block:: pycon
>>> cache.get("my_key")
'hello, world!'
``key`` should be a ``str``, and ``value`` can be any picklable Python object.
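A minimal sketch of caching a non-string, picklable value (the key and the dictionary below are placeholders):
.. code-block:: pycon
>>> cache.set("my_dict", {"greeting": "hello, world!", "votes": 3})
>>> cache.get("my_dict")
{'greeting': 'hello, world!', 'votes': 3}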
@@ -1100,6 +1106,8 @@ nonexistent cache key:
You can close the connection to your cache with ``close()`` if implemented by
the cache backend.
.. code-block:: pycon
>>> cache.close()
.. note::

View File

@@ -222,12 +222,14 @@ Combining multiple aggregations
Combining multiple aggregations with ``annotate()`` will :ticket:`yield the
wrong results <10060>` because joins are used instead of subqueries:
.. code-block:: pycon
>>> book = Book.objects.first()
>>> book.authors.count()
2
>>> book.store_set.count()
3
>>> q = Book.objects.annotate(Count('authors'), Count('store'))
>>> q = Book.objects.annotate(Count("authors"), Count("store"))
>>> q[0].authors__count
6
>>> q[0].store__count
@@ -237,7 +239,11 @@ For most aggregates, there is no way to avoid this problem, however, the
:class:`~django.db.models.Count` aggregate has a ``distinct`` parameter that
may help:
>>> q = Book.objects.annotate(Count('authors', distinct=True), Count('store', distinct=True))
.. code-block:: pycon
>>> q = Book.objects.annotate(
... Count("authors", distinct=True), Count("store", distinct=True)
... )
>>> q[0].authors__count
2
>>> q[0].store__count
@@ -514,7 +520,9 @@ the annotation is computed over all members of the group.
For example, consider an author query that attempts to find out the average
rating of books written by each author:
>>> Author.objects.annotate(average_rating=Avg('book__rating'))
.. code-block:: pycon
>>> Author.objects.annotate(average_rating=Avg("book__rating"))
This will return one result for each author in the database, annotated with
their average book rating.

View File

@@ -448,6 +448,8 @@ can specify the field name suffixed with ``_id``. In this case, the value
parameter is expected to contain the raw value of the foreign model's primary
key. For example:
.. code-block:: pycon
>>> Entry.objects.filter(blog_id=4)
If you pass an invalid keyword argument, a lookup function will raise
@@ -610,40 +612,42 @@ contained in a single :meth:`~django.db.models.query.QuerySet.filter` call.
As the second (more permissive) query chains multiple filters, it performs
multiple joins to the primary model, potentially yielding duplicates.
.. code-block:: pycon
>>> from datetime import date
>>> beatles = Blog.objects.create(name='Beatles Blog')
>>> pop = Blog.objects.create(name='Pop Music Blog')
>>> beatles = Blog.objects.create(name="Beatles Blog")
>>> pop = Blog.objects.create(name="Pop Music Blog")
>>> Entry.objects.create(
... blog=beatles,
... headline='New Lennon Biography',
... headline="New Lennon Biography",
... pub_date=date(2008, 6, 1),
... )
<Entry: New Lennon Biography>
>>> Entry.objects.create(
... blog=beatles,
... headline='New Lennon Biography in Paperback',
... headline="New Lennon Biography in Paperback",
... pub_date=date(2009, 6, 1),
... )
<Entry: New Lennon Biography in Paperback>
>>> Entry.objects.create(
... blog=pop,
... headline='Best Albums of 2008',
... headline="Best Albums of 2008",
... pub_date=date(2008, 12, 15),
... )
<Entry: Best Albums of 2008>
>>> Entry.objects.create(
... blog=pop,
... headline='Lennon Would Have Loved Hip Hop',
... headline="Lennon Would Have Loved Hip Hop",
... pub_date=date(2020, 4, 1),
... )
<Entry: Lennon Would Have Loved Hip Hop>
>>> Blog.objects.filter(
... entry__headline__contains='Lennon',
... entry__headline__contains="Lennon",
... entry__pub_date__year=2008,
... )
<QuerySet [<Blog: Beatles Blog>]>
>>> Blog.objects.filter(
... entry__headline__contains='Lennon',
... entry__headline__contains="Lennon",
... ).filter(
... entry__pub_date__year=2008,
... )