Adapting other PostgreSQL types
PostgreSQL offers other data types which don’t map to native Python types. Psycopg offers wrappers and conversion functions to allow their use.
Composite types casting
Psycopg can adapt PostgreSQL composite types (either created with the CREATE TYPE command or implicitly defined after a table row type) to and from Python tuples, namedtuple, or any other suitable object configured.
Before using a composite type it is necessary to get information about it using the CompositeInfo class and to register it using register_composite().
- class psycopg.types.composite.CompositeInfo(name, oid, array_oid, *, regtype='', field_names, field_types)

Manage information about a composite type.

CompositeInfo is a TypeInfo subclass: check its documentation for the generic usage, especially the fetch() method.

- python_type: after register_composite() is called, it will contain the python type mapping to the registered composite.
- psycopg.types.composite.register_composite(info, context=None, factory=None)

Register the adapters to load and dump a composite type.

Parameters:

- info (CompositeInfo) - The object with the information about the composite to register.
- context (Optional[AdaptContext]) - The context where to register the adapters. If None, register it globally.
- factory (Optional[Callable[..., Any]]) - Callable to convert the sequence of attributes read from the composite into a Python object.
Note

Registering the adapters doesn’t affect objects already created, even if they are children of the registered context. For instance, registering the adapter globally doesn’t affect already existing connections.

After registering, fetching data of the registered composite will invoke factory to create corresponding Python objects.

If no factory is specified, a namedtuple is created and used to return data.

If the factory is a type (and not a generic callable), then dumpers for that type are created and registered too, so that passing objects of that type to a query will adapt them to the registered type (a factory sketch follows the example below).
Example:
>>> from psycopg.types.composite import CompositeInfo, register_composite
>>> conn.execute("CREATE TYPE card AS (value int, suit text)")
>>> info = CompositeInfo.fetch(conn, "card")
>>> register_composite(info, conn)
>>> my_card = info.python_type(8, "hearts")
>>> my_card
card(value=8, suit='hearts')
>>> conn.execute(
... "SELECT pg_typeof(%(card)s), (%(card)s).suit", {"card": my_card}
... ).fetchone()
('card', 'hearts')
>>> conn.execute("SELECT (%s, %s)::card", [1, "spades"]).fetchone()[0]
card(value=1, suit='spades')
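The factory parameter accepts any callable receiving the composite attributes; for instance a class can be used instead of the generated namedtuple. A minimal sketch, reusing the card type and the info object from the example above (the Card dataclass is a hypothetical illustration, not part of psycopg; the output is the expected one):

>>> from dataclasses import dataclass
>>> @dataclass
... class Card:
...     value: int
...     suit: str
...
>>> register_composite(info, conn, factory=Card)
>>> conn.execute("SELECT (8, 'hearts')::card").fetchone()[0]
Card(value=8, suit='hearts')

The factory is called with the attributes read from the record in field order, so a dataclass whose fields match the composite works for loading.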
Nested composite types are handled as expected, provided that the types of the composite components are registered as well:
>>> conn.execute("CREATE TYPE card_back AS (face card, back text)")
>>> info2 = CompositeInfo.fetch(conn, "card_back")
>>> register_composite(info2, conn)
>>> conn.execute("SELECT ((8, 'hearts'), 'blue')::card_back").fetchone()[0]
card_back(face=card(value=8, suit='hearts'), back='blue')
Range adaptation
PostgreSQL range types are a family of data types representing a range of values between two elements. The type of the element is called the range subtype. PostgreSQL offers a few built-in range types and allows the definition of custom ones.
All the PostgreSQL range types are loaded as the Range Python type, which is a Generic type and can hold bounds of different types.
- class psycopg.types.range.Range(lower=None, upper=None, bounds='[)', empty=False)

Python representation for a PostgreSQL range type.

Parameters:

- lower (Optional[T]) - lower bound for the range. None means unbound.
- upper (Optional[T]) - upper bound for the range. None means unbound.
- bounds (str) - one of the literal strings '()', '[)', '(]', '[]', representing whether the lower or upper bounds are included.
- empty (bool) - if True, the range is empty.
This Python type is only used to pass and retrieve range values to and from PostgreSQL and doesn’t attempt to replicate the PostgreSQL range features: it doesn’t perform normalization and doesn’t implement all the operators supported by the database.

PostgreSQL will perform normalization on Range objects used as query parameters, so, when they are fetched back, they will be found in the normal form (for instance ranges on integers will have [) bounds).

Range objects are immutable, hashable, and support the in operator (checking if an element is within the range). They can be tested for equivalence. Empty ranges evaluate to False in a boolean context, nonempty ones evaluate to True.

Range objects have the following attributes:

- isempty: True if the range is empty.
- lower: the lower bound of the range. None if empty or unbound.
- upper: the upper bound of the range. None if empty or unbound.
- lower_inc: True if the lower bound is included in the range.
- upper_inc: True if the upper bound is included in the range.
- lower_inf: True if the range doesn’t have a lower bound.
- upper_inf: True if the range doesn’t have an upper bound.
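As an illustration of these attributes and operators, here is a small Python-only sketch (no database involved; the results shown are the expected ones):

>>> from psycopg.types.range import Range
>>> r = Range(10, 20)   # default bounds are "[)"
>>> 10 in r, 20 in r
(True, False)
>>> r.lower, r.upper, r.lower_inc, r.upper_inc
(10, 20, True, False)
>>> r.lower_inf, r.upper_inf, r.isempty
(False, False, False)
>>> bool(Range(empty=True))
False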
The built-in range objects are adapted automatically: if a Range object contains date bounds, it is dumped using the daterange OID, and of course daterange values are loaded back as Range[date].
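A brief hedged sketch of this automatic adaptation, assuming an open connection conn (the results shown are the expected ones):

>>> from datetime import date
>>> from psycopg.types.range import Range
>>> conn.execute(
...     "SELECT pg_typeof(%s)", [Range(date(2024, 1, 1), date(2024, 2, 1))]
... ).fetchone()[0]
'daterange'
>>> conn.execute("SELECT '[2024-01-01,2024-01-31]'::daterange").fetchone()[0]
Range(datetime.date(2024, 1, 1), datetime.date(2024, 2, 1), '[)')

The second statement also shows the normalization described above: the inclusive upper bound on dates comes back as an exclusive bound on the following day.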
If you create your own range type you can use RangeInfo and register_range() to associate the range type with its subtype and make it work like the builtin ones.
- class psycopg.types.range.RangeInfo(name, oid, array_oid, *, regtype='', subtype_oid)

Manage information about a range type.

RangeInfo is a TypeInfo subclass: check its documentation for generic details, especially the fetch() method.
- psycopg.types.range.register_range(info, context=None)

Register the adapters to load and dump a range type.

Parameters:

- info (RangeInfo) - The object with the information about the range to register.
- context (Optional[AdaptContext]) - The context where to register the adapters. If None, register it globally.

Register loaders so that loading data of this type will result in a Range with bounds parsed as the right subtype.

Note

Registering the adapters doesn’t affect objects already created, even if they are children of the registered context. For instance, registering the adapter globally doesn’t affect already existing connections.
Example:
>>> from psycopg.types.range import Range, RangeInfo, register_range
>>> conn.execute("CREATE TYPE strrange AS RANGE (SUBTYPE = text)")
>>> info = RangeInfo.fetch(conn, "strrange")
>>> register_range(info, conn)
>>> conn.execute("SELECT pg_typeof(%s)", [Range("a", "z")]).fetchone()[0]
'strrange'
>>> conn.execute("SELECT '[a,z]'::strrange").fetchone()[0]
Range('a', 'z', '[]')
Multirange adaptation
Since PostgreSQL 14, every range type is associated with a multirange, a type representing a disjoint set of ranges. A multirange is automatically available for every range, built-in and user-defined.
All the PostgreSQL multirange types are loaded as the Multirange Python type, which is a mutable sequence of Range elements.
- class psycopg.types.multirange.Multirange(items=())

Python representation for a PostgreSQL multirange type.

Parameters:

- items (Iterable[Range[T]]) - Sequence of ranges to initialise the object.

This Python type is only used to pass and retrieve multirange values to and from PostgreSQL and doesn’t attempt to replicate the PostgreSQL multirange features: overlapping items are not merged, empty ranges are not discarded, the items are not ordered, the behaviour of multirange operators is not replicated in Python.

PostgreSQL will perform normalization on Multirange objects used as query parameters, so, when they are fetched back, they will be found ordered, with overlapping ranges merged, etc.

Multirange objects are MutableSequence instances and are totally ordered: they behave pretty much like a list of Range. Like Range, they are Generic on the subtype of their range, so you can declare a variable to be Multirange[date] and mypy will complain if you try to add a Range[Decimal] to it.
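A small Python-only sketch of this list-like behaviour (results shown as expected; note that, as stated above, overlapping items are not merged on the Python side):

>>> from psycopg.types.range import Range
>>> from psycopg.types.multirange import Multirange
>>> mr = Multirange([Range(10, 20), Range(15, 30)])   # overlapping items are kept as-is
>>> len(mr)
2
>>> mr.append(Range(40, 50))   # MutableSequence methods are available
>>> mr[-1]
Range(40, 50, '[)')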
Like for Range, built-in multirange objects are adapted automatically: if a Multirange object contains Range objects with date bounds, it is dumped using the datemultirange OID, and datemultirange values are loaded back as Multirange[date].
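A hedged sketch of this automatic adaptation, assuming a connection conn to a PostgreSQL 14 or later server (results shown as expected):

>>> from datetime import date
>>> from psycopg.types.range import Range
>>> from psycopg.types.multirange import Multirange
>>> mr = Multirange([Range(date(2024, 1, 1), date(2024, 2, 1))])
>>> conn.execute("SELECT pg_typeof(%s)", [mr]).fetchone()[0]
'datemultirange'
>>> conn.execute("SELECT '{[2024-01-01,2024-02-01)}'::datemultirange").fetchone()[0]
Multirange([Range(datetime.date(2024, 1, 1), datetime.date(2024, 2, 1), '[)')])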
If you have created your own range type you can use MultirangeInfo and register_multirange() to associate the resulting multirange type with its subtype and make it work like the builtin ones.
- class psycopg.types.multirange.MultirangeInfo(name, oid, array_oid, *, regtype='', range_oid, subtype_oid)

Manage information about a multirange type.

MultirangeInfo is a TypeInfo subclass: check its documentation for generic details, especially the fetch() method.
- psycopg.types.multirange.register_multirange(info, context=None)

Register the adapters to load and dump a multirange type.

Parameters:

- info (MultirangeInfo) - The object with the information about the multirange to register.
- context (Optional[AdaptContext]) - The context where to register the adapters. If None, register it globally.

Register loaders so that loading data of this type will result in a Multirange with bounds parsed as the right subtype.

Note

Registering the adapters doesn’t affect objects already created, even if they are children of the registered context. For instance, registering the adapter globally doesn’t affect already existing connections.
Example:
>>> from psycopg.types.multirange import \
... Multirange, MultirangeInfo, register_multirange
>>> from psycopg.types.range import Range
>>> conn.execute("CREATE TYPE strrange AS RANGE (SUBTYPE = text)")
>>> info = MultirangeInfo.fetch(conn, "strmultirange")
>>> register_multirange(info, conn)
>>> rec = conn.execute(
... "SELECT pg_typeof(%(mr)s), %(mr)s",
... {"mr": Multirange([Range("a", "q"), Range("l", "z")])}).fetchone()
>>> rec[0]
'strmultirange'
>>> rec[1]
Multirange([Range('a', 'z', '[)')])
Hstore adaptation
The hstore data type is a key-value store embedded in PostgreSQL. It supports GiST or GIN indexes allowing search by keys or key/value pairs as well as regular BTree indexes for equality, uniqueness etc.

Psycopg can convert Python dict objects to and from hstore structures. Only dictionaries with string keys and values are supported. None is also allowed as a value but not as a key.
In order to use the hstore data type it is necessary to load it in a database using:

=# CREATE EXTENSION hstore;
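The same statement can also be run from Python, assuming the connected role has the privilege to create extensions:

>>> conn.execute("CREATE EXTENSION IF NOT EXISTS hstore")
>>> conn.commit()   # psycopg connections don't autocommit by default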
Because hstore is distributed as a contrib module, its oid is not well known, so it is necessary to use TypeInfo.fetch() to query the database and get its oid. The resulting object can be passed to register_hstore() to configure dumping dict to hstore and parsing hstore back to dict, in the context where the adapter is registered.
- psycopg.types.hstore.register_hstore(info, context=None)

Register the adapters to load and dump hstore.

Parameters:

- info (TypeInfo) - The object with the information about the hstore type.
- context (Optional[AdaptContext]) - The context where to register the adapters. If None, register it globally.

Note

Registering the adapters doesn’t affect objects already created, even if they are children of the registered context. For instance, registering the adapter globally doesn’t affect already existing connections.
Example:
>>> from psycopg.types import TypeInfo
>>> from psycopg.types.hstore import register_hstore
>>> info = TypeInfo.fetch(conn, "hstore")
>>> register_hstore(info, conn)
>>> conn.execute("SELECT pg_typeof(%s)", [{"a": "b"}]).fetchone()[0]
'hstore'
>>> conn.execute("SELECT 'foo => bar'::hstore").fetchone()[0]
{'foo': 'bar'}
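As noted above, hstore NULL values map to None on the Python side; a brief sketch using the registration above (results shown as expected):

>>> conn.execute("SELECT 'a => 1, b => NULL'::hstore").fetchone()[0]
{'a': '1', 'b': None}
>>> conn.execute("SELECT %s -> 'b' IS NULL", [{"a": "1", "b": None}]).fetchone()[0]
True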
Geometry adaptation using Shapely
When using the PostGIS extension, it can be useful to retrieve geometry values and have them automatically converted to Shapely instances. Likewise, you may want to store such instances in the database and have the conversion happen automatically.
Warning

Psycopg doesn’t have a dependency on the shapely package: you should install the library as an additional dependency of your project.
Warning
This module is experimental and might be changed in the future according to users’ feedback.
Since PostGIS is an extension, the geometry type oid is not well known, so it is necessary to use TypeInfo.fetch() to query the database and find it. The resulting object can be passed to register_shapely() to configure dumping shape instances to geometry columns and parsing geometry data back to shape instances, in the context where the adapters are registered.
- psycopg.types.shapely.register_shapely(info, context=None)

Register Shapely dumper and loaders.

After invoking this function on an adapter, the queries retrieving PostGIS geometry objects will return Shapely’s shape object instances both in text and binary mode. Similarly, shape objects can be sent to the database.

This requires the Shapely library to be installed.

Parameters:

- info - The object with the information about the geometry type.
- context - The context where to register the adapters. If None, register it globally.

Note

Registering the adapters doesn’t affect objects already created, even if they are children of the registered context. For instance, registering the adapter globally doesn’t affect already existing connections.
Example:
>>> from psycopg.types import TypeInfo
>>> from psycopg.types.shapely import register_shapely
>>> from shapely.geometry import Point
>>> info = TypeInfo.fetch(conn, "geometry")
>>> register_shapely(info, conn)
>>> conn.execute("SELECT pg_typeof(%s)", [Point(1.2, 3.4)]).fetchone()[0]
'geometry'
>>> conn.execute("""
... SELECT ST_GeomFromGeoJSON('{
... "type":"Point",
... "coordinates":[-48.23456,20.12345]}')
... """).fetchone()[0]
Notice that, if the geometry adapters are registered on a specific object (a connection or cursor), other connections and cursors will be unaffected:
>>> conn2 = psycopg.connect(CONN_STR)
>>> conn2.execute("""
... SELECT ST_GeomFromGeoJSON('{
... "type":"Point",
... "coordinates":[-48.23456,20.12345]}')
... """).fetchone()[0]
'0101000020E61000009279E40F061E48C0F2B0506B9A1F3440'
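Conversely, a hedged sketch of registering the adapters only on a single cursor, so that the rest of conn2 keeps returning raw values (the WKT literal and results are illustrative):

>>> cur = conn2.cursor()
>>> register_shapely(info, cur)
>>> point = cur.execute("SELECT 'POINT(1.2 3.4)'::geometry").fetchone()[0]
>>> (point.x, point.y)
(1.2, 3.4)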