Adapting basic Python types #
Many standard Python types are adapted into SQL and returned as Python objects when a query is executed.

Converting the following data types between Python and PostgreSQL works out-of-the-box and doesn’t require any configuration. In case you need to customise the conversion you should take a look at Data adaptation configuration.
Booleans adaptation #
Python bool values True and False are converted to the equivalent PostgreSQL boolean type:
>>> cur.execute("SELECT %s, %s", (True, False))
# equivalent to "SELECT true, false"
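On the way back, PostgreSQL booleans are returned as Python bool. A quick round trip (a minimal sketch, reusing the cursor above):

>>> cur.execute("SELECT true, false").fetchone()
(True, False)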
Numbers adaptation #

- Python int values can be converted to PostgreSQL smallint, integer, bigint, or numeric, according to their numeric value. Psycopg will choose the smallest data type available, because PostgreSQL can automatically cast a type up (e.g. passing a smallint where PostgreSQL expects an integer is gladly accepted) but will not cast down automatically (e.g. if a function has an integer argument, passing it a bigint value will fail, even if the value is 1).
- Python float values are converted to PostgreSQL float8.
- Python Decimal values are converted to PostgreSQL numeric.
On the way back, smaller types (int2, int4, float4) are promoted to the larger Python counterpart.
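You can ask the server which type was chosen for a parameter. A minimal sketch (assuming an open connection conn; pg_typeof() reports the type of each placeholder):

>>> conn.execute("SELECT pg_typeof(%s), pg_typeof(%s)", (1, 2**40)).fetchone()
('smallint', 'bigint')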
       
Note

Sometimes you may prefer to receive numeric data as float instead, for performance reasons or ease of manipulation: you can configure an adapter to cast PostgreSQL numeric to Python float. This of course may imply a loss of precision.
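For instance, a minimal sketch of such a configuration, using the FloatLoader shipped with psycopg to load numeric values as float on a single connection (pass psycopg.adapters as the context instead to apply it globally):

from psycopg.types.numeric import FloatLoader

# Load PostgreSQL numeric values as Python float instead of Decimal,
# for this connection only.
conn.adapters.register_loader("numeric", FloatLoader)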
        
Strings adaptation #
Python str are converted to PostgreSQL string syntax, and PostgreSQL types such as text and varchar are converted back to Python str:
       
conn = psycopg.connect()
conn.execute(
    "INSERT INTO menu (id, entry) VALUES (%s, %s)",
    (1, "Crème Brûlée at 4.99€"))
conn.execute("SELECT entry FROM menu WHERE id = 1").fetchone()[0]
'Crème Brûlée at 4.99€'
        
PostgreSQL databases have an encoding, and the session has an encoding too, exposed in the Connection.info.encoding attribute. If your database and connection are in UTF-8 encoding you will likely have no problem, otherwise you will have to make sure that your application only deals with the non-ASCII chars that the database can handle; failing to do so may result in encoding/decoding errors:
       
# The encoding is set at connection time according to the db configuration
conn.info.encoding
'utf-8'
# The Latin-9 encoding can manage some European accented letters
# and the Euro symbol
conn.execute("SET client_encoding TO LATIN9")
conn.execute("SELECT entry FROM menu WHERE id = 1").fetchone()[0]
'Crème Brûlée at 4.99€'
# The Latin-1 encoding doesn't have a representation for the Euro symbol
conn.execute("SET client_encoding TO LATIN1")
conn.execute("SELECT entry FROM menu WHERE id = 1").fetchone()[0]
# Traceback (most recent call last)
# ...
# UntranslatableCharacter: character with byte sequence 0xe2 0x82 0xac
# in encoding "UTF8" has no equivalent in encoding "LATIN1"
        
In rare cases you may have strings with unexpected encodings in the database. Using the SQL_ASCII client encoding will disable decoding of the data coming from the database, which will be returned as bytes:
       
conn.execute("SET client_encoding TO SQL_ASCII")
conn.execute("SELECT entry FROM menu WHERE id = 1").fetchone()[0]
b'Cr\xc3\xa8me Br\xc3\xbbl\xc3\xa9e at 4.99\xe2\x82\xac'
        
Alternatively you can cast the unknown-encoding data to bytea to retrieve it as bytes, leaving other strings unaltered: see Binary adaptation.
         
        
       
Note that PostgreSQL text cannot contain the 0x00 byte. If you need to store Python strings that may contain binary zeros you should use a bytea field.
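For example (a sketch; psycopg rejects strings containing NUL before sending them, though the exact exception message may vary between versions):

>>> conn.execute("SELECT %s", ["nul\x00byte"])
# raises DataError: PostgreSQL text fields cannot contain NUL (0x00) bytes
>>> conn.execute("SELECT %s", [b"nul\x00byte"]).fetchone()[0]  # bytea is fine
b'nul\x00byte'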
       
Binary adaptation #
Python types representing binary objects (bytes, bytearray, memoryview) are converted by default to bytea fields. By default data received is returned as bytes.
       
If you are storing large binary data in bytea fields (such as binary documents or images) you should probably use the binary format to pass and return values, otherwise binary data will undergo ASCII escaping, taking some CPU time and more bandwidth. See Binary parameters and results for details.
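As a minimal sketch (the images table and its bytea column data are hypothetical): the %b placeholder forces binary format for a parameter, and a binary cursor returns results in binary format as well:

with open("picture.png", "rb") as f:
    blob = f.read()

# %b passes the parameter in binary format, avoiding ASCII escaping
conn.execute("INSERT INTO images (data) VALUES (%b)", [blob])

# a binary cursor also fetches results in binary format
cur = conn.cursor(binary=True)
data = cur.execute("SELECT data FROM images").fetchone()[0]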
Date/time types adaptation #

- Python date objects are converted to PostgreSQL date.
- Python datetime objects are converted to PostgreSQL timestamp (if they don’t have a tzinfo set) or timestamptz (if they do).
- Python time objects are converted to PostgreSQL time (if they don’t have a tzinfo set) or timetz (if they do).
- Python timedelta objects are converted to PostgreSQL interval.
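A quick check of the naive/aware distinction (a sketch, assuming an open connection conn):

>>> from datetime import datetime, timezone
>>> conn.execute(
...     "SELECT pg_typeof(%s), pg_typeof(%s)",
...     (datetime(2048, 7, 8, 12, 0),                       # naive
...      datetime(2048, 7, 8, 12, 0, tzinfo=timezone.utc))  # aware
... ).fetchone()
('timestamp without time zone', 'timestamp with time zone')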
PostgreSQL timestamptz values are returned with a timezone set to the connection TimeZone setting, which is available as a Python ZoneInfo object in the Connection.info.timezone attribute:
       
>>> conn.info.timezone
zoneinfo.ZoneInfo(key='Europe/London')
>>> conn.execute("select '2048-07-08 12:00'::timestamptz").fetchone()[0]
datetime.datetime(2048, 7, 8, 12, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/London'))
Note

PostgreSQL timestamptz doesn't store "a timestamp with a timezone attached": it stores a timestamp always in UTC, which is converted, on output, to the connection TimeZone setting:
        
>>> conn.execute("SET TIMEZONE to 'Europe/Rome'")  # UTC+2 in summer
         >>> conn.execute("SELECT '2042-07-01 12:00Z'::timestamptz").fetchone()[0]  # UTC input
datetime.datetime(2042, 7, 1, 14, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Rome'))
Check out the PostgreSQL documentation about timezones for all the details.
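The Connection.info.timezone attribute follows the session setting, so after the SET above the connection reports the new zone (a quick check):

>>> conn.info.timezone
zoneinfo.ZoneInfo(key='Europe/Rome')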
JSON adaptation #
Psycopg can map between Python objects and PostgreSQL json/jsonb types, allowing you to customise the load and dump functions used.
Because several Python objects could be considered JSON (dicts, lists, scalars, even date/time if using a dumps function customised to use them), Psycopg requires you to wrap the object to dump as JSON into a wrapper: either psycopg.types.json.Json or Jsonb.
       
from psycopg.types.json import Jsonb
thing = {"foo": ["bar", 42]}
conn.execute("INSERT INTO mytable VALUES (%s)", [Jsonb(thing)])
        
By default Psycopg uses the standard library json.dumps and json.loads functions to serialize and de-serialize Python objects to JSON. If you want to customise how serialization happens, for instance changing serialization parameters or using a different JSON library, you can specify your own functions using the psycopg.types.json.set_json_dumps() and set_json_loads() functions, to apply either globally or to a specific context (connection or cursor).
       
from functools import partial
from decimal import Decimal
import json

from psycopg.types.json import Jsonb, set_json_dumps, set_json_loads
import ujson

# Use a faster dump function
set_json_dumps(ujson.dumps)

# Return floating point values as Decimal, just in one connection
set_json_loads(partial(json.loads, parse_float=Decimal), conn)

conn.execute("SELECT %s", [Jsonb({"value": 123.45})]).fetchone()[0]
# {'value': Decimal('123.45')}
        
If you need an even more specific dump customisation only for certain objects (including different configurations in the same query) you can specify a dumps parameter in the Json/Jsonb wrapper, which will take precedence over what is specified by set_json_dumps().
       
import json
from functools import partial
from uuid import UUID, uuid4

from psycopg.types.json import Json

class UUIDEncoder(json.JSONEncoder):
    """A JSON encoder which can dump UUID."""
    def default(self, obj):
        if isinstance(obj, UUID):
            return str(obj)
        return json.JSONEncoder.default(self, obj)

uuid_dumps = partial(json.dumps, cls=UUIDEncoder)
obj = {"uuid": uuid4()}
conn.execute("INSERT INTO objs VALUES (%s)", [Json(obj, dumps=uuid_dumps)])
# will insert: {'uuid': '0a40799d-3980-4c65-8315-2956b18ab0e1'}
Lists adaptation #

Python list objects are adapted to PostgreSQL arrays and back. Only lists containing objects of the same type can be dumped to PostgreSQL (but the list may contain None elements).
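A round trip, as a minimal sketch (assuming an open connection conn):

>>> conn.execute("SELECT %s", [[10, 20, 30]]).fetchone()[0]
[10, 20, 30]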
       
Note

If you have a list of values which you want to use with the IN operator… don’t. It won’t work (neither with a list nor with a tuple):
        
>>> conn.execute("SELECT * FROM mytable WHERE id IN %s", [[10,20,30]])
Traceback (most recent call last):
  File "" , line 1, in  
psycopg.errors.SyntaxError: syntax error at or near "$1"
LINE 1: SELECT * FROM mytable WHERE id IN $1
                                          ^
What you want to do instead is to use the ‘= ANY()’ expression and pass the values as a list (not a tuple):
>>> conn.execute("SELECT * FROM mytable WHERE id = ANY(%s)", [[10,20,30]])
         
This also has the advantage of working with an empty list, whereas IN () is not valid SQL.
        
UUID adaptation #

Python uuid.UUID objects are adapted to the PostgreSQL UUID type and back:
       
>>> conn.execute("select gen_random_uuid()").fetchone()[0]
UUID('97f0dd62-3bd2-459e-89b8-a5e36ea3c16c')
>>> from uuid import uuid4
>>> conn.execute("select gen_random_uuid() = %s", [uuid4()]).fetchone()[0]
False  # long shot
Network data types adaptation #

Objects from the ipaddress module are converted to PostgreSQL network address types:
       
- IPv4Address and IPv4Interface objects are converted to the PostgreSQL inet type. On the way back, inet values indicating a single address are converted to IPv4Address, otherwise they are converted to IPv4Interface.
- IPv4Network objects are converted to the cidr type and back.
- IPv6Address, IPv6Interface, and IPv6Network objects follow the same rules, with IPv6 inet and cidr values.
>>> conn.execute("select '192.168.0.1'::inet, '192.168.0.1/24'::inet").fetchone()
(IPv4Address('192.168.0.1'), IPv4Interface('192.168.0.1/24'))
>>> conn.execute("select '::ffff:1.2.3.0/120'::cidr").fetchone()[0]
IPv6Network('::ffff:102:300/120')
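The dump direction works the same way (a minimal sketch, assuming an open connection conn):

>>> from ipaddress import ip_address
>>> conn.execute("SELECT %s", [ip_address("192.168.0.1")]).fetchone()[0]
IPv4Address('192.168.0.1')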
Enum adaptation #
New in version 3.1.
Psycopg can adapt Python Enum subclasses into PostgreSQL enum types (created with the CREATE TYPE ... AS ENUM (...) command).
       
In order to set up a bidirectional enum mapping, you should get information about the PostgreSQL enum using the EnumInfo class and register it using register_enum(). The behaviour of unregistered and registered enums is different.
       
If the enum is not registered with register_enum():

- Pure Enum classes are dumped as normal strings, using their member names as value. The unknown oid is used, so PostgreSQL should be able to use this string in most contexts (such as an enum or a text field).

  Changed in version 3.1: In previous versions, dumping pure enums was not supported and raised a "cannot adapt" error.

- Mix-in enums are dumped according to their mix-in type (because a class MyIntEnum(int, Enum) is more specifically an int than an Enum, so it's dumped by default according to int rules).
- PostgreSQL enums are loaded as Python strings. If you want to load arrays of such enums you will have to find their OIDs using types.TypeInfo.fetch() and register them using register().

If the enum is registered (using EnumInfo.fetch() and register_enum()):

- Enum classes, both pure and mixed-in, are dumped by name.
- The registered PostgreSQL enum is loaded back as the registered Python enum members.
class psycopg.types.enum.EnumInfo(name, oid, array_oid, labels) #

Manage information about an enum type. EnumInfo is a subclass of TypeInfo: refer to the latter's documentation for generic usage, especially the fetch() method.

enum #

After register_enum() is called, it will contain the Python type mapping to the registered enum.
 
psycopg.types.enum.register_enum(info, context=None, enum=None, *, mapping=None) #

Register the adapters to load and dump an enum type.

Parameters:

- info (EnumInfo) - The object with the information about the enum to register.
- context (Optional[AdaptContext]) - The context where to register the adapters. If None, register it globally.
- enum (Optional[Type[E]], with E bound to Enum) - Python enum type matching the PostgreSQL one. If None, a new enum will be generated and exposed as EnumInfo.enum.
- mapping (Union[Mapping[E, str], Sequence[Tuple[E, str]], None]) - Override the mapping between enum members and info labels.

After registering, fetching data of the registered enum will cast PostgreSQL enum labels into corresponding Python enum members. If no enum is specified, a new Enum is created based on PostgreSQL enum labels.
Example:
>>> from enum import Enum, auto
>>> from psycopg.types.enum import EnumInfo, register_enum
>>> class UserRole(Enum):
...     ADMIN = auto()
...     EDITOR = auto()
...     GUEST = auto()
>>> conn.execute("CREATE TYPE user_role AS ENUM ('ADMIN', 'EDITOR', 'GUEST')")
>>> info = EnumInfo.fetch(conn, "user_role")
>>> register_enum(info, conn, UserRole)
>>> some_editor = info.enum.EDITOR
>>> some_editor
<UserRole.EDITOR: 2>
>>> conn.execute(
...     "SELECT pg_typeof(%(editor)s), %(editor)s",
...     {"editor": some_editor}
... ).fetchone()
('user_role', <UserRole.EDITOR: 2>)
>>> conn.execute(
...     "SELECT ARRAY[%s, %s]",
...     [UserRole.ADMIN, UserRole.GUEST]
... ).fetchone()
([<UserRole.ADMIN: 1>, <UserRole.GUEST: 3>],)
        
If the Python and the PostgreSQL enum don’t match 1:1 (for instance if members have a different name, or if more than one Python enum should map to the same PostgreSQL enum, or vice versa), you can specify the exceptions using the mapping parameter.
       
        
         
mapping should be a dictionary with Python enum members as keys and the matching PostgreSQL enum labels as values, or a list of (member, label) pairs with the same meaning (useful when some members are repeated). Order matters: if an element on either side is specified more than once, the last pair in the sequence will take precedence:
       
# Legacy roles, defined in medieval times.
>>> conn.execute(
...     "CREATE TYPE abbey_role AS ENUM ('ABBOT', 'SCRIBE', 'MONK', 'GUEST')")
>>> info = EnumInfo.fetch(conn, "abbey_role")
>>> register_enum(info, conn, UserRole, mapping=[
...     (UserRole.ADMIN, "ABBOT"),
...     (UserRole.EDITOR, "SCRIBE"),
...     (UserRole.EDITOR, "MONK")])
>>> conn.execute("SELECT '{ABBOT,SCRIBE,MONK,GUEST}'::abbey_role[]").fetchone()[0]
[<UserRole.ADMIN: 1>,
 <UserRole.EDITOR: 2>,
 <UserRole.EDITOR: 2>,
 <UserRole.GUEST: 3>]
>>> conn.execute("SELECT %s::text[]", [list(UserRole)]).fetchone()[0]
['ABBOT', 'MONK', 'GUEST']
        
A particularly useful case is when the PostgreSQL labels match the values of a str-based Enum. In this case it is possible to use something like {m: m.value for m in enum} as mapping:
       
>>> class LowercaseRole(str, Enum):
...     ADMIN = "admin"
...     EDITOR = "editor"
...     GUEST = "guest"
>>> conn.execute(
...     "CREATE TYPE lowercase_role AS ENUM ('admin', 'editor', 'guest')")
>>> info = EnumInfo.fetch(conn, "lowercase_role")
>>> register_enum(
...     info, conn, LowercaseRole, mapping={m: m.value for m in LowercaseRole})
>>> conn.execute("SELECT 'editor'::lowercase_role").fetchone()[0]