psycopg2.extras - Miscellaneous goodies for Psycopg 2

This module is a generic place used to hold little helper functions and classes until a better place in the distribution is found.

Connection and cursor subclasses

A few objects that change the way the results are returned by the cursor or modify the object behavior in some other way. Typically cursor subclasses are passed as cursor_factory argument to connect() so that the connection’s cursor() method will generate objects of this class. Alternatively a cursor subclass can be used one-off by passing it as the cursor_factory argument to the cursor() method.

If you want to use a connection subclass you can pass it as the connection_factory argument of the connect() function.

Dictionary-like cursor

The dict cursors allow access to the retrieved records using an interface similar to Python dictionaries instead of tuples.

>>> dict_cur = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)
>>> dict_cur.execute("INSERT INTO test (num, data) VALUES(%s, %s)",
...                  (100, "abc'def"))
>>> dict_cur.execute("SELECT * FROM test")
>>> rec = dict_cur.fetchone()
>>> rec['id']
1
>>> rec['num']
100
>>> rec['data']
"abc'def"

The records still support indexing as the original tuple:

>>> rec[2]
"abc'def"
class psycopg2.extras.DictCursor(*args, **kwargs)

A cursor that keeps a list of column name -> index mappings.

class psycopg2.extras.DictConnection

A connection that uses DictCursor automatically.

Note

Not very useful since Psycopg 2.5: you can use psycopg2.connect(dsn, cursor_factory=DictCursor) instead of DictConnection.

class psycopg2.extras.DictRow(cursor)

A row object that allows by-column-name access to data.

Real dictionary cursor

class psycopg2.extras.RealDictCursor(*args, **kwargs)

A cursor that uses a real dict as the base type for rows.

Note that this cursor is extremely specialized and does not allow normal access (using integer indices) to the fetched data. If you need to access database rows both as a dictionary and a list, then use the generic DictCursor instead of RealDictCursor.
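
For example (a sketch, assuming the test table populated above; integer indexing raises KeyError):

>>> real_cur = conn.cursor(cursor_factory=psycopg2.extras.RealDictCursor)
>>> real_cur.execute("SELECT * FROM test")
>>> rec = real_cur.fetchone()
>>> rec['data']
"abc'def"
>>> rec[2]
Traceback (most recent call last):
    ...
KeyError: 2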

class psycopg2.extras.RealDictConnection

A connection that uses RealDictCursor automatically.

Note

Not very useful since Psycopg 2.5: you can use psycopg2.connect(dsn, cursor_factory=RealDictCursor) instead of RealDictConnection.

class psycopg2.extras.RealDictRow(cursor)

A dict subclass representing a data record.

namedtuple cursor

New in version 2.3.

These objects require collections.namedtuple() to be found, so they are available out-of-the-box only from Python 2.6. However, the namedtuple implementation is compatible with previous Python versions, so all you have to do is download it and make it available where we expect it to be...

from somewhere import namedtuple
import collections
collections.namedtuple = namedtuple
from psycopg2.extras import NamedTupleConnection
# ...
class psycopg2.extras.NamedTupleCursor

A cursor that generates results as namedtuple.

fetch*() methods will return named tuples instead of regular tuples, so their elements can be accessed both as regular numeric items as well as attributes.

>>> nt_cur = conn.cursor(cursor_factory=psycopg2.extras.NamedTupleCursor)
>>> nt_cur.execute("SELECT * FROM test")
>>> rec = nt_cur.fetchone()
>>> rec
Record(id=1, num=100, data="abc'def")
>>> rec[1]
100
>>> rec.data
"abc'def"
class psycopg2.extras.NamedTupleConnection

A connection that uses NamedTupleCursor automatically.

Note

Not very useful since Psycopg 2.5: you can use psycopg2.connect(dsn, cursor_factory=NamedTupleCursor) instead of NamedTupleConnection.

Logging cursor

class psycopg2.extras.LoggingConnection

A connection that logs all queries to a file or logger object.

filter(msg, curs)

Filter the query before logging it.

This is the method to override to filter unwanted queries out of the log or to add some extra data to the output. The default implementation just does nothing.

initialize(logobj)

Initialize the connection to log to logobj.

The logobj parameter can be an open file object or a Logger instance from the standard logging module.
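
A minimal usage sketch (dsn here is a placeholder for a real connection string):

import logging

import psycopg2
from psycopg2.extras import LoggingConnection

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)

conn = psycopg2.connect(dsn, connection_factory=LoggingConnection)
conn.initialize(logger)  # required before issuing any query

cur = conn.cursor()
cur.execute("SELECT 1")  # the statement text is sent to the logger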

class psycopg2.extras.LoggingCursor

A cursor that logs queries using its connection logging facilities.

Note

Queries that are executed with cursor.executemany() are not logged.

class psycopg2.extras.MinTimeLoggingConnection

A connection that logs queries based on execution time.

This is just an example of how to sub-class LoggingConnection to provide some extra filtering for the logged queries. Both the initialize() and filter() methods are overridden to make sure that only queries executing for more than mintime ms are logged.

Note that this connection uses the specialized cursor MinTimeLoggingCursor.
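
For example, a sketch assuming the logger and dsn from the previous example, and assuming that this class's initialize() accepts the threshold as a mintime keyword in milliseconds:

conn = psycopg2.connect(dsn, connection_factory=MinTimeLoggingConnection)
conn.initialize(logger, mintime=500)  # only log queries taking more than 500 ms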

class psycopg2.extras.MinTimeLoggingCursor

The cursor sub-class companion to MinTimeLoggingConnection.

Replication connection and cursor classes

See Replication protocol support for an introduction to the topic.

The following replication types are defined:

psycopg2.extras.REPLICATION_LOGICAL
psycopg2.extras.REPLICATION_PHYSICAL
class psycopg2.extras.LogicalReplicationConnection(*args, **kwargs)

This connection factory class can be used to open a special type of connection that is used for logical replication.

Example:

from psycopg2.extras import LogicalReplicationConnection
log_conn = psycopg2.connect(dsn, connection_factory=LogicalReplicationConnection)
log_cur = log_conn.cursor()
class psycopg2.extras.PhysicalReplicationConnection(*args, **kwargs)

This connection factory class can be used to open a special type of connection that is used for physical replication.

Example:

from psycopg2.extras import PhysicalReplicationConnection
phys_conn = psycopg2.connect(dsn, connection_factory=PhysicalReplicationConnection)
phys_cur = phys_conn.cursor()

Both LogicalReplicationConnection and PhysicalReplicationConnection use ReplicationCursor for actual communication with the server.

The individual messages in the replication stream are represented by ReplicationMessage objects (both logical and physical type):

class psycopg2.extras.ReplicationMessage

A replication protocol message.

payload

The actual data received from the server.

An instance of either bytes() or unicode(), depending on the value of the decode option passed to start_replication() on the connection. See read_message() for details.

data_size

The raw size of the message payload (before possible unicode conversion).

data_start

LSN position of the start of the message.

wal_end

LSN position of the current end of WAL on the server.

send_time

A datetime object representing the server timestamp at the moment when the message was sent.

cursor

A reference to the corresponding ReplicationCursor object.

class psycopg2.extras.ReplicationCursor

A cursor used for communication on replication connections.

create_replication_slot(slot_name, slot_type=None, output_plugin=None)

Create streaming replication slot.

Parameters:
  • slot_name - name of the replication slot to be created
  • slot_type - type of replication: should be either REPLICATION_LOGICAL or REPLICATION_PHYSICAL
  • output_plugin - name of the logical decoding output plugin to be used by the slot; required for logical replication connections, disallowed for physical

Example:

log_cur.create_replication_slot("logical1", output_plugin="test_decoding")
phys_cur.create_replication_slot("physical1")

# either logical or physical replication connection
cur.create_replication_slot("slot1", slot_type=REPLICATION_LOGICAL)

When creating a slot on a logical replication connection, a logical replication slot is created by default. Logical replication requires the name of the logical decoding output plugin to be specified.

When creating a slot on a physical replication connection, a physical replication slot is created by default. No output plugin parameter is required or allowed when creating a physical replication slot.

In either case the type of slot being created can be specified explicitly using the slot_type parameter.

Replication slots are a feature of PostgreSQL server starting with version 9.4.

drop_replication_slot(slot_name)

Drop streaming replication slot.

Parameters: slot_name - name of the replication slot to drop

Example:

# either logical or physical replication connection
cur.drop_replication_slot("slot1")

Replication slots are a feature of PostgreSQL server starting with version 9.4.

start_replication(slot_name=None, slot_type=None, start_lsn=0, timeline=0, options=None, decode=False)

Start replication on the connection.

Parameters:
  • slot_name - name of the replication slot to use; required for logical replication, while physical replication can work with or without a slot
  • slot_type - type of replication: should be either REPLICATION_LOGICAL or REPLICATION_PHYSICAL
  • start_lsn - the optional LSN position to start replicating from, can be an integer or a string of hexadecimal digits in the form XXX/XXX
  • timeline - WAL history timeline to start streaming from (optional, can only be used with physical replication)
  • options - a dictionary of options to pass to logical replication slot (not allowed with physical replication)
  • decode - a flag indicating that unicode conversion should be performed on messages received from the server

If a slot_name is specified, the slot must exist on the server and its type must match the replication type used.

If not specified using the slot_type parameter, the type of replication is defined by the type of replication connection. Logical replication is only allowed on a logical replication connection, but physical replication can be used with both types of connection.

On the other hand, physical replication doesn’t require a named replication slot to be used, only logical replication does. In any case logical replication and replication slots are a feature of PostgreSQL server starting with version 9.4. Physical replication can be used starting with 9.0.

If start_lsn is specified, the requested stream will start from that LSN. The default value of 0 passes the LSN 0/0, causing replay to begin at the last point for which the server got flush confirmation from the client, or the oldest available point for a new slot.

The server might produce an error if a WAL file for the given LSN has already been recycled or it may silently start streaming from a later position: the client can verify the actual position using information provided by the ReplicationMessage attributes. The exact server behavior depends on the type of replication and use of slots.

The timeline parameter can only be specified with physical replication and only starting with server version 9.3.

A dictionary of options may be passed to the logical decoding plugin on a logical replication slot. The set of supported options depends on the output plugin that was used to create the slot. Must be None for physical replication.

If decode is set to True the messages received from the server will be converted according to the connection encoding. This parameter should not be set with physical replication or with logical replication plugins that produce binary output.

This function constructs a START_REPLICATION command and calls start_replication_expert() internally.

After starting the replication, to actually consume the incoming server messages use consume_stream() or implement a loop around read_message() in case of an asynchronous connection.
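
For example, a sketch of starting logical replication on the slot created above (the options shown assume the test_decoding plugin; the supported options depend on the output plugin actually used):

log_cur.start_replication(slot_name="logical1", decode=True,
                          options={'include-xids': 'on'})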

start_replication_expert(command, decode=False)

Start replication on the connection using provided START_REPLICATION command.

Parameters:
  • command - The full replication command. It can be a string or a Composable instance for dynamic generation.
  • decode - a flag indicating that unicode conversion should be performed on messages received from the server.
consume_stream(consume, keepalive_interval=10)
Parameters:
  • consume - a callable object with signature consume(msg)
  • keepalive_interval - interval (in seconds) to send keepalive messages to the server

This method can only be used with a synchronous connection. For asynchronous connections see read_message().

Before using this method to consume the stream call start_replication() first.

This method enters an endless loop reading messages from the server and passing them to consume() one at a time, then waiting for more messages from the server. In order to make this method break out of the loop and return, consume() can throw a StopReplication exception. Any unhandled exception will make it break out of the loop as well.

The msg object passed to consume() is an instance of ReplicationMessage class. See read_message() for details about message decoding.

This method also sends keepalive messages to the server in case there was no new data from the server for the duration of keepalive_interval (in seconds). The value of this parameter must be at least 1 second, but it can have a fractional part.

After processing a certain amount of messages the client should send a confirmation message to the server. This should be done by calling the send_feedback() method on the corresponding replication cursor. A reference to the cursor is provided in the ReplicationMessage as an attribute.

The following example is a sketch implementation of a consume() callable for logical replication:

class LogicalStreamConsumer(object):

    ...

    def __call__(self, msg):
        self.process_message(msg.payload)

        if self.should_send_feedback(msg):
            msg.cursor.send_feedback(flush_lsn=msg.data_start)

consumer = LogicalStreamConsumer()
cur.consume_stream(consumer)

Warning

When using replication with slots, failure to constantly consume and report success to the server appropriately can eventually lead to a "disk full" condition on the server, because the server retains all the WAL segments that might be needed to stream the changes via all of the currently open replication slots.

On the other hand, it is not recommended to send a confirmation after every processed message, since that will put an unnecessary load on the network and the server. A possible strategy is to confirm after every COMMIT message.

send_feedback(write_lsn=0, flush_lsn=0, apply_lsn=0, reply=False)
Parameters:
  • write_lsn - an LSN position up to which the client has written the data locally
  • flush_lsn - an LSN position up to which the client has processed the data reliably (the server is allowed to discard any data that predates this LSN)
  • apply_lsn - an LSN position up to which the warm standby server has applied the changes (physical replication master-slave protocol only)
  • reply - request the server to send back a keepalive message immediately

Use this method to report to the server that all messages up to a certain LSN position have been processed on the client and may be discarded on the server.

This method can also be called with all parameters at their default values to just send a keepalive message to the server.

Low-level replication cursor methods for asynchronous connection operation.

With a synchronous connection a call to consume_stream() handles all the complexity of processing the incoming messages and sending keepalive replies, but at times it might be beneficial to use the low-level interface for better control, in particular to select on multiple sockets. The following methods are provided for asynchronous operation:

read_message()

Try to read the next message from the server without blocking and return an instance of ReplicationMessage, or None in case there are no more data messages from the server at the moment.

This method should be used in a loop with asynchronous connections (after calling start_replication() once). For synchronous connections see consume_stream().

The returned message’s payload is an instance of unicode decoded according to the connection encoding if and only if decode was set to True in the initial call to start_replication() on this connection, otherwise it is an instance of bytes with no decoding.

It is expected that the calling code will call this method repeatedly in order to consume all of the messages that might have been buffered until None is returned. After receiving None from this method the caller should use select() or poll() on the corresponding connection to block the process until there is more data from the server.

The server can send keepalive messages to the client periodically. Such messages are silently consumed by this method and are never reported to the caller.

fileno()

Call the corresponding connection’s fileno() method and return the result.

This is a convenience method which allows the replication cursor to be used directly in select() or poll() calls.

io_timestamp

A datetime object representing the timestamp at the moment of last communication with the server (a data or keepalive message in either direction).

An actual example of asynchronous operation might look like this:

from select import select
from datetime import datetime

def consume(msg):
    ...

keepalive_interval = 10.0
while True:
    msg = cur.read_message()
    if msg:
        consume(msg)
    else:
        now = datetime.now()
        timeout = keepalive_interval - (now - cur.io_timestamp).total_seconds()
        try:
            sel = select([cur], [], [], max(0, timeout))
            if not any(sel):
                cur.send_feedback()  # timed out, send keepalive message
        except InterruptedError:
            pass  # recalculate timeout and continue
class psycopg2.extras.StopReplication

Exception used to break out of the endless loop in consume_stream().

Subclass of Exception. Intentionally not inherited from Error as an occurrence of this exception does not indicate an error.

Additional data types

JSON adaptation

New in version 2.5.

Changed in version 2.5.4: added jsonb support. In previous versions jsonb values were returned as strings. See the FAQ for a workaround.

Psycopg can adapt Python objects to and from the PostgreSQL json and jsonb types. With PostgreSQL 9.2 and following versions adaptation is available out-of-the-box. To use JSON data with previous database versions (either with the 9.1 json extension, or if you want to convert text fields to JSON) you can use the register_json() function.

The Python library used by default to convert Python objects to JSON and to parse data from the database depends on the language version: with Python 2.6 and following the json module from the standard library is used; with previous versions the simplejson module is used if available. Note that the last simplejson version supporting Python 2.4 is 2.0.9.

In order to pass a Python object to the database as a query argument you can use the Json adapter:

curs.execute("insert into mytable (jsondata) values (%s)",
    [Json({'a': 100})])

Reading from the database, json and jsonb values will be automatically converted to Python objects.

Note

If you are using the PostgreSQL json data type but you want to read it as a string in Python instead of having it parsed, you can either cast the column to text in the query (it is an efficient operation that doesn’t involve a copy):

cur.execute("select jsondata::text from mytable")

or you can register a no-op loads() function with register_default_json():

psycopg2.extras.register_default_json(loads=lambda x: x)

Note

You can use register_adapter() to adapt any Python dictionary to JSON, by registering Json or any subclass or factory creating a compatible adapter:

psycopg2.extensions.register_adapter(dict, psycopg2.extras.Json)

This setting is global though, so it is not compatible with similar adapters such as the one registered by register_hstore(). Any other object supported by JSON can be registered the same way, but this will clobber the default adaptation rule, so be careful of unwanted side effects.

If you want to customize the adaptation from Python to PostgreSQL you can either provide a custom dumps() function to Json :

curs.execute("insert into mytable (jsondata) values (%s)",
    [Json({'a': 100}, dumps=simplejson.dumps)])

or you can subclass it overriding the dumps() method:

class MyJson(Json):
    def dumps(self, obj):
        return simplejson.dumps(obj)

curs.execute("insert into mytable (jsondata) values (%s)",
    [MyJson({'a': 100})])

Customizing the conversion from PostgreSQL to Python can be done by passing a custom loads() function to register_json(). For the builtin data types (json from PostgreSQL 9.2, jsonb from PostgreSQL 9.4) use register_default_json() and register_default_jsonb(). For example, if you want to convert the float values from json into Decimal you can use:

loads = lambda x: json.loads(x, parse_float=Decimal)
psycopg2.extras.register_json(conn, loads=loads)
class psycopg2.extras.Json(adapted, dumps=None)

An ISQLQuote wrapper to adapt a Python object to the json data type.

Json can be used to wrap any object supported by the provided dumps function. If none is provided, the standard json.dumps() is used (simplejson for Python < 2.6; getquoted() will raise ImportError if the module is not available).

dumps(obj)

Serialize obj in JSON format.

The default is to call json.dumps() or the dumps function provided in the constructor. You can override this method to create a customized JSON wrapper.

psycopg2.extras.register_json(conn_or_curs=None, globally=False, loads=None, oid=None, array_oid=None, name='json')

Create and register typecasters converting json type to Python objects.

Parameters:
  • conn_or_curs - a connection or cursor used to find the json and json[] oids; the typecasters are registered in a scope limited to this object, unless globally is set to True. It can be None if the oids are provided
  • globally - if False register the typecasters only on conn_or_curs, otherwise register them globally
  • loads - the function used to parse the data into a Python object. If None use json.loads(), where json is the module chosen according to the Python version (see above)
  • oid - the OID of the json type if known; if not, it will be queried on conn_or_curs
  • array_oid - the OID of the json[] array type if known; if not, it will be queried on conn_or_curs
  • name - the name of the data type to look for in conn_or_curs

The connection or cursor passed to the function will be used to query the database and look for the OID of the json type (or an alternative type if name is provided). No query is performed if oid and array_oid are provided. Raise ProgrammingError if the type is not found.

Changed in version 2.5.4: added the name parameter to enable jsonb support.

psycopg2.extras.register_default_json(conn_or_curs=None, globally=False, loads=None)

Create and register json typecasters for PostgreSQL 9.2 and following.

Since PostgreSQL 9.2 json is a builtin type, hence its oid is known and fixed. This function allows specifying a customized loads function for the default json type without querying the database. All the parameters have the same meaning as in register_json().

psycopg2.extras.register_default_jsonb(conn_or_curs=None, globally=False, loads=None)

Create and register jsonb typecasters for PostgreSQL 9.4 and following.

As in register_default_json(), the function allows registering a customized loads function for the jsonb type at its known oid for PostgreSQL 9.4 and following versions. All the parameters have the same meaning as in register_json().

New in version 2.5.4.

Hstore data type

New in version 2.3.

The hstore data type is a key-value store embedded in PostgreSQL. It has been available for several server versions but with the release 9.0 it has been greatly improved in capacity and usefulness with the addition of many functions. It supports GiST or GIN indexes allowing search by keys or key/value pairs as well as regular BTree indexes for equality, uniqueness etc.

Psycopg can convert Python dict objects to and from hstore structures. Only dictionaries with string/unicode keys and values are supported. None is also allowed as a value but not as a key. Psycopg uses a more efficient hstore representation when dealing with PostgreSQL 9.0 but previous server versions are supported as well. By default the adapter/typecaster are disabled: they can be enabled using the register_hstore() function, as sketched below.
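
A minimal sketch, assuming the hstore extension is installed in the database and conn is an open connection:

>>> psycopg2.extras.register_hstore(conn)
>>> cur = conn.cursor()
>>> cur.execute("SELECT %s::hstore", ({'a': '1', 'b': None},))
>>> cur.fetchone()[0] == {'a': '1', 'b': None}
True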

psycopg2.extras.register_hstore(conn_or_curs, globally=False, unicode=False, oid=None, array_oid=None)

Register adapter and typecaster for dict - hstore conversions.

Parameters:
  • conn_or_curs - a connection or cursor: the typecaster will be registered only on this object unless globally is set to True
  • globally - register the adapter globally, not only on conn_or_curs
  • unicode - if True , keys and values returned from the database will be unicode instead of str . The option is not available on Python 3
  • oid - the OID of the hstore type if known. If not, it will be queried on conn_or_curs .
  • array_oid - the OID of the hstore array type if known. If not, it will be queried on conn_or_curs .

The connection or cursor passed to the function will be used to query the database and look for the OID of the hstore type (which may be different across databases). If querying is not desirable (e.g. with asynchronous connections ) you may specify it in the oid parameter, which can be found using a query such as SELECT 'hstore'::regtype::oid . Analogously you can obtain a value for array_oid using a query such as SELECT 'hstore[]'::regtype::oid .

Note that, when passing a dictionary from Python to the database, both strings and unicode keys and values are supported. Dictionaries returned from the database have keys/values according to the unicode parameter.

The hstore contrib module must be already installed in the database (executing the hstore.sql script in your contrib directory). Raise ProgrammingError if the type is not found.

Changed in version 2.4: added the oid parameter. If not specified, the typecaster is installed also if hstore is not installed in the public schema.

Changed in version 2.4.3: added support for hstore array.

Composite types casting

New in version 2.4.

Using register_composite() it is possible to cast a PostgreSQL composite type (either created with the CREATE TYPE command or implicitly defined after a table row type) into a Python named tuple, or into a regular tuple if collections.namedtuple() is not found.

>>> cur.execute("CREATE TYPE card AS (value int, suit text);")
>>> psycopg2.extras.register_composite('card', cur)


>>> cur.execute("select (8, 'hearts')::card")
>>> cur.fetchone()[0]
card(value=8, suit='hearts')

Nested composite types are handled as expected, provided that the types of the composite components are registered as well.

>>> cur.execute("CREATE TYPE card_back AS (face card, back text);")
>>> psycopg2.extras.register_composite('card_back', cur)


>>> cur.execute("select ((8, 'hearts'), 'blue')::card_back")
>>> cur.fetchone()[0]
card_back(face=card(value=8, suit='hearts'), back='blue')

Conversely, adaptation from Python tuples to composite types is automatic and requires no adapter registration.

Note

If you want to convert PostgreSQL composite types into something different than a namedtuple you can subclass CompositeCaster overriding make(). For example, if you want to convert your type into a Python dictionary you can use:

>>> class DictComposite(psycopg2.extras.CompositeCaster):
...     def make(self, values):
...         return dict(zip(self.attnames, values))

>>> psycopg2.extras.register_composite('card', cur,
...     factory=DictComposite)

>>> cur.execute("select (8, 'hearts')::card")
>>> cur.fetchone()[0]
{'suit': 'hearts', 'value': 8}
psycopg2.extras.register_composite(name, conn_or_curs, globally=False, factory=None)

Register a typecaster to convert a composite type into a tuple.

Parameters:
  • name - the name of a PostgreSQL composite type, e.g. created using the CREATE TYPE command
  • conn_or_curs - a connection or cursor used to find the type oid and components; the typecaster is registered in a scope limited to this object, unless globally is set to True
  • globally - if False (default) register the typecaster only on conn_or_curs , otherwise register it globally
  • factory - if specified it should be a CompositeCaster subclass: use it to customize how to cast composite types
Returns:

the registered CompositeCaster or factory instance responsible for the conversion

Changed in version 2.4.3: added support for array of composite types

Changed in version 2.5: added the factory parameter

class psycopg2.extras.CompositeCaster(name, oid, attrs, array_oid=None, schema=None)

Helps conversion of a PostgreSQL composite type into a Python object.

The class is usually created by the register_composite() function. You may want to create and register instances of the class manually if querying the database at registration time is not desirable (such as when using asynchronous connections).
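
A minimal sketch of manual creation and registration, under a few assumptions: the oids below are hypothetical, attrs is assumed to be a sequence of (attribute name, attribute type oid) pairs, and the caster is assumed to expose the built typecaster as a typecaster attribute:

import psycopg2.extensions
import psycopg2.extras

# hypothetical oids: 16501 for the composite type, 23 (int4) and 25 (text)
# for its two components
caster = psycopg2.extras.CompositeCaster(
    'card', 16501, [('value', 23), ('suit', 25)])
psycopg2.extensions.register_type(caster.typecaster)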

make(values)

Return a new Python object representing the data being casted.

values is the list of attributes, already casted into their Python representation.

You can override this method to customize the composite cast.

New in version 2.5.

Object attributes:

name

The name of the PostgreSQL type.

schema

The schema where the type is defined.

New in version 2.5.

oid

The oid of the PostgreSQL type.

array_oid

The oid of the PostgreSQL array type, if available.

type

The type of the Python objects returned. If collections.namedtuple() is available, it is a named tuple with attributes equal to the type components. Otherwise it is just the tuple object.

attnames

List of component names of the type to be casted.

atttypes

List of component type oids of the type to be casted.

Range data types

New in version 2.5.

Psycopg offers a Range Python type and supports adaptation between it and PostgreSQL range types. Builtin range types are supported out-of-the-box; user-defined range types can be adapted using register_range().

class psycopg2.extras.Range(lower=None, upper=None, bounds='[)', empty=False)

Python representation for a PostgreSQL range type.

Parameters:
  • lower - lower bound for the range. None means unbound
  • upper - upper bound for the range. None means unbound
  • bounds - one of the literal strings (), [), (], [], representing whether the lower or upper bounds are included
  • empty - if True , the range is empty

This Python type is only used to pass and retrieve range values to and from PostgreSQL and doesn’t attempt to replicate the PostgreSQL range features: it doesn’t perform normalization and doesn’t implement all the operators supported by the database.

Range objects are immutable, hashable, and support the in operator (checking if an element is within the range). They can be tested for equivalence. Empty ranges evaluate to False in a boolean context, nonempty ones evaluate to True.

Changed in version 2.5.3: Range objects can be sorted although, as on the server side, this ordering is not particularly meaningful. It is only meant to be used by programs assuming objects using Range as primary key can be sorted on them. In previous versions comparing Ranges raises TypeError.

Although it is possible to instantiate Range objects, the class doesn’t have an adapter registered, so you cannot normally pass these instances as query arguments. To use range objects as query arguments you can either use one of the provided subclasses, such as NumericRange, or create a custom subclass using register_range().

Object attributes:

isempty

True if the range is empty.

lower

The lower bound of the range. None if empty or unbound.

upper

The upper bound of the range. None if empty or unbound.

lower_inc

True if the lower bound is included in the range.

upper_inc

True if the upper bound is included in the range.

lower_inf

True if the range doesn’t have a lower bound.

upper_inf

True if the range doesn’t have an upper bound.

The following Range subclasses map builtin PostgreSQL range types to Python objects: they have an adapter registered so their instances can be passed as query arguments. Range values read from database queries are automatically casted into instances of these classes.

class psycopg2.extras.NumericRange(lower=None, upper=None, bounds='[)', empty=False)

A Range suitable to pass Python numeric types to a PostgreSQL range.

PostgreSQL types int4range , int8range , numrange are casted into NumericRange instances.
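
For example (a sketch: the adapter emits a range literal, so an explicit cast to the concrete range type is used in the query):

>>> from psycopg2.extras import NumericRange
>>> cur.execute("SELECT %s::int4range", (NumericRange(10, 20),))
>>> cur.fetchone()[0]
NumericRange(10, 20, '[)')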

class psycopg2.extras.DateRange(lower=None, upper=None, bounds='[)', empty=False)

Represents daterange values.

class psycopg2.extras.DateTimeRange(lower=None, upper=None, bounds='[)', empty=False)

Represents tsrange values.

class psycopg2.extras.DateTimeTZRange(lower=None, upper=None, bounds='[)', empty=False)

Represents tstzrange values.

Note

Python lacks a representation for infinity dates, so Psycopg converts the value to date.max and similar ones. When written into the database these dates will assume their literal value (e.g. 9999-12-31 instead of infinity). Check Infinite dates handling for an example of an alternative adapter to map date.max to infinity. Such an alternative dates adapter will be used automatically by the DateRange adapter and so on.

Custom range types (created with CREATE TYPE ... AS RANGE) can be adapted to a custom Range subclass:

psycopg2.extras.register_range(pgrange, pyrange, conn_or_curs, globally=False)

Create and register an adapter and the typecasters to convert between a PostgreSQL range type and a Python Range subclass.

Parameters:
  • pgrange - the name of the PostgreSQL range type. Can be schema-qualified
  • pyrange - a strict Range subclass, or just a name to give to a new class
  • conn_or_curs - a connection or cursor used to find the oid of the range and its subtype; the typecaster is registered in a scope limited to this object, unless globally is set to True
  • globally - if False (default) register the typecaster only on conn_or_curs , otherwise register it globally
Returns:

RangeCaster instance responsible for the conversion

If a string is passed to pyrange, a new Range subclass is created with such a name and will be available as the range attribute of the returned RangeCaster object.

The function queries the database on conn_or_curs to inspect the pgrange type and raises ProgrammingError if the type is not found. If querying the database is not advisable, use the RangeCaster class directly and register the adapter and typecasters using the provided functions.
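
A minimal sketch, assuming a custom floatrange type created on the server:

>>> cur.execute("CREATE TYPE floatrange AS RANGE (subtype = float8)")
>>> caster = psycopg2.extras.register_range('floatrange', 'FloatRange', cur)
>>> FloatRange = caster.range
>>> cur.execute("SELECT '[1.5,2.5)'::floatrange")
>>> cur.fetchone()[0]
FloatRange(1.5, 2.5, '[)')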

class psycopg2.extras.RangeCaster(pgrange, pyrange, oid, subtype_oid, array_oid=None)

Helper class to convert between Range and PostgreSQL range types.

Objects of this class are usually created by register_range() . Manual creation could be useful if querying the database is not advisable: in this case the oids must be provided.

Object attributes:

range

The Range subclass adapted.

adapter

The ISQLQuote responsible for adapting range.

typecaster

The object responsible for casting.

array_typecaster

The object responsible for casting arrays, if available, else None.

UUID data type

New in version 2.0.9.

Changed in version 2.0.13: added UUID array support.

>>> psycopg2.extras.register_uuid()


>>> # Python UUID can be used in SQL queries
>>> import uuid
>>> my_uuid = uuid.UUID('{12345678-1234-5678-1234-567812345678}')
>>> psycopg2.extensions.adapt(my_uuid).getquoted()
"'12345678-1234-5678-1234-567812345678'::uuid"

>>> # PostgreSQL UUID are transformed into Python UUID objects.
>>> cur.execute("SELECT 'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid")
>>> cur.fetchone()[0]
UUID('a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11')
psycopg2.extras.register_uuid(oids=None, conn_or_curs=None)

Create the UUID type and a uuid.UUID adapter.

Parameters:
  • oids - oid for the PostgreSQL uuid type, or a 2-item sequence with the oids of the type and the array. If not specified, use the PostgreSQL standard oids.
  • conn_or_curs - where to register the typecaster. If not specified, register it globally.
class psycopg2.extras.UUID_adapter(uuid)

Adapt Python’s uuid.UUID type to PostgreSQL’s uuid.

Networking data types

By default Psycopg casts the PostgreSQL networking data types (inet, cidr, macaddr) into ordinary strings; arrays of such types are converted into lists of strings.

Changed in version 2.7: in previous versions arrays of networking types were not treated as arrays.

psycopg2.extras.register_ipaddress(conn_or_curs=None)

Register conversion support between ipaddress objects and network types.

Parameters: conn_or_curs - the scope where to register the type casters. If None register them globally.

After the function is called, PostgreSQL inet values will be converted into IPv4Interface or IPv6Interface objects, cidr values into IPv4Network or IPv6Network.
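
For example (a sketch, assuming an open cursor cur):

>>> psycopg2.extras.register_ipaddress()
>>> cur.execute("SELECT '192.168.0.1/24'::inet, '192.168.0.0/24'::cidr")
>>> cur.fetchone()
(IPv4Interface('192.168.0.1/24'), IPv4Network('192.168.0.0/24'))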

psycopg2.extras.register_inet(oid=None, conn_or_curs=None)

Create the INET type and an Inet adapter.

Parameters:
  • oid - oid for the PostgreSQL inet type, or a 2-item sequence with the oids of the type and the array. If not specified, use the PostgreSQL standard oids.
  • conn_or_curs - where to register the typecaster. If not specified, register it globally.

Deprecated since version 2.7: this function will not receive further development and may disappear in future versions.

>>> psycopg2.extras.register_inet()


>>> cur.mogrify("SELECT %s", (Inet('127.0.0.1/32'),))
"SELECT E'127.0.0.1/32'::inet"

>>> cur.execute("SELECT '192.168.0.1/24'::inet")
>>> cur.fetchone()[0].addr
'192.168.0.1/24'
class psycopg2.extras.Inet(addr)

Wrap a string to allow for correct SQL-quoting of inet values.

Note that this adapter does NOT check the passed value to make sure it really is an inet-compatible address but DOES call adapt() on it to make sure it is impossible to execute an SQL-injection by passing an evil value to the initializer.

Deprecated since version 2.7: this object will not receive further development and may disappear in future versions.

Fast execution helpers

The current implementation of executemany() is (using an extremely charitable understatement) not particularly performing. These functions can be used to speed up the repeated execution of a statement against a set of parameters. By reducing the number of server roundtrips the performance can be orders of magnitude better than using executemany().

psycopg2.extras.execute_batch(cur, sql, argslist, page_size=100)

Execute groups of statements in fewer server roundtrips.

Execute sql several times, against all parameter sets (sequences or mappings) found in argslist.

The function is semantically similar to

cur.executemany(sql, argslist)

but has a different implementation: Psycopg will join the statements into fewer multi-statement commands, each one containing at most page_size statements, resulting in a reduced number of server roundtrips.

After the execution of the function the cursor.rowcount property will not contain a total result.

New in version 2.7.
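
For example, a minimal sketch (the test table and its columns are hypothetical):

from psycopg2.extras import execute_batch

execute_batch(cur,
    "INSERT INTO test (id, v1, v2) VALUES (%s, %s, %s)",
    [(1, 2, 3), (4, 5, 6), (7, 8, 9)])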

Note

execute_batch() can also be used in conjunction with PostgreSQL prepared statements using PREPARE, EXECUTE, DEALLOCATE. Instead of executing:

execute_batch(cur,
    "big and complex SQL with %s %s params",
    params_list)

it is possible to execute something like:

cur.execute("PREPARE stmt AS big and complex SQL with $1 $2 params")
execute_batch(cur, "EXECUTE stmt (%s, %s)", params_list)
cur.execute("DEALLOCATE stmt")

which may bring further performance benefits: if the operation to perform is complex, every single execution will be faster as the query plan is already cached; furthermore the amount of data to send to the server will be smaller (one EXECUTE per parameter set instead of the whole, likely longer, statement).

psycopg2.extras.execute_values(cur, sql, argslist, template=None, page_size=100)

Execute a statement using VALUES with a sequence of parameters.

Parameters:
  • cur - the cursor to use to execute the query.
  • sql - the query to execute. It must contain a single %s placeholder, which will be replaced by a VALUES list. Example: "INSERT INTO mytable (id, f1, f2) VALUES %s".
  • argslist - sequence of sequences or dictionaries with the arguments to send to the query. The type and content must be consistent with template .
  • template - the snippet to merge to every item in argslist to compose the query. If the argslist items are sequences it should contain positional placeholders (e.g. "(%s, %s, %s)", or "(%s, %s, 42)" if there are constant values...); if the argslist items are mappings it should contain named placeholders (e.g. "(%(id)s, %(f1)s, 42)"). If not specified, assume the arguments are sequences and use a simple positional template (i.e. (%s, %s, ...)), with the number of placeholders sniffed from the first element in argslist.
  • page_size - maximum number of argslist items to include in every statement. If there are more items the function will execute more than one statement.

After the execution of the function the cursor.rowcount property will not contain a total result.

While INSERT is an obvious candidate for this function it is possible to use it with other statements, for example:

>>> cur.execute(
... "create table test (id int primary key, v1 int, v2 int)")

>>> execute_values(cur,
... "INSERT INTO test (id, v1, v2) VALUES %s",
... [(1, 2, 3), (4, 5, 6), (7, 8, 9)])

>>> execute_values(cur,
... """UPDATE test SET v1 = data.v1 FROM (VALUES %s) AS data (id, v1)
... WHERE test.id = data.id""",
... [(1, 20), (4, 50)])

>>> cur.execute("select * from test order by id")
>>> cur.fetchall()
[(1, 20, 3), (4, 50, 6), (7, 8, 9)]

New in version 2.7.

Fractional time zones

psycopg2.extras.register_tstz_w_secs(oids=None, conn_or_curs=None)

The function used to register an alternate type caster for TIMESTAMP WITH TIME ZONE to deal with historical time zones with seconds in the UTC offset.

These are now correctly handled by the default type caster, so currently the function doesn’t do anything.

New in version 2.0.9.

Changed in version 2.2.2: function is no-op: see Time zones handling .

Coroutine support

psycopg2.extras.wait_select(conn)

Wait until a connection or cursor has data available.

The function is an example of a wait callback to be registered with set_wait_callback(). This function uses select() to wait for data to become available.

Changed in version 2.6.2: allow cancelling a query using Ctrl-C; see the FAQ for an example.
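
For example, to put all new connections in coroutine-friendly mode using this callback:

import psycopg2.extensions
import psycopg2.extras

# register the example callback for all subsequent connections
psycopg2.extensions.set_wait_callback(psycopg2.extras.wait_select)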