psycopg2.extras - Miscellaneous goodies for Psycopg 2
This module is a generic place used to hold little helper functions and classes until a better place in the distribution is found.
Connection and cursor subclasses
A few objects that change the way the results are returned by the cursor or modify the object behavior in some other way. Typically cursor subclasses are passed as the cursor_factory argument to connect() so that the connection's cursor() method will generate objects of this class. Alternatively a cursor subclass can be used one-off by passing it as the cursor_factory argument to the cursor() method.

If you want to use a connection subclass you can pass it as the connection_factory argument of the connect() function.
Dictionary-like cursor
The dict cursors allow access to the attributes of retrieved records using an interface similar to Python dictionaries instead of tuples.
>>> dict_cur = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)
>>> dict_cur.execute("INSERT INTO test (num, data) VALUES(%s, %s)",
... (100, "abc'def"))
>>> dict_cur.execute("SELECT * FROM test")
>>> rec = dict_cur.fetchone()
>>> rec['id']
1
>>> rec['num']
100
>>> rec['data']
"abc'def"
The records still support indexing as the original tuple:
>>> rec[2]
"abc'def"
- class psycopg2.extras.DictCursor(*args, **kwargs)

  A cursor that keeps a list of column name -> index mappings.

- class psycopg2.extras.DictConnection

  A connection that uses DictCursor automatically.

  Note: Not very useful since Psycopg 2.5: you can use psycopg2.connect(dsn, cursor_factory=DictCursor) instead of DictConnection.
Real dictionary cursor

- class psycopg2.extras.RealDictCursor(*args, **kwargs)

  A cursor that uses a real dict as the base type for rows.

  Note that this cursor is extremely specialized and does not allow the normal access (using integer indices) to fetched data. If you need to access database rows both as a dictionary and a list, then use the generic DictCursor instead of RealDictCursor.
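  For example, a minimal sketch reusing the test table above (the KeyError shown is our assumption, following from the dict-only interface):

  >>> rd_cur = conn.cursor(cursor_factory=psycopg2.extras.RealDictCursor)
  >>> rd_cur.execute("SELECT * FROM test")
  >>> rec = rd_cur.fetchone()
  >>> rec['num']
  100
  >>> rec[2]  # integer indexing is not supported
  Traceback (most recent call last):
      ...
  KeyError: 2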
- class psycopg2.extras.RealDictConnection

  A connection that uses RealDictCursor automatically.

  Note: Not very useful since Psycopg 2.5: you can use psycopg2.connect(dsn, cursor_factory=RealDictCursor) instead of RealDictConnection.
namedtuple cursor

Added in version 2.3.

- class psycopg2.extras.NamedTupleCursor

  A cursor that generates results as namedtuple.

  fetch*() methods will return named tuples instead of regular tuples, so their elements can be accessed both as regular numeric items as well as attributes.

  >>> nt_cur = conn.cursor(cursor_factory=psycopg2.extras.NamedTupleCursor)
  >>> rec = nt_cur.fetchone()
  >>> rec
  Record(id=1, num=100, data="abc'def")
  >>> rec[1]
  100
  >>> rec.data
  "abc'def"
- class psycopg2.extras.NamedTupleConnection

  A connection that uses NamedTupleCursor automatically.

  Note: Not very useful since Psycopg 2.5: you can use psycopg2.connect(dsn, cursor_factory=NamedTupleCursor) instead of NamedTupleConnection.
Logging cursor

- class psycopg2.extras.LoggingConnection

  A connection that logs all queries to a file or logger object.

- class psycopg2.extras.LoggingCursor

  A cursor that logs queries using its connection logging facilities.

  Note: Queries that are executed with cursor.executemany() are not logged.
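For example, a minimal usage sketch (a dsn variable is assumed; initialize() attaches the file or logger object to log to):

import logging

import psycopg2
from psycopg2.extras import LoggingConnection

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)

conn = psycopg2.connect(dsn, connection_factory=LoggingConnection)
conn.initialize(logger)  # an open file object would work too
cur = conn.cursor()
cur.execute("SELECT 1")  # the statement is logged on logger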
- class psycopg2.extras.MinTimeLoggingConnection

  A connection that logs queries based on execution time.

  This is just an example of how to sub-class LoggingConnection to provide some extra filtering for the logged queries. Both the initialize() and filter() methods are overwritten to make sure that only queries executing for more than mintime ms are logged.

  Note that this connection uses the specialized cursor MinTimeLoggingCursor.

- class psycopg2.extras.MinTimeLoggingCursor

  The cursor sub-class companion to MinTimeLoggingConnection.
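  Reusing the logger from the sketch above, a usage sketch (assuming initialize() accepts the mintime threshold in milliseconds, per the description):

  from psycopg2.extras import MinTimeLoggingConnection

  conn = psycopg2.connect(dsn, connection_factory=MinTimeLoggingConnection)
  conn.initialize(logger, mintime=500)  # assumed: log only queries taking >= 500 ms
  cur = conn.cursor()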
Replication support objects

See Replication protocol support for an introduction to the topic.

The following replication types are defined:

- class psycopg2.extras.LogicalReplicationConnection(*args, **kwargs)

  This connection factory class can be used to open a special type of connection that is used for logical replication.

  Example:

  from psycopg2.extras import LogicalReplicationConnection
  log_conn = psycopg2.connect(dsn, connection_factory=LogicalReplicationConnection)
  log_cur = log_conn.cursor()

- class psycopg2.extras.PhysicalReplicationConnection(*args, **kwargs)

  This connection factory class can be used to open a special type of connection that is used for physical replication.

  Example:

  from psycopg2.extras import PhysicalReplicationConnection
  phys_conn = psycopg2.connect(dsn, connection_factory=PhysicalReplicationConnection)
  phys_cur = phys_conn.cursor()
Both LogicalReplicationConnection and PhysicalReplicationConnection use ReplicationCursor for actual communication with the server.

The individual messages in the replication stream are represented by ReplicationMessage objects (both logical and physical type):
- class psycopg2.extras.ReplicationMessage

  A replication protocol message.

  - payload

    The actual data received from the server.

    An instance of either bytes() or unicode(), depending on the value of the decode option passed to start_replication() on the connection. See read_message() for details.

  - send_time

    A datetime object representing the server timestamp at the moment when the message was sent.

  - cursor

    A reference to the corresponding ReplicationCursor object.
- class psycopg2.extras.ReplicationCursor

  A cursor used for communication on replication connections.

  - create_replication_slot(slot_name, slot_type=None, output_plugin=None)

    Create streaming replication slot.

    Parameters:
    - slot_name - name of the replication slot to be created
    - slot_type - type of replication: should be either REPLICATION_LOGICAL or REPLICATION_PHYSICAL
    - output_plugin - name of the logical decoding output plugin to be used by the slot; required for logical replication connections, disallowed for physical

    Example:

    log_cur.create_replication_slot("logical1", "test_decoding")
    phys_cur.create_replication_slot("physical1")

    # either logical or physical replication connection
    cur.create_replication_slot("slot1", slot_type=REPLICATION_LOGICAL)

    When creating a slot on a logical replication connection, a logical replication slot is created by default. Logical replication requires the name of the logical decoding output plugin to be specified.

    When creating a slot on a physical replication connection, a physical replication slot is created by default. No output plugin parameter is required or allowed when creating a physical replication slot.

    In either case the type of slot being created can be specified explicitly using the slot_type parameter.

    Replication slots are a feature of PostgreSQL server starting with version 9.4.
  - drop_replication_slot(slot_name)

    Drop streaming replication slot.

    Parameters:
    - slot_name - name of the replication slot to drop

    Example:

    # either logical or physical replication connection
    cur.drop_replication_slot("slot1")

    Replication slots are a feature of PostgreSQL server starting with version 9.4.
  - start_replication(slot_name=None, slot_type=None, start_lsn=0, timeline=0, options=None, decode=False, status_interval=10)

    Start replication on the connection.

    Parameters:
    - slot_name - name of the replication slot to use; required for logical replication, physical replication can work with or without a slot
    - slot_type - type of replication: should be either REPLICATION_LOGICAL or REPLICATION_PHYSICAL
    - start_lsn - the optional LSN position to start replicating from, can be an integer or a string of hexadecimal digits in the form XXX/XXX
    - timeline - WAL history timeline to start streaming from (optional, can only be used with physical replication)
    - options - a dictionary of options to pass to the logical replication slot (not allowed with physical replication)
    - decode - a flag indicating that unicode conversion should be performed on messages received from the server
    - status_interval - time between feedback packets sent to the server

    If a slot_name is specified, the slot must exist on the server and its type must match the replication type used.

    If not specified using the slot_type parameter, the type of replication is defined by the type of replication connection. Logical replication is only allowed on a logical replication connection, but physical replication can be used with both types of connection.

    On the other hand, physical replication doesn't require a named replication slot to be used, only logical replication does. In any case logical replication and replication slots are a feature of PostgreSQL server starting with version 9.4. Physical replication can be used starting with 9.0.

    If start_lsn is specified, the requested stream will start from that LSN. The default is None, which passes the LSN 0/0, causing replay to begin at the last point for which the server got flush confirmation from the client, or the oldest available point for a new slot.

    The server might produce an error if a WAL file for the given LSN has already been recycled, or it may silently start streaming from a later position: the client can verify the actual position using information provided by the ReplicationMessage attributes. The exact server behavior depends on the type of replication and use of slots.

    The timeline parameter can only be specified with physical replication and only starting with server version 9.3.

    A dictionary of options may be passed to the logical decoding plugin on a logical replication slot. The set of supported options depends on the output plugin that was used to create the slot. Must be None for physical replication.

    If decode is set to True the messages received from the server will be converted according to the connection encoding. This parameter should not be set with physical replication or with logical replication plugins that produce binary output.

    The replication stream should periodically send feedback to the database to prevent disconnection via timeout. Feedback is automatically sent when read_message() is called or during a run of consume_stream(). To specify the feedback interval use the status_interval parameter. The value of this parameter must be set to at least 1 second, but it can have a fractional part.

    This function constructs a START_REPLICATION command and calls start_replication_expert() internally.

    After starting the replication, to actually consume the incoming server messages use consume_stream() or implement a loop around read_message() in case of asynchronous connection.

    Changed in version 2.8.3: added the status_interval parameter.
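    Putting these calls together, a minimal sketch of starting logical replication with the test_decoding plugin used in the examples above (dsn is assumed to be defined):

    from psycopg2.extras import LogicalReplicationConnection

    conn = psycopg2.connect(dsn, connection_factory=LogicalReplicationConnection)
    cur = conn.cursor()
    cur.create_replication_slot("logical1", output_plugin="test_decoding")
    cur.start_replication(slot_name="logical1", decode=True)
    # now consume messages with consume_stream() or read_message()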
  - start_replication_expert(command, decode=False, status_interval=10)

    Start replication on the connection using the provided START_REPLICATION command.

    Parameters:
    - command - the full replication command. It can be a string or a Composable instance for dynamic generation.
    - decode - a flag indicating that unicode conversion should be performed on messages received from the server
    - status_interval - time between feedback packets sent to the server

    Changed in version 2.8.3: added the status_interval parameter.
  - consume_stream(consume, keepalive_interval=None)

    Parameters:
    - consume - a callable object with signature consume(msg)
    - keepalive_interval - interval (in seconds) to send keepalive messages to the server

    This method can only be used with a synchronous connection. For asynchronous connections see read_message().

    Before using this method to consume the stream, call start_replication() first.

    This method enters an endless loop reading messages from the server and passing them to consume() one at a time, then waiting for more messages from the server. In order to make this method break out of the loop and return, consume() can throw a StopReplication exception. Any unhandled exception will make it break out of the loop as well.

    The msg object passed to consume() is an instance of the ReplicationMessage class. See read_message() for details about message decoding.

    This method also sends feedback messages to the server every keepalive_interval (in seconds). The value of this parameter must be set to at least 1 second, but it can have a fractional part. If keepalive_interval is not specified, the value of status_interval specified in start_replication() or start_replication_expert() will be used.

    The client must confirm every processed message by calling the send_feedback() method on the corresponding replication cursor. A reference to the cursor is provided in the ReplicationMessage as an attribute.

    The following example is a sketch implementation of a consume() callable for logical replication:

    class LogicalStreamConsumer(object):

        # ...

        def __call__(self, msg):
            self.process_message(msg.payload)
            msg.cursor.send_feedback(flush_lsn=msg.data_start)

    consumer = LogicalStreamConsumer()
    cur.consume_stream(consumer)

    Warning: When using replication with slots, failure to constantly consume and report success to the server appropriately can eventually lead to a "disk full" condition on the server, because the server retains all the WAL segments that might be needed to stream the changes via all of the currently open replication slots.

    Changed in version 2.8.3: changed the default value of the keepalive_interval parameter to None.
  - send_feedback(write_lsn=0, flush_lsn=0, apply_lsn=0, reply=False, force=False)

    Parameters:
    - write_lsn - an LSN position up to which the client has written the data locally
    - flush_lsn - an LSN position up to which the client has processed the data reliably (the server is allowed to discard all and every data that predates this LSN)
    - apply_lsn - an LSN position up to which the warm standby server has applied the changes (physical replication master-slave protocol only)
    - reply - request the server to send back a keepalive message immediately
    - force - force sending a feedback message regardless of status_interval timeout

    Use this method to report to the server that all messages up to a certain LSN position have been processed on the client and may be discarded on the server.

    If the reply or force parameters are not set, this method will just update internal structures without sending the feedback message to the server. The library sends the feedback message automatically when the status_interval timeout is reached. For this to work, you must call send_feedback() on the same cursor that you called start_replication() on (the one in message.cursor) or your feedback will be lost.

    Changed in version 2.8.3: added the force parameter.
  Low-level replication cursor methods for asynchronous connection operation.

  With a synchronous connection a call to consume_stream() handles all the complexity of handling the incoming messages and sending keepalive replies, but at times it might be beneficial to use the low-level interface for better control, in particular to select on multiple sockets. The following methods are provided for asynchronous operation:

  - read_message()

    Try to read the next message from the server without blocking and return an instance of ReplicationMessage, or None in case there are no more data messages from the server at the moment.

    This method should be used in a loop with asynchronous connections (after calling start_replication() once). For synchronous connections see consume_stream().

    The returned message's payload is an instance of unicode decoded according to the connection encoding iff decode was set to True in the initial call to start_replication() on this connection, otherwise it is an instance of bytes with no decoding.

    It is expected that the calling code will call this method repeatedly in order to consume all of the messages that might have been buffered, until None is returned. After receiving None from this method the caller should use select() or poll() on the corresponding connection to block the process until there is more data from the server.

    Last, but not least, this method sends feedback messages when the status_interval timeout is reached or when a keepalive message with a reply request arrives from the server.
  - fileno()

    Call the corresponding connection's fileno() method and return the result.

    This is a convenience method which allows the replication cursor to be used directly in select() or poll() calls.
  - io_timestamp

    A datetime object representing the timestamp at the moment of last communication with the server (a data or keepalive message in either direction).

  - feedback_timestamp

    A datetime object representing the timestamp at the moment when the last feedback message was sent to the server.

    Added in version 2.8.3.

  - wal_end

    LSN position of the current end of WAL on the server at the moment of the last data or keepalive message received from the server.

    Added in version 2.8.
An actual example of asynchronous operation might look like this:

from select import select
from datetime import datetime

def consume(msg):
    # ...
    msg.cursor.send_feedback(flush_lsn=msg.data_start)

status_interval = 10.0
while True:
    msg = cur.read_message()
    if msg:
        consume(msg)
    else:
        now = datetime.now()
        timeout = status_interval - (now - cur.feedback_timestamp).total_seconds()
        try:
            sel = select([cur], [], [], max(0, timeout))
        except InterruptedError:
            pass  # recalculate timeout and continue
- class psycopg2.extras.StopReplication

  Exception used to break out of the endless loop in consume_stream().

  Subclass of Exception. Intentionally not inherited from Error, as an occurrence of this exception does not indicate an error.
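  For instance, a sketch of a consume() callable that stops the stream after a fixed number of messages (the counting logic is illustrative, not part of the library):

  import psycopg2.extras

  class LimitedConsumer(object):
      def __init__(self, limit):
          self.limit = limit
          self.seen = 0

      def __call__(self, msg):
          # confirm the message, then count it
          msg.cursor.send_feedback(flush_lsn=msg.data_start)
          self.seen += 1
          if self.seen >= self.limit:
              raise psycopg2.extras.StopReplication()

  cur.consume_stream(LimitedConsumer(100))  # returns after 100 messages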
Additional data types

JSON adaptation

Added in version 2.5.

Changed in version 2.5.4: added jsonb support. In previous versions jsonb values are returned as strings. See the FAQ for a workaround.

Psycopg can adapt Python objects to and from the PostgreSQL json and jsonb types. With PostgreSQL 9.2 and following versions adaptation is available out-of-the-box. To use JSON data with previous database versions (either with the 9.1 json extension, or even if you just want to convert text fields to JSON) you can use the register_json() function.
The Python json module is used by default to convert Python objects to JSON and to parse data from the database.

In order to pass a Python object to the database as a query argument you can use the Json adapter:

curs.execute("insert into mytable (jsondata) values (%s)",
    [Json({'a': 100})])

Reading from the database, json and jsonb values will be automatically converted to Python objects.
Note

If you are using the PostgreSQL json data type but you want to read it as string in Python instead of having it parsed, you can either cast the column to text in the query (it is an efficient operation, that doesn't involve a copy):

cur.execute("select jsondata::text from mytable")

or you can register a no-op loads() function with register_default_json():

psycopg2.extras.register_default_json(loads=lambda x: x)
Note

You can use register_adapter() to adapt any Python dictionary to JSON, either registering Json or any subclass or factory creating a compatible adapter:

psycopg2.extensions.register_adapter(dict, psycopg2.extras.Json)

This setting is global though, so it is not compatible with similar adapters such as the one registered by register_hstore(). Any other object supported by JSON can be registered the same way, but this will clobber the default adaptation rule, so be careful of unwanted side effects.
If you want to customize the adaptation from Python to PostgreSQL you can either provide a custom dumps() function to Json:

curs.execute("insert into mytable (jsondata) values (%s)",
    [Json({'a': 100}, dumps=simplejson.dumps)])

or you can subclass it overriding the dumps() method:
class MyJson(Json):
def dumps(self, obj):
return simplejson.dumps(obj)
curs.execute("insert into mytable (jsondata) values (%s)",
[MyJson({'a': 100})])
Customizing the conversion from PostgreSQL to Python can be done passing a custom loads() function to register_json(). For the builtin data types (json from PostgreSQL 9.2, jsonb from PostgreSQL 9.4) use register_default_json() and register_default_jsonb(). For example, if you want to convert the float values from json into Decimal you can use:

loads = lambda x: json.loads(x, parse_float=Decimal)
psycopg2.extras.register_json(conn, loads=loads)
Or, if you want to use an alternative JSON module implementation, such as the faster UltraJSON, you can use:
psycopg2.extras.register_default_json(loads=ujson.loads, globally=True)
psycopg2.extras.register_default_jsonb(loads=ujson.loads, globally=True)
- class psycopg2.extras.Json(adapted, dumps=None)

  An ISQLQuote wrapper to adapt a Python object to the json data type.

  Json can be used to wrap any object supported by the provided dumps function. If none is provided, the standard json.dumps() is used.
- psycopg2.extras.register_json(conn_or_curs=None, globally=False, loads=None, oid=None, array_oid=None, name='json')

  Create and register typecasters converting json type to Python objects.

  Parameters:
  - conn_or_curs - a connection or cursor used to find the json and json[] oids; the typecasters are registered in a scope limited to this object, unless globally is set to True. It can be None if the oids are provided
  - globally - if False register the typecasters only on conn_or_curs, otherwise register them globally
  - loads - the function used to parse the data into a Python object. If None use json.loads(), where json is the module chosen according to the Python version (see above)
  - oid - the OID of the json type if known; if not, it will be queried on conn_or_curs
  - array_oid - the OID of the json[] array type if known; if not, it will be queried on conn_or_curs
  - name - the name of the data type to look for in conn_or_curs

  The connection or cursor passed to the function will be used to query the database and look for the OID of the json type (or an alternative type if name is provided). No query is performed if oid and array_oid are provided. Raise ProgrammingError if the type is not found.

  Changed in version 2.5.4: added the name parameter to enable jsonb support.
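  For example, a minimal sketch (conn is an open connection; the oid values in the second call are illustrative, not universal):

  import psycopg2.extras

  # register on this connection only; the oids are queried on conn
  psycopg2.extras.register_json(conn)

  # or skip the query (e.g. on an asynchronous connection) by passing
  # the oids explicitly
  psycopg2.extras.register_json(oid=114, array_oid=199, globally=True)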
- psycopg2.extras.register_default_json(conn_or_curs=None, globally=False, loads=None)

  Create and register json typecasters for PostgreSQL 9.2 and following.

  Since PostgreSQL 9.2, json is a builtin type, so its oid is known and fixed. This function allows specifying a customized loads function for the default json type without querying the database. All the parameters have the same meaning as in register_json().
- psycopg2.extras.register_default_jsonb(conn_or_curs=None, globally=False, loads=None)

  Create and register jsonb typecasters for PostgreSQL 9.4 and following.

  As in register_default_json(), the function allows registering a customized loads function for the jsonb type at its known oid for PostgreSQL 9.4 and following versions. All the parameters have the same meaning as in register_json().

  Added in version 2.5.4.
Hstore data type

Added in version 2.3.

The hstore data type is a key-value store embedded in PostgreSQL. It has been available for several server versions but with the release 9.0 it has been greatly improved in capacity and usefulness with the addition of many functions. It supports GiST or GIN indexes allowing search by keys or key/value pairs as well as regular BTree indexes for equality, uniqueness etc.

Psycopg can convert Python dict objects to and from hstore structures. Only dictionaries with string/unicode keys and values are supported. None is also allowed as value but not as a key. Psycopg uses a more efficient hstore representation when dealing with PostgreSQL 9.0 but previous server versions are supported as well. By default the adapter/typecaster are disabled: they can be enabled using the register_hstore() function.
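For example, a minimal sketch, assuming a table mytable with an hstore column attrs:

import psycopg2.extras

psycopg2.extras.register_hstore(conn)
cur = conn.cursor()

# a dict is adapted to hstore on the way in...
cur.execute("INSERT INTO mytable (attrs) VALUES (%s)", [{'a': '1', 'b': None}])

# ...and hstore values are returned as dicts
cur.execute("SELECT attrs FROM mytable")
cur.fetchone()[0]  # -> {'a': '1', 'b': None}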
- psycopg2.extras.register_hstore(conn_or_curs, globally=False, unicode=False, oid=None, array_oid=None)

  Register adapter and typecaster for dict-hstore conversions.

  Parameters:
  - conn_or_curs - a connection or cursor: the typecaster will be registered only on this object unless globally is set to True
  - globally - register the adapter globally, not only on conn_or_curs
  - unicode - if True, keys and values returned from the database will be unicode instead of str. The option is not available on Python 3
  - oid - the OID of the hstore type if known. If not, it will be queried on conn_or_curs.
  - array_oid - the OID of the hstore array type if known. If not, it will be queried on conn_or_curs.

  The connection or cursor passed to the function will be used to query the database and look for the OID of the hstore type (which may be different across databases). If querying is not desirable (e.g. with asynchronous connections) you may specify it in the oid parameter, which can be found using a query such as SELECT 'hstore'::regtype::oid. Analogously you can obtain a value for array_oid using a query such as SELECT 'hstore[]'::regtype::oid.

  Note that, when passing a dictionary from Python to the database, both strings and unicode keys and values are supported. Dictionaries returned from the database have keys/values according to the unicode parameter.

  The hstore contrib module must be already installed in the database (executing the hstore.sql script in your contrib directory). Raise ProgrammingError if the type is not found.

  Changed in version 2.4: added the oid parameter. If not specified, the typecaster is installed also if hstore is not installed in the public schema.

  Changed in version 2.4.3: added support for hstore array.
Composite types casting

Added in version 2.4.

Using register_composite() it is possible to cast a PostgreSQL composite type (either created with the CREATE TYPE command or implicitly defined after a table row type) into a Python named tuple, or into a regular tuple if collections.namedtuple() is not found.
>>> cur.execute("CREATE TYPE card AS (value int, suit text);")
>>> psycopg2.extras.register_composite('card', cur)
>>> cur.execute("select (8, 'hearts')::card")
>>> cur.fetchone()[0]
card(value=8, suit='hearts')
Nested composite types are handled as expected, provided that the type of the composite components are registered as well.
>>> cur.execute("CREATE TYPE card_back AS (face card, back text);")
>>> psycopg2.extras.register_composite('card_back', cur)
>>> cur.execute("select ((8, 'hearts'), 'blue')::card_back")
>>> cur.fetchone()[0]
card_back(face=card(value=8, suit='hearts'), back='blue')
Adaptation in the opposite direction, from Python tuples to composite types, is automatic and requires no adapter registration.
Note

If you want to convert PostgreSQL composite types into something different than a namedtuple you can subclass the CompositeCaster overriding make(). For example, if you want to convert your type into a Python dictionary you can use:
>>> class DictComposite(psycopg2.extras.CompositeCaster):
... def make(self, values):
... return dict(zip(self.attnames, values))
>>> psycopg2.extras.register_composite('card', cur,
... factory=DictComposite)
>>> cur.execute("select (8, 'hearts')::card")
>>> cur.fetchone()[0]
{'suit': 'hearts', 'value': 8}
- psycopg2.extras.register_composite(name, conn_or_curs, globally=False, factory=None)

  Register a typecaster to convert a composite type into a tuple.

  Parameters:
  - name - the name of a PostgreSQL composite type, e.g. created using the CREATE TYPE command
  - conn_or_curs - a connection or cursor used to find the type oid and components; the typecaster is registered in a scope limited to this object, unless globally is set to True
  - globally - if False (default) register the typecaster only on conn_or_curs, otherwise register it globally
  - factory - if specified it should be a CompositeCaster subclass: use it to customize how to cast composite types

  Returns:
  the registered CompositeCaster or factory instance responsible for the conversion

  Changed in version 2.4.3: added support for array of composite types

  Changed in version 2.5: added the factory parameter
- class psycopg2.extras.CompositeCaster(name, oid, attrs, array_oid=None, schema=None)

  Helps conversion of a PostgreSQL composite type into a Python object.

  The class is usually created by the register_composite() function. You may want to create and register manually instances of the class if querying the database at registration time is not desirable (such as when using asynchronous connections).

  - make(values)

    Return a new Python object representing the data being casted. values is the list of attributes, already casted into their Python representation.

    You can subclass this method to customize the composite cast.

    Added in version 2.5.

  Object attributes:

  - type

    The type of the Python objects returned. If collections.namedtuple() is available, it is a named tuple with attributes equal to the type components. Otherwise it is just the tuple object.
Range data types

Added in version 2.5.

Psycopg offers a Range Python type and supports adaptation between them and PostgreSQL range types. Builtin range types are supported out-of-the-box; user-defined range types can be adapted using register_range().
- class psycopg2.extras.Range(lower=None, upper=None, bounds='[)', empty=False)

  Python representation for a PostgreSQL range type.

  Parameters:
  - lower - lower bound for the range. None means unbound
  - upper - upper bound for the range. None means unbound
  - bounds - one of the literal strings (), [), (], [], representing whether the lower or upper bounds are included
  - empty - if True, the range is empty

  This Python type is only used to pass and retrieve range values to and from PostgreSQL and doesn't attempt to replicate the PostgreSQL range features: it doesn't perform normalization and doesn't implement all the operators supported by the database.

  Range objects are immutable, hashable, and support the in operator (checking if an element is within the range). They can be tested for equivalence. Empty ranges evaluate to False in a boolean context, nonempty ones evaluate to True.

  Changed in version 2.5.3: Range objects can be sorted although, as on the server side, this ordering is not particularly meaningful. It is only meant to be used by programs assuming objects using Range as primary key can be sorted on them. In previous versions comparing Ranges raises TypeError.

  Although it is possible to instantiate Range objects, the class doesn't have an adapter registered, so you cannot normally pass these instances as query arguments. To use range objects as query arguments you can either use one of the provided subclasses, such as NumericRange, or create a custom subclass using register_range().
The following Range subclasses map builtin PostgreSQL range types to Python objects: they have an adapter registered so their instances can be passed as query arguments. range values read from database queries are automatically casted into instances of these classes.
- class psycopg2.extras.NumericRange(lower=None, upper=None, bounds='[)', empty=False)

  A Range suitable to pass Python numeric types to a PostgreSQL range. PostgreSQL types int4range, int8range, numrange are casted into NumericRange instances.

- class psycopg2.extras.DateRange(lower=None, upper=None, bounds='[)', empty=False)

  Represents daterange values.

- class psycopg2.extras.DateTimeRange(lower=None, upper=None, bounds='[)', empty=False)

  Represents tsrange values.

- class psycopg2.extras.DateTimeTZRange(lower=None, upper=None, bounds='[)', empty=False)

  Represents tstzrange values.
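For instance, a minimal sketch with NumericRange (the exact repr of the fetched value is an approximation):

>>> from psycopg2.extras import NumericRange
>>> r = NumericRange(10, 20)      # bounds default to '[)'
>>> 15 in r
True
>>> 20 in r
False
>>> cur.execute("SELECT '[1,10)'::int4range")
>>> cur.fetchone()[0]
NumericRange(1, 10, '[)')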
Note

Python lacks a representation for infinity dates, so Psycopg converts the value to date.max and such. When written into the database these dates will assume their literal value (e.g. 9999-12-31 instead of infinity). Check Infinite dates handling for an example of an alternative adapter to map date.max to infinity. An alternative dates adapter will be used automatically by the DateRange adapter and so on.
Custom range types (created with CREATE TYPE ... AS RANGE) can be adapted to a custom Range subclass:
- psycopg2.extras.register_range(pgrange, pyrange, conn_or_curs, globally=False)

  Create and register an adapter and the typecasters to convert between a PostgreSQL range type and a Range subclass.

  Parameters:
  - pgrange - the name of the PostgreSQL range type. Can be schema-qualified
  - pyrange - a Range strict subclass, or just a name to give to a new class
  - conn_or_curs - a connection or cursor used to find the oid of the range and its subtype; the typecaster is registered in a scope limited to this object, unless globally is set to True
  - globally - if False (default) register the typecaster only on conn_or_curs, otherwise register it globally

  Returns:
  a RangeCaster instance responsible for the conversion

  If a string is passed to pyrange, a new Range subclass is created with such name and will be available as the range attribute of the returned RangeCaster object.

  The function queries the database on conn_or_curs to inspect the pgrange type and raises ProgrammingError if the type is not found. If querying the database is not advisable, use directly the RangeCaster class and register the adapter and typecasters using the provided functions.
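  A minimal sketch of the complete flow (the floatrange type and the FloatRange name are illustrative):

  >>> cur.execute("CREATE TYPE floatrange AS RANGE (subtype = float8)")
  >>> caster = psycopg2.extras.register_range('floatrange', 'FloatRange', cur)
  >>> FloatRange = caster.range      # the new Range subclass
  >>> cur.execute("SELECT '[1.5,2.5)'::floatrange")
  >>> cur.fetchone()[0]
  FloatRange(1.5, 2.5, '[)')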
- class psycopg2.extras.RangeCaster(pgrange, pyrange, oid, subtype_oid, array_oid=None)

  Helper class to convert between Range and PostgreSQL range types.

  Objects of this class are usually created by register_range(). Manual creation could be useful if querying the database is not advisable: in this case the oids must be provided.

  Object attributes:

  - adapter

    The ISQLQuote responsible to adapt range.
UUID data type
Added in version 2.0.9.
Changed in version 2.0.13: added UUID array support.
>>> psycopg2.extras.register_uuid()
>>> # Python UUID can be used in SQL queries
>>> import uuid
>>> my_uuid = uuid.UUID('{12345678-1234-5678-1234-567812345678}')
>>> psycopg2.extensions.adapt(my_uuid).getquoted()
"'12345678-1234-5678-1234-567812345678'::uuid"
>>> # PostgreSQL UUID are transformed into Python UUID objects.
>>> cur.execute("SELECT 'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid")
>>> cur.fetchone()[0]
UUID('a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11')
- psycopg2.extras.register_uuid(oids=None, conn_or_curs=None)

  Create the UUID type and a uuid.UUID adapter.

  Parameters:
  - oids - oid for the PostgreSQL uuid type, or 2-item sequence with the oids of the type and the array. If not specified, use PostgreSQL standard oids.
  - conn_or_curs - where to register the typecaster. If not specified, register it globally.
Networking data types

By default Psycopg casts the PostgreSQL networking data types (inet, cidr, macaddr) into ordinary strings; arrays of such types are converted into lists of strings.

Changed in version 2.7: in previous versions arrays of networking types were not treated as arrays.
- psycopg2.extras.register_ipaddress(conn_or_curs=None)

  Register conversion support between ipaddress objects and network types.

  Parameters:
  - conn_or_curs - the scope where to register the type casters. If None register them globally.

  After the function is called, PostgreSQL inet values will be converted into IPv4Interface or IPv6Interface objects, cidr values into IPv4Network or IPv6Network.
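  A minimal sketch (conn is an open connection):

  >>> import psycopg2.extras
  >>> psycopg2.extras.register_ipaddress(conn)
  >>> cur = conn.cursor()
  >>> cur.execute("SELECT '192.168.0.1/24'::inet, '192.168.0.0/24'::cidr")
  >>> cur.fetchone()
  (IPv4Interface('192.168.0.1/24'), IPv4Network('192.168.0.0/24'))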
- psycopg2.extras.register_inet(oid=None, conn_or_curs=None)

  Create the INET type and an Inet adapter.

  Parameters:
  - oid - oid for the PostgreSQL inet type, or 2-item sequence with the oids of the type and the array. If not specified, use PostgreSQL standard oids.
  - conn_or_curs - where to register the typecaster. If not specified, register it globally.

  Deprecated since version 2.7: this function will not receive further development and may disappear in future versions.
>>> psycopg2.extras.register_inet()
>>> cur.mogrify("SELECT %s", (Inet('127.0.0.1/32'),))
"SELECT E'127.0.0.1/32'::inet"
>>> cur.execute("SELECT '192.168.0.1/24'::inet")
>>> cur.fetchone()[0].addr
'192.168.0.1/24'
- class psycopg2.extras.Inet(addr)

  Wrap a string to allow for correct SQL-quoting of inet values.

  Note that this adapter does NOT check the passed value to make sure it really is an inet-compatible address but DOES call adapt() on it to make sure it is impossible to execute an SQL-injection by passing an evil value to the initializer.

  Deprecated since version 2.7: this object will not receive further development and may disappear in future versions.
Fast execution helpers

The current implementation of executemany() is (using an extremely charitable understatement) not particularly performing. These functions can be used to speed up the repeated execution of a statement against a set of parameters. By reducing the number of server roundtrips the performance can be orders of magnitude better than using executemany().
- psycopg2.extras.execute_batch(cur, sql, argslist, page_size=100)

  Execute groups of statements in fewer server roundtrips.

  Execute sql several times, against all parameters set (sequences or mappings) found in argslist.

  The function is semantically similar to cur.executemany(sql, argslist), but has a different implementation: Psycopg will join the statements into fewer multi-statement commands, each one containing at most page_size statements, resulting in a reduced number of server roundtrips.

  After the execution of the function the cursor.rowcount property will not contain a total result.

  >>> nums = ((1,), (5,), (10,))
  >>> execute_batch(cur, "INSERT INTO test (num) VALUES (%s)", nums)

  >>> tuples = ((123, "foo"), (42, "bar"), (23, "baz"))
  >>> execute_batch(cur, "INSERT INTO test (num, data) VALUES (%s, %s)", tuples)
Added in version 2.7.
Note

execute_batch() can also be used in conjunction with PostgreSQL prepared statements using PREPARE, EXECUTE, DEALLOCATE. Instead of executing:
execute_batch(cur,
"big and complex SQL with %s %s params",
params_list)
it is possible to execute something like:
cur.execute("PREPARE stmt AS big and complex SQL with $1 $2 params")
execute_batch(cur, "EXECUTE stmt (%s, %s)", params_list)
cur.execute("DEALLOCATE stmt")
which may bring further performance benefits: if the operation to perform is complex, every single execution will be faster as the query plan is already cached; furthermore the amount of data to send to the server will be smaller (one EXECUTE per param set instead of the whole, likely longer, statement).
- psycopg2.extras.execute_values(cur, sql, argslist, template=None, page_size=100, fetch=False)

  Execute a statement using VALUES with a sequence of parameters.

  Parameters:
  - cur - the cursor to use to execute the query.
  - sql - the query to execute. It must contain a single %s placeholder, which will be replaced by a VALUES list. Example: "INSERT INTO mytable (id, f1, f2) VALUES %s".
  - argslist - sequence of sequences or dictionaries with the arguments to send to the query. The type and content must be consistent with template.
  - template - the snippet to merge to every item in argslist to compose the query.
    - If the argslist items are sequences it should contain positional placeholders (e.g. "(%s, %s, %s)", or "(%s, %s, 42)" if there are constant values...).
    - If the argslist items are mappings it should contain named placeholders (e.g. "(%(id)s, %(f1)s, 42)").
    If not specified, assume the arguments are sequences and use a simple positional template (i.e. (%s, %s, ...)), with the number of placeholders sniffed from the first element in argslist.
  - page_size - maximum number of argslist items to include in every statement. If there are more items the function will execute more than one statement.
  - fetch - if True return the query results into a list (like in a fetchall()). Useful for queries with a RETURNING clause.

  After the execution of the function the cursor.rowcount property will not contain a total result.

  While INSERT is an obvious candidate for this function it is possible to use it with other statements, for example:

  >>> cur.execute(
  ... "create table test (id int primary key, v1 int, v2 int)")

  >>> execute_values(cur,
  ... "INSERT INTO test (id, v1, v2) VALUES %s",
  ... [(1, 2, 3), (4, 5, 6), (7, 8, 9)])

  >>> execute_values(cur,
  ... """UPDATE test SET v1 = data.v1 FROM (VALUES %s) AS data (id, v1)
  ... WHERE test.id = data.id""",
  ... [(1, 20), (4, 50)])

  >>> cur.execute("select * from test order by id")
  >>> cur.fetchall()
  [(1, 20, 3), (4, 50, 6), (7, 8, 9)]
Added in version 2.7.
Changed in version 2.8: added the fetch parameter.
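  For instance, a sketch using fetch=True with a RETURNING clause on the test table above:

  >>> ids = execute_values(cur,
  ...     "INSERT INTO test (id, v1, v2) VALUES %s RETURNING id",
  ...     [(10, 11, 12), (13, 14, 15)],
  ...     fetch=True)
  >>> ids
  [(10,), (13,)]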
Coroutine support

- psycopg2.extras.wait_select(conn)

  Wait until a connection or cursor has data available.

  The function is an example of a wait callback to be registered with set_wait_callback(). This function uses select() to wait for data to become available, and therefore is able to handle SIGINT (KeyboardInterrupt).

  Changed in version 2.6.2: allow to cancel a query using Ctrl-C, see the FAQ for an example.