Version 2.0.7
The eGenix™ mxODBC™ package provides a Python Database API 2.0 compliant interface to databases that are accessible via the ODBC API. This can either be done through an ODBC manager, e.g. the one that comes with Windows, or iODBC, a free ODBC manager for Unix now maintained by OpenLink, or directly by linking to the database's ODBC driver.
Since ODBC is a widely supported standard for accessing databases, it should in general be possible to adapt the package to any ODBC 2.0 compliant database driver / manager. All you have to do is change the include directives to point to the specific files for your database and maybe set some macro switches to disable ODBC APIs that your driver does not provide. See the installation section for details.
The package supports multiple database interfacing meaning that you can access e.g. two different databases from within one process. Included are several preconfigured subpackages for a wide range of common databases.
Note: Unlike most of the other database modules available for Python, this package uses the new date & time types provided by another package I wrote, mxDateTime, eliminating the problems you normally face when handling dates before 1.1.1970 and after 2038. This also makes the module Year 2000 safe.
Commercial Users: Users in commercial environments may use this package for an evaluation period of 30 days starting with the day of initial installation. After this period, commercial users must either buy mxODBC CPU licenses from eGenix.com for continued use (more details) or stop using and uninstall the package.
The package tries to adhere to the Python DB API Version 1.0 in most details. Many features of the new DB API 2.0 are already supported too. The package will eventually move to the 2.0 DB API version.
Here is a list of deviations from the specifications:
cursor.callproc() is not yet supported; stored procedures can still be called through cursor.execute() using the ODBC call syntax, though.
The ODBC standard defines the following syntax for calling stored procedures:
{call procedure-name [([parameter][,[parameter]]...)]}
The connection constructor is available under three different names: ODBC() (DB API 1.0), connect() (DB API 2.0) and Connect() (mxODBC specific). See the next section for details on the used parameters. mxODBC also defines a DriverConnect() constructor which is available for ODBC managers and some ODBC drivers.
Exception values are tuples of the form (sqlstate, sqltype, errortext, lineno), where lineno refers to the line number in the mxODBC.c file (to ease debugging the package).
The ODBC API is very rich in terms of accessing information about what is stored in the database and mxODBC makes many of these APIs available as additional connection and cursor methods. These are all extensions to the DB API, of course, so porting your program to other database interfaces will be more difficult than otherwise -- on the other hand there are ODBC drivers for almost all relational databases available.
Since most of the parameters and function names of the ODBC API map directly to Python method names (by dropping the SQL prefix and converting them to lower case), and because of copyright issues, I have not copied the ODBC documentation to this page.
You can download the MS ODBC reference from e.g. Solid Tech -- Solid uses ODBC as native API for their database, BTW.
You connect to a database via a connection object. All communication from and to the database is done using these objects. They also define the scope of the transactions you perform. Each connection can be set up to your specific needs; multiple connections may be opened at the same time.
mxODBC uses a dedicated object type for connections. Each subpackage defines its own object type, but all share the same name: ConnectionType.
Connect(dsn, user='', password=''[, clear_auto_commit=1])
dsn indicates the data source to be used; user and password are optional and used for database login.
Normally, auto-commit is turned off by the constructor if the database supports transactions. If given, the value clear_auto_commit overrides the default behaviour. Passing a false value disables the clearing of the flag and lets the connection use the database's default COMMIT behaviour.
Note that a compile time switch (DONT_CLEAR_AUTOCOMMIT) allows altering the default value for clear_auto_commit.
Use the connection method db.setconnectoption(SQL.AUTOCOMMIT, SQL.AUTOCOMMIT_ON|OFF|DEFAULT) (see below) to adjust the connection's behaviour to your needs.
With auto-commit turned on, transactions are effectively disabled. The rollback() method will raise a NotSupportedError when used on such a connection.
If you get an exception during connect telling you that the driver is not capable or does not support transactions, e.g. mxODBC.NotSupportedError: ('S1C00', 84, '[Microsoft][ODBC Excel Driver]Driver not capable ', 4226), please try to connect with clear_auto_commit set to 0.
ODBC(dsn, user='', password=''[, clear_auto_commit=1])
Alias for Connect() needed for DB API 1.0 compliance.
connect(dsn, user='', password=''[, clear_auto_commit=1])
Alias for Connect() needed for DB API 2.0 compliance.
DriverConnect(DSN_string[, clear_auto_commit=1])
ODBC manager specific version of the Connect() constructor. Please refer to the ODBC manuals of your ODBC manager and database for the exact syntax of the DSN_string. It typically has these entries: 'DSN=datasource_name;UID=userid;PWD=password;' (case is important!). See Connect() for comments on clear_auto_commit.
Note that this API is only available if the interface was compiled with the compile time switch HAVE_SQLDriverConnect defined. See the subpackages section and the subpackage's Setup file for details.
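The DSN_string is just a semicolon-separated list of keyword=value pairs, so it is easy to assemble programmatically. The sketch below is a small hypothetical helper (the data source name and credentials are placeholders, not real ones):

```python
def make_dsn_string(**parts):
    # Assemble a DriverConnect() DSN string such as
    # 'DSN=datasource_name;UID=userid;PWD=password;'.
    # ODBC keywords are written in upper case since case is important.
    return ''.join('%s=%s;' % (key.upper(), value)
                   for key, value in parts.items())

# Hypothetical data source and credentials for illustration only.
dsn_string = make_dsn_string(dsn='mydb', uid='scott', pwd='tiger')
# dsn_string could then be passed to DriverConnect(dsn_string).
```

With a real subpackage this would be used as e.g. DriverConnect(dsn_string), assuming the driver was compiled with HAVE_SQLDriverConnect as described above.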
When connecting to a database with transaction support, you should explicitly do a .rollback() or .commit() prior to closing the connection. mxODBC does an automatic rollback of the transaction when the connection is closed if the driver supports transactions.
For some subpackages, mxODBC also defines a few helpers which you can use to query additional information from the ODBC driver or manager.
DataSources()
Returns a dictionary mapping data source names to descriptions.
This function is only available for ODBC managers and some ODBC drivers which have internal ODBC manager support, e.g. IBM's DB2 ODBC driver.
cursor([name])
Returns a new cursor object using the connection. If a name is given, the cursor is created as a named cursor; the name can later be queried with cursor.getcursorname() (see the Cursors section).
Note: This is an extension to the DB-API spec. The specification does not allow any arguments to the constructor.
close()
Close the connection now. The connection will be unusable from this point on; an Error (or subclass) exception will be raised if any operation is attempted with the connection. The same applies to all cursor objects trying to use the connection.
commit()
Commit any pending transaction to the database. Note that if the database supports an auto-commit feature, this must be initially off. An interface method may be provided to turn it back on.
Database modules that do not support transactions should implement this method with void functionality.
rollback()
In case a database provides transactions, this method causes the database to roll back to the start of any pending transaction. Closing a connection without committing the changes first will cause an implicit rollback to be performed.
The following methods are extensions to the DB-API specification and only available in mxODBC and its subpackages.
setconnectoption(option, value)
Sets the given connection option to value; option itself must be an integer. Suitable option values are available through the SQL singleton (see below), e.g. SQL.AUTOCOMMIT corresponds to the C define SQL_AUTOCOMMIT.
The method is a direct interface to the ODBC SQLSetConnectOption() function. Please refer to the ODBC documentation for more information.
Note that while the API function also supports setting character fields, the method currently does not know how to handle these.
Note for ADABAS users: Adabas can emulate several different SQL dialects. They have introduced an option for this to be set. These are the values you can use: 1 = ADABAS, 2 = DB2, 3 = ANSI, 4 = ORACLE, 5 = SAPR3. The option code is SQL.CONNECT_OPT_DRVR_START + 2 according to the Adabas documentation.
getconnectoption(option)
Returns the current setting of the given connection option; option must be an integer. Suitable option values are available through the SQL singleton (see below).
The method returns the data as a 32-bit integer. It is up to the caller to decode the integer using the SQL defines.
getinfo(info_id)
Returns general information about the driver and database; info_id must be an integer. Suitable values are available through the SQL singleton (see below).
The method returns a tuple (integer, string) giving an integer decoding (in native integer byte order) of the first bytes of the API's result as well as the raw buffer data as a string. It is up to the caller to decode the data (e.g. using the struct module).
This API gives you a very wide range of information about the underlying database and its capabilities. See the ODBC documentation for more information.
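Decoding the raw buffer part of a getinfo() result typically means unpacking the first bytes with the struct module. The sketch below fabricates a raw buffer for illustration and assumes the info type in question is a 16-bit unsigned integer in native byte order:

```python
import struct

# Fabricated raw buffer, standing in for the string part of the
# (integer, string) tuple returned by connection.getinfo().
# Here it encodes the 16-bit value 3 in native byte order.
raw = struct.pack('=H', 3)

# Decode the first two bytes as an unsigned 16-bit integer in
# native byte order; which width applies depends on the info_id,
# so check the ODBC documentation for the type of each info value.
value, = struct.unpack('=H', raw[:2])
```

For 32-bit info values the format code would be '=I' on the first four bytes instead.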
nativesql(command)
Returns the command as it would have been modified by the driver before passing it to the database engine. It is a direct interface to the ODBC API SQLNativeSql().
In many cases it simply returns the same string. Some drivers unescape ODBC escape sequences in the command string. Syntax checking is usually not applied by this method and errors are only raised in case of command string truncation.
Note: Not all mxODBC subpackages support this API.
Note: All these attributes are extensions to the DB-API specification. Attributes are not defined for connection objects in the DB-API specification.
bindmethod
The input variable binding method used by the connection; see the Constants section for the possible values (BIND_USING_SQLTYPE, BIND_USING_PYTHONTYPE).
datetimeformat
Use this attribute to set or query the output format for date/time values. Possible values are:
DATETIME_DATETIMEFORMAT [default]
TIMEVALUE_DATETIMEFORMAT
TUPLE_DATETIMEFORMAT
STRING_DATETIMEFORMAT
I strongly suggest always using the DateTime/DateTimeDelta instances. Note that changing the values of this attribute will not change the date/time format for existing cursors using this connection.
stringformat
Note: This variable only has an effect if mxODBC was compiled with Unicode support. If not, mxODBC will always work in EIGHTBIT_STRINGFORMAT mode.
Possible values are:
EIGHTBIT_STRINGFORMAT [default]
On input, 8-bit strings are passed to the ODBC driver as-is; Unicode objects are first encoded into 8-bit strings (using the .encoding attribute of connection objects) and then passed to the ODBC driver.
On output, all string columns are fetched as strings and passed back as Python 8-bit string objects.
This setting emulates the behaviour of previous mxODBC versions and is the default.
MIXED_STRINGFORMAT
UNICODE_STRINGFORMAT
On input, 8-bit strings are decoded into Unicode (using the .encoding attribute of connection objects) and then passed as Unicode to the ODBC driver.
On output, string data is converted to Python Unicode objects in the same way.
Use this setting if you plan to use Unicode objects with non-Unicode aware databases (e.g. by setting the encoding to UTF-8 -- be careful though: multibyte character encodings usually take up more space and are not necessarily compatible with the database's string functions).
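The space overhead mentioned above is easy to demonstrate: encoding non-ASCII text as UTF-8 yields more bytes than characters, which matters for columns with fixed SQL field lengths:

```python
# A 5-character string containing one non-ASCII character.
text = u'h\xe9llo'
encoded = text.encode('utf-8')

# UTF-8 needs 6 bytes for these 5 characters: u'\xe9' becomes
# the two-byte sequence 0xC3 0xA9. A CHAR(5) column would
# truncate the encoded form even though the original text fits.
assert len(text) == 5
assert len(encoded) == 6
```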
NATIVE_UNICODE_STRINGFORMAT
On input, 8-bit strings are decoded into Unicode (using the .encoding attribute of connection objects) and then passed as Unicode to the ODBC driver.
On output, string data is always fetched as Unicode data from the ODBC driver and returned using Python Unicode objects.
Note that even though mxODBC may report that Unicode support is enabled (the global HAVE_UNICODE_SUPPORT is 1), the ODBC driver may still reject Unicode data. In this case an InternalError of type S1003 is raised whenever trying to read data from the database in this stringformat mode.
You can use the included test.py script to find out whether the database backend supports Unicode or not.
Note that binary and other plain data columns will still use 8-bit strings for interfacing, since storing this data in Unicode objects would cause trouble. mxODBC will eventually use buffer objects to store binary data in some future version.
encoding
The encoding used by the stringformat conversions when translating between 8-bit strings and Unicode objects (see the stringformat attribute above).
closed
Flag which is true if the connection has been closed. Any action on a closed connection will cause a ProgrammingError to be raised. This variable can be used to conveniently test for this state.
converter
The converter callable used by the connection; None per default (meaning to use the standard conversion mechanism). See the data type section for details.
These objects represent a database cursor, which is used to manage the context of a fetch operation. They are created via the connection method cursor(). Cursors are only active as long as the connection is. Using cursors on a closed connection will result in a ProgrammingError to be raised.
mxODBC uses a dedicated object type for cursors. Each subpackage defines its own object type, but all share the same name: CursorType.
The following cursor methods are defined in the DB API:
callproc(procname[, parameters])
Call a stored database procedure with the given name. The sequence of parameters must contain one entry for each argument that the procedure expects. The result of the call is returned as modified copy of the input sequence. Input parameters are left untouched, output and input/output parameters replaced with possibly new values.
The procedure may also provide a result set as output. This must then be made available through the standard fetchXXX() methods.
close()
Close the cursor now. An Error (or subclass) exception will be raised if any operation is attempted with the cursor.
execute(operation[, parameters])
Prepare and execute a database operation (query or command). Parameters must be provided as a sequence and will be bound to variables in the operation. Variables are specified using the ODBC variable placeholder '?', e.g. 'SELECT name,id FROM table WHERE amount > ? AND amount < ?' (also see the module attribute paramstyle), and get bound in the order they appear in the SQL statement from left to right.
A reference to the operation will be retained by the cursor. If the same operation object is passed in again, then the cursor will optimize its behavior by reusing the previously prepared statement. This is most effective for algorithms where the same operation is used, but different parameters are bound to it (many times).
The parameters may also be specified as a list of tuples to e.g. insert multiple rows in a single operation, but this kind of usage is deprecated: executemany() should be used instead.
Return values are not defined.
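Because parameters are bound positionally to '?' placeholders and the cursor caches the prepared statement per operation string, the usual idiom is to keep one operation string and vary only the parameter tuples. The statement and values below are made up for illustration:

```python
# One operation string, reused for every execute() call, so the
# cursor can reuse the previously prepared statement.
operation = 'SELECT name,id FROM table WHERE amount > ? AND amount < ?'

# Parameter tuples, bound left to right to the '?' placeholders.
bounds = [(10, 100), (100, 1000)]

# Sanity check of the idiom: each tuple must supply exactly one
# value per placeholder in the statement.
placeholders = operation.count('?')
assert all(len(params) == placeholders for params in bounds)

# With a live cursor this would be:
# for params in bounds:
#     cursor.execute(operation, params)
```

Rebuilding the statement string per call (e.g. via string formatting) would defeat both the caching and the driver-side quoting of values.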
executedirect(operation[, parameters])
Like .execute(), except that no prepare step is issued and the operation is not cached. This can result in better performance with some ODBC driver setups.
Return values are not defined.
executemany(operation, seq_of_parameters[, direct])
Prepare a database operation (query or command) and then execute it against all parameter sequences found in seq_of_parameters.
mxODBC currently only supports lists of tuples in seq_of_parameters. This will be changed in future versions.
The same comments as for execute() also apply accordingly to this method.
If the optional integer direct is given and true, mxODBC will not cache the operation, but submit it for one-time execution to the database. This can result in better performance with some ODBC driver setups.
Return values are not defined.
fetchone()
Fetch the next row of a query result set, returning a single sequence, or None when no more data is available.
An Error (or subclass) exception is raised if the previous call to executeXXX() did not produce any result set or no call was issued yet.
mxODBC will move the associated database cursor by one row only. This behaviour is not guaranteed by the DB API, though.
fetchmany([size=cursor.arraysize])
Fetch the next set of rows of a query result, returning them as a list of rows. The number of rows to fetch per call is specified by the parameter. If it is not given, the cursor's arraysize determines the number of rows to be fetched. The method should try to fetch as many rows as indicated by the size parameter. If this is not possible due to the specified number of rows not being available, fewer rows may be returned.
An Error (or subclass) exception is raised if the previous call to executeXXX() did not produce any result set or no call was issued yet.
Note there are performance considerations involved with the size parameter. For optimal performance, it is usually best to use the arraysize attribute. If the size parameter is used, then it is best for it to retain the same value from one fetchmany() call to the next.
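A common pattern is to drain a result set in batches of arraysize rows. Since a real run needs a live database, the sketch below uses a minimal stand-in cursor that only mimics the fetchmany() semantics (a short final batch, then an empty list):

```python
class FakeCursor:
    """Stand-in for an mxODBC cursor; only mimics fetchmany()."""
    arraysize = 3

    def __init__(self, rows):
        self._rows = list(rows)

    def fetchmany(self, size=None):
        # Return at most `size` rows (default: arraysize),
        # or an empty list once the result set is exhausted.
        size = size or self.arraysize
        batch, self._rows = self._rows[:size], self._rows[size:]
        return batch

def drain(cursor):
    # Fetch batches until fetchmany() returns an empty list.
    result = []
    while True:
        batch = cursor.fetchmany()
        if not batch:
            break
        result.extend(batch)
    return result

rows = drain(FakeCursor(range(7)))  # batches of 3, 3, then 1
```

The same drain() loop works unchanged against a real mxODBC cursor after an execute().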
fetchall()
Fetch all (remaining) rows of a query result, returning them as a list of rows. Note that the cursor's arraysize attribute can affect the performance of this operation.
An Error (or subclass) exception is raised if the previous call to executeXXX() did not produce any result set or no call was issued yet.
nextset()
This method will make the cursor skip to the next available set, discarding any remaining rows from the current set.
If there are no more sets, the method returns None. Otherwise, it returns a true value and subsequent calls to the fetch methods will return rows from the next result set.
An Error (or subclass) exception is raised if the previous call to executeXXX() did not produce any result set or no call was issued yet.
setinputsizes(sizes)
setoutputsize(size[, column])
The following methods are extensions to the DB-API specification and only available in mxODBC and its subpackages.
Since not all databases support all of these ODBC APIs, they can be selectively switched off at compile time to adapt the interface to the underlying ODBC driver/manager. Please take note of this when using them.
All of the following catalog methods use the same interface: they do an implicit call to cursor.execute() and return their output in the form of a list of rows which can be fetched with the fetchXXX() methods in the usual way. The methods always return the number of rows in the result set.
Please refer to the ODBC documentation for more detailed information about parameters and the layout of the result sets. Note: If you pass None as a value where a string would be expected, that entry is converted to NULL before passing it to the underlying API.
All catalog methods support keywords and use the indicated default values for parameters which are omitted in the call.
prepare(operation)
Prepare the given operation for execution on the cursor without executing it; the prepared command is stored in cursor.command.
To execute a prepared statement, pass cursor.statement to one of the .executeXXX() methods.
Return values are not defined.
Note: this method is unavailable if mxODBC was compiled with compile time switch DISABLE_EXECUTE_CACHE.
tables(qualifier=None, owner=None, table=None, type=None)
tableprivileges(qualifier=None, owner=None, table=None)
columns(qualifier=None, owner=None, table=None, column=None)
columnprivileges(qualifier=None, owner=None, table=None, column=None)
foreignkeys(primary_qualifier=None, primary_owner=None, primary_table=None, foreign_qualifier=None, foreign_owner=None, foreign_table=None)
primarykeys(qualifier=None, owner=None, table=None)
procedures(qualifier=None, owner=None, procedure=None)
procedurecolumns(qualifier=None, owner=None, procedure=None, column=None)
specialcolumns(qualifier=None, owner=None, table=None, coltype=SQL.BEST_ROWID, scope=SQL.SCOPE_SESSION, nullable=SQL.NO_NULLS)
statistics(qualifier=None, owner=None, table=None, unique=SQL.INDEX_ALL, accuracy=SQL.ENSURE)
gettypeinfo(sqltypecode)
setcursorname(name)
getcursorname()
setconverter(converter)
Sets the converter callable to be used by the cursor. Passing None as converter will reset the converter mechanism to its default setting. See the data type section for details on how user-defined converters work.
freeset()
Frees the result set currently associated with the cursor. Note that .executeXXX() and all the catalog methods do an implicit .freeset() prior to executing a new query.
If you plan to write cross database applications, use these methods with care since at least some of the databases I know don't support certain APIs or return misleading results.
Also, be sure to check the correct performance of the methods and executes. I don't want to see you losing your data due to some error I made, or the fact that the ODBC driver of your DB is buggy.
description
This read-only attribute is a sequence of 7-item sequences (name, type_code, display_size, internal_size, precision, scale, null_ok).
This attribute will be None for operations that do not return rows or if the cursor has not had an operation invoked via the executeXXX() method yet.
mxODBC always returns None for display_size and internal_size. This information can be obtained via connection.gettypeinfo().
The type_code can be interpreted by comparing it to the Type Objects specified in the section below. mxODBC returns the SQL type integers in this field. These are described in the section on Datatypes and are available through the SQL singleton defined at module level.
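Given the 7-item layout above, column names and type codes can be pulled out by position. The description list below is a fabricated example of what a cursor might report after a two-column SELECT:

```python
# Hypothetical cursor.description after a two-column SELECT;
# per-column layout: (name, type_code, display_size,
# internal_size, precision, scale, null_ok). mxODBC returns
# None for display_size and internal_size.
description = [
    ('name', 12, None, None, 50, 0, 1),  # 12 = ODBC SQL_VARCHAR
    ('id',    4, None, None, 10, 0, 0),  #  4 = ODBC SQL_INTEGER
]

column_names = [entry[0] for entry in description]
type_codes = [entry[1] for entry in description]
```

With a live cursor, the type codes would be compared against the SQL singleton constants, e.g. SQL.VARCHAR, rather than hard-coded integers.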
rowcount
The number of rows the last executeXXX() produced (for DQL statements like SELECT) or affected (for DML statements like UPDATE or INSERT).
The attribute is -1 in case no executeXXX() has been performed on the cursor or the rowcount of the last operation is not determinable by the interface.
arraysize
The default number of rows to fetch at a time with fetchmany(). It defaults to 1, meaning to fetch a single row at a time.
mxODBC observes this value with respect to the fetchmany() method, but currently interacts with the database a single row at a time.
The following attributes are extensions to the DB-API specification and only available in mxODBC and its subpackages.
datetimeformat
The date/time format used by the cursor. It works like the connection.datetimeformat instance variable and defaults to the creating connection object's setting for datetimeformat.
stringformat
The string format used by the cursor. It works like the connection.stringformat instance variable and defaults to the creating connection object's setting for stringformat.
command
The last SQL command string that was prepared or executed on the cursor; if no command has been processed yet, None is returned.
SQL commands are set by .prepare() and .executeXXX(). They are reset by the catalog methods.
colcount
The number of columns in the current result set. The attribute is -1 in case no executeXXX() has been performed on the cursor.
paramcount
The number of parameters in the current prepared command. The attribute is -1 in case this information is not available.
closed
Flag which is true if the cursor was closed using the .close() method. Any action on a closed connection or cursor will result in a ProgrammingError to be raised. This variable can be used to conveniently test for this state.
Since many databases have problems recognizing some column's or parameter's type beforehand (e.g. for LONGs and date/time values), the DB API provides a set of standard constructors to create objects that can hold special values. When passed to the cursor methods, the module can then detect the proper type of the input parameter and bind it accordingly.
In mxODBC these constructors are not really needed: it uses the objects defined in mxDateTime for date/time values and is able to pass strings and buffer objects to LONG and normal CHAR columns without problems. You only need them to write code that is portable across database interfaces.
A Cursor Object's description attribute returns information about each of the result columns of a query. The type_code compares equal to one of the Type Objects defined below. Type Objects may be equal to more than one type code (e.g. DATETIME could be equal to the type codes for date, time and timestamp columns).
mxODBC returns more detailed descriptions about type codes in the description attribute. See the datatypes section for details. The type objects are only defined for compatibility with the DB API standard and other database interfaces.
The module exports the following constructors and singletons:
SQL NULL values are represented by the Python None singleton on input and output.
Note: Usage of Unix ticks for database interfacing can cause trouble because of the limited date range they cover.
The module uses the DB API 2.0 exceptions layout.
Error
You can use this class to catch all errors related to database or interface failures. error is just an alias to Error needed for DB-API 1.0 compatibility.
Error is a subclass of exceptions.StandardError.
Warning
Warning is a subclass of exceptions.StandardError. This may change in a future release to some other baseclass indicating warnings.
InterfaceError
DatabaseError
DataError
OperationalError
IntegrityError
InternalError
ProgrammingError
NotSupportedError
This is the exception inheritance layout:
StandardError
|__Warning
|__Error
   |__InterfaceError
   |__DatabaseError
      |__DataError
      |__OperationalError
      |__IntegrityError
      |__InternalError
      |__ProgrammingError
      |__NotSupportedError
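The practical consequence of this layout is that a single except clause on Error catches every database-related failure, while Warning must be caught separately. The classes below are a local mirror of the layout for illustration only (the real classes live in the mxODBC subpackage module; Exception stands in for StandardError on Python versions that no longer have it):

```python
# Local mirror of the DB API exception layout used by mxODBC.
class Warning(Exception): pass          # shadows the builtin here
class Error(Exception): pass
class InterfaceError(Error): pass
class DatabaseError(Error): pass
class DataError(DatabaseError): pass
class OperationalError(DatabaseError): pass
class IntegrityError(DatabaseError): pass
class InternalError(DatabaseError): pass
class ProgrammingError(DatabaseError): pass
class NotSupportedError(DatabaseError): pass

def classify(exc):
    # One except clause on Error is enough for all failures;
    # Warning needs its own clause since it is not an Error.
    try:
        raise exc
    except Error:
        return 'error'
    except Warning:
        return 'warning'

kind = classify(IntegrityError('duplicate key'))  # caught as 'error'
```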
A hint for troubles with Warning exceptions: If some function does not work because it always raises a Warning exception, you could either turn off warning generation completely by recompiling the module using the DONT_REPORT_WARNINGS flag (see Setup[.in] for explanations) or step through the source to find the exact location where the exception occurs and replace the Py_SQLCheck() with Py_SQLErrorCheck(). If you do the latter, please inform me of the change so that I can include it in future versions.
If you are interested in the exact mapping of SQL error codes to exception classes, have a look at the mxODBC_ErrorCodeTranslations array defined in mxODBC.c.
SQL
Singleton object giving access to the ODBC SQL defines, with the SQL_ prefix stripped; e.g. SQL_AUTOCOMMIT is available as SQL.AUTOCOMMIT.
errorclass
sqltype
CHAR, VARCHAR, LONGVARCHAR, BINARY, VARBINARY, LONGVARBINARY, TINYINT, SMALLINT, INTEGER, BIGINT, DECIMAL, NUMERIC, BIT, REAL, FLOAT, DOUBLE, DATE, TIME, TIMESTAMP [, CLOB, BLOB, TYPE_DATE, TYPE_TIME, TYPE_TIMESTAMP, UNICODE, UNICODE_LONGVARCHAR, UNICODE_VARCHAR, WCHAR, WVARCHAR, WLONGVARCHAR]
Integer constants for the SQL type codes that appear in cursor.description. They are also available through the SQL singleton, e.g. SQL.CHAR. The dictionary sqltype provides the inverse mapping.
The codes mentioned in square brackets are optional and only available if the ODBC driver/manager supports a later ODBC version than 2.5.
Note that mxODBC has support for unknown SQL types: it returns these types converted to strings. The conversion is done by the ODBC driver and may be driver dependent.
threadsafety
apilevel
paramstyle
Set to 'qmark', since mxODBC uses the ODBC '?' parameter placeholder (see cursor.execute()).
BIND_USING_SQLTYPE, BIND_USING_PYTHONTYPE
Possible values for connection.bindmethod.
SQL type binding means that the interface queries the database to find out which conversion to apply and which input type to expect, while Python type binding looks at the parameters you pass to the methods to find out the type information and then lets the database apply any conversions.
The bind method is usually set at compilation time, but can also differ from database to database when accessing them via an ODBC manager.
DATETIME_DATETIMEFORMAT, TIMEVALUE_DATETIMEFORMAT, TUPLE_DATETIMEFORMAT, STRING_DATETIMEFORMAT
Possible values for connection.datetimeformat and cursor.datetimeformat.
mxODBC can handle different output formats for date/time values on a per connection and per cursor basis. See the documentation of the two attributes for more information.
EIGHTBIT_STRINGFORMAT, MIXED_STRINGFORMAT, UNICODE_STRINGFORMAT, NATIVE_UNICODE_STRINGFORMAT
Possible values for connection.stringformat and cursor.stringformat.
mxODBC can handle different string conversion methods on a per connection and per cursor basis. See the documentation of the two attributes for more information.
HAVE_UNICODE_SUPPORT
Integer flag which is 1 if mxODBC was compiled with Unicode support, 0 otherwise.
To simplify debugging the module, I added debugging output in several important places. The feature is only enabled if the module is compiled with -DMAL_DEBUG, and output is only generated if Python is run in debugging mode (use the Python interpreter flag '-d'). The debugging log file is named mxODBC.log. It will be created in the current working directory; messages are always appended to the file, so no trace is lost until you explicitly erase the log file. If the log file cannot be opened, the module will use stderr to do the reporting.
To have the package compiled using the MAL_DEBUG define, prepend the distutils command mx_autoconf --enable-debugging to the build or install command. This will then enable the define and compile a debugging version of the code.
Note that the debug version of the module is almost as fast as the regular build, so you might as well leave it enabled.
mxODBC itself is written in a thread safe way. There are no module globals that are written to and thus no locking is necessary. Many of the underlying ODBC SQL function calls are wrapped by macros unlocking the global Python interpreter lock before doing the call and regaining that lock directly afterwards. The most prominent of those are the connection APIs and the execute and fetch APIs.
So in general when using a separate database connection for each thread, you shouldn't run into threading problems. If you do, it is more likely that the ODBC driver is not 100% thread safe and thus not 100% ODBC compatible. Note that having threads share cursors is not a good idea: there are many very strange transaction related problems you can then run into.
Unlocking the interpreter lock during long SQL function calls gives your application more responsiveness. This is especially important for GUI based applications, since no other Python thread can run when the global lock is acquired by one thread.
Note: mxODBC will only support threading if you have built Python itself with thread support enabled. Python for Windows has this per default. Try: python -c "import thread" to find out.
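The "one connection per thread" advice is easy to implement with threading.local, which gives every thread its own connection slot. The connect_db() factory below is a stand-in for a real subpackage Connect() call:

```python
import threading

_state = threading.local()

def connect_db():
    # Stand-in for e.g. mx.ODBC.Windows.Connect(dsn, user, password);
    # here it just returns a fresh unique object per call.
    return object()

def get_connection():
    # Create this thread's private connection on first use.
    if not hasattr(_state, 'connection'):
        _state.connection = connect_db()
    return _state.connection

connections = []
def worker():
    # Each worker thread gets and keeps its own connection.
    connections.append(get_connection())

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Since no connection object is ever shared between threads, no cursor sharing (and none of the transaction confusion it causes) can occur.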
Thanks to Andy Dustman for bringing this to my attention. If you do run into mxODBC related threading problems, feel free to contact me.
ODBC uses auto-commit on new connections per default. This means that all SQL statement executes will directly have an effect on the underlying database even in those cases where you would really back out of a certain modification, e.g. due to an unexpected error in your program.
Important: mxODBC resets auto-commit to manual commit whenever it creates a new connection to reenable manual commit per default -- unless the constructor flag clear_auto_commit is set to 0 or the database does not provide transactions.
Using a connection in manual commit mode means that all your commands are grouped in transactions: only the connection will see the changes it has made to the data in the database until an explicit connection.commit() is issued. The commit informs the database to write all changes done during the last transaction into the global data storage, making it visible to all other users. A rollback, on the other hand, tells the database to discard all modifications processed in the last transaction.
New transactions are started in the following cases: creation of a new connection, on return from a .commit() and from .rollback().
Note: Unless you perform an explicit connection.commit() prior to deleting or closing the connection, mxODBC will try to issue an implicit rollback on that connection before actually closing it. Errors are only reported in case you use the connection.close() method. Implicit closing of the connection through Python's garbage collection will ignore any errors occurring during rollback.
Data sources that do not support transactions, such as Excel files, cause calls to .rollback() to fail with a NotSupportedError. mxODBC will not turn off auto-commit behaviour for these sources. The setting of the connection constructor flag clear_auto_commit has no effect in this case.
Some databases for which mxODBC provides special subpackages, such as MySQL, don't have transaction support. For these, the .rollback() connection method is not available at all (i.e. calling it produces an AttributeError) and the clear_auto_commit flag on connection constructors defaults to 0.
Even though mxODBC does not yet support the .callproc() API, it does allow calling stored procedures which return data using result sets.
The ODBC syntax for calling a stored procedure is as follows:
{call procedure-name [([parameter][,[parameter]]...)]}
Using the above syntax, you can call stored procedures through one of the .executeXXX() calls, e.g. cursor.execute("{call myprocedure(?,?)}", (1,2)) will call the stored procedure myprocedure with the parameters 1 and 2.
Note: You should not use any output statements such as "PRINT" in the stored procedures, since this will cause at least some ODBC drivers (notably the MS SQL Server one) to turn the output into an SQL error which causes the execution to fail. On the other hand, these error messages can be useful to pass along error conditions to the Python program, since the error message string will be the output of the "PRINT" statement.
mxODBC supports two different input variable binding modes (also see the Constants section):
Binding Mode | Value of connection.bindmethod | Comments |
SQL type binding | BIND_USING_SQLTYPE | The database is asked for the appropriate data type and mxODBC tries to convert the input variable into that type. |
Python type binding | BIND_USING_PYTHONTYPE | mxODBC looks at the type of the input variable and passes its value to the database directly; conversion is done by the ODBC driver/manager as necessary. |
The default depends on the settings with which the ODBC subpackage was compiled. If not indicated in the subpackage section, it is set to SQL type binding, since this offers more flexibility.
Note that for SQL type binding to be possible, mxODBC needs a working ODBC SQLDescribeParam() API implementation. This is checked at connect time and the binding style adjusted to Python type binding, if mxODBC cannot rely on SQLDescribeParam().
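Conceptually, SQL type binding works along the lines of the following sketch; the type codes and the coercion table are simplified stand-ins, not mxODBC's actual implementation:

```python
# Simplified illustration of SQL type binding: the column's SQL type
# (as reported by SQLDescribeParam()) decides how the Python input
# value is coerced before being handed to the ODBC driver.
SQL_CHAR, SQL_INTEGER, SQL_FLOAT = 1, 4, 6   # stand-in type codes

_coercions = {
    SQL_INTEGER: int,
    SQL_FLOAT: float,
    SQL_CHAR: str,
}

def bind_using_sqltype(sqltype, value):
    """Coerce value into the Python type matching the SQL type code."""
    return _coercions.get(sqltype, str)(value)

# An integer column turns the string '123' into the integer 123:
print(bind_using_sqltype(SQL_INTEGER, '123'))   # 123
```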
The following data types are used per default for output variable mapping and for BIND_USING_SQLTYPE input variable binding.
SQL Type | Python Type | Comments |
CHAR, VARCHAR, LONGVARCHAR (TEXT, BLOB or LONG in SQL) | String |
The conversion truncates the string at the SQL field
length. The handling of special characters depends on
the codepage the database uses.
Some database drivers/managers can't handle binary data in these column types, so you should check the database's capabilities with the included test script first before using them. |
WCHAR, WVARCHAR, WLONGVARCHAR (TEXT, BLOB or LONG in SQL) | String or Unicode |
Whether a Python string or Unicode object is returned is
dependent on the setting of the .stringformat
attribute of the cursor fetching the data. Unicode is only
available in case mxODBC was compiled with Unicode support.
Note that currently none of the tested ODBC drivers natively supports Unicode. The only way to store Unicode data in a non-Unicode aware database is by encoding it using e.g. UTF-8. The conversion truncates the string at the SQL field length. | |||
BINARY, VARBINARY, LONGVARBINARY (BLOB or LONG BYTE in SQL) | String |
Truncation at the SQL field length. These can contain
embedded 0-bytes and other special characters.
Handling of these column types is database dependent. Please refer to the database's documentation for details. Many databases store the passed in data as-is and thus make these column types usable as a storage facility for arbitrary binary data. |
TINYINT, SMALLINT, INTEGER, BIT | Integer | Conversion from the Python integer (a C long) to the SQL type is left to the ODBC driver/manager, so expect the usual truncations. | |||
BIGINT | Long Integer |
Conversion to and from the Python long integer is done
via string representation since there is no C type with
enough precision to hold the value. Because of this, you
might receive errors indicating truncation or errors
because the database sent string data that cannot be
converted to a Python long integer.
Not all SQL databases implement this type, MySQL is one that does. |
DECIMAL, NUMERIC, REAL, FLOAT, DOUBLE | Float | Conversion from the Python float (a C double) to the SQL type is left to the ODBC driver/manager, so expect the usual truncations. | |||
DATE | DateTime instance or ticks or (year,month,day) or String |
While you should use DateTime instances, the module also
accepts ticks (Python numbers indicating the number of
seconds since the Unix epoch; these are converted to
local time and then stored in the database) and tuples
(year,month,day) on input.
The type of the return values depends on the setting of cursor.datetimeformat and whether the ODBC driver/manager returns the value with proper type information. Default is to return DateTime instances in case it returns the data with the SQL DATE type code. |
TIME | DateTimeDelta instance or tocks or (hour,minute,second) or String |
While you should use DateTimeDelta instances, the module
also accepts tocks (Python numbers indicating the number
of seconds since 0:00:00.00) and tuples
(hour,minute,second) on input.
The type of the return values depends on the setting of cursor.datetimeformat and whether the ODBC driver/manager returns the value with proper type information. Default is to return DateTimeDelta instances in case it returns the data with the SQL TIME type code. |
TIMESTAMP | DateTime instance or ticks or (year,month,day, hour,minute,second) or String |
While you should use DateTime instances, the module also
accepts ticks (Python numbers indicating the number of
seconds since the epoch; these are converted to local
time and then stored in the database) and tuples
(year,month,day,hour,minute,second) on input.
The type of the return values depends on the setting of cursor.datetimeformat and whether the ODBC driver/manager returns the value with proper type information. Default is to return DateTime instances in case it returns the data with the SQL TIMESTAMP type code. |
Unsupported Type | String |
mxODBC will try to fetch data from columns using unsupported
data types as strings. This is likely to always work, but may
cause unwanted conversions, truncations or loss of
precision.
Input binding to these columns is also done via strings (or str()ified versions of the input data). If you find this annoying, please contact the author to find out if there are any plans to add the missing type. |
Note that output bindings can only be done using the above mapping by mxODBC if the database correctly identifies the type of the output variables.
Some ODBC drivers return data using different type codes than the ones accepted for input, e.g. a database might accept a time value, convert it internally to a timestamp and then return it in a subsequent SELECT as timestamp value. mxODBC cannot know that the value only contains valid time information and no date information and thus converts the output data into a DateTime instance instead of a DateTimeDelta instance (which would normally be returned for time values).
Use the included test script to check for this behaviour. It tests many common column types and reports the outcome.
The above table defines the standard mapping mxODBC does when fetching output data from the database.
You can modify this
mapping on-the-fly by defining a cursor converter
function which takes three arguments and has to return a
2-tuple:
def converter(position,sqltype,sqllen):
# modify sqltype and sqllen as appropriate
return sqltype,sqllen
# Now tell the cursor to use this converter:
cursor.setconverter(converter)
or 3-tuple:
def converter(position,sqltype,sqllen):
# modify sqltype and sqllen as appropriate, provide binddata as
# input (e.g. for file names which should be used for file binding)
return sqltype,sqllen,binddata
# Now tell the cursor to use this converter:
cursor.setconverter(converter)
The converter function is called for each output column prior to the first fetch executed on the cursor. The returned values are then interpreted as defined in the above table.
sqltype
is usually one of the SQL data type
constants, e.g. SQL.CHAR for string data, but could also have
database specific values. mxODBC only understands the ones
defined in the above table, so this gives you a chance to map
user defined types to ones that Python can process.
sqllen
is only used for string data and defines the
maximum length of strings that can be read in that column
(mxODBC allocates a memory buffer of this size for the data
transfer).
Returning 0 as sqllen
will result in mxODBC
dynamically growing the data transfer buffer when fetching the
column data. This is sometimes handy in case you want to fetch
data that can vary in size.
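As an illustration, a converter that requests dynamically grown buffers for all string columns could look like the sketch below; the numeric type codes are stand-ins for the constants found on the SQL singleton (SQL.CHAR, SQL.VARCHAR, SQL.LONGVARCHAR):

```python
# Stand-in codes for the string SQL types; in real code, compare
# against SQL.CHAR, SQL.VARCHAR and SQL.LONGVARCHAR instead.
STRING_TYPES = (1, 12, -1)

def dynamic_string_converter(position, sqltype, sqllen):
    """Cursor converter: request dynamically grown transfer buffers
    for all string columns, leave other columns untouched."""
    if sqltype in STRING_TYPES:
        # A length of 0 tells mxODBC to grow the buffer as needed.
        return sqltype, 0
    return sqltype, sqllen

# Activate it on a cursor with:
# cursor.setconverter(dynamic_string_converter)
```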
binddata
is optional and only needed for some
special sqltypes. It will be used in future
versions to e.g. allow binding output columns to files, which
some ODBC drivers support (the column data is transferred
directly to a file instead of copied into memory).
Cursors will use the connection's .converter
attribute as default converter. It defaults to
None
, meaning that no converter function is in
effect. None
can also be used to disable the
converter function on a cursor:
# Don't use a converter function on the cursor
cursor.setconverter(None)
You can switch converter functions even in between fetches. mxODBC will then reallocate and rebind the column buffers for you.
These mappings are used for input variables in BIND_USING_PYTHONTYPE mode (see the ODBC documentation for more information on how the C datatypes are mapped to SQL column types). Output variables are treated by applying the conversions defined in the previous section.
Python Type | SQL C Data Type | Comments |
String | CHAR (char *) | The conversion truncates the string at the SQL field length. The string may also contain binary data, if the ODBC driver/manager supports this. |
Unicode | WCHAR (wchar_t *) |
The conversion truncates the string at the SQL field
length. Note that not all ODBC drivers/managers support
Unicode data at C level.
This binding is used for all cursors which do not
have the |
Buffer | BINARY (char *) | The conversion truncates the string at the SQL field length. The string may contain binary data. If the ODBC driver/manager doesn't support processing binary data using strings, wrap the data object using Python buffers (via the buffer() constructor) to have mxODBC use a binary SQL type for interfacing to the driver/manager. |
Integer | SLONG (signed long) | Conversion from the signed long to the SQL column type is left to the ODBC driver/manager, so expect the usual truncations. |
Long Integer | CHAR (char *) | Conversion from the Python long integer is done via the string representation since there usually is no C type with enough precision to hold the value. |
Float | DOUBLE (double) | Conversion from the Python float (a C double) to the SQL column type is left to the ODBC driver/manager, so expect the usual truncations. |
DateTime | TIMESTAMP | Converts the DateTime instance into a TIMESTAMP struct defined by the ODBC standard. The ODBC driver may use the time part of the instance or not depending on the SQL column type (DATE or TIMESTAMP). |
DateTimeDelta | TIME | Converts the DateTimeDelta instance into a TIME struct defined by the ODBC standard. Fractions of a second will be lost in this conversion. |
Buffer (new in Python 1.5.2) | CHAR (char *) | No conversion is done, the character buffer is passed directly to the ODBC driver/manager. The buffer has to be a single segment buffer. |
Any other type | CHAR (char *) | Conversion is done by calling str(variable) and then passing the resulting string value to the ODBC driver/manager. |
While you should always try to use the above Python types for passing input values to the respective columns, the package will try to automatically convert the types you give into the ones the database expects when using the BIND_USING_SQLTYPE bind method (see the Constants section), e.g. an integer literal '123' will be converted into an integer 123 by the interface if the database requests integers.
The situation is different in BIND_USING_PYTHONTYPE mode: the Python type used in the parameter is passed directly to the database, thus passing '123' or 123 does make a difference and could result in an error from the database.
Note: You shouldn't rely on this feature in case you intend to move to another DB API compliant database module, since this is an extension to the DB API standard.
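mxODBC is not needed to observe this effect. Python's bundled sqlite3 module, which also passes the Python parameter types straight through to the database, can illustrate the difference (a column declared without a type keeps whatever type the parameter had):

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE t (v)')          # no type affinity declared
db.execute('INSERT INTO t VALUES (?)', ('123',))   # string parameter
db.execute('INSERT INTO t VALUES (?)', (123,))     # integer parameter

# The stored type follows the Python type of the bound parameter:
for (stored_type,) in db.execute('SELECT typeof(v) FROM t'):
    print(stored_type)
# text
# integer
```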
Starting with version 2.0.0, mxODBC also supports Unicode objects to interface with databases. It turns out, though, that native Unicode is not supported by most existing ODBC drivers. The only ODBC driver I've heard of which does support native Unicode is the MS SQL Server ODBC driver. Unfortunately, I don't have access to a working installation to test mxODBC against (feedback would be appreciated).
Even though you can usually not expect the ODBC driver to handle native Unicode, you can still take advantage of the auto-conversion mechanisms in mxODBC to simulate Unicode capabilities.
mxODBC provides several different run-time configurations to
deal with passing Unicode to and fetching it from an ODBC
driver. The .stringformat
attribute of connection
and cursor objects allows defining how to convert string data
into Python objects and vice-versa.
Unicode conversions to and from 8-bit strings in Python usually
assume the Python default encoding (which is ASCII unless you
modify the Python installation). Since the database may be using
a different encoding, mxODBC allows defining the encoding to be
used on a per-connection basis. The .encoding
attribute of connection objects is writeable for this purpose.
Its default value is None
, meaning that Python's
default encoding is to be used. You can change the encoding by
simply assigning a valid encoding name to the attribute. Make
sure, though, that Python supports the encoding (you can test
this using the unicode()
builtin).
The default conversion mechanism used in mxODBC is
EIGHTBIT_STRINGFORMAT
(Unicode gets converted to
8-bit strings before passing the data to the driver, output is
always an 8-bit string), and the default encoding is Python's
default encoding.
To store Unicode in a database, one possibility is to use the
UNICODE_STRINGFORMAT
and set the encoding attribute
to e.g. 'utf-8'
. mxODBC will then convert the
Unicode input data to UTF-8, store this in the database and
convert it back to Unicode during fetch operations. Note however
that UTF-8 encoded data usually takes up more room in the
database than the Unicode equivalent, so you may experience data
truncations which then cause the decoding process to fail.
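The size difference is easy to demonstrate with plain Python; each non-ASCII character grows to two or more bytes under UTF-8:

```python
# UTF-8 uses more than one byte per non-ASCII character, so a column
# sized for n characters may overflow once the data is encoded.
text = u'h\xe9llo w\xf6rld'        # 11 characters
encoded = text.encode('utf-8')     # 13 bytes: \xe9 and \xf6 take 2 each
print(len(text), len(encoded))

# Decoding restores the original Unicode data, provided nothing
# was truncated in between:
assert encoded.decode('utf-8') == text
```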
The above SQL types are provided by the module as SQL type code
integers as attributes of the singleton SQL
, so
that you can decode the value in cursor.description by
comparing it to one of those constants. A reverse mapping of
integer codes to code names is provided by the dictionary
sqltype
.
Note: You may run into problems when using the tuple versions for date/time/timestamp arguments. This is because some databases (notably MySQL) want these arguments to be passed as strings. mxODBC does the conversion internally, but tuples turn out as e.g. '(1998,4,6)', which the database will refuse to accept. The solution: use DateTime[Delta] instances instead. These convert themselves to ISO dates/times which most databases (including MySQL) do understand.
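For illustration, the ISO formatting which DateTime instances produce can be sketched in plain Python (the helper below is not part of mxODBC):

```python
def iso_date(year, month, day):
    """Format a date tuple the way DateTime instances do: ISO 8601."""
    return '%04d-%02d-%02d' % (year, month, day)

# A plain tuple would be sent in its string form, which databases
# reject, while the ISO form is widely understood:
print(repr((1998, 4, 6)))     # the rejected form
print(iso_date(1998, 4, 6))   # 1998-04-06
```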
To check the ODBC driver/manager capabilities and support for the above column types, use the included test script.
For some subpackages, mxODBC also defines a few helpers which you can use to query additional information from the ODBC driver or manager.
DataSources()
Returns a dictionary mapping data source names to descriptions.
This function is only available for ODBC managers and some ODBC drivers which have internal ODBC manager support, e.g. IBM's DB2 ODBC driver.
In addition to subpackage specific helpers, mxODBC also provides a few additional functions available through the top-level ODBC package. These are:
format_resultset(cursor, headers=(), colsep=' | ', headersep='-', stringify=repr)
Fetches the result set from the cursor and formats it into a string using the layout:
-header-
-headersep-
-row1-
-row2-
...
headers may be given as a list of strings. If not given, or too short, the function will add numbered columns as appropriate.
Columns are separated by colsep; the header is separated from the result set by a line of headersep characters.
The function calls stringify to format the value data returned by the driver into a string. It defaults to repr().
print_resultset(cursor, headers=())
Prints the result set available through cursor to stdout using the above format_resultset() layout. headers can be given as a list of column header strings.
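The following pure-Python sketch approximates the layout these helpers produce; unlike the real functions, it works on a plain list of rows rather than a cursor, and the details are assumptions:

```python
def format_rows(rows, headers=(), colsep=' | ', headersep='-',
                stringify=repr):
    """Simplified sketch of format_resultset() for a list of row tuples."""
    ncols = max(len(row) for row in rows)
    # Pad missing headers with numbered column names.
    headers = list(headers) + ['Column %i' % i
                               for i in range(len(headers) + 1, ncols + 1)]
    lines = [colsep.join(headers)]
    # Separate header from data with a line of headersep characters.
    lines.append(headersep * len(lines[0]))
    for row in rows:
        lines.append(colsep.join(stringify(value) for value in row))
    return '\n'.join(lines)

print(format_rows([(1, 'a'), (2, 'b')], headers=('id',)))
```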
This section includes some specific notes for preconfigured setups.
Note for Windows users: You should always use the ODBC.Windows subpackage and access the databases through the MS ODBC Driver Manager. The other packages provide Unix based interfaces to the databases.
IMPORTANT: Even though the setups include many database specific settings, you should always check the paths and filenames used in the corresponding Setup file because these depend on your specific installation. |
You may also want to consult Paul Boddie's mxODBC Configuration page which has some details about specific database backends he has used with mxODBC.
The SuSE Linux distribution ships with a free personal edition of Adabas (available in form of RPMs from SuSE). A commercial version is also available, though I'd suggest first trying the personal edition.
If you want to trim down the interface module size, try linking against a shared version of the static ODBC driver libs. You can create a pseudo-shared lib by telling the linker to wrap the static ones into a single shared one:
ld -shared --whole-archive odbclib.a libsqlrte.a libsqlptc.a \
   -lncurses -o /usr/local/lib/libadabasodbc.so
Note: The ADABAS ODBC driver returns microseconds in the timestamp fraction field. Because of this the Setup includes a define to do the conversion to seconds using a microseconds scale instead of the ODBC standard nanosecond scale (see the history section for more details on this problem).
The module has been tested under Linux 2 with Adabas D 6.1.1. Linux Edition. Since the ODBC driver for Adabas on Linux also provides the DriverConnect() API it is also exposed by the package (even though the driver itself is not an ODBC manager).
MySQL is a SQL database for Unix and Windows platforms developed by TCX. It is free for most types of usage (see their FAQ for details) and offers good performance and stability. To download MySQL and the ODBC driver MyODBC, check the www.mysql.org website.
There is one peculiarity with the ODBC driver for MySQL: all input parameters are processed as strings -- even integers and floats. The ODBC driver implements the necessary conversions. mxODBC uses the Python type binding method to bind the input parameters; see the Constants section.
Since MySQL does not support transactions, clearing the auto-commit flag on connections (which is normally done per default by the connection constructors) will not work. The subpackage simply uses auto-commit mode as default. You can turn this "work-around" off by editing MySQL/Setup and removing the switch DONT_CLEAR_AUTOCOMMIT if the feature should become available.
When using the MyODBC RPMs available from www.mysql.org, please be sure to also have the MySQL shared libs RPM and the MySQL development RPM installed.
Important: The setup MySQL + MyODBC showed some serious memory leaks on my SuSE 5.3 (libc5) Linux machine; the leakage does not occur when accessing the Linux server from a Windows client or when running the setup using SuSE 6.2 (glibc6). The leakage could be libc or compiler related -- it is not mxODBC or MyODBC related. Please check your setup using the included test.py script.
PostgreSQL is a free SQL database for Unix. The subpackage setup was originally provided by Cyril Elkaim. I updated it to PostgreSQL 7. The package links directly to the PostgreSQL ODBC driver. An alternative setup would be connecting to the database via one of the free ODBC managers iODBC or unixODBC also supported by mxODBC.
Even though I don't get any link errors and the ODBC driver does report errors, I haven't been able to actually connect to the PostgreSQL database. The driver keeps giving me strange error messages and I haven't yet figured out what exactly is causing them. Some of them indicate that the ODBC driver is not 100% ODBC 2.0 compatible, others are simply not leading in any obvious direction or seem to be caused by some communication sub-system used by the driver.
If you can get the setup to work, I'd appreciate feedback.
Solid Tech. offers a free personal edition of their database for Linux in addition to the standard server and webserver licenses. More information about prices, licenses and downloads is available on their website.
BTW: The Solid Server's low-level database API uses ODBC as interface standard (most other vendors have proprietary interfaces), so mxODBC should deliver the best performance possible.
Note: The Solid ODBC driver leaves out some of the ODBC 2.0 catalog functions. The missing ones are: SQLTablePrivileges, SQLColumnPrivileges, SQLForeignKeys, SQLProcedures, SQLProcedureColumns. You won't be able to use the corresponding cursor methods.
The setup for Solid was kindly donated by Andy Dustman from ComStar Communications Corp. He also found a long standing bug that needed fixing.
Sybase Adaptive Server Anywhere comes with its own ODBC driver against which mxODBC can link directly. The included Setup is for version 7 of the server.
In case you are running Linux, Sybase has some information on its web-site about the ASA ODBC driver and its setup on Linux. This whitepaper should also be of interest.
You can also use the OpenLink drivers for Sybase ASA: copy Setup.in to Setup and enable the OpenLink section in Setup before compiling.
In any case, you should also consult Paul Boddie's mxODBC Configuration page for the Sybase Adaptive Server Anywhere. It includes valuable information about the setup.
Thanks to Paul Boddie and Sam Rushing for helping in getting the package together.
Note that you will first have to get the Sybase ASE ODBC drivers from Merant in order to use this subpackage -- Sybase ASE does not include ODBC drivers (it's a completely different product than Sybase ASA). This whitepaper has some details about ODBC connectivity of ASE.
Gary Pennington from Sun Microsystems reported that the Merant evaluation drivers work with Sybase Adaptive Server 11.5 on Solaris 2.6.
You can also use the OpenLink drivers for Sybase ASE: copy Setup.in to Setup and enable the OpenLink section in Setup before compiling.
In any case, you should also consult Paul Boddie's mxODBC Configuration page for the Sybase Adaptive Server Enterprise version. It includes valuable information about the setup.
Oracle for Unix doesn't ship with Unix ODBC drivers. You can get them from Merant or OpenLink though (see the Hints section for URLs).
Once you have installed the ODBC drivers following the vendor's instructions, run make -f Makefile.pre.in boot in the Oracle/ subdirectory, enable the appropriate set of directives in Setup and then run make to finish the compilation.
Using Merant drivers is reported to work. Shawn Dyer (irin.com) has kindly provided the setup for this combination and some additional notes:
...we also set the following environment variables:

LD_LIBRARY_PATH= both the oracle lib path and the Merant library path
ODBCINI= the odbc.ini file in the Merant install

Once you talk to the Merant odbc driver, it seems to be a simple matter of setting up the ODBC data source name in their .ini file that has that stuff. At that point you can talk to any of their ODBC drivers you have installed.
To use the OpenLink driver setup instead copy Setup.in to Setup and enable the OpenLink section in Setup before compiling.
Informix for Unix doesn't come with Unix ODBC drivers, but there are a few sources for these: Informix sells the driver under the term "Informix CLI"; Merant and OpenLink also support Informix through their driver suites (see the Hints section for URLs).
Note: There is also a free Informix SDK available for a few commercial Unix platforms like HP-UX and Solaris. It includes the needed ODBC libs and header files (named infxcli.h and libifsql.a).
Once you have installed the ODBC drivers following the vendor's instructions, enable the appropriate set of directives in Setup, run make -f Makefile.pre.in boot in the Informix/ subdirectory, and then run make to finish the compilation.
To use the OpenLink driver setup instead copy Setup.in to Setup and enable the OpenLink section in Setup before compiling.
Gilles Lenfant has mailed me these instructions which you might find useful in setting up the Informix subpackage:
In addition to the change to Setup (or Setup.in) file edition before the "make -f Makefile.pre.in boot", I made it compile and run with the following changes in the Informix section of the "Setup" file (according to the latest "Informix ODBC Driver Programmer's Manual").
Note that this book must be read carefully for the setup of the Informix related environment variables: the user must have $INFORMIXDIR (Informix client software root) set, and his LD_LIBRARY_PATH must include "$INFORMIXDIR/lib:$INFORMIXDIR/lib/esql:$INFORMIXDIR/lib/cli"
Compiling mxODBC requires Informix client SDK (compile time free download from intraware.com) and ESQL/C libraries (client run-time libraries provided with the server CD - not free).
In addition, fixes and tuning must be done in $INFORMIXDIR/etc/odbcinst.ini, and the user must configure his data sources in $HOME/.odbc.ini or $ODBCINI file.
IBM provides a free personal edition of the powerful DB2 database engine which happens to use ODBC as native C level interface.
This package interfaces directly to the ODBC driver
included in the UNIX edition of the database. If you
want to access DB2 from Windows NT, please use the
Windows
subpackage.
There is one quirk you should watch out for: in order to connect to the IBM DB2 database the DB2INSTANCE environment variable must be set to the name of the DB2 instance you would like to connect to.
There may be more environment variables needed, please check the scripts that come with DB2 called db2profile (for bash) or db2cshrc (for C shell) which set the environment variables. Without having these set, mxODBC will fail to load and give you a traceback:
Traceback (most recent call last):
  ...
  from mxODBC import *
ImportError: initialization of module mxODBC failed
(mxODBC.InterfaceError: failed to retrieve error information (line 6778, rc=-1))

Unfortunately, the provided shell scripts are buggy, so simply sourcing them won't do any good; you will have to carefully create your own. A typical problem is that the scripts set LIBPATH or LD_LIBRARY_PATH which then causes the following traceback when trying to load mxODBC:
Traceback (most recent call last):
  ...
ImportError: mxODBC.so: No such file or directory
Also note that DB2 needs to be explicitly told that you want to connect to the database using ODBC. This is done by binding the IBM CLI driver against the database. Please consult the IBM DB2 documentation for details.
If you want to use the DriverConnect()
API,
you'll have to configure the IBM ODBC driver's data
source INI file which is named db2cli.ini and
usually found in the same directory as the above script
files.
DBMaker has a small lean and mean SQL database that comes with an ODBC driver. This subpackage interfaces directly to that ODBC driver.
Note: DBMaker's ODBC driver doesn't have all the advertised SQL catalog functions (the privilege functions are missing) and also doesn't support the .nativesql() method. It does provide a DriverConnect() API, though, which might be useful for connecting to databases across a network.
mxODBC currently does not support the use of file objects for in- and output of large objects. This may change in a future version though (the needed techniques are already in place).
The subpackage links against the DBMaker driver without problems. Testing has only been preliminary though, but since even CASEMaker advertises mxODBC as Python interface for DBMaker, I guess that they did some testing already ;-).
mxODBC compiles on Windows using VC++ and links against the Windows ODBC driver manager. The necessary import libs and header files are included in the VC++ package but are also available for free in the Microsoft ODBC SDK (now called MDAC SDK). Note that the latter is usually more up-to-date.
Compiling the module has to be done in the usual VC++ way (see the Windows install instructions), producing a DLL named mxODBC.pyd. All necessary files are located in the Windows/ subdirectory of the package, the main target being mxODBC.cpp.
Martin Sckopke (gis.ibfs.de) reported that when connecting to an ADABAS database through the ODBC manager, no 'host:' prefix to the DSN is necessary. He has the module running on Windows NT and is interfacing to ADABAS D and Oracle 8.0.x without problems.
Stephen Ng (grossprofit.com) who contributed the previous compiled versions and the VC6 project files is using mxODBC to interface to MS Access and now MS SQL Server.
Notes:
Use the DriverConnect()
API to connect to the
data source if you need to pass in extra configuration
information such as names of log files, etc.
If you have installed the win32 extensions by Mark Hammond et al. you'll run into a naming collision: there already is an odbc module (all lowercase letters) in the distribution that could be loaded instead of the ODBC package (all uppercase letters) depending on your configuration.
AFAIK, there are at least three ways to change this:
The subpackage defaults to SQL type binding mode, but reverts to Python type binding in case the connection does not support the SQLDescribeParam() API. MS Access is one candidate for which this API is not useable.
ODBC drivers working on single files, e.g. the MS Excel file driver, usually do not support transactions. mxODBC will not clear auto-commit for these drivers (it may sometimes still be necessary to set the clear_auto_commit flag in the connect constructors to 0).
If you have troubles with multiple cursors on connections to MS SQL Server the MS Knowledge Base Article INF: Multiple Active Microsoft SQL Server Statements has some valuable information for you. It seems that you'll have to force the usage of server side cursors to be able to execute multiple statements on a single connection to MS SQL Server. According to the article this is done by setting the connection option SQL.CURSOR_TYPE to e.g. SQL.CURSOR_DYNAMIC:
dbc.setconnectoption(SQL.CURSOR_TYPE, SQL.CURSOR_DYNAMIC)

Thanks to Damien Morton for tracking this down and digging up the MS KB article.
If you want to connect to a file data source (without having to configure it using the ODBC manager), you can do so by using the FILEDSN= parameter instead of the DSN= parameter:
DriverConnect('FILEDSN=test.dsn;UID=test;PWD=test')

This is sometimes useful when you want to dynamically setup a data source, e.g. a MS Access database.
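Connection strings like the one above are plain semicolon-separated key=value pairs, so they can be assembled from keyword options; the helper below is illustrative, not part of mxODBC:

```python
def connect_string(**options):
    """Build an ODBC connection string from keyword options.

    Keys are sorted to give a deterministic result; ODBC drivers
    accept the keywords in any order.
    """
    return ';'.join('%s=%s' % (key, value)
                    for key, value in sorted(options.items()))

print(connect_string(FILEDSN='test.dsn', UID='test', PWD='test'))
# FILEDSN=test.dsn;PWD=test;UID=test
```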
For more information about the FILEDSN-keyword and the other Windows ODBC manager features, see the Microsoft SQLDriverConnect() documentation.
mxODBC compiles against the iODBC version available from the www.iODBC.org site.
Note: Use the DriverConnect() API to connect to the data source if you need to pass in extra configuration information such as names of log files, etc.
I've successfully tested the interface with iODBC-2.50.3 on Linux. Note however that this version has a bug in the SQLDataSources() API which causes the mxODBC API DataSources() to return truncated data. Later versions may not have this problem.
Hint: You may experience problems when trying to connect to MySQL via MyODBC hooked to iODBC in case you are using the binary RPMs available from www.mysql.org. For some reason, the MyODBC driver does not reference the MySQL shared libs it needs to connect to the MySQL server and there's no way to tell iODBC to load two shared libs. Here's a hack which will allow you to create an import lib which solves the problem on Linux:
rm -f /usr/local/lib/libmyodbc.so
ld -shared --whole-archive /usr/local/lib/libmyodbc-2.50.34.so \
   /usr/lib/libmysqlclient.so.10 \
   -o /usr/local/lib/libmyodbc.so
ldconfig
mxODBC compiles against the current unixODBC version available from the www.unixODBC.org site.
Note: Use the DriverConnect() API to connect to the data source if you need to pass in extra configuration information such as names of log files, etc.
EasySoft has developed an ODBC-ODBC bridge which allows you to connect to e.g. a MS SQL Server running on an NT box from your Linux web-server.
The included setup was developed for the beta 0.2.4 of that bridge but should also work fine against the release version. You can download it via the product page or via FTP as a trial version. The personal editions are said to be available for free in the near future.
Remember to download setups for client (Linux in the example) and server (NT in the example).
Check out the list of links to other resources in the next section. Creation of new subpackages is explained in the installation guide.
ODBC drivers and managers are usually compiled as a shared library. When running CGI scripts most HTTP daemons (aka web servers) don't pass through the path for the dynamic loader (e.g. LD_LIBRARY_PATH) to the script, thus importing the mxODBC C extension will fail with unresolved symbols because the loader doesn't find the ODBC driver/manager's libs.
To have the loader find the path to those shared libs you can either wrap the Python script with a shell script that sets the path according to your system configuration or tell the HTTP daemon to set or pass these through (see the daemon's documentation for information on how to do this; for Apache the directives are named SetEnv and PassEnv).
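If wrapping with a shell script is not an option, the same effect can be sketched from inside the script itself by restarting the interpreter once with the adjusted environment before mx.ODBC is imported; the loader only reads LD_LIBRARY_PATH at process startup. This is only a sketch, and the library directory is an assumed example:

```python
import os
import sys

# Assumed example directory holding the ODBC driver/manager's shared libs.
ODBC_LIB_DIR = "/usr/local/lib"

def loader_path_with(libdir, current):
    """Return an LD_LIBRARY_PATH value that includes libdir."""
    parts = current.split(":") if current else []
    if libdir in parts:
        return current
    return ":".join([libdir] + parts)

def ensure_loader_path():
    """Re-exec Python with LD_LIBRARY_PATH set before mx.ODBC is imported.

    The dynamic loader reads LD_LIBRARY_PATH only at process startup,
    so we restart the interpreter once with the adjusted environment.
    """
    current = os.environ.get("LD_LIBRARY_PATH", "")
    wanted = loader_path_with(ODBC_LIB_DIR, current)
    if wanted != current:
        os.environ["LD_LIBRARY_PATH"] = wanted
        os.execve(sys.executable, [sys.executable] + sys.argv, os.environ)
```

Calling ensure_loader_path() at the very top of the CGI script, before any mx.ODBC import, restarts the process at most once.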
Thomas Heller has written a great tool which is based on the new distutils support in Python 2.0 and later. The tool allows you to freeze your application into a single standalone Windows application and is called py2exe.
When freezing mxODBC you may experience problems with py2exe not finding the DLLs needed by mxODBC. In this case you have to help py2exe find the correct subpackage for Windows by adding -i mx.ODBC.Windows,mx.DateTime to the py2exe command line: python py2exe -i mx.ODBC.Windows,mx.DateTime yourapp.py. After doing so, py2exe should have no problem finding the files mxODBC.pyd and mxDateTime.pyd needed by mx.ODBC.Windows and mx.DateTime.
Note: Freezing mxODBC together with an application and redistributing the resulting executables requires that you have obtained a proper license from eGenix.com permitting you to redistribute mxODBC along with a product. Please see the License section for more information.
There are several resources available online that should help you getting started with ODBC. Here is a small list of links useful for further reading:
Note: If you are not happy about the size of the SDK download (over 31MB), you can also grab the older 3.0 SDK, which might still be available from an FTP server. Look for "odbc3sdk.exe" using e.g. FTP Search.
Microsoft also supports a whole range of (desktop) ODBC drivers for various databases and file formats. These are available under the name "ODBC Desktop Database Drivers" (search the MS web-site for the exact URL) [wx1350.exe] and also included in the more up-to-date "Microsoft Data Access Components" (MDAC) archive [mdac_typ.exe].
PDF versions of the MS ODBC SDK docs reformatted for the Solid Server (it uses ODBC as its native API) are available from SolidTech.
Last time I checked it included ODBC drivers for: Access, dBase, Excel, Oracle, Paradox, Text (flat file CSV), FoxPro, MS SQL Server.
If you need to connect to databases running on other hosts, please contact the database vendor or check the CorVu list of ODBC drivers.
Another source for commercial ODBC drivers is OpenLink; to see if they support your client/server setup, check their compatibility matrix. They are giving away 2-client/10-connect licenses for free.
For a fairly large list of sources for ODBC drivers, have a look at the ODBC driver page provided by the CorVu Corporation. They also have some informative pages describing common database functions and operators which you may find helpful.
If you would like to connect to a database for which you don't have a Unix ODBC driver, you can also try the ODBC-ODBC bridge from EasySoft which redirects the queries to e.g. the NT ODBC driver for the database.
Alternatively, you could write a remote client (in Python) that communicates with your database via a WinNT-Box. Most databases provide Win95/NT ODBC drivers so you can use mxODBC with the Windows ODBC manager. This method is not exactly high-performance, but cheaper (provided you can come up with a running version in less than a day's work, that is...). The Python standard lib module SocketServer.py should get you going pretty fast. Protocol and security are up to you, of course.
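A minimal sketch of that client/server idea, using the standard library's socket server framework (socketserver, shipped as SocketServer.py in older Python versions). All names here are made up, and the handler returns a canned row instead of calling mxODBC, so only the wire protocol is shown:

```python
import socket
import socketserver  # shipped as SocketServer.py in older Python versions
import threading

class QueryHandler(socketserver.StreamRequestHandler):
    """Server side: read one SQL line, answer with a repr()'d row list."""
    def handle(self):
        sql = self.rfile.readline().strip().decode("ascii")
        # A real server would run: cursor.execute(sql); cursor.fetchall()
        rows = [("stub-result", sql)]
        self.wfile.write(repr(rows).encode("ascii") + b"\n")

def start_server(host="127.0.0.1", port=0):
    """Start the server in a background thread; return (server, port)."""
    server = socketserver.TCPServer((host, port), QueryHandler)
    thread = threading.Thread(target=server.serve_forever)
    thread.daemon = True
    thread.start()
    return server, server.server_address[1]

def remote_query(host, port, sql):
    """Client side: send one SQL line, read back the reply line."""
    with socket.create_connection((host, port)) as s:
        s.sendall(sql.encode("ascii") + b"\n")
        return s.makefile().readline().strip()
```

As noted above, protocol and security remain up to you; repr() is only the simplest possible serialization.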
Here is a very simple example of how to use mxODBC. More elaborate examples of using Python Database API compatible database interfaces can be found in the Database Topic Guide on http://www.python.org/. Andrew Kuchling's introduction to the Python Database API is especially good reading. There are also a few books on using Python DB API compatible interfaces, some of which cover mxODBC explicitly.
On Unix:
>>> import mx.ODBC
>>> import mx.ODBC.iODBC
>>> db = mx.ODBC.iODBC.DriverConnect('DSN=database;UID=user;PWD=passwd')
>>> c = db.cursor()
>>> c.execute('select count(*) from test')
>>> c.fetchone()
(305,)
>>> c.tables(None,None,None,None)
8
>>> mx.ODBC.print_resultset(c)
Column 1 | Column 2 | Column 3 | Column 4 | Column 5
---------------------------------------------------------------
'' | '' | 'test' | 'TABLE' | 'MySQL table'
'' | '' | 'test1' | 'TABLE' | 'MySQL table'
'' | '' | 'test4' | 'TABLE' | 'MySQL table'
'' | '' | 'testblobs' | 'TABLE' | 'MySQL table'
'' | '' | 'testblobs2' | 'TABLE' | 'MySQL table'
'' | '' | 'testdate' | 'TABLE' | 'MySQL table'
'' | '' | 'testdates' | 'TABLE' | 'MySQL table'
'' | '' | 'testdatetime' | 'TABLE' | 'MySQL table'
>>> c.close()
>>> db.close()
>>>
On Windows:
>>> import mx.ODBC
>>> import mx.ODBC.Windows
>>> db = mx.ODBC.Windows.DriverConnect('DSN=database;UID=user;PWD=passwd')
>>> c = db.cursor()
>>> c.execute('select count(*) from test')
>>> c.fetchone()
(305,)
>>> c.tables(None,None,None,None)
8
>>> mx.ODBC.print_resultset(c)
Column 1 | Column 2 | Column 3 | Column 4 | Column 5
---------------------------------------------------------------
'' | '' | 'test' | 'TABLE' | 'MySQL table'
'' | '' | 'test1' | 'TABLE' | 'MySQL table'
'' | '' | 'test4' | 'TABLE' | 'MySQL table'
'' | '' | 'testblobs' | 'TABLE' | 'MySQL table'
'' | '' | 'testblobs2' | 'TABLE' | 'MySQL table'
'' | '' | 'testdate' | 'TABLE' | 'MySQL table'
'' | '' | 'testdates' | 'TABLE' | 'MySQL table'
'' | '' | 'testdatetime' | 'TABLE' | 'MySQL table'
>>> c.close()
>>> db.close()
>>>
As you can see, mxODBC has the same interface on Unix and Windows which makes it an ideal basis for writing cross-platform database applications.
Note: When connecting to a database with transaction support, you should explicitly call .rollback() or .commit() prior to closing the connection. In the example this was omitted since the MySQL backend does not support transactions.
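The commit-before-close pattern looks like this. The sqlite3 module from the modern standard library is used below only as a stand-in DB API 2.0 module so the sketch is self-contained; with mxODBC you would obtain db from mx.ODBC.<subpackage>.DriverConnect(...) instead:

```python
import sqlite3  # stand-in DB API 2.0 module for this sketch

def run_in_transaction(db, statements):
    """Execute statements atomically: commit on success, roll back on error."""
    c = db.cursor()
    try:
        for sql in statements:
            c.execute(sql)
        db.commit()      # make the changes permanent
    except Exception:
        db.rollback()    # undo everything on any error
        raise
    finally:
        c.close()
```

Because both modules follow the DB API, the same function body works unchanged against an mxODBC connection object.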
[ODBC]
    [Adabas]     dbi.py  dbtypes.py  showdb.py
    [DB2]        dbi.py  dbtypes.py
    [DBMaker]    dbi.py  dbtypes.py
    Doc/
    [EasySoft]   dbi.py  dbtypes.py
    [Informix]   dbi.py  dbtypes.py
    [Misc]       dbinfo.py  proc.py  test.py
    [MySQL]      dbi.py  dbtypes.py
    [Oracle]     dbi.py  dbtypes.py
    [PostgreSQL] dbi.py  dbtypes.py
    [Solid]      dbi.py  dbtypes.py
    [SybaseASA]  dbi.py  dbtypes.py
    [SybaseASE]  dbi.py  dbtypes.py
    [Windows]    dbi.py  dbtypes.py
    [iODBC]      dbi.py  dbtypes.py
    [mxODBC]     dbi.py  dbtypes.py  makeSQLCodes.py
    [unixODBC]   dbi.py  dbtypes.py
    LazyModule.py
    ODBC.py
Entries enclosed in brackets are packages (i.e. they are directories that include an __init__.py file). Names with a trailing slash are plain subdirectories that are not accessible via import.
First, you will have to install another extension I wrote called mxDateTime, which is part of the eGenix.com mx BASE package. Be sure to fetch the latest release of both packages, since I synchronize the two whenever something changes. If you only update one of them, you may run into problems later.
After that is installed and running, download the eGenix.com mx COMMERCIAL package which is available from the same location and unzip it to a temporary directory.
You may also want to download the ODBC reference manuals from SolidTech which I used to develop this package.
Note: The subpackages section contains database specific installation notes. You may want to read those first before continuing the setup.
Next, follow the steps below for each of the subpackages that you intend to use.
If none of them fits your database configuration, create a new directory MyDatabase first and proceed as follows (please send me the modified Setup and mxODBC.h files for inclusion in future releases).
Unix:
Instructions for compiling the C extension(s) on Unix platforms:
Be sure that you have the Python development files installed on your machine (these are usually located in /usr/local/lib/pythonX.X/config/ or /usr/lib/pythonX.X/config/). If you don't, look for a python-devel or similar installation archive for your OS distribution and install this first.
1. If you are setting up a new database subpackage, copy all files from the mx/mxODBC directory to the subpackage directory, then edit the mxODBC.h header file and include the appropriate header files for your database.
2. Run:
       make -f Makefile.pre.in boot
3. Fix the include directories, libs and lib paths in the Setup file (this file uses the same syntax as the Modules/Setup file that you edit to configure Python); the file also contains some database specific hints.
4. Run:
       make
If you get an error like 'unresolved symbol: SQLxxx', try to add a '-DDONT_HAVE_SQLxxx' flag to the setup line in Setup and recompile (make clean; make).
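For orientation, a Setup line for a new subpackage might look like the following; all paths, library names and the extra switch are examples only and must be adapted to your database's ODBC driver:

```
mxODBC mxODBC.c -I/usr/local/mydb/include -L/usr/local/mydb/lib -lmydbodbc -DDONT_HAVE_SQLDescribeParam
```

The -DDONT_HAVE_... define is only needed if your driver lacks the corresponding SQL API, as described above.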
I would appreciate hearing about the changes you make in mxCOMMERCIAL.py, since I plan to enhance the installer to provide auto-detection of installed ODBC drivers. If you provide such information, you will be entitled to a free upgrade to the next version of mxODBC.
You now have a new package called ODBC with subpackages for each of your ODBC databases. Accessing a particular database is done by calling the connection constructor of the database subpackage, e.g. connection = ODBC.Adabas.Connect('host:DB','user','passwd'). You can use multiple databases at the same time using this mechanism.
If you plan to use the dbi.py abstraction module for a particular database, you can access the specific version for that database by importing ODBC.<database name>.dbi, e.g. ODBC.Adabas.dbi. Note that the dbi module is deprecated by DB API 2.0 and no longer maintained.
Please post any bug reports, questions etc. to the db-sig@python.org mailing list (see the Python Website for details on how to subscribe) or mail them directly to me.
The package includes a rudimentary test script that checks some of the database's features. As a side effect, this also provides a good regression test for the mxODBC interface.
To start the test, simply run the script ODBC/Misc/test.py. It will generate a few temporary tables (named mxODBC0001, mxODBC0002, etc.; no existing tables will be overwritten) and then test the interface-to-database communication. The tables are removed after the tests have run through. Here is some typical output:
MySQL 3.22.20a with MyODBC 2.50.22a, direct connection:
Subpackage Name [MySQL]:
DriverConnect arguments [DSN=test;UID=root]:
Clear AUTOCOMMIT ? (1/0) [0]
Run tests continuously to check for leaks ? (y/n) [n]
Show driver type information ? (y/n) [n]
Output file [stdout]:

Test suite:
Connecting to the database.
Connected to DBMS MySQL 2.50.22 using driver myodbc.dll 2.50.22; ODBC 02.50
BIGINT column type with 64bits           : supported
BINARY column type                       : type not supported
BIT column type                          : supported
BLOB column type with binary data        : supported
BLOBs with >32kB ASCII data              : supported
CHAR column type                         : supported
CHAR column type with binary data        : supported
CHAR padding (with spaces)               : not supported
DATE column type                         : supported
DATETIME column type                     : supported
DATETIME with string values              : not supported
DECIMAL column type                      : supported
DOUBLE column type                       : supported
FLOAT column type                        : supported
IMAGE column type                        : type not supported
INT column type                          : supported
INT column type using array processing   : supported
LONG BYTE column type                    : type not supported
LONG BYTE with >32kB ASCII data          : type not supported
LONG column type                         : type not supported
LONGs with >32kB ASCII data              : type not supported
LONGs with binary data                   : type not supported
MEMO column type                         : type not supported
MEMOs with >32kB ASCII data              : type not supported
MEMOs with binary data                   : type not supported
TEXT column type                         : supported
TIME column type                         : supported
TIMESTAMP column type                    : supported
TIMESTAMP data having fractions          : not supported
Transactions                             : not supported
VARBINARY column type                    : type not supported
VARCHAR column type                      : type not supported
VARCHAR column type with binary data     : type not supported
Variable Bind Method                     : Python Type
Disconnecting.
eGenix.com is providing commercial support for this package. If you are interested in receiving information about this service please see the eGenix.com Support Conditions.
© 1997-2000, Copyright by Marc-André Lemburg; All Rights Reserved. mailto: mal@egenix.com
© 2000-2004, Copyright by eGenix.com Software GmbH, Langenfeld, Germany; All Rights Reserved. mailto: info@egenix.com
This software is covered by the eGenix.com Commercial License Agreement. The text of the license is also included as file "LICENSE" in the package's main directory.
Please note that using this software in a commercial environment is not free of charge. You may use the software during an evaluation period as specified in the license, but subsequent use requires the ownership of a "Proof of Authorization" which you can buy online from eGenix.com.
Please see the eGenix.com mx Extensions Page for details about the license ordering process.
By downloading, copying, installing or otherwise using the software, you agree to be bound by the terms and conditions of the eGenix.com Commercial License Agreement.
Things that still need to be done:
Changes from 2.0.5 to 2.0.6:
Changes from 2.0.4 to 2.0.5:
Changes from 2.0.3 to 2.0.4:
Changes from 2.0.2 to 2.0.3:
Changes from 2.0.1 to 2.0.2:
Changes from 2.0.0 to 2.0.1:
Changes from 1.1.1 to 2.0.0:
Note that even though mxODBC should now be capable of dealing with improper order, you'll get higher performance by ordering the columns in such a way that only the last columns in the list need to be fetched via SQLGetData, while the others can be bound directly to a memory buffer and fetched without an extra call.
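The idea can be illustrated with a small hypothetical helper that reorders a column list so that "long" types end up last; the set of type names is an assumption for the sketch, not something mxODBC provides:

```python
# Assumed set of type names that typically must be fetched via
# SQLGetData rather than being bound to a memory buffer.
LONG_TYPES = {"BLOB", "TEXT", "LONGVARCHAR", "LONGVARBINARY"}

def order_columns(columns):
    """Reorder (name, typename) pairs so long columns come last.

    Returns the column names with directly bindable columns first,
    SQLGetData-style columns at the end.
    """
    bound = [name for name, typ in columns if typ.upper() not in LONG_TYPES]
    unbound = [name for name, typ in columns if typ.upper() in LONG_TYPES]
    return bound + unbound
```

The returned name list can then be joined into the SELECT clause of the query.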
Changes from 1.1.0 to 1.1.1:
Changes from 1.0.1 to 1.1.0:
Some of these databases simply ignored the rollback without any notice to the programmer (e.g. MySQL).
mxODBC now does a check to make sure that the connection supports transactions and raises a NotSupportedError in case .rollback() is called on such a connection (using AUTOCOMMIT also means disabling transactions!).
Changed the default setup for MySQL: since MySQL does not support transactions, the new switch DONT_HAVE_TRANSACTIONS is defined per default in the Setup file.
Note that usage of dbi.py is deprecated starting with DB API 2.0. It will no longer be supported in future versions of mxODBC.
Changes from 1.0.0 to 1.0.1:
Changes from 0.9.0 to 1.0.0:
The strange thing about the timestamp fractions is that the ODBC standard manuals say they represent nanoseconds, while an older SQL manual from IBM states they should in fact be microseconds. I've now adopted the ODBC point of view, but am still not sure which is right.
Since I couldn't make up my mind, I've introduced a compile time switch to set things up to use microseconds instead of nanoseconds: USE_MICROSECOND_FRACTIONS. See Setup.in for more information.
Some databases seem to use the old standard too: at least ADABAS does. The switch is defined in the ADABAS Setup per default.
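The difference between the two interpretations is simply a factor of 1000. A tiny sketch of the conversion, with the compile time switch modelled as a Python flag for illustration:

```python
def fraction_to_seconds(fraction, use_microseconds=False):
    """Convert a timestamp fraction field to seconds.

    use_microseconds=True models the USE_MICROSECOND_FRACTIONS
    behaviour (old SQL/ADABAS interpretation); the default follows
    the ODBC manuals, which define the field as nanoseconds.
    """
    if use_microseconds:
        return fraction / 1e6
    return fraction / 1e9
```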
Changes from 0.8.1 to 0.9.0:
Changes from 0.8.0 to 0.8.1:
Changes from 0.7 to 0.8.1:
int() will do the conversion for you. Same for floats. It is still better to pass the "right" types as no extra conversion is done in this case.
Changes from 0.6 to 0.7:
Changes from 0.5 to 0.6: