To optimize InnoDB transaction processing, find the ideal balance between the performance overhead of transactional features and the workload of your server. For example, an application might encounter performance issues if it commits thousands of times per second, and different performance issues if it commits only every 2-3 hours.
The default MySQL setting AUTOCOMMIT=1 can impose performance limitations on a busy database server. Where practical, wrap several related DML operations into a single transaction, by issuing SET AUTOCOMMIT=0 or a START TRANSACTION statement, followed by a COMMIT statement after making all the changes.
InnoDB must flush the log to disk at each transaction commit if that transaction made modifications to the database. When each change is followed by a commit (as with the default autocommit setting), the I/O throughput of the storage device puts a cap on the number of potential operations per second.
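As a minimal sketch of this pattern (the table and column names here are hypothetical), grouping several related changes under one explicit commit means the log is flushed once for the whole batch rather than once per statement:

    SET autocommit = 0;
    INSERT INTO orders (customer_id, total) VALUES (42, 19.99);
    UPDATE inventory SET quantity = quantity - 1 WHERE item_id = 7;
    INSERT INTO audit_log (event) VALUES ('order placed');
    COMMIT;  -- one log flush covers all three changes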
Alternatively, for transactions that consist only of a single SELECT statement, turning on AUTOCOMMIT helps InnoDB to recognize read-only transactions and optimize them. See Section 14.2.4.2.3, "Optimizations for Read-Only Transactions" for requirements.
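For example, with autocommit enabled a standalone SELECT is treated as its own read-only transaction; in MySQL 5.6.5 and later a transaction can also be marked read-only explicitly. A brief sketch (the orders table is hypothetical):

    SET autocommit = 1;
    SELECT COUNT(*) FROM orders;  -- eligible for the read-only optimization

    START TRANSACTION READ ONLY;  -- explicit form, MySQL 5.6.5 and later
    SELECT customer_id, total FROM orders WHERE order_date = CURDATE();
    COMMIT;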
Avoid performing rollbacks after inserting, updating, or deleting huge numbers of rows. If a big transaction is slowing down server performance, rolling it back can make the problem worse, potentially taking several times as long to perform as the original DML operations. Killing the database process does not help, because the rollback starts again on server startup.
To minimize the chance of this issue occurring: increase the size of the buffer pool so that all the DML changes can be cached rather than immediately written to disk; set innodb_change_buffering=all so that update and delete operations are buffered in addition to inserts; and consider issuing COMMIT statements periodically during the big DML operation, possibly breaking a single delete or update into multiple statements that operate on smaller numbers of rows.
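One way to apply that last suggestion, sketched here with a hypothetical session_log table, is to delete in bounded batches and commit after each batch, so that no single transaction ever accumulates much work to undo:

    -- Repeat until ROW_COUNT() reports 0 rows affected; each iteration
    -- is a small transaction with an equally small potential rollback.
    DELETE FROM session_log WHERE created < '2013-01-01' LIMIT 10000;
    COMMIT;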
To get rid of a runaway rollback once it occurs, increase the buffer pool so that the rollback becomes CPU-bound and runs fast, or kill the server and restart with innodb_force_recovery=3, as explained in Section 14.2.3.13, "The InnoDB Recovery Process".
This issue is expected to be less prominent in MySQL 5.5 and up, or in MySQL 5.1 with the InnoDB Plugin, because the default setting innodb_change_buffering=all allows update and delete operations to be cached in memory, making them faster to perform in the first place, and also faster to roll back if needed. Make sure to use this parameter setting on servers that process long-running transactions with many inserts, updates, or deletes.
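innodb_change_buffering is a dynamic variable, so it can be applied at runtime as well as in the configuration file; a sketch of both:

    -- At runtime, from a client session with sufficient privileges:
    SET GLOBAL innodb_change_buffering = 'all';

    # Or persistently, in my.cnf:
    [mysqld]
    innodb_change_buffering = all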
If you can afford the loss of some of the latest committed transactions if a crash occurs, you can set the innodb_flush_log_at_trx_commit parameter to 0. InnoDB tries to flush the log once per second anyway, although the flush is not guaranteed. Also, set the value of innodb_support_xa to 0, which will reduce the number of disk flushes due to synchronizing on disk data and the binary log.
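In configuration-file form, these two relaxations might look like the following; both trade durability guarantees for fewer synchronous flushes, so use them only where that loss is acceptable:

    [mysqld]
    innodb_flush_log_at_trx_commit = 0  # flush the log roughly once per second
    innodb_support_xa = 0               # fewer flushes to sync data and binary log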
When rows are modified or deleted, the rows and associated undo logs are not physically removed immediately, or even immediately after the transaction commits. The old data is preserved until transactions that started earlier or concurrently are finished, so that those transactions can access the previous state of modified or deleted rows. Thus, a long-running transaction can prevent InnoDB from purging data that was changed by a different transaction.
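To check whether a long-running transaction is holding back purge, you can list the open transactions and watch the history list length (INFORMATION_SCHEMA.INNODB_TRX is available in MySQL 5.5 and up, or 5.1 with the InnoDB Plugin):

    -- Oldest transactions first; long-open ones pin old row versions.
    SELECT trx_id, trx_started, trx_mysql_thread_id
      FROM INFORMATION_SCHEMA.INNODB_TRX
     ORDER BY trx_started;

    -- The "History list length" line grows when purge falls behind.
    SHOW ENGINE INNODB STATUS\G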
When rows are modified or deleted within a long-running transaction, other transactions using the READ COMMITTED and REPEATABLE READ isolation levels have to do more work to reconstruct the older data if they read those same rows.
When a long-running transaction modifies a table, queries against that table from other transactions do not make use of the covering index technique. Queries that normally could retrieve all the result columns from a secondary index, instead look up the appropriate values from the table data.
If secondary index pages are found to have a PAGE_MAX_TRX_ID that is too new, or if records in the secondary index are delete-marked, InnoDB may need to look up records using a clustered index.
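You can observe whether the covering-index optimization applies with EXPLAIN: when the query is satisfied entirely from the secondary index, the Extra column shows "Using index". A hypothetical example:

    CREATE TABLE t (
      id   INT PRIMARY KEY,
      col1 INT,
      col2 INT,
      INDEX idx_c1_c2 (col1, col2)
    ) ENGINE=InnoDB;

    -- Normally reports "Using index"; while another long-running
    -- transaction has modified t, InnoDB may instead read the full
    -- rows from the clustered index.
    EXPLAIN SELECT col1, col2 FROM t WHERE col1 = 5;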