Here are code examples showing some operations whose performance, concurrency, and scalability are improved by the latest online DDL enhancements.

Example 5.1, "Schema Setup Code for Online DDL Experiments", sets up tables named BIG_TABLE and SMALL_TABLE that are used in the subsequent examples.

Example 5.2, "Speed and Efficiency of CREATE INDEX and DROP INDEX", illustrates the performance aspects of creating and dropping indexes.

Example 5.3, "Concurrent DML During CREATE INDEX and DROP INDEX", shows queries and DML statements running during a DROP INDEX operation.

Example 5.4, "Renaming a Column", demonstrates the speed improvement for renaming a column, and shows the care needed to keep the data type precisely the same when doing the rename operation.

Example 5.5, "Dropping Foreign Keys", demonstrates how foreign keys work with online DDL. Because two tables are involved in foreign key operations, there are extra locking considerations; as a result, tables with foreign keys sometimes have restrictions for online DDL operations.

Example 5.6, "Changing Auto-Increment Value", demonstrates how auto-increment columns work with online DDL. Tables with auto-increment columns sometimes have restrictions for online DDL operations.

Example 5.7, "Controlling Concurrency with the LOCK Clause", demonstrates the options to permit or restrict concurrent queries and DML operations while an online DDL operation is in progress. It shows the situations where the DDL statement might wait, where the concurrent transaction might wait, and where a concurrent DML statement might be cancelled because of a deadlock error.

Example 5.8, "Creating and Dropping Multiple Indexes in a Single Statement", demonstrates creating and dropping multiple indexes in a single statement, which can be more efficient than using a separate statement for each index operation.

Example 5.9, "Creating and Dropping the Primary Key", demonstrates how it is more efficient to define a primary key when creating the table, and relatively expensive to add one later.
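Before diving into the examples, here is a quick sketch of the clauses they rely on. The table and index names here (big_table, i_name, i_schema_name) are placeholders in the spirit of the experiments below, not definitions the examples depend on:

```sql
-- Request the in-place (fast) mechanism and allow concurrent queries and DML.
ALTER TABLE big_table ADD INDEX i_name (table_name), ALGORITHM=INPLACE, LOCK=NONE;

-- Batch several index operations into a single statement, so the table
-- is processed once rather than once per index operation.
ALTER TABLE big_table
  DROP INDEX i_name,
  ADD INDEX i_schema_name (table_schema, table_name),
  ALGORITHM=INPLACE;

-- ALGORITHM and LOCK are requests, not hints: if the server cannot perform
-- the operation at the requested level, the statement fails immediately
-- rather than silently falling back to a slower or more restrictive mode.
```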
Example 5.1. Schema Setup Code for Online DDL Experiments
Here is the code that sets up the initial tables used in these demonstrations:
/*
  Setup code for the online DDL demonstration:
  - Set up some config variables.
  - Create 2 tables that are clones of one of the INFORMATION_SCHEMA tables
    that always has some data. The "small" table has a couple of thousand rows.
    For the "big" table, keep doubling the data until it reaches over a million rows.
  - Set up a primary key for the sample tables, since we are demonstrating
    InnoDB aspects.
*/
set autocommit = 0;
set foreign_key_checks = 1;
set global innodb_file_per_table = 1;
set old_alter_table=0;
prompt mysql: 
use test;

\! echo "Setting up 'small' table:"
drop table if exists small_table;
create table small_table as select * from information_schema.columns;
alter table small_table add id int unsigned not null primary key auto_increment;
select count(id) from small_table;

\! echo "Setting up 'big' table:"
drop table if exists big_table;
create table big_table as select * from information_schema.columns;
show create table big_table\G
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
commit;
alter table big_table add id int unsigned not null primary key auto_increment;
select count(id) from big_table;
Running this code gives this output, condensed for brevity and with the most important points bolded:
Setting up 'small' table:
Query OK, 0 rows affected (0.01 sec)

Query OK, 1678 rows affected (0.13 sec)
Records: 1678  Duplicates: 0  Warnings: 0

Query OK, 1678 rows affected (0.07 sec)
Records: 1678  Duplicates: 0  Warnings: 0

+-----------+
| count(id) |
+-----------+
|      1678 |
+-----------+
1 row in set (0.00 sec)

Setting up 'big' table:
Query OK, 0 rows affected (0.16 sec)

Query OK, 1678 rows affected (0.17 sec)
Records: 1678  Duplicates: 0  Warnings: 0

*************************** 1. row ***************************
       Table: big_table
Create Table: CREATE TABLE `big_table` (
  `TABLE_CATALOG` varchar(512) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_SCHEMA` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `ORDINAL_POSITION` bigint(21) unsigned NOT NULL DEFAULT '0',
  `COLUMN_DEFAULT` longtext CHARACTER SET utf8,
  `IS_NULLABLE` varchar(3) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `DATA_TYPE` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `CHARACTER_MAXIMUM_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_OCTET_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_SCALE` bigint(21) unsigned DEFAULT NULL,
  `DATETIME_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_SET_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLLATION_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLUMN_TYPE` longtext CHARACTER SET utf8 NOT NULL,
  `COLUMN_KEY` varchar(3) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `EXTRA` varchar(30) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `PRIVILEGES` varchar(80) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_COMMENT` varchar(1024) CHARACTER SET utf8 NOT NULL DEFAULT ''
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

Query OK, 1678 rows affected (0.09 sec)
Records: 1678  Duplicates: 0  Warnings: 0

Query OK, 3356 rows affected (0.07 sec)
Records: 3356  Duplicates: 0  Warnings: 0

Query OK, 6712 rows affected (0.17 sec)
Records: 6712  Duplicates: 0  Warnings: 0

Query OK, 13424 rows affected (0.44 sec)
Records: 13424  Duplicates: 0  Warnings: 0

Query OK, 26848 rows affected (0.63 sec)
Records: 26848  Duplicates: 0  Warnings: 0

Query OK, 53696 rows affected (1.72 sec)
Records: 53696  Duplicates: 0  Warnings: 0

Query OK, 107392 rows affected (3.02 sec)
Records: 107392  Duplicates: 0  Warnings: 0

Query OK, 214784 rows affected (6.28 sec)
Records: 214784  Duplicates: 0  Warnings: 0

Query OK, 429568 rows affected (13.25 sec)
Records: 429568  Duplicates: 0  Warnings: 0

Query OK, 859136 rows affected (28.16 sec)
Records: 859136  Duplicates: 0  Warnings: 0

Query OK, 0 rows affected (0.03 sec)

Query OK, 1718272 rows affected (1 min 9.22 sec)
Records: 1718272  Duplicates: 0  Warnings: 0

+-----------+
| count(id) |
+-----------+
|   1718272 |
+-----------+
1 row in set (1.75 sec)
Example 5.2. Speed and Efficiency of CREATE INDEX and DROP INDEX
Here is a sequence of statements demonstrating the relative speed of CREATE INDEX and DROP INDEX statements. For a small table, the elapsed time is less than a second whether we use the fast or slow technique, so we look at the "rows affected" output to verify which operations can avoid the table rebuild. For a large table, the difference in efficiency is obvious, because skipping the table rebuild saves substantial time.
\! clear
\! echo "=== Create and drop index (small table, new/fast technique) ==="
\! echo
\! echo "Data size (kilobytes) before index created: "
\! du -k data/test/small_table.ibd
create index i_dtyp_small on small_table (data_type), algorithm=inplace;
\! echo "Data size after index created: "
\! du -k data/test/small_table.ibd
drop index i_dtyp_small on small_table, algorithm=inplace;

-- Compare against the older slower DDL.
\! echo "=== Create and drop index (small table, old/slow technique) ==="
\! echo
\! echo "Data size (kilobytes) before index created: "
\! du -k data/test/small_table.ibd
create index i_dtyp_small on small_table (data_type), algorithm=copy;
\! echo "Data size after index created: "
\! du -k data/test/small_table.ibd
drop index i_dtyp_small on small_table, algorithm=copy;

-- In the above example, we examined the "rows affected" number,
-- ideally looking for a zero figure. Let's try again with a larger
-- sample size, where we'll see that the actual time taken can
-- vary significantly.
\! echo "=== Create and drop index (big table, new/fast technique) ==="
\! echo
\! echo "Data size (kilobytes) before index created: "
\! du -k data/test/big_table.ibd
create index i_dtyp_big on big_table (data_type), algorithm=inplace;
\! echo "Data size after index created: "
\! du -k data/test/big_table.ibd
drop index i_dtyp_big on big_table, algorithm=inplace;

\! echo "=== Create and drop index (big table, old/slow technique) ==="
\! echo
\! echo "Data size (kilobytes) before index created: "
\! du -k data/test/big_table.ibd
create index i_dtyp_big on big_table (data_type), algorithm=copy;
\! echo "Data size after index created: "
\! du -k data/test/big_table.ibd
drop index i_dtyp_big on big_table, algorithm=copy;
Running this code gives this output, condensed for brevity and with the most important points bolded:
Query OK, 0 rows affected (0.00 sec)

=== Create and drop index (small table, new/fast technique) ===

Data size (kilobytes) before index created: 
384     data/test/small_table.ibd
Query OK, 0 rows affected (0.04 sec)
Records: 0  Duplicates: 0  Warnings: 0

Data size after index created: 
432     data/test/small_table.ibd
Query OK, 0 rows affected (0.02 sec)
Records: 0  Duplicates: 0  Warnings: 0

Query OK, 0 rows affected (0.00 sec)

=== Create and drop index (small table, old/slow technique) ===

Data size (kilobytes) before index created: 
432     data/test/small_table.ibd
Query OK, 1678 rows affected (0.12 sec)
Records: 1678  Duplicates: 0  Warnings: 0

Data size after index created: 
448     data/test/small_table.ibd
Query OK, 1678 rows affected (0.10 sec)
Records: 1678  Duplicates: 0  Warnings: 0

Query OK, 0 rows affected (0.00 sec)

=== Create and drop index (big table, new/fast technique) ===

Data size (kilobytes) before index created: 
315392  data/test/big_table.ibd
Query OK, 0 rows affected (33.32 sec)
Records: 0  Duplicates: 0  Warnings: 0

Data size after index created: 
335872  data/test/big_table.ibd
Query OK, 0 rows affected (0.02 sec)
Records: 0  Duplicates: 0  Warnings: 0

Query OK, 0 rows affected (0.00 sec)

=== Create and drop index (big table, old/slow technique) ===

Data size (kilobytes) before index created: 
335872  data/test/big_table.ibd
Query OK, 1718272 rows affected (1 min 5.01 sec)
Records: 1718272  Duplicates: 0  Warnings: 0

Data size after index created: 
348160  data/test/big_table.ibd
Query OK, 1718272 rows affected (46.59 sec)
Records: 1718272  Duplicates: 0  Warnings: 0
Example 5.3. Concurrent DML During CREATE INDEX and DROP INDEX
Here are some snippets of code that I ran in separate mysql sessions connected to the same database, to illustrate DML statements (insert, update, or delete) running at the same time as CREATE INDEX and DROP INDEX.
/*
  CREATE INDEX statement to run against a table while
  insert/update/delete statements are modifying the
  column being indexed.
*/
-- We'll run this script in one session, while simultaneously running
-- queries and DML against test/big_table.table_name in another session.
use test;
create index i_concurrent on big_table(table_name);
/*
  DROP INDEX statement to run against a table while
  insert/update/delete statements are modifying the
  column being indexed.
*/
-- We'll run this script in one session, while simultaneously running
-- queries and DML against test/big_table.table_name in another session.
use test;
drop index i_concurrent on big_table;
/*
  Some queries and insert/update/delete statements to run against a table
  while an index is being created or dropped. Previously, these operations
  would have stalled during the index create/drop period and possibly
  timed out or deadlocked.
*/
-- We'll run this script in one session, while simultaneously creating and dropping
-- an index on test/big_table.table_name in another session.
-- In our test instance, that column has about 1.7M rows, with 136 different values.
-- Sample values: COLUMNS (20480), ENGINES (6144), EVENTS (24576), FILES (38912),
-- TABLES (21504), VIEWS (10240).
set autocommit = 0;
use test;
select distinct character_set_name from big_table where table_name = 'FILES';
delete from big_table where table_name = 'FILES';
select distinct character_set_name from big_table where table_name = 'FILES';
-- I'll issue the final rollback interactively, not via script,
-- the better to control the timing.
-- rollback;
Running this code gives this output, condensed for brevity and with the most important points bolded:
mysql: source concurrent_ddl_create.sql
Database changed
Query OK, 0 rows affected (1 min 25.15 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql: source concurrent_ddl_drop.sql
Database changed
Query OK, 0 rows affected (24.98 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql: source concurrent_dml.sql
Query OK, 0 rows affected (0.00 sec)

Database changed
+--------------------+
| character_set_name |
+--------------------+
| NULL               |
| utf8               |
+--------------------+
2 rows in set (0.32 sec)

Query OK, 38912 rows affected (1.84 sec)

Empty set (0.01 sec)

mysql: rollback;
Query OK, 0 rows affected (1.05 sec)
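The concurrency behavior illustrated above can also be requested explicitly, which is the subject of Example 5.7. A minimal sketch, reusing the big_table and i_concurrent names from these experiments; the statement succeeds only if the requested lock level can actually be honored:

```sql
-- LOCK=NONE: allow both queries and DML against big_table while
-- the index is being built or dropped.
alter table big_table add index i_concurrent (table_name), algorithm=inplace, lock=none;
alter table big_table drop index i_concurrent, algorithm=inplace, lock=none;

-- LOCK=SHARED permits concurrent queries but blocks DML;
-- LOCK=EXCLUSIVE blocks both queries and DML.
-- If the operation cannot run at the requested level, the DDL statement
-- returns an error rather than downgrading to a weaker guarantee.
```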
Example 5.4. Renaming a Column
Here is a demonstration of using ALTER TABLE to rename a column. We use the new, fast DDL mechanism (algorithm=inplace) to change the name, then the old, slow table-copying mechanism (algorithm=copy) to restore the original column name.
Notes:
Because the syntax for renaming a column also involves re-specifying the data type, be very careful to specify exactly the same data type to avoid a costly table rebuild. In this case, we checked the output of show create table ... \G and copied any clauses such as CHARACTER SET and NOT NULL from the original column definition.
Again, renaming a column for a small table is fast enough that we need to examine the "rows affected" number to verify that the new DDL mechanism is more efficient than the old one. With a big table, the difference in elapsed time makes the improvement obvious.
/*
  Run through a sequence of 'rename column' statements.
  Because this operation involves only metadata, not table data,
  it is fast for big and small tables, with new or old DDL mechanisms.
*/
\! clear
\! echo "Rename column (fast technique, small table):"
alter table small_table change `IS_NULLABLE` `NULLABLE` varchar(3) character set utf8 not null, algorithm=inplace;
\! echo "Rename back to original name (slow technique):"
alter table small_table change `NULLABLE` `IS_NULLABLE` varchar(3) character set utf8 not null, algorithm=copy;
\! echo "Rename column (fast technique, big table):"
alter table big_table change `IS_NULLABLE` `NULLABLE` varchar(3) character set utf8 not null, algorithm=inplace;
\! echo "Rename back to original name (slow technique):"
alter table big_table change `NULLABLE` `IS_NULLABLE` varchar(3) character set utf8 not null, algorithm=copy;
Running this code gives this output, condensed for brevity and with the most important points bolded:
Rename column (fast technique, small table):
Query OK, 0 rows affected (0.05 sec)

Query OK, 0 rows affected (0.13 sec)
Records: 0  Duplicates: 0  Warnings: 0

Rename back to original name (slow technique):
Query OK, 0 rows affected (0.00 sec)

Query OK, 1678 rows affected (0.35 sec)
Records: 1678  Duplicates: 0  Warnings: 0

Rename column (fast technique, big table):
Query OK, 0 rows affected (0.00 sec)

Query OK, 0 rows affected (0.11 sec)
Records: 0  Duplicates: 0  Warnings: 0

Rename back to original name (slow technique):
Query OK, 0 rows affected (0.00 sec)

Query OK, 1718272 rows affected (1 min 0.00 sec)
Records: 1718272  Duplicates: 0  Warnings: 0

Query OK, 0 rows affected (0.00 sec)
Example 5.5. Dropping Foreign Keys
Here is a demonstration of foreign keys, including the improvement in the speed of dropping a foreign key constraint.
/*
  Demonstrate aspects of foreign keys that are or aren't affected by the DDL improvements.
  - Create a new table with only a few values to serve as the parent table.
  - Set up the 'small' and 'big' tables as child tables using a foreign key.
  - Verify that the ON DELETE CASCADE clause makes changes ripple from
    parent to child tables.
  - Drop the foreign key constraints, and optionally associated indexes.
    (This is the operation that is sped up.)
*/
\! clear

-- Make sure foreign keys are being enforced, and allow
-- rollback after doing some DELETEs that affect both
-- parent and child tables.
set foreign_key_checks = 1;
set autocommit = 0;

-- Create a parent table, containing values that we know are already present
-- in the child tables.
drop table if exists schema_names;
create table schema_names (
  id int unsigned not null primary key auto_increment,
  schema_name varchar(64) character set utf8 not null,
  index i_schema (schema_name)
) as select distinct table_schema schema_name from small_table;

show create table schema_names\G
show create table small_table\G
show create table big_table\G

-- Creating the foreign key constraint still involves a table rebuild when
-- foreign_key_checks=1, as illustrated by the "rows affected" figure.
alter table small_table add constraint small_fk foreign key i_table_schema (table_schema)
  references schema_names(schema_name) on delete cascade;
alter table big_table add constraint big_fk foreign key i_table_schema (table_schema)
  references schema_names(schema_name) on delete cascade;

show create table small_table\G
show create table big_table\G

select schema_name from schema_names order by schema_name;
select count(table_schema) howmany, table_schema from small_table group by table_schema;
select count(table_schema) howmany, table_schema from big_table group by table_schema;

-- schema_names is the parent table.
-- small_table and big_table are the child tables.
-- (One row in the parent table can have many "children" in the child table.)
-- Changes to the parent table can ripple through to the child tables.
-- For example, removing the value 'test' from schema_names.schema_name will
-- result in the removal of 20K or so rows from big_table.
delete from schema_names where schema_name = 'test';
select schema_name from schema_names order by schema_name;
select count(table_schema) howmany, table_schema from small_table group by table_schema;
select count(table_schema) howmany, table_schema from big_table group by table_schema;

-- Because we've turned off autocommit, we can still get back those deleted rows
-- if the DELETE was issued by mistake.
rollback;
select schema_name from schema_names order by schema_name;
select count(table_schema) howmany, table_schema from small_table group by table_schema;
select count(table_schema) howmany, table_schema from big_table group by table_schema;

-- All of the cross-checking between parent and child tables would be
-- deadly slow if there wasn't the requirement for the corresponding
-- columns to be indexed!
-- But we can get rid of the foreign key using a fast operation
-- that doesn't rebuild the table.
-- If we didn't specify a constraint name when setting up the foreign key, we would
-- have to find the auto-generated name such as 'big_table_ibfk_1' in the
-- output from 'show create table'.

-- For the small table, we'll drop the foreign key and the associated index.
-- Having an index on a small table is less critical.
\! echo "DROP FOREIGN KEY and INDEX from small_table:"
alter table small_table drop foreign key small_fk, drop index small_fk;

-- For the big table, we'll drop the foreign key and leave the associated index.
-- If we are still doing queries that reference the indexed column, the index is
-- very important to avoid a full table scan of the big table.
\! echo "DROP FOREIGN KEY from big_table:"
alter table big_table drop foreign key big_fk;

show create table small_table\G
show create table big_table\G
Running this code gives this output, condensed for brevity and with the most important points bolded:
Query OK, 0 rows affected (0.00 sec)

Query OK, 0 rows affected (0.00 sec)

Query OK, 0 rows affected (0.01 sec)

Query OK, 4 rows affected (0.03 sec)
Records: 4  Duplicates: 0  Warnings: 0

*************************** 1. row ***************************
       Table: schema_names
Create Table: CREATE TABLE `schema_names` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `schema_name` varchar(64) CHARACTER SET utf8 NOT NULL,
  PRIMARY KEY (`id`),
  KEY `i_schema` (`schema_name`)
) ENGINE=InnoDB AUTO_INCREMENT=8 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

*************************** 1. row ***************************
       Table: small_table
Create Table: CREATE TABLE `small_table` (
  `TABLE_CATALOG` varchar(512) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_SCHEMA` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `ORDINAL_POSITION` bigint(21) unsigned NOT NULL DEFAULT '0',
  `COLUMN_DEFAULT` longtext CHARACTER SET utf8,
  `IS_NULLABLE` varchar(3) CHARACTER SET utf8 NOT NULL,
  `DATA_TYPE` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `CHARACTER_MAXIMUM_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_OCTET_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_SCALE` bigint(21) unsigned DEFAULT NULL,
  `DATETIME_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_SET_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLLATION_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLUMN_TYPE` longtext CHARACTER SET utf8 NOT NULL,
  `COLUMN_KEY` varchar(3) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `EXTRA` varchar(30) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `PRIVILEGES` varchar(80) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_COMMENT` varchar(1024) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1679 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

*************************** 1. row ***************************
       Table: big_table
Create Table: CREATE TABLE `big_table` (
  `TABLE_CATALOG` varchar(512) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_SCHEMA` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `ORDINAL_POSITION` bigint(21) unsigned NOT NULL DEFAULT '0',
  `COLUMN_DEFAULT` longtext CHARACTER SET utf8,
  `IS_NULLABLE` varchar(3) CHARACTER SET utf8 NOT NULL,
  `DATA_TYPE` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `CHARACTER_MAXIMUM_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_OCTET_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_SCALE` bigint(21) unsigned DEFAULT NULL,
  `DATETIME_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_SET_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLLATION_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLUMN_TYPE` longtext CHARACTER SET utf8 NOT NULL,
  `COLUMN_KEY` varchar(3) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `EXTRA` varchar(30) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `PRIVILEGES` varchar(80) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_COMMENT` varchar(1024) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`id`),
  KEY `big_fk` (`TABLE_SCHEMA`)
) ENGINE=InnoDB AUTO_INCREMENT=1718273 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

Query OK, 1678 rows affected (0.10 sec)
Records: 1678  Duplicates: 0  Warnings: 0

Query OK, 1718272 rows affected (1 min 14.54 sec)
Records: 1718272  Duplicates: 0  Warnings: 0

*************************** 1. row ***************************
       Table: small_table
Create Table: CREATE TABLE `small_table` (
  `TABLE_CATALOG` varchar(512) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_SCHEMA` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `ORDINAL_POSITION` bigint(21) unsigned NOT NULL DEFAULT '0',
  `COLUMN_DEFAULT` longtext CHARACTER SET utf8,
  `IS_NULLABLE` varchar(3) CHARACTER SET utf8 NOT NULL,
  `DATA_TYPE` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `CHARACTER_MAXIMUM_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_OCTET_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_SCALE` bigint(21) unsigned DEFAULT NULL,
  `DATETIME_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_SET_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLLATION_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLUMN_TYPE` longtext CHARACTER SET utf8 NOT NULL,
  `COLUMN_KEY` varchar(3) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `EXTRA` varchar(30) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `PRIVILEGES` varchar(80) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_COMMENT` varchar(1024) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`id`),
  KEY `small_fk` (`TABLE_SCHEMA`),
  CONSTRAINT `small_fk` FOREIGN KEY (`TABLE_SCHEMA`) REFERENCES `schema_names` (`schema_name`) ON DELETE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=1679 DEFAULT CHARSET=latin1
1 row in set (0.12 sec)

*************************** 1. row ***************************
       Table: big_table
Create Table: CREATE TABLE `big_table` (
  `TABLE_CATALOG` varchar(512) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_SCHEMA` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `ORDINAL_POSITION` bigint(21) unsigned NOT NULL DEFAULT '0',
  `COLUMN_DEFAULT` longtext CHARACTER SET utf8,
  `IS_NULLABLE` varchar(3) CHARACTER SET utf8 NOT NULL,
  `DATA_TYPE` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `CHARACTER_MAXIMUM_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_OCTET_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_SCALE` bigint(21) unsigned DEFAULT NULL,
  `DATETIME_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_SET_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLLATION_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLUMN_TYPE` longtext CHARACTER SET utf8 NOT NULL,
  `COLUMN_KEY` varchar(3) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `EXTRA` varchar(30) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `PRIVILEGES` varchar(80) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_COMMENT` varchar(1024) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`id`),
  KEY `big_fk` (`TABLE_SCHEMA`),
  CONSTRAINT `big_fk` FOREIGN KEY (`TABLE_SCHEMA`) REFERENCES `schema_names` (`schema_name`) ON DELETE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=1718273 DEFAULT CHARSET=latin1
1 row in set (0.01 sec)

+--------------------+
| schema_name        |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.00 sec)

+---------+--------------------+
| howmany | table_schema       |
+---------+--------------------+
|     563 | information_schema |
|     286 | mysql              |
|     786 | performance_schema |
|      43 | test               |
+---------+--------------------+
4 rows in set (0.01 sec)

+---------+--------------------+
| howmany | table_schema       |
+---------+--------------------+
|  576512 | information_schema |
|  292864 | mysql              |
|  804864 | performance_schema |
|   44032 | test               |
+---------+--------------------+
4 rows in set (2.10 sec)

Query OK, 1 row affected (1.52 sec)

+--------------------+
| schema_name        |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
+--------------------+
3 rows in set (0.00 sec)

+---------+--------------------+
| howmany | table_schema       |
+---------+--------------------+
|     563 | information_schema |
|     286 | mysql               |
|     786 | performance_schema |
+---------+--------------------+
3 rows in set (0.00 sec)

+---------+--------------------+
| howmany | table_schema       |
+---------+--------------------+
|  576512 | information_schema |
|  292864 | mysql              |
|  804864 | performance_schema |
+---------+--------------------+
3 rows in set (1.74 sec)

Query OK, 0 rows affected (0.60 sec)

+--------------------+
| schema_name        |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.00 sec)

+---------+--------------------+
| howmany | table_schema       |
+---------+--------------------+
|     563 | information_schema |
|     286 | mysql              |
|     786 | performance_schema |
|      43 | test               |
+---------+--------------------+
4 rows in set (0.01 sec)

+---------+--------------------+
| howmany | table_schema       |
+---------+--------------------+
|  576512 | information_schema |
|  292864 | mysql              |
|  804864 | performance_schema |
|   44032 | test               |
+---------+--------------------+
4 rows in set (1.59 sec)

DROP FOREIGN KEY and INDEX from small_table:
Query OK, 0 rows affected (0.02 sec)
Records: 0  Duplicates: 0  Warnings: 0

DROP FOREIGN KEY from big_table:
Query OK, 0 rows affected (0.02 sec)
Records: 0  Duplicates: 0  Warnings: 0

*************************** 1. row ***************************
       Table: small_table
Create Table: CREATE TABLE `small_table` (
  `TABLE_CATALOG` varchar(512) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_SCHEMA` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `ORDINAL_POSITION` bigint(21) unsigned NOT NULL DEFAULT '0',
  `COLUMN_DEFAULT` longtext CHARACTER SET utf8,
  `IS_NULLABLE` varchar(3) CHARACTER SET utf8 NOT NULL,
  `DATA_TYPE` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `CHARACTER_MAXIMUM_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_OCTET_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_SCALE` bigint(21) unsigned DEFAULT NULL,
  `DATETIME_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_SET_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLLATION_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLUMN_TYPE` longtext CHARACTER SET utf8 NOT NULL,
  `COLUMN_KEY` varchar(3) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `EXTRA` varchar(30) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `PRIVILEGES` varchar(80) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_COMMENT` varchar(1024) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1679 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

*************************** 1. row ***************************
       Table: big_table
Create Table: CREATE TABLE `big_table` (
  `TABLE_CATALOG` varchar(512) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_SCHEMA` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `ORDINAL_POSITION` bigint(21) unsigned NOT NULL DEFAULT '0',
  `COLUMN_DEFAULT` longtext CHARACTER SET utf8,
  `IS_NULLABLE` varchar(3) CHARACTER SET utf8 NOT NULL,
  `DATA_TYPE` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `CHARACTER_MAXIMUM_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_OCTET_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_SCALE` bigint(21) unsigned DEFAULT NULL,
  `DATETIME_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_SET_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLLATION_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLUMN_TYPE` longtext CHARACTER SET utf8 NOT NULL,
  `COLUMN_KEY` varchar(3) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `EXTRA` varchar(30) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `PRIVILEGES` varchar(80) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_COMMENT` varchar(1024) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`id`),
  KEY `big_fk` (`TABLE_SCHEMA`)
) ENGINE=InnoDB AUTO_INCREMENT=1718273 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)
Example 5.6. Changing Auto-Increment Value
Here is an illustration of increasing the auto-increment lower limit for a table column, demonstrating how this operation now avoids a table rebuild, plus some other fun facts about InnoDB auto-increment columns.
/*
  If this script is run after foreign_key.sql, the schema_names table is
  already set up. But to allow this script to run multiple times without
  running into duplicate ID errors, we set up the schema_names table
  all over again.
*/
\! clear
\! echo "=== Adjusting the Auto-Increment Limit for a Table ==="
\! echo

drop table if exists schema_names;
create table schema_names (
  id int unsigned not null primary key auto_increment,
  schema_name varchar(64) character set utf8 not null,
  index i_schema (schema_name)
) as select distinct table_schema schema_name from small_table;

\! echo "Initial state of schema_names table. AUTO_INCREMENT is included in SHOW CREATE TABLE output."
\! echo "Note how MySQL reserved a block of IDs, but only needed 4 of them in this transaction, so the next inserted values would get IDs 8 and 9."
show create table schema_names\G
select * from schema_names order by id;

\! echo "Inserting even a tiny amount of data can produce gaps in the ID sequence."
insert into schema_names (schema_name) values ('eight'), ('nine');

\! echo "Bumping auto-increment lower limit to 20 (fast mechanism):"
alter table schema_names auto_increment=20, algorithm=inplace;

\! echo "Inserting 2 rows that should get IDs 20 and 21:"
insert into schema_names (schema_name) values ('foo'), ('bar');
commit;

\! echo "Bumping auto-increment lower limit to 30 (slow mechanism):"
alter table schema_names auto_increment=30, algorithm=copy;

\! echo "Inserting 2 rows that should get IDs 30 and 31:"
insert into schema_names (schema_name) values ('bletch'),('baz');
commit;

select * from schema_names order by id;
\! echo "Final state of schema_names table. AUTO_INCREMENT value shows the next inserted row would get ID=32."
show create table schema_names\G
Running this code gives the following output, condensed for brevity:
=== Adjusting the Auto-Increment Limit for a Table ===

Query OK, 0 rows affected (0.01 sec)

Query OK, 4 rows affected (0.02 sec)
Records: 4  Duplicates: 0  Warnings: 0

Initial state of schema_names table. AUTO_INCREMENT is included in SHOW CREATE TABLE output.
Note how MySQL reserved a block of IDs, but only needed 4 of them in this transaction, so the next inserted values would get IDs 8 and 9.
*************************** 1. row ***************************
       Table: schema_names
Create Table: CREATE TABLE `schema_names` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `schema_name` varchar(64) CHARACTER SET utf8 NOT NULL,
  PRIMARY KEY (`id`),
  KEY `i_schema` (`schema_name`)
) ENGINE=InnoDB AUTO_INCREMENT=8 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

+----+--------------------+
| id | schema_name        |
+----+--------------------+
|  1 | information_schema |
|  2 | mysql              |
|  3 | performance_schema |
|  4 | test               |
+----+--------------------+
4 rows in set (0.00 sec)

Inserting even a tiny amount of data can produce gaps in the ID sequence.
Query OK, 2 rows affected (0.00 sec)
Records: 2  Duplicates: 0  Warnings: 0

Query OK, 0 rows affected (0.00 sec)

Bumping auto-increment lower limit to 20 (fast mechanism):
Query OK, 0 rows affected (0.01 sec)
Records: 0  Duplicates: 0  Warnings: 0

Inserting 2 rows that should get IDs 20 and 21:
Query OK, 2 rows affected (0.00 sec)
Records: 2  Duplicates: 0  Warnings: 0

Query OK, 0 rows affected (0.00 sec)

Bumping auto-increment lower limit to 30 (slow mechanism):
Query OK, 8 rows affected (0.02 sec)
Records: 8  Duplicates: 0  Warnings: 0

Inserting 2 rows that should get IDs 30 and 31:
Query OK, 2 rows affected (0.00 sec)
Records: 2  Duplicates: 0  Warnings: 0

Query OK, 0 rows affected (0.01 sec)

+----+--------------------+
| id | schema_name        |
+----+--------------------+
|  1 | information_schema |
|  2 | mysql              |
|  3 | performance_schema |
|  4 | test               |
|  8 | eight              |
|  9 | nine               |
| 20 | foo                |
| 21 | bar                |
| 30 | bletch             |
| 31 | baz                |
+----+--------------------+
10 rows in set (0.00 sec)

Query OK, 0 rows affected (0.00 sec)

Final state of schema_names table. AUTO_INCREMENT value shows the next inserted row would get ID=32.
*************************** 1. row ***************************
       Table: schema_names
Create Table: CREATE TABLE `schema_names` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `schema_name` varchar(64) CHARACTER SET utf8 NOT NULL,
  PRIMARY KEY (`id`),
  KEY `i_schema` (`schema_name`)
) ENGINE=InnoDB AUTO_INCREMENT=32 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)
Example 5.7. Controlling Concurrency with the LOCK Clause
This example shows how to use the LOCK clause of the ALTER TABLE statement to allow or deny concurrent access to the table while an online DDL operation is in progress. The clause has settings that allow both queries and DML statements (LOCK=NONE), queries only (LOCK=SHARED), or no concurrent access at all (LOCK=EXCLUSIVE).
In one session, we run a succession of ALTER TABLE statements to create and drop an index, using different values for the LOCK clause to see what happens with waiting or deadlocking in either session. We are using the same BIG_TABLE table as in previous examples, starting with approximately 1.7 million rows. For illustration purposes, we index and query the IS_NULLABLE column. (Although in real life it would be silly to make an index for a tiny column with only 2 distinct values.)
mysql: desc big_table;
+--------------------------+---------------------+------+-----+---------+----------------+
| Field                    | Type                | Null | Key | Default | Extra          |
+--------------------------+---------------------+------+-----+---------+----------------+
| TABLE_CATALOG            | varchar(512)        | NO   |     |         |                |
| TABLE_SCHEMA             | varchar(64)         | NO   |     |         |                |
| TABLE_NAME               | varchar(64)         | NO   |     |         |                |
| COLUMN_NAME              | varchar(64)         | NO   |     |         |                |
| ORDINAL_POSITION         | bigint(21) unsigned | NO   |     | 0       |                |
| COLUMN_DEFAULT           | longtext            | YES  |     | NULL    |                |
| IS_NULLABLE              | varchar(3)          | NO   |     |         |                |
...
+--------------------------+---------------------+------+-----+---------+----------------+
21 rows in set (0.14 sec)

mysql: alter table big_table add index i1(is_nullable);
Query OK, 0 rows affected (20.71 sec)

mysql: alter table big_table drop index i1;
Query OK, 0 rows affected (0.02 sec)

mysql: alter table big_table add index i1(is_nullable), lock=exclusive;
Query OK, 0 rows affected (19.44 sec)

mysql: alter table big_table drop index i1;
Query OK, 0 rows affected (0.03 sec)

mysql: alter table big_table add index i1(is_nullable), lock=shared;
Query OK, 0 rows affected (16.71 sec)

mysql: alter table big_table drop index i1;
Query OK, 0 rows affected (0.05 sec)

mysql: alter table big_table add index i1(is_nullable), lock=none;
Query OK, 0 rows affected (12.26 sec)

mysql: alter table big_table drop index i1;
Query OK, 0 rows affected (0.01 sec)

... repeat statements like the above while running queries ...
... and DML statements at the same time in another session ...
Nothing dramatic happens in the session running the DDL statements. Sometimes, an ALTER TABLE takes unusually long because it is waiting for another transaction to finish, one that modified the table during the DDL or queried the table before the DDL:
mysql: alter table big_table add index i1(is_nullable), lock=none;
Query OK, 0 rows affected (59.27 sec)
mysql: -- The previous ALTER took so long because it was waiting for all the concurrent
mysql: -- transactions to commit or roll back.
mysql: alter table big_table drop index i1;
Query OK, 0 rows affected (41.05 sec)
mysql: -- Even doing a SELECT on the table in the other session first causes
mysql: -- the ALTER TABLE above to stall until the transaction
mysql: -- surrounding the SELECT is committed or rolled back.
Here is the log from another session running concurrently, where we issue queries and DML statements against the table before, during, and after the DDL operations shown in the previous listings. This first listing shows queries only. We expect the queries to be allowed during DDL operations that use LOCK=NONE or LOCK=SHARED, and for the query to wait until the DDL is finished if the ALTER TABLE statement includes LOCK=EXCLUSIVE.
mysql: show variables like 'autocommit';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| autocommit    | ON    |
+---------------+-------+
1 row in set (0.01 sec)

mysql: -- A trial query before any ADD INDEX in the other session:
mysql: -- Note: because autocommit is enabled, each
mysql: -- transaction finishes immediately after the query.
mysql: select distinct is_nullable from big_table;
+-------------+
| is_nullable |
+-------------+
| NO          |
| YES         |
+-------------+
2 rows in set (4.49 sec)

mysql: -- Index is being created with LOCK=EXCLUSIVE on the ALTER statement.
mysql: -- The query waits until the DDL is finished before proceeding.
mysql: select distinct is_nullable from big_table;
+-------------+
| is_nullable |
+-------------+
| NO          |
| YES         |
+-------------+
2 rows in set (17.26 sec)

mysql: -- Index is being created with LOCK=SHARED on the ALTER statement.
mysql: -- The query returns its results while the DDL is in progress.
mysql: -- The same thing happens with LOCK=NONE on the ALTER statement.
mysql: select distinct is_nullable from big_table;
+-------------+
| is_nullable |
+-------------+
| NO          |
| YES         |
+-------------+
2 rows in set (3.11 sec)

mysql: -- Once the index is created, and with no DDL in progress,
mysql: -- queries referencing the indexed column are very fast:
mysql: select count(*) from big_table where is_nullable = 'YES';
+----------+
| count(*) |
+----------+
|   411648 |
+----------+
1 row in set (0.20 sec)

mysql: select distinct is_nullable from big_table;
+-------------+
| is_nullable |
+-------------+
| NO          |
| YES         |
+-------------+
2 rows in set (0.00 sec)
Now in this concurrent session, we run some transactions including DML statements, or a combination of DML statements and queries. We use DELETE statements to illustrate predictable, verifiable changes to the table. Because the transactions in this part can span multiple statements, we run these tests with autocommit turned off.
mysql: set global autocommit = off;
Query OK, 0 rows affected (0.00 sec)

mysql: -- Count the rows that will be involved in our DELETE statements:
mysql: select count(*) from big_table where is_nullable = 'YES';
+----------+
| count(*) |
+----------+
|   411648 |
+----------+
1 row in set (0.95 sec)

mysql: -- After this point, any DDL statements back in the other session
mysql: -- stall until we commit or roll back.
mysql: delete from big_table where is_nullable = 'YES' limit 11648;
Query OK, 11648 rows affected (0.14 sec)

mysql: select count(*) from big_table where is_nullable = 'YES';
+----------+
| count(*) |
+----------+
|   400000 |
+----------+
1 row in set (1.04 sec)

mysql: rollback;
Query OK, 0 rows affected (0.09 sec)

mysql: select count(*) from big_table where is_nullable = 'YES';
+----------+
| count(*) |
+----------+
|   411648 |
+----------+
1 row in set (0.93 sec)

mysql: -- OK, now we're going to try that during index creation with LOCK=NONE.
mysql: delete from big_table where is_nullable = 'YES' limit 11648;
Query OK, 11648 rows affected (0.21 sec)

mysql: -- We expect that now there will be 400000 'YES' rows left:
mysql: select count(*) from big_table where is_nullable = 'YES';
+----------+
| count(*) |
+----------+
|   400000 |
+----------+
1 row in set (1.25 sec)

mysql: -- In the other session, the ALTER TABLE is waiting before finishing,
mysql: -- because _this_ transaction hasn't committed or rolled back yet.
mysql: rollback;
Query OK, 0 rows affected (0.11 sec)

mysql: select count(*) from big_table where is_nullable = 'YES';
+----------+
| count(*) |
+----------+
|   411648 |
+----------+
1 row in set (0.19 sec)

mysql: -- The ROLLBACK left the table in the same state we originally found it.
mysql: -- Now let's make a permanent change while the index is being created,
mysql: -- again with ALTER TABLE ... , LOCK=NONE.
mysql: -- First, commit so the DROP INDEX in the other shell can finish;
mysql: -- the previous SELECT started a transaction that accessed the table.
mysql: commit;
Query OK, 0 rows affected (0.00 sec)

mysql: -- Now we add the index back in the other shell, then issue DML in this one
mysql: -- while the DDL is running.
mysql: delete from big_table where is_nullable = 'YES' limit 11648;
Query OK, 11648 rows affected (0.23 sec)

mysql: commit;
Query OK, 0 rows affected (0.01 sec)

mysql: -- In the other shell, the ADD INDEX has finished.
mysql: select count(*) from big_table where is_nullable = 'YES';
+----------+
| count(*) |
+----------+
|   400000 |
+----------+
1 row in set (0.19 sec)

mysql: -- At the point the new index is finished being created, it contains entries
mysql: -- only for the 400000 'YES' rows left when all concurrent transactions are finished.

mysql: -- Now we will run a similar test, while ALTER TABLE ... , LOCK=SHARED is running.
mysql: -- We expect a query to complete during the ALTER TABLE, but for the DELETE
mysql: -- to run into some kind of issue.
mysql: commit;
Query OK, 0 rows affected (0.00 sec)

mysql: -- As expected, the query returns results while the LOCK=SHARED DDL is running:
mysql: select count(*) from big_table where is_nullable = 'YES';
+----------+
| count(*) |
+----------+
|   400000 |
+----------+
1 row in set (2.07 sec)

mysql: -- The DDL in the other session is not going to finish until this transaction
mysql: -- is committed or rolled back. If we tried a DELETE now and it waited because
mysql: -- of LOCK=SHARED on the DDL, both transactions would wait forever (deadlock).
mysql: -- MySQL detects this condition and cancels the attempted DML statement.
mysql: delete from big_table where is_nullable = 'YES' limit 100000;
ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction

mysql: -- The transaction here is still going, so in the other shell, the ADD INDEX operation
mysql: -- is waiting for this transaction to commit or roll back.
mysql: rollback;
Query OK, 0 rows affected (0.00 sec)

mysql: -- Now let's try issuing a query and some DML, on one line, while running
mysql: -- ALTER TABLE ... , LOCK=EXCLUSIVE in the other shell.
mysql: -- Notice how even the query is held up until the DDL is finished.
mysql: -- By the time the DELETE is issued, there is no conflicting access
mysql: -- to the table and we avoid the deadlock error.
mysql: select count(*) from big_table where is_nullable = 'YES'; delete from big_table where is_nullable = 'YES' limit 100000;
+----------+
| count(*) |
+----------+
|   400000 |
+----------+
1 row in set (15.98 sec)
Query OK, 100000 rows affected (2.81 sec)

mysql: select count(*) from big_table where is_nullable = 'YES';
+----------+
| count(*) |
+----------+
|   300000 |
+----------+
1 row in set (0.17 sec)

mysql: rollback;
Query OK, 0 rows affected (1.36 sec)

mysql: select count(*) from big_table where is_nullable = 'YES';
+----------+
| count(*) |
+----------+
|   400000 |
+----------+
1 row in set (0.19 sec)

mysql: commit;
Query OK, 0 rows affected (0.00 sec)

mysql: -- Next, we try ALTER TABLE ... , LOCK=EXCLUSIVE in the other session
mysql: -- and only issue DML, not any query, in the concurrent transaction here.
mysql: delete from big_table where is_nullable = 'YES' limit 100000;
Query OK, 100000 rows affected (16.37 sec)

mysql: -- That was OK because the ALTER TABLE did not have to wait for the transaction
mysql: -- here to complete. The DELETE in this session waited until the index was ready.
mysql: select count(*) from big_table where is_nullable = 'YES';
+----------+
| count(*) |
+----------+
|   300000 |
+----------+
1 row in set (0.16 sec)

mysql: commit;
Query OK, 0 rows affected (0.00 sec)
In the preceding example listings, we learned that:

- The LOCK clause for ALTER TABLE is set off from the rest of the statement by a comma.

- Online DDL operations might wait before starting, until any prior transactions that access the table are committed or rolled back.

- Online DDL operations might wait before completing, until any concurrent transactions that access the table are committed or rolled back.

- While an online DDL operation is running, concurrent queries are relatively straightforward, as long as the ALTER TABLE statement uses LOCK=NONE or LOCK=SHARED.

- Pay attention to whether autocommit is turned on or off. If it is turned off, be careful to end transactions in other sessions (even ones that only issue queries) before performing DDL operations on the table.

- With LOCK=SHARED, concurrent transactions that mix queries and DML could encounter deadlock errors and have to be restarted after the DDL is finished.

- With LOCK=NONE, concurrent transactions can freely mix queries and DML. The DDL operation waits until the concurrent transactions are committed or rolled back.

- With LOCK=EXCLUSIVE, concurrent transactions can freely mix queries and DML, but those transactions wait until the DDL operation is finished before they can access the table.
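The LOCK clause settings summarized above can be sketched as follows. (This is a minimal sketch, not from the example scripts: the table t1, its column c1, and the index name i_c1 are invented for illustration.)

```sql
-- LOCK=NONE: permit concurrent queries and DML while the index is built.
ALTER TABLE t1 ADD INDEX i_c1 (c1), LOCK=NONE;

-- LOCK=SHARED: permit concurrent queries, but block DML.
ALTER TABLE t1 DROP INDEX i_c1, LOCK=SHARED;

-- LOCK=EXCLUSIVE: block both queries and DML until the DDL finishes.
ALTER TABLE t1 ADD INDEX i_c1 (c1), LOCK=EXCLUSIVE;
```

If the requested LOCK level cannot be honored for a particular operation, the ALTER TABLE statement fails with an error rather than silently taking a more restrictive lock.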
Example 5.8. Creating and Dropping Multiple Indexes in a Single Statement
You can create multiple indexes on a table with one ALTER TABLE statement. This is relatively efficient, because the clustered index of the table needs to be scanned only once (although the data is sorted separately for each new index). For example:
CREATE TABLE T1 (A INT PRIMARY KEY, B INT, C CHAR(1)) ENGINE=InnoDB;
INSERT INTO T1 VALUES (1,2,'a'), (2,3,'b'), (3,2,'c'), (4,3,'d'), (5,2,'e');
COMMIT;
ALTER TABLE T1 ADD INDEX (B), ADD UNIQUE INDEX (C);
The above statements create table T1 with the primary key on column A, insert several rows, then build two new indexes on columns B and C. If many rows were inserted into T1 before the ALTER TABLE statement, this approach is much more efficient than creating all the secondary indexes before loading the data.
Because dropping InnoDB secondary indexes also does not require any copying of table data, it is equally efficient to drop multiple indexes with a single ALTER TABLE statement or with multiple DROP INDEX statements:
ALTER TABLE T1 DROP INDEX B, DROP INDEX C;
or:
DROP INDEX B ON T1;
DROP INDEX C ON T1;
Example 5.9. Creating and Dropping the Primary Key
Restructuring the clustered index for an InnoDB table always requires copying the table data. Thus, it is best to define the primary key when you create the table, rather than issuing ALTER TABLE ... ADD PRIMARY KEY later, to avoid rebuilding the table. Defining a PRIMARY KEY later causes the data to be copied, as in the following example:
CREATE TABLE T2 (A INT, B INT);
INSERT INTO T2 VALUES (NULL, 1);
ALTER TABLE T2 ADD PRIMARY KEY (B);
When you create a UNIQUE or PRIMARY KEY index, MySQL must do some extra work. For UNIQUE indexes, MySQL checks that the table contains no duplicate values for the key. For a PRIMARY KEY index, MySQL also checks that none of the PRIMARY KEY columns contains a NULL.
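As a brief sketch of those checks (the table uk_demo and its columns are invented for illustration, not part of the example scripts):

```sql
CREATE TABLE uk_demo (c1 INT, c2 INT) ENGINE=InnoDB;
INSERT INTO uk_demo VALUES (1, NULL), (1, 2);

-- Fails with a duplicate-key error: column c1 contains the value 1 twice.
ALTER TABLE uk_demo ADD UNIQUE INDEX (c1);

-- Under a strict SQL mode, fails because c2 contains a NULL;
-- PRIMARY KEY columns must not hold NULL values.
ALTER TABLE uk_demo ADD PRIMARY KEY (c2);
```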
When you add a primary key using the ALGORITHM=COPY clause, MySQL actually converts NULL values in the associated columns to default values: 0 for numbers, the empty string for character-based columns and BLOBs, and the "zero" date ('0000-00-00 00:00:00') for dates. This is a non-standard behavior that Oracle recommends you not rely on. Adding a primary key using ALGORITHM=INPLACE is only allowed when the SQL_MODE setting includes the STRICT_TRANS_TABLES or STRICT_ALL_TABLES flags; when the SQL_MODE setting is strict, ADD PRIMARY KEY ... , ALGORITHM=INPLACE is allowed, but the statement can still fail if the requested primary key columns contain any NULL values. The ALGORITHM=INPLACE behavior is more standard-compliant.
The following example shows the different possibilities for the ADD PRIMARY KEY clause. With the ALGORITHM=COPY clause, the operation succeeds despite the presence of NULL values in the primary key columns; the data is silently changed, which could cause problems. With the ALGORITHM=INPLACE clause, the operation can fail for different reasons, because this setting treats data integrity as a high priority: the statement gives an error if the SQL_MODE setting is not "strict" enough, or if the primary key columns contain any NULL values. Once we address both of those requirements, the ALTER TABLE operation succeeds.
CREATE TABLE add_pk_via_copy (c1 INT, c2 VARCHAR(10), c3 DATETIME);
INSERT INTO add_pk_via_copy VALUES (1,'a','...'),(NULL,NULL,NULL);
-- Succeeds, silently converting the NULLs to default values.
ALTER TABLE add_pk_via_copy ADD PRIMARY KEY (c1,c2,c3), ALGORITHM=COPY;
SELECT * FROM add_pk_via_copy;

CREATE TABLE add_pk_via_inplace (c1 INT, c2 VARCHAR(10), c3 DATETIME);
INSERT INTO add_pk_via_inplace VALUES (1,'a','...'),(NULL,NULL,NULL);
-- Fails: the primary key columns contain NULL values.
SET sql_mode = 'strict_trans_tables';
ALTER TABLE add_pk_via_inplace ADD PRIMARY KEY (c1,c2,c3), ALGORITHM=INPLACE;
-- Fails: the SQL_MODE setting is not strict enough.
SET sql_mode = '';
ALTER TABLE add_pk_via_inplace ADD PRIMARY KEY (c1,c2,c3), ALGORITHM=INPLACE;
-- Succeeds once the NULLs are removed and a strict mode is restored.
DELETE FROM add_pk_via_inplace WHERE c1 IS NULL OR c2 IS NULL OR c3 IS NULL;
SET sql_mode = 'strict_trans_tables';
ALTER TABLE add_pk_via_inplace ADD PRIMARY KEY (c1,c2,c3), ALGORITHM=INPLACE;
SELECT * FROM add_pk_via_inplace;
If you create a table without a primary key, InnoDB chooses one for you, which can be the first UNIQUE key defined on NOT NULL columns, or a system-generated key. To avoid any uncertainty and the potential space requirement for an extra hidden column, specify the PRIMARY KEY clause as part of the CREATE TABLE statement.
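For example (a sketch; the table names implicit_pk, hidden_pk, and explicit_pk are invented for illustration):

```sql
-- The first UNIQUE index on NOT NULL columns can serve as the clustered index:
CREATE TABLE implicit_pk (c1 INT NOT NULL, c2 INT, UNIQUE KEY (c1)) ENGINE=InnoDB;

-- With no suitable key at all, InnoDB falls back to a hidden,
-- system-generated row ID column for the clustered index:
CREATE TABLE hidden_pk (c1 INT, c2 INT) ENGINE=InnoDB;

-- Recommended: remove the uncertainty by declaring the primary key up front:
CREATE TABLE explicit_pk (c1 INT NOT NULL PRIMARY KEY, c2 INT) ENGINE=InnoDB;
```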