This section summarizes what has been added to and removed from MySQL 5.6.
The following features have been added to MySQL 5.6:
Security improvements. These security improvements were made:
MySQL now provides a method for storing authentication credentials
encrypted in an option file named
.mylogin.cnf. To create the
file, use the mysql_config_editor utility. The file
can be read later by MySQL client programs to obtain authentication credentials for
connecting to a MySQL server. mysql_config_editor writes the
.mylogin.cnf file using encryption so the credentials are
not stored as clear text, and its contents when decrypted by client programs are used only
in memory. In this way, passwords can be stored in a file in non-cleartext format and used
later without ever needing to be exposed on the command line or in an environment variable.
For more information, see Section 4.6.6, "mysql_config_editor — MySQL Configuration Utility".
MySQL now supports stronger encryption for user account passwords,
available through an authentication plugin named
sha256_password that implements SHA-256 password hashing.
This plugin is built in, so it is always available and need not be loaded explicitly. For
more information, including instructions for creating accounts that use SHA-256 passwords,
see "The SHA-256 Authentication Plugin".
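As a sketch, creating an account that uses SHA-256 password hashing might look like the following (the account name and password are illustrative):

```sql
-- Create an account that authenticates with the sha256_password plugin
CREATE USER 'appuser'@'localhost' IDENTIFIED WITH sha256_password;

-- In MySQL 5.6, setting old_passwords = 2 makes PASSWORD() use SHA-256 hashing
SET old_passwords = 2;
SET PASSWORD FOR 'appuser'@'localhost' = PASSWORD('Sample-Pass-1');
```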
The mysql.user table now has a password_expired column. Its default value is 'N', but can be set to 'Y' with the new ALTER USER statement. After an account's password has been expired, all operations performed in subsequent connections to the server using the account result in an error until the user issues a SET PASSWORD statement to establish a new account password. For more information, see "ALTER USER Syntax".
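For illustration, expiring and then resetting an account password might look like this (the account name is hypothetical):

```sql
-- A DBA expires the password for an existing account
ALTER USER 'jeffrey'@'localhost' PASSWORD EXPIRE;

-- Connections as that account then get errors until the user sets a new password
SET PASSWORD = PASSWORD('New-Pass-2');
```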
MySQL now has provision for checking password security:
In statements that assign a password supplied as a cleartext value, the value is checked against the current password policy and rejected if it is weak (the statement returns an ER_NOT_VALID_PASSWORD error). This affects the CREATE USER, GRANT, and SET PASSWORD statements. Passwords given as arguments to the PASSWORD() and OLD_PASSWORD() functions are checked as well.
The strength of potential passwords can be assessed using the new VALIDATE_PASSWORD_STRENGTH() SQL function, which takes a password argument and returns an integer from 0 (weak) to 100 (strong).
Both capabilities are implemented by the validate_password plugin. For more information, see "The Password Validation Plugin".
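Assuming the validate_password plugin is installed, password strength can be checked directly; the exact scores depend on the current policy settings, so no particular result is guaranteed:

```sql
-- Returns 0 (weak) to 100 (strong) for each candidate password
SELECT VALIDATE_PASSWORD_STRENGTH('abc')            AS weak_score,
       VALIDATE_PASSWORD_STRENGTH('N0t-So-Easy!42') AS stronger_score;
```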
mysql_upgrade now produces a warning if it finds user accounts with passwords hashed with the older pre-4.1 hashing method. Such accounts should be updated to use more secure password hashing. See "Password Hashing in MySQL".
On Unix platforms, mysql_install_db supports a new option,
--random-passwords, that provides for more secure MySQL
installation. Invoking mysql_install_db with
--random-passwords causes it to assign a random password
to the MySQL
root accounts, set the "password expired" flag for those accounts,
and remove the anonymous-user MySQL accounts. For additional details, see Section
4.4.3, "mysql_install_db — Initialize
MySQL Data Directory".
Logging has been modified so that passwords do not appear in plain text in statements written to the general query log, slow query log, and binary log. See "Passwords and Logging".
The START SLAVE syntax has been modified to permit connection parameters to be specified for connecting to the master. This provides an alternative to storing the password in the master.info file. See "START SLAVE Syntax".
Changes to server defaults. Beginning with MySQL 5.6.6, several MySQL Server parameter defaults differ from the defaults in previous releases. The motivation for these changes is to provide better out-of-box performance and to reduce the need for database administrators to change settings manually. For more information, see "Changes to Server Defaults".
InnoDB enhancements. These
InnoDB enhancements were added:
You can create
FULLTEXT indexes on
InnoDB tables, and query them using the
MATCH() ... AGAINST syntax. This feature includes a new
proximity search operator (
@) and several new configuration options and INFORMATION_SCHEMA tables. See "InnoDB FULLTEXT Indexes" for more information.
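A brief sketch of an InnoDB full-text index and query (the table and data are made up for illustration):

```sql
CREATE TABLE articles (
  id    INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  title VARCHAR(200),
  body  TEXT,
  FULLTEXT (title, body)          -- full-text index, now supported on InnoDB
) ENGINE=InnoDB;

INSERT INTO articles (title, body) VALUES
  ('MySQL Tutorial',   'DBMS stands for DataBase Management System'),
  ('InnoDB Full-Text', 'InnoDB now supports FULLTEXT indexes');

SELECT * FROM articles
WHERE MATCH (title, body) AGAINST ('database' IN NATURAL LANGUAGE MODE);
```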
Several ALTER TABLE operations can be performed without copying the table, without blocking inserts, updates, and deletes to the table, or both. These enhancements are known
collectively as online
DDL. See Section 5.5, "Online DDL for
InnoDB Tables" for details.
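As a sketch, an index can be added in place while DML continues, assuming a table t with a column c exists; the ALGORITHM and LOCK clauses make the expectation explicit, and the statement fails with an error if it cannot be met:

```sql
-- Build the index without copying the table and without blocking DML
ALTER TABLE t ADD INDEX idx_c (c), ALGORITHM=INPLACE, LOCK=NONE;
```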
You have more flexibility to move around the .ibd files created in file-per-table mode to suit your
storage devices and database servers. When creating a table, you can designate a location
outside the MySQL data directory to hold the
.ibd file, for
example to put a busy table on an SSD
device or a huge table on a high-capacity HDD device. You can export a
table from one MySQL instance and import it in a different instance, without inconsistencies
or mismatches caused by buffered data, in-progress transactions, and internal bookkeeping
details such as the space
ID and LSN. See "Improved Tablespace Management" for details.
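A hedged sketch of these capabilities, with illustrative table names and paths:

```sql
-- Place a table's .ibd file outside the data directory, e.g. on an SSD
CREATE TABLE busy_table (id INT PRIMARY KEY)
  ENGINE=InnoDB DATA DIRECTORY='/ssd/data';

-- Export from one instance: quiesce the table and flush buffered changes
FLUSH TABLES busy_table FOR EXPORT;
-- (copy the .ibd and .cfg files elsewhere, then release the lock)
UNLOCK TABLES;

-- Import on another instance that has a matching empty table definition
ALTER TABLE busy_table DISCARD TABLESPACE;
-- (copy the exported files into place, then attach them)
ALTER TABLE busy_table IMPORT TABLESPACE;
```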
You can now set the InnoDB page size for uncompressed tables to 8KB or 4KB, as an alternative to the default 16KB. This setting is controlled by the innodb_page_size configuration option. You specify the size when creating the MySQL instance. All InnoDB tablespaces within an instance share the same page size. Smaller page sizes can help to avoid redundant or inefficient I/O for certain combinations of workload and storage devices, particularly SSD devices with small block sizes.
Improvements to the algorithms for adaptive flushing make I/O operations more efficient and consistent under a variety of workloads. The new algorithm and default configuration values are expected to improve performance and concurrency for most users. Advanced users can fine-tune their I/O responsiveness through several configuration options. See "Improvements to Buffer Pool Flushing" for details.
You can code MySQL applications that access InnoDB tables through a NoSQL-style API. This feature uses the popular memcached daemon to relay requests such as
GET for key-value pairs. These simple operations to store and
retrieve data avoid the SQL overhead such as parsing and constructing a query
execution plan. You can access the same data through the NoSQL API and SQL. For
example, you might use the NoSQL API for fast updates and lookups, and SQL for complex
queries and compatibility with existing applications. See Section
14.2.9, "InnoDB Integration with memcached" for details.
Optimizer statistics for
InnoDB tables are
gathered at more predictable intervals and can persist across server restarts, for improved
stability. You can also control the amount of sampling done for
InnoDB indexes, to make the optimizer statistics more
accurate and improve the query execution plan. See "Persistent Optimizer Statistics for InnoDB Tables" for details.
New optimizations apply to read-only transactions,
improving performance and concurrency for ad-hoc queries and report-generating applications.
These optimizations are applied automatically when practical, or you can specify
START TRANSACTION READ ONLY to ensure the transaction is
read-only. See "Optimizations for Read-Only Transactions" for details.
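The explicit form looks like this (the query is a hypothetical reporting statement):

```sql
START TRANSACTION READ ONLY;
SELECT COUNT(*) FROM orders;   -- read-only work; writes would raise an error
COMMIT;
```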
You can move the InnoDB undo log out of the system tablespace into one or more separate tablespaces. The I/O patterns for the undo log make these new tablespaces good candidates to move to SSD storage, while keeping the system tablespace on hard disk storage. For details, see "Separate Tablespaces for InnoDB Undo Logs".
The InnoDB redo log files now have a maximum combined size of 512GB, increased from 4GB. You can specify the larger values through the innodb_log_file_size option. The startup behavior now automatically handles the situation where the size of the existing redo log files does not match the size specified by innodb_log_file_size.
The new
--innodb-read-only option lets you run a MySQL server in
read-only mode. You can access
InnoDB tables on read-only media
such as a DVD or CD, or set up a data warehouse with multiple instances all sharing the same
data directory. See "Support for Read-Only Media" for usage details.
New INFORMATION_SCHEMA tables provide information about the
InnoDB buffer pool, metadata about tables, indexes, and foreign
keys from the
InnoDB data dictionary, and low-level information
about performance metrics that complements the information from the Performance Schema tables.
InnoDB now limits the memory used to hold
table information when many tables are opened.
InnoDB has several internal performance
enhancements, including reducing contention by splitting the kernel mutex, moving flushing
operations from the main thread to a separate thread, enabling multiple purge threads, and
reducing contention for the buffer pool on large-memory systems.
InnoDB uses a new, faster algorithm to detect deadlocks.
Information about all
InnoDB deadlocks can be written to the
MySQL server error log, to help diagnose application issues.
To avoid a lengthy warmup period after
restarting the server, particularly for instances with large
InnoDB buffer pools, you can
reload pages into the buffer pool immediately after a restart. MySQL can dump a compact data
file at shutdown, then consult that data file to find the pages to
reload on the next restart. You can also manually dump or reload the buffer pool at any
time, for example during benchmarking or after complex report-generation queries. See "Faster Restart by Preloading the InnoDB Buffer Pool" for details.
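The manual dump and reload can be sketched with the system variables that are part of this feature:

```sql
-- Dump the current buffer pool contents (page identifiers only) to a file
SET GLOBAL innodb_buffer_pool_dump_now = ON;

-- Later, e.g. after a restart or a disruptive reporting run, reload those pages
SET GLOBAL innodb_buffer_pool_load_now = ON;

-- Check progress of the load
SHOW STATUS LIKE 'Innodb_buffer_pool_load_status';
```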
Partitioning. These table-partitioning enhancements were added:
The maximum number of partitions is increased to 8192. This number includes all partitions and all subpartitions of the table.
It is now possible to exchange a partition of a partitioned table or a
subpartition of a subpartitioned table with a nonpartitioned table that otherwise has the
same structure using the ALTER TABLE ... EXCHANGE PARTITION statement. This can be used, for example, to
import and export partitions. For more information and examples, see Section
18.3.3, "Exchanging Partitions and Subpartitions with Tables".
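A sketch of swapping a partition with a nonpartitioned table of identical structure (the table and partition names are illustrative):

```sql
-- Moves the rows of partition p2013 into sales_archive, and vice versa
ALTER TABLE sales_partitioned
  EXCHANGE PARTITION p2013 WITH TABLE sales_archive;
```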
Explicit selection of one or more partitions or subpartitions is now
supported for queries, as well as for many data modification statements, that act on
partitioned tables. For example, assume a table t with a column c has 4 partitions named p0 through p3. Then the query SELECT * FROM t PARTITION (p0, p1) WHERE c < 5 returns only those rows from partitions p0 and p1 for which c is less than 5.
A number of statements support explicit partition selection; for syntax, see the descriptions of the individual statements. For additional information and examples, see Section 18.5, "Partition Selection".
Partition lock pruning greatly improves performance of many DML and DDL
statements acting on tables with many partitions by helping to eliminate locks on partitions
that are not affected by these statements. Such statements include SELECT, SELECT ... PARTITION, INSERT, and UPDATE, as well as many other statements. For more information, including a complete listing of the statements whose performance has thus been improved, see Section 18.6.4, "Partitioning and Locking".
Performance Schema. The Performance Schema includes several new features:
Instrumentation for table input and output. Instrumented operations include row-level accesses to persistent base tables or temporary tables. Operations that affect rows are fetch, insert, update, and delete.
Event filtering by table, based on schema and/or table names.
Event filtering by thread. More information is collected for threads.
Summary tables for table and index I/O, and for table locks.
Instrumentation for statements and stages within statements.
Configuration of instruments and consumers at server startup, which previously was possible only at runtime.
MySQL Cluster. MySQL Cluster is released as a separate product, with new
development for version 7.3 of the NDB storage engine based on MySQL 5.6. Clustering support is not available in mainline MySQL Server
5.6 releases. For more information about MySQL Cluster NDB 7.3, see Chapter
17, MySQL Cluster NDB 7.3.
MySQL Cluster releases are identified by a 3-part NDB version number. Currently, MySQL Cluster NDB
7.2, based on MySQL Server 5.5, is the most recent GA release series. For more information about MySQL Cluster NDB 7.2, see the MySQL Cluster NDB 7.2 documentation.
MySQL Cluster NDB 6.3, MySQL Cluster NDB 7.0, and MySQL Cluster NDB 7.1 are also still available.
These versions of MySQL Cluster are based on MySQL Server 5.1 and documented in the MySQL 5.1 Manual.
Replication and logging. These replication enhancements were added:
MySQL now supports transaction-based replication using global transaction identifiers (also known as "GTIDs"). This makes it possible to identify and track each transaction when it is committed on the originating server and as it is applied by any slaves.
Enabling of GTIDs in a replication setup is done primarily using the new --gtid-mode and --enforce-gtid-consistency server options. For information about additional options and variables introduced in support of GTIDs, see "Global Transaction ID Options and Variables".
When using GTIDs it is not necessary to refer to log files or positions within those files when starting a new slave or failing over to a new master, which greatly simplifies these tasks. For more information about provisioning servers for GTID replication with or without referring to binary log files, see "Using GTIDs for Failover and Scaleout".
GTID-based replication is completely transaction-based, which makes it simple to check the consistency of masters and slaves. If all transactions committed on a given master are also committed on a given slave, consistency between the two servers is guaranteed.
For more complete information about the implementation and use of GTIDs in MySQL Replication, see Section 16.1.3, "Replication with Global Transaction Identifiers".
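With GTIDs enabled, pointing a slave at a master no longer requires a binary log file name and position; a hedged sketch (host and credentials are hypothetical):

```sql
CHANGE MASTER TO
  MASTER_HOST = 'master.example.com',   -- hypothetical master host
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'repl_pass',
  MASTER_AUTO_POSITION = 1;             -- use GTIDs instead of file/position
START SLAVE;
```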
MySQL row-based replication now supports row image control. By logging
only those columns required for uniquely identifying and executing changes on each row (as
opposed to all columns) for each row change, it is possible to save disk space, network
resources, and memory usage. You can determine whether full or minimal rows are logged by setting the binlog_row_image server system variable to one of the values minimal (log required columns only), full (log all columns), or noblob (log all columns except unneeded BLOB or TEXT columns). See System variables used with the binary log, for more information.
Binary logs written and read by the MySQL Server are now crash-safe,
because only complete events (or transactions) are logged or read back. By default, the
server logs the length of the event as well as the event itself and uses this information to
verify that the event was written correctly. You can also cause the server to write
checksums for the events using CRC32 checksums by setting the
binlog_checksum system variable. To cause the server to
read checksums from the binary log, use the
master_verify_checksum system variable. The
slave_sql_verify_checksum system variable causes the slave
SQL thread to read checksums from the relay log.
MySQL now supports logging of master connection information and of
slave relay log information to tables as well as files. Use of these tables can be
controlled independently, by the --master-info-repository and --relay-log-info-repository server options. Setting master_info_repository=TABLE causes connection information to be logged in the slave_master_info table; setting relay_log_info_repository=TABLE causes relay log information to be logged to the slave_relay_log_info table. Both of these tables are created
automatically, in the
mysql system database.
In order for replication to be crash-safe, the
tables must each use a transactional storage engine, and beginning with MySQL 5.6.6,
these tables are created using
InnoDB for this reason. (Bug #13538891) If you are
using a previous MySQL 5.6 release in which both of these tables use
MyISAM, this means that, prior to starting
replication, you must convert both of them to a transactional storage engine (such as
InnoDB) if you wish for replication to be crash-safe. You
can do this in such cases by means of the appropriate
ALTER TABLE ... ENGINE=... statements. You should
not attempt to change the storage engine used by
either of these tables while replication is actually running.
In order to guarantee crash safety on the slave, you must also run the
slave with the
--relay-log-recovery option enabled.
mysqlbinlog now has the capability to back
up a binary log in its original binary format. When invoked with the --read-from-remote-server and --raw options, mysqlbinlog connects to a server, requests the log files, and writes output files in the same format as the originals. See "Using mysqlbinlog to Back Up Binary Log Files".
MySQL now supports delayed replication such that a slave server
deliberately lags behind the master by at least a specified amount of time. The default
delay is 0 seconds. Use the new MASTER_DELAY option for CHANGE MASTER TO to set the delay.
Delayed replication can be used for purposes such as protecting against user mistakes on the master (a DBA can roll back a delayed slave to the time just before the disaster) or testing how the system behaves when there is a lag. See Section 16.3.9, "Delayed Replication".
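For example, a slave could be kept one hour behind the master:

```sql
STOP SLAVE;
CHANGE MASTER TO MASTER_DELAY = 3600;   -- delay in seconds
START SLAVE;
```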
A replication slave having multiple network interfaces can now be
caused to use only one of these (to the exclusion of the others) by using the
MASTER_BIND option when issuing a
CHANGE MASTER TO statement.
A log_bin_basename system variable has been added. This variable contains the complete filename and path to the binary log file. Whereas the log_bin system variable shows only whether or not binary logging is enabled, log_bin_basename reflects the name set with the --log-bin option. Similarly, the new relay_log_basename system variable shows the filename and complete path to the relay log file.
MySQL Replication now supports parallel execution of transactions with
multi-threading on the slave. When parallel execution is enabled, the slave SQL thread acts
as the coordinator for a number of slave worker threads as determined by the value of the
slave_parallel_workers server system variable. The
current implementation of multi-threading on the slave assumes that data and updates are
partitioned on a per-database basis, and that updates within a given database occur in the
same relative order as they do on the master. However, it is not necessary to coordinate
transactions between different databases. Transactions can then also be distributed per
database, which means that a worker thread on the slave can process successive
transactions on a given database without waiting for updates to other databases to complete.
Since transactions on different databases can occur in a different order on the slave
than on the master, simply checking for the most recently executed transaction is not a
guarantee that all previous transactions on the master have been executed on the slave.
This has implications for logging and recovery when using a multi-threaded slave. For
information about how to interpret binary logging information when using multi-threading
on the slave, see "SHOW SLAVE STATUS Syntax".
Optimizer enhancements. These query optimizer improvements were implemented:
The optimizer now more efficiently handles queries (and subqueries) of the following form:
SELECT ... FROM single_table ... ORDER BY non_index_column [DESC] LIMIT [M,] N;
That type of query is common in web applications that display only a few rows from a larger result set. For example:
SELECT col1, ... FROM t1 ... ORDER BY name LIMIT 10;
SELECT col1, ... FROM t1 ... ORDER BY RAND() LIMIT 15;
The sort buffer has a size of sort_buffer_size. If the sort elements for N rows are small enough to fit in the sort buffer (M + N rows if M was specified), the server can avoid using a merge file and perform the sort entirely in memory. For details, see "Optimizing LIMIT Queries".
The optimizer implements Disk-Sweep Multi-Range Read. Reading rows using a range scan on a secondary index can result in many random disk accesses to the base table when the table is large and not stored in the storage engine's cache. With the Disk-Sweep Multi-Range Read (MRR) optimization, MySQL tries to reduce the number of random disk accesses for range scans by first scanning the index only and collecting the keys for the relevant rows. Then the keys are sorted and finally the rows are retrieved from the base table using the order of the primary key. The motivation for Disk-Sweep MRR is to reduce the number of random disk accesses and instead achieve a more sequential scan of the base table data. For more information, see Section 8.13.11, "Multi-Range Read Optimization".
The optimizer implements Index Condition Pushdown (ICP), an
optimization for the case where MySQL retrieves rows from a table using an index. Without
ICP, the storage engine traverses the index to locate rows in the base table and returns
them to the MySQL server, which evaluates the WHERE condition for the rows. With ICP enabled, and if parts of the WHERE
condition can be evaluated by using only fields from the index, the MySQL server pushes this
part of the
WHERE condition down to the storage engine. The
storage engine then evaluates the pushed index condition by using the index entry and only
if this is satisfied is the base row read. ICP can reduce the number of accesses the storage
engine has to do against the base table and the number of accesses the MySQL server has to
do against the storage engine. For more information, see Section
8.13.4, "Index Condition Pushdown Optimization".
The EXPLAIN statement now provides execution plan information for DELETE, INSERT, REPLACE, and UPDATE statements. Previously, EXPLAIN provided information only for SELECT statements. In addition, the
EXPLAIN statement now can produce output in JSON format.
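For instance (the table and column names are illustrative):

```sql
-- Execution plan for a data-change statement
EXPLAIN UPDATE t1 SET c1 = c1 + 1 WHERE id < 100;

-- The same kind of information in JSON format
EXPLAIN FORMAT=JSON SELECT * FROM t1 WHERE id < 100;
```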
The optimizer more efficiently handles subqueries in the
FROM clause (that is, derived tables). Materialization of
subqueries in the
FROM clause is postponed until their contents
are needed during query execution, which improves performance. In addition, during query
execution, the optimizer may add an index to a derived table to speed up row retrieval from
it. For more information, see "Optimizing Subqueries in the FROM Clause".
The optimizer uses semi-join and materialization strategies to optimize subquery execution. See "Optimizing Subqueries with Semi-Join Transformations" and "Optimizing Subqueries with Subquery Materialization".
A Batched Key Access (BKA) join algorithm is now available that uses both index access to the joined table and a join buffer. The BKA algorithm supports inner join, outer join, and semi-join operations, including nested outer joins and nested semi-joins. Benefits of BKA include improved join performance due to more efficient table scanning. For more information, see Section 8.13.12, "Block Nested-Loop and Batched Key Access Joins".
The optimizer now has a tracing capability, primarily for use by
developers. The interface is provided by a set of
optimizer_trace_xxx system variables and the INFORMATION_SCHEMA OPTIMIZER_TRACE table. For details, see the MySQL Internals documentation on optimizer tracing.
Condition handling. MySQL now supports the
GET DIAGNOSTICS statement.
GET DIAGNOSTICS provides applications a standardized way to obtain
information from the diagnostics area, such as whether the previous SQL statement produced an exception
and what it was. For more information, see "GET DIAGNOSTICS Syntax".
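A short sketch of retrieving condition information after a failing statement:

```sql
DROP TABLE no_such_table;   -- raises an error

GET DIAGNOSTICS CONDITION 1
  @sqlstate = RETURNED_SQLSTATE,
  @errno    = MYSQL_ERRNO,
  @message  = MESSAGE_TEXT;

SELECT @sqlstate, @errno, @message;
```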
In addition, several deficiencies in condition handler processing rules were corrected so that MySQL behavior is more like standard SQL:
Block scope is used in determining which handler to select. Previously, a stored program was treated as having a single scope for handler selection.
Condition precedence is more accurately resolved.
Diagnostics area clearing has changed. Bug #55843 caused handled
conditions to be cleared from the diagnostics area before activating the handler. This made
condition information unavailable within the handler. Now condition information is available
to the handler, which can inspect it with the
GET DIAGNOSTICS statement. The condition information is
cleared when the handler exits, if it has not already been cleared during handler execution.
Previously, handlers were activated as soon as a condition occurred. Now they are not activated until the statement in which the condition occurred finishes execution, at which point the most appropriate handler is chosen. This can make a difference for statements that raise multiple conditions, if a condition raised later during statement execution has higher precedence than an earlier condition and there are handlers in the same scope for both conditions. Previously, the handler for the first condition raised would be chosen, even if it had a lower precedence than other handlers. Now the handler for the condition with highest precedence is chosen, even if it is not the first condition raised by the statement.
For more information, see "Scope Rules for Handlers".
Data types. These data type changes have been implemented:
MySQL now permits fractional seconds for TIME, DATETIME, and TIMESTAMP values, with up to microseconds (6 digits) precision.
See Section 11.3.6, "Fractional Seconds
in Time Values".
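For example, fractional precision is specified in the column definition and in temporal function calls:

```sql
CREATE TABLE events (
  id          INT PRIMARY KEY,
  happened_at DATETIME(6)      -- microsecond precision
);

INSERT INTO events VALUES (1, NOW(6));   -- NOW(6) includes microseconds
```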
Previously, at most one
TIMESTAMP column per table could be automatically initialized
or updated to the current date and time. This restriction has been lifted. Any TIMESTAMP column definition can have any combination of DEFAULT CURRENT_TIMESTAMP and ON UPDATE CURRENT_TIMESTAMP clauses. In addition, these clauses now can be used with DATETIME column definitions. For more information, see Section 11.3.5, "Automatic Initialization and Updating for TIMESTAMP and DATETIME".
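A sketch combining the lifted restriction and the new DATETIME support (the table is illustrative):

```sql
CREATE TABLE audit_row (
  id         INT PRIMARY KEY,
  -- Multiple auto-initialized/auto-updated columns are now permitted
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  -- ...and the clauses now work with DATETIME as well
  touched_at DATETIME  DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);
```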
In MySQL, the
TIMESTAMP data type differs in nonstandard ways from other
data types in terms of default value and assignment of automatic initialization and update
attributes. These behaviors remain the default but now are deprecated, and can be turned off
by enabling the
explicit_defaults_for_timestamp system variable at server
startup. See Section 11.3.5, "Automatic Initialization and Updating for TIMESTAMP and DATETIME", and Section 5.1.4, "Server System Variables".
The YEAR(2) data type is now deprecated. YEAR(2) columns in existing tables are treated as before, but YEAR(2) columns in new or altered tables are converted to YEAR(4). For more information, see "YEAR(2) Limitations and Migrating to YEAR(4)".
Host cache. MySQL now provides more information about the causes of errors that occur when clients connect to the server, as well as improved access to the host cache, which contains client IP address and host name information and is used to avoid DNS lookups. These changes have been implemented:
New Connection_errors_xxx status variables provide
information about connection errors that do not apply to specific client IP addresses.
Counters have been added to the host cache to track errors that do
apply to specific IP addresses, and a new
host_cache Performance Schema table exposes the contents
of the host cache so that it can be examined using
SELECT statements. Access to host cache contents makes it
possible to answer questions such as how many hosts are cached, what kinds of connection
errors are occurring for which hosts, or how close host error counts are to reaching the
max_connect_errors system variable limit.
The host cache size now is configurable using the
host_cache_size system variable.
For more information, see "DNS Lookup Optimization and the Host Cache" and the description of the host_cache Performance Schema table.
OpenGIS. The OpenGIS specification defines functions that test the
relationship between two geometry values. MySQL originally implemented these functions such that they
used object bounding rectangles and returned the same result as the corresponding MBR-based functions.
Corresponding versions are now available that use precise object shapes. These versions are named with an ST_ prefix. For example,
Contains() uses object bounding rectangles, whereas
ST_Contains() uses object shapes. For more information, see "Functions That Test Spatial Relationships Between Geometries".
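An illustrative comparison of the MBR-based and precise functions (the geometries are made up):

```sql
SET @poly  = GeomFromText('POLYGON((0 0, 10 0, 10 10, 0 10, 0 0))');
SET @point = GeomFromText('POINT(5 5)');

-- Bounding-rectangle test vs. precise object-shape test
SELECT Contains(@poly, @point)    AS mbr_result,
       ST_Contains(@poly, @point) AS precise_result;
```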
The following constructs are obsolete and have been removed in MySQL 5.6. Where alternatives are shown, applications should be updated to use them.
The --log server option and the log system variable. Instead, use the --general_log option to enable the general query log and the --general_log_file=file_name option to set the general query log file name.
The --log-slow-queries server option and the log_slow_queries system variable. Instead, use the --slow_query_log option to enable the slow query log and the --slow_query_log_file=file_name option to set the slow query log file name.
The --one-thread server option. Use --thread_handling=no-threads instead.
The --safe-mode server option.
The --skip-thread-priority server option.
The --table-cache server option. Use the table_open_cache system variable instead.
The rpl_recovery_rank system variable and the Rpl_status status variable.
The engine_condition_pushdown system variable. Use the
engine_condition_pushdown flag of the
optimizer_switch variable instead.
The sql_big_tables system variable. Use big_tables instead.
The sql_low_priority_updates system variable. Use low_priority_updates instead.
The sql_max_join_size system variable. Use max_join_size instead.
The max_long_data_size system variable. Use max_allowed_packet instead.
The SHOW AUTHORS and SHOW CONTRIBUTORS statements.
The OPTION and ONE_SHOT modifiers for the SET statement.
It is explicitly disallowed to assign the value DEFAULT to stored procedure or function parameters or stored program local variables (for example, with a SET var_name = DEFAULT statement). It remains permissible to assign DEFAULT to system variables, as before.