Typically, writing an application for the InnoDB memcached interface involves some degree of rewriting or adapting existing code that uses MySQL or the memcached API:
Instead of many memcached servers running on low-powered machines, you have the same number of memcached servers as MySQL servers, running on relatively high-powered machines with substantial disk storage and memory. You might reuse some existing code that works with the memcached API, but some adaptation is likely needed due to the different server configuration.
The data stored through this interface all goes into VARCHAR, TEXT, or BLOB columns, and must be converted to do numeric operations. You can do the conversion on the application side, or by using the CAST() function in queries.
Coming from a database background, you might be used to general-purpose SQL tables with many columns. The tables accessed by the memcached code likely have only a few or even just a single column holding data values.
You might adapt parts of your application that do single-row queries, inserts, updates, or deletes, to squeeze more performance out of critical sections of code. Both queries (read) and DML (write) operations can be substantially faster when performed through the memcached interface. The speedup for writes is typically greater than the speedup for reads, so you might focus on adapting the code that performs logging or records interactive choices on a web site.
The following sections explore these aspects in more detail.
Consider these aspects of memcached applications when adapting an existing MySQL schema or application to use the memcached interface:
memcached keys cannot contain spaces or newlines, because those characters are used as separators in the ASCII protocol. If you are using lookup values that contain spaces, transform or hash them into values without spaces before using them as keys in calls to add(), set(), get(), and so on. Although theoretically those characters are allowed in keys in programs that use the binary protocol, you should always restrict the characters used in keys to ensure compatibility with a broad range of clients.
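As a concrete illustration, here is one minimal way to make such lookup values key-safe before passing them to add() or set(); the helper name sanitize_key() and the choice of underscore as the replacement character are assumptions for this sketch, not part of any memcached API:

```python
def sanitize_key(lookup_value):
    # Replace the whitespace characters that the memcached ASCII
    # protocol treats as separators, so the result is usable as a key.
    # (Underscore is an arbitrary choice for this illustration.)
    for ch in (" ", "\t", "\r", "\n"):
        lookup_value = lookup_value.replace(ch, "_")
    return lookup_value

print(sanitize_key("New York City"))   # New_York_City
```

A hashing transform (shown later for overlong keys) works equally well here when you do not need the key to stay human-readable.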
If you have a short numeric primary key column in an InnoDB table, you can use that as the unique lookup key for memcached by converting the integer to a string value. If the memcached server is being used for more than one application, or with more than one InnoDB table, consider modifying the name to make sure it is unique. For example, you might prepend the table name, or the database name and the table name, before the numeric value.
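A key-building helper along these lines (the function name and the dot separator are illustrative assumptions) keeps numeric primary keys from different tables distinct:

```python
def make_memcached_key(schema, table, pk):
    # Prepend the database and table names so that primary-key value 42
    # from two different InnoDB tables yields two different memcached keys.
    return "%s.%s.%d" % (schema, table, pk)

print(make_memcached_key("test", "multicol", 42))   # test.multicol.42
```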
You cannot use a partitioned table for data queried or stored through the memcached interface.
The memcached protocol passes numeric values around as strings. To store numeric values in the underlying InnoDB table, for example to implement counters that can be used in SQL functions such as SUM() or AVG():

Use VARCHAR columns with enough characters to hold all the digits of the largest expected number (and additional characters if appropriate for the negative sign, decimal point, or both).

In any query that performs arithmetic using the column values, use the CAST() function to convert from string to integer or other numeric type. For example:
-- Alphabetic entries are returned as zero.
select cast(c2 as unsigned integer) from demo_test;

-- Since there could be numeric values of 0, can't disqualify them.
-- Test the string values to find the ones that are integers, and average only those.
select avg(cast(c2 as unsigned integer)) from demo_test
  where c2 between '0' and '9999999999';

-- Views let you hide the complexity of queries. The results are already converted;
-- no need to repeat conversion functions and WHERE clauses each time.
create view numbers as
  select c1 key, cast(c2 as unsigned integer) val from demo_test
  where c2 between '0' and '9999999999';
select sum(val) from numbers;
Note that any alphabetic values in the result set are turned into 0 by the call to CAST(). When using functions such as AVG() that depend on the number of rows in the result set, include WHERE clauses to filter out any non-numeric values.
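If you prefer to do the conversion on the application side instead of with CAST(), the filtering logic can be sketched like this (the helper name is an assumption for illustration):

```python
def average_numeric(values):
    # Keep only strings that are pure unsigned integers, mirroring the
    # WHERE c2 BETWEEN '0' AND '9999999999' filter, then average them.
    nums = [int(v) for v in values if v.isdigit()]
    return float(sum(nums)) / len(nums) if nums else None

print(average_numeric(["10", "20", "abc", "30"]))   # 20.0
```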
If the InnoDB column you use as a key can be longer than 250 bytes, hash it to a value that is less than 250 bytes.
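One possible way to enforce that limit (a sketch, not the plugin's own mechanism) is to substitute a fixed-length digest only when the value is too long:

```python
import hashlib

MAX_KEY_LEN = 250   # memcached's key-length limit in bytes

def shorten_key(column_value):
    # Short values pass through unchanged; overlong values are replaced
    # by a 64-character SHA-256 hex digest, well under the 250-byte limit.
    raw = column_value.encode("utf-8")
    if len(raw) > MAX_KEY_LEN:
        return hashlib.sha256(raw).hexdigest()
    return column_value

print(len(shorten_key("x" * 500)))   # 64
```

Hashing is one-way, so if you need to recover the original column value you must store it as part of the row rather than derive it from the key.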
To use an existing table with the memcached interface, define an entry for it in the innodb_memcache.containers table. To make that table the default for all requests relayed through memcached, specify the value default in the name column, then restart the MySQL server to make that change take effect. If you are using multiple tables for different classes of memcached data, set up multiple entries in the innodb_memcache.containers table with name values of your choosing, then issue a memcached request of the form get @@name or set @@name within the application to switch the table used for subsequent requests through the memcached API.
For an example of using a table other than the predefined test.demo_test table, see Example 14.5, "Specifying the Table and Column Mapping for an InnoDB + memcached Application". For the required layout and meaning of the columns in such a table, see Section 14.2.9.7, "Internals of the InnoDB memcached Plugin".
To use multiple MySQL column values with memcached key/value pairs, in the innodb_memcache.containers entry associated with the MySQL table, specify in the value_columns field several column names separated by comma, semicolon, space, or pipe characters; for example, col1,col2,col3 or col1|col2|col3.

Concatenate the column values into a single string using the pipe character as a separator, before passing that string to memcached add or set calls. The string is unpacked automatically into the various columns. Each get call returns a single string with the column values, also delimited by the pipe separator character. You unpack those values using the appropriate syntax depending on your application language.
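The packing and unpacking on the application side can be sketched as follows (the helper names are assumptions; the pipe separator is the one the plugin expects when several value_columns are configured):

```python
def pack_values(columns):
    # Join the individual column values into the single string passed
    # to a memcached add or set call; the plugin splits it back into
    # the value_columns listed in innodb_memcache.containers.
    return "|".join(columns)

def unpack_values(result):
    # Split the pipe-delimited string returned by a get call.
    return result.split("|")

packed = pack_values(["34820000", "9984670", "R"])
print(packed)                   # 34820000|9984670|R
print(unpack_values(packed))    # ['34820000', '9984670', 'R']
```

Note that this convention only works cleanly when the column values themselves cannot contain the pipe character.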
Example 14.5. Specifying the Table and Column Mapping for an InnoDB + memcached Application
Here is an example showing how to use your own table for a MySQL application going through the InnoDB memcached plugin for data manipulation.

First, we set up a table to hold some country data: the population, area in metric units, and 'R' or 'L' indicating if people drive on the right or on the left.
use test;
CREATE TABLE `multicol` (
  `country` varchar(128) NOT NULL DEFAULT '',
  `population` varchar(10) DEFAULT NULL,
  `area_sq_km` varchar(9) DEFAULT NULL,
  `drive_side` varchar(1) DEFAULT NULL,
  `c3` int(11) DEFAULT NULL,
  `c4` bigint(20) unsigned DEFAULT NULL,
  `c5` int(11) DEFAULT NULL,
  PRIMARY KEY (`country`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
Now we make a descriptor for this table so that the InnoDB memcached plugin knows how to access it:
The sample entry in the CONTAINERS table has a name column 'aaa'; we set up another identifier 'bbb'. If we made a single master table for all memcached applications to use, we would make the ID 'default' and skip the @@ requests to switch tables.
We specify the test.multicol table. The schema name is stored in one column and the table name is stored in another column.
The key column will be our unique country value. That column was specified as the primary key when we created the table above, so we also specify the index name 'PRIMARY' here.
Rather than a single column to hold a composite data value, we will divide the data among three table columns, so we specify a comma-separated list of those columns that will be used when storing or retrieving values.
And for the flags, expire, and CAS values, we specify corresponding columns based on the settings from the sample table test.demo_test. These values are typically not significant in applications using the InnoDB memcached plugin, because MySQL keeps the data synchronized and there is no need to worry about data expiring or being stale.
insert into innodb_memcache.containers
  (name, db_schema, db_table, key_columns, value_columns,
   flags, cas_column, expire_time_column, unique_idx_name_on_key)
values
  ('bbb', 'test', 'multicol', 'country', 'population,area_sq_km,drive_side',
   'c3', 'c4', 'c5', 'PRIMARY');
commit;
Here is a sample Python program showing how we would access this table from a program:
No database authorization is needed, since all data manipulation is done through the memcached interface. All we need to know is the port number the memcached daemon is listening to on the local system.
We load sample values for a few arbitrary countries. (Area and population figures from Wikipedia.)
To make the program use the multicol table, we call the switch_table() function that does a dummy GET or SET request using @@ notation. The name in the request is bbb, which is the value we stored in innodb_memcache.containers.name. (In a real application, we would use a more descriptive name. This example just illustrates that you specify a table identifier, not the table name, with the GET @@... request.)
The utility functions to insert and query the data demonstrate how we might turn a Python data structure into pipe-separated values for sending to MySQL with ADD or SET requests, and unpack the pipe-separated values returned by GET requests. This extra processing is only required when mapping the single memcached value to multiple MySQL table columns.
import sys, os
import memcache

def connect_to_memcached():
    memc = memcache.Client(['127.0.0.1:11211'], debug=0)
    print "Connected to memcached."
    return memc

def banner(message):
    print
    print "=" * len(message)
    print message
    print "=" * len(message)

country_data = [
    ("Canada", "34820000", "9984670", "R"),
    ("USA", "314242000", "9826675", "R"),
    ("Ireland", "6399152", "84421", "L"),
    ("UK", "62262000", "243610", "L"),
    ("Mexico", "113910608", "1972550", "R"),
    ("Denmark", "5543453", "43094", "R"),
    ("Norway", "5002942", "385252", "R"),
    ("UAE", "8264070", "83600", "R"),
    ("India", "1210193422", "3287263", "L"),
    ("China", "1347350000", "9640821", "R"),
]

def switch_table(memc, table):
    key = "@@" + table
    print "Switching default table to '" + table + "' by issuing GET for '" + key + "'."
    result = memc.get(key)

def insert_country_data(memc):
    banner("Inserting initial data via memcached interface")
    for item in country_data:
        country = item[0]
        population = item[1]
        area = item[2]
        drive_side = item[3]
        key = country
        value = "|".join([population, area, drive_side])
        print "Key = " + key
        print "Value = " + value
        if memc.add(key, value):
            print "Added new key, value pair."
        else:
            print "Updating value for existing key."
            memc.set(key, value)

def query_country_data(memc):
    banner("Retrieving data for all keys (country names)")
    for item in country_data:
        key = item[0]
        result = memc.get(key)
        print "Here is the result retrieved from the database for key " + key + ":"
        print result
        (m_population, m_area, m_drive_side) = result.split("|")
        print "Unpacked population value: " + m_population
        print "Unpacked area value      : " + m_area
        print "Unpacked drive side value: " + m_drive_side

if __name__ == '__main__':
    memc = connect_to_memcached()
    switch_table(memc, "bbb")
    insert_country_data(memc)
    query_country_data(memc)
    sys.exit(0)
Here are some SQL queries to illustrate the state of the MySQL data after the script is run, and show how you could access the same data directly through SQL, or from an application written in any language using the appropriate MySQL Connector or API.
The table descriptor 'bbb' is in place, allowing us to switch to the multicol table by issuing a memcached request GET @@bbb:
mysql: use innodb_memcache;
Database changed
mysql: select * from containers;
+------+-----------+-----------+-------------+----------------------------------+-------+------------+--------------------+------------------------+
| name | db_schema | db_table  | key_columns | value_columns                    | flags | cas_column | expire_time_column | unique_idx_name_on_key |
+------+-----------+-----------+-------------+----------------------------------+-------+------------+--------------------+------------------------+
| aaa  | test      | demo_test | c1          | c2                               | c3    | c4         | c5                 | PRIMARY                |
| bbb  | test      | multicol  | country     | population,area_sq_km,drive_side | c3    | c4         | c5                 | PRIMARY                |
+------+-----------+-----------+-------------+----------------------------------+-------+------------+--------------------+------------------------+
2 rows in set (0.01 sec)
After running the script, the data is in the multicol table, available for traditional MySQL queries or DML statements:
mysql: use test;
Database changed
mysql: select * from multicol;
+---------+------------+------------+------------+------+------+------+
| country | population | area_sq_km | drive_side | c3   | c4   | c5   |
+---------+------------+------------+------------+------+------+------+
| Canada  | 34820000   | 9984670    | R          |    0 |   11 |    0 |
| China   | 1347350000 | 9640821    | R          |    0 |   20 |    0 |
| Denmark | 5543453    | 43094      | R          |    0 |   16 |    0 |
| India   | 1210193422 | 3287263    | L          |    0 |   19 |    0 |
| Ireland | 6399152    | 84421      | L          |    0 |   13 |    0 |
| Mexico  | 113910608  | 1972550    | R          |    0 |   15 |    0 |
| Norway  | 5002942    | 385252     | R          |    0 |   17 |    0 |
| UAE     | 8264070    | 83600      | R          |    0 |   18 |    0 |
| UK      | 62262000   | 243610     | L          |    0 |   14 |    0 |
| USA     | 314242000  | 9826675    | R          |    0 |   12 |    0 |
+---------+------------+------------+------------+------+------+------+
10 rows in set (0.00 sec)

mysql: desc multicol;
+------------+---------------------+------+-----+---------+-------+
| Field      | Type                | Null | Key | Default | Extra |
+------------+---------------------+------+-----+---------+-------+
| country    | varchar(128)        | NO   | PRI |         |       |
| population | varchar(10)         | YES  |     | NULL    |       |
| area_sq_km | varchar(9)          | YES  |     | NULL    |       |
| drive_side | varchar(1)          | YES  |     | NULL    |       |
| c3         | int(11)             | YES  |     | NULL    |       |
| c4         | bigint(20) unsigned | YES  |     | NULL    |       |
| c5         | int(11)             | YES  |     | NULL    |       |
+------------+---------------------+------+-----+---------+-------+
7 rows in set (0.01 sec)
Allow sufficient size to hold all necessary digits, decimal points, sign characters, leading zeros, and so on when defining the length for columns that will be treated as numbers. Too-long values in a string column such as a VARCHAR are truncated by removing some characters, which might produce a nonsensical numeric value.
We can produce reports through SQL queries, doing calculations and tests across any columns, not just the country key column. (Because these examples use data from only a few countries, the numbers are for illustration purposes only.) Here, we find the average population of countries where people drive on the right, and the average size of countries whose names start with "U":
mysql: select avg(population) from multicol where drive_side = 'R';
+-------------------+
| avg(population)   |
+-------------------+
| 261304724.7142857 |
+-------------------+
1 row in set (0.00 sec)

mysql: select sum(area_sq_km) from multicol where country like 'U%';
+-----------------+
| sum(area_sq_km) |
+-----------------+
| 10153885        |
+-----------------+
1 row in set (0.00 sec)
Because the population and area_sq_km columns store character data rather than strongly typed numeric data, functions such as avg() and sum() work by converting each value to a number first. This approach does not work for operators such as < or >: for example, when comparing character-based values, 9 > 1000, which is not what you expect from a clause such as ORDER BY population DESC. For the most accurate type treatment, perform queries against views that cast numeric columns to the appropriate types. This technique lets you issue very simple SELECT * queries from your database applications, while ensuring that all casting, filtering, and ordering is correct. Here, we make a view that can be queried to find the top 3 countries in descending order of population, with the results always reflecting the latest data from the multicol table, and with the population and area figures always treated as numbers:
mysql: create view populous_countries as
  select country,
    cast(population as unsigned integer) population,
    cast(area_sq_km as unsigned integer) area_sq_km,
    drive_side
  from multicol
  order by cast(population as unsigned integer) desc
  limit 3;
Query OK, 0 rows affected (0.01 sec)

mysql: select * from populous_countries;
+---------+------------+------------+------------+
| country | population | area_sq_km | drive_side |
+---------+------------+------------+------------+
| China   | 1347350000 | 9640821    | R          |
| India   | 1210193422 | 3287263    | L          |
| USA     | 314242000  | 9826675    | R          |
+---------+------------+------------+------------+
3 rows in set (0.00 sec)

mysql: desc populous_countries;
+------------+---------------------+------+-----+---------+-------+
| Field      | Type                | Null | Key | Default | Extra |
+------------+---------------------+------+-----+---------+-------+
| country    | varchar(128)        | NO   |     |         |       |
| population | bigint(10) unsigned | YES  |     | NULL    |       |
| area_sq_km | int(9) unsigned     | YES  |     | NULL    |       |
| drive_side | varchar(1)          | YES  |     | NULL    |       |
+------------+---------------------+------+-----+---------+-------+
4 rows in set (0.02 sec)
Consider these aspects of MySQL and InnoDB tables when adapting an existing memcached application to use the MySQL integration:
If you have key values longer than a few bytes, you might find it more efficient to use a numeric auto-increment column for the primary key in the InnoDB table, and create a unique secondary index on the column holding the memcached key values. This is because InnoDB performs best for large-scale insertions if the primary key values are added in sorted order (as they are with auto-increment values), and the primary key values are duplicated in each secondary index, which can take up unnecessary space when the primary key is a long string value.
If you store several different classes of information in memcached, you might set up a separate InnoDB table for each kind of data. Define additional table identifiers in the innodb_memcache.containers table, and use the notation @@table_id.key to store or retrieve items from different tables. Physically dividing the items lets you tune the characteristics of each table for best space utilization, performance, and reliability. For example, you might enable compression for a table that holds blog posts, but not for one that holds thumbnail images. You might back up one table more frequently than another because it holds critical data. You might create additional secondary indexes on tables that are frequently used to generate reports through SQL.
Preferably, set up a stable set of table definitions for use with the memcached interface and leave them in place permanently.
Changes to the containers table take effect the next time that table is queried. The entries in that table are processed at startup, and are consulted whenever an unrecognized table ID is requested by the @@ notation. Thus, new entries are visible as soon as you try to use the associated table ID, but changes to existing entries require a server restart before they take effect.
When you use the default caching policy innodb_only, your calls to add(), set(), incr(), and so on can succeed but still trigger debugging messages such as while expecting 'STORED', got unexpected response 'NOT_STORED'. This is because in the innodb_only configuration, new and updated values are sent directly to the InnoDB table without being saved in the memory cache.
Because using InnoDB in combination with memcached involves writing all data to disk, whether immediately or sometime later, understand that raw performance is expected to be somewhat lower than using memcached by itself.
Focus your tuning goals for the InnoDB memcached plugin on achieving higher performance than equivalent SQL operations.
Benchmarks suggest that both queries and DML operations (inserts, updates, and deletes) are faster going through the memcached interface than with traditional SQL. DML operations typically see a larger speedup. Thus, the types of applications you might adapt to use the memcached interface first are those that are write-intensive. You might also use MySQL as a data store for types of write-intensive applications that formerly used some fast, lightweight mechanism where reliability was not a priority.
The types of queries that are most suited to the simple GET request style are those with a single clause, or a set of AND conditions, in the WHERE clause:
SQL:
select col from tbl where key = 'key_value';

memcached:
GET key_value

SQL:
select col from tbl where col1 = val1 and col2 = val2 and col3 = val3;

memcached:
# Since you must always know these 3 values to look up the key,
# combine them into a unique string and use that as the key
# for all ADD, SET, and GET operations.
key_value = val1 + ":" + val2 + ":" + val3
GET key_value

SQL:
select 'key exists!' from tbl
  where exists (select col1 from tbl where key = 'key_value') limit 1;

memcached:
# Test for existence of key by asking for its value and checking if the call succeeds,
# ignoring the value itself. For existence checking, you typically only store a very
# short value such as "1".
GET key_value
For best performance, deploy the InnoDB memcached plugin on machines that are configured like typical database servers: in particular, with the majority of system RAM devoted to the InnoDB buffer pool through the innodb_buffer_pool_size configuration option. For systems with multi-gigabyte buffer pools, consider raising the value of the innodb_buffer_pool_instances configuration option for maximum throughput when most operations involve data already cached in memory.
InnoDB has a number of settings that let you choose the balance between high reliability in case of a crash, and the amount of I/O overhead during high write workloads. For example, consider setting the configuration options innodb_doublewrite=0 and innodb_flush_log_at_trx_commit=2. Measure the performance with different settings for the innodb_flush_method option. If the binary log is not turned on for the server, use the setting innodb_support_xa=0.
For other ways to reduce or tune I/O for table operations, see Section 8.5.7, "Optimizing InnoDB Disk I/O".
The default value of 1 for the configuration options daemon_memcached_r_batch_size and daemon_memcached_w_batch_size is intended for maximum reliability of results and safety of stored or updated data. Depending on the type of application, you might increase one or both of these settings to reduce the overhead of frequent commit operations.
On a busy system, you might increase daemon_memcached_r_batch_size, knowing that changes to the data made through SQL might not become visible to memcached immediately (that is, until N more get operations are processed). When processing data where every write operation must be reliably stored, you would leave daemon_memcached_w_batch_size set to 1. You might increase it when processing large numbers of updates intended only to be used for statistical analysis, where it is not critical if the last N updates are lost in case of a crash.
For example, imagine a system that monitors traffic crossing a busy bridge, recording approximately 100,000 vehicles each day. If the application simply counts different types of vehicles to analyze traffic patterns, it might change daemon_memcached_w_batch_size from 1 to 100, reducing the I/O overhead for commit operations by 99%. In case of an unexpected outage, only a maximum of 100 records could be lost, which might be an acceptable margin of error. If instead the application was doing automated toll collection for each car, it would keep daemon_memcached_w_batch_size set to 1 to ensure that every toll record was immediately saved to disk.
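The arithmetic behind that example can be sketched as a back-of-the-envelope model (one commit per batch of writes; real overhead also depends on flush and log settings):

```python
def commits_per_day(writes_per_day, w_batch_size):
    # Roughly one commit is issued for every w_batch_size write
    # operations when daemon_memcached_w_batch_size has that value.
    return writes_per_day // w_batch_size

daily_writes = 100000   # vehicles crossing the bridge per day
print(commits_per_day(daily_writes, 1))     # 100000
print(commits_per_day(daily_writes, 100))   # 1000, a 99% reduction
```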
Because of the way InnoDB organizes the memcached key values on disk, if you have a large number of keys to create, it can be faster to sort all the data items by the key value in your application and add them in sorted order, rather than creating them in arbitrary order.
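A bulk-load loop might simply sort its items first; this sketch shows only the ordering step, since the memc.add() calls themselves need a running server:

```python
country_data = [
    ("Norway", "5002942|385252|R"),
    ("Canada", "34820000|9984670|R"),
    ("India", "1210193422|3287263|L"),
]

# Sorting by key before the add() loop means the primary-key values
# arrive at InnoDB in ascending order, matching the clustered index.
sorted_items = sorted(country_data, key=lambda kv: kv[0])
print([k for k, _ in sorted_items])   # ['Canada', 'India', 'Norway']
```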
The memslap command, which is part of the regular memcached distribution but not included with the MySQL server, can be useful for benchmarking different configurations. It can also be used to generate sample key/value pairs that you can use in your own benchmarking. See Section 15.6.3.3.6, "libmemcached Command-Line Utilities" for details.
Unlike with the traditional memcached, with the InnoDB + memcached combination you can control how "durable" the data values produced through calls to add, set, incr, and so on are. Because MySQL places a high priority on durability and consistency of data, by default all data written through the memcached interface is always stored to disk, and calls to get always return the most recent value from disk. Although this default setting does not give the highest possible raw performance, it is still very fast compared to the traditional SQL interface for InnoDB tables.
As you gain experience with this feature, you can make the decision to relax the durability settings for non-critical classes of data, at the risk of possibly losing some updated values in case of an outage, or returning data that is slightly out-of-date.
One tradeoff between durability and raw performance is how frequently new and changed data is committed. If the data is critical, you want it to be committed immediately so that it is safe in case of any crash or outage. If the data is less critical, such as counters that would be reset after a crash, or debugging or logging data where you could afford to lose a few seconds worth, you might prefer the higher raw throughput that comes with less frequent commits.
When a memcached operation causes an insert, update, or delete in the underlying InnoDB table, that change might be committed to the underlying table instantly (if daemon_memcached_w_batch_size=1) or some time later (if that configuration option value is greater than 1). In either case, the change cannot be rolled back. If you increase the value of daemon_memcached_w_batch_size to avoid high I/O overhead during busy times, commits could become very infrequent when the workload decreases. As a safety measure, a background thread automatically commits changes made through the memcached API at regular intervals. The interval is controlled by the innodb_api_bk_commit_interval configuration option, and by default is 5 seconds.
When a memcached operation causes an insert or update in the underlying InnoDB table, the changed data is immediately visible to other memcached requests because the new value remains in the memory cache, even if it is not committed yet on the MySQL side.
When a memcached operation such as get or incr causes a query or DML operation in the underlying InnoDB table, you can control whether it sees the very latest data written to the table, only data that has been committed, or other variations of transaction isolation level. You control this feature through the innodb_api_trx_level configuration option. The numeric values specified with this option correspond to the familiar isolation level names such as REPEATABLE READ. See the description of the innodb_api_trx_level option for the full list.
The stricter the isolation level, the more certain you can be that the data you retrieve will not be rolled back or changed suddenly so that a subsequent query sees a different value. But that strictness comes with greater locking overhead that can cause waits. For a NoSQL-style application that does not use long-running transactions, you can typically stay with the default isolation level or switch to a less strict one.
By default, you can perform DDL operations such as ALTER TABLE on the tables being used by the InnoDB memcached plugin. To avoid potential slowdowns when these tables are being used for high-throughput applications, you can disable DDL operations on these tables by turning on the innodb_api_enable_mdl configuration option at startup. This option is less appropriate when you are accessing the same underlying tables through both the memcached interface and SQL, because it blocks CREATE INDEX statements on the tables, which could be important for configuring the system to run reporting queries.
The innodb_memcache.cache_policies table specifies whether to store data written through the memcached interface on disk (innodb_only, the default); to store the data in memory only, as in the traditional memcached (cache_only); or both (caching).
With the caching setting, if memcached cannot find a key in memory, it searches for the value in an InnoDB table. Values returned from get calls under the caching setting could be out-of-date, if they were updated on disk in the InnoDB table but not yet expired from the memory cache.
The caching policy can be set independently for get, set (including incr and decr), delete, and flush operations. For example:
You might allow get and set operations to query or update a table and the memcached memory cache at the same time (through the caching setting), while making delete, flush, or both operate only on the in-memory copy (through the cache_only setting). That way, deleting or flushing an item just expires it from the cache, and the latest value is returned from the InnoDB table the next time the item is requested.
mysql> desc innodb_memcache.cache_policies;
+---------------+-------------------------------------------------------+------+-----+---------+-------+
| Field         | Type                                                  | Null | Key | Default | Extra |
+---------------+-------------------------------------------------------+------+-----+---------+-------+
| policy_name   | varchar(40)                                           | NO   | PRI | NULL    |       |
| get_policy    | enum('innodb_only','cache_only','caching','disabled') | NO   |     | NULL    |       |
| set_policy    | enum('innodb_only','cache_only','caching','disabled') | NO   |     | NULL    |       |
| delete_policy | enum('innodb_only','cache_only','caching','disabled') | NO   |     | NULL    |       |
| flush_policy  | enum('innodb_only','cache_only','caching','disabled') | NO   |     | NULL    |       |
+---------------+-------------------------------------------------------+------+-----+---------+-------+

mysql> select * from innodb_memcache.cache_policies;
+--------------+-------------+-------------+---------------+--------------+
| policy_name  | get_policy  | set_policy  | delete_policy | flush_policy |
+--------------+-------------+-------------+---------------+--------------+
| cache_policy | innodb_only | innodb_only | innodb_only   | innodb_only  |
+--------------+-------------+-------------+---------------+--------------+

mysql> update innodb_memcache.cache_policies set set_policy = 'caching'
    -> where policy_name = 'cache_policy';
The cache_policies values are only read at startup, and are tightly integrated with the operation of the memcached plugin. After changing any of the values in this table, uninstall the plugin and reinstall it:
mysql> uninstall plugin daemon_memcached;
Query OK, 0 rows affected (2.00 sec)
mysql> install plugin daemon_memcached soname "libmemcached.so";
Query OK, 0 rows affected (0.00 sec)
Benchmarks suggest that the InnoDB memcached plugin speeds up DML operations (inserts, updates, and deletes) more than it speeds up queries. You might focus your initial development efforts on write-intensive applications that are I/O-bound, and look for opportunities to use MySQL for new kinds of write-intensive applications.
INSERT INTO t1 (key,val) VALUES (some_key, some_value);
SELECT val FROM t1 WHERE key = some_key;
UPDATE t1 SET val = new_value WHERE key = some_key;
UPDATE t1 SET val = val + x WHERE key = some_key;
DELETE FROM t1 WHERE key = some_key;
Single-row DML statements are the most straightforward kinds of statements to turn into memcached operations: INSERT becomes add, UPDATE becomes set, incr, or decr, and DELETE becomes delete. When issued through the memcached interface, these operations are guaranteed to affect only 1 row because key is unique within the table.
In the preceding SQL examples, t1 refers to the table currently being used by the InnoDB memcached plugin based on the configuration settings in the innodb_memcache.containers table, key refers to the column listed under key_columns, and val refers to the column listed under value_columns.
TRUNCATE TABLE t1;
DELETE FROM t1;
Corresponds to the flush_all operation, when t1 is configured as the table for memcached operations as in the previous step. Removes all the rows in the table.
You can access the InnoDB table (by default, test.demo_test) through the standard SQL interfaces. However, there are some restrictions:
When you query a table through SQL that is also being accessed through the memcached interface, remember that memcached operations can be configured to be committed periodically rather than after every write operation. This behavior is controlled by the daemon_memcached_w_batch_size option. If this option is set to a value greater than 1, use READ UNCOMMITTED queries to find the just-inserted rows:
mysql> set session TRANSACTION ISOLATION LEVEL read uncommitted;
Query OK, 0 rows affected (0.00 sec)

mysql> select * from demo_test;
+------+------+------+------+-----------+------+------+------+------+------+------+
| cx   | cy   | c1   | cz   | c2        | ca   | CB   | c3   | cu   | c4   | C5   |
+------+------+------+------+-----------+------+------+------+------+------+------+
| NULL | NULL | a11  | NULL | 123456789 | NULL | NULL |   10 | NULL |    3 | NULL |
+------+------+------+------+-----------+------+------+------+------+------+------+
1 row in set (0.00 sec)
To modify a table through SQL that is also being accessed through the memcached interface, remember that memcached operations can be configured to start a new transaction periodically rather than for every read operation. This behavior is controlled by the daemon_memcached_r_batch_size option. If this option is set to a value greater than 1, ...
The InnoDB table is locked with an IS (intention shared) or IX (intention exclusive) lock for all operations in a transaction. If you increase daemon_memcached_r_batch_size and daemon_memcached_w_batch_size substantially from their default value of 1, the table is most likely intention locked between each operation, preventing you from running DDL statements on the table.