In this section we provide a detailed example illustrating how to add new MySQL Cluster data nodes online, starting with a MySQL Cluster having 2 data nodes in a single node group and concluding with a cluster having 4 data nodes in 2 node groups.
Starting configuration. For purposes of illustration, we assume a minimal configuration, and that the
cluster uses a config.ini
file containing only the following information:
[ndbd default]
DataMemory = 100M
IndexMemory = 100M
NoOfReplicas = 2
DataDir = /usr/local/mysql/var/mysql-cluster

[ndbd]
Id = 1
HostName = 192.168.0.1

[ndbd]
Id = 2
HostName = 192.168.0.2

[mgm]
HostName = 192.168.0.10
Id = 10

[api]
Id=20
HostName = 192.168.0.20

[api]
Id=21
HostName = 192.168.0.21
We have left a gap in the sequence between the data node IDs and the IDs of the other nodes. This makes it easier later to assign node IDs that are not already in use to the newly added data nodes.
We also assume that you have already started the cluster using the appropriate command line or my.cnf
options, and that running SHOW
in the management
client produces output similar to what is shown here:
-- NDB Cluster -- Management Client --
ndb_mgm> SHOW
Connected to Management Server at: 192.168.0.10:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=1    @192.168.0.1  (5.6.11-ndb-7.3.3, Nodegroup: 0, Master)
id=2    @192.168.0.2  (5.6.11-ndb-7.3.3, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=10   @192.168.0.10  (5.6.11-ndb-7.3.3)

[mysqld(API)]   2 node(s)
id=20   @192.168.0.20  (5.6.11-ndb-7.3.3)
id=21   @192.168.0.21  (5.6.11-ndb-7.3.3)
Finally, we assume that the cluster contains a single NDBCLUSTER
table created as shown here:
USE n;

CREATE TABLE ips (
    id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    country_code CHAR(2) NOT NULL,
    type CHAR(4) NOT NULL,
    ip_address VARCHAR(15) NOT NULL,
    addresses BIGINT UNSIGNED DEFAULT NULL,
    date BIGINT UNSIGNED DEFAULT NULL
) ENGINE NDBCLUSTER;
The memory usage and related information shown later in this section was generated after inserting approximately 50000 rows into this table.
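The rows themselves are not reproduced in this section; purely as an illustration (the values shown are hypothetical and not part of the actual data set), a row of the kind used to populate the table might be inserted like this:

-- Hypothetical sample values; the actual 50000-row data set is not shown in this section
INSERT INTO ips (country_code, type, ip_address, addresses, date)
VALUES ('US', 'ipv4', '198.51.100.0', 16777216, 20081208);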
In this example, we show the single-threaded ndbd being used for the data node processes. However, beginning with MySQL Cluster NDB 7.0.4, you can also apply this example if you are using the multi-threaded ndbmtd, simply by substituting ndbmtd for ndbd wherever it appears in the steps that follow. (Bug #43108)
Step 1: Update configuration file. Open the cluster global configuration file in a text editor and add
[ndbd]
sections corresponding to the 2 new data nodes. (We give these data nodes
IDs 3 and 4, and assume that they are to be run on host machines at addresses 192.168.0.3 and 192.168.0.4,
respectively.) After you have added the new sections, the contents of the config.ini
file should look like what is shown here, where the additions to the
file are shown in bold type:
[ndbd default]
DataMemory = 100M
IndexMemory = 100M
NoOfReplicas = 2
DataDir = /usr/local/mysql/var/mysql-cluster

[ndbd]
Id = 1
HostName = 192.168.0.1

[ndbd]
Id = 2
HostName = 192.168.0.2

[ndbd]
Id = 3
HostName = 192.168.0.3

[ndbd]
Id = 4
HostName = 192.168.0.4

[mgm]
HostName = 192.168.0.10
Id = 10

[api]
Id=20
HostName = 192.168.0.20

[api]
Id=21
HostName = 192.168.0.21
Once you have made the necessary changes, save the file.
Step 2: Restart the management server. Restarting the cluster management server requires that you issue separate commands to stop the management server and then to start it again, as follows:
Stop the management server using the management client STOP
command, as shown here:
ndb_mgm> 10 STOP
Node 10 has shut down.
Disconnecting to allow Management Server to shutdown

shell>
Because shutting down the management server causes the management client to
terminate, you must start the management server from the system shell. For simplicity, we assume that
config.ini
is in the same directory as the management server binary, but
in practice, you must supply the correct path to the configuration file. You must also supply the --reload
or --initial
option so that the management server reads the new
configuration from the file rather than its configuration cache. If your shell's current directory is
also the same as the directory where the management server binary is located, then you can invoke the
management server as shown here:
shell> ndb_mgmd -f config.ini --reload
2008-12-08 17:29:23 [MgmSrvr] INFO  -- NDB Cluster Management Server. 5.6.11-ndb-7.3.3
2008-12-08 17:29:23 [MgmSrvr] INFO  -- Reading cluster configuration from 'config.ini'
If you check the output of SHOW
in the management client after restarting the ndb_mgm
process, you should now see something like this:
-- NDB Cluster -- Management Client --
ndb_mgm> SHOW
Connected to Management Server at: 192.168.0.10:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     4 node(s)
id=1    @192.168.0.1  (5.6.11-ndb-7.3.3, Nodegroup: 0, Master)
id=2    @192.168.0.2  (5.6.11-ndb-7.3.3, Nodegroup: 0)
id=3 (not connected, accepting connect from 192.168.0.3)
id=4 (not connected, accepting connect from 192.168.0.4)

[ndb_mgmd(MGM)] 1 node(s)
id=10   @192.168.0.10  (5.6.11-ndb-7.3.3)

[mysqld(API)]   2 node(s)
id=20   @192.168.0.20  (5.6.11-ndb-7.3.3)
id=21   @192.168.0.21  (5.6.11-ndb-7.3.3)
Step 3: Perform a rolling restart of the existing data nodes. This step can be accomplished entirely
within the cluster management client using the RESTART
command, as shown here:
ndb_mgm> 1 RESTART
Node 1: Node shutdown initiated
Node 1: Node shutdown completed, restarting, no start.
Node 1 is being restarted

ndb_mgm> Node 1: Start initiated (version 7.3.3)
Node 1: Started (version 7.3.3)

ndb_mgm> 2 RESTART
Node 2: Node shutdown initiated
Node 2: Node shutdown completed, restarting, no start.
Node 2 is being restarted

ndb_mgm> Node 2: Start initiated (version 7.3.3)
ndb_mgm> Node 2: Started (version 7.3.3)
After issuing each X RESTART command, wait until the management client reports Node X: Started (version ...) before proceeding any further.
You can verify that all existing data nodes were restarted using the updated configuration by checking the ndbinfo.nodes
table in the mysql client.
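For example, a query such as the following (a sketch, using the node_id, status, start_phase, and config_generation columns of the ndbinfo.nodes table) lets you confirm that each data node is running and using the new configuration generation:

-- Each data node should report the same (new) config_generation after the rolling restart
SELECT node_id, status, start_phase, config_generation
FROM ndbinfo.nodes;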
Step 4: Perform a rolling restart of all cluster API nodes. Shut down and restart each MySQL server
acting as an SQL node in the cluster using mysqladmin shutdown followed by mysqld_safe (or another startup script). This should be
similar to what is shown here, where password
is the MySQL root
password for a given MySQL server instance:
shell> mysqladmin -uroot -ppassword shutdown
081208 20:19:56 mysqld_safe mysqld from pid file
/usr/local/mysql/var/tonfisk.pid ended
shell> mysqld_safe --ndbcluster --ndb-connectstring=192.168.0.10 &
081208 20:20:06 mysqld_safe Logging to '/usr/local/mysql/var/tonfisk.err'.
081208 20:20:06 mysqld_safe Starting mysqld daemon with databases
from /usr/local/mysql/var
Of course, the exact input and output depend on how and where MySQL is installed on the system, as well as which
options you choose to start it (and whether or not some or all of these options are specified in a my.cnf
file).
Step 5: Perform an initial start of the new data nodes. From a system shell on each of the hosts for the
new data nodes, start the data nodes as shown here, using the --initial
option:
shell> ndbd -c 192.168.0.10 --initial
Unlike the case with restarting the existing data nodes, you can start the new data nodes concurrently; you do not need to wait for one to finish starting before starting the other.
Wait until both of the new data nodes have started before proceeding with the next
step. Once the new data nodes have started, you can see in the output of the management client SHOW
command that they do not yet belong to any node group (as indicated with bold
type here):
ndb_mgm> SHOW
Connected to Management Server at: 192.168.0.10:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     4 node(s)
id=1    @192.168.0.1  (5.6.11-ndb-7.3.3, Nodegroup: 0, Master)
id=2    @192.168.0.2  (5.6.11-ndb-7.3.3, Nodegroup: 0)
id=3    @192.168.0.3  (5.6.11-ndb-7.3.3, no nodegroup)
id=4    @192.168.0.4  (5.6.11-ndb-7.3.3, no nodegroup)

[ndb_mgmd(MGM)] 1 node(s)
id=10   @192.168.0.10  (5.6.11-ndb-7.3.3)

[mysqld(API)]   2 node(s)
id=20   @192.168.0.20  (5.6.11-ndb-7.3.3)
id=21   @192.168.0.21  (5.6.11-ndb-7.3.3)
Step 6: Create a new node group. You can do this by issuing a CREATE
NODEGROUP
command in the cluster management client. This command takes as its argument a comma-separated
list of the node IDs of the data nodes to be included in the new node group, as shown here:
ndb_mgm> CREATE NODEGROUP 3,4
Nodegroup 1 created
By issuing SHOW
again, you can verify that data nodes 3 and 4 have joined the new
node group (again indicated in bold type):
ndb_mgm> SHOW
Connected to Management Server at: 192.168.0.10:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     4 node(s)
id=1    @192.168.0.1  (5.6.11-ndb-7.3.3, Nodegroup: 0, Master)
id=2    @192.168.0.2  (5.6.11-ndb-7.3.3, Nodegroup: 0)
id=3    @192.168.0.3  (5.6.11-ndb-7.3.3, Nodegroup: 1)
id=4    @192.168.0.4  (5.6.11-ndb-7.3.3, Nodegroup: 1)

[ndb_mgmd(MGM)] 1 node(s)
id=10   @192.168.0.10  (5.6.11-ndb-7.3.3)

[mysqld(API)]   2 node(s)
id=20   @192.168.0.20  (5.6.11-ndb-7.3.3)
id=21   @192.168.0.21  (5.6.11-ndb-7.3.3)
Step 7: Redistribute cluster data. When a node group is created, existing data and indexes are not
automatically distributed to the new node group's data nodes, as you can see by issuing the appropriate REPORT
command in the management client:
ndb_mgm> ALL REPORT MEMORY
Node 1: Data usage is 5%(177 32K pages of total 3200)
Node 1: Index usage is 0%(108 8K pages of total 12832)
Node 2: Data usage is 5%(177 32K pages of total 3200)
Node 2: Index usage is 0%(108 8K pages of total 12832)
Node 3: Data usage is 0%(0 32K pages of total 3200)
Node 3: Index usage is 0%(0 8K pages of total 12832)
Node 4: Data usage is 0%(0 32K pages of total 3200)
Node 4: Index usage is 0%(0 8K pages of total 12832)
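Similar memory usage figures can also be obtained from any SQL node by querying the ndbinfo.memoryusage table, along the lines of the following sketch (the table also provides used_pages and total_pages columns if you prefer working with page counts):

-- Reports data memory and index memory usage for each data node
SELECT node_id, memory_type, used, total
FROM ndbinfo.memoryusage;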
By using ndb_desc with the -p
option, which
causes the output to include partitioning information, you can see that the table still uses only 2 partitions
(in the Per partition info
section of the output, shown here in bold text):
shell> ndb_desc -c 192.168.0.10 -d n ips -p
-- ips --
Version: 1
Fragment type: 9
K Value: 6
Min load factor: 78
Max load factor: 80
Temporary table: no
Number of attributes: 6
Number of primary keys: 1
Length of frm data: 340
Row Checksum: 1
Row GCI: 1
SingleUserMode: 0
ForceVarPart: 1
FragmentCount: 2
TableStatus: Retrieved
-- Attributes --
id Bigint PRIMARY KEY DISTRIBUTION KEY AT=FIXED ST=MEMORY AUTO_INCR
country_code Char(2;latin1_swedish_ci) NOT NULL AT=FIXED ST=MEMORY
type Char(4;latin1_swedish_ci) NOT NULL AT=FIXED ST=MEMORY
ip_address Varchar(15;latin1_swedish_ci) NOT NULL AT=SHORT_VAR ST=MEMORY
addresses Bigunsigned NULL AT=FIXED ST=MEMORY
date Bigunsigned NULL AT=FIXED ST=MEMORY
-- Indexes --
PRIMARY KEY(id) - UniqueHashIndex
PRIMARY(id) - OrderedIndex
-- Per partition info --
Partition   Row count   Commit count   Frag fixed memory   Frag varsized memory
0           26086       26086          1572864             557056
1           26329       26329          1605632             557056

NDBT_ProgramExit: 0 - OK
You can cause the data to be redistributed among all of the data nodes by performing, for each NDBCLUSTER
table, an ALTER
ONLINE TABLE ... REORGANIZE PARTITION
statement in the mysql client. After issuing the statement ALTER
ONLINE TABLE ips REORGANIZE PARTITION
, you can see using ndb_desc that the data for this table is now stored using 4
partitions, as shown here (with the relevant portions of the output in bold type):
shell> ndb_desc -c 192.168.0.10 -d n ips -p
-- ips --
Version: 16777217
Fragment type: 9
K Value: 6
Min load factor: 78
Max load factor: 80
Temporary table: no
Number of attributes: 6
Number of primary keys: 1
Length of frm data: 341
Row Checksum: 1
Row GCI: 1
SingleUserMode: 0
ForceVarPart: 1
FragmentCount: 4
TableStatus: Retrieved
-- Attributes --
id Bigint PRIMARY KEY DISTRIBUTION KEY AT=FIXED ST=MEMORY AUTO_INCR
country_code Char(2;latin1_swedish_ci) NOT NULL AT=FIXED ST=MEMORY
type Char(4;latin1_swedish_ci) NOT NULL AT=FIXED ST=MEMORY
ip_address Varchar(15;latin1_swedish_ci) NOT NULL AT=SHORT_VAR ST=MEMORY
addresses Bigunsigned NULL AT=FIXED ST=MEMORY
date Bigunsigned NULL AT=FIXED ST=MEMORY
-- Indexes --
PRIMARY KEY(id) - UniqueHashIndex
PRIMARY(id) - OrderedIndex
-- Per partition info --
Partition   Row count   Commit count   Frag fixed memory   Frag varsized memory
0           12981       52296          1572864             557056
1           13236       52515          1605632             557056
2           13105       13105          819200              294912
3           13093       13093          819200              294912

NDBT_ProgramExit: 0 - OK
Normally, ALTER [ONLINE] TABLE table_name REORGANIZE PARTITION is used with a list of partition identifiers and a set of partition definitions to create a new partitioning scheme for a table that has already been explicitly partitioned. Its use here to redistribute data onto a new MySQL Cluster node group is an exception in this regard; when used in this way, only the name of the table follows the TABLE keyword, and no other keywords or identifiers follow REORGANIZE PARTITION.
For more information, see Section 13.1.7, "ALTER TABLE
Syntax".
In addition, for each table, the ALTER ONLINE
TABLE
statement should be followed by an OPTIMIZE TABLE
to reclaim wasted space. You can obtain a list of all NDBCLUSTER
tables using the following query against the INFORMATION_SCHEMA.TABLES
table:
SELECT TABLE_SCHEMA, TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE ENGINE = 'NDBCLUSTER';
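For the cluster used in this example, which contains only the ips table in database n, the required statements amount to the following minimal sketch; repeat the pair of statements for every table returned by the query above, waiting for each statement to complete before issuing the next:

-- Example for the ips table in database n; repeat for each NDBCLUSTER table
ALTER ONLINE TABLE n.ips REORGANIZE PARTITION;
OPTIMIZE TABLE n.ips;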
The INFORMATION_SCHEMA.TABLES.ENGINE
value for a MySQL Cluster table
is always NDBCLUSTER
, regardless of whether the CREATE
TABLE
statement used to create the table (or ALTER TABLE
statement used to convert an existing table from a different
storage engine) used NDB
or NDBCLUSTER
in its ENGINE
option.
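For example (a sketch using a hypothetical table t1 that is not part of this example's data set):

-- t1 is a hypothetical table used only for this illustration
CREATE TABLE n.t1 (c1 INT NOT NULL PRIMARY KEY) ENGINE NDB;

SELECT TABLE_NAME, ENGINE
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'n' AND TABLE_NAME = 't1';

The second statement reports NDBCLUSTER in the ENGINE column, even though the table was created using ENGINE NDB.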
You can see after performing these statements in the output of ALL REPORT MEMORY
that the data and indexes are now redistributed between all cluster data nodes, as shown here:
ndb_mgm> ALL REPORT MEMORY
Node 1: Data usage is 5%(176 32K pages of total 3200)
Node 1: Index usage is 0%(76 8K pages of total 12832)
Node 2: Data usage is 5%(176 32K pages of total 3200)
Node 2: Index usage is 0%(76 8K pages of total 12832)
Node 3: Data usage is 2%(80 32K pages of total 3200)
Node 3: Index usage is 0%(51 8K pages of total 12832)
Node 4: Data usage is 2%(80 32K pages of total 3200)
Node 4: Index usage is 0%(50 8K pages of total 12832)
Since only one DDL operation on NDBCLUSTER
tables can be executed at a time, you must wait for each ALTER ONLINE TABLE ... REORGANIZE PARTITION
statement to finish before
issuing the next one.
It is not necessary to issue ALTER ONLINE TABLE ...
REORGANIZE PARTITION
statements for NDBCLUSTER
tables created after the new
data nodes have been added; data added to such tables is distributed among all data nodes automatically.
However, in NDBCLUSTER
tables that existed prior
to the addition of the new nodes, neither existing nor new data is distributed using the new
nodes until these tables have been reorganized using ALTER ONLINE TABLE ... REORGANIZE PARTITION
.
Alternative procedure, without rolling restart. It is possible to avoid the need for a rolling restart by configuring the extra data nodes, but not starting them, when first starting the cluster. We assume, as before, that you wish to start with two data nodes—nodes 1 and 2—in one node group and later to expand the cluster to four data nodes, by adding a second node group consisting of nodes 3 and 4:
[ndbd default]
DataMemory = 100M
IndexMemory = 100M
NoOfReplicas = 2
DataDir = /usr/local/mysql/var/mysql-cluster

[ndbd]
Id = 1
HostName = 192.168.0.1

[ndbd]
Id = 2
HostName = 192.168.0.2

[ndbd]
Id = 3
HostName = 192.168.0.3
NodeGroup = 65536

[ndbd]
Id = 4
HostName = 192.168.0.4
NodeGroup = 65536

[mgm]
HostName = 192.168.0.10
Id = 10

[api]
Id=20
HostName = 192.168.0.20

[api]
Id=21
HostName = 192.168.0.21
The data nodes to be brought online at a later time (nodes 3 and 4) can be configured with NodeGroup = 65536
, in which case nodes 1 and 2 can each be started as shown here:
shell> ndbd -c 192.168.0.10 --initial
The data nodes configured with NodeGroup = 65536
are treated by the management server as though you had started
nodes 1 and 2 using --nowait-nodes=3,4
after waiting for a period of time determined by the setting for the StartNoNodeGroupTimeout
data node configuration parameter. By default, this
is 15 seconds (15000 milliseconds).
StartNoNodeGroupTimeout
must be the same for all data nodes in the cluster;
for this reason, you should always set it in the [ndbd default]
section of the
config.ini
file, rather than for individual data nodes.
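For example, the parameter could be set explicitly to its default value along the lines of the following sketch (merge the setting into your existing [ndbd default] section rather than adding a second such section):

[ndbd default]
# Wait up to 15 seconds for data nodes that are configured with no node group
StartNoNodeGroupTimeout = 15000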
When you are ready to add the second node group, you need only perform the following additional steps:
Start data nodes 3 and 4, invoking the data node process once for each new node:
shell> ndbd -c 192.168.0.10 --initial
Issue the appropriate CREATE NODEGROUP
command in the
management client:
ndb_mgm> CREATE NODEGROUP 3,4
In the mysql
client, issue ALTER ONLINE TABLE ...
REORGANIZE PARTITION
and OPTIMIZE TABLE
statements for each existing NDBCLUSTER
table. (As noted elsewhere in this section, existing MySQL
Cluster tables cannot use the new nodes for data distribution until this has been done.)