1 My Servers
I use the following Debian servers that are all in the same network (192.168.0.x in this example):
- sql1.example.com (IP address: 192.168.0.101): MySQL cluster node 1
- sql2.example.com (IP address: 192.168.0.102): MySQL cluster node 2
- loadb1.example.com (IP address: 192.168.0.103): load balancer 1 / MySQL cluster management server
- loadb2.example.com (IP address: 192.168.0.104): load balancer 2
In addition to that we need a virtual IP address: 192.168.0.105. It will be assigned to the MySQL cluster
by the load balancer so that applications have a single IP address to access the cluster.
Although we want to have two MySQL cluster nodes in our MySQL cluster, we still need a third node, the
MySQL cluster management server, for mainly one reason: if one of the two MySQL cluster nodes fails,
and the management server is not running, then the data on the two cluster nodes will become inconsistent
("split brain"). We also need it for configuring the MySQL cluster.
So normally we would need five machines for our setup:
2 MySQL cluster nodes + 1 cluster management server + 2 Load Balancers = 5
As the MySQL cluster management server does not use many resources, and the system would just sit
there doing nothing, we can put our first load balancer on the same machine, which saves us one machine,
so we end up with four machines.
Now we have to download MySQL 5.0.19 (the max version!) and install the cluster management server
(ndb_mgmd) and the cluster management client (ndb_mgm - it can be used to monitor what's going on in the
cluster). The following steps are carried out on loadb1.example.com (192.168.0.103):
loadb1.example.com:
mkdir /usr/src/mysql-mgm
cd /usr/src/mysql-mgm
wget http://dev.mysql.com/get/Downloads/MySQL-5.0/mysql-max-5.0.19-linux-i686-\
glibc23.tar.gz/from/http://www.mirrorservice.org/sites/ftp.mysql.com/
tar xvfz mysql-max-5.0.19-linux-i686-glibc23.tar.gz
cd mysql-max-5.0.19-linux-i686-glibc23
mv bin/ndb_mgm /usr/bin
mv bin/ndb_mgmd /usr/bin
chmod 755 /usr/bin/ndb_mg*
cd /usr/src
rm -rf /usr/src/mysql-mgm
mkdir /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
vi config.ini
[NDBD DEFAULT]
NoOfReplicas=2
[MYSQLD DEFAULT]
[NDB_MGMD DEFAULT]
[TCP DEFAULT]
# Section for the cluster management node
[NDB_MGMD]
# IP address of the management node (this system)
HostName=192.168.0.103
# Section for the storage nodes
[NDBD]
# IP address of the first storage node
HostName=192.168.0.101
DataDir= /var/lib/mysql-cluster
[NDBD]
# IP address of the second storage node
HostName=192.168.0.102
DataDir=/var/lib/mysql-cluster
# one [MYSQLD] per storage node
[MYSQLD]
[MYSQLD]
Then we start the cluster management server:
loadb1.example.com:
ndb_mgmd -f /var/lib/mysql-cluster/config.ini
It makes sense to automatically start the management server at system boot time, so we create a very
simple init script and the appropriate startup links:
loadb1.example.com:
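A minimal sketch of such an init script and the startup links (assuming the paths used above; adapt as needed) could be:
echo 'ndb_mgmd -f /var/lib/mysql-cluster/config.ini' > /etc/init.d/ndb_mgmd
chmod 755 /etc/init.d/ndb_mgmd
update-rc.d ndb_mgmd defaults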
Now we install MySQL 5.0.19 (max) on both MySQL cluster nodes:
sql1.example.com / sql2.example.com:
groupadd mysql
useradd -g mysql mysql
cd /usr/local/
wget http://dev.mysql.com/get/Downloads/MySQL-5.0/mysql-max-5.0.19-linux-i686-\
glibc23.tar.gz/from/http://www.mirrorservice.org/sites/ftp.mysql.com/
tar xvfz mysql-max-5.0.19-linux-i686-glibc23.tar.gz
ln -s mysql-max-5.0.19-linux-i686-glibc23 mysql
cd mysql
scripts/mysql_install_db --user=mysql
chown -R root:mysql .
chown -R mysql data
cp support-files/mysql.server /etc/init.d/
chmod 755 /etc/init.d/mysql.server
update-rc.d mysql.server defaults
cd /usr/local/mysql/bin
mv * /usr/bin
cd ../
rm -fr /usr/local/mysql/bin
ln -s /usr/bin /usr/local/mysql/bin
vi /etc/my.cnf
[mysqld]
ndbcluster
# IP address of the cluster management node
ndb-connectstring=192.168.0.103
[mysql_cluster]
# IP address of the cluster management node
ndb-connectstring=192.168.0.103
Make sure you fill in the correct IP address of the MySQL cluster management server.
Next we create the data directories and start the MySQL server on both cluster nodes:
sql1.example.com / sql2.example.com:
mkdir /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
ndbd --initial
/etc/init.d/mysql.server start
(Please note: we have to run ndbd --initial only when we start MySQL for the first time, or
if /var/lib/mysql-cluster/config.ini on loadb1.example.com changes.)
Now is a good time to set a password for the MySQL root user:
sql1.example.com / sql2.example.com:
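For example (yourrootsqlpassword is just a placeholder - use a password of your own):
mysqladmin -u root password yourrootsqlpassword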
We want to start the cluster nodes at boot time, so we create an ndbd init script and the appropriate
system startup links:
sql1.example.com / sql2.example.com:
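A very simple sketch of such an init script and startup links (assuming that starting the ndbd binary installed above is all that is needed) could be:
echo 'ndbd' > /etc/init.d/ndbd
chmod 755 /etc/init.d/ndbd
update-rc.d ndbd defaults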
Now it is time to test the cluster. On the cluster management server (loadb1.example.com), start the cluster management client:
loadb1.example.com:
ndb_mgm
On the ndb_mgm console, type
show;
to see the status of the cluster nodes.
If you see that your nodes are connected, then everything's ok!
Type
quit;
to leave the ndb_mgm console.
Now we create a test database with a test table and some data on sql1.example.com:
sql1.example.com:
mysql -u root -p
CREATE DATABASE mysqlclustertest;
USE mysqlclustertest;
CREATE TABLE testtable (i INT) ENGINE=NDBCLUSTER;
INSERT INTO testtable VALUES (1);
SELECT * FROM testtable;
quit;
(Have a look at the CREATE statement: we must use ENGINE=NDBCLUSTER for all database tables that we
want to get clustered! If you use another engine, clustering will not work!)
The result of the SELECT statement should be:
mysql> SELECT * FROM testtable;
+------+
| i    |
+------+
|    1 |
+------+
1 row in set (0.03 sec)
Now we create the same database on sql2.example.com (yes, we still have to create it, but afterwards
testtable and its data should be replicated to sql2.example.com because testtable uses
ENGINE=NDBCLUSTER):
sql2.example.com:
mysql -u root -p
CREATE DATABASE mysqlclustertest;
USE mysqlclustertest;
SELECT * FROM testtable;
The SELECT statement should deliver you the same result as before on sql1.example.com:
mysql> SELECT * FROM testtable;
+------+
| i    |
+------+
|    1 |
+------+
1 row in set (0.04 sec)
So the data was replicated from sql1.example.com to sql2.example.com. Now we insert another row
into testtable:
sql2.example.com:
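For example (still on sql2.example.com):
mysql -u root -p
USE mysqlclustertest;
INSERT INTO testtable VALUES (2);
quit;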
Now let's go back to sql1.example.com and check if we see the new row there:
sql1.example.com:
mysql -u root -p
USE mysqlclustertest;
SELECT * FROM testtable;
quit;
Now let's test what happens if sql1.example.com goes down. To simulate a node failure, we stop the storage node process on sql1.example.com:
sql1.example.com:
killall ndbd
and check (for example with ps aux | grep ndbd | grep -iv grep) that all ndbd processes have terminated. If you still see ndbd processes, run another
killall ndbd
Afterwards you can run
ndb_mgm
on loadb1.example.com and verify with show; that sql1.example.com is no longer connected. The data should still be available on sql2.example.com:
sql2.example.com:
mysql -u root -p
USE mysqlclustertest;
SELECT * FROM testtable;
quit;
Ok, all tests went fine, so let's start our sql1.example.com node again:
sql1.example.com:
ndbd
Now let's assume you want to restart the whole MySQL cluster, for example because you have changed
/var/lib/mysql-cluster/config.ini on loadb1.example.com. To do this, use the ndb_mgm cluster management
client on loadb1.example.com:
loadb1.example.com:
ndb_mgm
On the ndb_mgm console, type
shutdown;
After a few status messages you will get the prompt back:
ndb_mgm>
This means that the cluster nodes sql1.example.com and sql2.example.com and also the cluster
management server have shut down.
Run
quit;
to leave the ndb_mgm console.
To start the cluster management server again, run this on loadb1.example.com:
ndb_mgmd -f /var/lib/mysql-cluster/config.ini
and on sql1.example.com and sql2.example.com run
ndbd
(or ndbd --initial if /var/lib/mysql-cluster/config.ini on loadb1.example.com has changed).
Afterwards run
ndb_mgm
again on loadb1.example.com and type
show;
to see the current status of the cluster. It might take a few seconds after a restart until all nodes are
reported as connected.
Type
quit;
to leave the ndb_mgm console.
Now we configure the two load balancers. First we enable IPVS (IP Virtual Server) on both load balancers by loading the required kernel modules:
loadb1.example.com / loadb2.example.com:
modprobe ip_vs_dh
modprobe ip_vs_ftp
modprobe ip_vs
modprobe ip_vs_lblc
modprobe ip_vs_lblcr
modprobe ip_vs_lc
modprobe ip_vs_nq
modprobe ip_vs_rr
modprobe ip_vs_sed
modprobe ip_vs_sh
modprobe ip_vs_wlc
modprobe ip_vs_wrr
In order to load the IPVS kernel modules at boot time, we list the modules in /etc/modules:
loadb1.example.com / loadb2.example.com:
vi /etc/modules
ip_vs_dh
ip_vs_ftp
ip_vs
ip_vs_lblc
ip_vs_lblcr
ip_vs_lc
ip_vs_nq
ip_vs_rr
ip_vs_sed
ip_vs_sh
ip_vs_wlc
ip_vs_wrr
Now we install Ultra Monkey. First we edit /etc/apt/sources.list and add the Ultra Monkey package repository (do not remove the other repositories), then we update the package list and install Ultra Monkey:
loadb1.example.com / loadb2.example.com:
vi /etc/apt/sources.list
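The repository lines look like the following (these are the lines for Debian Sarge; please check http://www.ultramonkey.org/ for the correct lines for your release):
deb http://www.ultramonkey.org/download/3/ sarge main
deb-src http://www.ultramonkey.org/download/3 sarge main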
apt-get update
apt-get install ultramonkey libdbi-perl libdbd-mysql-perl libmysqlclient14-dev
Now Ultra Monkey is being installed.
The libdbd-mysql-perl package we've just installed does not work with MySQL 5 (we use MySQL
5 on our MySQL cluster...), so we install the newest DBD::mysql Perl package:
loadb1.example.com / loadb2.example.com:
cd /tmp
wget http://search.cpan.org/CPAN/authors/id/C/CA/CAPTTOFU/DBD-mysql-3.0002.tar.gz
tar xvfz DBD-mysql-3.0002.tar.gz
cd DBD-mysql-3.0002
perl Makefile.PL
make
make install
The load balancers must be able to route traffic to the MySQL cluster nodes, therefore we enable packet forwarding on both load balancers:
loadb1.example.com / loadb2.example.com:
vi /etc/sysctl.conf
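Add the standard Linux setting for enabling IP forwarding:
# Enables packet forwarding
net.ipv4.ip_forward = 1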
Then apply the new setting:
sysctl -p
Next we configure heartbeat. We create three configuration files that must be identical on loadb1.example.com and loadb2.example.com:
loadb1.example.com / loadb2.example.com:
vi /etc/ha.d/ha.cf
logfacility        local0
bcast        eth0
mcast eth0 225.0.0.1 694 1 0
auto_failback off
node        loadb1
node        loadb2
respawn hacluster /usr/lib/heartbeat/ipfail
apiauth ipfail gid=haclient uid=hacluster
Please note: you must list the node names (in this case loadb1 and loadb2) as shown by the output of
uname -n. Other than that, you don't have to change anything in this file.
Then we edit /etc/ha.d/haresources:
loadb1.example.com / loadb2.example.com:
vi /etc/ha.d/haresources
loadb1 \
ldirectord::ldirectord.cf \
LVSSyncDaemonSwap::master \
IPaddr2::192.168.0.105/24/eth0/192.168.0.255
You must list one of the load balancer node names (here: loadb1) and list the virtual IP address
(192.168.0.105) together with the correct netmask (24) and broadcast address (192.168.0.255). If
you are unsure about the correct settings, http://www.subnetmask.info/ might help you.
The third heartbeat configuration file is /etc/ha.d/authkeys:
loadb1.example.com / loadb2.example.com:
vi /etc/ha.d/authkeys
auth 3
3 md5 somerandomstring
somerandomstring is a password which the two heartbeat daemons on loadb1 and loadb2 use to
authenticate against each other. Use your own string here. You have the choice between three
authentication mechanisms; I use md5 as it is the most secure one.
/etc/ha.d/authkeys should be readable by root only, therefore we run:
loadb1.example.com / loadb2.example.com:
chmod 600 /etc/ha.d/authkeys
Now we create the configuration file for ldirectord, the load balancer:
loadb1.example.com / loadb2.example.com:
vi /etc/ha.d/ldirectord.cf
# Global Directives
checktimeout=10
checkinterval=2
autoreload=no
logfile="local0"
quiescent=yes
virtual = 192.168.0.105:3306
        service = mysql
        real = 192.168.0.101:3306 gate
        real = 192.168.0.102:3306 gate
        checktype = negotiate
        login = "ldirector"
        passwd = "ldirectorpassword"
        database = "ldirectordb"
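ldirectord also needs to know which SQL query to run for its connection checks; inside the virtual section this is set with a request directive, for example (using the connectioncheck table that we create on the cluster below):
        request = "SELECT * FROM connectioncheck"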
Please fill in the correct virtual IP address (192.168.0.105) and the correct IP addresses of your
MySQL cluster nodes (192.168.0.101 and 192.168.0.102). 3306 is the port that MySQL runs on by
default. We also specify a MySQL user (ldirector) and password (ldirectorpassword), a database
(ldirectordb) and an SQL query. ldirectord uses this information to make test requests to the
MySQL cluster nodes to check if they are still available. We are going to create the ldirector
database with the ldirector user in the next step.
Now we create the necessary system startup links for heartbeat and remove those of ldirectord
(because ldirectord will be started by heartbeat):
loadb1.example.com / loadb2.example.com:
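On Debian this can be done with update-rc.d; a minimal sketch (using the default runlevels) could be:
update-rc.d heartbeat defaults
update-rc.d -f ldirectord remove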
Next we create the database and the MySQL user that ldirectord uses for its connection checks, first on sql1.example.com:
sql1.example.com:
mysql -u root -p
GRANT ALL ON ldirectordb.* TO 'ldirector'@'%' IDENTIFIED BY 'ldirectorpassword';
FLUSH PRIVILEGES;
CREATE DATABASE ldirectordb;
USE ldirectordb;
CREATE TABLE connectioncheck (i INT) ENGINE=NDBCLUSTER;
INSERT INTO connectioncheck VALUES (1);
quit;
Then we create the ldirectordb database on sql2.example.com as well (we don't have to create the connectioncheck table again because it uses ENGINE=NDBCLUSTER and is therefore shared by both cluster nodes):
sql2.example.com:
mysql -u root -p
GRANT ALL ON ldirectordb.* TO 'ldirector'@'%' IDENTIFIED BY 'ldirectorpassword';
FLUSH PRIVILEGES;
CREATE DATABASE ldirectordb;
quit;
Finally we must configure our MySQL cluster nodes sql1.example.com and sql2.example.com to
accept requests on the virtual IP address 192.168.0.105.
sql1.example.com / sql2.example.com:
vi /etc/sysctl.conf
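The cluster nodes must not answer ARP requests for the virtual IP address themselves; in this direct-routing setup only the active load balancer may do that. A sketch of the required /etc/sysctl.conf settings (assuming eth0 is the public interface):
# Do not reply to ARP requests for addresses that are only configured on lo (the virtual IP)
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.eth0.arp_announce = 2
Afterwards apply the settings: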
sysctl -p
Then we configure the virtual IP address as an additional loopback interface in /etc/network/interfaces:
vi /etc/network/interfaces
auto lo:0
iface lo:0 inet static
address 192.168.0.105
netmask 255.255.255.255
pre-up sysctl -p > /dev/null
ifup lo:0
Finally we can start heartbeat on both load balancers (and stop ldirectord if it is still running, because from now on heartbeat will start it):
loadb1.example.com / loadb2.example.com:
/etc/init.d/ldirectord stop
/etc/init.d/heartbeat start
If you don't see errors, you should now reboot both load balancers:
loadb1.example.com / loadb2.example.com:
shutdown -r now
After the reboot we can check if both load balancers work as expected:
loadb1.example.com / loadb2.example.com:
ip addr sh eth0
The active load balancer should list the virtual IP address (192.168.0.105):
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:16:3e:45:fc:f8 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.103/24 brd 192.168.0.255 scope global eth0
inet 192.168.0.105/24 brd 192.168.0.255 scope global secondary eth0
You can also check the IPVS load-balancing table on the active load balancer; it should list the virtual MySQL service and both cluster nodes:
ipvsadm -L -n
If your tests went fine, you can now try to access the MySQL database from a totally different server
in the same network (192.168.0.x) using the virtual IP address 192.168.0.105:
mysql -h 192.168.0.105 -u ldirector -p
(Please note: your MySQL client must at least be of version 4.1; older versions do not work with
MySQL 5.)
You can now switch off one of the MySQL cluster nodes for test purposes; you should then still be
able to connect to the MySQL database.
8 Annotations
There are some important things to keep in mind when running a MySQL cluster:
- All data is stored in RAM! Therefore you need lots of RAM on your cluster nodes. The formula for how
much RAM you need on each node is:
(SizeofDatabase * NumberOfReplicas * 1.1) / NumberOfDataNodes
So if you have a database that is 1 GB in size, with two replicas and two data nodes you would need
(1 GB * 2 * 1.1) / 2 = 1.1 GB of RAM on each node!
- The cluster management node listens on port 1186, and anyone can connect. So that's definitely not
secure, and therefore you should run your cluster in an isolated private network!
It's a good idea to have a look at the MySQL Cluster FAQ:
http://dev.mysql.com/doc/refman/5.0/en/mysql-cluster-faq.html and also at the MySQL Cluster
documentation: http://dev.mysql.com/doc/refman/5.0/en/ndbcluster.html
Links
MySQL: http://www.mysql.com/
MySQL Cluster documentation: http://dev.mysql.com/doc/refman/5.0/en/ndbcluster.html
MySQL Cluster FAQ: http://dev.mysql.com/doc/refman/5.0/en/mysql-cluster-faq.html
Ultra Monkey: http://www.ultramonkey.org/
The High-Availability Linux Project: http://www.linux-ha.org/