herohope 2020-07-19
Summary: asynchronous vs. semi-synchronous replication. By default, MySQL replication is asynchronous: the master writes every update to its binlog, but does not confirm that those updates ever reach a slave. Asynchronous replication is fast, but if the master or a slave fails there is a real risk that the slaves are out of sync with the master, and committed data can even be lost. MySQL 5.5 introduced semi-synchronous replication to guarantee that when the master fails, at least one slave holds a complete copy of the data. If no slave acknowledges within the configured timeout, the master temporarily falls back to asynchronous replication so the application keeps working, then switches back to semi-synchronous mode once a slave has caught up.
How MHA works. Compared with other HA software, MHA focuses on keeping the master of a MySQL replication setup highly available. Its distinguishing feature is that it can repair the relay-log differences between multiple slaves, bring all slaves to a consistent state, promote one of them to be the new master, and repoint the remaining slaves at it. During a failover MHA:

- saves binary log events (binlog events) from the crashed master;
- identifies the slave with the most recent data;
- applies the differential relay logs to the other slaves;
- applies the binlog events saved from the master;
- promotes one slave to be the new master;
- makes the other slaves replicate from the new master.
MHA currently supports one-master, multi-slave topologies. Building an MHA cluster requires at least three database servers: one master, one backup (candidate) master, and one regular slave.
The master serves writes; the candidate master (actually a slave, hostname slave1) and the regular slave both serve reads. When the master goes down, the candidate master is promoted to be the new master and the slave is repointed at it. A separate host, manager, runs the MHA manager.
1. Starting the deployment
Note: the clocks on all hosts must be synchronized.
```
# Configure the EPEL repository on all four machines
[ ~]# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
```
```
# Set up passwordless SSH between all hosts
# On the manager host:
[ ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
1c:cb:2d:4f:b1:80:ea:80:35:3b:89:48:5f:09:eb:2e
(randomart image omitted)
[ ~]# for i in manager master slave1 slave2;do ssh-copy-id -i $i;done   # distribute the key to every host

# On the master host:
[ ~]# ssh-keygen -t rsa
[ ~]# for i in manager master slave1 slave2;do ssh-copy-id -i $i;done

# On slave1:
[ ~]# ssh-keygen -t rsa
[ ~]# for i in manager master slave1 slave2;do ssh-copy-id -i $i;done

# On slave2:
[ ~]# ssh-keygen -t rsa
[ ~]# for i in manager master slave1 slave2;do ssh-copy-id -i $i;done
```
Test passwordless SSH login
```
[ ~]# for i in master manager slave1 slave2;do ssh $i hostname;done
master
manager
slave1
slave2
[ ~]# ssh master
Last failed login: Thu Jul  2 17:05:01 CST 2020 from 192.168.171.150 on ssh:notty
There was 1 failed login attempt since the last successful login.
Last login: Thu Jul  2 16:57:27 2020 from 192.168.171.1
[ ~]# ssh slave1
Last login: Thu Jul  2 17:02:59 2020 from 192.168.171.1
[ ~]# ssh slave2
Last login: Thu Jul  2 17:03:29 2020 from 192.168.171.1

# Configure name resolution in /etc/hosts and copy it to the other hosts
[ ~]# vim /etc/hosts
192.168.171.150 manager
192.168.171.151 master
192.168.171.152 slave1
192.168.171.153 slave2
[ ~]# scp /etc/hosts :/etc/
[ ~]# scp /etc/hosts :/etc/
[ ~]# scp /etc/hosts :/etc/
```
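Every later step (ssh-copy-id, scp, MHA's SSH checks) relies on these four hostnames resolving, so a quick sanity check can save debugging time. The helper below is hypothetical, not part of MHA; `HOSTS_FILE` is parameterized so the demo can run against a temporary file instead of the real /etc/hosts:

```shell
#!/bin/sh
# Hypothetical helper: verify that every cluster hostname appears in a
# hosts file before relying on name-based ssh/scp.
HOSTS_FILE=${HOSTS_FILE:-/etc/hosts}

check_hosts() {
    missing=0
    for h in manager master slave1 slave2; do
        grep -qw "$h" "$HOSTS_FILE" || { echo "missing: $h"; missing=1; }
    done
    return $missing
}

# Demo against a temporary file with the entries from this article
HOSTS_FILE=$(mktemp)
printf '192.168.171.150 manager\n192.168.171.151 master\n192.168.171.152 slave1\n192.168.171.153 slave2\n' > "$HOSTS_FILE"
check_hosts && echo "hosts ok"
```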
2. Configuring MySQL semi-synchronous replication
To minimize data loss when the master host fails (e.g. due to hardware damage), it is recommended to configure MySQL semi-synchronous replication alongside MHA.
**Note:** the MySQL semi-sync plugins were originally contributed by Google and live under /usr/local/mysql/lib/plugin/: semisync_master.so is used on the master, semisync_slave.so on the slaves.
Install the appropriate plugin on each node (master, candidate master, and slave).
```
# Check that the server supports dynamic plugin loading
mysql> show variables like '%have_dynamic%';
+----------------------+-------+
| Variable_name        | Value |
+----------------------+-------+
| have_dynamic_loading | YES   |      # YES means dynamic loading is supported
+----------------------+-------+
1 row in set (0.00 sec)

# Install the semi-sync plugins (semisync_master.so, semisync_slave.so) on
# every MySQL server; the steps are shown for one server only
mysql> INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
Query OK, 0 rows affected (0.01 sec)
mysql> INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
Query OK, 0 rows affected (0.00 sec)

# Verify the plugins are installed -- the last two rows of show plugins:
mysql> show plugins;
| rpl_semi_sync_master | ACTIVE | REPLICATION | semisync_master.so | GPL |
| rpl_semi_sync_slave  | ACTIVE | REPLICATION | semisync_slave.so  | GPL |
# Alternatively:
mysql> select * from information_schema.plugins;

# Inspect the semi-sync variables
mysql> show variables like '%rpl_semi_sync%';
+-------------------------------------------+------------+
| Variable_name                             | Value      |
+-------------------------------------------+------------+
| rpl_semi_sync_master_enabled              | OFF        |
| rpl_semi_sync_master_timeout              | 10000      |
| rpl_semi_sync_master_trace_level          | 32         |
| rpl_semi_sync_master_wait_for_slave_count | 1          |
| rpl_semi_sync_master_wait_no_slave        | ON         |
| rpl_semi_sync_master_wait_point           | AFTER_SYNC |
| rpl_semi_sync_slave_enabled               | OFF        |
| rpl_semi_sync_slave_trace_level           | 32         |
+-------------------------------------------+------------+
8 rows in set (0.00 sec)
# The plugins are installed but not yet enabled, hence the OFF values
```
Edit my.cnf to configure replication
Note: if the master already exists and the slaves are being added later, copy the databases to be replicated from the master to the slaves before configuring replication (e.g. take a backup on the master and restore it on each slave).
```
# On the master:
[ ~]# vim /etc/my.cnf
server-id = 1
log-bin = mysql-bin
binlog_format = mixed
log-bin-index = mysql-bin.index
rpl_semi_sync_master_enabled = 1      # 1 enables, 0 disables; same for the slave option
rpl_semi_sync_master_timeout = 10000  # in milliseconds: after waiting 10 s for a slave
                                      # ACK, the master falls back to asynchronous mode
rpl_semi_sync_slave_enabled = 1
relay_log_purge = 0
relay-log = relay-bin
relay-log-index = slave-relay-bin.index

# On the candidate master:
[ ~]# vim /etc/my.cnf
server-id = 2
log-bin = mysql-bin
binlog_format = mixed
log-bin-index = mysql-bin.index
relay_log_purge = 0                   # 0 stops the SQL thread from deleting a relay log
                                      # after applying it; MHA may need another slave's
                                      # relay logs to recover a lagging slave, so
                                      # auto-purging is disabled
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
rpl_semi_sync_master_enabled = 1
rpl_semi_sync_master_timeout = 10000
rpl_semi_sync_slave_enabled = 1

# On slave2:
server-id = 3
log-bin = mysql-bin
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
read_only = 1
rpl_semi_sync_slave_enabled = 1
```

After restarting mysqld, check the semi-sync variables and status:

```
mysql> show variables like '%rpl_semi_sync%';
+-------------------------------------------+------------+
| Variable_name                             | Value      |
+-------------------------------------------+------------+
| rpl_semi_sync_master_enabled              | ON         |
| rpl_semi_sync_master_timeout              | 10000      |
| rpl_semi_sync_master_trace_level          | 32         |
| rpl_semi_sync_master_wait_for_slave_count | 1          |
| rpl_semi_sync_master_wait_no_slave        | ON         |
| rpl_semi_sync_master_wait_point           | AFTER_SYNC |
| rpl_semi_sync_slave_enabled               | ON         |
| rpl_semi_sync_slave_trace_level           | 32         |
+-------------------------------------------+------------+
8 rows in set (0.00 sec)

mysql> show status like '%rpl_semi_sync%';
+--------------------------------------------+-------+
| Variable_name                              | Value |
+--------------------------------------------+-------+
| Rpl_semi_sync_master_clients               | 0     |
| Rpl_semi_sync_master_net_avg_wait_time     | 0     |
| Rpl_semi_sync_master_net_wait_time         | 0     |
| Rpl_semi_sync_master_net_waits             | 0     |
| Rpl_semi_sync_master_no_times              | 0     |
| Rpl_semi_sync_master_no_tx                 | 0     |
| Rpl_semi_sync_master_status                | ON    |
| Rpl_semi_sync_master_timefunc_failures     | 0     |
| Rpl_semi_sync_master_tx_avg_wait_time      | 0     |
| Rpl_semi_sync_master_tx_wait_time          | 0     |
| Rpl_semi_sync_master_tx_waits              | 0     |
| Rpl_semi_sync_master_wait_pos_backtraverse | 0     |
| Rpl_semi_sync_master_wait_sessions         | 0     |
| Rpl_semi_sync_master_yes_tx                | 0     |
| Rpl_semi_sync_slave_status                 | OFF   |
+--------------------------------------------+-------+
15 rows in set (0.00 sec)
```

Several of these status variables are worth watching:

- Rpl_semi_sync_master_status: whether the master is currently in semi-synchronous or asynchronous mode
- Rpl_semi_sync_master_clients: how many slaves are connected in semi-synchronous mode
- Rpl_semi_sync_master_yes_tx: number of commits successfully acknowledged by a slave
- Rpl_semi_sync_master_no_tx: number of commits not acknowledged by a slave
- Rpl_semi_sync_master_tx_avg_wait_time: average extra time a transaction waits because semi-sync is enabled
- Rpl_semi_sync_master_net_avg_wait_time: average network wait time after a transaction enters the wait queue

Create the replication and management accounts and point the slaves at the master (the account names mharep and manager are the ones referenced later in the MHA configuration):

```
# On the master:
mysql> grant replication slave on *.* to 'mharep'@'192.168.171.%' identified by '123';
       # replication account; only needed on the master and candidate master
mysql> grant all privileges on *.* to 'manager'@'192.168.171.%' identified by '123';
       # MHA management account; needed on every MySQL server
mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000002 |      746 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

# On the candidate master:
mysql> grant replication slave on *.* to 'mharep'@'192.168.171.%' identified by '123';
mysql> grant all privileges on *.* to 'manager'@'192.168.171.%' identified by '123';
mysql> change master to master_host='192.168.171.151',master_port=3306,master_user='mharep',master_password='123',master_log_file='mysql-bin.000001',master_log_pos=746;
mysql> start slave;

# On slave2:
mysql> grant all privileges on *.* to 'manager'@'192.168.171.%' identified by '123';
mysql> change master to master_host='192.168.171.151',master_port=3306,master_user='mharep',master_password='123',master_log_file='mysql-bin.000001',master_log_pos=746;
mysql> start slave;
```
Check the replication status on the two slaves (the candidate master and slave2), then the semi-sync status on the master.
```
# Check the semi-sync status on the master: both slaves are now connected
mysql> show status like '%rpl_semi_sync%';
+--------------------------------------------+-------+
| Variable_name                              | Value |
+--------------------------------------------+-------+
| Rpl_semi_sync_master_clients               | 2     |
| Rpl_semi_sync_master_net_avg_wait_time     | 0     |
| Rpl_semi_sync_master_net_wait_time         | 0     |
| Rpl_semi_sync_master_net_waits             | 0     |
| Rpl_semi_sync_master_no_times              | 0     |
| Rpl_semi_sync_master_no_tx                 | 0     |
| Rpl_semi_sync_master_status                | ON    |
| Rpl_semi_sync_master_timefunc_failures     | 0     |
| Rpl_semi_sync_master_tx_avg_wait_time      | 0     |
| Rpl_semi_sync_master_tx_wait_time          | 0     |
| Rpl_semi_sync_master_tx_waits              | 0     |
| Rpl_semi_sync_master_wait_pos_backtraverse | 0     |
| Rpl_semi_sync_master_wait_sessions         | 0     |
| Rpl_semi_sync_master_yes_tx                | 0     |
| Rpl_semi_sync_slave_status                 | OFF   |
+--------------------------------------------+-------+
15 rows in set (0.00 sec)
```
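The status output above is easy to monitor mechanically. The helper below is a sketch (the function name and output format are made up for illustration); it parses the tab-separated form that `mysql -N -e "SHOW STATUS LIKE '%rpl_semi_sync%'"` produces and reports whether semi-sync is active and how many semi-sync slaves are connected:

```shell
#!/bin/sh
# Hypothetical helper: summarize semi-sync health from SHOW STATUS output.
semisync_summary() {
    awk -F'\t' '
        $1 == "Rpl_semi_sync_master_status"  { status = $2 }
        $1 == "Rpl_semi_sync_master_clients" { clients = $2 }
        END { printf "status=%s clients=%s\n", status, clients }
    '
}

# Demo with canned output; in practice pipe from:
#   mysql -N -e "SHOW STATUS LIKE '%rpl_semi_sync%'"
printf 'Rpl_semi_sync_master_clients\t2\nRpl_semi_sync_master_status\tON\n' | semisync_summary
# prints: status=ON clients=2
```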
3. Configuring mysql-mha. MHA consists of a manager component and a node component.
```
# Install MHA's dependencies on every host (requires the system yum repos
# and internet access)
[ ~]# yum -y install perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-Config-IniFiles ncftp perl-Params-Validate perl-CPAN perl-Test-Mock-LWP.noarch perl-LWP-Authen-Negotiate.noarch perl-devel perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker
```
The manager host needs both packages (node and manager) installed; the three database hosts only need the MHA node package.
Download the software (the mha4mysql-node and mha4mysql-manager tarballs).
```
# Install the node package on all database hosts
[ ~]# tar zxf mha4mysql-node-0.58.tar.gz
[ ~]# cd mha4mysql-node-0.58/
[ mha4mysql-node-0.58]# perl Makefile.PL
*** Module::AutoInstall version 1.06
*** Checking for Perl dependencies...
[Core Features]
- DBI        ...loaded. (1.627)
- DBD::mysql ...loaded. (4.023)
*** Module::AutoInstall configuration finished.
Checking if your kit is complete...
Looks good
Writing Makefile for mha4mysql::node
[ mha4mysql-node-0.58]# make && make install
# The same steps are repeated on the other two database hosts
```
```
# On the manager host, install both packages
[ ~]# tar zxf mha4mysql-node-0.58.tar.gz
[ ~]# cd mha4mysql-node-0.58/
[ mha4mysql-node-0.58]# perl Makefile.PL
[ mha4mysql-node-0.58]# make && make install

# Install the manager package
[ ~]# tar zxf mha4mysql-manager-0.58.tar.gz
[ ~]# cd mha4mysql-manager-0.58/
[ mha4mysql-manager-0.58]# perl Makefile.PL
*** Module::AutoInstall version 1.06
*** Checking for Perl dependencies...
[Core Features]
- DBI                   ...loaded. (1.627)
- DBD::mysql            ...loaded. (4.023)
- Time::HiRes           ...loaded. (1.9725)
- Config::Tiny          ...loaded. (2.14)
- Log::Dispatch         ...loaded. (2.41)
- Parallel::ForkManager ...missing.
- MHA::NodeConst        ...loaded. (0.58)
==> Auto-install the 1 mandatory module(s) from CPAN? [y] y     # answer the prompt
*** Dependencies will be installed the next time you type 'make'.
*** Module::AutoInstall configuration finished.
Checking if your kit is complete...
Looks good
Warning: prerequisite Parallel::ForkManager 0 not found.
Writing Makefile for mha4mysql::manager
[ mha4mysql-manager-0.58]# make && make install
```
```
[ ~]# mkdir /etc/masterha
[ ~]# mkdir -p /masterha/app1
[ ~]# mkdir /scripts
[ ~]# cd mha4mysql-manager-0.58/
[ mha4mysql-manager-0.58]# cp samples/conf/* /etc/masterha/
[ mha4mysql-manager-0.58]# cp samples/scripts/* /scripts/
```
Edit the MHA configuration file
```
[ ~]# vim /etc/masterha/app1.cnf
[server default]
manager_workdir=/masterha/app1          # manager working directory
manager_log=/masterha/app1/manager.log  # manager log file
user=manager                            # monitoring user
password=123                            # monitoring user's password
ssh_user=root                           # ssh user
repl_user=mharep                        # replication user
repl_password=123                       # replication user's password
ping_interval=1                         # interval in seconds between pings of the
                                        # master (default is 3)

[server1]
hostname=192.168.171.151
port=3306
master_binlog_dir=/usr/local/mysql/data
candidate_master=1                      # marks this host as a candidate master: when a
                                        # failover happens this slave can be promoted

[server2]
hostname=192.168.171.152
port=3306
master_binlog_dir=/usr/local/mysql/data
candidate_master=1

[server3]
hostname=192.168.171.153
port=3306
master_binlog_dir=/usr/local/mysql/data
no_master=1
```
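The check commands below also pass `--global_conf=/etc/masterha/masterha_default.cnf`, the global defaults file copied from samples/conf in the same cp step. As a sketch only (every value here is an assumption mirroring the `[server default]` block above; application files like app1.cnf override these defaults), it can look like:

```
# /etc/masterha/masterha_default.cnf -- global defaults shared by all MHA apps
[server default]
user=manager
password=123
ssh_user=root
repl_user=mharep
repl_password=123
```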
Validate SSH connectivity
```
[ ~]# masterha_check_ssh --global_conf=/etc/masterha/masterha_default.cnf --conf=/etc/masterha/app1.cnf
```
Validate cluster replication (all MySQL instances must be running)
```
[ ~]# masterha_check_repl --global_conf=/etc/masterha/masterha_default.cnf --conf=/etc/masterha/app1.cnf
Thu Jul  2 19:20:58 2020 - [info] Reading default configuration from /etc/masterha/masterha_default.cnf..
Thu Jul  2 19:20:58 2020 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Thu Jul  2 19:20:58 2020 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Thu Jul  2 19:20:58 2020 - [info] MHA::MasterMonitor version 0.58.
Thu Jul  2 19:20:59 2020 - [info] GTID failover mode = 0
Thu Jul  2 19:20:59 2020 - [info] Dead Servers:
Thu Jul  2 19:20:59 2020 - [info] Alive Servers:
Thu Jul  2 19:20:59 2020 - [info]   192.168.171.151(192.168.171.151:3306)
Thu Jul  2 19:20:59 2020 - [info]   192.168.171.152(192.168.171.152:3306)
Thu Jul  2 19:20:59 2020 - [info]   192.168.171.153(192.168.171.153:3306)
Thu Jul  2 19:20:59 2020 - [info] Alive Slaves:
Thu Jul  2 19:20:59 2020 - [info]   192.168.171.152(192.168.171.152:3306)  Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Jul  2 19:20:59 2020 - [info]     Replicating from 192.168.171.151(192.168.171.151:3306)
Thu Jul  2 19:20:59 2020 - [info]     Primary candidate for the new Master (candidate_master is set)
Thu Jul  2 19:20:59 2020 - [info]   192.168.171.153(192.168.171.153:3306)  Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Thu Jul  2 19:20:59 2020 - [info]     Replicating from 192.168.171.151(192.168.171.151:3306)
Thu Jul  2 19:20:59 2020 - [info]     Not candidate for the new Master (no_master is set)
Thu Jul  2 19:20:59 2020 - [info] Current Alive Master: 192.168.171.151(192.168.171.151:3306)
Thu Jul  2 19:20:59 2020 - [info] Checking slave configurations..
Thu Jul  2 19:20:59 2020 - [info]  read_only=1 is not set on slave 192.168.171.152(192.168.171.152:3306).
Thu Jul  2 19:20:59 2020 - [warning]  relay_log_purge=0 is not set on slave 192.168.171.153(192.168.171.153:3306).
Thu Jul  2 19:20:59 2020 - [info] Checking replication filtering settings..
Thu Jul  2 19:20:59 2020 - [info]  binlog_do_db= , binlog_ignore_db=
Thu Jul  2 19:20:59 2020 - [info]  Replication filtering check ok.
Thu Jul  2 19:20:59 2020 - [info] GTID (with auto-pos) is not supported
Thu Jul  2 19:20:59 2020 - [info] Starting SSH connection tests..
Thu Jul  2 19:21:00 2020 - [info] All SSH connection tests passed successfully.
Thu Jul  2 19:21:00 2020 - [info] Checking MHA Node version..
Thu Jul  2 19:21:01 2020 - [info]  Version check ok.
Thu Jul  2 19:21:01 2020 - [info] Checking SSH publickey authentication settings on the current master..
Thu Jul  2 19:21:01 2020 - [info] HealthCheck: SSH to 192.168.171.151 is reachable.
Thu Jul  2 19:21:01 2020 - [info] Master MHA Node version is 0.58.
Thu Jul  2 19:21:01 2020 - [info] Checking recovery script configurations on 192.168.171.151(192.168.171.151:3306)..
Thu Jul  2 19:21:01 2020 - [info]   Executing command: save_binary_logs --command=test --start_pos=4 --binlog_dir=/usr/local/mysql/data --output_file=/var/tmp/save_binary_logs_test --manager_version=0.58 --start_file=mysql-bin.000003
Thu Jul  2 19:21:01 2020 - [info]   Connecting to (192.168.171.151:22)..
  Creating /var/tmp if not exists..    ok.
  Checking output directory is accessible or not..   ok.
  Binlog found at /usr/local/mysql/data, up to mysql-bin.000003
Thu Jul  2 19:21:01 2020 - [info] Binlog setting check done.
Thu Jul  2 19:21:01 2020 - [info] Checking SSH publickey authentication and checking recovery script configurations on all alive slave servers..
Thu Jul  2 19:21:01 2020 - [info]   Executing command : apply_diff_relay_logs --command=test --slave_user='manager' --slave_host=192.168.171.152 --slave_ip=192.168.171.152 --slave_port=3306 --workdir=/var/tmp --target_version=5.7.22-log --manager_version=0.58 --relay_log_info=/usr/local/mysql/data/relay-log.info --relay_dir=/usr/local/mysql/data/ --slave_pass=xxx
Thu Jul  2 19:21:01 2020 - [info]   Connecting to (192.168.171.152:22)..
  Checking slave recovery environment settings..
    Opening /usr/local/mysql/data/relay-log.info ... ok.
    Relay log found at /usr/local/mysql/data, up to relay-bin.000002
    Temporary relay log file is /usr/local/mysql/data/relay-bin.000002
    Checking if super_read_only is defined and turned on.. not present or turned off, ignoring.
    Testing mysql connection and privileges..
mysql: [Warning] Using a password on the command line interface can be insecure.
 done.
    Testing mysqlbinlog output.. done.
    Cleaning up test file(s).. done.
Thu Jul  2 19:21:01 2020 - [info]   Executing command : apply_diff_relay_logs --command=test --slave_user='manager' --slave_host=192.168.171.153 --slave_ip=192.168.171.153 --slave_port=3306 --workdir=/var/tmp --target_version=5.7.22-log --manager_version=0.58 --relay_log_info=/usr/local/mysql/data/relay-log.info --relay_dir=/usr/local/mysql/data/ --slave_pass=xxx
Thu Jul  2 19:21:01 2020 - [info]   Connecting to (192.168.171.153:22)..
  Checking slave recovery environment settings..
    (same checks as above, all ok)
Thu Jul  2 19:21:02 2020 - [info] Slaves settings check done.
Thu Jul  2 19:21:02 2020 - [info]
192.168.171.151(192.168.171.151:3306) (current master)
 +--192.168.171.152(192.168.171.152:3306)
 +--192.168.171.153(192.168.171.153:3306)
Thu Jul  2 19:21:02 2020 - [info] Checking replication health on 192.168.171.152..
Thu Jul  2 19:21:02 2020 - [info]  ok.
Thu Jul  2 19:21:02 2020 - [info] Checking replication health on 192.168.171.153..
Thu Jul  2 19:21:02 2020 - [info]  ok.
Thu Jul  2 19:21:02 2020 - [warning] master_ip_failover_script is not defined.
Thu Jul  2 19:21:02 2020 - [warning] shutdown_script is not defined.
Thu Jul  2 19:21:02 2020 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.
```
**Note:** a successful check automatically detects all servers and the replication topology. If the check fails with the error `Can't exec "mysqlbinlog" ...`, fix it by running the following on every server:
```
[ ~]# ln -s /usr/local/mysql/bin/* /usr/local/bin/

# Start the manager
[ ~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf &>/tmp/mha_manager.log &
[1] 49094
```
Note: on Unix/Linux, appending & to a command makes it run in the background, e.g. to run MySQL in the background: /usr/local/mysql/bin/mysqld_safe --user=mysql &. Unlike mysqld, however, many programs exit when their controlling terminal closes, which is why we also prefix the command with nohup.
```
# Check status
[ ~]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:49094) is running(0:PING_OK), master:192.168.171.151
```
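masterha_check_status lends itself to simple cron-based monitoring. The wrapper below is hypothetical (its name and messages are made up); it keys off the `PING_OK` marker in the output shown above:

```shell
#!/bin/sh
# Hypothetical monitoring wrapper: masterha_check_status prints
# "... is running(0:PING_OK) ..." when the manager is healthy; anything
# else (e.g. NOT_RUNNING) should raise an alert.
check_mha() {
    # $1: one line of masterha_check_status output
    case "$1" in
        *PING_OK*) echo "mha: healthy" ;;
        *)         echo "mha: NOT healthy" ;;
    esac
}

# Demo with the output from this article; in practice use:
#   check_mha "$(masterha_check_status --conf=/etc/masterha/app1.cnf)"
check_mha "app1 (pid:49094) is running(0:PING_OK), master:192.168.171.151"
# prints: mha: healthy
```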
Verifying failover
```
# Simulate a master crash and check that the candidate master takes over
[ ~]# systemctl stop mysqld        # on the master

mysql> show slave status\G         # on slave2: the master has changed
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.171.152   # now the candidate master's IP
                  Master_User: mharep
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000002
          Read_Master_Log_Pos: 746
               Relay_Log_File: relay-bin.000002
                Relay_Log_Pos: 320
        Relay_Master_Log_File: mysql-bin.000002
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes

# Check the MHA log
[ ~]# cat /masterha/app1/manager.log
Master 192.168.171.151(192.168.171.151:3306) is down!          # master is down
Check MHA Manager logs at manager:/masterha/app1/manager.log for details.
Started automated(non-interactive) failover.
The latest slave 192.168.171.152(192.168.171.152:3306) has all relay logs for recovery.
Selected 192.168.171.152(192.168.171.152:3306) as a new master.
192.168.171.152(192.168.171.152:3306): OK: Applying all logs succeeded.
192.168.171.153(192.168.171.153:3306): This host has the latest relay log events.
Generating relay diff files from the latest slave succeeded.
192.168.171.153(192.168.171.153:3306): OK: Applying all logs succeeded. Slave started, replicating from 192.168.171.152(192.168.171.152:3306)
192.168.171.152(192.168.171.152:3306): Resetting slave info succeeded.
Master failover to 192.168.171.152(192.168.171.152:3306) completed successfully.   # failover succeeded
```
At this point the MHA setup is complete.
Routine operations on the MHA Manager host
1) Check whether the following files exist; delete them if they do
After a master failover, the MHA manager process stops automatically and leaves a marker file app1.failover.complete in manager_workdir (/masterha/app1). MHA will not start while this file exists, so delete /masterha/app1/app1.failover.complete before restarting MHA. The error it produces looks like:

[error][/usr/share/perl5/vendor_perl/MHA/MasterFailover.pm, ln298] Last failover was done at 2015/01/09 10:00:47. Current time is too early to do failover again. If you want to do failover, manually remove /masterha/app1/app1.failover.complete and run this script again
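The cleanup described above can be scripted. This is a sketch, not MHA tooling; it assumes the markers are plain files in the manager workdir, and the demo runs against a temporary directory instead of /masterha/app1:

```shell
#!/bin/sh
# Sketch: remove stale failover marker files before restarting the manager.
# Only do this after confirming the last failover really completed.
WORKDIR=${WORKDIR:-/masterha/app1}

clean_failover_markers() {
    for f in "$WORKDIR"/app1.failover.complete "$WORKDIR"/app1.failover.error; do
        if [ -e "$f" ]; then
            echo "removing stale marker: $f"
            rm -f "$f"
        fi
    done
}

# Demo against a temporary directory
WORKDIR=$(mktemp -d)
touch "$WORKDIR/app1.failover.complete"
clean_failover_markers
```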
```
# ll /masterha/app1/app1.failover.complete
# ll /masterha/app1/app1.failover.error
```
2) Run the MHA replication check (the old master must first be reconfigured as a slave of the candidate master)
```
mysql> CHANGE MASTER TO MASTER_HOST='192.168.18.6', MASTER_PORT=3306,
       MASTER_LOG_FILE='mysql-bin.000002', MASTER_LOG_POS=154,
       MASTER_USER='mharep', MASTER_PASSWORD='123';
Query OK, 0 rows affected, 2 warnings (0.01 sec)

# masterha_check_repl --conf=/etc/masterha/app1.cnf
```
3) Stop MHA
```
# masterha_stop --conf=/etc/masterha/app1.cnf
```
4) Start MHA
```
# nohup masterha_manager --conf=/etc/masterha/app1.cnf &>/tmp/mha_manager.log &
```
By default the manager refuses to start while a slave node is down; add --ignore_fail_on_start to start MHA even with a failed node:
```
# nohup masterha_manager --conf=/etc/masterha/app1.cnf --ignore_fail_on_start &>/tmp/mha_manager.log &
```
5) Check status
```
# masterha_check_status --conf=/etc/masterha/app1.cnf
```
6) Check the log
```
# tail -f /masterha/app1/manager.log
```
7) Rebuilding replication after a switchover
Rebuilding means: after the old master fails and the candidate master takes over, the candidate master becomes the new master; the repaired old master is then brought back as a new slave of the new master, and the five steps above are repeated. If the old master's data files are intact, the CHANGE MASTER statement to use can be recovered from the manager log:

```
[ ~]# grep "CHANGE MASTER TO" /masterha/app1/manager.log
Tue Feb 18 13:14:02 2020 - [info]  All other slaves should start replication from here.
Statement should be: CHANGE MASTER TO MASTER_HOST='192.168.18.6', MASTER_PORT=3306,
MASTER_LOG_FILE='mysql-bin.000002', MASTER_LOG_POS=154, MASTER_USER='mharep',
MASTER_PASSWORD='xxx';
Tue Feb 18 13:14:03 2020 - [info] Executed CHANGE MASTER.

mysql> CHANGE MASTER TO MASTER_HOST='192.168.18.6', MASTER_PORT=3306,
       MASTER_LOG_FILE='mysql-bin.000002', MASTER_LOG_POS=154,
       MASTER_USER='mharep', MASTER_PASSWORD='123';
mysql> start slave;
mysql> show slave status\G

# Restart the manager
# nohup masterha_manager --conf=/etc/masterha/app1.cnf &>/tmp/mha_manager.log &
# masterha_check_status --conf=/etc/masterha/app1.cnf
# If everything is healthy this prints "PING_OK"; "NOT_RUNNING" means MHA
# monitoring is not running.
```
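Copying the statement out of the log by hand is error-prone. The sed one-liner below is a sketch of how to pull the last logged CHANGE MASTER statement out of manager.log; it is demonstrated against a canned log line, since the real log only exists on the manager host:

```shell
#!/bin/sh
# Sketch: extract the last "CHANGE MASTER TO ...;" statement from an MHA
# manager log (the statement ends at the first semicolon on the line).
extract_change_master() {
    sed -n 's/.*\(CHANGE MASTER TO[^;]*;\).*/\1/p' "$1" | tail -n 1
}

# Demo with a canned log line modeled on the output above
log=$(mktemp)
printf '%s\n' "Tue Feb 18 13:14:02 2020 - [info] Statement should be: CHANGE MASTER TO MASTER_HOST='192.168.18.6', MASTER_PORT=3306, MASTER_LOG_FILE='mysql-bin.000002', MASTER_LOG_POS=154, MASTER_USER='mharep', MASTER_PASSWORD='xxx';" > "$log"
extract_change_master "$log"
```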
Periodically deleting relay logs: because the slaves were configured with relay_log_purge=0, relay logs must be purged periodically on each slave node. Purging can be I/O-intensive, so stagger the purge times across the slave nodes.
```
crontab -e
0 5 * * * /usr/local/bin/purge_relay_logs --user=root --password=pwd123 --port=3306 --disable_relay_log_purge >> /var/log/purge_relay.log 2>&1
```
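If a slow purge could still be running when the next cron entry fires, a lock avoids the two colliding. The wrapper below is a sketch: flock comes from util-linux, and PURGE_CMD is parameterized so the example does not require a live MySQL server:

```shell
#!/bin/sh
# Sketch: serialize purge_relay_logs runs with flock so overlapping cron
# invocations on the same node cannot collide.
LOCK=${LOCK:-/tmp/purge_relay.lock}
PURGE_CMD=${PURGE_CMD:-"/usr/local/bin/purge_relay_logs --user=root --password=pwd123 --port=3306 --disable_relay_log_purge"}

run_purge() {
    # -n: fail immediately instead of queueing behind another running purge
    flock -n "$LOCK" sh -c "$PURGE_CMD"
}
```

In the crontab line above, `run_purge` would replace the bare purge_relay_logs call.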
A follow-up MHA post mainly covers adding Keepalived and a script to switch the VIP.