
Meaning of the values in pmap's last line:
mapped: the total virtual address space the process has mapped, i.e. the virtual memory it has reserved; this is the VSZ reported by ps.
writeable/private: the private (writable) address space of the process, i.e. the memory the process actually uses on its own.
shared: memory the process shares with other processes.
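
As a quick cross-check (a rough sketch; PID 2318 is simply the example process used below), ps should report a VSZ that matches pmap's mapped value, while RSS shows how much of it is resident:

ps -o pid,vsz,rss,comm -p 2318    # VSZ ~= pmap "mapped", RSS = resident memory
pmap -d 2318 | tail -1            # totals line for comparison
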
Loop over the device-format pmap output for process 2318, printing only the last line every 2 seconds:

[root@orcl:/root]$ while true; do pmap -d 2318 | tail -1; sleep 2; done
mapped: 1805072K writeable/private: 8324K shared: 1558540K
mapped: 1805072K writeable/private: 8324K shared: 1558540K
mapped: 1805072K writeable/private: 8324K shared: 1558540K
mapped: 1805072K writeable/private: 8324K shared: 1558540K
mapped: 1805072K writeable/private: 8324K shared: 1558540K
mapped: 1805072K writeable/private: 8324K shared: 1558540K
mapped: 1805072K writeable/private: 8324K shared: 1558540K
mapped: 1805072K writeable/private: 8324K shared: 1558540K
mapped: 1805072K writeable/private: 8324K shared: 1558540K
mapped: 1805072K writeable/private: 8324K shared: 1558540K
mapped: 1805072K writeable/private: 8324K shared: 1558540K
mapped: 1805072K writeable/private: 8324K shared: 1558540K
mapped: 1805072K writeable/private: 8324K shared: 1558540K
mapped: 1805072K writeable/private: 8324K shared: 1558540K

Tuning parameters when MySQL memory usage is too high

To estimate the memory cost of the individual MySQL parameters, you can use
http://www.mysqlcalculator.com/ (a script version of the same calculation follows the parameter list below)

key_buffer_size
Size of the index key buffer; only affects MyISAM tables
query_cache_size
Size of the query cache; this option is disabled by default
tmp_table_size
Maximum size of in-memory temporary tables
innodb_buffer_pool_size
Buffer for InnoDB table indexes, data, and inserted rows
innodb_additional_mem_pool_size
InnoDB internal memory pool (data dictionary cache)
innodb_log_buffer_size
Redo log buffer
max_connections
Maximum number of connections; cannot exceed 16384
sort_buffer_size
Connection-level parameter, allocated per connection
read_buffer_size
Size of MySQL's sequential read buffer
read_rnd_buffer_size
Random read buffer
thread_stack
Stack space allocated per connection thread
binlog_cache_size
Per-session binlog cache
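
The calculator applies the usual rule of thumb: global buffers plus max_connections times the per-session buffers. A minimal sketch that reads the same variables from a running instance and prints the theoretical maximum (variable names match MySQL 5.6/5.7; join_buffer_size, which the calculator also counts, is added here, innodb_additional_mem_pool_size and tmp_table_size are omitted for simplicity, and the bkuser login is just the example account used by the backup scripts below):

#!/bin/bash
# Worst-case memory estimate: global buffers + max_connections * per-session buffers
mysql -u bkuser -p -N -B -e "
SELECT ROUND((
    @@key_buffer_size + @@query_cache_size + @@innodb_buffer_pool_size +
    @@innodb_log_buffer_size +
    @@max_connections * ( @@sort_buffer_size + @@read_buffer_size +
                          @@read_rnd_buffer_size + @@join_buffer_size +
                          @@thread_stack + @@binlog_cache_size )
  ) / 1024 / 1024 / 1024, 2) AS max_memory_gb;"

Keep in mind this is an upper bound: the per-session buffers are only fully allocated when every connection actually needs them.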

1. Memory optimization principles

Give MySQL as much memory as possible for caching, but leave enough memory for the operating system and other programs.
MyISAM data files are read through the operating system's own I/O cache, so if there are MyISAM tables, reserve extra memory for the OS I/O cache.
Buffers such as the sort and join buffers are allocated per database session, so their defaults must be sized against the maximum connection count. Setting them too large not only wastes memory but can exhaust physical memory under high concurrency (for example, 1000 connections each holding a 4 MB sort buffer can by themselves consume up to about 4 GB).

2. MyISAM memory optimization

The MyISAM storage engine uses the key buffer to cache index blocks and speed up index reads and writes. MySQL has no dedicated cache for MyISAM data blocks; it relies entirely on the operating system's I/O cache.

  key_buffer_size:
    key_buffer_size sets the size of the MyISAM index block cache and directly affects MyISAM table access efficiency. It can be set in the MySQL parameter file; for a database that is mostly MyISAM, allocating at least 1/4 of available memory to key_buffer_size is recommended.
    Configure it in /usr/my.cnf as follows: key_buffer_size=512M
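
    To judge whether the key cache is big enough, a quick sketch using the MyISAM key cache counters (Key_reads counts index reads that had to go to disk):

    mysql -u bkuser -p -e "SHOW GLOBAL STATUS LIKE 'Key_read%';"
    # Key_reads / Key_read_requests should stay very small (well under 1%);
    # a high ratio suggests key_buffer_size is too small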

  read_buffer_size:
    If MyISAM tables are frequently scanned sequentially, increasing read_buffer_size can improve performance. Note that read_buffer_size is allocated per session, so setting the default too large wastes memory.

  read_rnd_buffer_size:
    For queries that sort MyISAM tables, such as SQL with an ORDER BY clause, increasing read_rnd_buffer_size appropriately can improve their performance. Note that read_rnd_buffer_size is also allocated per session, so setting the default too large wastes memory.

3. InnoDB memory optimization

  InnoDB uses a single memory area as its I/O cache pool; it caches both InnoDB index blocks and InnoDB data blocks.

  innodb_buffer_pool_size:
    This variable sets the maximum cache size for InnoDB table data and index data. As long as the operating system and other programs still have enough memory, the larger innodb_buffer_pool_size is, the higher the cache hit rate, the less disk I/O InnoDB tables need, and the better the performance.
    innodb_buffer_pool_size=512M

  innodb_log_buffer_size:
    Sets the size of the InnoDB redo log buffer. For large transactions that generate many change records, increasing innodb_log_buffer_size avoids unnecessary log writes to disk before the transaction commits.
    innodb_log_buffer_size=10M
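
    Two quick checks for these two settings, as a sketch (the counters below are standard InnoDB status variables): the buffer pool hit rate and the number of times a transaction had to wait for log buffer space.

    mysql -u bkuser -p -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';"
    # hit rate = 1 - Innodb_buffer_pool_reads / Innodb_buffer_pool_read_requests
    mysql -u bkuser -p -e "SHOW GLOBAL STATUS LIKE 'Innodb_log_waits';"
    # a steadily growing Innodb_log_waits means innodb_log_buffer_size is too small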

Reposted from https://www.cnblogs.com/roadlandscape/p/12808344.html; if this infringes your rights, please get in touch.

Replication error on the slave: column 0 of table 'XXX' cannot be converted from type 'varchar(120(bytes))' to type 'varchar(160(bytes)) utf8mb4'
Investigation showed the master's character set was utf8 while the slave's was utf8mb4: the slave had been rebuilt from a logical restore and the wrong character set was specified when its database was created.
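
A sketch of how to confirm this kind of mismatch (run the same statements on master and slave and compare; the database name in the last line is a placeholder):

# compare server and database character sets on master and slave
mysql -u bkuser -p -e "SHOW VARIABLES LIKE 'character_set%';"
mysql -u bkuser -p -e "SELECT schema_name, default_character_set_name FROM information_schema.schemata;"
# when rebuilding the slave from a logical dump, create the database with the
# master's character set first, e.g.:
# mysql -u bkuser -p -e "CREATE DATABASE xxx_db DEFAULT CHARACTER SET utf8;"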

[mysql@zhcz]$ crontab -l
0 2 * * * /bin/bash /home/mysql/scripts/xtrabackup.sh 41667 /tmp/mysql.sock
*/20 * * * * sh /home/mysql/binlog_backup/binbak_main.sh
0 6 * * * /bin/bash /home/mysql/scripts/backupcheck.sh

Logical backup

#!/bin/bash
source /etc/profile
backup_date=`date +%Y%m%d%H%M`;
hostip=$(/sbin/ifconfig -a|grep inet|grep -v 127.0.0.1 |grep -v inet6|awk '{print $2}'|tr -d "addr:" |head -n 1)
path=/Data/backup/
user=bkuser
passwd=XXXXXXXXXXXX
if [ ! -d "${path}" ]; then
    mkdir -p ${path}
fi
mysqldump -u$user -p$passwd --all-databases --single-transaction --master-data=2 --set-gtid-purged=off --skip-extended-insert --triggers --hex-blob --routines --events -e --max_allowed_packet=134217728 --net_buffer_length=16384 > ${path}${hostip}_full_${backup_date}.sql
tar -zcPf ${path}${hostip}_full_${backup_date}.tar.gz -C ${path} ${hostip}_full_${backup_date}.sql
rm -f ${path}${hostip}_full_${backup_date}.sql
find ${path} -mtime +30 -name "*.tar.gz" | xargs rm -f
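
For reference, a restore from one of these archives looks roughly like this (a sketch; the file names follow the pattern produced above and root credentials are assumed):

tar -zxf /Data/backup/<ip>_full_<YYYYmmddHHMM>.tar.gz -C /Data/backup/
mysql -uroot -p < /Data/backup/<ip>_full_<YYYYmmddHHMM>.sql
# the dump was taken with --master-data=2, so the commented-out CHANGE MASTER TO
# statement near the top of the file can be used to re-seed replication:
grep -m1 "CHANGE MASTER TO" /Data/backup/<ip>_full_<YYYYmmddHHMM>.sql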

Physical backup

[mysql@zhcz-template ~]$ cat /home/mysql/scripts/xtrabackup.sh
#!/bin/bash
#######################################Get the backup type############################################
#script: backup.sh, backs up the MySQL database with xtrabackup #
#author: jiangdailin tanhuafeng #
#time: 2017-11-22 #
######################################################################################################
#source ~/.bash_profile
export LANG="en_US.UTF-8"
export user=bkuser;
export password=XXXXXXXXXXXXXXX;
export port=$1;
export socket_file=$2;
export backup_date=`date +%Y%m%d`;
export week_daily=`date +%a`;
export week_num=`date +%Y%U`;
export last_week_num=`expr $week_num - 1`;
#export hostip=`/sbin/ifconfig -a|grep inet|grep -v 127.0.0.1|grep -v 192.168.* |grep -v inet6|awk '{print $2}'|tr -d "addr:" |head -n 1`;
export hostip=`/sbin/ifconfig -a|grep inet|grep -v 127.0.0.1|grep -v inet6|awk '{print $2}'|tr -d "addr:" |head -n 1`;
export ftp_server1=10.253.176.66;
export ftp_server2=10.156.85.89;
export ftp_target_dir=/Data/backup;
export backup_dir=/Data/backup/xtrabackup;
export full_bakcup_folder=${backup_dir}/${week_num}/${port}/full;
export incre_bakcup_folder=${backup_dir}/${week_num}/${port}/incre;
export log_dir=${backup_dir}/${week_num}/${port}/log;
#Determine if the full backup directory is empty
export checkpoint_file=${full_bakcup_folder}/xtrabackup_checkpoints;
export SFPORT=22
#######################################Get the backup type############################################
#If the full backup directory does not exist, the backup type is full #
#If the full backup directory exists, the single directory is empty, the backup type is full #
#If the full backup directory exists and is not null, the backup type is determined based on the date#
######################################################################################################
if [ ! -f "${checkpoint_file}" ]; then
backup_type=full
elif [ -f "${checkpoint_file}" ]; then
case "$week_daily" in 
"Mon")
backup_type=incre
;;
"Tue")
backup_type=incre
;;
"Wed")
backup_type=full
;;
"Thu")
backup_type=incre
;;
"Fri")
backup_type=incre
;;
"Sat")
backup_type=incre
;;
"Sun")
backup_type=full
;;
"*")
backup_type=error
esac
else
echo "An error occurred when retrieving the backup type"
fi
export backup_type
export logfile=${backup_date}_${backup_type}.log;
echo "the current backup type is : " $backup_type
#full backup function
function full_backup() {
if [ ! -d "${full_bakcup_folder}" ]; then
mkdir -p ${full_bakcup_folder}
elif [ -d "${full_bakcup_folder}" ]; then
rm -r ${full_bakcup_folder}
mkdir -p ${full_bakcup_folder}
fi
if [ ! -d "${log_dir}" ]; then
mkdir -p ${log_dir}
fi
#/usr/bin/xtrabackup \
/usr/local/xtrabackup/bin/xtrabackup \
--backup \
--user=${user} \
--password=${password} \
--socket=${socket_file} \
--compress \
--compress-threads=4 \
--no-timestamp \
--target-dir=${full_bakcup_folder} > ${log_dir}/${logfile} 2>&1
return $?
}
#incremental backup function
function increment_backup() {
if [ ! -d "${full_bakcup_folder}" ]; then
echo "Incremental backup does not find a base backup" >> ${log_dir}/${logfile}
else
#Because the backup requires the directory to be empty, create an empty directory if the directory does not exist. 
#If the directory exists, delete the directory and rebuild
if [ ! -d "${incre_bakcup_folder}" ]; then
mkdir -p ${incre_bakcup_folder}
elif [ -d "${incre_bakcup_folder}" ]; then
rm -rf ${incre_bakcup_folder}
mkdir -p ${incre_bakcup_folder}
fi
if [ ! -d "${log_dir}" ]; then
mkdir -p ${log_dir}
fi
#begin the incremental backup
#/usr/bin/xtrabackup \
/usr/local/xtrabackup/bin/xtrabackup \
--backup \
--user=${user} \
--password=${password} \
--socket=${socket_file} \
--compress \
--compress-threads=4 \
--target-dir=${incre_bakcup_folder} \
--incremental-basedir=${full_bakcup_folder} \
--no-timestamp > ${log_dir}/${logfile} 2>&1
fi
return $?
}
#Back up judgment function
function Backup_judgment() {
if [ "${backup_type}" == "full" ]; then
echo "The full backup task completes successfully";
cd ${backup_dir}/${week_num}/
# if [ -f "${hostip}_${port}_${backup_type}_${backup_date}.md5" ];then
# rm -rf ${hostip}_${port}_${backup_type}_${backup_date}.md5
# fi
tail -n1 ${log_dir}/${logfile} > ${hostip}_${port}_${backup_date}_${backup_type}.md5
cat ${full_bakcup_folder}/xtrabackup_checkpoints >> ${hostip}_${port}_${backup_date}_${backup_type}.md5
#tar -cvf ${hostip}_${port}_${backup_date}_${backup_type}.tar ${port}/full --remove-files 2>&1
tar -cvf ${hostip}_${port}_${backup_date}_${backup_type}.tar ${port}/full  2>&1
md5sum ${hostip}_${port}_${backup_date}_${backup_type}.tar >> ${hostip}_${port}_${backup_date}_${backup_type}.md5
elif [ "${backup_type}" == "incre" ]; then
echo "The incremental backup task completes successfully";
cd ${backup_dir}/${week_num}/
# if [ -f "${hostip}_${port}_${backup_date}_${backup_type}.md5" ];then
# rm -rf ${hostip}_${port}_${backup_date}_${backup_type}.md5
# fi
tail -n1 ${log_dir}/${logfile} > ${hostip}_${port}_${backup_date}_${backup_type}.md5
cat ${incre_bakcup_folder}/xtrabackup_checkpoints >> ${hostip}_${port}_${backup_date}_${backup_type}.md5
#tar -cvf ${hostip}_${port}_${backup_date}_${backup_type}.tar ${port}/incre --remove-files 2>&1
tar -cvf ${hostip}_${port}_${backup_date}_${backup_type}.tar ${port}/incre  2>&1
md5sum ${hostip}_${port}_${backup_date}_${backup_type}.tar >> ${hostip}_${port}_${backup_date}_${backup_type}.md5
fi
}

#ftp function
function ftp_file (){
ftp -n<<!
open $ftp_server
user mysql cqfae0630
cd $ftp_target_dir
lcd ${backup_dir}/${week_num}
prompt off
bin
mput ${hostip}_${port}_${backup_date}_${backup_type}.md5 ${hostip}_${port}_${backup_date}_${backup_type}.tar
close
bye
!
}


##sftp function
sftp_file()
{
sftp -oPort=$SFPORT root@$ftp_server1 << EOF
cd $ftp_target_dir
lcd ${backup_dir}/${week_num}
put ${hostip}_${port}_${backup_date}_${backup_type}.md5 
put ${hostip}_${port}_${backup_date}_${backup_type}.tar
bye
EOF

#sftp -oPort=$SFPORT mysql@$ftp_server2 << EOF
#cd $ftp_target_dir
#lcd ${backup_dir}/${week_num}
#put ${hostip}_${port}_${backup_date}_${backup_type}.md5 
#put ${hostip}_${port}_${backup_date}_${backup_type}.tar
#bye
#EOF
}

#Backup call function
function run() {
if [ "${backup_type}" == "full" ]; then
full_backup
full_backup_ok=$?
elif [ "${backup_type}" == "incre" ]; then
increment_backup
fi
Backup_judgment
tar_ok=$?
#if [ "$tar_ok" == 0 ]; then
#sftp_file
#sftp_ok=$?
#echo $sftp_ok>/tmp/1111
#fi
#There is no way to determine if FTP has completed file transfer
#if [ "$sftp_ok" == 0 ];then
#rm ${hostip}_${port}_${backup_date}_${backup_type}.md5 ${hostip}_${port}_${backup_date}_${backup_type}.tar
#fi
}

##main task
run
#backup task complete, delete last week's backup directory
if [ -d "${backup_dir}/${last_week_num}" ]; then
  rm -rf ${backup_dir}/${last_week_num}
fi
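
The script only takes backups; restoring follows the usual xtrabackup sequence. A rough sketch (assumes qpress is installed for --decompress; ${full_bakcup_folder} and ${incre_bakcup_folder} are the directories defined in the script above, and <datadir> is whatever my.cnf points at):

# decompress the .qp files produced by --compress
/usr/local/xtrabackup/bin/xtrabackup --decompress --target-dir=${full_bakcup_folder}
/usr/local/xtrabackup/bin/xtrabackup --decompress --target-dir=${incre_bakcup_folder}
# prepare the full backup, keeping redo apply pending so the incremental can be added
/usr/local/xtrabackup/bin/xtrabackup --prepare --apply-log-only --target-dir=${full_bakcup_folder}
# apply the incremental on top of the full backup (no --apply-log-only on the last incremental)
/usr/local/xtrabackup/bin/xtrabackup --prepare --target-dir=${full_bakcup_folder} --incremental-dir=${incre_bakcup_folder}
# copy back into an empty datadir and fix ownership
/usr/local/xtrabackup/bin/xtrabackup --copy-back --target-dir=${full_bakcup_folder}
chown -R mysql:mysql <datadir>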

Binlog backup

[mysql@zhcz-template ~]$ cat /home/mysql/binlog_backup/binbak_main.sh
#!/bin/sh

scripts_basedir=$(cd $(dirname $0) && pwd)

source /etc/profile
source $scripts_basedir/conf
source $scripts_basedir/lib

current_date=$(date +%Y%m%d)
current_datetime=$(date +%Y%m%d_%H%M%S)
binlog_format=$(get_db_conf 'log_bin|log-bin')
binlog_dir=$(get_db_conf 'datadir')

#Rotate the binary log
#mysql -e 'flush logs;' -u$user -p$password -P$port -h$host >> $scripts_basedir/logs/$current_datetime.log 2>&1

#if [ $(echo $?) -ne 0 ];then
#    echo "$(date +"%F %H:%M:%S")----mysql flush logs failed----error" >> $scripts_basedir/logs/$current_datetime.log
#    exit 10
#fi

#Conflict check: make sure the previous run has finished
ps_num=$(ps -ef |grep binbak_main.sh|grep -v grep|wc -l)

if [ $ps_num -gt 2 ];then
    echo "$(date +"%F %H:%M:%S")----previous run has not finished yet----error" >> $scripts_basedir/logs/$current_datetime.log
    exit 10
fi

#Check variables
echo "---------------------start variable checks----------------------" >> $scripts_basedir/logs/$current_datetime.log
check_var 'backup_type' $scripts_basedir/logs/$current_datetime.log $backup_type
check_var 'save_log_days' $scripts_basedir/logs/$current_datetime.log $save_log_days
check_var 'remote_host' $scripts_basedir/logs/$current_datetime.log $remote_host
check_var 'remote_dir' $scripts_basedir/logs/$current_datetime.log $remote_dir
check_var 'rds_host' $scripts_basedir/logs/$current_datetime.log $rds_host
check_var 'rds_user' $scripts_basedir/logs/$current_datetime.log $rds_user
check_var 'rds_password' $scripts_basedir/logs/$current_datetime.log $rds_password
check_var 'rds_port' $scripts_basedir/logs/$current_datetime.log $rds_port
check_var 'rds_db' $scripts_basedir/logs/$current_datetime.log  $rds_db

check_var 'binlog_format' $scripts_basedir/logs/$current_datetime.log  $binlog_format
check_var 'binlog_dir' $scripts_basedir/logs/$current_datetime.log  $binlog_dir
echo "---------------------检查变量结束----------------------" >> $scripts_basedir/logs/$current_datetime.log


#Clean up old backup logs
remove_log $scripts_basedir $save_log_days

echo "$(date +"%F %H:%M:%S")----binlog backup started----start" >> $scripts_basedir/logs/$current_datetime.log

#Get the binlog offset of the previous backup from RDS
server_ip=$(get_ip)
rds_info=$(mysql -e "select offset,id from binlog_file where ip='$server_ip' and backup_type='$backup_type';"  -u$rds_user -p$rds_password -P$rds_port -h$rds_host $rds_db 2> /dev/null)

echo $rds_info
log_offset=$(echo $rds_info|awk '{print $3}')
log_id=$(echo $rds_info|awk '{print $4}')

#Get the latest local binlog
local_binlog=$(get_lastbinlog $binlog_dir $binlog_format 1)

binlogs=$(get_binloglist $binlog_dir $binlog_format $log_offset $local_binlog)

ssh $remote_user@$remote_host "ls -d $remote_dir/$current_date/" 2> /dev/null
if [ $? -ne 1 ];then
   ssh $remote_user@$remote_host  "mkdir -p $remote_dir/$current_date/"
fi

#Transfer the binlogs
if [ -z $log_offset ] || [ $log_offset == '000001' ];then

    #rsync -arHzP  -e 'ssh -o StrictHostKeyChecking=no -i /root/.ssh/id_rsa_backup' $binlog_dir/$binlog_format.[0-9]*  $remote_user@$remote_host:$remote_dir/$current_date\_$backup_type\_$server_ip/  >> $scripts_basedir/logs/$current_datetime.log 2>&1
    #rsync -arHzP  $binlog_dir/$binlog_format.[0-9]*  $remote_user@$remote_host:$remote_dir/$current_date/$backup_type\_$server_ip/  >> $scripts_basedir/logs/$current_datetime.log 2>&1
    rsync -arHzP  $binlog_dir/$binlog_format.[0-9]*  $remote_user@$remote_host:$remote_dir/$current_date/$server_ip/  >> $scripts_basedir/logs/$current_datetime.log 2>&1

else 
    #rsync -arHzP  -e 'ssh -o StrictHostKeyChecking=no -i /root/.ssh/id_rsa_backup' $binlogs  $remote_user@$remote_host:$remote_dir/$current_date\_$backup_type\_$server_ip/  >> $scripts_basedir/logs/$current_datetime.log 2>&1
    #rsync -arHzP  $binlogs  $remote_user@$remote_host:$remote_dir/$current_date/$backup_type\_$server_ip/  >> $scripts_basedir/logs/$current_datetime.log 2>&1
    rsync -arHzP  $binlogs  $remote_user@$remote_host:$remote_dir/$current_date/$server_ip/  >> $scripts_basedir/logs/$current_datetime.log 2>&1

fi


if [ $(echo $?) -ne 0 ];then
    echo "$(date +"%F %H:%M:%S")----binlog备份失败----error" >> $scripts_basedir/logs/$current_datetime.log
    exit 11
fi

#Update the binlog offset record in RDS
if [ -z $log_id ];then
    mysql -e "replace into binlog_file(ip,backup_type,offset) values('$server_ip','$backup_type','$local_binlog');"  -u$rds_user -p$rds_password -P$rds_port -h$rds_host $rds_db >> $scripts_basedir/logs/$current_datetime.log 2>&1
else
    mysql -e "replace into binlog_file(id,ip,backup_type,offset) values($log_id,'$server_ip','$backup_type','$local_binlog');"  -u$rds_user -p$rds_password -P$rds_port -h$rds_host $rds_db >> $scripts_basedir/logs/$current_datetime.log 2>&1
fi

if [ $(echo $?) -ne 0 ];then
    echo "$(date +"%F %H:%M:%S")----RDS更新binlog文件失败----error" >> $scripts_basedir/logs/$current_datetime.log
    exit 12
fi

echo "$(date +"%F %H:%M:%S")----binlog备份结束----end" >> $scripts_basedir/logs/$current_datetime.log

Check the backup

cat /home/mysql/scripts/backupcheck.sh
#EIP=`ip a|grep inet | grep -v "127.0.0.1" | grep -v inet6 | grep brd | awk '{print $2}' | cut -d '/' -f 1`
EIP=`/usr/sbin/ip a|grep 192.168|awk '{print $2}'|cut -d '/' -f 1`
DATE=`date +%Y%m%d`
week_num=`date +%Y%U`
week_day=`date +%a`
if  [ "$week_day" = "Wed" ] || [ "$week_day" = "Sun" ]; then
logfile=/Data/backup/xtrabackup/$week_num/41667/log/"$DATE"_full.log
tarfile=/Data/backup/xtrabackup/$week_num/"$EIP"_41667_"$DATE"_full.tar
else 
logfile=/Data/backup/xtrabackup/$week_num/41667/log/"$DATE"_incre.log
tarfile=/Data/backup/xtrabackup/$week_num/"$EIP"_41667_"$DATE"_incre.tar
fi
checklog=`cat $logfile|grep "completed OK\!"|wc -l`
checkfilesize=`ls -l $tarfile|awk  '{print int($5)}'`
echo "< $DATE >" >/tmp/xtrabackup_$DATE.txt
echo "$EIP">>/tmp/xtrabackup_$DATE.txt
if [ $checklog != "0" ];then
echo "xtrabackup success!">>/tmp/xtrabackup_$DATE.txt;
else 
echo "xtrabackup fail!">>/tmp/xtrabackup_$DATE.txt;
fi
if [ $checkfilesize -gt 102400 ];then
echo "DBbackup file check success!Detail as below:">>/tmp/xtrabackup_$DATE.txt;
ls -l $tarfile>>/tmp/xtrabackup_$DATE.txt;
else
echo "DBbackup file check fail!Pls contract DBA!">>/tmp/xtrabackup_$DATE.txt;
fi
chmod 777 /tmp/xtrabackup_$DATE.txt
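
backupcheck.sh only validates the local tar file. The .md5 file that xtrabackup.sh uploads next to the tar can be used to verify the remote copy as well; a rough sketch, run inside the remote backup directory (file names follow the pattern above):

# the last line of the .md5 file is the "md5sum <tar>" record appended by Backup_judgment
tail -n1 <ip>_41667_<YYYYmmdd>_full.md5 > /tmp/check.md5
md5sum -c /tmp/check.md5 && echo "remote copy OK" || echo "remote copy CORRUPT"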