
KunlunDB Manual Cluster Installation


This procedure was verified on a single Ubuntu 20.04.2 LTS virtual machine; 4 GB of VM memory is enough, 8 GB is better.

The system consists of: 1. two storage shards, each with three nodes (one shard is used for table data and one for metadata); 2. three computing nodes; 3. one cluster manager node. The configuration files are listed at the end of this page.

================================================================================================================= Packages and fixes that may be needed

1. sudo apt-get install vim python2

2. The computing node install reports missing libraries; copy the bundled library files into the system directory:
  1. sudo cp /home/kunlun/work01/postgresql-11.5-rel/lib/libpq.so.5 /usr/lib/
  2. sudo cp /home/kunlun/work01/postgresql-11.5-rel/lib/libmariadb.so.3 /usr/lib/
     Afterwards delete the computing node's data directory and the computing node's etc/*, then install again.
3. Metadata node creation fails with ImportError: No module named mysql.connector.
    In the postgresql-11.5-rel/resources directory:
  1. tar zxf mysql-connector-python-2.1.3.tar.gz
  2. cd mysql-connector-python-2.1.3/
  3. python2 setup.py build
  4. sudo python2 setup.py install
4. Installing psycopg2 requires setuptools:
  1. wget https://pypi.python.org/packages/source/s/setuptools/setuptools-0.6c11.tar.gz
  2. tar zxvf setuptools-0.6c11.tar.gz
  3. cd setuptools-0.6c11
  4. python2 setup.py build
  5. sudo python2 setup.py install
5. Cluster creation fails with ImportError: No module named psycopg2.
    Install from psycopg2-2.8.4.tar.gz in the postgresql-11.5-rel/resources directory:
  1. tar zxf psycopg2-2.8.4.tar.gz
  2. cd psycopg2-2.8.4/
  3. python2 setup.py build
  4. sudo python2 setup.py install
    If that installation fails, try the following first, then install again:
  1. sudo apt-get install libpq-dev python-dev
  2. sudo apt install python3-pip
  3. pip install psycopg2

================================================================================================================= Notes for reinstalling after a failed install: use ps -ef | grep mysql and ps -ef | grep postgres to check whether any processes are still occupying the ports; kill them if so, or reboot the system.

Then delete the following files:

1. The storage nodes' data directories (the directories specified in the JSON config files; otherwise the installer reports that the path already exists)
2. The computing nodes' data directories
3. percona-8.0.18-bin-rel/etc/* (otherwise the installer reports that the port is already in use)
4. postgresql-11.5-rel/etc/*
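
For the directory layout used in the appendix configuration files, the cleanup could look like the following sketch (adjust the paths if your JSON files point elsewhere):

    # storage node data dirs (as named in the mysql_shard*.json files)
    rm -rf /home/kunlun/work01/data_mysql/data*
    # computing node data dirs (as named in postgresql_comp.json)
    rm -rf /home/kunlun/work01/data_post/pg_data_dir*
    # clear generated etc files so ports are not reported as already in use
    rm -f /home/kunlun/work01/percona-8.0.18-bin-rel/etc/*
    rm -f /home/kunlun/work01/postgresql-11.5-rel/etc/*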

================================================================================================================= Part 1: Storage nodes (MySQL)

1. Download the corresponding packages from the official site: http://downloads.zettadb.com/releases/0.8.3/release-binaries (the path differs between versions).
    For example: cd /home/kunlun/work01/
  1. wget http://zettatech.tpddns.cn:14000/releases/0.8.3/release-binaries/percona-8.0.18-bin-rel.tgz
  2. wget http://zettatech.tpddns.cn:14000/releases/0.8.3/release-binaries/cluster_mgr_rel.tgz
  3. wget http://zettatech.tpddns.cn:14000/releases/0.8.3/release-binaries/postgresql-11.5-rel.tgz
     Extract the three archives with tar -zxf, as sketched below.
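
A minimal sketch of the download-and-extract step, assuming /home/kunlun/work01 as the working directory:

    cd /home/kunlun/work01/
    for pkg in percona-8.0.18-bin-rel cluster_mgr_rel postgresql-11.5-rel; do
      wget http://zettatech.tpddns.cn:14000/releases/0.8.3/release-binaries/$pkg.tgz
      tar -zxf $pkg.tgz
    done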
2. Add the percona and postgresql lib and bin directories to the environment variables:
  1. sudo vim /etc/profile
  2. export LD_LIBRARY_PATH=/home/kunlun/work01/percona-8.0.18-bin-rel/lib:$LD_LIBRARY_PATH
  3. export PATH=/home/kunlun/work01/percona-8.0.18-bin-rel/bin:$PATH
  4. export LD_LIBRARY_PATH=/home/kunlun/work01/postgresql-11.5-rel/lib:$LD_LIBRARY_PATH
  5. export PATH=/home/kunlun/work01/postgresql-11.5-rel/bin:$PATH
  6. source /etc/profile
3. In percona/dba_tools, edit mgr_config.json and save it under the file names mysql_shard1.json and mysql_shard2.json (both files are listed in the appendix).
    In both files, change the three node ip values to the corresponding server's ip.
    Change innodb_log_dir_path, data_dir_path and log_dir_path to directories of your choice; if a directory already exists, the next step reports an error.
    Change every user field to your current user (e.g. if the current user is kunlun, set it to kunlun).

4. In dba_tools, install nodes 0, 1 and 2 for each shard (node 0 is the primary), for example (see the loop sketch after the two commands):

sudo python2 install-mysql.py mgr_config=./mysql_shard1.json target_node_index=0   # rerun with target_node_index=1 and 2 to create replica nodes 1 and 2

sudo python2 install-mysql.py mgr_config=./mysql_shard2.json target_node_index=0   # rerun with target_node_index=1 and 2 to create replica nodes 1 and 2
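
The same two commands can simply be repeated over the three node indexes, for example:

    # install node 0 (primary) and replica nodes 1 and 2 for both shards
    for idx in 0 1 2; do
      sudo python2 install-mysql.py mgr_config=./mysql_shard1.json target_node_index=$idx
      sudo python2 install-mysql.py mgr_config=./mysql_shard2.json target_node_index=$idx
    done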

5. After installation, subsequent startups are done from dba_tools:

./startmysql.sh 6001   # likewise 6002, 6003, ...

6. Log in to verify, from dba_tools:

./imysql.sh 6001   # likewise 6002, 6003, ...
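
With the port layout from the appendix (6001-6006), a restart plus a quick login check could be scripted like this sketch:

    for port in 6001 6002 6003 6004 6005 6006; do
      ./startmysql.sh $port
    done
    ./imysql.sh 6001   # repeat for the other ports as needed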

================================================================================================================= Part 2: Computing nodes (PostgreSQL)

5. In postgresql/scripts, edit comp-nodes.json and save it under the file name postgresql_comp.json.
    Modify ip, user, password and datadir (the ports here must not be the same as the MySQL ports above, otherwise they conflict and installation fails).

6. Run in scripts:

python2 install_pg.py config=postgresql_comp.json install_ids=1   # then run again with install_ids=2 and install_ids=3, three runs in total (see the sketch below)
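
Repeating the command over the three ids, for example:

    for id in 1 2 3; do
      python2 install_pg.py config=postgresql_comp.json install_ids=$id
    done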

7. After installation, subsequent startups:

python2 start_pg.py port=5401   # likewise 5402, 5403

8. Log in to verify:

psql postgres://abc:abc@127.0.0.1:5401/postgres   # likewise 5402, 5403
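
For the three computing nodes in postgresql_comp.json (ports 5401-5403, user/password abc), startup and a quick connectivity check could look like this sketch:

    for port in 5401 5402 5403; do
      python2 start_pg.py port=$port
      psql postgres://abc:abc@127.0.0.1:$port/postgres -c 'select version();'
    done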

9. Edit meta-shard.json and shards-config.json and save them under the file names postgresql_meta.json and postgresql_shards.json.
    Open them and modify ip, port, user and password (the ip here must match the MySQL nodes, otherwise the connection to MySQL fails).
    The ip and port must correspond to the nodes from step 3; user is pgx and password is pgx_pwd (hard-coded).

10. Create the metadata cluster; run in scripts:

python2 bootstrap.py --config=./postgresql_meta.json --bootstrap_sql=./meta_inuse.sql
    Before running this command, check with which mysql that mysql is on the PATH; if it is not, follow step 2 above and add percona/bin to PATH, as in the sketch below.
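
A minimal sketch that combines the PATH check with the bootstrap call, assuming the percona install prefix used in step 2:

    which mysql || export PATH=/home/kunlun/work01/percona-8.0.18-bin-rel/bin:$PATH
    python2 bootstrap.py --config=./postgresql_meta.json --bootstrap_sql=./meta_inuse.sql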

11. Create the cluster; run in scripts:

python2 create_cluster.py --shards_config ./postgresql_shards.json --comps_config ./postgresql_comp.json --meta_config ./postgresql_meta.json --cluster_name clust1 --cluster_owner abc --cluster_biz test
    If this fails with ImportError: No module named psycopg2, extract the psycopg2 package under postgresql/resources, enter the extracted directory, build it with python2 setup.py build and install it with sudo python2 setup.py install (the MySQL package ships a copy as well).

12. Reload the computing nodes:
  1. ps -ef | grep postgres
  2. cd ~/postgresql/bin
  3. ./pg_ctl reload -D /db/datadir/path
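
Assuming the binaries live under postgresql-11.5-rel and the datadirs match postgresql_comp.json in the appendix, the reload could be scripted like this sketch:

    for dir in pg_data_dir1 pg_data_dir2 pg_data_dir3; do
      /home/kunlun/work01/postgresql-11.5-rel/bin/pg_ctl reload -D /home/kunlun/work01/data_post/$dir
    done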

================================================================================================================= Part 3: Cluster manager node

13. Edit cluster_mgr.cnf:
  1. cd cluster_mgr_rel/bin
  2. vim cluster_mgr1.cnf
     Change the port, ip, user and passwd inside to the primary node's port, ip, user and passwd from postgresql_meta.json under postgresql/scripts.
14. Run cluster_mgr:
    Start it with: ./cluster_mgr cluster_mgr1.cnf &
    (cluster_mgr1.cnf is the configuration file edited above; it is best to keep it running in the background)

cat cluster_mgr1-*.log   # check whether the running log looks normal

15. Verify that the cluster manager node was set up successfully:
    Run a test script, connecting through one of the configured computing nodes (refer back to postgresql/scripts/comp-nodes.json for the computing node connection info); a sketch follows below.
    If it runs successfully, the setup is complete.
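
A minimal verification sketch (not the project's official test script), connecting through computing node 1 with the user/password abc from postgresql_comp.json; the table name t_verify is just an example:

    psql postgres://abc:abc@127.0.0.1:5401/postgres -c 'create table t_verify(id int primary key, v text)'
    psql postgres://abc:abc@127.0.0.1:5401/postgres -c "insert into t_verify values (1, 'ok')"
    psql postgres://abc:abc@127.0.0.1:5401/postgres -c 'select * from t_verify'
    psql postgres://abc:abc@127.0.0.1:5401/postgres -c 'drop table t_verify'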

================================================================================================================= Appendix: node configuration files

Storage shard 1, three nodes

mysql_shard1.json:

    {
    "group_uuid":"e6578c3a-547e-11ea-9780-981fd1bd410d",
    "nodes":
    [
       {
          "is_primary":true,
          "ip": "127.0.0.1",
          "port": 6001,
          "xport": 60010,
          "mgr_port": 60011,
          "innodb_buffer_pool_size":"64MB",
          "innodb_log_dir_path":"/home/kunlun/work01/data_mysql/data1_innodb_log",
          "data_dir_path":"/home/kunlun/work01/data_mysql/data1",
          "log_dir_path":"/home/kunlun/work01/data_mysql/data1_log",
          "user":"kunlun",
          "election_weight":50
       },
       {
          "is_primary":false,
          "ip": "127.0.0.1",
          "port": 6002,
          "xport": 60020,
          "mgr_port": 60021,
          "innodb_buffer_pool_size":"64MB",
          "innodb_log_dir_path":"/home/kunlun/work01/data_mysql/data2_innodb_log",
          "data_dir_path":"/home/kunlun/work01/data_mysql/data2",
          "log_dir_path":"/home/kunlun/work01/data_mysql/data2_log",
          "user":"kunlun",
          "election_weight":50
       },
       {
          "is_primary":false,
          "ip": "127.0.0.1",
          "port": 6003,
          "xport": 60030,
          "mgr_port": 60031,
          "innodb_buffer_pool_size":"64MB",
          "innodb_log_dir_path":"/home/kunlun/work01/data_mysql/data3_innodb_log",
          "data_dir_path":"/home/kunlun/work01/data_mysql/data3",
          "log_dir_path":"/home/kunlun/work01/data_mysql/data3_log",
          "user":"kunlun",
          "election_weight":50
       }
    ]
    }

Storage shard 2, three nodes

mysql_shard2.json:

    {
    "group_uuid":"e6578c3b-547e-11ea-9780-981fd1bd410d",
    "nodes":
    [
       {
          "is_primary":true,
          "ip": "127.0.0.1",
          "port": 6004,
          "xport": 60040,
          "mgr_port": 60041,
          "innodb_buffer_pool_size":"64MB",
          "innodb_log_dir_path":"/home/kunlun/work01/data_mysql/data4_innodb_log",
          "data_dir_path":"/home/kunlun/work01/data_mysql/data4",
          "log_dir_path":"/home/kunlun/work01/data_mysql/data4_log",
          "user":"kunlun",
          "election_weight":50
       },
       {
          "is_primary":false,
          "ip": "127.0.0.1",
          "port": 6005,
          "xport": 60050,
          "mgr_port": 60051,
          "innodb_buffer_pool_size":"64MB",
          "innodb_log_dir_path":"/home/kunlun/work01/data_mysql/data5_innodb_log",
          "data_dir_path":"/home/kunlun/work01/data_mysql/data5",
          "log_dir_path":"/home/kunlun/work01/data_mysql/data5_log",
          "user":"kunlun",
          "election_weight":50
       },
       {
          "is_primary":false,
          "ip": "127.0.0.1",
          "port": 6006,
          "xport": 60060,
          "mgr_port": 60061,
          "innodb_buffer_pool_size":"64MB",
          "innodb_log_dir_path":"/home/kunlun/work01/data_mysql/data6_innodb_log",
          "data_dir_path":"/home/kunlun/work01/data_mysql/data6",
          "log_dir_path":"/home/kunlun/work01/data_mysql/data6_log",
          "user":"kunlun",
          "election_weight":50
       }
    ]
    }

Computing nodes, three nodes

postgresql_comp.json:

    [
    {
      "id":1,
      "name":"comp1",
      "ip":"127.0.0.1",
      "port":5401,
      "user":"abc",
      "password":"abc",
      "datadir":"/home/kunlun/work01/data_post/pg_data_dir1"
    },
    {
      "id":2,
      "name":"comp2",
      "ip":"127.0.0.1",
      "port":5402,
      "user":"abc",
      "password":"abc",
      "datadir":"/home/kunlun/work01/data_post/pg_data_dir2"
    },
    {
      "id":3,
      "name":"comp3",
      "ip":"127.0.0.1",
      "port":5403,
      "user":"abc",
      "password":"abc",
      "datadir":"/home/kunlun/work01/data_post/pg_data_dir3"
    }
    ]

Metadata cluster: three nodes, pointing at one of the storage shards

postgresql_meta.json:

    [
    {
      "ip": "127.0.0.1",
      "port": 6004,
      "user": "pgx",
      "password":"pgx_pwd"
    },
    {
      "ip": "127.0.0.1",
      "port": 6005,
      "user": "pgx",
      "password":"pgx_pwd"
    },
    {
      "ip": "127.0.0.1",
      "port": 6006,
      "user": "pgx",
      "password":"pgx_pwd"
    }
    ]

Data shards for the cluster: the six storage-shard nodes above

postgresql_shards.json:

    [
    {
    "shard_name": "shard1",
    "shard_nodes":
    [
       {
          "ip": "127.0.0.1",
          "port": 6001,
          "user": "pgx",
          "password":"pgx_pwd"
       },
       {
          "ip": "127.0.0.1",
          "port": 6002,
          "user": "pgx",
          "password":"pgx_pwd"
       },
       {
          "ip": "127.0.0.1",
          "port": 6003,
          "user": "pgx",
          "password":"pgx_pwd"
       }
    ]
    },
    {
    "shard_name": "shard2",
    "shard_nodes":
    [
       {
          "ip": "127.0.0.1",
          "port": 6004,
          "user": "pgx",
          "password":"pgx_pwd"
       },
       {
          "ip": "127.0.0.1",
          "port": 6005,
          "user": "pgx",
          "password":"pgx_pwd"
       },
       {
          "ip": "127.0.0.1",
          "port": 6006,
          "user": "pgx",
          "password":"pgx_pwd"
       }
    ]
    }
    ]