Installing and Configuring DRBD 9 on CentOS 6

Operating Environment

Two VMs are created with KVM on the host 10.10.200.227, and DRBD is installed and configured on these two VMs.

Host:

10.10.200.227

CentOS 7 / KVM / QEMU emulator version 1.5.3 / libvirtd (libvirt) 3.9.0 / bridge-utils 1.5

VM:

10.10.200.230/10.10.200.231

CentOS 6.9 / DRBD 9

Environment Preparation

Configuring the VMs

The VM XML definition is shown below; the VM's second disk, vdb, is used as the DRBD data disk.

The XML for the two VMs is almost identical; only the name and the disk paths need to change:

<domain type='kvm' id='13'>
  <name>test_centos1</name>
  <uuid>d6f67555-412b-493d-b884-ca4e0e1f708b</uuid>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <vcpu placement='static'>4</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='localtime'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/home/centos1.qcow2'/>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/drbd1/disk.img'/>
      <backingStore/>
      <target dev='vdb' bus='virtio'/>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/home/centos.iso'/>
      <backingStore/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='piix3-uhci'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <interface type='bridge'>
      <mac address='00:16:3e:5d:aa:a8'/>
      <source bridge='br0'/>
      <target dev='vnet0'/>
      <model type='e1000'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <input type='mouse' bus='ps2'>
      <alias name='input0'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input1'/>
    </input>
    <graphics type='vnc' port='5900' autoport='yes' listen='0.0.0.0' keymap='en-us'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='cirrus' vram='16384' heads='1' primary='yes'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+107:+107</label>
    <imagelabel>+107:+107</imagelabel>
  </seclabel>
</domain>
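
If the backing disk images do not already exist, they can be created with qemu-img, and each VM can then be defined and started from its XML with virsh. A minimal sketch (the XML file names and image sizes are assumptions; the image paths match the XML above):

[root@kvm-node ~]# qemu-img create -f qcow2 /home/centos1.qcow2 50G      # system disk, size assumed
[root@kvm-node ~]# qemu-img create -f qcow2 /drbd1/disk.img 500G         # DRBD data disk, size assumed
[root@kvm-node ~]# virsh define test_centos1.xml                         # XML file name assumed
[root@kvm-node ~]# virsh start test_centos1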

Start the two VMs; both should show up as running:

[root@kvm-node ~]# virsh list
 Id    Name                           State
----------------------------------------------------
 13    test_centos1                   running
 14    test_centos2                   running

Edit /etc/hosts on both nodes; the host names will be used later in the *.res configuration file.

[root@drbd-node3 drbd.d]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.10.200.230   drbd-node3
10.10.200.231   drbd-node4
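
DRBD matches the "on <host>" sections in the *.res file against each node's own host name, so the host name on each VM must agree with /etc/hosts. A sketch of the usual CentOS 6 way (repeat on node 2 with drbd-node4):

[root@drbd-node3 ~]# hostname drbd-node3              # set the running host name
[root@drbd-node3 ~]# vi /etc/sysconfig/network         # set HOSTNAME=drbd-node3 to make it persistent
[root@drbd-node3 ~]# uname -n                          # must match the name used later in r1.res
drbd-node3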

Installing and Configuring DRBD

Installing DRBD

Download drbd, drbdmanage, and drbd-utils from the DRBD site at https://www.linbit.com/en/drbd-community/drbd-download/

[root@drbd-node3 ~]# ls
anaconda-ks.cfg  drbd-9.0.14-1.tar.gz  drbdmanage-0.99.16.tar.gz  drbd-utils-9.3.1.tar.gz  install.log  install.log.syslog

Install the drbd kernel module.
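
Building the module requires a compiler and the kernel headers for the running kernel. If they are not already present, something like the following should pull them in (package names assume the standard CentOS 6 repositories; automake, autoconf and flex are needed later when building drbd-utils):

[root@drbd-node3 ~]# yum install -y gcc make automake autoconf flex kernel-devel kernel-headers

Then unpack the source and build: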

[root@drbd-node3 ~]# tar -zxvf drbd-9.0.14-1.tar.gz
[root@drbd-node3 ~]# cd drbd-9.0.14-1
[root@drbd-node3 drbd-9.0.14-1]# make && make install

On success the build prints the following message:

Module build was successful.
=======================================================================
  With DRBD module version 8.4.5, we split out the management tools
  into their own repository at https://github.com/LINBIT/drbd-utils
  (tarball at http://links.linbit.com/drbd-download)

  That started out as "drbd-utils version 8.9.0",
  has a different release cycle,
  and provides compatible drbdadm, drbdsetup and drbdmeta tools
  for DRBD module versions 8.3, 8.4 and 9.

  Again: to manage DRBD 9 kernel modules and above, you want drbd-utils >= 9.3 from above url.
=======================================================================

Load the drbd kernel module and verify that it loaded successfully:

[root@drbd-node3 drbd-9.0.14-1]# modprobe drbd
[root@drbd-node3 drbd-9.0.14-1]# lsmod | grep drbd
drbd                  532835  0 
libcrc32c               1246  1 drbd
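
modprobe only loads the module for the current boot. To load it automatically after a reboot, a module-load script can be dropped into /etc/sysconfig/modules/ (a RHEL/CentOS 6 convention; a minimal sketch):

[root@drbd-node3 ~]# cat > /etc/sysconfig/modules/drbd.modules << 'EOF'
#!/bin/sh
/sbin/modprobe drbd
EOF
[root@drbd-node3 ~]# chmod +x /etc/sysconfig/modules/drbd.modules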

Install drbd-utils (this step takes a while!):

[root@drbd-node3 ~]# tar -zxvf drbd-utils-9.3.1.tar.gz
[root@drbd-node3 drbd-utils-9.3.1]# ./autogen.sh
[root@drbd-node3 drbd-utils-9.3.1]# ./configure --prefix=/usr --localstatedir=/var --sysconfdir=/etc
[root@drbd-node3 drbd-utils-9.3.1]# make && make install

Install drbdmanage:

[root@drbd-node3 ~]# tar -zxvf drbdmanage-0.99.16.tar.gz
[root@drbd-node3 drbdmanage-0.99.16]# python setup.py install

Once these three packages are installed, the DRBD installation is complete.
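
As a quick sanity check, drbdadm can report the userland and kernel module versions, and /proc/drbd should show the loaded module (exact output depends on the versions installed):

[root@drbd-node3 ~]# drbdadm --version
[root@drbd-node3 ~]# cat /proc/drbd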

Configuring DRBD

The top-level DRBD configuration file lives under /etc/. It pulls in two kinds of configuration: global_common.conf and the *.res resource files, both located in /etc/drbd.d/. The DRBD configuration is identical on both VMs.

[root@drbd-node3 etc]# vi drbd.conf 
# You can find an example in  /usr/share/doc/drbd.../drbd.conf.example

include "drbd.d/global_common.conf";
include "drbd.d/*.res";

Edit global_common.conf. The main decision here is the replication protocol; protocol C (fully synchronous: a write is not acknowledged until it has reached the disk on both nodes) is used here.

[root@drbd-node3 etc]# vi drbd.d/global_common.conf 
# DRBD is the result of over a decade of development by LINBIT.
# In case you need professional services for DRBD or have
# feature requests visit http://www.linbit.com

global {
        usage-count no;

        # Decide what kind of udev symlinks you want for "implicit" volumes
        # (those without explicit volume <vnr> {} block, implied vnr=0):
        # /dev/drbd/by-resource/<resource>/<vnr>   (explicit volumes)
        # /dev/drbd/by-resource/<resource>         (default for implict)
        udev-always-use-vnr; # treat implicit the same as explicit volumes

        # minor-count dialog-refresh disable-ip-verification
        # cmd-timeout-short 5; cmd-timeout-medium 121; cmd-timeout-long 600;
}

common {
        handlers {
                # These are EXAMPLE handlers only.
                # They may have severe implications,
                # like hard resetting the node under certain circumstances.
                # Be careful when choosing your poison.

                pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
                # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
                # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
                # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
                # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
                # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
                # quorum-lost "/usr/lib/drbd/notify-quorum-lost.sh root";
        }

        startup {
                # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
        }

        options {
                # cpu-mask on-no-data-accessible

                # RECOMMENDED for three or more storage nodes with DRBD 9:
                # quorum majority;
                # on-no-quorum suspend-io | io-error;
        }

        disk {
                # size on-io-error fencing disk-barrier disk-flushes
                # disk-drain md-flushes resync-rate resync-after al-extents
                # c-plan-ahead c-delay-target c-fill-target c-max-rate
                # c-min-rate disk-timeout
        }

        net {
                # protocol timeout max-epoch-size max-buffers
                # connect-int ping-int sndbuf-size rcvbuf-size ko-count
                # allow-two-primaries cram-hmac-alg shared-secret after-sb-0pri
                # after-sb-1pri after-sb-2pri always-asbp rr-conflict
                # ping-timeout data-integrity-alg tcp-cork on-congestion
                # congestion-fill congestion-extents csums-alg verify-alg
                # use-rle
                protocol C;
        }
}

Create r1.res, where r1 is the name of the DRBD resource:

[root@drbd-node3 drbd.d]# vi r1.res 
resource r1 {
        on drbd-node3 {
                device /dev/drbd1;
                disk   /dev/vdb;
                address 10.10.200.230:7789;
                meta-disk internal;
        }

        on drbd-node4 {
                device /dev/drbd1;
                disk   /dev/vdb;
                address 10.10.200.231:7789;
                meta-disk internal;
        }

}
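
The same global_common.conf and r1.res must exist on both nodes, and each node must be able to reach the other on the TCP port from the address lines (7789 here). A sketch, assuming the files were edited on node 3 and that iptables is active on both CentOS 6 VMs:

[root@drbd-node3 drbd.d]# scp /etc/drbd.d/global_common.conf /etc/drbd.d/r1.res drbd-node4:/etc/drbd.d/
[root@drbd-node3 drbd.d]# iptables -I INPUT -p tcp --dport 7789 -j ACCEPT   # run on both nodes
[root@drbd-node3 drbd.d]# service iptables save                             # persist the rule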

Initializing the Resource

On node 1, initialize the metadata for resource r1:

[root@drbd-node3 drbd.d]# drbdadm create-md r1
md_offset 536870907904
al_offset 536870875136
bm_offset 536854491136

Found some data

 ==> This might destroy existing data! <==

Do you want to proceed?
[need to type 'yes' to confirm] yes

initializing activity log
initializing bitmap (16000 KB) to all zero
ioctl(/dev/vdb, BLKZEROOUT, [536854491136, 16384000]) failed: Inappropriate ioctl for device
Using slow(er) fallback.
100%
Writing meta data...
New drbd meta data block successfully created.

Bring r1 up on node 1:

[root@drbd-node3 drbd.d]# drbdadm up r1

On node 2, initialize resource r1 in the same way and bring it up:

[root@drbd-node4 drbd.d]# drbdadm create-md r1
md_offset 536870907904
al_offset 536870875136
bm_offset 536854491136

Found some data

 ==> This might destroy existing data! <==

Do you want to proceed?
[need to type 'yes' to confirm] yes

initializing activity log
initializing bitmap (16000 KB) to all zero
ioctl(/dev/vdb, BLKZEROOUT, [536854491136, 16384000]) failed: Inappropriate ioctl for device
Using slow(er) fallback.
100%
Writing meta data...
New drbd meta data block successfully created.
[root@drbd-node4 drbd.d]# drbdadm up r1
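
With the resource up on both nodes, the connection and disk states can be checked; once the peers see each other the connection state should be Connected, while both disks remain Inconsistent until the initial sync has run:

[root@drbd-node3 drbd.d]# drbdadm cstate r1
[root@drbd-node3 drbd.d]# drbdadm dstate r1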

Check the role (primary/secondary) of r1 on the two nodes.

Node 1:

[root@drbd-node3 drbd.d]# drbdadm role r1
Secondary

Node 2:

[root@drbd-node4 drbd.d]# drbdadm role r1
Secondary

Both nodes are in the Secondary role. Promote node 1 to primary; --force is required for this first promotion because neither side has UpToDate data yet, and it makes node 1 the source of the initial sync:

[root@drbd-node3 drbd.d]# drbdadm primary r1 --force

Checking again, r1 on node 1 now reports the Primary role:

[root@drbd-node3 drbd.d]# drbdadm role r1
Primary

Check the DRBD status on node 1:

[root@drbd-node3 drbd.d]# drbd-overview 
NOTE: drbd-overview will be deprecated soon.
Please consider using drbdtop.

 1:r1/0  Connected(2*) Primar/Second UpToDa/Incons

Check the DRBD status on node 2:

[root@drbd-node4 drbd.d]# drbd-overview 
NOTE: drbd-overview will be deprecated soon.
Please consider using drbdtop.

 1:r1/0  Connected(2*) Second/Primar Incons/UpToDa

Node 1 is now syncing its data to node 2; the resulting write traffic on vdb can be seen with iostat:

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
vda               0.00     0.00    0.00    4.00     0.00     0.01     6.00     0.00    0.75    0.00    0.75   0.50   0.20
vdb               0.00     9.00    0.00  114.00     0.00    31.26   561.54     0.41    3.63    0.00    3.63   1.43  16.30
scd0              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
dm-0              0.00     0.00    0.00    3.00     0.00     0.01     8.00     0.00    1.00    0.00    1.00   0.67   0.20
dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    8.67    0.00    6.94   84.39

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
vda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
vdb               0.00     9.00    0.00  113.00     0.00    32.03   580.53     0.39    3.42    0.00    3.42   1.35  15.30
scd0              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
dm-0              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
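
The sync progress can also be followed with DRBD's own tooling instead of iostat (a sketch; the exact fields depend on the drbd-utils version, but drbdadm status typically shows the peer disk as Inconsistent together with a done: percentage while the resync is running):

[root@drbd-node3 drbd.d]# drbdadm status r1
[root@drbd-node3 drbd.d]# watch -n1 drbd-overview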

Once both sides have finished syncing, checking the status on node 1 and node 2 shows both disks as UpToDate, similar to the following:

[root@drbd-node2 home]# drbd-overview 
NOTE: drbd-overview will be deprecated soon.
Please consider using drbdtop.

 0:r0/0  Connected(2*) Second/Primar UpToDa/UpToDa
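
At this point /dev/drbd1 on the primary node behaves like an ordinary block device and can be formatted and mounted there (only on the primary; a minimal sketch, the filesystem type and mount point are assumptions):

[root@drbd-node3 ~]# mkfs.ext4 /dev/drbd1
[root@drbd-node3 ~]# mkdir -p /mnt/drbd1
[root@drbd-node3 ~]# mount /dev/drbd1 /mnt/drbd1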

This completes the basic installation and configuration of DRBD.