Adam Weremczuk
2018-10-10 13:23:50 UTC
Hi all,
I'm trying out a DRBD + Pacemaker HA cluster on Proxmox 5.2.
I have 2 identical servers connected with 2 x 1 Gbps links in bond_mode
balance-rr.
The bond is working fine; I get a transfer rate of 150 MB/s with scp.
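(scp is somewhat limited by its cipher overhead, so as a sanity check the
raw TCP throughput of the bond could also be measured with iperf3; the
node names below are just placeholders:
on node2: iperf3 -s
on node1: iperf3 -c node2 -P 2 -t 30 )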
I was following this guide:
https://www.theurbanpenguin.com/drbd-pacemaker-ha-cluster-ubuntu-16-04/
and everything was going smoothly up until:
drbdadm -- --overwrite-data-of-peer primary r0/0
cat /proc/drbd
version: 8.4.10 (api:1/proto:86-101)
srcversion: 17A0C3A0AF9492ED4B9A418
0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
ns:10944 nr:0 dw:0 dr:10992 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f
oos:3898301536
[>....................] sync'ed: 0.1% (3806932/3806944)M
finish: 483:25:13 speed: 2,188 (2,188) K/sec
The transfer rate is horribly slow; at this pace it's going to take 20
days for the two 4 TB volumes to sync!
That's almost 15 times slower than in the guide video (at 8:30).
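For reference, as far as I understand it, the resync speed in DRBD 8.4 is
governed by the disk options resync-rate / c-plan-ahead / c-max-rate, and
it can apparently be adjusted on the fly with something like this (100M is
only an example value, not my current config):
drbdadm disk-options --c-plan-ahead=0 --resync-rate=100M r0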
My sdb disks are logical drives (hardware RAID) set up as RAID50 with
the defaults:
Strip size: 128 KB
Access policy: RW
Read policy: Normal
Write policy: Write Back with BBU
IO policy: Direct
Drive Cache: Disable
Disable BGI: No
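(Assuming an LSI/Avago MegaRAID controller with storcli available, the
same virtual drive settings can be dumped from the OS with something like
the following; the binary name and controller index may differ:
storcli64 /c0/vall show all )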
Performance looks good when tested with hdparm:
hdparm -tT /dev/sdb1
/dev/sdb1:
Timing cached reads: 15056 MB in 1.99 seconds = 7550.46 MB/sec
Timing buffered disk reads: 2100 MB in 3.00 seconds = 699.81 MB/sec
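hdparm only exercises reads though, and the resync also has to write the
full volume on the SyncTarget side, so a rough direct-I/O write check with
dd would probably be more relevant (the path below is just a placeholder
for a scratch file on a filesystem backed by the array):
dd if=/dev/zero of=/path/on/array/ddtest bs=1M count=4096 oflag=direct conv=fsync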
The volumes have been zeroed and contain no live data yet.
Any idea why the sync rate is so painfully slow and how to improve it?
Regards,
Adam