Discussion:
[PVE-User] Proxmox Ceph with different HDD sizes
Gilberto Nunes
2018-08-22 19:04:32 UTC
Hi there


Is it possible to create a Ceph cluster with 4 servers which have different
disk sizes:

Server A - 2x 4TB
Server B, C - 2x 8TB
Server D - 2x 4TB

Is this OK?

Thanks

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
Chance Ellis
2018-08-22 19:22:59 UTC
Yes, you can mix and match drive sizes in Ceph.

Caution: heterogeneous environments do present challenges. You will want to set the OSD weight of the 8TB drives to 2x that of the 4TB drives. In doing so, however, realize that the 8TB drives will then be expected to "perform" 2x as much as the 4TB drives. If the 8TB drives are not 2x "faster", the cluster will slow down as the 8TB drives become overworked. To mitigate this, look into primary_affinity. Primary affinity lets you adjust the amount of load on a disk without reducing the amount of data it can contain.
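A rough sketch of both knobs from the CLI (the OSD IDs and values below are made up for illustration; CRUSH weights normally default to the disk capacity in TiB, so the 2:1 ratio is usually there out of the box, and on older Ceph releases you may also need "mon osd allow primary affinity = true" before affinity values other than 1 take effect):

ceph osd crush reweight osd.2 7.28    # 8TB OSD: ~2x the weight of a 4TB OSD
ceph osd crush reweight osd.0 3.64    # 4TB OSD
ceph osd primary-affinity osd.2 0.5   # act as primary less often, while still
                                      # holding its full share of data
ceph osd tree                         # check the WEIGHT and PRI-AFF columns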

References:
https://ceph.com/geen-categorie/ceph-primary-affinity/
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/1.2.3/html/storage_strategies/primary-affinity



Brian :
2018-08-22 22:19:41 UTC
It's really not a great idea, because the larger drives will tend to
get more writes, so your performance won't be as good as with drives
all the same size, where the writes will be distributed more evenly.

Eneko Lacunza
2018-08-29 12:04:56 UTC
You should change the weight of the 8TB disks, so that they have the same
weight as the other 4TB disks.

That should fix the performance issue, but you'd waste half the space on
those 8TB disks :)
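Presumably something along these lines (hypothetical OSD IDs; ~3.64 is roughly 4TB expressed in TiB, which is what CRUSH weights normally correspond to):

ceph osd crush reweight osd.2 3.64   # cap an 8TB OSD at a 4TB-sized share of PGs
ceph osd crush reweight osd.3 3.64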
--
Zuzendari Teknikoa / Director Técnico
Binovo IT Human Project, S.L.
Telf. 943569206
Astigarraga bidea 2, 2º izq. oficina 11; 20180 Oiartzun (Gipuzkoa)
www.binovo.es
Gilberto Nunes
2018-08-30 12:16:18 UTC
Hi there Eneko

Sorry... can you show me how I can do that? I mean, change the weight?

Thanks

Mark Schouten
2018-08-30 12:22:02 UTC
Post by Eneko Lacunza
You should change the weight of the 8TB disks, so that they have the same
weight as the other 4TB disks.
That should fix the performance issue, but you'd waste half the space on
those 8TB disks :)
Wouldn't it be more efficient to just place a 4TB and an 8TB disk in
each server?

Changing the weight will not cause the available space counters to drop
accordingly, I think, so it's probably confusing.
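For what it's worth, the per-OSD and per-pool accounting can be checked with the standard commands (nothing Proxmox-specific):

ceph osd df tree   # raw size, CRUSH weight, reweight and %USE per OSD
ceph df            # cluster-wide raw usage and per-pool MAX AVAIL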
--
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten | Tuxis Internet Engineering
KvK: 61527076 | http://www.tuxis.nl/
T: 0318 200208 | ***@tuxis.nl
Gilberto Nunes
2018-08-30 12:30:47 UTC
The environment has this configuration:

CEPH-01
4x 4 TB

CEPH-02
4x 3 TB

CEPH-03
2x 3 TB
1x 2 TB

CEPH-04
4x 2 TB

CEPH-05
2x 8 TB

CEPH-06
2x 3 TB
1x 2 TB
1x 1 TB


Any advice to, at least, mitigate the low performance?

Thanks

Mark Schouten
2018-08-30 12:37:48 UTC
Post by Gilberto Nunes
Any advice to, at least, mitigate the low performance?
Balance the number of spinning disks and their sizes per server. That will
probably be the safest.

I'm not saying that an unbalanced layout always degrades performance, only
that it can potentially cause degraded performance.
Eneko Lacunza
2018-08-30 12:57:53 UTC
Post by Mark Schouten
Post by Gilberto Nunes
Any advice to, at least, mitigate the low performance?
Balance the number of spinning disks and their sizes per server. That will
probably be the safest.
I'm not saying that an unbalanced layout always degrades performance, only
that it can potentially cause degraded performance.
Yes, I agree, although that might well overload the biggest disks, too.
But it all depends on the space and performance requirements/desires,
really :)

Cheers
Eneko
Gilberto Nunes
2018-08-30 13:23:54 UTC
So, what do you guys think about this HDD distribution?

CEPH-01
1x 3 TB
1x 2 TB

CEPH-02
1x 4 TB
1x 3 TB

CEPH-03
1x 4 TB
1x 3 TB

CEPH-04
1x 4 TB
1x 3 TB
1x 2 TB

CEPH-05
1x 8 TB
1x 2 TB

CEPH-06
1x 3 TB
1x 1 TB
1x 8 TB


Gilberto Nunes
2018-08-30 13:27:56 UTC
Right now the Ceph cluster is very slow:

343510/2089155 objects misplaced (16.443%)

Status: HEALTH_WARN
Monitors: pve-ceph01, pve-ceph02, pve-ceph03, pve-ceph04, pve-ceph05, pve-ceph06
OSDs: 21 total; 21 up and in, 0 down, 0 out
PGs:
  active+clean:                                       157
  active+recovery_wait+remapped:                        1
  active+remapped+backfill_wait:                       82
  active+remapped+backfilling:                          2
  active+undersized+degraded+remapped+backfill_wait:    8
Usage: 7.68 TiB of 62.31 TiB

Degraded data redundancy: 21495/2089170 objects degraded (1.029%), 8 pgs
degraded, 8 pgs undersized

pg 21.0 is stuck undersized for 63693.346103, current state
active+undersized+degraded+remapped+backfill_wait, last acting [2,9]
pg 21.2 is stuck undersized for 63693.346973, current state
active+undersized+degraded+remapped+backfill_wait, last acting [2,10]
pg 21.6f is stuck undersized for 62453.277248, current state
active+undersized+degraded+remapped+backfill_wait, last acting [2,5]
pg 21.8b is stuck undersized for 63693.361835, current state
active+undersized+degraded+remapped+backfill_wait, last acting [2,8]
pg 21.c3 is stuck undersized for 63693.321337, current state
active+undersized+degraded+remapped+backfill_wait, last acting [2,9]
pg 21.c5 is stuck undersized for 66587.797684, current state
active+undersized+degraded+remapped+backfill_wait, last acting [2,8]
pg 21.d4 is stuck undersized for 62453.047415, current state
active+undersized+degraded+remapped+backfill_wait, last acting [2,6]
pg 21.e1 is stuck undersized for 62453.276631, current state
active+undersized+degraded+remapped+backfill_wait, last acting [2,5]




Phil Schwarz
2018-08-30 14:47:44 UTC
Hope you changed a single disk at a time!

Be warned (if not) that moving an OSD from one server to another triggers
a rebalancing of almost all the data stored on it, in order to follow
the CRUSH map.

For instance, exchanging two OSDs between servers results in a complete
rebalance of both OSDs, as far as I know.

16% misplaced data may or may not be acceptable depending on your
redundancy and throughput needs, but it's not a low value to be
underestimated.
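To watch the data movement such a change triggers (standard commands, not specific to this cluster):

ceph -s              # summary, including the "objects misplaced (x%)" figure
ceph -w              # follow recovery/backfill progress live
ceph health detail   # list the PGs behind the HEALTH_WARN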

Best regards
Gilberto Nunes
2018-08-31 14:08:33 UTC
Thanks to all the buddies who replied to my messages.
Indeed, I used

ceph osd primary-affinity <osd-id> <weight>

and we saw some performance improvement.

What helps here is that we have 6 Proxmox Ceph servers:

ceph01 - HDDs with 5,900 rpm
ceph02 - HDDs with 7,200 rpm
ceph03 - HDDs with 7,200 rpm
ceph04 - HDDs with 7,200 rpm
ceph05 - HDDs with 5,900 rpm
ceph06 - HDDs with 5,900 rpm

So what I did was set primary affinity 0 on the 5,900 rpm HDDs and
primary affinity 1 on the 7,200 rpm HDDs.
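(Presumably via commands along these lines; the OSD IDs are taken from the tree below, and the shell loop is just one convenient way to issue them:)

for id in 0 1 2 3 15 16 17 18 19 20; do
    ceph osd primary-affinity osd.$id 0   # 5,900 rpm OSDs: avoid using as primary
done
for id in 4 5 6 7 8 9 10 11 12 13 14; do
    ceph osd primary-affinity osd.$id 1   # 7,200 rpm OSDs: preferred as primary
done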

ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 62.31059 root default
-3 14.55438 host pve-ceph01
0 hdd 3.63860 osd.0 up 1.00000 0
1 hdd 3.63860 osd.1 up 1.00000 0
2 hdd 3.63860 osd.2 up 1.00000 0
3 hdd 3.63860 osd.3 up 1.00000 0
-5 10.91559 host pve-ceph02
4 hdd 2.72890 osd.4 up 1.00000 1.00000
5 hdd 2.72890 osd.5 up 1.00000 1.00000
6 hdd 2.72890 osd.6 up 1.00000 1.00000
7 hdd 2.72890 osd.7 up 1.00000 1.00000
-7 7.27708 host pve-ceph03
8 hdd 2.72890 osd.8 up 1.00000 1.00000
9 hdd 2.72890 osd.9 up 1.00000 1.00000
10 hdd 1.81929 osd.10 up 1.00000 1.00000
-9 7.27716 host pve-ceph04
11 hdd 1.81929 osd.11 up 1.00000 1.00000
12 hdd 1.81929 osd.12 up 1.00000 1.00000
13 hdd 1.81929 osd.13 up 1.00000 1.00000
14 hdd 1.81929 osd.14 up 1.00000 1.00000
-11 14.55460 host pve-ceph05
15 hdd 7.27730 osd.15 up 1.00000 0
16 hdd 7.27730 osd.16 up 1.00000 0
-13 7.73178 host pve-ceph06
17 hdd 0.90959 osd.17 up 1.00000 0
18 hdd 2.72890 osd.18 up 1.00000 0
19 hdd 1.36440 osd.19 up 1.00000 0
20 hdd 2.72890 osd.20 up 1.00000 0

That's it! Thanks again.


Gilberto Nunes
2018-09-01 14:28:27 UTC
Hi again

In my last message I thought I had figured out what was happening with my
6-server Ceph cluster, but I hadn't at all!
The cluster still had slow performance...
until this morning.
I reweighted the OSDs on the low-speed disks with this command:

ceph osd crush reweight osd.<id> <weight>

Now everything is OK!
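For instance (illustrative values only; the thread doesn't say which weights were actually chosen), the big 5,900 rpm OSDs on pve-ceph05 could be pulled down from their size-based weight and the result checked in the WEIGHT column:

ceph osd crush reweight osd.15 3.6   # was 7.27730
ceph osd crush reweight osd.16 3.6
ceph osd tree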

Thanks to the list.

