Discussion:
[PVE-User] Proxmox and Ceph with just 3 servers
Gilberto Nunes
2018-08-30 14:47:25 UTC
Permalink
Hi there

Is it possible to create a scenario with 3 PowerEdge R540 servers, running
Proxmox and Ceph?
Each server has this configuration:

32 GB memory
SAS: 2x 300 GB
SSD: 1x 480 GB

The workload is 2 VMs with SQL Server and Windows Server.

Thanks

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
Martin Maurer
2018-08-30 18:40:51 UTC
Permalink
Hello,

Not really. Please read in detail the following:

https://forum.proxmox.com/threads/proxmox-ve-ceph-benchmark-2018-02.41761/
_______________________________________________
pve-user mailing list
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
--
Best Regards,

Martin Maurer

***@proxmox.com
http://www.proxmox.com

____________________________________________________________________
Proxmox Server Solutions GmbH
Bräuhausgasse 37, 1050 Vienna, Austria
Commercial register no.: FN 258879 f
Registration office: Handelsgericht Wien
Gilberto Nunes
2018-08-30 18:46:50 UTC
Permalink
Hi Martin.

I'm not really worried about the highest performance; I just want to know
whether it will work properly, mainly the HA!
I plan to work with a mesh network too.

Thanks a lot

Gilberto Nunes
2018-08-30 22:21:28 UTC
Permalink
An HPE server will remain after deploying the 3 servers with Proxmox and Ceph.
I think I will use this HPE server as a 4th node!


Post by Ronny Aasen
If HA is important, you should consider having a 4th Ceph OSD server (it
does not have to run Proxmox as well).
With Ceph's default of 3 replicas, which you will want in a production
setup, you do not have any spare failure domain. In other words, the loss
of any one node means a degraded Ceph cluster. If you have an additional
node, Ceph will rebalance and return to HEALTH_OK on the failure of a node.
With VMs, IOPS are important, so you must keep latency to a minimum.
Both of these points are explained in a bit more detail in the link he posted.
Kind regards,
Ronny Aasen
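[Editorial note: the point above can be illustrated with a minimal sketch. This is plain Python of my own, not any Ceph API, and the function name is hypothetical: with 3 replicas spread across nodes, Ceph can only re-create lost copies after a node failure if at least one spare node remains.]

```python
# Sketch: with replica count 3, Ceph needs at least one node beyond the
# replica count to rebalance back to HEALTH_OK after a node failure.

def can_self_heal(nodes: int, replicas: int = 3) -> bool:
    """After losing one node, can the remaining nodes still hold
    'replicas' copies on distinct failure domains (nodes)?"""
    return (nodes - 1) >= replicas

# 3 nodes, 3 replicas: a node loss leaves the cluster degraded.
print(can_self_heal(3))   # False
# A 4th node gives Ceph somewhere to re-create the lost copies.
print(can_self_heal(4))   # True
```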
Gilberto Nunes
2018-08-31 11:01:53 UTC
Permalink
Thanks a lot for all this advice, guys.
I am still learning Ceph.
So I have a doubt regarding how to change the weight of a certain HDD.
Is there some command to do that?
Eneko Lacunza
2018-08-31 11:10:27 UTC
Permalink
You can do that from the CLI:

ceph osd crush reweight osd.N <weight>

https://ceph.com/geen-categorie/difference-between-ceph-osd-reweight-and-ceph-osd-crush-reweight/
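[Editorial note: as the linked article explains, `ceph osd crush reweight` sets the CRUSH weight (conventionally the OSD's capacity in TiB), while `ceph osd reweight` applies a temporary 0.0-1.0 override factor. A rough illustrative sketch of how the two combine, in plain Python with hypothetical names, not Ceph source:]

```python
# Sketch of how the two weights combine (illustrative, not Ceph internals):
# - crush weight: typically the OSD capacity in TiB
#   (set via 'ceph osd crush reweight osd.N <weight>')
# - reweight: a 0.0-1.0 override factor
#   (set via 'ceph osd reweight <id> <factor>')
# The effective placement weight is roughly their product.

def effective_weight(crush_weight_tib: float, reweight: float = 1.0) -> float:
    if not 0.0 <= reweight <= 1.0:
        raise ValueError("reweight must be between 0.0 and 1.0")
    return crush_weight_tib * reweight

# A 0.48 TiB SSD, temporarily draining half of its data:
print(effective_weight(0.48, 0.5))  # 0.24
```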
Post by Gilberto Nunes
Thanks a lot for all this advice, guys.
I am still learning Ceph.
So I have a doubt regarding how to change the weight of a certain HDD.
Is there some command to do that?
When adding an older machine to your cluster, keep in mind that the
slowest node will determine the overall speed of the Ceph cluster (since
a VM's disk will be spread over all nodes).
For RBD-backed VMs you want low latency, so prefer
NVRAM > SSD > HDD for OSDs; the latency difference is significant here.
For the network: 100Gb/25Gb > 40Gb/10Gb (1Gb is useless in this case, IMHO).
As long as you have enough cores, higher GHz is better than lower GHz,
due to lower latency.
Kind regards,
Ronny Aasen
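[Editorial note: the "slowest node" effect above follows from how replicated Ceph writes work: a write is only acknowledged once all replicas are on stable storage, so per-write latency is governed by the slowest OSD involved. A simplified model with illustrative figures, not measurements:]

```python
# Sketch: a replicated write completes when the slowest replica completes,
# so one slow (e.g. HDD-backed) node drags down every write that lands on it.

def write_latency_ms(replica_latencies_ms):
    """Acknowledged-write latency = slowest replica (simplified model)."""
    return max(replica_latencies_ms)

# Three SSD-backed OSDs vs. two SSDs plus one HDD (illustrative numbers):
print(write_latency_ms([0.5, 0.6, 0.5]))   # 0.6
print(write_latency_ms([0.5, 0.6, 8.0]))   # 8.0
```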
--
Zuzendari Teknikoa / Director Técnico
Binovo IT Human Project, S.L.
Telf. 943569206
Astigarraga bidea 2, 2º izq. oficina 11; 20180 Oiartzun (Gipuzkoa)
www.binovo.es
Yannis Milios
2018-08-31 11:15:57 UTC
Permalink
This seems like a good read as well:
https://ceph.com/geen-categorie/ceph-osd-reweight/
Gilberto Nunes
2018-08-31 11:25:25 UTC
Permalink
[Post body not preserved in the archive.]
Eneko Lacunza
2018-08-31 06:13:35 UTC
Permalink
Hi Gilberto,

It's technically possible. I don't know what performance you expect for
those 2 SQL servers though (don't expect much).

Cheers