Discussion:
[PVE-User] How to use lvm on zfs ?
Denis Morejon
2018-08-07 12:13:50 UTC
Hi:

I installed a Proxmox 5.1 server with 4 SATA HDDs. I built a RAIDZ1
(RAID 5 equivalent) to get storage redundancy. But no LVM is present.
I want to use LVM storage on top of the zpool. What should I do?
Mark Schouten
2018-08-07 12:19:03 UTC
Post by Denis Morejon
I installed a Proxmox 5.1 server with 4 SATA HDDs. I built a RAIDZ1
(RAID 5 equivalent) to get storage redundancy. But no LVM is present.
I want to use LVM storage on top of the zpool. What should I do?
I'm curious about why you would want to do that..
--
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten | Tuxis Internet Engineering
KvK: 61527076 | http://www.tuxis.nl/
T: 0318 200208 | ***@tuxis.nl
Denis Morejon
2018-08-07 13:32:17 UTC
Post by Mark Schouten
Post by Denis Morejon
I installed a Proxmox 5.1 server with 4 SATA HDDs. I built a RAIDZ1
(RAID 5 equivalent) to get storage redundancy. But no LVM is present.
I want to use LVM storage on top of the zpool. What should I do?
I'm curious about why you would want to do that..
I don't understand your question. Do you mean using plain ZFS, or LVM
on top of ZFS?
Denis Morejon
2018-08-07 13:42:14 UTC
When I install Proxmox 5.1 with RAIDZ1, it mounts the ZFS dataset
"rpool/ROOT/pve-1" on "/". This dataset is the ENTIRE pool with the 4
HDDs in RAID 5, so there is no extra space left to form any volume
group, not even pve!
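You can see this layout on one of these nodes with, for example:

  zfs list -r rpool    # lists the pool's datasets and their space usage
  findmnt /            # shows that / is backed by rpool/ROOT/pve-1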

So local-lvm is not active by default. Then, when you add this node to
others with local-lvm storage active, and you try to migrate VMs
between them, there are problems...

That's why I intend to use LVM on all cluster nodes! Some with just
LVM and others with LVM on ZFS.
Post by Mark Schouten
Post by Denis Morejon
I installed a Proxmox 5.1 server with 4 SATA HDDs. I built a RAIDZ1
(RAID 5 equivalent) to get storage redundancy. But no LVM is present.
I want to use LVM storage on top of the zpool. What should I do?
I'm curious about why you would want to do that..
Mark Schouten
2018-08-07 12:49:46 UTC
Post by Denis Morejon
So local-lvm is not active by default. Then, when you add this node to
others with local-lvm storage active, and you try to migrate VMs
between them, there are problems...
I don't think the actual technique used is relevant for migrating local
storage; only the name of the storage matters. You can create images on
ZFS; you don't need LVM to be able to create VMs with storage.
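For example (hypothetical VMID and name; 'local-zfs' is the storage
entry the installer creates):

  qm create 100 --name testvm --memory 1024 --scsi0 local-zfs:32
  # allocates a new 32G disk for VM 100 directly on the ZFS storage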
--
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten | Tuxis Internet Engineering
KvK: 61527076 | http://www.tuxis.nl/
T: 0318 200208 | ***@tuxis.nl
Denis Morejon
2018-08-07 14:49:43 UTC
Post by Mark Schouten
Post by Denis Morejon
So local-lvm is not active by default. Then, when you add this node to
others with local-lvm storage active, and you try to migrate VMs
between them, there are problems...
I don't think the actual technique used is relevant for migrating local
storage; only the name of the storage matters. You can create images on
ZFS; you don't need LVM to be able to create VMs with storage.
I don't have a cluster-wide ZFS storage. If I did, I would simply
define a ZFS storage so the whole cluster uses it, and that would be
all.
I have a mixture of 4 professional servers with hardware RAID
controllers and 4 non-professional servers (PCs) with 4 HDDs each.
I want to create just one Proxmox cluster with all these 8 servers.
On the 4 PCs, I had to install Proxmox using the ZFS RAIDZ1 advanced
option, so that Proxmox is installed on a ZFS RAID 5.
At this stage I have redundancy: I can remove any SATA HDD and Proxmox
still starts up fine. That's what I need ZFS for.

So, on 4 nodes of the cluster I am able to use local-lvm to put CTs and
VMs on, but I am not able to put VMs and CTs on local-lvm on the
others.
That's why I want to create pve VGs on the ZFS nodes.
Mark Schouten
2018-08-07 13:57:19 UTC
Post by Denis Morejon
So, on 4 nodes of the cluster I am able to use local-lvm to put CTs and
VMs on, but I am not able to put VMs and CTs on local-lvm on the
others.
That's why I want to create pve VGs on the ZFS nodes.
Like I said, I think you should be able to rename 'local-zfs' to
'local-lvm' in /etc/pve/storage.cfg and not worry about LVM anymore.
But maybe we're misunderstanding each other.
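For illustration, the default entry the installer writes to
/etc/pve/storage.cfg looks like this; the rename would just change the
storage ID (a sketch, not tested on a live cluster):

  zfspool: local-lvm
          pool rpool/data
          sparse
          content images,rootdir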
--
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten | Tuxis Internet Engineering
KvK: 61527076 | http://www.tuxis.nl/
T: 0318 200208 | ***@tuxis.nl
Denis Morejon
2018-08-07 15:30:00 UTC
Post by Mark Schouten
Post by Denis Morejon
So, on 4 nodes of the cluster I am able to use local-lvm to put CTs and
VMs on, but I am not able to put VMs and CTs on local-lvm on the
others.
That's why I want to create pve VGs on the ZFS nodes.
Like I said, I think you should be able to rename 'local-zfs' to
'local-lvm' in /etc/pve/storage.cfg and not worry about LVM anymore.
But maybe we're misunderstanding each other.
That is not possible, because I don't use ZFS on all eight nodes, just
on 4 of them, and modifying /etc/pve/storage.cfg is a cluster-wide
operation!
Moreover, I use ZFS underneath Proxmox (at the OS level) to join the
internal disks, so the Proxmox node NEVER sees any ZFS, just the local
storage (the directory /var/lib/vz), which is mounted on the internal
dataset (rpool/ROOT/pve-1) of the ZFS pool (the 4 disks together seen
as one).

Now the problem is that I have 4 nodes with local-lvm storage active
and 4 nodes with just local storage active, because on these last nodes
local-lvm is disabled! (Due to the non-existence of any LVM volume
group.)

So the migrations of VMs between all these nodes cause problems.
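To illustrate, the single cluster-wide /etc/pve/storage.cfg contains
roughly this entry (a sketch based on the PVE defaults):

  lvmthin: local-lvm
          thinpool data
          vgname pve
          content images,rootdir

On the four ZFS nodes there is no 'pve' VG behind it, so the storage
shows up as unavailable there.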
Mark Schouten
2018-08-07 14:45:45 UTC
Post by Denis Morejon
That is not possible, because I don't use ZFS on all eight nodes, just
on 4 of them, and modifying /etc/pve/storage.cfg is a cluster-wide
operation!
Ah yes, crap. That's right..
Post by Denis Morejon
Now the problem is that I have 4 nodes with local-lvm storage active
and 4 nodes with just local storage active, because on these last nodes
local-lvm is disabled! (Due to the non-existence of any LVM volume
group.)
So the migrations of VMs between all these nodes cause problems.
Ok. Not sure if this is supposed to work, but what if you create a ZFS
volume (zfs create -V 100G rpool/lvm), make that a PV (pvcreate
/dev/zvol/rpool/lvm), make a VG (vgcreate pve /dev/zvol/rpool/lvm) and
then an LV (lvcreate -l 100%FREE -n data pve)? Does that allow you to
use local-lvm?

(Not 100% sure about the commands, check before executing)
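Spelled out with comments, and using an LVM-thin pool since that is
what the default local-lvm storage expects (still a sketch, untested):

  zfs create -V 100G rpool/lvm        # carve a 100G zvol out of the pool
  pvcreate /dev/zvol/rpool/lvm        # use the zvol as an LVM physical volume
  vgcreate pve /dev/zvol/rpool/lvm    # 'pve' is the VG name local-lvm expects
  lvcreate -l 90%FREE -T pve/data     # thin pool; leave room for its metadata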
--
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten | Tuxis Internet Engineering
KvK: 61527076 | http://www.tuxis.nl/
T: 0318 200208 | ***@tuxis.nl
Denis Morejon
2018-08-07 17:36:09 UTC
Post by Mark Schouten
Post by Denis Morejon
That is not possible, because I don't use ZFS on all eight nodes, just
on 4 of them, and modifying /etc/pve/storage.cfg is a cluster-wide
operation!
Ah yes, crap. That's right..
Post by Denis Morejon
Now the problem is that I have 4 nodes with local-lvm storage active
and 4 nodes with just local storage active, because on these last nodes
local-lvm is disabled! (Due to the non-existence of any LVM volume
group.)
So the migrations of VMs between all these nodes cause problems.
Ok. Not sure if this is supposed to work, but what if you create a ZFS
volume (zfs create -V 100G rpool/lvm), make that a PV (pvcreate
/dev/zvol/rpool/lvm), make a VG (vgcreate pve /dev/zvol/rpool/lvm) and
then an LV (lvcreate -l 100%FREE -n data pve)? Does that allow you to
use local-lvm?
(Not 100% sure about the commands, check before executing)
This is the idea, Mark! But I suspect I have no space to create an
additional ZFS volume, since the dataset mounted on "/" occupies all
the space.
So I first have to find out how to shrink it, and then create the new
volume in the remaining space.
Yannis Milios
2018-08-07 21:51:12 UTC
Post by Mark Schouten
(zfs create -V 100G rpool/lvm), make that a PV (pvcreate
/dev/zvol/rpool/lvm), make a VG (vgcreate pve /dev/zvol/rpool/lvm) and
then an LV (lvcreate -l 100%FREE -n data pve)
Try the above as it was suggested to you ...
Post by Denis Morejon
But I suspect I have no space to create an additional ZFS volume,
since the dataset mounted on "/" occupies all the space
No, that's a wrong assumption: ZFS does not pre-allocate the whole
space of the pool, even if it looks like it does. In short, there is no
need to "shrink" the pool in order to create a zvol as suggested above.
Still, the whole idea of having LVM on top of ZFS/zvol is a mess, but
if you insist, it's up to you ...
A combination of Linux RAID + LVM would look much more elegant in your
case, but for that you have to reinstall PVE using the Debian ISO.
During the installation, create a Linux RAID array with LVM on top and
then add the PVE repos as described in the wiki:

https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Jessie
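Regarding the free-space point above, you can check it yourself:

  zfs list -o name,used,avail rpool   # AVAIL = space still free for new zvols
  zpool list rpool                    # pool-level size, allocation and free space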
Woods, Ken A (DNR)
2018-08-07 22:49:30 UTC
Because this really is a bad idea, I just want to echo Yannis and say:

...the whole idea of having LVM on top of ZFS/zvol is a mess....



kw


Denis Morejon
2018-08-08 13:23:16 UTC
Post by Yannis Milios
Still, the whole idea of having LVM on top of ZFS/zvol is a mess, but
if you insist, it's up to you ...
A combination of Linux RAID + LVM would look much more elegant in your
case, but for that you have to reinstall PVE using the Debian ISO.
That's right. Now I understand that LVM on ZFS would be a mess, mainly
because ZFS doesn't create block devices such as partitions on which I
could run pvcreate and make them part of an LVM volume group.

After a (zfs create -V 100G rpool/lvm) I would have to do a losetup to
create a loop device and so on...

Instead, I will keep the ZFS RAID mounted on "/" (local storage) on
the last 4 Proxmox nodes, remove the local-lvm storage from all
Proxmox nodes, and resize the local storage of the first 4. That way
all 8 Proxmox nodes have just local storage, making the migration of
VMs between nodes easy.
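The end state would be a single storage entry shared by all 8 nodes,
roughly (a sketch):

  dir: local
          path /var/lib/vz
          content images,rootdir,iso,vztmpl,backup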
Denis Morejon
2018-08-08 13:32:48 UTC
Why hasn't the Proxmox team incorporated software RAID into the
install process? So that we could get the advantages of redundancy and
LVM when using local disks.
Andreas Heinlein
2018-08-08 12:40:38 UTC
Post by Denis Morejon
Why hasn't the Proxmox team incorporated software RAID into the
install process? So that we could get the advantages of redundancy and
LVM when using local disks.
Because ZFS offers redundancy and LVM features (and much more) in a
more modern way; e.g., during a rebuild only used blocks need to be
resilvered, resulting in much greater speed. ZFS is intended to
entirely replace MD-RAID and LVM.

The only drawback of ZFS is that it needs bare-metal disk access and
must not (or at least should not) be used with hardware RAID
controllers. This makes it difficult to use with older hardware, e.g.
HP ProLiants which only have HP SmartArray controllers as disk
controllers. It is possible to put some RAID controllers in HBA mode,
though the ZFS docs advise against it.
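For example, replacing a failed disk (device names are just examples):

  zpool replace rpool /dev/sdc /dev/sde   # resilver copies only allocated blocks
  zpool status rpool                      # watch the resilver progress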

Andreas
dorsy
2018-08-08 12:55:58 UTC
I'd say that it is more convenient to support one method. As also
mentioned in this thread, ZFS can be considered a successor to
MD-RAID + LVM.

It is still a Debian system with a custom kernel and some PVE packages
on top, so you can do anything just like on any standard Debian system.
Dietmar Maurer
2018-08-08 13:04:55 UTC
Post by Denis Morejon
Why hasn't the Proxmox team incorporated software RAID into the
install process?
Because we consider mdraid unreliable and dangerous.
Post by Denis Morejon
So that we could get the advantages of redundancy and LVM when using
local disks.
Sorry, but we do have software RAID included: ZFS provides that.

Andreas Heinlein
2018-08-07 14:19:50 UTC
Post by Denis Morejon
I don't have a cluster-wide ZFS storage. If I did, I would simply
define a ZFS storage so the whole cluster uses it, and that would be
all.
I have a mixture of 4 professional servers with hardware RAID
controllers and 4 non-professional servers (PCs) with 4 HDDs each.
I want to create just one Proxmox cluster with all these 8 servers.
On the 4 PCs, I had to install Proxmox using the ZFS RAIDZ1 advanced
option, so that Proxmox is installed on a ZFS RAID 5.
At this stage I have redundancy: I can remove any SATA HDD and Proxmox
still starts up fine. That's what I need ZFS for.
So, on 4 nodes of the cluster I am able to use local-lvm to put CTs
and VMs on, but I am not able to put VMs and CTs on local-lvm on the
others.
That's why I want to create pve VGs on the ZFS nodes.
Hello,

If I understand you correctly, you essentially don't need/want to use
ZFS as a storage/file system, but used it anyway as a replacement for
the missing hardware RAID controller in 4 of your machines. This way
you aim to achieve the same configuration on all 8 machines despite
different hardware. Correct?

I am no expert here, but I suspect this approach is not good. You
might think about using classic Linux software RAID (aka mdadm RAID)
here. According to this page:
https://pve.proxmox.com/wiki/Software_RAID it is not officially
supported, but I know it works; I did it some time ago on a test
install. IIRC I even did it using the Proxmox installer, hidden
somewhere under "Advanced options". If not, it is still possible to
install a plain Debian system on RAID+LVM and "convert" it to a
Proxmox node later:
https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Stretch
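A rough sketch of that layout (device names and partition scheme are
assumptions):

  mdadm --create /dev/md0 --level=5 --raid-devices=4 \
        /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
  pvcreate /dev/md0
  vgcreate pve /dev/md0
  lvcreate -l 100%FREE -n data pve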

Andreas