Discussion:
[PVE-User] create zfs pool issue
lyt_yudi
2015-07-28 03:09:12 UTC
Hi all,

When creating a new ZFS pool, I got this error. How can I fix it?

# zpool create -f -o ashift=12 tank mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde
cannot create 'tank': invalid argument for this pool operation

# cat /var/log/syslog
….
Jul 28 11:04:35 test01 kernel: sdb: sdb1 sdb9
Jul 28 11:04:35 test01 kernel: sdc: sdc1 sdc9
Jul 28 11:04:35 test01 kernel: sdd: sdd1 sdd9
Jul 28 11:04:36 test01 kernel: sde: sde1 sde9
….

thanks.

# pveversion -v
proxmox-ve-2.6.32: 3.4-159 (running kernel: 2.6.32-40-pve)
pve-manager: 3.4-8 (running version: 3.4-8/5f8f4e78)
pve-kernel-2.6.32-40-pve: 2.6.32-159
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-3
pve-cluster: 3.0-18
qemu-server: 3.4-6
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-33
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
Pongrácz István
2015-07-28 06:02:45 UTC
Hi,

When you create a new pool, /dev/sdb is not really a valid device.

Try to create the pool using the disk IDs, like this:


zpool create -f -o ashift=12 tank mirror ata-Hitachi_HTS545050A7E380_TEJ52139CA9VNS ata-Hitachi_HTS545050A7E380_TEJ52139CAVXNS mirror ata-Hitachi_HTS545050A7E380_TEJ52139CBYP0S ata-WDC_WD5000BEVT-00A0RT0_WD-WXN1AB0X5490

You can find the IDs here: ls /dev/disk/by-id/

If you want to use the plain device names instead, use this:

zpool create -f -o ashift=12 tank mirror sdb sdc mirror sdd sde

The second variant is not recommended: if you add a new disk and the device ordering changes when the system boots, it is easy to mess up the pool. Avoid using sdX device names; use the IDs instead.
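
For example, a minimal sketch of that approach (ata-DISK_ID_1 ... ata-DISK_ID_4 are placeholders here; substitute the entries that ls /dev/disk/by-id/ prints on your machine):

List the stable names and the sdX device each one currently points to:
# ls -l /dev/disk/by-id/ | grep -v part

Double-check which device a single ID resolves to:
# readlink -f /dev/disk/by-id/ata-DISK_ID_1

Then build the create command from those names:
# zpool create -f -o ashift=12 tank mirror ata-DISK_ID_1 ata-DISK_ID_2 mirror ata-DISK_ID_3 ata-DISK_ID_4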

Bye,
István

lyt_yudi
2015-07-28 06:33:33 UTC
So, thanks for your reply.
Post by Pongrácz István
Anyway, are you sure these drives are not part of an existing pool?
Please check that before you possibly destroy an existing pool.
No, there was no pool before.
Post by Pongrácz István
When you create a new pool, /dev/sdb is not really a valid device.
zpool create -f -o ashift=12 tank mirror ata-Hitachi_HTS545050A7E380_TEJ52139CA9VNS ata-Hitachi_HTS545050A7E380_TEJ52139CAVXNS mirror ata-Hitachi_HTS545050A7E380_TEJ52139CBYP0S ata-WDC_WD5000BEVT-00A0RT0_WD-WXN1AB0X5490
You can use ids from here: ls /dev/disk/by-id/
These are new devices, and this is a newly installed PVE.

Now I get the same error:

***@test01:~# ls -l /dev/disk/by-id/
total 0
lrwxrwxrwx 1 root root 10 Jul 28 14:19 dm-name-pve-data -> ../../dm-2
lrwxrwxrwx 1 root root 10 Jul 28 14:19 dm-name-pve-root -> ../../dm-0
lrwxrwxrwx 1 root root 10 Jul 28 14:19 dm-name-pve-swap -> ../../dm-1
lrwxrwxrwx 1 root root 10 Jul 28 14:19 dm-uuid-LVM-aPi3sFbYQvfr6pfqSnlFw09WyB59R5Fs9N1R7uLnuXXbra9t5Z8C2O2KRwfYjv9g -> ../../dm-0
lrwxrwxrwx 1 root root 10 Jul 28 14:19 dm-uuid-LVM-aPi3sFbYQvfr6pfqSnlFw09WyB59R5FslCcuYnHf81IC4bVJSn6JS5ncZrJe01Ed -> ../../dm-2
lrwxrwxrwx 1 root root 10 Jul 28 14:19 dm-uuid-LVM-aPi3sFbYQvfr6pfqSnlFw09WyB59R5FswBYsrv7Dv92ras6Qejla7nqN2QNVwWSU -> ../../dm-1
lrwxrwxrwx 1 root root 9 Jul 28 14:19 scsi-36d4ae5209bc954001d49980e0a03a9af -> ../../sda
lrwxrwxrwx 1 root root 10 Jul 28 14:19 scsi-36d4ae5209bc954001d49980e0a03a9af-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Jul 28 14:19 scsi-36d4ae5209bc954001d49980e0a03a9af-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Jul 28 14:19 scsi-36d4ae5209bc954001d49980e0a03a9af-part3 -> ../../sda3
lrwxrwxrwx 1 root root 9 Jul 28 14:19 scsi-36d4ae5209bc954001d49982a0bb99e94 -> ../../sdb
lrwxrwxrwx 1 root root 9 Jul 28 14:19 scsi-36d4ae5209bc954001d4998410d19abe2 -> ../../sdc
lrwxrwxrwx 1 root root 9 Jul 28 14:19 scsi-36d4ae5209bc954001d4998670f5dc028 -> ../../sdd
lrwxrwxrwx 1 root root 9 Jul 28 14:19 scsi-36d4ae5209bc954001d49987f10c71ca8 -> ../../sde
lrwxrwxrwx 1 root root 9 Jul 28 14:19 scsi-36d4ae5209bc954001d49989311f53449 -> ../../sdf
lrwxrwxrwx 1 root root 9 Jul 28 14:19 scsi-36d4ae5209bc954001d4998af13a574b6 -> ../../sdg
lrwxrwxrwx 1 root root 9 Jul 28 14:19 scsi-36d4ae5209bc954001d4998c414e00719 -> ../../sdh
lrwxrwxrwx 1 root root 9 Jul 28 14:19 scsi-36d4ae5209bc954001d4998d816111615 -> ../../sdi
lrwxrwxrwx 1 root root 9 Jul 28 14:19 scsi-36d4ae5209bc954001d4998ea172230cb -> ../../sdj
lrwxrwxrwx 1 root root 9 Jul 28 14:19 scsi-36d4ae5209bc954001d4998fd18429cc6 -> ../../sdk
lrwxrwxrwx 1 root root 9 Jul 28 14:19 scsi-36d4ae5209bc954001d499910196ef02f -> ../../sdl
lrwxrwxrwx 1 root root 9 Jul 28 14:19 wwn-0x6d4ae5209bc954001d49980e0a03a9af -> ../../sda
lrwxrwxrwx 1 root root 10 Jul 28 14:19 wwn-0x6d4ae5209bc954001d49980e0a03a9af-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Jul 28 14:19 wwn-0x6d4ae5209bc954001d49980e0a03a9af-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Jul 28 14:19 wwn-0x6d4ae5209bc954001d49980e0a03a9af-part3 -> ../../sda3
lrwxrwxrwx 1 root root 9 Jul 28 14:19 wwn-0x6d4ae5209bc954001d49982a0bb99e94 -> ../../sdb
lrwxrwxrwx 1 root root 9 Jul 28 14:19 wwn-0x6d4ae5209bc954001d4998410d19abe2 -> ../../sdc
lrwxrwxrwx 1 root root 9 Jul 28 14:19 wwn-0x6d4ae5209bc954001d4998670f5dc028 -> ../../sdd
lrwxrwxrwx 1 root root 9 Jul 28 14:19 wwn-0x6d4ae5209bc954001d49987f10c71ca8 -> ../../sde
lrwxrwxrwx 1 root root 9 Jul 28 14:19 wwn-0x6d4ae5209bc954001d49989311f53449 -> ../../sdf
lrwxrwxrwx 1 root root 9 Jul 28 14:19 wwn-0x6d4ae5209bc954001d4998af13a574b6 -> ../../sdg
lrwxrwxrwx 1 root root 9 Jul 28 14:19 wwn-0x6d4ae5209bc954001d4998c414e00719 -> ../../sdh
lrwxrwxrwx 1 root root 9 Jul 28 14:19 wwn-0x6d4ae5209bc954001d4998d816111615 -> ../../sdi
lrwxrwxrwx 1 root root 9 Jul 28 14:19 wwn-0x6d4ae5209bc954001d4998ea172230cb -> ../../sdj
lrwxrwxrwx 1 root root 9 Jul 28 14:19 wwn-0x6d4ae5209bc954001d4998fd18429cc6 -> ../../sdk
lrwxrwxrwx 1 root root 9 Jul 28 14:19 wwn-0x6d4ae5209bc954001d499910196ef02f -> ../../sdl
***@test01:~# zpool create -f -o ashift=12 tank mirror scsi-36d4ae5209bc954001d49982a0bb99e94 scsi-36d4ae5209bc954001d4998410d19abe2 mirror scsi-36d4ae5209bc954001d4998670f5dc028 scsi-36d4ae5209bc954001d49987f10c71ca8
cannot create 'tank01': invalid argument for this pool operation

***@test01:~# zpool create -f -o ashift=12 tank mirror wwn-0x6d4ae5209bc954001d49982a0bb99e94 wwn-0x6d4ae5209bc954001d4998410d19abe2 mirror wwn-0x6d4ae5209bc954001d4998670f5dc028 wwn-0x6d4ae5209bc954001d49987f10c71ca8
cannot create 'tank01': invalid argument for this pool operation

***@test01:~# zpool status
no pools available
Post by Pongrácz István
zpool create -f -o ashift=12 tank mirror sdb sdc mirror sdd sde
The second variant is not recommended: if you add a new disk and the device ordering changes when the system boots, it is easy to mess up the pool. Avoid using sdX device names; use the IDs instead.
***@test01:~# zpool create -f -o ashift=12 tank mirror sdb sdc mirror sdd sde
cannot create 'tank': invalid argument for this pool operation

And the PVE system uses LVM on sda.
# df -h
Filesystem Size Used Avail Use% Mounted on
udev 10M 0 10M 0% /dev
tmpfs 13G 476K 13G 1% /run
/dev/mapper/pve-root 50G 1.2G 46G 3% /
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 26G 47M 26G 1% /run/shm
/dev/mapper/pve-data 1.8T 33M 1.8T 1% /var/lib/vz
/dev/fuse 30M 112K 30M 1% /etc/pve

thanks.
Pongrácz István
2015-07-28 06:10:57 UTC
Anyway, are you sure these drives are not part of an existing pool?
Please check that before you possibly destroy an existing pool.
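
One read-only way to check that (just a sketch; /dev/sdb here stands for each disk you intend to use):

Scan the attached disks for any pool that could still be imported:
# zpool import

Dump any ZFS labels left on a disk (it reports unreadable labels if there are none):
# zdb -l /dev/sdb

List other filesystem or RAID signatures without erasing anything:
# wipefs /dev/sdb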

Cheers,
István


Pongrácz István
2015-07-28 07:52:23 UTC
Interesting.

Are these drives direct-attached hard drives, without any HW RAID card or anything like that?
Pongrácz István
2015-07-28 08:34:34 UTC
Did you upgrade your system, especially ZFS?

You should reboot your server after such an upgrade, because otherwise the old kernel module and the new userland tools cannot work together.
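
One way to see whether the loaded module and the userland tools still match (a sketch; it assumes the ZFS on Linux packages shipped with PVE):

Version of the ZFS module currently loaded in the kernel:
# cat /sys/module/zfs/version

Version of the module file that would be loaded on the next boot:
# modinfo zfs | grep ^version

Installed userland packages:
# dpkg -l | grep zfs

If the loaded module is older than the installed packages, reboot so the two sides match again.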



lyt_yudi
2015-07-28 10:59:08 UTC
Post by Pongrácz István
Did you upgrade your system, especially zfs?
You should reboot your server after such an upgrade, because the kernel module and userland part cannot work together.
Yes, it was rebooted, but that had no effect.

Maybe it's a bug in ZFS.

In the forum, another problem was reported:

http://forum.proxmox.com/threads/23002-After-Upgrade-ZFS-pool-gone
Pongrácz István
2015-07-29 20:12:41 UTC
Hi,

I reproduced the situation. There is a workaround with the new packages; see the last few steps below and the condensed recap after them.

Steps:


- I installed a new PVE 3.4 from the ISO.
- Changed the PVE repository to pve-no-subscription.
- Did an apt-get update, apt-get dist-upgrade.
- Kernel and ZFS got upgraded, including newly generated initramfs images.
- Rebooted.
- Tried to create a new pool using new drives:
  zpool create -o ashift=12 -f tank2 mirror ata-VBOX_HARDDISK_VB1da9b627-78ce7cfc ata-VBOX_HARDDISK_VB33d82a74-c68d4c2e mirror ata-VBOX_HARDDISK_VB747687a6-65a6e895 ata-VBOX_HARDDISK_VBb90fec86-d12f314e
  cannot create 'tank2': invalid argument for this pool operation
- Specified an old pool version:
  zpool create -o ashift=12 -o version=28 -f tank2 mirror ata-VBOX_HARDDISK_VB1da9b627-78ce7cfc ata-VBOX_HARDDISK_VB33d82a74-c68d4c2e mirror ata-VBOX_HARDDISK_VB747687a6-65a6e895 ata-VBOX_HARDDISK_VBb90fec86-d12f314e
  -> it worked

pool: tank2
state: ONLINE
status: The pool is formatted using a legacy on-disk format. The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
pool will no longer be accessible on software that does not support
feature flags.
scan: none requested
config:
        NAME                                       STATE     READ WRITE CKSUM
        tank2                                      ONLINE       0     0     0
          mirror-0                                 ONLINE       0     0     0
            ata-VBOX_HARDDISK_VB1da9b627-78ce7cfc  ONLINE       0     0     0
            ata-VBOX_HARDDISK_VB33d82a74-c68d4c2e  ONLINE       0     0     0
          mirror-1                                 ONLINE       0     0     0
            ata-VBOX_HARDDISK_VB747687a6-65a6e895  ONLINE       0     0     0
            ata-VBOX_HARDDISK_VBb90fec86-d12f314e  ONLINE       0     0     0

I tried to upgrade the new pool:
zpool upgrade tank2
This system supports ZFS pool feature flags.
Successfully upgraded 'tank2' from version 28 to feature flags.
Enabled the following features on 'tank2':
async_destroy
empty_bpobj
lz4_compress
spacemap_histogram
enabled_txg
hole_birth
extensible_dataset
embedded_data
bookmarks
cannot set property for 'tank2': invalid argument for this pool operation

- Creating new "filesystems" (datasets) under tank2 is working.
- Reboot -> surprise -> tank2 started using sdg/sdh etc. instead of their IDs.
- Still got "cannot set property for 'tank2': invalid argument for this pool operation" on zpool upgrade.
- zpool destroy tank2 is working.
- Creating tank2 without -o version=28 failed again.
- Creating tank2 with -o version=28 is working.
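
To recap the workaround above as a single sequence (a sketch; tank2 and the VBOX disk IDs are just the test names used in these steps, and it is only needed while the module/userland mismatch is present):

Create the pool in the legacy version-28 format, which still succeeds:
# zpool create -f -o ashift=12 -o version=28 tank2 mirror ata-VBOX_HARDDISK_VB1da9b627-78ce7cfc ata-VBOX_HARDDISK_VB33d82a74-c68d4c2e mirror ata-VBOX_HARDDISK_VB747687a6-65a6e895 ata-VBOX_HARDDISK_VBb90fec86-d12f314e

Later, once the kernel module and userland versions match again, switch the pool to feature flags:
# zpool upgrade tank2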




So, that's all for the moment. I will check it later.

Bye,

István
Dietmar Maurer
2015-07-30 16:51:35 UTC
Post by Pongrácz István
So, at this moment that's all. I will check it later.
Seems we have a version mismatch in the ZFS kernel modules - will try to fix ...
Dietmar Maurer
2015-07-31 05:19:36 UTC
Post by Pongrácz István
I reproduced the situation. There is a workaround with the new packages, check the last few steps below.
I just uploaded a new kernel with updated ZFS modules:

pve-kernel-2.6.32-40-pve_2.6.32-160_amd64.deb

Please can you update and test?
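
For reference, an update-and-verify cycle along these lines should do (a sketch, not exact instructions; it assumes the pve-no-subscription repository is enabled and ata-DISK_ID_1 ... ata-DISK_ID_4 again stand for the by-id names of your disks):

# apt-get update && apt-get dist-upgrade
(this should pull in pve-kernel-2.6.32-40-pve 2.6.32-160)
# reboot

After the reboot, confirm the running kernel and loaded module, then retry the pool creation:
# uname -r
# cat /sys/module/zfs/version
# zpool create -f -o ashift=12 tank mirror ata-DISK_ID_1 ata-DISK_ID_2 mirror ata-DISK_ID_3 ata-DISK_ID_4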
Pongrácz István
2015-07-31 06:21:32 UTC
Hi,

After the upgrade (pve 3.4) I got a new kernel and some other stuff as follows:
novnc-pve amd64 0.5-3 [372 kB]
pve-kernel-2.6.32-40-pve amd64 2.6.32-160 [36.5 MB]
pve-manager amd64 3.4-9 [3880 kB]
proxmox-ve-2.6.32 all 3.4-160 [4738 B]

Confirmed, the new kernel solved the problem for me.

I just did an update & upgrade cycle.



Tests:
- upgrading tank2 -> done without any error
- destroying tank2 -> done without any problem
- recreating tank2 without specifying the version -> done without any problem


Log:

# zpool upgrade tank2
This system supports ZFS pool feature flags.

Enabled the following features on 'tank2':
filesystem_limits
large_blocks

***@pve34:~# zpool upgrade tank2
This system supports ZFS pool feature flags.

Pool 'tank2' already has all supported features enabled.
***@pve34:~# zpool destroy tank2
***@pve34:~# zpool create -o ashift=12 -f tank2 mirror ata-VBOX_HARDDISK_VB1da9b627-78ce7cfc ata-VBOX_HARDDISK_VB33d82a74-c68d4c2e mirror ata-VBOX_HARDDISK_VB747687a6-65a6e895 ata-VBOX_HARDDISK_VBb90fec86-d12f314e
***@pve34:~#

Thanks Dietmar :)

Bye,
István

lyt_yudi
2015-07-31 07:15:22 UTC
Post by Pongrácz István
Thanks Dietmar :)
Great! Thank you.
