Discussion:
[PVE-User] Filesystem corruption on a VM?
Marco Gaiarin
2018-10-23 08:58:58 UTC
In a PVE 4.4 cluster I keep getting FS errors like:

Oct 22 20:51:10 vdmsv1 kernel: [268329.890910] EXT4-fs error (device sda6): ext4_mb_generate_buddy:758: group 932, block bitmap and bg descriptor inconsistent: 30722 vs 32768 free clusters

and

Oct 23 09:43:16 vdmsv1 kernel: [314655.032561] EXT4-fs error (device sdb1): ext4_validate_block_bitmap:384: comm kworker/u8:2: bg 12: bad block bitmap checksum
Oct 23 09:43:16 vdmsv1 kernel: [314655.034265] EXT4-fs (sdb1): Delayed block allocation failed for inode 2632026 at logical offset 2048 with max blocks 1640 with error 74
Oct 23 09:43:16 vdmsv1 kernel: [314655.034335] EXT4-fs (sdb1): This should not happen!! Data will be lost

The host runs the 4.4.134-1-pve kernel and the guest is Debian stretch
(4.9.0-8-amd64). In the same cluster, and also in other clusters, I
have other stretch VMs running on the same host kernel without any
trouble.

Googling around led me to some old jessie bugs (kernel 3.16):

https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1423672
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=818502#22

or to things that I find hard to correlate with this:

https://access.redhat.com/solutions/155873


Does anyone have any hints?! Thanks.

--
dott. Marco Gaiarin GNUPG Key ID: 240A3D66
Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/
Polo FVG - Via della Bontà, 7 - 33078 - San Vito al Tagliamento (PN)
marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797

Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA!
http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000
(cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)
Marco Gaiarin
2018-11-14 17:40:54 UTC
Coming back to this:

> In a PVE 4.4 cluster I keep getting FS errors like:
> Oct 22 20:51:10 vdmsv1 kernel: [268329.890910] EXT4-fs error (device sda6): ext4_mb_generate_buddy:758: group 932, block bitmap and bg descriptor inconsistent: 30722 vs 32768 free clusters
> and
> Oct 23 09:43:16 vdmsv1 kernel: [314655.032561] EXT4-fs error (device sdb1): ext4_validate_block_bitmap:384: comm kworker/u8:2: bg 12: bad block bitmap checksum
> Oct 23 09:43:16 vdmsv1 kernel: [314655.034265] EXT4-fs (sdb1): Delayed block allocation failed for inode 2632026 at logical offset 2048 with max blocks 1640 with error 74
> Oct 23 09:43:16 vdmsv1 kernel: [314655.034335] EXT4-fs (sdb1): This should not happen!! Data will be lost
> The host runs the 4.4.134-1-pve kernel and the guest is Debian stretch
> (4.9.0-8-amd64). In the same cluster, and also in other clusters, I
> have other stretch VMs running on the same host kernel without any
> trouble.

> Googling around led me to some old jessie bugs (kernel 3.16):
> https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1423672
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=818502#22

It seems the bug really is this one. I've increased the RAM of the
problematic VM, and the FS corruption disappeared.


Indeed, all the other VMs have plenty of free RAM, while this one was a
bit full.
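For the record, the bump was nothing fancy: from the node CLI it was
roughly the following (the VMID is made up), followed by a full stop and
start of the VM:

  qm set 105 -memory 12288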


I know that PVE 4.4 is EOL, but I'm still seeking feedback. For example,
is this a 'host' kernel bug or a 'guest' kernel bug?


Thanks.

--
dott. Marco Gaiarin GNUPG Key ID: 240A3D66
Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/
Polo FVG - Via della Bontà, 7 - 33078 - San Vito al Tagliamento (PN)
marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797

Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA!
http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000
(cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)
Luis G. Coralle
2018-11-14 19:03:45 UTC
Hi, I have a lot of VMs (Debian 8 and Debian 9) with 512 MB of RAM on PVE
4.4-24 and have no problems.
Do you have enough free space on the storage?
How much RAM do you have on the PVE node?
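You can check both quickly on the node with the standard tools, something
like:

  pvesm status   # free/used space per configured storage
  free -h        # RAM usage on the node itself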



--
Luis G. Coralle
Secretaría de TIC
Facultad de Informática
Universidad Nacional del Comahue
(+54) 299-4490300 Int 647
Marco Gaiarin
2018-11-15 10:56:42 UTC
Mandi! Luis G. Coralle
In chel di` si favelave...

> Hi, I have a lot of VMs (Debian 8 and Debian 9) with 512 MB of RAM on PVE
> 4.4-24 and have no problems.

...I have a second cluster, with Ceph storage rather than iSCSI/SAN, with
similar VMs, and no trouble at all there. True.


> Do you have enough free space on the storage?

Now, yes. As just stated, I had a temporary fill-up of SAN space
(something in my trim tasks, or on the SAN side, went wrong), but now
everything is back to normal.


> How much RAM do you have on the PVE node?

Nodes have 64GB of RAM, 52% full.

--
dott. Marco Gaiarin GNUPG Key ID: 240A3D66
Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/
Polo FVG - Via della Bontà, 7 - 33078 - San Vito al Tagliamento (PN)
marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797

Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA!
http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000
(cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)
Mark Schouten
2018-11-15 11:13:06 UTC
Obviously, a misbehaving SAN is a much better explanation for filesystem corruption...
Mark




--

Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | ***@tuxis.nl
 
Marco Gaiarin
2018-11-15 11:35:50 UTC
Mandi! Mark Schouten
In chel di` si favelave...

> Obviously, a misbehaving SAN is a much better explanation for filesystem corruption...

Sure, but:

a) the errors started a bit before the SAN trouble

b) this is the only VM/LXC that has troubles

c) I've tried to unmount, reformat and remount a disk/partition (it was
the squid spool) and the errors came back again (see the sketch below).
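The reformat in c), for reference, was nothing more exotic than this
(device and mount point from memory, they may not be exact):

  umount /var/cache/squid
  mkfs.ext4 /dev/sdb1
  mount /dev/sdb1 /var/cache/squid
  squid -z          # let squid recreate its spool directories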


It is really strange...

--
dott. Marco Gaiarin GNUPG Key ID: 240A3D66
Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/
Polo FVG - Via della Bontà, 7 - 33078 - San Vito al Tagliamento (PN)
marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797

Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA!
http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000
(cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)
Daniel Berteaud
2018-11-15 11:38:09 UTC
On 15/11/2018 at 12:35, Marco Gaiarin wrote:
> Mandi! Mark Schouten
> In chel di` si favelave...
>
>> Obviously, a misbehaving SAN is a much better explanation for filesystem corruption...
> Sure, but:
>
> a) the errors started a bit before the SAN trouble
>
> b) this is the only VM/LXC that has troubles
>
> c) I've tried to unmount, reformat and remount a disk/partition (it was
> the squid spool) and the errors came back again.
>
>
> It is really strange...


Not that strange. FS corruption is expected if the filesystems reside on
a thin-provisioned volume which has itself run out of space. You're lucky
only one FS got corrupted.
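If LVM-thin is involved you can check it directly on the node, with
something like:

  lvs -a -o +data_percent,metadata_percent

A thin pool with data_percent at 100 is exactly that situation. If the
thin provisioning is done inside the SAN instead, you have to check the
pool usage on the SAN side.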


++

--

Logo FWS

*Daniel Berteaud*

FIREWALL-SERVICES SAS.
Société de Services en Logiciels Libres
Tel : 05 56 64 15 32
Matrix: @dani:fws.fr
/www.firewall-services.com/
Marco Gaiarin
2018-11-15 11:49:39 UTC
Mandi! Daniel Berteaud
In chel di` si favelave...

> Not that strange. FS corruption is expected if the filesystems reside on
> a thin-provisioned volume which has itself run out of space. You're lucky
> only one FS got corrupted.

...but currently space is OK (really: space on the VM images pool was never
short, it was the 'DATA' pool...), and I've run 'e2fsck' on the filesystems
many times (as stated, I've also reformatted one...) and the errors keep
popping up again...

--
dott. Marco Gaiarin GNUPG Key ID: 240A3D66
Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/
Polo FVG - Via della Bontà, 7 - 33078 - San Vito al Tagliamento (PN)
marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797

Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA!
http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000
(cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)
Daniel Berteaud
2018-11-15 11:58:16 UTC
On 15/11/2018 at 12:49, Marco Gaiarin wrote:
> Mandi! Daniel Berteaud
> In chel di` si favelave...
>
>> Not that strange. FS corruption is expected if the filesystems reside on
>> a thin-provisioned volume which has itself run out of space. You're lucky
>> only one FS got corrupted.
> ...but currently space is OK (really: space on the VM images pool was never
> short, it was the 'DATA' pool...), and I've run 'e2fsck' on the filesystems
> many times (as stated, I've also reformatted one...) and the errors keep
> popping up again...


If at one time, the storage pool went out of space, then the FS is most
likely corrupted. Fixing the space issue will prevent further
corruption, but won't fix the already corrupted FS. You said

> As just stated, I had a temporary fill-up of SAN space

I don't know what this SAN hosted.


Anyway, if the errors come back after reformatting the volume, then you
still have something that isn't fixed. Please tell us how things are
configured: what kind of storage it's using, which layers are involved,
etc. (thin provisioning, iSCSI, LVM on top, and so on).
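For a start, the output of something like this, taken on one of the PVE
nodes, would already tell us a lot:

  cat /etc/pve/storage.cfg
  pvesm status
  pvs ; vgs ; lvs
  iscsiadm -m session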


--

Logo FWS

*Daniel Berteaud*

FIREWALL-SERVICES SAS.
Société de Services en Logiciels Libres
Tel : 05 56 64 15 32
Matrix: @dani:fws.fr
/www.firewall-services.com/
Marco Gaiarin
2018-11-15 13:24:20 UTC
Mandi! Daniel Berteaud
In chel di` si favelave...

> If at one time, the storage pool went out of space, then the FS is most
> likely corrupted. Fixing the space issue will prevent further
> corruption, but won't fix the already corrupted FS. You said

But *I* fix the FS corruption every day! Every night I reboot the VMs
that have:
fsck.mode=force

as a GRUB boot parameter. In the logs, I can see that the FS gets fixed.

Nov 13 23:44:20 vdmsv1 kernel: [ 0.000000] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-4.9.0-8-amd64 root=UUID=587fe965-e914-4c0b-a497-a0c71c7e0301 ro quiet fsck.mode=force
Nov 13 23:44:20 vdmsv1 systemd-fsck[644]: /dev/sda6: 15062/8495104 files (3.0% non-contiguous), 1687411/33949952 blocks
Nov 13 23:44:20 vdmsv1 systemd-fsck[647]: /dev/sdb1: 113267/6553600 files (1.9% non-contiguous), 1590050/26214144 blocks
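The parameter is simply set in the guests' /etc/default/grub, more or less
like this (adjust to your own defaults), followed by update-grub:

  GRUB_CMDLINE_LINUX_DEFAULT="quiet fsck.mode=force"

and systemd-fsck then forces a check of every filesystem at each boot, as
the log above shows.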


> Anyway, if the errors come back after reformatting the volume, then you
> still have something that isn't fixed.

Reading the Ubuntu, Debian and RH bugs in my initial post, it seems to me
that this is not the case.
The trouble looks exactly the same: same errors, and the same partial fix
of increasing the RAM available to the VM.


> Please tell us how things are configured: what
> kind of storage it's using, which layers are involved,
> etc. (thin provisioning, iSCSI, LVM on top, and so on).

An HP MSA 1040 SAN, exporting iSCSI volumes used via LVM. The 'thin' part
is on the SAN side, i.e. no thin-LVM and no ZFS on top of it, ...
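In PVE terms it is the classic two-entry setup in /etc/pve/storage.cfg,
something along these lines (names, portal and target here are just
placeholders, not the real ones):

  iscsi: msa
        portal 10.0.0.1
        target iqn.1986-03.com.hp:storage.msa1040.example
        content none

  lvm: msa-lvm
        vgname vgmsa
        base msa:0.0.0.scsi-xxxxxxxx
        shared 1
        content images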


Another error popped up just now:

Nov 15 13:44:44 vdmsv1 kernel: [136834.664486] EXT4-fs error (device sda6): ext4_mb_generate_buddy:759: group 957, block bitmap and bg descriptor inconsistent: 32747 vs 32768 free clusters
Nov 15 13:44:44 vdmsv1 kernel: [136834.671565] EXT4-fs error (device sda6): ext4_mb_generate_buddy:759: group 958, block bitmap and bg descriptor inconsistent: 32765 vs 32768 free clusters
Nov 15 13:44:44 vdmsv1 kernel: [136834.813465] JBD2: Spotted dirty metadata buffer (dev = sda6, blocknr = 0). There's a risk of filesystem corruption in case of system crash.

Increasing the VM RAM from 8 to 12 GB led to a 1.5-day interval between
errors, while before that the errors came less than a day apart.


Tonight, another 4 GB of RAM, another stop and start, ...

--
dott. Marco Gaiarin GNUPG Key ID: 240A3D66
Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/
Polo FVG - Via della Bontà, 7 - 33078 - San Vito al Tagliamento (PN)
marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797

Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA!
http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000
(cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)
Daniel Berteaud
2018-11-15 15:18:45 UTC
On 15/11/2018 at 14:24, Marco Gaiarin wrote:
> Mandi! Daniel Berteaud
> In chel di` si favelave...
>
>> If at one time, the storage pool went out of space, then the FS is most
>> likely corrupted. Fixing the space issue will prevent further
>> corruption, but won't fix the already corrupted FS. You said
> But *I* fix the FS corruption every day! Every night I reboot the VMs
> that have:
> fsck.mode=force


Then the issue is probably somewhere in the underlying block device on
your SAN. You should destroy and recreate the image.
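Roughly like this (CLI syntax from memory, and with your own VMID, disk
and storage names of course; the GUI can do the same):

  qm set 105 -delete scsi1        # detach the old disk (it becomes unusedX)
  qm set 105 -scsi1 otherstore:30 # allocate a brand new 30 GB volume

and then restore the data onto the new disk from a backup.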


++

--

Logo FWS

*Daniel Berteaud*

FIREWALL-SERVICES SAS.
Société de Services en Logiciels Libres
Tel : 05 56 64 15 32
Matrix: @dani:fws.fr
/www.firewall-services.com/
Marco Gaiarin
2018-11-16 10:32:32 UTC
Mandi! Daniel Berteaud
In chel di` si favelave...

> Then probably the issue is somewhere on the underlying block on your
> SAN. You should destroy and recreate the image.

OK.

Because the disks that show the trouble are:

1) the one that contains /

2) the one that contains /var/cache/squid, and so is 'disposable'.


Can I simply stop and back up the VM, and then recreate it from that
backup? Or could that be risky, in that I would carry some FS corruption
along with me?
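Concretely, I mean something like this (VMID, storage and archive name are
invented):

  vzdump 105 --mode stop --storage backups --compress lzo
  qmrestore /mnt/backups/dump/vzdump-qemu-105-<date>.vma.lzo 105 --storage newstore

Given that vzdump copies the disk images as they are, I suspect any
corruption already inside the filesystems would simply come along.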

--
dott. Marco Gaiarin GNUPG Key ID: 240A3D66
Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/
Polo FVG - Via della Bontà, 7 - 33078 - San Vito al Tagliamento (PN)
marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797

Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA!
http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000
(cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)
Marco Gaiarin
2018-11-19 14:08:22 UTC
> Tonight, another 4 GB of RAM, another stop and start, ...

OK, with 16 GB of RAM, 5 days have passed without FS errors.


Also, the other VM, with the same stretch kernel and roughly the same
configuration, has started to show the same errors:

Nov 18 10:12:21 vdmsv2 kernel: [584252.496880] EXT4-fs error (device sda6): ext4_mb_generate_buddy:758: group 104, block bitmap and bg descriptor inconsistent: 2048 vs 32768 free clusters
Nov 18 10:12:21 vdmsv2 kernel: [584252.590564] JBD2: Spotted dirty metadata buffer (dev = sda6, blocknr = 0). There's a risk of filesystem corruption in case of system crash.

Note that this VM was built *AFTER* my SAN glitches happened.

--
dott. Marco Gaiarin GNUPG Key ID: 240A3D66
Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/
Polo FVG - Via della Bontà, 7 - 33078 - San Vito al Tagliamento (PN)
marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797

Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA!
http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000
(cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)
Gerald Brandt
2018-11-15 12:10:03 UTC
On 2018-10-23 3:58 a.m., Marco Gaiarin wrote:
> In a PVE 4.4 cluster I keep getting FS errors like:
>
> Oct 22 20:51:10 vdmsv1 kernel: [268329.890910] EXT4-fs error (device sda6): ext4_mb_generate_buddy:758: group 932, block bitmap and bg descriptor inconsistent: 30722 vs 32768 free clusters
> [...]
> Does anyone have any hints?! Thanks.

I've only had filesystem corruption when using XFS in a VM.


Gerald
Marco Gaiarin
2018-11-15 13:25:08 UTC
Mandi! Gerald Brandt
In chel di` si favelave...

> I've only had filesystem corruption when using XFS in a VM.

The same VM has two XFS filesystems, and they never get corrupted. ;(

--
dott. Marco Gaiarin GNUPG Key ID: 240A3D66
Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/
Polo FVG - Via della Bontà, 7 - 33078 - San Vito al Tagliamento (PN)
marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797

Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA!
http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000
(cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)
Daniel Berteaud
2018-11-15 13:32:11 UTC
On 15/11/2018 at 13:10, Gerald Brandt wrote:
> I've only had filesystem corruption when using XFS in a VM.


In my experience, XFS has been more reliable, and robust. But anyway,
99.9% of the time, FS corruption is caused by one of the underlying layers


++

--

Logo FWS

*Daniel Berteaud*

FIREWALL-SERVICES SAS.
Société de Services en Logiciels Libres
Tel : 05 56 64 15 32
Matrix: @dani:fws.fr
/www.firewall-services.com/
Gerald Brandt
2018-11-15 13:37:27 UTC
Interesting. My XFS VM was corrupted every night when I did a snapshot
backup. I switched to a shutdown backup and the issue went away.
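In vzdump terms that was just the backup mode, i.e. roughly the difference
between these two (VMID made up):

  vzdump 101 --mode snapshot   # what the nightly job did
  vzdump 101 --mode stop       # what I switched to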

Gerald

On 2018-11-15 7:32 a.m., Daniel Berteaud wrote:
> On 15/11/2018 at 13:10, Gerald Brandt wrote:
>> I've only had filesystem corruption when using XFS in a VM.
>
> In my experience, XFS has been more reliable, and robust. But anyway,
> 99.9% of the time, FS corruption is caused by one of the underlying layers
>
>
> ++
>
Marco Gaiarin
2018-11-15 14:56:00 UTC
Mandi! Daniel Berteaud
In chel di` si favelave...

> In my experience, XFS has been more reliable, and robust. But anyway,
> 99.9% of the time, FS corruption is caused by one of the underlying layers

...but the 'underlying layers' are the same as for half a dozen other
VMs/LXCs, which have no trouble at all...

--
dott. Marco Gaiarin GNUPG Key ID: 240A3D66
Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/
Polo FVG - Via della Bontà, 7 - 33078 - San Vito al Tagliamento (PN)
marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797

Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA!
http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000
(cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)
Woods, Ken A (DNR)
2018-11-15 16:24:21 UTC
> On Nov 15, 2018, at 04:32, Daniel Berteaud <***@firewall-services.com> wrote:
>
>> On 15/11/2018 at 13:10, Gerald Brandt wrote:
>> I've only had filesystem corruption when using XFS in a VM.
>
>
> In my experience, XFS has been more reliable, and robust. But anyway,
> 99.9% of the time, FS corruption is caused by one of the underlying layers

Gerald—What is /dev/sda6/ ?
I’m thinking it’s not healthy. Move the image to another device and see if the problem continues.
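If it lives on PVE-managed storage, a plain disk move should be enough to
rewrite it somewhere else, something along the lines of (VMID, disk and
target storage are placeholders):

  qm move_disk 105 scsi0 some-other-storage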

Woods, Ken A (DNR)
2018-11-15 16:25:46 UTC
> On Nov 15, 2018, at 07:24, Woods, Ken A (DNR) <***@alaska.gov> wrote:
>
>
>>> On Nov 15, 2018, at 04:32, Daniel Berteaud <***@firewall-services.com> wrote:
>>>
>>> On 15/11/2018 at 13:10, Gerald Brandt wrote:
>>> I've only had filesystem corruption when using XFS in a VM.
>>
>>
>> In my experience, XFS has been more reliable, and robust. But anyway,
>> 99.9% of the time, FS corruption is caused by one of the underlying layers
>
> Gerald—What is /dev/sda6/ ?

s/Gerald/Marco/

> I’m thinking it’s not healthy. Move the image to another device and see if the problem continues.