Hi,
I have plans to implement storage replication for rbd in proxmox,
like for zfs export|import (with rbd export-diff | rbd import-diff).
I'll try to work on it next month.
I'm not sure the plugin infrastructure is currently done in the code,
or that it's able to manage storages with different names.
Can't tell if it'll be hard to implement, but the workflow is almost the same.
I'll also try to look at rbd mirror, but it only works with librbd in qemu, not with krbd,
so it can't be implemented for containers.
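A minimal sketch of one such replication cycle, assuming the image has already been seeded on the remote side and a snapshot from the previous cycle still exists (pool, image, host, and snapshot names below are placeholders, not Proxmox conventions); the commands are only printed, so the sketch runs without a ceph cluster:

```shell
# One incremental replication cycle, analogous to zfs send | zfs receive:
# snapshot, ship the delta since the last snapshot, rotate snapshots.
# All names are placeholder assumptions; commands are echoed, not executed.
POOL=rbd
IMAGE=vm-100-disk-1
REMOTE=root@backup-node
PREV=sync-1                 # snapshot left over from the previous cycle
CUR=sync-2                  # snapshot taken this cycle

# 1. take a new snapshot on the source cluster
snap_cmd="rbd snap create $POOL/$IMAGE@$CUR"
# 2. export only the delta since PREV and apply it remotely over ssh
diff_cmd="rbd export-diff --from-snap $PREV $POOL/$IMAGE@$CUR - | ssh $REMOTE rbd import-diff - $POOL/$IMAGE"
# 3. drop the old snapshot so only the latest sync point is kept
rm_cmd="rbd snap rm $POOL/$IMAGE@$PREV"

printf '%s\n' "$snap_cmd" "$diff_cmd" "$rm_cmd"
```

Run in this order, the remote image stays consistent: import-diff checks that the starting snapshot exists on the destination before applying the delta.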
----- Original Mail -----
From: "Mark Adams" <***@openvs.co.uk>
To: "proxmoxve" <pve-***@pve.proxmox.com>
Sent: Tuesday, March 13, 2018 18:52:21
Subject: Re: [PVE-User] pve-csync version of pve-zsync?
Hi Alwin,
I might have to take another look at it, but have you actually done this
with 2 proxmox clusters? I can't remember the exact part I got stuck on, as
it was quite a while ago, but it wasn't as straightforward as you suggest.
I think you couldn't use the same cluster name, which in turn created
issues trying to use the "remote" (backup/DR/whatever you want to call it)
cluster with proxmox, because it needed to be called ceph.
The docs I was referring to were the ceph ones, yes. Some of the options
listed in that doc do not work in the current proxmox version (I think the
doc hasn't been updated for newer versions...)
Regards,
Mark
Post by Alwin Antreich
Post by Mark Adams
Hi Alwin,
The last I looked at it, rbd mirror only worked if you had different
cluster names. Tried to get it working with proxmox but to no avail,
without really messing with how proxmox uses ceph I'm not sure it's
feasible, as proxmox assumes the default cluster name for everything...
That isn't mentioned anywhere in the ceph docs; they only use two
different cluster names for ease of explanation.
If you have a config file named after the cluster, then you can specify
it on the command line.
http://docs.ceph.com/docs/master/rados/configuration/ceph-conf/#running-multiple-clusters
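As a sketch of what that linked page describes: the cluster name is only a prefix used to locate the config and keyring files, so a second name can point at a remote cluster's config (the name "backup" and the paths below are assumptions, not Proxmox conventions); the commands are echoed so this runs anywhere:

```shell
# --cluster NAME makes the ceph tools read /etc/ceph/NAME.conf and the
# matching NAME.client.*.keyring; "backup" here is a placeholder name.
CLUSTER=backup
CONF="/etc/ceph/$CLUSTER.conf"

list_cmd="rbd --cluster $CLUSTER ls rbd"   # select config by cluster name
status_cmd="ceph --conf $CONF status"      # or point at the file directly

printf '%s\n%s\n' "$list_cmd" "$status_cmd"
```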
Post by Mark Adams
Also the documentation was a bit poor for it IMO.
Which documentation do you mean?
? -> http://docs.ceph.com/docs/master/rbd/rbd-mirroring/
Post by Mark Adams
Would also be nice to choose specifically which VMs you want to be
mirroring, rather than the whole cluster.
It can be done either per pool or per image separately. See the link above.
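For illustration, the two granularities look roughly like this (pool and image names are placeholders; the commands are echoed rather than run, and in journal-based mirroring the image also needs the journaling feature enabled):

```shell
# Pool mode mirrors every image in the pool; image mode makes mirroring
# opt-in per image. Names below are placeholder assumptions.
pool_mode="rbd mirror pool enable rbd pool"
image_mode="rbd mirror pool enable rbd image"
one_image="rbd mirror image enable rbd/vm-100-disk-1"

printf '%s\n%s\n%s\n' "$pool_mode" "$image_mode" "$one_image"
```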
Post by Mark Adams
I've manually done rbd export-diff and rbd import-diff between 2 separate
proxmox clusters over ssh, and it seems to work really well... It would
just be nice to have a tool like pve-zsync so I don't have to write some
script myself. Seems to me like something that would be desirable as part
of proxmox as well?
That would basically implement the ceph rbd mirror feature.
Post by Mark Adams
Cheers,
Mark
Post by Alwin Antreich
Hi Mark,
Post by Mark Adams
Hi All,
Has anyone looked at or thought of making a version of pve-zsync for
ceph?
This would be great for DR scenarios...
How easy do you think this would be to do? I imagine it would be quite
similar to pve-zsync, but using rbd export-diff and rbd import-diff
instead of zfs send and zfs receive? So could the existing script be
relatively easily modified? (I know nothing about perl....)
Cheers,
Mark
_______________________________________________
pve-user mailing list
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
Isn't ceph mirror already what you want? It can mirror an image or a
whole pool. It keeps track of changes and handles remote image deletes
(with an adjustable delay).