1. Purge the OSD from the cluster:
ceph osd purge {id} --yes-i-really-mean-it
2. Navigate to the host where you keep the master copy of the cluster’s ceph.conf file.
ssh {admin-host}
cd /etc/ceph
vim ceph.conf
3. Remove the OSD entry from your ceph.conf file (if it exists).
[osd.1]
host = {hostname}
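Once that's done, it's worth confirming the OSD is completely gone before you re-create it. A quick check, using osd.1 purely as an example id:

# osd.1 should no longer appear in the CRUSH tree / OSD map
ceph osd tree
# should report that the entity does not exist if the auth key was removed
ceph auth get osd.1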
________________________________
From: Woods, Ken A (DNR)
Sent: Monday, July 2, 2018 4:48:30 PM
To: PVE User List
Subject: Re: [PVE-User] pveceph createosd after destroyed osd
You're thinking "proxmox". Try thinking "ceph" instead. Sure, ceph runs with proxmox, but what you're really doing is using a pretty GUI that sits on top of debian, running ceph and kvm.
Anyway, perhaps the GUI does all the steps needed? Perhaps not.
If it were me, I'd NOT reinstall, as that's likely not going to fix the issue.
Follow the directions in the page I linked and see if that helps.
________________________________
From: pve-user <pve-user-***@pve.proxmox.com> on behalf of Mark Adams <***@openvs.co.uk>
Sent: Monday, July 2, 2018 4:41:39 PM
To: PVE User List
Subject: Re: [PVE-User] pveceph createosd after destroyed osd
Hi, thanks for your response!
No, I didn't do any of that on the CLI - I just did Stop in the web GUI,
then Out, then Destroy.
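As far as I understand it, those GUI buttons map roughly to this on the CLI
(using osd.1 only as an example; I'm assuming the "remove partitions" checkbox
corresponds to the cleanup option of pveceph destroyosd):

ceph osd out 1
systemctl stop ceph-osd@1
pveceph destroyosd 1 --cleanup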
Note that there were no VMs or data at all on this test ceph cluster - I
had deleted it all before doing this. I was basically just removing everything
so the OSD numbers would look "nicer" for the final setup.
It's not a huge deal, as I can just reinstall Proxmox. But it concerns me
that doing this through the web GUI seems so fragile, and I want to know
where I went wrong. Is a signature being stored somewhere, so that when you
try to add that same drive again (even though I ticked "remove partitions")
it doesn't get added back into the ceph cluster with the next sequential
OSD number after the last "live" or "valid" drive?
Is it just a rule that you never actually remove drives, and only ever set
them stopped/out?
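For what it's worth, if leftover metadata on the disk is the culprit, I'd guess
something like this would show it and then clear it (/dev/sdX is just a
placeholder for the disk in question):

# read-only: list any leftover filesystem/LVM/ceph signatures and partitions
wipefs /dev/sdX
lsblk /dev/sdX
# destructive: wipe the partition table and signatures before re-adding the disk
sgdisk --zap-all /dev/sdX
wipefs --all /dev/sdX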
Regards,
Mark
On 3 July 2018 at 01:34, Woods, Ken A (DNR) <***@alaska.gov> wrote:
> http://docs.ceph.com/docs/mimic/rados/operations/add-or-rm-osds/#removing-osds-manual
>
> Are you sure you followed the directions?
>
> ________________________________
> From: pve-user <pve-user-***@pve.proxmox.com> on behalf of Mark Adams
> <***@openvs.co.uk>
> Sent: Monday, July 2, 2018 4:05:51 PM
> To: pve-***@pve.proxmox.com
> Subject: [PVE-User] pveceph createosd after destroyed osd
>
> Currently running the newest 5.2-1 version, I had a test cluster which was
> working fine. I have since added more disks, first stopping, then setting
> out, then destroying each OSD so I could recreate it all from scratch.
>
> However, when adding a new OSD (either via the GUI or the pveceph CLI) the
> create appears to succeed, but the OSD does not show up in the GUI under
> the host.
>
> It's as if the OSD information is being stored by proxmox/ceph somewhere
> else and not being correctly removed and recreated?
>
> I can see that the newly created OSD (after the old one was destroyed) is
> down/out.
>
> Is this by design? Is there a way to force the disk back? Shouldn't it show
> in the GUI once you create it again?
>
> Thanks!
>
>
_______________________________________________
pve-user mailing list
pve-***@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user