* Fwd: Prevent rbd mapping/mounting on multiple hosts workaround
From: Mauricio Garavaglia @ 2016-02-05 14:24 UTC
To: ceph-devel

Hello,

In the January Tech Talk (PostgreSQL on Ceph under Mesos/Aurora with
Docker [https://youtu.be/OqlC7S3cUKs]) we presented a challenge we are
facing at Medallia when running databases on Ceph under
Mesos/Aurora/Docker: preventing the same RBD image from being
mapped/mounted on two hosts at the same time during network
partitions.

As a workaround, we wrap rbd in a shell script that executes extra
logic around certain operations:

- On map: "rbd lock add <image>"
    - If unsuccessful:
        - "rbd status <image>": check for watchers, 3 times, every 15 secs
            - If a watcher is found, ABORT the mapping; the image is
              still in use on a host that is healthy
        - "ceph osd blacklist add <previous lock holder>": the image
          is locked but has no watcher
        - steal the lock on <image>
        - map the image

- On unmap:
    - "rbd lock remove"

- On server reboot:
    - "ceph osd blacklist rm <self>"

I was wondering whether this mechanism could be incorporated into the
rbd CLI, of course controlled by an option during map. We'd be happy
to work on it, but want to check the feasibility of having the patch
accepted.

Mauricio Garavaglia
mauricio@medallia.com
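[Editorial note: for concreteness, the map-side logic described above could be
sketched as a shell function like the one below. This is an illustrative
reconstruction, not Medallia's actual wrapper; the lock id ("safe-map") and
the parsing of "rbd lock ls" output are assumptions.]

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the map-side wrapper described above.
# The lock id ("safe-map") and the "rbd lock ls" parsing are
# illustrative assumptions, not the actual production script.

safe_map() {
    local image="$1"

    # Happy path: take the advisory lock, then map.
    if rbd lock add "$image" safe-map 2>/dev/null; then
        rbd map "$image"
        return
    fi

    # Lock already held: poll for watchers, 3 times, 15 secs apart.
    local i
    for i in 1 2 3; do
        if rbd status "$image" | grep -q 'watcher='; then
            echo "ABORT: $image is still in use on a healthy host" >&2
            return 1
        fi
        if [ "$i" -lt 3 ]; then sleep 15; fi
    done

    # Locked but unwatched: the previous holder is presumed dead.
    # Blacklist it, steal the lock, and map. (Extracting the locker
    # address from "rbd lock ls" output is an assumption here.)
    local holder
    holder=$(rbd lock ls "$image" | awk 'END { print $NF }')
    ceph osd blacklist add "$holder"
    rbd lock remove "$image" safe-map "$(rbd lock ls "$image" | awk 'END { print $1 }')"
    rbd lock add "$image" safe-map
    rbd map "$image"
}
```

The unmap ("rbd lock remove") and reboot ("ceph osd blacklist rm <self>")
steps would be wrapped analogously.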
* Re: Prevent rbd mapping/mounting on multiple hosts workaround
From: Gregory Farnum @ 2016-02-08 15:41 UTC
To: Mauricio Garavaglia, Jason Dillaman; +Cc: ceph-devel

On Fri, Feb 5, 2016 at 6:24 AM, Mauricio Garavaglia
<mauricio@medallia.com> wrote:
> [workaround description quoted in full above]
>
> I was wondering if this mechanism could be incorporated as part of the
> rbd CLI, of course controlled by an option during map. We'll be happy
> to work on it, but want to check the feasibility of having the patch
> accepted.

I actually thought we had a disable-by-default config option in later
releases that grabs the locks before allowing a mount, but now I can't
find it. Jason?
-Greg
* Re: Prevent rbd mapping/mounting on multiple hosts workaround
From: Jason Dillaman @ 2016-02-08 20:44 UTC
To: Gregory Farnum; +Cc: Mauricio Garavaglia, ceph-devel

Within librbd there is support for blacklisting clients before
stealing the exclusive lock. I don't remember any such enhancement to
the rbd CLI's map command. In general it sounds like a good feature
request. The automatic unblacklist on reboot would be outside the
scope of any rbd CLI change. I added a new tracker ticket for the
feature request. [1]

[1] http://tracker.ceph.com/issues/14700

--

Jason Dillaman

----- Original Message -----
> From: "Gregory Farnum" <gfarnum@redhat.com>
> To: "Mauricio Garavaglia" <mauricio@medallia.com>, "Jason Dillaman" <dillaman@redhat.com>
> Cc: "ceph-devel" <ceph-devel@vger.kernel.org>
> Sent: Monday, February 8, 2016 10:41:40 AM
> Subject: Re: Prevent rbd mapping/mounting on multiple hosts workaround
>
> [workaround description quoted in full above]
>
> I actually thought we had a disable-by-default config option in later
> releases that grab the locks before allowing a mount, but now I can't
> find it. Jason?
> -Greg
* Re: Prevent rbd mapping/mounting on multiple hosts workaround
From: Bill Sanders @ 2016-02-08 22:41 UTC
To: Jason Dillaman; +Cc: Gregory Farnum, Mauricio Garavaglia, ceph-devel

So is the idea here to prevent multiple hosts from mapping the same RBD
image simultaneously? If so, please do try to keep it optional (even
if it's the default)... I'm not sure who else might, but Teradata
relies on this functionality. :)

Thanks,
Bill

On Mon, Feb 8, 2016 at 12:44 PM, Jason Dillaman <dillaman@redhat.com> wrote:
> Within librbd there is support for blacklisting clients before stealing
> the exclusive lock. I don't remember any such enhancement to the rbd
> CLI's map command. In general it sounds like a good feature request.
> The automatic unblacklist on reboot would be outside the scope of any
> rbd CLI change. I added a new tracker ticket for the feature request.
>
> [1] http://tracker.ceph.com/issues/14700
>
> --
>
> Jason Dillaman
>
> [earlier messages quoted in full above]
* Re: Prevent rbd mapping/mounting on multiple hosts workaround
From: Jason Dillaman @ 2016-02-09 1:25 UTC
To: Bill Sanders; +Cc: Gregory Farnum, Mauricio Garavaglia, ceph-devel

Yes, and I 100% concur that this needs to be optional -- there are
plenty of valid use cases where the device is mapped concurrently.

Thanks,

Jason Dillaman

----- Original Message -----
> From: "Bill Sanders" <billysanders@gmail.com>
> To: "Jason Dillaman" <dillaman@redhat.com>
> Cc: "Gregory Farnum" <gfarnum@redhat.com>, "Mauricio Garavaglia"
> <mauricio@medallia.com>, "ceph-devel" <ceph-devel@vger.kernel.org>
> Sent: Monday, February 8, 2016 5:41:37 PM
> Subject: Re: Prevent rbd mapping/mounting on multiple hosts workaround
>
> So the idea here to prevent multiple hosts from mapping the same RBD
> image simultaneously? If so, please do try to keep it optional (even
> if it's the default)... I'm not sure who else might, but Teradata
> relies on this functionality. :)
>
> Thanks,
> Bill
>
> [earlier messages quoted in full above]
Thread overview: 5+ messages
2016-02-05 14:24 ` Fwd: Prevent rbd mapping/mounting on multiple hosts workaround -- Mauricio Garavaglia
  2016-02-08 15:41 ` Gregory Farnum
    2016-02-08 20:44 ` Jason Dillaman
      2016-02-08 22:41 ` Bill Sanders
        2016-02-09  1:25 ` Jason Dillaman