* rest mgmt api
@ 2013-02-06 17:25 Sage Weil
2013-02-06 18:15 ` Yehuda Sadeh
0 siblings, 1 reply; 17+ messages in thread
From: Sage Weil @ 2013-02-06 17:25 UTC (permalink / raw)
To: ceph-devel
One of the goals for cuttlefish is to improve manageability of the
system. This will involve both cleaning up the CLI and adding a REST API
to do everything the CLI currently does. There are a few implementation
choices to make.
Currently the 'ceph' tool has a bunch of code to send messages to the
monitor and wait for replies. This is 90% of what users currently can do.
For the most part, the commands are interpreted by the monitor. A small
subset of commands (ceph tell ..., ceph pg <pgid> ...) will send commands
directly to OSDs.
There are two main options for a REST endpoint that we've discussed so
far:
1- Wrap the above in a clean library (probably integrating most of the
code into Objecter/MonClient; see wip-monc for a start on this). Wrap
libcephadmin in Python and make a simple HTTP/REST front-end. Users would
deploy mgmt endpoints in addition to/alongside monitors and everything
else, if they want the REST API at all.
2- Embed a web server in the ceph-mon daemons, and push the current admin
'client' functionality there. Come up with some basic authentication so
that this doesn't break the current security model.
Note that neither of these solves the HA issue directly; HTTP is a
client/server protocol, so whoever is using the API can specify only one
server endpoint. If it is a monitor, they'll need to be prepared to fail
over to another in their code, or set up a load balancer. The same goes
for the restful endpoint if it fails. The difference is that a single
endpoint can proxy to whichever monitors are in quorum, so a much smaller
set of errors (endpoint machine crash, buggy endpoint) affect availability
of the API.
The somewhat orthogonal question is how to clean up the CLI usage,
parsing, and get equivalence in the new REST API.
One option is to create a basic framework in the monitor so that there is
a table of api commands. The 'parsing' would be regularized and validated
in a generic way. The rest endpoint would pass the URI through in a
generic way (sanitized json?) that can be matched against the same table.
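As a sketch of this single-table idea (the table entries and function names here are invented for illustration, not the actual Ceph code), both the CLI and the REST endpoint could funnel into one validation routine:

```python
# Hypothetical command table: each command is declared once, and both the
# CLI argv and the REST URI/JSON body are validated against the same entry.
COMMAND_TABLE = {
    "osd pool create": [("poolname", str), ("numpgs", int)],
    "pg stat": [],
}

def validate(prefix, args):
    """Match a command prefix against the table and type-check its args."""
    spec = COMMAND_TABLE.get(prefix)
    if spec is None:
        raise ValueError("unknown command: %s" % prefix)
    if len(args) != len(spec):
        raise ValueError("expected %d args, got %d" % (len(spec), len(args)))
    # Coerce each raw string; the type constructor raises ValueError on bad input.
    return {name: typ(raw) for (name, typ), raw in zip(spec, args)}

# CLI path: "ceph osd pool create foo 128" -> prefix plus argv
print(validate("osd pool create", ["foo", "128"]))
# A REST endpoint would decode its URI into the same (prefix, args) pair.
```

The point is that the table is the single source of truth; both front-ends only tokenize and hand off.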
Another option is to support a single set of commands on the monitor side,
and do the REST->CLI or CLI->REST or CLI,REST->json translation on the
client side. The command registry framework above would live in the CLI
utility and REST endpoints instead (or libcephadmin). This means that the
monitor code is simpler, but also means that the libcephadmin or ceph tool
and REST endpoint need to be running the latest code to be able to send
the latest commands to the monitor. It also means multiple places where
the command set is defined (mon and endpoint and cli).
Either way there is a fair bit of refactoring, but it should be a net
cleanup. I'd like to get some consensus on how to proceed before we
expend too much effort. Ian scheduled a quick chat Friday, so let's get
any other suggestions or ideas out before then so we can move forward...
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: rest mgmt api
2013-02-06 17:25 rest mgmt api Sage Weil
@ 2013-02-06 18:15 ` Yehuda Sadeh
2013-02-06 19:05 ` Wido den Hollander
` (2 more replies)
0 siblings, 3 replies; 17+ messages in thread
From: Yehuda Sadeh @ 2013-02-06 18:15 UTC (permalink / raw)
To: Sage Weil; +Cc: ceph-devel
On Wed, Feb 6, 2013 at 9:25 AM, Sage Weil <sage@inktank.com> wrote:
> One of the goals for cuttlefish is to improve manageability of the
> system. This will involve both cleaning up the CLI and adding a REST API
> to do everything the CLI currently does. There are a few implementation
> choices to make.
>
> Currently the 'ceph' tool has a bunch of code to send messages to the
> monitor and wait for replies. This is 90% of what users currently can do.
> For the most part, the commands are interpreted by the monitor. A small
> subset of commands (ceph tell ..., ceph pg <pgid> ...) will send commands
> directly to OSDs.
>
>
> There are two main options for a REST endpoint that we've discussed so
> far:
>
> 1- Wrap the above in a clean library (probably integrating most of the
> code into Objecter/MonClient; see wip-monc for a start on this). Wrap
> libcephadmin in Python and make a simple HTTP/REST front-end. Users would
> deploy mgmt endpoints in addition to/alongside monitors and everything
> else, if they want the REST API at all.
>
> 2- Embed a web server in the ceph-mon daemons, and push the current admin
> 'client' functionality there. Come up with some basic authentication so
> that this doesn't break the current security model.
I'm in favor of the more modular and flexible approach, #1.
>
> Note that neither of these solves the HA issue directly; HTTP is a
> client/server protocol, so whoever is using the API can specify only one
> server endpoint. If it is a monitor, they'll need to be prepared to fail
> over to another in their code, or set up a load balancer. The same goes
> for the restful endpoint if it fails. The difference is that a single
> endpoint can proxy to whichever monitors are in quorum, so a much smaller
> set of errors (endpoint machine crash, buggy endpoint) affect availability
> of the API.
Right. However, that's really an orthogonal issue. It'll be easier
scaling the HTTP endpoints if they're decoupled from the monitors.
>
>
> The somewhat orthogonal question is how to clean up the CLI usage,
> parsing, and get equivalence in the new REST API.
>
> One option is to create a basic framework in the monitor so that there is
> a table of api commands. The 'parsing' would be regularized and validated
> in a generic way. The rest endpoint would pass the URI through in a
> generic way (sanitized json?) that can be matched against the same table.
>
> Another option is to support a single set of commands on the monitor side,
> and do the REST->CLI or CLI->REST or CLI,REST->json translation on the
> client side. The command registry framework above would live in the CLI
> utility and REST endpoints instead (or libcephadmin). This means that the
> monitor code is simpler, but also means that the libcephadmin or ceph tool
> and REST endpoint need to be running the latest code to be able to send
> the latest commands to the monitor. It also means multiple places where
> the command set is defined (mon and endpoint and cli).
I'd rather keep the clients dumb, not involve them with the actual
configuration logic. Will make our lives easier in the long run.
>
> Either way there is a fair bit of refactoring, but it should be a net
> cleanup. I'd like to get some consensus on how to proceed before we
> expend too much effort. Ian scheduled a quick chat Friday, so let's get
> any other suggestions or ideas out before then so we can move forward...
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
* Re: rest mgmt api
2013-02-06 18:15 ` Yehuda Sadeh
@ 2013-02-06 19:05 ` Wido den Hollander
2013-02-06 19:22 ` Dan Mick
2013-02-06 19:25 ` Joao Eduardo Luis
2 siblings, 0 replies; 17+ messages in thread
From: Wido den Hollander @ 2013-02-06 19:05 UTC (permalink / raw)
To: Yehuda Sadeh; +Cc: Sage Weil, ceph-devel
On 02/06/2013 07:15 PM, Yehuda Sadeh wrote:
> On Wed, Feb 6, 2013 at 9:25 AM, Sage Weil <sage@inktank.com> wrote:
>> One of the goals for cuttlefish is to improve manageability of the
>> system. This will involve both cleaning up the CLI and adding a REST API
>> to do everything the CLI currently does. There are a few implementation
>> choices to make.
>>
>> Currently the 'ceph' tool has a bunch of code to send messages to the
>> monitor and wait for replies. This is 90% of what users currently can do.
>> For the most part, the commands are interpreted by the monitor. A small
>> subset of commands (ceph tell ..., ceph pg <pgid> ...) will send commands
>> directly to OSDs.
>>
>>
>> There are two main options for a REST endpoint that we've discussed so
>> far:
>>
>> 1- Wrap the above in a clean library (probably integrating most of the
>> code into Objecter/MonClient; see wip-monc for a start on this). Wrap
>> libcephadmin in Python and make a simple HTTP/REST front-end. Users would
>> deploy mgmt endpoints in addition to/alongside monitors and everything
>> else, if they want the REST API at all.
>>
>> 2- Embed a web server in the ceph-mon daemons, and push the current admin
>> 'client' functionality there. Come up with some basic authentication so
>> that this doesn't break the current security model.
>
> I'm in favor of the more modular and flexible approach, #1.
>
+1, I'd go for #1. When there is a simple library that can do this stuff,
you're not tied to Python; one can also link directly against it or use it
from languages like Java or PHP.
Python can do the REST stuff, and you can load-balance over that.
>>
>> Note that neither of these solves the HA issue directly; HTTP is a
>> client/server protocol, so whoever is using the API can specify only one
>> server endpoint. If it is a monitor, they'll need to be prepared to fail
>> over to another in their code, or set up a load balancer. The same goes
>> for the restful endpoint if it fails. The difference is that a single
>> endpoint can proxy to whichever monitors are in quorum, so a much smaller
>> set of errors (endpoint machine crash, buggy endpoint) affect availability
>> of the API.
>
> Right. However, that's really an orthogonal issue. It'll be easier
> scaling the HTTP endpoints if they're decoupled from the monitors.
>
>>
>>
>> The somewhat orthogonal question is how to clean up the CLI usage,
>> parsing, and get equivalence in the new REST API.
>>
>> One option is to create a basic framework in the monitor so that there is
>> a table of api commands. The 'parsing' would be regularized and validated
>> in a generic way. The rest endpoint would pass the URI through in a
>> generic way (sanitized json?) that can be matched against the same table.
>>
>> Another option is to support a single set of commands on the monitor side,
>> and do the REST->CLI or CLI->REST or CLI,REST->json translation on the
>> client side. The command registry framework above would live in the CLI
>> utility and REST endpoints instead (or libcephadmin). This means that the
>> monitor code is simpler, but also means that the libcephadmin or ceph tool
>> and REST endpoint need to be running the latest code to be able to send
>> the latest commands to the monitor. It also means multiple places where
>> the command set is defined (mon and endpoint and cli).
>
> I'd rather keep the clients dumb, not involve them with the actual
> configuration logic. Will make our lives easier in the long run.
>
>>
>> Either way there is a fair bit of refactoring, but it should be a net
>> cleanup. I'd like to get some consensus on how to proceed before we
>> expend too much effort. Ian scheduled a quick chat Friday, so let's get
>> any other suggestions or ideas out before then so we can move forward...
* Re: rest mgmt api
2013-02-06 18:15 ` Yehuda Sadeh
2013-02-06 19:05 ` Wido den Hollander
@ 2013-02-06 19:22 ` Dan Mick
2013-02-06 19:25 ` Joao Eduardo Luis
2 siblings, 0 replies; 17+ messages in thread
From: Dan Mick @ 2013-02-06 19:22 UTC (permalink / raw)
To: Yehuda Sadeh; +Cc: Sage Weil, ceph-devel
On 02/06/2013 10:15 AM, Yehuda Sadeh wrote:
> On Wed, Feb 6, 2013 at 9:25 AM, Sage Weil <sage@inktank.com> wrote:
>> One of the goals for cuttlefish is to improve manageability of the
>> system. This will involve both cleaning up the CLI and adding a REST API
>> to do everything the CLI currently does. There are a few implementation
>> choices to make.
>>
>> Currently the 'ceph' tool has a bunch of code to send messages to the
>> monitor and wait for replies. This is 90% of what users currently can do.
>> For the most part, the commands are interpreted by the monitor. A small
>> subset of commands (ceph tell ..., ceph pg <pgid> ...) will send commands
>> directly to OSDs.
>>
>>
>> There are two main options for a REST endpoint that we've discussed so
>> far:
>>
>> 1- Wrap the above in a clean library (probably integrating most of the
>> code into Objecter/MonClient; see wip-monc for a start on this). Wrap
>> libcephadmin in Python and make a simple HTTP/REST front-end. Users would
>> deploy mgmt endpoints in addition to/alongside monitors and everything
>> else, if they want the REST API at all.
>>
>> 2- Embed a web server in the ceph-mon daemons, and push the current admin
>> 'client' functionality there. Come up with some basic authentication so
>> that this doesn't break the current security model.
>
> I'm in favor of the more modular and flexible approach, #1.
The more I think about it, the more I am too. I can imagine admin
flexibility being key: not just redundancy/failover, but also access
perms, and the more independent the REST channel is from the normal auth
channels, the better, I think.
>> Note that neither of these solves the HA issue directly; HTTP is a
>> client/server protocol, so whoever is using the API can specify only one
>> server endpoint. If it is a monitor, they'll need to be prepared to fail
>> over to another in their code, or set up a load balancer. The same goes
>> for the restful endpoint if it fails. The difference is that a single
>> endpoint can proxy to whichever monitors are in quorum, so a much smaller
>> set of errors (endpoint machine crash, buggy endpoint) affect availability
>> of the API.
>
> Right. However, that's really an orthogonal issue. It'll be easier
> scaling the HTTP endpoints if they're decoupled from the monitors.
and easier to rewrite/redeploy, and authenticate, etc.
>> The somewhat orthogonal question is how to clean up the CLI usage,
>> parsing, and get equivalence in the new REST API.
>>
>> One option is to create a basic framework in the monitor so that there is
>> a table of api commands. The 'parsing' would be regularized and validated
>> in a generic way. The rest endpoint would pass the URI through in a
>> generic way (sanitized json?) that can be matched against the same table.
>>
>> Another option is to support a single set of commands on the monitor side,
>> and do the REST->CLI or CLI->REST or CLI,REST->json translation on the
>> client side. The command registry framework above would live in the CLI
>> utility and REST endpoints instead (or libcephadmin). This means that the
>> monitor code is simpler, but also means that the libcephadmin or ceph tool
>> and REST endpoint need to be running the latest code to be able to send
>> the latest commands to the monitor. It also means multiple places where
>> the command set is defined (mon and endpoint and cli).
>
> I'd rather keep the clients dumb, not involve them with the actual
> configuration logic. Will make our lives easier in the long run.
Yes, and I really really want to simplify/table-ify the command parsing
code.
* Re: rest mgmt api
2013-02-06 18:15 ` Yehuda Sadeh
2013-02-06 19:05 ` Wido den Hollander
2013-02-06 19:22 ` Dan Mick
@ 2013-02-06 19:25 ` Joao Eduardo Luis
2013-02-06 19:34 ` Sage Weil
2013-02-06 19:36 ` Dan Mick
2 siblings, 2 replies; 17+ messages in thread
From: Joao Eduardo Luis @ 2013-02-06 19:25 UTC (permalink / raw)
To: Yehuda Sadeh; +Cc: Sage Weil, ceph-devel
On 02/06/2013 06:15 PM, Yehuda Sadeh wrote:
> On Wed, Feb 6, 2013 at 9:25 AM, Sage Weil <sage@inktank.com> wrote:
>> One of the goals for cuttlefish is to improve manageability of the
>> system. This will involve both cleaning up the CLI and adding a REST API
>> to do everything the CLI currently does. There are a few implementation
>> choices to make.
>>
>> Currently the 'ceph' tool has a bunch of code to send messages to the
>> monitor and wait for replies. This is 90% of what users currently can do.
>> For the most part, the commands are interpreted by the monitor. A small
>> subset of commands (ceph tell ..., ceph pg <pgid> ...) will send commands
>> directly to OSDs.
>>
>>
>> There are two main options for a REST endpoint that we've discussed so
>> far:
>>
>> 1- Wrap the above in a clean library (probably integrating most of the
>> code into Objecter/MonClient; see wip-monc for a start on this). Wrap
>> libcephadmin in Python and make a simple HTTP/REST front-end. Users would
>> deploy mgmt endpoints in addition to/alongside monitors and everything
>> else, if they want the REST API at all.
>>
>> 2- Embed a web server in the ceph-mon daemons, and push the current admin
>> 'client' functionality there. Come up with some basic authentication so
>> that this doesn't break the current security model.
>
> I'm in favor of the more modular and flexible approach, #1.
Also in favour of #1 rather than #2. I believe the monitors should be
as light as possible, and embedding a web server is going in the
opposite direction. Also, relying on libraries to interface with the
monitors allows the users to employ them however they see fit.
>> Note that neither of these solves the HA issue directly; HTTP is a
>> client/server protocol, so whoever is using the API can specify only one
>> server endpoint. If it is a monitor, they'll need to be prepared to fail
>> over to another in their code, or set up a load balancer. The same goes
>> for the restful endpoint if it fails. The difference is that a single
>> endpoint can proxy to whichever monitors are in quorum, so a much smaller
>> set of errors (endpoint machine crash, buggy endpoint) affect availability
>> of the API.
>
> Right. However, that's really an orthogonal issue. It'll be easier
> scaling the HTTP endpoints if they're decoupled from the monitors.
Again, I'm with Yehuda on this one. And I would also assume that by
relying on a library, instead of on an embedded web server, this could be
smoothed over by letting the library's monclient hunt for a new monitor,
thus freeing whatever is in the middle from taking on this responsibility.
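A minimal sketch of that division of labor (the `MonClient` class and `connect` callback below are stand-ins, not the real Ceph monclient API): the library retries across the configured monitors, so the REST layer above it never handles monitor failover itself.

```python
import random

class MonClient:
    """Illustrative mon client that 'hunts' for a reachable monitor."""

    def __init__(self, mon_addrs):
        self.mon_addrs = list(mon_addrs)

    def send_command(self, cmd, connect):
        # Try monitors in random order until one accepts the command;
        # callers only see an error if every monitor is unreachable.
        last_err = None
        for addr in random.sample(self.mon_addrs, len(self.mon_addrs)):
            try:
                return connect(addr, cmd)
            except ConnectionError as err:
                last_err = err  # down or out of quorum; keep hunting
        raise last_err
```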
>> The somewhat orthogonal question is how to clean up the CLI usage,
>> parsing, and get equivalence in the new REST API.
>>
>> One option is to create a basic framework in the monitor so that there is
>> a table of api commands. The 'parsing' would be regularized and validated
>> in a generic way. The rest endpoint would pass the URI through in a
>> generic way (sanitized json?) that can be matched against the same table.
>>
>> Another option is to support a single set of commands on the monitor side,
>> and do the REST->CLI or CLI->REST or CLI,REST->json translation on the
>> client side. The command registry framework above would live in the CLI
>> utility and REST endpoints instead (or libcephadmin). This means that the
>> monitor code is simpler, but also means that the libcephadmin or ceph tool
>> and REST endpoint need to be running the latest code to be able to send
>> the latest commands to the monitor. It also means multiple places where
>> the command set is defined (mon and endpoint and cli).
>
> I'd rather keep the clients dumb, not involve them with the actual
> configuration logic. Will make our lives easier in the long run.
I'm all for keeping clients dumb, but I believe that the responsibility
of outputting in a human-readable format should be theirs.
My take on this is to keep the current behaviour (the client issues a
command and the monitor handles it as it sees fit), but all
communication should be done in JSON, either to or from the monitors.
This would allow us to provide more information on each result, get
rid of all the annoying formatting in the reply messages, and simplify
a great deal of code on the monitor end by removing the silly need to
return either plain text or JSON. We would then let the client-side
libraries deal with converting it to whichever format the user wants
(plain text, XML, w/e). As for new commands on the monitor that are
not present in the library, replies to said commands could then be
presented just in JSON, or we could come up with a standardized way to
always convert JSON into a human-readable format/any other format.
This would, however, mean being able to parse JSON on the monitors
(which we do not currently do, although we do produce JSON output). I
can't say I have strong feelings about the client->monitor
communication being done in JSON, but for the sake of coherency I do
think it would be best.
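A toy sketch of that split (the reply fields and helper name are assumptions, not a real Ceph schema): the monitor returns only JSON, and a client-side helper renders it for humans.

```python
import json

def render(reply_json, fmt="plain"):
    """Convert a JSON reply into the format the user asked for."""
    data = json.loads(reply_json)
    if fmt == "json":
        return json.dumps(data, indent=2)
    if fmt == "plain":
        # One "key: value" line per field, in a stable order.
        return "\n".join("%s: %s" % (k, v) for k, v in sorted(data.items()))
    raise ValueError("unsupported format: %s" % fmt)

print(render('{"health": "HEALTH_OK", "num_osds": 3}'))
```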
-Joao
* Re: rest mgmt api
2013-02-06 19:25 ` Joao Eduardo Luis
@ 2013-02-06 19:34 ` Sage Weil
2013-02-06 19:51 ` Dimitri Maziuk
2013-02-06 19:36 ` Dan Mick
1 sibling, 1 reply; 17+ messages in thread
From: Sage Weil @ 2013-02-06 19:34 UTC (permalink / raw)
To: Joao Eduardo Luis; +Cc: Yehuda Sadeh, ceph-devel
On Wed, 6 Feb 2013, Joao Eduardo Luis wrote:
> On 02/06/2013 06:15 PM, Yehuda Sadeh wrote:
> > On Wed, Feb 6, 2013 at 9:25 AM, Sage Weil <sage@inktank.com> wrote:
> > > One of the goals for cuttlefish is to improve manageability of the
> > > system. This will involve both cleaning up the CLI and adding a REST API
> > > to do everything the CLI currently does. There are a few implementation
> > > choices to make.
> > >
> > > Currently the 'ceph' tool has a bunch of code to send messages to the
> > > monitor and wait for replies. This is 90% of what users currently can do.
> > > For the most part, the commands are interpreted by the monitor. A small
> > > subset of commands (ceph tell ..., ceph pg <pgid> ...) will send commands
> > > directly to OSDs.
> > >
> > >
> > > There are two main options for a REST endpoint that we've discussed so
> > > far:
> > >
> > > 1- Wrap the above in a clean library (probably integrating most of the
> > > code into Objecter/MonClient; see wip-monc for a start on this). Wrap
> > > libcephadmin in Python and make a simple HTTP/REST front-end. Users would
> > > deploy mgmt endpoints in addition to/alongside monitors and everything
> > > else, if they want the REST API at all.
> > >
> > > 2- Embed a web server in the ceph-mon daemons, and push the current admin
> > > 'client' functionality there. Come up with some basic authentication so
> > > that this doesn't break the current security model.
> >
> > I'm in favor of the more modular and flexible approach, #1.
>
> Also in favour of #1 rather than #2. I believe the monitors should be as
> light as possible, and embedding a web server is going in the opposite
> direction. Also, relying on libraries to interface with the monitors allows
> the users to employ them however they see fit.
>
> > > Note that neither of these solves the HA issue directly; HTTP is a
> > > client/server protocol, so whoever is using the API can specify only one
> > > server endpoint. If it is a monitor, they'll need to be prepared to fail
> > > over to another in their code, or set up a load balancer. The same goes
> > > for the restful endpoint if it fails. The difference is that a single
> > > endpoint can proxy to whichever monitors are in quorum, so a much smaller
> > > set of errors (endpoint machine crash, buggy endpoint) affect availability
> > > of the API.
> >
> > Right. However, that's really an orthogonal issue. It'll be easier
> > scaling the HTTP endpoints if they're decoupled from the monitors.
>
> Again, I'm with Yehuda on this one. And I would also assume that by relying on
> a library, instead of on an embedded web server, this could be smoothed over by
> letting the library's monclient hunt for a new monitor, thus freeing whatever
> is in the middle from taking on this responsibility.
>
> > > The somewhat orthogonal question is how to clean up the CLI usage,
> > > parsing, and get equivalence in the new REST API.
> > >
> > > One option is to create a basic framework in the monitor so that there is
> > > a table of api commands. The 'parsing' would be regularized and validated
> > > in a generic way. The rest endpoint would pass the URI through in a
> > > generic way (sanitized json?) that can be matched against the same table.
> > >
> > > Another option is to support a single set of commands on the monitor side,
> > > and do the REST->CLI or CLI->REST or CLI,REST->json translation on the
> > > client side. The command registry framework above would live in the CLI
> > > utility and REST endpoints instead (or libcephadmin). This means that the
> > > monitor code is simpler, but also means that the libcephadmin or ceph tool
> > > and REST endpoint need to be running the latest code to be able to send
> > > the latest commands to the monitor. It also means multiple places where
> > > the command set is defined (mon and endpoint and cli).
> >
> > I'd rather keep the clients dumb, not involve them with the actual
> > configuration logic. Will make our lives easier in the long run.
>
> I'm all for keeping clients dumb, but I believe that the responsibility of
> outputting in a human-readable format should be theirs.
>
> My take on this is to keep the current behaviour (the client issues a command
> and the monitor handles it as it sees fit), but all communication should be
> done in JSON, either to or from the monitors. This would allow us to provide
> more information on each result, get rid of all the annoying formatting in
> the reply messages, and simplify a great deal of code on the monitor end by
> removing the silly need to return either plain text or JSON. We would then
> let the client-side libraries deal with converting it to whichever format
> the user wants (plain text, XML, w/e). As for new commands on the monitor
> that are not present in the library, replies to said commands could then be
> presented just in JSON, or we could come up with a standardized way to
> always convert JSON into a human-readable format/any other format.
One idea related to this: create a PlainFormatter so that all of the
monitor-side code is generic, but pass the format (xml, json, plain) as a
field in the request. That way the client doesn't need to re-format.
Although just making JSON prettier on the client side (something that
looks more like YAML, perhaps) might be pretty easy too. We already have
the Formatter abstraction, though, so it would be less work to get there,
albeit with a slightly more complex wire protocol.
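The shape of that idea might look like the following (names invented here; Ceph's real Formatter is a C++ class, and the result fields are placeholders): the request carries a format field, and the monitor picks the matching formatter.

```python
import json

class JSONFormatter:
    def dump(self, fields):
        return json.dumps(fields)

class PlainFormatter:
    def dump(self, fields):
        # "key value" per line, the way CLI users would expect to read it.
        return "\n".join("%s %s" % (k, v) for k, v in fields.items())

FORMATTERS = {"json": JSONFormatter, "plain": PlainFormatter}

def handle_command(request):
    """Run a command and format the reply as the request asked."""
    formatter = FORMATTERS[request.get("format", "plain")]()
    fields = {"epoch": 42, "num_mons": 3}  # placeholder command result
    return formatter.dump(fields)

print(handle_command({"cmd": "mon stat", "format": "plain"}))
```

The client then receives its preferred representation directly, with no re-formatting step.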
> This would, however, mean being able to parse JSON on the monitors (which
> we do not currently do, although we do produce JSON output). I can't say I
> have strong feelings about the client->monitor communication being done in
> JSON, but for the sake of coherency I do think it would be best.
We're (about to be) doing this already in radosgw and a handful of other
places, so the json parsing shouldn't be a problem.
I think the one caveat here is that having a single registry for commands
in the monitor means that commands can come in two flavors: vector<string>
(cli) and URL (presumably in json form). But a single command
dispatch/registry framework will make that distinction pretty simple...
sage
* Re: rest mgmt api
2013-02-06 19:25 ` Joao Eduardo Luis
2013-02-06 19:34 ` Sage Weil
@ 2013-02-06 19:36 ` Dan Mick
1 sibling, 0 replies; 17+ messages in thread
From: Dan Mick @ 2013-02-06 19:36 UTC (permalink / raw)
To: Joao Eduardo Luis; +Cc: Yehuda Sadeh, Sage Weil, ceph-devel
> My take on this is to keep the current behaviour (the client issues a
> command and the monitor handles it as it sees fit), but all
> communication should be done in JSON, either to or from the monitors.
> This would allow us to provide more information on each result, get
> rid of all the annoying formatting in the reply messages, and simplify
> a great deal of code on the monitor end by removing the silly need to
> return either plain text or JSON. We would then let the client-side
> libraries deal with converting it to whichever format the user wants
> (plain text, XML, w/e). As for new commands on the monitor that are
> not present in the library, replies to said commands could then be
> presented just in JSON, or we could come up with a standardized way to
> always convert JSON into a human-readable format/any other format.
>
> This would, however, mean being able to parse JSON on the monitors
> (which we do not currently do, although we do produce JSON output). I
> can't say I have strong feelings about the client->monitor
> communication being done in JSON, but for the sake of coherency I do
> think it would be best.
Sage and I talked briefly about this yesterday; my take is that the
input path doesn't do very much mechanical validation at all; most of it
is semantic and based on partial results (i.e. "you can't ask for that
from this object" or "that object you want to operate on doesn't
exist"), so I'm not sure we win much from JSON input. There are a few
places where numbers need to be validated, but that's about it; the
input is not complex, it's pretty much just a list of words for the most
part. But there's also no reason to rule it out for future, perhaps
more complex, command inputs.
I think having the commands be self-describing and table-driven will
solve a whole lot of ugliness, and one of the self-description vectors
can be "this parameter should be validated in this standard way".
* Re: rest mgmt api
2013-02-06 19:34 ` Sage Weil
@ 2013-02-06 19:51 ` Dimitri Maziuk
2013-02-06 20:14 ` Sage Weil
0 siblings, 1 reply; 17+ messages in thread
From: Dimitri Maziuk @ 2013-02-06 19:51 UTC (permalink / raw)
To: Sage Weil; +Cc: ceph-devel
On 02/06/2013 01:34 PM, Sage Weil wrote:
> I think the one caveat here is that having a single registry for commands
> in the monitor means that commands can come in two flavors: vector<string>
> (cli) and URL (presumably in json form). But a single command
> dispatch/registry framework will make that distinction pretty simple...
Any reason you can't have your CLI json-encode the commands (or,
conversely, your cgi/wsgi/php/servlet URL handler decode them into
vector<string>) before passing them on to the monitor?
--
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu
* Re: rest mgmt api
2013-02-06 19:51 ` Dimitri Maziuk
@ 2013-02-06 20:14 ` Sage Weil
2013-02-06 21:19 ` Dan Mick
` (2 more replies)
0 siblings, 3 replies; 17+ messages in thread
From: Sage Weil @ 2013-02-06 20:14 UTC (permalink / raw)
To: Dimitri Maziuk; +Cc: ceph-devel
On Wed, 6 Feb 2013, Dimitri Maziuk wrote:
> On 02/06/2013 01:34 PM, Sage Weil wrote:
>
> > I think the one caveat here is that having a single registry for commands
> > in the monitor means that commands can come in two flavors: vector<string>
> > (cli) and URL (presumably in json form). But a single command
> > dispatch/registry framework will make that distinction pretty simple...
>
> Any reason you can't have your CLI json-encode the commands (or,
> conversely, your cgi/wsgi/php/servlet URL handler decode them into
> vector<string>) before passing them on to the monitor?
We can, but they won't necessarily look the same, because it is unlikely
we can make a sane 1:1 translation of the CLI to REST, and it would be
nice to avoid baking knowledge about the individual commands into the
client side.
ceph osd pool create <poolname> <numpgs>
vs
/osd/pool/?op=create&poolname=foo&numpgs=bar
or whatever. I know next to nothing about REST API design best practices,
but I'm guessing it doesn't look like a CLI.
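For concreteness, a minimal Python sketch of the translation problem; the parameter names and mapping below are invented for illustration, not an actual Ceph API:

```python
# Hypothetical sketch: translating a CLI command vector into a REST-style
# URL. The positional-to-named mapping is per-command knowledge, which is
# exactly what we'd rather not duplicate on the client side.
from urllib.parse import urlencode

def cli_to_rest(argv):
    if argv[:3] == ['osd', 'pool', 'create']:
        params = {'op': 'create', 'poolname': argv[3], 'numpgs': argv[4]}
        return '/osd/pool/?' + urlencode(params)
    raise ValueError('unknown command: %r' % (argv,))

print(cli_to_rest(['osd', 'pool', 'create', 'foo', '8']))
# /osd/pool/?op=create&poolname=foo&numpgs=8
```

Every command needs its own clause like the one above, which is the "baking knowledge into the client" problem in miniature.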
sage
* Re: rest mgmt api
2013-02-06 20:14 ` Sage Weil
@ 2013-02-06 21:19 ` Dan Mick
2013-02-06 22:20 ` Dimitri Maziuk
2013-02-11 20:04 ` Gregory Farnum
2 siblings, 0 replies; 17+ messages in thread
From: Dan Mick @ 2013-02-06 21:19 UTC (permalink / raw)
To: Sage Weil; +Cc: Dimitri Maziuk, ceph-devel
On 02/06/2013 12:14 PM, Sage Weil wrote:
> On Wed, 6 Feb 2013, Dimitri Maziuk wrote:
>> On 02/06/2013 01:34 PM, Sage Weil wrote:
>>
>>> I think the one caveat here is that having a single registry for commands
>>> in the monitor means that commands can come in two flavors: vector<string>
>>> (cli) and URL (presumably in json form). But a single command
>>> dispatch/registry framework will make that distinction pretty simple...
>>
>> Any reason you can't have your CLI json-encode the commands (or,
>> conversely, your cgi/wsgi/php/servlet URL handler decode them into
>> vector<string>) before passing them on to the monitor?
>
> We can, but they won't necessarily look the same, because it is unlikely
> we can make a sane 1:1 translation of the CLI to REST that makes sense,
> and it would be nice to avoid baking knowledge about the individual
> commands into the client side.
>
> ceph osd pool create <poolname> <numpgs>
> vs
> /osd/pool/?op=create&poolname=foo&numpgs=bar
>
> or whatever. I know next to nothing about REST API design best practices,
> but I'm guessing it doesn't look like a CLI.
>
> sage
Well we could easily package, say, "mon getmap" into a json
vector-of-strings for transmission as the http payload of the PUT
request, *or* encode them as a simple "s1=mon&s2=getmap", but again, the
data format is so simple that I'm not sure it buys much. We definitely
want all the interpretation centralized in the daemons, I think, so this
is just string marshalling however we choose to do it.
* Re: rest mgmt api
2013-02-06 20:14 ` Sage Weil
2013-02-06 21:19 ` Dan Mick
@ 2013-02-06 22:20 ` Dimitri Maziuk
2013-02-07 1:45 ` Jeff Mitchell
2013-02-11 20:04 ` Gregory Farnum
2 siblings, 1 reply; 17+ messages in thread
From: Dimitri Maziuk @ 2013-02-06 22:20 UTC (permalink / raw)
To: ceph-devel
[-- Attachment #1: Type: text/plain, Size: 1356 bytes --]
On 02/06/2013 02:14 PM, Sage Weil wrote:
> On Wed, 6 Feb 2013, Dimitri Maziuk wrote:
>> Any reason you can't have your CLI json-encode the commands (or,
>> conversely, your cgi/wsgi/php/servlet URL handler decode them into
>> vector<string>) before passing them on to the monitor?
>
> We can, but they won't necessarily look the same, because it is unlikely
> we can make a sane 1:1 translation of the CLI to REST that makes sense,
> and it would be nice to avoid baking knowledge about the individual
> commands into the client side.
>
> ceph osd pool create <poolname> <numpgs>
> vs
> /osd/pool/?op=create&poolname=foo&numpgs=bar
>
> or whatever. I know next to nothing about REST API design best practices,
> but I'm guessing it doesn't look like a CLI.
(Last I looked "?op=create&poolname=foo" was the Old Busted CGI, The New
Shiny Hotness(tm) was supposed to look like "/create/foo" -- and I never
understood how the optional parameters are supposed to work. But that's
beside the point.)
To me it sounded like you meant that the piece that actually does the
work (the daemon?) should understand both (and have a built-in httpd on
top). What I meant is that it should know just one and let the UI
modules do the translation.
--
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu
* Re: rest mgmt api
2013-02-06 22:20 ` Dimitri Maziuk
@ 2013-02-07 1:45 ` Jeff Mitchell
0 siblings, 0 replies; 17+ messages in thread
From: Jeff Mitchell @ 2013-02-07 1:45 UTC (permalink / raw)
To: Dimitri Maziuk; +Cc: ceph-devel
Dimitri Maziuk wrote:
> (Last I looked "?op=create&poolname=foo" was the Old Busted CGI, The New
> Shiny Hotness(tm) was supposed to look like "/create/foo" -- and I never
> understood how the optional parameters are supposed to work. But that's
> beside the point.)
They're different. One is using the path to interpret functionality; one is using query parameters. The former requires custom path parsing/interpreting code for your particular application; the latter is a very well supported/understood way of getting key/value pairs.
Neither is right or wrong, they're just different. People seem to prefer the path method these days because it seems cleaner/nicer; the other thing people do is just POST the parameters instead of GETting, which lets you still use key/value parameters but not have an ugly URL.
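A stdlib sketch of the two styles, to make the difference concrete (the URLs are just the examples from this thread):

```python
from urllib.parse import urlparse, parse_qs

# Query-parameter style: generic key/value parsing, no app-specific code.
q = urlparse('/osd/pool/?op=create&poolname=foo&numpgs=8')
params = {k: v[0] for k, v in parse_qs(q.query).items()}
# params == {'op': 'create', 'poolname': 'foo', 'numpgs': '8'}

# Path style: the application decides what each segment means.
segments = urlparse('/pool/create/foo').path.strip('/').split('/')
area, verb, name = segments  # positional interpretation is per-application
```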
--Jeff
* Re: rest mgmt api
2013-02-06 20:14 ` Sage Weil
2013-02-06 21:19 ` Dan Mick
2013-02-06 22:20 ` Dimitri Maziuk
@ 2013-02-11 20:04 ` Gregory Farnum
2013-02-11 22:00 ` Sage Weil
2 siblings, 1 reply; 17+ messages in thread
From: Gregory Farnum @ 2013-02-11 20:04 UTC (permalink / raw)
To: Sage Weil; +Cc: Dimitri Maziuk, ceph-devel
On Wed, Feb 6, 2013 at 12:14 PM, Sage Weil <sage@inktank.com> wrote:
> On Wed, 6 Feb 2013, Dimitri Maziuk wrote:
>> On 02/06/2013 01:34 PM, Sage Weil wrote:
>>
>> > I think the one caveat here is that having a single registry for commands
>> > in the monitor means that commands can come in two flavors: vector<string>
>> > (cli) and URL (presumably in json form). But a single command
>> > dispatch/registry framework will make that distinction pretty simple...
>>
>> Any reason you can't have your CLI json-encode the commands (or,
>> conversely, your cgi/wsgi/php/servlet URL handler decode them into
>> vector<string>) before passing them on to the monitor?
>
> We can, but they won't necessarily look the same, because it is unlikely
> we can make a sane 1:1 translation of the CLI to REST that makes sense,
> and it would be nice to avoid baking knowledge about the individual
> commands into the client side.
I disagree and am with Joao on this one — the monitor parsing is
ridiculous as it stands right now, and we should be trying to get rid
of the manual string parsing. The monitors should be parsing JSON
commands that are sent by the client; it makes validation and the
logic control flow a lot easier. We're going to want some level of
intelligence in the clients so that they can tailor themselves to the
appropriate UI conventions, and having two different parsing paths in
the monitors is just asking for trouble: they will get out of sync and
have different kinds of parsing errors.
What we could do is have the monitors speak JSON only, and then give
the clients a minimal intelligence so that the CLI could (for
instance) prettify the options for commands it knows about, but still
allow pass-through for access to newer commands it hasn't yet heard
of.
-Greg
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
* Re: rest mgmt api
2013-02-11 20:04 ` Gregory Farnum
@ 2013-02-11 22:00 ` Sage Weil
2013-02-11 22:38 ` Dimitri Maziuk
2013-02-11 22:41 ` Gregory Farnum
0 siblings, 2 replies; 17+ messages in thread
From: Sage Weil @ 2013-02-11 22:00 UTC (permalink / raw)
To: Gregory Farnum; +Cc: Dimitri Maziuk, ceph-devel
On Mon, 11 Feb 2013, Gregory Farnum wrote:
> On Wed, Feb 6, 2013 at 12:14 PM, Sage Weil <sage@inktank.com> wrote:
> > On Wed, 6 Feb 2013, Dimitri Maziuk wrote:
> >> On 02/06/2013 01:34 PM, Sage Weil wrote:
> >>
> >> > I think the one caveat here is that having a single registry for commands
> >> > in the monitor means that commands can come in two flavors: vector<string>
> >> > (cli) and URL (presumably in json form). But a single command
> >> > dispatch/registry framework will make that distinction pretty simple...
> >>
> >> Any reason you can't have your CLI json-encode the commands (or,
> >> conversely, your cgi/wsgi/php/servlet URL handler decode them into
> >> vector<string>) before passing them on to the monitor?
> >
> > We can, but they won't necessarily look the same, because it is unlikely
> > we can make a sane 1:1 translation of the CLI to REST that makes sense,
> > and it would be nice to avoid baking knowledge about the individual
> > commands into the client side.
>
> I disagree and am with Joao on this one — the monitor parsing is
> ridiculous as it stands right now, and we should be trying to get rid
> of the manual string parsing. The monitors should be parsing JSON
> commands that are sent by the client; it makes validation and the
No argument that the current parsing code is bad...
> logic control flow a lot easier. We're going to want some level of
> intelligence in the clients so that they can tailor themselves to the
> appropriate UI conventions, and having two different parsing paths in
What do you mean by tailor to UI conventions?
> the monitors is just asking for trouble: they will get out of sync and
> have different kinds of parsing errors.
>
> What we could do is have the monitors speak JSON only, and then give
> the clients a minimal intelligence so that the CLI could (for
> instance) prettify the options for commands it knows about, but still
> allow pass-through for access to newer commands it hasn't yet heard
> of.
That doesn't really help; it means the mon still has to understand the
CLI grammar.
What we are talking about is the difference between:
[ 'osd', 'down', '123' ]
and
{
URI: '/osd/down',
OSD-Id: 123
}
or however we generically translate the HTTP request into JSON. Once we
normalize the code, calling it "parsing" is probably misleading. The top
(CLI) fragment will match against a rule like:
[ STR("osd"), STR("down"), POSINT ]
or however we encode the syntax, while the below would match against
{ .prefix = "/osd/down",
.fields = [ "OSD-Id": POSINT ]
}
..or something. I'm making this syntax up, but you get the idea: there
would be a strict format for the request and generic code that validates
it and passes the resulting arguments/matches into a function like
int do_command_osd_down(int n);
regardless of which type of input pattern it matched.
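A rough Python sketch of that single-registry idea; the rule syntax, names, and handler are invented here, in the same spirit as the made-up syntax above:

```python
# One registry entry validates both input forms and feeds the same
# typed handler, regardless of which pattern matched.
def POSINT(s):
    n = int(s)
    if n < 0:
        raise ValueError('not a positive integer: %r' % s)
    return n

def do_command_osd_down(n):
    return 'marked osd.%d down' % n

RULES = [
    # (cli prefix, uri prefix, field name, validator, handler)
    (['osd', 'down'], '/osd/down', 'OSD-Id', POSINT, do_command_osd_down),
]

def dispatch_cli(argv):
    for prefix, _, _, validate, handler in RULES:
        if argv[:len(prefix)] == prefix:
            return handler(validate(argv[len(prefix)]))
    raise ValueError('no matching command: %r' % (argv,))

def dispatch_rest(uri, fields):
    for _, prefix, field, validate, handler in RULES:
        if uri == prefix:
            return handler(validate(fields[field]))
    raise ValueError('no matching command: %r' % uri)

# Both inputs reach the same typed handler:
assert dispatch_cli(['osd', 'down', '123']) == 'marked osd.123 down'
assert dispatch_rest('/osd/down', {'OSD-Id': '123'}) == 'marked osd.123 down'
```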
Obviously we'll need 100% testing coverage for both the RESTful and CLI
variants, whether we do the above or whether the CLI is translating one
into the other via duplicated knowledge of the command set.
FWIW you could pass the CLI command as JSON, but that's no different than
encoding vector<string>; it's still a different way of describing the same
command.
If the parsing code is wrapped in a single library that validates typed
fields or positional arguments/flags, I don't think this is going to turn
into anything remotely like the same wild-west horror that the current
code represents. And if we were building this from scratch with no
legacy, I'd argue that the same model is still pretty good... unless we
recast the entire CLI in terms of a generic URI+field model that
matches the REST API perfectly.
Now... if that is the route we want to go, that is another choice. We could:
- redesign a fresh CLI with commands like
ceph /osd/123 mark=down
ceph /pool/foo create pg_num=123
- make this a programmatic transformation to/from a REST request, like
/osd/123?command=mark&status=down
/pool/foo?command=create&pg_num=123
(or whatever the REST requests are "supposed" to look like)
- hard-code a client-side mapping for legacy commands only
- only add new commands in the new syntax
That means retraining users and only adding new commands in the new model
of things. And dreaming up said model...
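A sketch of how mechanical that transformation could be, using the hypothetical syntax from the bullets above:

```python
# Purely mechanical CLI-to-REST transform: path, verb, then key=value
# pairs. No per-command knowledge needed on the client side.
from urllib.parse import urlencode

def cli_to_request(argv):
    # ['/pool/foo', 'create', 'pg_num=123']
    #   -> '/pool/foo?command=create&pg_num=123'
    path, verb, *pairs = argv
    params = dict(p.split('=', 1) for p in pairs)
    return path + '?' + urlencode({'command': verb, **params})

print(cli_to_request(['/pool/foo', 'create', 'pg_num=123']))
# /pool/foo?command=create&pg_num=123
```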
sage
* Re: rest mgmt api
2013-02-11 22:00 ` Sage Weil
@ 2013-02-11 22:38 ` Dimitri Maziuk
2013-02-11 22:41 ` Gregory Farnum
1 sibling, 0 replies; 17+ messages in thread
From: Dimitri Maziuk @ 2013-02-11 22:38 UTC (permalink / raw)
To: Sage Weil; +Cc: Gregory Farnum, ceph-devel
On 02/11/2013 04:00 PM, Sage Weil wrote:
> On Mon, 11 Feb 2013, Gregory Farnum wrote:
...
> That doesn't really help; it means the mon still has to understand the
> CLI grammar.
>
> What we are talking about is the difference between:
>
> [ 'osd', 'down', '123' ]
>
> and
>
> {
> URI: '/osd/down',
> OSD-Id: 123
> }
>
> or however we generically translate the HTTP request into JSON.
I think the setup we have in mind is where the MON reads something like
{"who": "osd", "which": "123", "what": "down", "when": "now"} from a socket
(pipe, whatever),
the CLI reads "osd down 123 now" from the prompt and pushes {"who": "osd",
"which": "123", "what": "down", "when": "now"} into that socket,
the webapp gets whatever: "/osd/down/123/now" or
"?who=osd&command=down&id=123&when=now" from whoever impersonates the
browser and pipes {"who": "osd", "which": "123", "what": "down",
"when": "now"} into that same socket,
and all three are completely separate applications that don't try to do
what they don't need to.
> FWIW you could pass the CLI command as JSON, but that's no different than
> encoding vector<string>; it's still a different way to describing the same
> command.
The devil is of course in the details: in (e.g.) Python, json.loads()
parses the string and gives you the map you could plug into a lookup
table or something to get right to the function call. My C++ is way
rusty and I've no idea what's available in boost & co -- if you have to
roll your own JSON parser then you indeed don't care how that
vector<string> is encoded.
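A minimal sketch of that Python path, with an invented dispatch table keyed on the (who, what) pair; the wire string is the example from this message:

```python
import json

# Invented dispatch table: json.loads hands back a dict that goes
# straight into a lookup, no custom parsing in between.
HANDLERS = {
    ('osd', 'down'): lambda m: 'osd.%s marked down' % m['which'],
}

msg = json.loads('{"who": "osd", "which": "123", "what": "down", "when": "now"}')
result = HANDLERS[(msg['who'], msg['what'])](msg)
# result == 'osd.123 marked down'
```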
--
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu
* Re: rest mgmt api
2013-02-11 22:00 ` Sage Weil
2013-02-11 22:38 ` Dimitri Maziuk
@ 2013-02-11 22:41 ` Gregory Farnum
2013-02-12 0:51 ` Sage Weil
1 sibling, 1 reply; 17+ messages in thread
From: Gregory Farnum @ 2013-02-11 22:41 UTC (permalink / raw)
To: Sage Weil; +Cc: ceph-devel
On Mon, Feb 11, 2013 at 2:00 PM, Sage Weil <sage@inktank.com> wrote:
> On Mon, 11 Feb 2013, Gregory Farnum wrote:
>> On Wed, Feb 6, 2013 at 12:14 PM, Sage Weil <sage@inktank.com> wrote:
>> > On Wed, 6 Feb 2013, Dimitri Maziuk wrote:
>> >> On 02/06/2013 01:34 PM, Sage Weil wrote:
>> >>
>> >> > I think the one caveat here is that having a single registry for commands
>> >> > in the monitor means that commands can come in two flavors: vector<string>
>> >> > (cli) and URL (presumably in json form). But a single command
>> >> > dispatch/registry framework will make that distinction pretty simple...
>> >>
>> >> Any reason you can't have your CLI json-encode the commands (or,
>> >> conversely, your cgi/wsgi/php/servlet URL handler decode them into
>> >> vector<string>) before passing them on to the monitor?
>> >
>> > We can, but they won't necessarily look the same, because it is unlikely
>> > we can make a sane 1:1 translation of the CLI to REST that makes sense,
>> > and it would be nice to avoid baking knowledge about the individual
>> > commands into the client side.
>>
>> I disagree and am with Joao on this one — the monitor parsing is
>> ridiculous as it stands right now, and we should be trying to get rid
>> of the manual string parsing. The monitors should be parsing JSON
>> commands that are sent by the client; it makes validation and the
>
> No argument that the current parsing code is bad...
>
>> logic control flow a lot easier. We're going to want some level of
>> intelligence in the clients so that they can tailor themselves to the
>> appropriate UI conventions, and having two different parsing paths in
>
> What do you mean by tailor to UI conventions?
Implementing and/or allowing positional versus named parameters, to
toss off one suggestion. Obviously the CLI will want to allow input
data in a format different from an API's, but a port to a different
platform might prefer named parameters instead of positional ones, or
whatever.
Basically I'm agreeing that we as users want to be able to input data
differently and have it mean the same thing ;) ....
>> the monitors is just asking for trouble: they will get out of sync and
>> have different kinds of parsing errors.
>>
>> What we could do is have the monitors speak JSON only, and then give
>> the clients a minimal intelligence so that the CLI could (for
>> instance) prettify the options for commands it knows about, but still
>> allow pass-through for access to newer commands it hasn't yet heard
>> of.
>
> That doesn't really help; it means the mon still has to understand the
> CLI grammar.
>
> What we are talking about is the difference between:
>
> [ 'osd', 'down', '123' ]
>
> and
>
> {
> URI: '/osd/down',
> OSD-Id: 123
> }
>
> or however we generically translate the HTTP request into JSON. Once we
> normalize the code, calling it "parsing" is probably misleading. The top
> (CLI) fragment will match against a rule like:
>
> [ STR("osd"), STR("down"), POSINT ]
>
> or however we encode the syntax, while the below would match against
>
> { .prefix = "/osd/down",
> .fields = [ "OSD-Id": POSINT ]
> }
>
> ..or something. I'm making this syntax up, but you get the idea: there
> would be a strict format for the request and generic code that validates
> it and passes the resulting arguments/matches into a function like
>
> int do_command_osd_down(int n);
>
> regardless of which type of input pattern it matched.
...but my instinct is to want one canonical code path in the monitors,
not two. Two allows for discrepancies in what each method allows to
come in that we're not going to have if they all come in to the
monitor in a single form. So I say that the canonicalization should
happen client-side, and the enforcement should happen server-side (and
probably client-side as well, but that's just for courtesy).
You've suggested that we want the monitors to do the parsing so that
old clients will work, but given that new commands in the monitors
often require new capabilities in the clients, having it be slightly
more awkward to send new commands to new monitors from old clients
doesn't seem like such a big deal to me — if somebody's running
monitor version .64 and client ceph tool version .60 and wants to use
a new thing, I don't feel bad about making them give the CLI a command
which completely specifies what the JSON looks like, instead of using
the pretty wrapping they'd get if they upgraded their client.
Having a canonicalized format also means that when we return errors
they can be a lot more useful, since the monitor can specify what
fields it received and which ones were bad, instead of just outputting
a string from whichever line of code actually broke. Consider an
incoming command whose canonical form is
[ 'crush', 'add', '123', '1.0' ]
And the parsing code runs through that and it fails and the string
going back says "error: does not specify weight!". But the user looks
and says "yes I did, it's 1.0!"
Versus if the error came back as
"Received command: ['area': 'crush', 'command': 'add', 'osd name':
'123', 'osd id': '1.0', 'CRUSH weight': MISSING ]. Error: does not
specify weight!"
to pick one random example that people have had trouble with. That's a
lot harder if we need to support two different methods of parsing
incoming and support that format outgoing.
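A hypothetical sketch of that kind of structured error; the field names and sentinel are invented:

```python
# With one canonical form, the monitor can echo back every expected
# field with what it actually received, so the error names the missing
# field instead of returning a bare string from whatever line broke.
MISSING = '<MISSING>'

def validate(received, required):
    seen = {f: received.get(f, MISSING) for f in required}
    missing = [f for f, v in seen.items() if v is MISSING]
    if missing:
        return 'Received command: %r. Error: does not specify %s!' % (
            seen, ', '.join(missing))
    return 'ok'

cmd = {'area': 'crush', 'command': 'add', 'osd name': '123', 'osd id': '1.0'}
err = validate(cmd, ['area', 'command', 'osd name', 'osd id', 'CRUSH weight'])
# err names the missing 'CRUSH weight' field while echoing everything seen
```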
And to use the same example subject, if we have a standard encoding
coming in to the monitor, it's a lot easier to handle different
versions of commands from users and clients — perhaps the client
interface was originally
"ceph crush add osd.123 123 1.0"
but we later decided that was stupid and switched it to just be
"ceph crush add 123 1.0".
If the format is specified by strings in the monitor, a client sending
the old style fails. Whereas if it's incoming fields, then we can
switch the parsing requirement from
{"field": "crush", "command": "add", "name": STR().POS_INT(), "id":
POS_INT(), "weight": POS_FLOAT}
to
{"field": "crush", "command": "add", OPTIONAL("name":
STR().POS_INT()), "id": POS_INT(), "weight": POS_FLOAT}
Or even something that incorporates versioning explicitly, like
{"command version": "0.56", "field": "crush", "command": "add",
VERSION(0.48, "name": STR().POS_INT()), "id": POS_INT(), "weight":
POS_FLOAT}
instead of requiring users to change all their habits and tools
instantly upon upgrade.
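A toy sketch of such a schema with an OPTIONAL marker; the validator syntax is invented, like the pseudo-syntax above:

```python
# One schema accepts both the old command form (with "name") and the new
# one (without), so old clients keep working across the change.
class Optional:
    def __init__(self, check):
        self.check = check

def pos_int(s):
    n = int(s)
    if n < 0:
        raise ValueError('negative: %r' % s)
    return n

SCHEMA = {'field': 'crush', 'command': 'add',
          'name': Optional(str), 'id': pos_int, 'weight': float}

def matches(cmd, schema):
    for key, check in schema.items():
        if isinstance(check, str):          # literal token must match
            if cmd.get(key) != check:
                return False
        elif isinstance(check, Optional):   # field may be absent
            if key in cmd:
                check.check(cmd[key])
        else:                               # required, type-checked field
            if key not in cmd:
                return False
            check(cmd[key])
    return True

old = {'field': 'crush', 'command': 'add', 'name': 'osd.123',
       'id': '123', 'weight': '1.0'}
new = {'field': 'crush', 'command': 'add', 'id': '123', 'weight': '1.0'}
assert matches(old, SCHEMA) and matches(new, SCHEMA)
```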
Hell, there's probably a way we can export that parse table back out to
the clients on request, so they can apply programmatic pretty-fication
even to commands they don't already recognize.
-Greg
* Re: rest mgmt api
2013-02-11 22:41 ` Gregory Farnum
@ 2013-02-12 0:51 ` Sage Weil
0 siblings, 0 replies; 17+ messages in thread
From: Sage Weil @ 2013-02-12 0:51 UTC (permalink / raw)
To: Gregory Farnum; +Cc: ceph-devel
On Mon, 11 Feb 2013, Gregory Farnum wrote:
> [...]
> ...but my instinct is to want one canonical code path in the monitors,
> not two. Two allows for discrepancies in what each method allows to
> [...]
Yeah, I'm convinced.
Just chatted with Dan and Josh a bit about this. Josh had the interesting
idea that the specification of what commands are supported could be
requested from the monitor in some canonical form (say, a blob of JSON),
and then enforced at the client. That would be translated into an
argparse config for the CLI, and a simple matching/validation table for
the REST endpoint.
That might be worth the complexity to get the best of both worlds... but
first Dan is looking at whether Python's argparse will do everything we
want for the CLI end of things.
In the meantime, the first set of tasks still stand: move the ceph tool
cruft into MonClient and Objecter and out of tool/common.cc for starters.
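A speculative sketch of Josh's spec-to-argparse translation; the JSON spec format here is invented, not anything the monitor actually exports, and multi-word prefixes are matched as a single token purely to keep the sketch short:

```python
import argparse
import json

# Hypothetical command spec, as if fetched from the monitor.
SPEC = json.loads('''[
  {"prefix": "osd down", "args": [{"name": "id", "type": "int"}]},
  {"prefix": "osd pool create",
   "args": [{"name": "poolname", "type": "str"},
            {"name": "pg_num", "type": "int"}]}
]''')

TYPES = {'int': int, 'str': str}

def build_parser(spec):
    # The client never hard-codes commands; it builds the parser from
    # whatever spec the monitor advertises.
    parser = argparse.ArgumentParser(prog='ceph')
    sub = parser.add_subparsers(dest='prefix')
    for cmd in spec:
        p = sub.add_parser(cmd['prefix'])
        for arg in cmd['args']:
            p.add_argument(arg['name'], type=TYPES[arg['type']])
    return parser

ns = build_parser(SPEC).parse_args(['osd down', '123'])
# ns.prefix == 'osd down', ns.id == 123
```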
sage
end of thread, other threads:[~2013-02-12 0:51 UTC | newest]
Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
2013-02-06 17:25 rest mgmt api Sage Weil
2013-02-06 18:15 ` Yehuda Sadeh
2013-02-06 19:05 ` Wido den Hollander
2013-02-06 19:22 ` Dan Mick
2013-02-06 19:25 ` Joao Eduardo Luis
2013-02-06 19:34 ` Sage Weil
2013-02-06 19:51 ` Dimitri Maziuk
2013-02-06 20:14 ` Sage Weil
2013-02-06 21:19 ` Dan Mick
2013-02-06 22:20 ` Dimitri Maziuk
2013-02-07 1:45 ` Jeff Mitchell
2013-02-11 20:04 ` Gregory Farnum
2013-02-11 22:00 ` Sage Weil
2013-02-11 22:38 ` Dimitri Maziuk
2013-02-11 22:41 ` Gregory Farnum
2013-02-12 0:51 ` Sage Weil
2013-02-06 19:36 ` Dan Mick