* [PATCH 0/4] libceph: kill the "trail" portion of message data
@ 2013-03-10 20:35 Alex Elder
  2013-03-10 20:36 ` [PATCH 1/4] libceph: have osd requests support pagelist data Alex Elder
                   ` (4 more replies)
  0 siblings, 5 replies; 7+ messages in thread
From: Alex Elder @ 2013-03-10 20:35 UTC (permalink / raw)
  To: ceph-devel

The trail portion of message data was added to support
two distinct sets of data for an osd request--one a
pagelist for providing parameters to object method
calls; and a second a page array for receiving data
back from the result of such a call.

It's always been a bit of a weird thing bolted onto
a message though, and with the rework of the messenger
code it can now be removed.

This series eliminates the trail by allowing the osd
client to record a (non-trail) pagelist for data, and
using the fact that we now distinguish incoming from
outgoing data to allow that to be specified distinct
from the page array used for the incoming response.

Having done this, we can eliminate the trail from the
ceph message structure, and then that allows some
code to be simplified.

These patches are available in the "review/wip-kill-trail"
branch of the ceph-client git repository.  That branch
is based on branch "review/wip-cursor".

					-Alex

[PATCH 1/4] libceph: have osd requests support pagelist data
[PATCH 2/4] libceph: kill osd request r_trail
[PATCH 3/4] libceph: kill message trail
[PATCH 4/4] libceph: more cleanup of write_partial_msg_pages()


^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH 1/4] libceph: have osd requests support pagelist data
  2013-03-10 20:35 [PATCH 0/4] libceph: kill the "trail" portion of message data Alex Elder
@ 2013-03-10 20:36 ` Alex Elder
  2013-03-10 20:36 ` [PATCH 2/4] libceph: kill osd request r_trail Alex Elder
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 7+ messages in thread
From: Alex Elder @ 2013-03-10 20:36 UTC (permalink / raw)
  To: ceph-devel

Add support for recording a ceph pagelist as data associated with an
osd request.

Signed-off-by: Alex Elder <elder@inktank.com>
---
 include/linux/ceph/osd_client.h |    4 +++-
 net/ceph/osd_client.c           |    3 +++
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/include/linux/ceph/osd_client.h b/include/linux/ceph/osd_client.h
index bcf3f72..cf0ba93 100644
--- a/include/linux/ceph/osd_client.h
+++ b/include/linux/ceph/osd_client.h
@@ -53,6 +53,7 @@ struct ceph_osd {
 enum ceph_osd_data_type {
 	CEPH_OSD_DATA_TYPE_NONE,
 	CEPH_OSD_DATA_TYPE_PAGES,
+	CEPH_OSD_DATA_TYPE_PAGELIST,
 #ifdef CONFIG_BLOCK
 	CEPH_OSD_DATA_TYPE_BIO,
 #endif /* CONFIG_BLOCK */
@@ -68,8 +69,9 @@ struct ceph_osd_data {
 			bool		pages_from_pool;
 			bool		own_pages;
 		};
+		struct ceph_pagelist	*pagelist;
 #ifdef CONFIG_BLOCK
-		struct bio       *bio;
+		struct bio       	*bio;
 #endif /* CONFIG_BLOCK */
 	};
 };
diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
index 6b78903..8fa3300 100644
--- a/net/ceph/osd_client.c
+++ b/net/ceph/osd_client.c
@@ -1757,6 +1757,9 @@ static void ceph_osdc_msg_data_set(struct ceph_msg *msg,
 		if (osd_data->length)
 			ceph_msg_data_set_pages(msg, osd_data->pages,
 				osd_data->length, osd_data->alignment);
+	} else if (osd_data->type == CEPH_OSD_DATA_TYPE_PAGELIST) {
+		BUG_ON(!osd_data->pagelist->length);
+		ceph_msg_data_set_pagelist(msg, osd_data->pagelist);
 #ifdef CONFIG_BLOCK
 	} else if (osd_data->type == CEPH_OSD_DATA_TYPE_BIO) {
 		ceph_msg_data_set_bio(msg, osd_data->bio);
-- 
1.7.9.5



* [PATCH 2/4] libceph: kill osd request r_trail
  2013-03-10 20:35 [PATCH 0/4] libceph: kill the "trail" portion of message data Alex Elder
  2013-03-10 20:36 ` [PATCH 1/4] libceph: have osd requests support pagelist data Alex Elder
@ 2013-03-10 20:36 ` Alex Elder
  2013-03-10 20:36 ` [PATCH 3/4] libceph: kill message trail Alex Elder
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 7+ messages in thread
From: Alex Elder @ 2013-03-10 20:36 UTC (permalink / raw)
  To: ceph-devel

The osd trail is a pagelist, used only for a CALL osd operation
to hold the class and method names, along with any input data for
the call.

It is only currently used by the rbd client, and when it's used it
is the only bit of outbound data in the osd request.  Since we
already support (non-trail) pagelist data in a message, we can
just save this outbound CALL data in the "normal" pagelist rather
than the trail, and get rid of the trail entirely.

The existing pagelist support depends on the pagelist being
dynamically allocated, and ownership of it is passed to the
messenger once it's been attached to a message.  (That is to say,
the messenger releases and frees the pagelist when it's done with
it).  That means we need to dynamically allocate the pagelist also.

Note that we simply assert that the allocation of a pagelist
structure succeeds.  Appending to a pagelist might require a dynamic
allocation, so we're already assuming we won't run into trouble
doing so (we just ignore any failures--and that should be fixed
at some point).

This resolves:
    http://tracker.ceph.com/issues/4407

Signed-off-by: Alex Elder <elder@inktank.com>
---
 include/linux/ceph/osd_client.h |    1 -
 net/ceph/osd_client.c           |   23 ++++++++++++-----------
 2 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/include/linux/ceph/osd_client.h b/include/linux/ceph/osd_client.h
index cf0ba93..1dab291 100644
--- a/include/linux/ceph/osd_client.h
+++ b/include/linux/ceph/osd_client.h
@@ -134,7 +134,6 @@ struct ceph_osd_request {

 	struct ceph_osd_data r_data_in;
 	struct ceph_osd_data r_data_out;
-	struct ceph_pagelist r_trail;	      /* trailing part of data out */
 };

 struct ceph_osd_event {
diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
index 8fa3300..d0a9fc4 100644
--- a/net/ceph/osd_client.c
+++ b/net/ceph/osd_client.c
@@ -138,7 +138,6 @@ void ceph_osdc_release_request(struct kref *kref)
 	}

 	ceph_put_snap_context(req->r_snapc);
-	ceph_pagelist_release(&req->r_trail);
 	if (req->r_mempool)
 		mempool_free(req, req->r_osdc->req_mempool);
 	else
@@ -202,7 +201,6 @@ struct ceph_osd_request *ceph_osdc_alloc_request(struct ceph_osd_client *osdc,

 	req->r_data_in.type = CEPH_OSD_DATA_TYPE_NONE;
 	req->r_data_out.type = CEPH_OSD_DATA_TYPE_NONE;
-	ceph_pagelist_init(&req->r_trail);

 	/* create request message; allow space for oid */
 	if (use_mempool)
@@ -227,7 +225,7 @@ static u64 osd_req_encode_op(struct ceph_osd_request *req,
 			      struct ceph_osd_req_op *src)
 {
 	u64 out_data_len = 0;
-	u64 tmp;
+	struct ceph_pagelist *pagelist;

 	dst->op = cpu_to_le16(src->op);

@@ -246,18 +244,23 @@ static u64 osd_req_encode_op(struct ceph_osd_request *req,
 			cpu_to_le32(src->extent.truncate_seq);
 		break;
 	case CEPH_OSD_OP_CALL:
+		pagelist = kmalloc(sizeof (*pagelist), GFP_NOFS);
+		BUG_ON(!pagelist);
+		ceph_pagelist_init(pagelist);
+
 		dst->cls.class_len = src->cls.class_len;
 		dst->cls.method_len = src->cls.method_len;
 		dst->cls.indata_len = cpu_to_le32(src->cls.indata_len);
-
-		tmp = req->r_trail.length;
-		ceph_pagelist_append(&req->r_trail, src->cls.class_name,
+		ceph_pagelist_append(pagelist, src->cls.class_name,
 				     src->cls.class_len);
-		ceph_pagelist_append(&req->r_trail, src->cls.method_name,
+		ceph_pagelist_append(pagelist, src->cls.method_name,
 				     src->cls.method_len);
-		ceph_pagelist_append(&req->r_trail, src->cls.indata,
+		ceph_pagelist_append(pagelist, src->cls.indata,
 				     src->cls.indata_len);
-		out_data_len = req->r_trail.length - tmp;
+
+		req->r_data_out.type = CEPH_OSD_DATA_TYPE_PAGELIST;
+		req->r_data_out.pagelist = pagelist;
+		out_data_len = pagelist->length;
 		break;
 	case CEPH_OSD_OP_STARTSYNC:
 		break;
@@ -1782,8 +1785,6 @@ int ceph_osdc_start_request(struct ceph_osd_client *osdc,

 	ceph_osdc_msg_data_set(req->r_reply, &req->r_data_in);
 	ceph_osdc_msg_data_set(req->r_request, &req->r_data_out);
-	if (req->r_trail.length)
-		ceph_msg_data_set_trail(req->r_request, &req->r_trail);

 	register_request(osdc, req);

-- 
1.7.9.5



* [PATCH 3/4] libceph: kill message trail
  2013-03-10 20:35 [PATCH 0/4] libceph: kill the "trail" portion of message data Alex Elder
  2013-03-10 20:36 ` [PATCH 1/4] libceph: have osd requests support pagelist data Alex Elder
  2013-03-10 20:36 ` [PATCH 2/4] libceph: kill osd request r_trail Alex Elder
@ 2013-03-10 20:36 ` Alex Elder
  2013-03-10 20:36 ` [PATCH 4/4] libceph: more cleanup of write_partial_msg_pages() Alex Elder
  2013-03-11 22:44 ` [PATCH 0/4] libceph: kill the "trail" portion of message data Josh Durgin
  4 siblings, 0 replies; 7+ messages in thread
From: Alex Elder @ 2013-03-10 20:36 UTC (permalink / raw)
  To: ceph-devel

The wart that is the ceph message trail can now be removed, because
its only user was the osd client, and the previous patch made that
no longer the case.

The result allows write_partial_msg_pages() to be simplified
considerably.

Signed-off-by: Alex Elder <elder@inktank.com>
---
 include/linux/ceph/messenger.h |    4 ----
 net/ceph/messenger.c           |   44 +++++-----------------------------------
 2 files changed, 5 insertions(+), 43 deletions(-)

diff --git a/include/linux/ceph/messenger.h b/include/linux/ceph/messenger.h
index b53b9ef..0e4536c 100644
--- a/include/linux/ceph/messenger.h
+++ b/include/linux/ceph/messenger.h
@@ -69,7 +69,6 @@ struct ceph_messenger {
 #ifdef CONFIG_BLOCK
 #define ceph_msg_has_bio(m)		((m)->b.type == CEPH_MSG_DATA_BIO)
 #endif /* CONFIG_BLOCK */
-#define ceph_msg_has_trail(m)		((m)->t.type == CEPH_MSG_DATA_PAGELIST)

 enum ceph_msg_data_type {
 	CEPH_MSG_DATA_NONE,	/* message contains no data payload */
@@ -155,7 +154,6 @@ struct ceph_msg {
 #ifdef CONFIG_BLOCK
 	struct ceph_msg_data	b;	/* bio */
 #endif /* CONFIG_BLOCK */
-	struct ceph_msg_data	t;	/* trail */

 	struct ceph_connection *con;
 	struct list_head list_head;	/* links for connection lists */
@@ -295,8 +293,6 @@ extern void ceph_msg_data_set_pages(struct ceph_msg *msg, struct page **pages,
 extern void ceph_msg_data_set_pagelist(struct ceph_msg *msg,
 				struct ceph_pagelist *pagelist);
 extern void ceph_msg_data_set_bio(struct ceph_msg *msg, struct bio *bio);
-extern void ceph_msg_data_set_trail(struct ceph_msg *msg,
-				struct ceph_pagelist *trail);

 extern struct ceph_msg *ceph_msg_new(int type, int front_len, gfp_t flags,
 				     bool can_fail);
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index 12d416f..b7ddf2b 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -1105,8 +1105,6 @@ static void prepare_message_data(struct ceph_msg *msg,
 		ceph_msg_data_cursor_init(&msg->p);
 	if (ceph_msg_has_pagelist(msg))
 		ceph_msg_data_cursor_init(&msg->l);
-	if (ceph_msg_has_trail(msg))
-		ceph_msg_data_cursor_init(&msg->t);

 	msg_pos->did_page_crc = false;
 }
@@ -1388,7 +1386,7 @@ out:
 }

 static void out_msg_pos_next(struct ceph_connection *con, struct page *page,
-			size_t len, size_t sent, bool in_trail)
+			size_t len, size_t sent)
 {
 	struct ceph_msg *msg = con->out_msg;
 	struct ceph_msg_pos *msg_pos = &con->out_msg_pos;
@@ -1399,9 +1397,7 @@ static void out_msg_pos_next(struct ceph_connection *con, struct page *page,

 	msg_pos->data_pos += sent;
 	msg_pos->page_pos += sent;
-	if (in_trail)
-		need_crc = ceph_msg_data_advance(&msg->t, sent);
-	else if (ceph_msg_has_pages(msg))
+	if (ceph_msg_has_pages(msg))
 		need_crc = ceph_msg_data_advance(&msg->p, sent);
 	else if (ceph_msg_has_pagelist(msg))
 		need_crc = ceph_msg_data_advance(&msg->l, sent);
@@ -1471,14 +1467,6 @@ static int write_partial_message_data(struct ceph_connection *con)
 	bool do_datacrc = !con->msgr->nocrc;
 	int ret;
 	int total_max_write;
-	bool in_trail = false;
-	size_t trail_len = 0;
-	size_t trail_off = data_len;
-
-	if (ceph_msg_has_trail(msg)) {
-		trail_len = msg->t.pagelist->length;
-		trail_off -= trail_len;
-	}

 	dout("%s %p msg %p page %d offset %d\n", __func__,
 	     con, msg, msg_pos->page, msg_pos->page_pos);
@@ -1498,16 +1486,9 @@ static int write_partial_message_data(struct ceph_connection *con)
 		bool use_cursor = false;
 		bool last_piece = true;	/* preserve existing behavior */

-		in_trail = in_trail || msg_pos->data_pos >= trail_off;
-		if (!in_trail)
-			total_max_write = trail_off - msg_pos->data_pos;
+		total_max_write = data_len - msg_pos->data_pos;

-		if (in_trail) {
-			BUG_ON(!ceph_msg_has_trail(msg));
-			use_cursor = true;
-			page = ceph_msg_data_next(&msg->t, &page_offset,
-							&length, &last_piece);
-		} else if (ceph_msg_has_pages(msg)) {
+		if (ceph_msg_has_pages(msg)) {
 			use_cursor = true;
 			page = ceph_msg_data_next(&msg->p, &page_offset,
 							&length, &last_piece);
@@ -1542,7 +1523,7 @@ static int write_partial_message_data(struct ceph_connection *con)
 		if (ret <= 0)
 			goto out;

-		out_msg_pos_next(con, page, length, (size_t) ret, in_trail);
+		out_msg_pos_next(con, page, length, (size_t) ret);
 	}

 	dout("%s %p msg %p done\n", __func__, con, msg);
@@ -3135,17 +3116,6 @@ void ceph_msg_data_set_bio(struct ceph_msg *msg, struct bio *bio)
 }
 EXPORT_SYMBOL(ceph_msg_data_set_bio);

-void ceph_msg_data_set_trail(struct ceph_msg *msg, struct ceph_pagelist *trail)
-{
-	BUG_ON(!trail);
-	BUG_ON(!trail->length);
-	BUG_ON(msg->b.type != CEPH_MSG_DATA_NONE);
-
-	msg->t.type = CEPH_MSG_DATA_PAGELIST;
-	msg->t.pagelist = trail;
-}
-EXPORT_SYMBOL(ceph_msg_data_set_trail);
-
 /*
  * construct a new message with given type, size
  * the new msg has a ref count of 1.
@@ -3169,7 +3139,6 @@ struct ceph_msg *ceph_msg_new(int type, int front_len, gfp_t flags,
 	ceph_msg_data_init(&m->p);
 	ceph_msg_data_init(&m->l);
 	ceph_msg_data_init(&m->b);
-	ceph_msg_data_init(&m->t);

 	/* front */
 	m->front_max = front_len;
@@ -3335,9 +3304,6 @@ void ceph_msg_last_put(struct kref *kref)
 		m->l.pagelist = NULL;
 	}

-	if (ceph_msg_has_trail(m))
-		m->t.pagelist = NULL;
-
 	if (m->pool)
 		ceph_msgpool_put(m->pool, m);
 	else
-- 
1.7.9.5



* [PATCH 4/4] libceph: more cleanup of write_partial_msg_pages()
  2013-03-10 20:35 [PATCH 0/4] libceph: kill the "trail" portion of message data Alex Elder
                   ` (2 preceding siblings ...)
  2013-03-10 20:36 ` [PATCH 3/4] libceph: kill message trail Alex Elder
@ 2013-03-10 20:36 ` Alex Elder
  2013-03-11  5:08   ` [PATCH 4/4, v2] " Alex Elder
  2013-03-11 22:44 ` [PATCH 0/4] libceph: kill the "trail" portion of message data Josh Durgin
  4 siblings, 1 reply; 7+ messages in thread
From: Alex Elder @ 2013-03-10 20:36 UTC (permalink / raw)
  To: ceph-devel

Basically all cases in write_partial_msg_pages() use the cursor, and
as a result we can simplify that function quite a bit.

Signed-off-by: Alex Elder <elder@inktank.com>
---
 net/ceph/messenger.c |   21 +++++++--------------
 1 file changed, 7 insertions(+), 14 deletions(-)

diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index b7ddf2b..6c88db4 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -1466,7 +1466,6 @@ static int write_partial_message_data(struct ceph_connection *con)
 	unsigned int data_len = le32_to_cpu(msg->hdr.data_len);
 	bool do_datacrc = !con->msgr->nocrc;
 	int ret;
-	int total_max_write;

 	dout("%s %p msg %p page %d offset %d\n", __func__,
 	     con, msg, msg_pos->page, msg_pos->page_pos);
@@ -1480,36 +1479,30 @@ static int write_partial_message_data(struct ceph_connection *con)
 	 * been revoked, so use the zero page.
 	 */
 	while (data_len > msg_pos->data_pos) {
-		struct page *page = NULL;
+		struct page *page;
 		size_t page_offset;
 		size_t length;
-		bool use_cursor = false;
-		bool last_piece = true;	/* preserve existing behavior */
-
-		total_max_write = data_len - msg_pos->data_pos;
+		bool last_piece;

 		if (ceph_msg_has_pages(msg)) {
-			use_cursor = true;
 			page = ceph_msg_data_next(&msg->p, &page_offset,
 							&length, &last_piece);
 		} else if (ceph_msg_has_pagelist(msg)) {
-			use_cursor = true;
 			page = ceph_msg_data_next(&msg->l, &page_offset,
 							&length, &last_piece);
 #ifdef CONFIG_BLOCK
 		} else if (ceph_msg_has_bio(msg)) {
-			use_cursor = true;
 			page = ceph_msg_data_next(&msg->b, &page_offset,
 							&length, &last_piece);
 #endif
 		} else {
-			page = zero_page;
-		}
-		if (!use_cursor) {
-			length = min_t(int, PAGE_SIZE - msg_pos->page_pos,
-					    total_max_write);
+			unsigned int end;

+			last_piece = data_len <= PAGE_SIZE;
 			page_offset = msg_pos->page_pos;
+			end = last_piece ? data_len : PAGE_SIZE;
+			length = end - page_offset;
+			page = zero_page;
 		}
 		if (do_datacrc && !msg_pos->did_page_crc) {
 			u32 crc = le32_to_cpu(msg->footer.data_crc);
-- 
1.7.9.5



* [PATCH 4/4, v2] libceph: more cleanup of write_partial_msg_pages()
  2013-03-10 20:36 ` [PATCH 4/4] libceph: more cleanup of write_partial_msg_pages() Alex Elder
@ 2013-03-11  5:08   ` Alex Elder
  0 siblings, 0 replies; 7+ messages in thread
From: Alex Elder @ 2013-03-11  5:08 UTC (permalink / raw)
  To: ceph-devel

I was a little sloppy in dealing with the zero_page case in
write_partial_message_data(), and I believe it led to some
problems I saw in testing.  So I've modified it so the length
and last_piece values are calculated correctly, and that's
reflected in this updated version of the patch.

					-Alex

Basically all cases in write_partial_msg_pages() use the cursor, and
as a result we can simplify that function quite a bit.

Signed-off-by: Alex Elder <elder@inktank.com>
---
v2: fixed length and last_piece calculation for zero_page

 net/ceph/messenger.c |   20 ++++++--------------
 1 file changed, 6 insertions(+), 14 deletions(-)

diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index b7ddf2b..6818c2f 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -1466,7 +1466,6 @@ static int write_partial_message_data(struct ceph_connection *con)
 	unsigned int data_len = le32_to_cpu(msg->hdr.data_len);
 	bool do_datacrc = !con->msgr->nocrc;
 	int ret;
-	int total_max_write;

 	dout("%s %p msg %p page %d offset %d\n", __func__,
 	     con, msg, msg_pos->page, msg_pos->page_pos);
@@ -1480,36 +1479,29 @@ static int write_partial_message_data(struct ceph_connection *con)
 	 * been revoked, so use the zero page.
 	 */
 	while (data_len > msg_pos->data_pos) {
-		struct page *page = NULL;
+		struct page *page;
 		size_t page_offset;
 		size_t length;
-		bool use_cursor = false;
-		bool last_piece = true;	/* preserve existing behavior */
-
-		total_max_write = data_len - msg_pos->data_pos;
+		bool last_piece;

 		if (ceph_msg_has_pages(msg)) {
-			use_cursor = true;
 			page = ceph_msg_data_next(&msg->p, &page_offset,
 							&length, &last_piece);
 		} else if (ceph_msg_has_pagelist(msg)) {
-			use_cursor = true;
 			page = ceph_msg_data_next(&msg->l, &page_offset,
 							&length, &last_piece);
 #ifdef CONFIG_BLOCK
 		} else if (ceph_msg_has_bio(msg)) {
-			use_cursor = true;
 			page = ceph_msg_data_next(&msg->b, &page_offset,
 							&length, &last_piece);
 #endif
 		} else {
-			page = zero_page;
-		}
-		if (!use_cursor) {
-			length = min_t(int, PAGE_SIZE - msg_pos->page_pos,
-					    total_max_write);
+			unsigned int resid = data_len - msg_pos->data_pos;

+			page = zero_page;
 			page_offset = msg_pos->page_pos;
+			length = min(resid, PAGE_SIZE - page_offset);
+			last_piece = length == resid;
 		}
 		if (do_datacrc && !msg_pos->did_page_crc) {
 			u32 crc = le32_to_cpu(msg->footer.data_crc);
-- 
1.7.9.5




* Re: [PATCH 0/4] libceph: kill the "trail" portion of message data
  2013-03-10 20:35 [PATCH 0/4] libceph: kill the "trail" portion of message data Alex Elder
                   ` (3 preceding siblings ...)
  2013-03-10 20:36 ` [PATCH 4/4] libceph: more cleanup of write_partial_msg_pages() Alex Elder
@ 2013-03-11 22:44 ` Josh Durgin
  4 siblings, 0 replies; 7+ messages in thread
From: Josh Durgin @ 2013-03-11 22:44 UTC (permalink / raw)
  To: Alex Elder; +Cc: ceph-devel

On 03/10/2013 01:35 PM, Alex Elder wrote:
> The trail portion of message data was added to support
> two distinct sets of data for an osd request--one a
> pagelist for providing parameters to object method
> calls; and a second a page array for receiving data
> back from the result of such a call.
>
> It's always been a bit of a weird thing bolted onto
> a message though, and with the rework of the messenger
> code it can now be removed.
>
> This series eliminates the trail by allowing the osd
> client to record a (non-trail) pagelist for data, and
> using the fact that we now distinguish incoming from
> outgoing data to allow that to be specified distinct
> from the page array used for the incoming response.
>
> Having done this, we can eliminate the trail from the
> ceph message structure, and then that allows some
> code to be simplified.
>
> These patches are available in the "review/wip-kill-trail"
> branch of the ceph-client git repository.  That branch
> is based on branch "review/wip-cursor".
>
> 					-Alex
>
> [PATCH 1/4] libceph: have osd requests support pagelist data
> [PATCH 2/4] libceph: kill osd request r_trail
> [PATCH 3/4] libceph: kill message trail
> [PATCH 4/4] libceph: more cleanup of write_partial_msg_pages()

These look good.

Reviewed-by: Josh Durgin <josh.durgin@inktank.com>

