All of lore.kernel.org
 help / color / mirror / Atom feed
* [PATCH 0/9]
@ 2013-04-04 16:16 Alex Elder
  2013-04-04 16:18 ` [PATCH 1/9] ceph: use page_offset() in ceph_writepages_start() Alex Elder
                   ` (9 more replies)
  0 siblings, 10 replies; 20+ messages in thread
From: Alex Elder @ 2013-04-04 16:16 UTC (permalink / raw)
  To: ceph-devel

(The following patches are available in branch "review/wip-3761"
on the ceph-client git repository.)

These are actually a few sets of patches, but I'm just going to
post them as a single series this time.

					-Alex

[PATCH 1/9] ceph: use page_offset() in ceph_writepages_start()
    Fixes a potential bug in ceph_writepages_start().

[PATCH 2/9] libceph: drop ceph_osd_request->r_con_filling_msg
    Removes a no-longer-needed field, to simplify code.

[PATCH 3/9] libceph: record length of bio list with bio
[PATCH 4/9] libceph: record message data length
    Has each message maintain its data length, so we can
    avoid depending on what's in the message's header.

[PATCH 5/9] libceph: don't build request in ceph_osdc_new_request()
[PATCH 6/9] ceph: define ceph_writepages_osd_request()
[PATCH 7/9] ceph: kill ceph alloc_page_vec()
[PATCH 8/9] libceph: hold off building osd request
[PATCH 9/9] ceph: build osd request message later for writepages
    Defers "building" a request message until right before
    it's submitted to the osd client to start its execution.
    Also stops having the length field in a message header
    get updated by the file system code.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH 1/9] ceph: use page_offset() in ceph_writepages_start()
  2013-04-04 16:16 [PATCH 0/9] Alex Elder
@ 2013-04-04 16:18 ` Alex Elder
  2013-04-04 16:18 ` [PATCH 2/9] libceph: drop ceph_osd_request->r_con_filling_msg Alex Elder
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 20+ messages in thread
From: Alex Elder @ 2013-04-04 16:18 UTC (permalink / raw)
  To: ceph-devel

There's one spot in ceph_writepages_start() that open-codes what
page_offset() does, without doing so safely.  Use the macro so we
don't have to worry about the shift wrapping.

This resolves:
    http://tracker.ceph.com/issues/4648

Signed-off-by: Alex Elder <elder@inktank.com>
---
 fs/ceph/addr.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 67d4965..6a5a08e 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -900,7 +900,7 @@ get_more_pages:
 		}

 		/* submit the write */
-		offset = req->r_data_out.pages[0]->index << PAGE_CACHE_SHIFT;
+		offset = page_offset(req->r_data_out.pages[0]);
 		len = min((snap_size ? snap_size : i_size_read(inode)) - offset,
 			  (u64)locked_pages << PAGE_CACHE_SHIFT);
 		dout("writepages got %d pages at %llu~%llu\n",
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 2/9] libceph: drop ceph_osd_request->r_con_filling_msg
  2013-04-04 16:16 [PATCH 0/9] Alex Elder
  2013-04-04 16:18 ` [PATCH 1/9] ceph: use page_offset() in ceph_writepages_start() Alex Elder
@ 2013-04-04 16:18 ` Alex Elder
  2013-04-04 16:18 ` [PATCH 3/9] libceph: record length of bio list with bio Alex Elder
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 20+ messages in thread
From: Alex Elder @ 2013-04-04 16:18 UTC (permalink / raw)
  To: ceph-devel

A field in an osd request keeps track of whether a connection is
currently filling the request's reply message.  This patch gets rid
of that field.

An osd request includes two messages--a request and a reply--and
they're both associated with the connection that existed to
the target osd at the time the request was created.

An osd request can be dropped early, even when it's in flight.
And at that time both messages are released.  It's possible the
reply message has been supplied to its connection to receive
an incoming response message at the time the osd request gets
dropped.  So ceph_osdc_release_request() revokes that message
from the connection before releasing it so things get cleaned up
properly.

Previously this may have caused a problem, because the connection
that a message was associated with might have gone away before the
revoke request.  And to avoid any problems using that connection,
the osd client held a reference to it when it supplied its response
message.

However since this commit:
    38941f80 libceph: have messages point to their connection
all messages hold a reference to the connection they are associated
with whenever the connection is actively operating on the message
(i.e. while the message is queued to send or sending, and while
data is being received into it).  And if a message has no connection
associated with it, ceph_msg_revoke_incoming() won't do anything
when asked to revoke it.

As a result, there is no need to keep an additional reference to the
connection associated with a message when we hand the message to the
messenger when it calls our alloc_msg() method to receive something.
If the connection *were* operating on it, it would have its own
reference, and if not, there's no work to be done when we need to
revoke it.

So get rid of the osd request's r_con_filling_msg field.

This resolves:
    http://tracker.ceph.com/issues/4647

Signed-off-by: Alex Elder <elder@inktank.com>
---
 include/linux/ceph/osd_client.h |    2 --
 net/ceph/osd_client.c           |   29 +++++------------------------
 2 files changed, 5 insertions(+), 26 deletions(-)

diff --git a/include/linux/ceph/osd_client.h b/include/linux/ceph/osd_client.h
index 5fd2cbf..3b5ba31 100644
--- a/include/linux/ceph/osd_client.h
+++ b/include/linux/ceph/osd_client.h
@@ -89,8 +89,6 @@ struct ceph_osd_request {
 	int              r_pg_osds[CEPH_PG_MAX_SIZE];
 	int              r_num_pg_osds;

-	struct ceph_connection *r_con_filling_msg;
-
 	struct ceph_msg  *r_request, *r_reply;
 	int               r_flags;     /* any additional flags for the osd */
 	u32               r_sent;      /* >0 if r_request is sending/sent */
diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
index ca79cad..e088792 100644
--- a/net/ceph/osd_client.c
+++ b/net/ceph/osd_client.c
@@ -91,15 +91,10 @@ void ceph_osdc_release_request(struct kref *kref)

 	if (req->r_request)
 		ceph_msg_put(req->r_request);
-	if (req->r_con_filling_msg) {
-		dout("%s revoking msg %p from con %p\n", __func__,
-		     req->r_reply, req->r_con_filling_msg);
+	if (req->r_reply) {
 		ceph_msg_revoke_incoming(req->r_reply);
-		req->r_con_filling_msg->ops->put(req->r_con_filling_msg);
-		req->r_con_filling_msg = NULL;
-	}
-	if (req->r_reply)
 		ceph_msg_put(req->r_reply);
+	}

 	if (req->r_data_in.type == CEPH_OSD_DATA_TYPE_PAGES &&
 			req->r_data_in.own_pages) {
@@ -1353,16 +1348,6 @@ static void handle_reply(struct ceph_osd_client *osdc, struct ceph_msg *msg,
 	for (i = 0; i < numops; i++)
 		req->r_reply_op_result[i] = ceph_decode_32(&p);

-	/*
-	 * if this connection filled our message, drop our reference now, to
-	 * avoid a (safe but slower) revoke later.
-	 */
-	if (req->r_con_filling_msg == con && req->r_reply == msg) {
-		dout(" dropping con_filling_msg ref %p\n", con);
-		req->r_con_filling_msg = NULL;
-		con->ops->put(con);
-	}
-
 	if (!req->r_got_reply) {
 		unsigned int bytes;

@@ -2199,13 +2184,10 @@ static struct ceph_msg *get_reply(struct ceph_connection *con,
 		goto out;
 	}

-	if (req->r_con_filling_msg) {
+	if (req->r_reply->con)
 		dout("%s revoking msg %p from old con %p\n", __func__,
-		     req->r_reply, req->r_con_filling_msg);
-		ceph_msg_revoke_incoming(req->r_reply);
-		req->r_con_filling_msg->ops->put(req->r_con_filling_msg);
-		req->r_con_filling_msg = NULL;
-	}
+		     req->r_reply, req->r_reply->con);
+	ceph_msg_revoke_incoming(req->r_reply);

 	if (front > req->r_reply->front.iov_len) {
 		pr_warning("get_reply front %d > preallocated %d\n",
@@ -2236,7 +2218,6 @@ static struct ceph_msg *get_reply(struct ceph_connection *con,
 		}
 	}
 	*skip = 0;
-	req->r_con_filling_msg = con->ops->get(con);
 	dout("get_reply tid %lld %p\n", tid, m);

 out:
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 3/9] libceph: record length of bio list with bio
  2013-04-04 16:16 [PATCH 0/9] Alex Elder
  2013-04-04 16:18 ` [PATCH 1/9] ceph: use page_offset() in ceph_writepages_start() Alex Elder
  2013-04-04 16:18 ` [PATCH 2/9] libceph: drop ceph_osd_request->r_con_filling_msg Alex Elder
@ 2013-04-04 16:18 ` Alex Elder
  2013-04-04 16:19 ` [PATCH 4/9] libceph: record message data length Alex Elder
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 20+ messages in thread
From: Alex Elder @ 2013-04-04 16:18 UTC (permalink / raw)
  To: ceph-devel

When assigning a bio pointer to an osd request, we don't have an
efficient way of knowing the total length, in bytes, of the bio list.
That information is available at the point it's set up by the rbd
code, so record it with the osd data when it's set.

This and the next patch are related to maintaining the length of a
message's data independent of the message header, as described here:
    http://tracker.ceph.com/issues/4589

Signed-off-by: Alex Elder <elder@inktank.com>
---
 drivers/block/rbd.c             |    1 +
 include/linux/ceph/osd_client.h |    5 ++++-
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index 02d821e..9fb51b5 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -1352,6 +1352,7 @@ static struct ceph_osd_request *rbd_osd_req_create(
 		rbd_assert(obj_request->bio_list != NULL);
 		osd_data->type = CEPH_OSD_DATA_TYPE_BIO;
 		osd_data->bio = obj_request->bio_list;
+		osd_data->bio_length = obj_request->length;
 		break;
 	case OBJ_REQUEST_PAGES:
 		osd_data->type = CEPH_OSD_DATA_TYPE_PAGES;
diff --git a/include/linux/ceph/osd_client.h b/include/linux/ceph/osd_client.h
index 3b5ba31..fdda93e 100644
--- a/include/linux/ceph/osd_client.h
+++ b/include/linux/ceph/osd_client.h
@@ -71,7 +71,10 @@ struct ceph_osd_data {
 		};
 		struct ceph_pagelist	*pagelist;
 #ifdef CONFIG_BLOCK
-		struct bio       	*bio;
+		struct {
+			struct bio	*bio;		/* list of bios */
+			size_t		bio_length;	/* total in list */
+		};
 #endif /* CONFIG_BLOCK */
 	};
 };
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 4/9] libceph: record message data length
  2013-04-04 16:16 [PATCH 0/9] Alex Elder
                   ` (2 preceding siblings ...)
  2013-04-04 16:18 ` [PATCH 3/9] libceph: record length of bio list with bio Alex Elder
@ 2013-04-04 16:19 ` Alex Elder
  2013-04-04 18:34   ` [PATCH 4/9, v2] " Alex Elder
  2013-04-04 16:19 ` [PATCH 5/9] libceph: don't build request in ceph_osdc_new_request() Alex Elder
                   ` (5 subsequent siblings)
  9 siblings, 1 reply; 20+ messages in thread
From: Alex Elder @ 2013-04-04 16:19 UTC (permalink / raw)
  To: ceph-devel

Keep track of the length of the data portion for a message in a
separate field in the ceph_msg structure.  This information has
been maintained in wire byte order in the message header, but
that's going to change soon.

Signed-off-by: Alex Elder <elder@inktank.com>
---
 include/linux/ceph/messenger.h |    4 +++-
 net/ceph/messenger.c           |    9 ++++++++-
 net/ceph/osd_client.c          |    2 +-
 3 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/include/linux/ceph/messenger.h b/include/linux/ceph/messenger.h
index 3181321..b832c0c 100644
--- a/include/linux/ceph/messenger.h
+++ b/include/linux/ceph/messenger.h
@@ -139,6 +139,7 @@ struct ceph_msg {
 	struct kvec front;              /* unaligned blobs of message */
 	struct ceph_buffer *middle;

+	size_t			data_length;
 	struct ceph_msg_data	*data;	/* data payload */

 	struct ceph_connection *con;
@@ -270,7 +271,8 @@ extern void ceph_msg_data_set_pages(struct ceph_msg *msg, struct page **pages,
 				size_t length, size_t alignment);
 extern void ceph_msg_data_set_pagelist(struct ceph_msg *msg,
 				struct ceph_pagelist *pagelist);
-extern void ceph_msg_data_set_bio(struct ceph_msg *msg, struct bio *bio);
+extern void ceph_msg_data_set_bio(struct ceph_msg *msg, struct bio *bio,
+				size_t length);

 extern struct ceph_msg *ceph_msg_new(int type, int front_len, gfp_t flags,
 				     bool can_fail);
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index ee16086..7812c33 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -2981,6 +2981,7 @@ void ceph_msg_data_set_pages(struct ceph_msg *msg, struct page **pages,

 	BUG_ON(!pages);
 	BUG_ON(!length);
+	BUG_ON(msg->data_length);
 	BUG_ON(msg->data != NULL);

 	data = ceph_msg_data_create(CEPH_MSG_DATA_PAGES);
@@ -2990,6 +2991,7 @@ void ceph_msg_data_set_pages(struct ceph_msg *msg, struct page **pages,
 	data->alignment = alignment & ~PAGE_MASK;

 	msg->data = data;
+	msg->data_length = length;
 }
 EXPORT_SYMBOL(ceph_msg_data_set_pages);

@@ -3000,6 +3002,7 @@ void ceph_msg_data_set_pagelist(struct ceph_msg *msg,

 	BUG_ON(!pagelist);
 	BUG_ON(!pagelist->length);
+	BUG_ON(msg->data_length);
 	BUG_ON(msg->data != NULL);

 	data = ceph_msg_data_create(CEPH_MSG_DATA_PAGELIST);
@@ -3007,14 +3010,17 @@ void ceph_msg_data_set_pagelist(struct ceph_msg *msg,
 	data->pagelist = pagelist;

 	msg->data = data;
+	msg->data_length = pagelist->length;
 }
 EXPORT_SYMBOL(ceph_msg_data_set_pagelist);

-void ceph_msg_data_set_bio(struct ceph_msg *msg, struct bio *bio)
+void ceph_msg_data_set_bio(struct ceph_msg *msg, struct bio *bio,
+		size_t length)
 {
 	struct ceph_msg_data *data;

 	BUG_ON(!bio);
+	BUG_ON(msg->data_length);
 	BUG_ON(msg->data != NULL);

 	data = ceph_msg_data_create(CEPH_MSG_DATA_BIO);
@@ -3022,6 +3028,7 @@ void ceph_msg_data_set_bio(struct ceph_msg *msg, struct bio *bio)
 	data->bio = bio;

 	msg->data = data;
+	msg->data_length = length;
 }
 EXPORT_SYMBOL(ceph_msg_data_set_bio);

diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
index e088792..0b4951e 100644
--- a/net/ceph/osd_client.c
+++ b/net/ceph/osd_client.c
@@ -1848,7 +1848,7 @@ static void ceph_osdc_msg_data_set(struct ceph_msg *msg,
 		ceph_msg_data_set_pagelist(msg, osd_data->pagelist);
 #ifdef CONFIG_BLOCK
 	} else if (osd_data->type == CEPH_OSD_DATA_TYPE_BIO) {
-		ceph_msg_data_set_bio(msg, osd_data->bio);
+		ceph_msg_data_set_bio(msg, osd_data->bio, osd_data->bio_length);
 #endif
 	} else {
 		BUG_ON(osd_data->type != CEPH_OSD_DATA_TYPE_NONE);
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 5/9] libceph: don't build request in ceph_osdc_new_request()
  2013-04-04 16:16 [PATCH 0/9] Alex Elder
                   ` (3 preceding siblings ...)
  2013-04-04 16:19 ` [PATCH 4/9] libceph: record message data length Alex Elder
@ 2013-04-04 16:19 ` Alex Elder
  2013-04-04 16:19 ` [PATCH 6/9] ceph: define ceph_writepages_osd_request() Alex Elder
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 20+ messages in thread
From: Alex Elder @ 2013-04-04 16:19 UTC (permalink / raw)
  To: ceph-devel

This patch moves the call to ceph_osdc_build_request() out of
ceph_osdc_new_request() and into its caller.

This is in order to defer formatting osd operation information into
the request message until just before the request is started.

The only unusual (ab)user of ceph_osdc_build_request() is
ceph_writepages_start(), where the final length of the write request may
change (downward) based on the current inode size or the oldest
snapshot context with dirty data for the inode.

The remaining callers don't change anything in the request after it
has been built.

This means the ops array is now supplied by the caller.  It also
means there is no need to pass the mtime to ceph_osdc_new_request()
(it gets provided to ceph_osdc_build_request()).  And rather than
passing a do_sync flag, the number of ops in the supplied ops array
now determines whether a second STARTSYNC operation is added after
the requested READ or WRITE.

This and some of the patches that follow are related to having the
messenger (only) be responsible for filling the content of the
message header, as described here:
    http://tracker.ceph.com/issues/4589

Signed-off-by: Alex Elder <elder@inktank.com>
---
 fs/ceph/addr.c                  |   36 ++++++++++++++++++++++-------------
 fs/ceph/file.c                  |   20 +++++++++++++-------
 include/linux/ceph/osd_client.h |   12 ++++++------
 net/ceph/osd_client.c           |   40 ++++++++++++++++++++-------------------
 4 files changed, 63 insertions(+), 45 deletions(-)

diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 6a5a08e..de7aac0 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -284,7 +284,9 @@ static int start_read(struct inode *inode, struct list_head *page_list, int max)
 		&ceph_inode_to_client(inode)->client->osdc;
 	struct ceph_inode_info *ci = ceph_inode(inode);
 	struct page *page = list_entry(page_list->prev, struct page, lru);
+	struct ceph_vino vino;
 	struct ceph_osd_request *req;
+	struct ceph_osd_req_op op;
 	u64 off;
 	u64 len;
 	int i;
@@ -308,16 +310,17 @@ static int start_read(struct inode *inode, struct list_head *page_list, int max)
 	len = nr_pages << PAGE_CACHE_SHIFT;
 	dout("start_read %p nr_pages %d is %lld~%lld\n", inode, nr_pages,
 	     off, len);
-
-	req = ceph_osdc_new_request(osdc, &ci->i_layout, ceph_vino(inode),
-				    off, &len,
-				    CEPH_OSD_OP_READ, CEPH_OSD_FLAG_READ,
-				    NULL, 0,
+	vino = ceph_vino(inode);
+	req = ceph_osdc_new_request(osdc, &ci->i_layout, vino, off, &len,
+				    1, &op, CEPH_OSD_OP_READ,
+				    CEPH_OSD_FLAG_READ, NULL,
 				    ci->i_truncate_seq, ci->i_truncate_size,
-				    NULL, false);
+				    false);
 	if (IS_ERR(req))
 		return PTR_ERR(req);

+	ceph_osdc_build_request(req, off, 1, &op, NULL, vino.snap, NULL);
+
 	/* build page vector */
 	nr_pages = calc_pages_for(0, len);
 	pages = kmalloc(sizeof(*pages) * nr_pages, GFP_NOFS);
@@ -736,6 +739,7 @@ retry:
 	last_snapc = snapc;

 	while (!done && index <= end) {
+		struct ceph_osd_req_op ops[2];
 		unsigned i;
 		int first;
 		pgoff_t next;
@@ -825,20 +829,22 @@ get_more_pages:

 			/* ok */
 			if (locked_pages == 0) {
+				struct ceph_vino vino;
+				int num_ops = do_sync ? 2 : 1;
+
 				/* prepare async write request */
 				offset = (u64) page_offset(page);
 				len = wsize;
+				vino = ceph_vino(inode);
+				/* BUG_ON(vino.snap != CEPH_NOSNAP); */
 				req = ceph_osdc_new_request(&fsc->client->osdc,
-					    &ci->i_layout,
-					    ceph_vino(inode),
-					    offset, &len,
+					    &ci->i_layout, vino, offset, &len,
+					    num_ops, ops,
 					    CEPH_OSD_OP_WRITE,
 					    CEPH_OSD_FLAG_WRITE |
 						    CEPH_OSD_FLAG_ONDISK,
-					    snapc, do_sync,
-					    ci->i_truncate_seq,
-					    ci->i_truncate_size,
-					    &inode->i_mtime, true);
+					    snapc, ci->i_truncate_seq,
+					    ci->i_truncate_size, true);

 				if (IS_ERR(req)) {
 					rc = PTR_ERR(req);
@@ -846,6 +852,10 @@ get_more_pages:
 					break;
 				}

+				ceph_osdc_build_request(req, offset,
+					num_ops, ops, snapc, vino.snap,
+					&inode->i_mtime);
+
 				req->r_data_out.type = CEPH_OSD_DATA_TYPE_PAGES;
 				req->r_data_out.length = len;
 				req->r_data_out.alignment = 0;
diff --git a/fs/ceph/file.c b/fs/ceph/file.c
index 0e8230d..f341c90 100644
--- a/fs/ceph/file.c
+++ b/fs/ceph/file.c
@@ -475,14 +475,17 @@ static ssize_t ceph_sync_write(struct file *file, const char __user *data,
 	struct inode *inode = file->f_dentry->d_inode;
 	struct ceph_inode_info *ci = ceph_inode(inode);
 	struct ceph_fs_client *fsc = ceph_inode_to_client(inode);
+	struct ceph_snap_context *snapc;
+	struct ceph_vino vino;
 	struct ceph_osd_request *req;
+	struct ceph_osd_req_op ops[2];
+	int num_ops = 1;
 	struct page **pages;
 	int num_pages;
 	long long unsigned pos;
 	u64 len;
 	int written = 0;
 	int flags;
-	int do_sync = 0;
 	int check_caps = 0;
 	int page_align, io_align;
 	unsigned long buf_align;
@@ -516,7 +519,7 @@ static ssize_t ceph_sync_write(struct file *file, const char __user *data,
 	if ((file->f_flags & (O_SYNC|O_DIRECT)) == 0)
 		flags |= CEPH_OSD_FLAG_ACK;
 	else
-		do_sync = 1;
+		num_ops++;	/* Also include a 'startsync' command. */

 	/*
 	 * we may need to do multiple writes here if we span an object
@@ -527,16 +530,19 @@ more:
 	buf_align = (unsigned long)data & ~PAGE_MASK;
 	len = left;

+	snapc = ci->i_snap_realm->cached_context;
+	vino = ceph_vino(inode);
 	req = ceph_osdc_new_request(&fsc->client->osdc, &ci->i_layout,
-				    ceph_vino(inode), pos, &len,
-				    CEPH_OSD_OP_WRITE, flags,
-				    ci->i_snap_realm->cached_context,
-				    do_sync,
+				    vino, pos, &len, num_ops, ops,
+				    CEPH_OSD_OP_WRITE, flags, snapc,
 				    ci->i_truncate_seq, ci->i_truncate_size,
-				    &mtime, false);
+				    false);
 	if (IS_ERR(req))
 		return PTR_ERR(req);

+	ceph_osdc_build_request(req, pos, num_ops, ops,
+				snapc, vino.snap, &mtime);
+
 	/* write from beginning of first page, regardless of io alignment */
 	page_align = file->f_flags & O_DIRECT ? buf_align : io_align;
 	num_pages = calc_pages_for(page_align, len);
diff --git a/include/linux/ceph/osd_client.h b/include/linux/ceph/osd_client.h
index fdda93e..ffaf907 100644
--- a/include/linux/ceph/osd_client.h
+++ b/include/linux/ceph/osd_client.h
@@ -243,12 +243,12 @@ extern void osd_req_op_watch_init(struct ceph_osd_req_op *op, u16 opcode,

 extern struct ceph_osd_request *ceph_osdc_alloc_request(struct ceph_osd_client *osdc,
 					       struct ceph_snap_context *snapc,
-					       unsigned int num_op,
+					       unsigned int num_ops,
 					       bool use_mempool,
 					       gfp_t gfp_flags);

 extern void ceph_osdc_build_request(struct ceph_osd_request *req, u64 off,
-				    unsigned int num_op,
+				    unsigned int num_ops,
 				    struct ceph_osd_req_op *src_ops,
 				    struct ceph_snap_context *snapc,
 				    u64 snap_id,
@@ -257,11 +257,11 @@ extern void ceph_osdc_build_request(struct ceph_osd_request *req, u64 off,
 extern struct ceph_osd_request *ceph_osdc_new_request(struct ceph_osd_client *,
 				      struct ceph_file_layout *layout,
 				      struct ceph_vino vino,
-				      u64 offset, u64 *len, int op, int flags,
+				      u64 offset, u64 *len,
+				      int num_ops, struct ceph_osd_req_op *ops,
+				      int opcode, int flags,
 				      struct ceph_snap_context *snapc,
-				      int do_sync, u32 truncate_seq,
-				      u64 truncate_size,
-				      struct timespec *mtime,
+				      u32 truncate_seq, u64 truncate_size,
 				      bool use_mempool);

 extern void ceph_osdc_set_request_linger(struct ceph_osd_client *osdc,
diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
index 0b4951e..115790a 100644
--- a/net/ceph/osd_client.c
+++ b/net/ceph/osd_client.c
@@ -512,9 +512,7 @@ void ceph_osdc_build_request(struct ceph_osd_request *req,
 	msg->front.iov_len = msg_size;
 	msg->hdr.front_len = cpu_to_le32(msg_size);

-	dout("build_request msg_size was %d num_ops %d\n", (int)msg_size,
-	     num_ops);
-	return;
+	dout("build_request msg_size was %d\n", (int)msg_size);
 }
 EXPORT_SYMBOL(ceph_osdc_build_request);

@@ -532,18 +530,15 @@ EXPORT_SYMBOL(ceph_osdc_build_request);
 struct ceph_osd_request *ceph_osdc_new_request(struct ceph_osd_client *osdc,
 					       struct ceph_file_layout *layout,
 					       struct ceph_vino vino,
-					       u64 off, u64 *plen,
+					       u64 off, u64 *plen, int num_ops,
+					       struct ceph_osd_req_op *ops,
 					       int opcode, int flags,
 					       struct ceph_snap_context *snapc,
-					       int do_sync,
 					       u32 truncate_seq,
 					       u64 truncate_size,
-					       struct timespec *mtime,
 					       bool use_mempool)
 {
-	struct ceph_osd_req_op ops[2];
 	struct ceph_osd_request *req;
-	unsigned int num_op = do_sync ? 2 : 1;
 	u64 objnum = 0;
 	u64 objoff = 0;
 	u64 objlen = 0;
@@ -553,7 +548,7 @@ struct ceph_osd_request *ceph_osdc_new_request(struct ceph_osd_client *osdc,

 	BUG_ON(opcode != CEPH_OSD_OP_READ && opcode != CEPH_OSD_OP_WRITE);

-	req = ceph_osdc_alloc_request(osdc, snapc, num_op, use_mempool,
+	req = ceph_osdc_alloc_request(osdc, snapc, num_ops, use_mempool,
 					GFP_NOFS);
 	if (!req)
 		return ERR_PTR(-ENOMEM);
@@ -578,7 +573,12 @@ struct ceph_osd_request *ceph_osdc_new_request(struct ceph_osd_client *osdc,

 	osd_req_op_extent_init(&ops[0], opcode, objoff, objlen,
 				truncate_size, truncate_seq);
-	if (do_sync)
+	/*
+	 * A second op in the ops array means the caller wants to
+	 * also include a 'startsync' command so that the
+	 * osd will flush data quickly.
+	 */
+	if (num_ops > 1)
 		osd_req_op_init(&ops[1], CEPH_OSD_OP_STARTSYNC);

 	req->r_file_layout = *layout;  /* keep a copy */
@@ -587,9 +587,6 @@ struct ceph_osd_request *ceph_osdc_new_request(struct ceph_osd_client *osdc,
 		vino.ino, objnum);
 	req->r_oid_len = strlen(req->r_oid);

-	ceph_osdc_build_request(req, off, num_op, ops,
-				snapc, vino.snap, mtime);
-
 	return req;
 }
 EXPORT_SYMBOL(ceph_osdc_new_request);
@@ -2047,17 +2044,20 @@ int ceph_osdc_readpages(struct ceph_osd_client *osdc,
 {
 	struct ceph_osd_request *req;
 	struct ceph_osd_data *osd_data;
+	struct ceph_osd_req_op op;
 	int rc = 0;

 	dout("readpages on ino %llx.%llx on %llu~%llu\n", vino.ino,
 	     vino.snap, off, *plen);
-	req = ceph_osdc_new_request(osdc, layout, vino, off, plen,
+	req = ceph_osdc_new_request(osdc, layout, vino, off, plen, 1, &op,
 				    CEPH_OSD_OP_READ, CEPH_OSD_FLAG_READ,
-				    NULL, 0, truncate_seq, truncate_size, NULL,
+				    NULL, truncate_seq, truncate_size,
 				    false);
 	if (IS_ERR(req))
 		return PTR_ERR(req);

+	ceph_osdc_build_request(req, off, 1, &op, NULL, vino.snap, NULL);
+
 	/* it may be a short read due to an object boundary */

 	osd_data = &req->r_data_in;
@@ -2092,19 +2092,21 @@ int ceph_osdc_writepages(struct ceph_osd_client *osdc, struct ceph_vino vino,
 {
 	struct ceph_osd_request *req;
 	struct ceph_osd_data *osd_data;
+	struct ceph_osd_req_op op;
 	int rc = 0;
 	int page_align = off & ~PAGE_MASK;

-	BUG_ON(vino.snap != CEPH_NOSNAP);
-	req = ceph_osdc_new_request(osdc, layout, vino, off, &len,
+	BUG_ON(vino.snap != CEPH_NOSNAP);	/* snapshots aren't writeable */
+	req = ceph_osdc_new_request(osdc, layout, vino, off, &len, 1, &op,
 				    CEPH_OSD_OP_WRITE,
 				    CEPH_OSD_FLAG_ONDISK | CEPH_OSD_FLAG_WRITE,
-				    snapc, 0,
-				    truncate_seq, truncate_size, mtime,
+				    snapc, truncate_seq, truncate_size,
 				    true);
 	if (IS_ERR(req))
 		return PTR_ERR(req);

+	ceph_osdc_build_request(req, off, 1, &op, snapc, CEPH_NOSNAP, mtime);
+
 	/* it may be a short write due to an object boundary */
 	osd_data = &req->r_data_out;
 	osd_data->type = CEPH_OSD_DATA_TYPE_PAGES;
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 6/9] ceph: define ceph_writepages_osd_request()
  2013-04-04 16:16 [PATCH 0/9] Alex Elder
                   ` (4 preceding siblings ...)
  2013-04-04 16:19 ` [PATCH 5/9] libceph: don't build request in ceph_osdc_new_request() Alex Elder
@ 2013-04-04 16:19 ` Alex Elder
  2013-04-04 16:19 ` [PATCH 7/9] ceph: kill ceph alloc_page_vec() Alex Elder
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 20+ messages in thread
From: Alex Elder @ 2013-04-04 16:19 UTC (permalink / raw)
  To: ceph-devel

Mostly for readability, define ceph_writepages_osd_request() and
use it to allocate the osd request for ceph_writepages_start().

Signed-off-by: Alex Elder <elder@inktank.com>
---
 fs/ceph/addr.c |   34 ++++++++++++++++++++++++----------
 1 file changed, 24 insertions(+), 10 deletions(-)

diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index de7aac0..5b4ac17 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -654,6 +654,26 @@ static void alloc_page_vec(struct ceph_fs_client *fsc,
 	}
 }

+static struct ceph_osd_request *
+ceph_writepages_osd_request(struct inode *inode, u64 offset, u64 *len,
+				struct ceph_snap_context *snapc,
+				int num_ops, struct ceph_osd_req_op *ops)
+{
+	struct ceph_fs_client *fsc;
+	struct ceph_inode_info *ci;
+	struct ceph_vino vino;
+
+	fsc = ceph_inode_to_client(inode);
+	ci = ceph_inode(inode);
+	vino = ceph_vino(inode);
+	/* BUG_ON(vino.snap != CEPH_NOSNAP); */
+
+	return ceph_osdc_new_request(&fsc->client->osdc, &ci->i_layout,
+			vino, offset, len, num_ops, ops, CEPH_OSD_OP_WRITE,
+			CEPH_OSD_FLAG_WRITE|CEPH_OSD_FLAG_ONDISK,
+			snapc, ci->i_truncate_seq, ci->i_truncate_size, true);
+}
+
 /*
  * initiate async writeback
  */
@@ -835,16 +855,9 @@ get_more_pages:
 				/* prepare async write request */
 				offset = (u64) page_offset(page);
 				len = wsize;
-				vino = ceph_vino(inode);
-				/* BUG_ON(vino.snap != CEPH_NOSNAP); */
-				req = ceph_osdc_new_request(&fsc->client->osdc,
-					    &ci->i_layout, vino, offset, &len,
-					    num_ops, ops,
-					    CEPH_OSD_OP_WRITE,
-					    CEPH_OSD_FLAG_WRITE |
-						    CEPH_OSD_FLAG_ONDISK,
-					    snapc, ci->i_truncate_seq,
-					    ci->i_truncate_size, true);
+				req = ceph_writepages_osd_request(inode,
+							offset, &len, snapc,
+							num_ops, ops);

 				if (IS_ERR(req)) {
 					rc = PTR_ERR(req);
@@ -852,6 +865,7 @@ get_more_pages:
 					break;
 				}

+				vino = ceph_vino(inode);
 				ceph_osdc_build_request(req, offset,
 					num_ops, ops, snapc, vino.snap,
 					&inode->i_mtime);
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 7/9] ceph: kill ceph alloc_page_vec()
  2013-04-04 16:16 [PATCH 0/9] Alex Elder
                   ` (5 preceding siblings ...)
  2013-04-04 16:19 ` [PATCH 6/9] ceph: define ceph_writepages_osd_request() Alex Elder
@ 2013-04-04 16:19 ` Alex Elder
  2013-04-04 16:20 ` [PATCH 8/9] libceph: hold off building osd request Alex Elder
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 20+ messages in thread
From: Alex Elder @ 2013-04-04 16:19 UTC (permalink / raw)
  To: ceph-devel

There is a helper function alloc_page_vec() that, despite its
generic-sounding name, depends heavily on an osd request structure
being populated with certain information.

There is only one place this function is used, and it ends up
being a bit simpler to just open code what it does, so get
rid of the helper.

The real motivation for this is deferring the building of the osd
request message, and this is a step in that direction.

Signed-off-by: Alex Elder <elder@inktank.com>
---
 fs/ceph/addr.c |   45 ++++++++++++++++++---------------------------
 1 file changed, 18 insertions(+), 27 deletions(-)

diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 5b4ac17..e976c6d 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -631,29 +631,6 @@ static void writepages_finish(struct ceph_osd_request *req,
 	ceph_osdc_put_request(req);
 }

-/*
- * allocate a page vec, either directly, or if necessary, via a the
- * mempool.  we avoid the mempool if we can because req->r_data_out.length
- * may be less than the maximum write size.
- */
-static void alloc_page_vec(struct ceph_fs_client *fsc,
-			   struct ceph_osd_request *req)
-{
-	size_t size;
-	int num_pages;
-
-	num_pages = calc_pages_for((u64)req->r_data_out.alignment,
-					(u64)req->r_data_out.length);
-	size = sizeof (struct page *) * num_pages;
-	req->r_data_out.pages = kmalloc(size, GFP_NOFS);
-	if (!req->r_data_out.pages) {
-		req->r_data_out.pages = mempool_alloc(fsc->wb_pagevec_pool,
-							GFP_NOFS);
-		req->r_data_out.pages_from_pool = 1;
-		WARN_ON(!req->r_data_out.pages);
-	}
-}
-
 static struct ceph_osd_request *
 ceph_writepages_osd_request(struct inode *inode, u64 offset, u64 *len,
 				struct ceph_snap_context *snapc,
@@ -851,6 +828,9 @@ get_more_pages:
 			if (locked_pages == 0) {
 				struct ceph_vino vino;
 				int num_ops = do_sync ? 2 : 1;
+				size_t size;
+				struct page **pages;
+				mempool_t *pool = NULL;

 				/* prepare async write request */
 				offset = (u64) page_offset(page);
@@ -870,13 +850,24 @@ get_more_pages:
 					num_ops, ops, snapc, vino.snap,
 					&inode->i_mtime);

+				req->r_callback = writepages_finish;
+				req->r_inode = inode;
+
+				max_pages = calc_pages_for(0, (u64)len);
+				size = max_pages * sizeof (*pages);
+				pages = kmalloc(size, GFP_NOFS);
+				if (!pages) {
+					pool = fsc->wb_pagevec_pool;
+
+					pages = mempool_alloc(pool, GFP_NOFS);
+					WARN_ON(!pages);
+				}
+
+				req->r_data_out.pages = pages;
+				req->r_data_out.pages_from_pool = !!pool;
 				req->r_data_out.type = CEPH_OSD_DATA_TYPE_PAGES;
 				req->r_data_out.length = len;
 				req->r_data_out.alignment = 0;
-				max_pages = calc_pages_for(0, (u64)len);
-				alloc_page_vec(fsc, req);
-				req->r_callback = writepages_finish;
-				req->r_inode = inode;
 			}

 			/* note position of first page in pvec */
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 8/9] libceph: hold off building osd request
  2013-04-04 16:16 [PATCH 0/9] Alex Elder
                   ` (6 preceding siblings ...)
  2013-04-04 16:19 ` [PATCH 7/9] ceph: kill ceph alloc_page_vec() Alex Elder
@ 2013-04-04 16:20 ` Alex Elder
  2013-04-04 16:20 ` [PATCH 9/9] ceph: build osd request message later for writepages Alex Elder
  2013-04-05  3:03 ` [PATCH 0/9] Josh Durgin
  9 siblings, 0 replies; 20+ messages in thread
From: Alex Elder @ 2013-04-04 16:20 UTC (permalink / raw)
  To: ceph-devel

Defer building the osd request until just before submitting it in
all callers except ceph_writepages_start().  (That caller will be
handled in the next patch.)

Signed-off-by: Alex Elder <elder@inktank.com>
---
 fs/ceph/addr.c        |    4 ++--
 fs/ceph/file.c        |    7 ++++---
 net/ceph/osd_client.c |    8 ++++----
 3 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index e976c6d..125d0a8 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -319,8 +319,6 @@ static int start_read(struct inode *inode, struct list_head *page_list, int max)
 	if (IS_ERR(req))
 		return PTR_ERR(req);

-	ceph_osdc_build_request(req, off, 1, &op, NULL, vino.snap, NULL);
-
 	/* build page vector */
 	nr_pages = calc_pages_for(0, len);
 	pages = kmalloc(sizeof(*pages) * nr_pages, GFP_NOFS);
@@ -351,6 +349,8 @@ static int start_read(struct inode *inode, struct list_head *page_list, int max)
 	req->r_callback = finish_read;
 	req->r_inode = inode;

+	ceph_osdc_build_request(req, off, 1, &op, NULL, vino.snap, NULL);
+
 	dout("start_read %p starting %p %lld~%lld\n", inode, req, off, len);
 	ret = ceph_osdc_start_request(osdc, req, false);
 	if (ret < 0)
diff --git a/fs/ceph/file.c b/fs/ceph/file.c
index f341c90..66b8469 100644
--- a/fs/ceph/file.c
+++ b/fs/ceph/file.c
@@ -540,9 +540,6 @@ more:
 	if (IS_ERR(req))
 		return PTR_ERR(req);

-	ceph_osdc_build_request(req, pos, num_ops, ops,
-				snapc, vino.snap, &mtime);
-
 	/* write from beginning of first page, regardless of io alignment */
 	page_align = file->f_flags & O_DIRECT ? buf_align : io_align;
 	num_pages = calc_pages_for(page_align, len);
@@ -583,6 +580,10 @@ more:
 	req->r_data_out.alignment = page_align;
 	req->r_inode = inode;

+	/* BUG_ON(vino.snap != CEPH_NOSNAP); */
+	ceph_osdc_build_request(req, pos, num_ops, ops,
+				snapc, vino.snap, &mtime);
+
 	ret = ceph_osdc_start_request(&fsc->client->osdc, req, false);
 	if (!ret) {
 		if (req->r_safe_callback) {
diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
index 115790a..9ca693d 100644
--- a/net/ceph/osd_client.c
+++ b/net/ceph/osd_client.c
@@ -2056,8 +2056,6 @@ int ceph_osdc_readpages(struct ceph_osd_client *osdc,
 	if (IS_ERR(req))
 		return PTR_ERR(req);

-	ceph_osdc_build_request(req, off, 1, &op, NULL, vino.snap, NULL);
-
 	/* it may be a short read due to an object boundary */

 	osd_data = &req->r_data_in;
@@ -2069,6 +2067,8 @@ int ceph_osdc_readpages(struct ceph_osd_client *osdc,
 	dout("readpages  final extent is %llu~%llu (%llu bytes align %d)\n",
 	     off, *plen, osd_data->length, page_align);

+	ceph_osdc_build_request(req, off, 1, &op, NULL, vino.snap, NULL);
+
 	rc = ceph_osdc_start_request(osdc, req, false);
 	if (!rc)
 		rc = ceph_osdc_wait_request(osdc, req);
@@ -2105,8 +2105,6 @@ int ceph_osdc_writepages(struct ceph_osd_client *osdc, struct ceph_vino vino,
 	if (IS_ERR(req))
 		return PTR_ERR(req);

-	ceph_osdc_build_request(req, off, 1, &op, snapc, CEPH_NOSNAP, mtime);
-
 	/* it may be a short write due to an object boundary */
 	osd_data = &req->r_data_out;
 	osd_data->type = CEPH_OSD_DATA_TYPE_PAGES;
@@ -2115,6 +2113,8 @@ int ceph_osdc_writepages(struct ceph_osd_client *osdc, struct ceph_vino vino,
 	osd_data->alignment = page_align;
 	dout("writepages %llu~%llu (%llu bytes)\n", off, len, osd_data->length);

+	ceph_osdc_build_request(req, off, 1, &op, snapc, CEPH_NOSNAP, mtime);
+
 	rc = ceph_osdc_start_request(osdc, req, true);
 	if (!rc)
 		rc = ceph_osdc_wait_request(osdc, req);
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 9/9] ceph: build osd request message later for writepages
  2013-04-04 16:16 [PATCH 0/9] Alex Elder
                   ` (7 preceding siblings ...)
  2013-04-04 16:20 ` [PATCH 8/9] libceph: hold off building osd request Alex Elder
@ 2013-04-04 16:20 ` Alex Elder
  2013-04-05  3:03 ` [PATCH 0/9] Josh Durgin
  9 siblings, 0 replies; 20+ messages in thread
From: Alex Elder @ 2013-04-04 16:20 UTC (permalink / raw)
  To: ceph-devel

Hold off building the osd request message in ceph_writepages_start()
until just before it will be submitted to the osd client for
execution.

We'll still create the request and allocate the page pointer array
after we learn we have at least one page to write.  A local variable
will be used to keep track of the allocated array of pages.  Wait
until just before submitting the request to assign that page array
pointer to the request message.

Create and use a new function, osd_req_op_extent_update(), whose
purpose is to serve this one spot, where the length value supplied
when an osd request's op was initially formatted might need to be
changed (reduced, never increased) before the request is submitted.

Previously, ceph_writepages_start() assigned the message header's
data length because of this update.  That's no longer necessary,
because ceph_osdc_build_request() will recalculate the right
value to use based on the content of the ops in the request.

Signed-off-by: Alex Elder <elder@inktank.com>
---
 fs/ceph/addr.c                  |   59 ++++++++++++++++++++++-----------------
 include/linux/ceph/osd_client.h |    1 +
 net/ceph/osd_client.c           |   13 +++++++++
 3 files changed, 47 insertions(+), 26 deletions(-)

diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 125d0a8..e0dd74c 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -737,10 +737,14 @@ retry:

 	while (!done && index <= end) {
 		struct ceph_osd_req_op ops[2];
+		int num_ops = do_sync ? 2 : 1;
+		struct ceph_vino vino;
 		unsigned i;
 		int first;
 		pgoff_t next;
 		int pvec_pages, locked_pages;
+		struct page **pages = NULL;
+		mempool_t *pool = NULL;	/* Becomes non-null if mempool used */
 		struct page *page;
 		int want;
 		u64 offset, len;
@@ -824,16 +828,19 @@ get_more_pages:
 				break;
 			}

-			/* ok */
+			/*
+			 * We have something to write.  If this is
+			 * the first locked page this time through,
+			 * allocate an osd request and a page array
+			 * that it will use.
+			 */
 			if (locked_pages == 0) {
-				struct ceph_vino vino;
-				int num_ops = do_sync ? 2 : 1;
 				size_t size;
-				struct page **pages;
-				mempool_t *pool = NULL;
+
+				BUG_ON(pages);

 				/* prepare async write request */
-				offset = (u64) page_offset(page);
+				offset = (u64)page_offset(page);
 				len = wsize;
 				req = ceph_writepages_osd_request(inode,
 							offset, &len, snapc,
@@ -845,11 +852,6 @@ get_more_pages:
 					break;
 				}

-				vino = ceph_vino(inode);
-				ceph_osdc_build_request(req, offset,
-					num_ops, ops, snapc, vino.snap,
-					&inode->i_mtime);
-
 				req->r_callback = writepages_finish;
 				req->r_inode = inode;

@@ -858,16 +860,9 @@ get_more_pages:
 				pages = kmalloc(size, GFP_NOFS);
 				if (!pages) {
 					pool = fsc->wb_pagevec_pool;
-
 					pages = mempool_alloc(pool, GFP_NOFS);
-					WARN_ON(!pages);
+					BUG_ON(!pages);
 				}
-
-				req->r_data_out.pages = pages;
-				req->r_data_out.pages_from_pool = !!pool;
-				req->r_data_out.type = CEPH_OSD_DATA_TYPE_PAGES;
-				req->r_data_out.length = len;
-				req->r_data_out.alignment = 0;
 			}

 			/* note position of first page in pvec */
@@ -885,7 +880,7 @@ get_more_pages:
 			}

 			set_page_writeback(page);
-			req->r_data_out.pages[locked_pages] = page;
+			pages[locked_pages] = page;
 			locked_pages++;
 			next = page->index + 1;
 		}
@@ -914,18 +909,30 @@ get_more_pages:
 			pvec.nr -= i-first;
 		}

-		/* submit the write */
-		offset = page_offset(req->r_data_out.pages[0]);
+		/* Format the osd request message and submit the write */
+
+		offset = page_offset(pages[0]);
 		len = min((snap_size ? snap_size : i_size_read(inode)) - offset,
 			  (u64)locked_pages << PAGE_CACHE_SHIFT);
 		dout("writepages got %d pages at %llu~%llu\n",
 		     locked_pages, offset, len);

-		/* revise final length, page count */
+		req->r_data_out.type = CEPH_OSD_DATA_TYPE_PAGES;
+		req->r_data_out.pages = pages;
 		req->r_data_out.length = len;
-		req->r_request_ops[0].extent.length = cpu_to_le64(len);
-		req->r_request_ops[0].payload_len = cpu_to_le32(len);
-		req->r_request->hdr.data_len = cpu_to_le32(len);
+		req->r_data_out.alignment = 0;
+		req->r_data_out.pages_from_pool = !!pool;
+
+		pages = NULL;	/* request message now owns the pages array */
+		pool = NULL;
+
+		/* Update the write op length in case we changed it */
+
+		osd_req_op_extent_update(&ops[0], len);
+
+		vino = ceph_vino(inode);
+		ceph_osdc_build_request(req, offset, num_ops, ops,
+					snapc, vino.snap, &inode->i_mtime);

 		rc = ceph_osdc_start_request(&fsc->client->osdc, req, true);
 		BUG_ON(rc);
diff --git a/include/linux/ceph/osd_client.h b/include/linux/ceph/osd_client.h
index ffaf907..5ee1a37 100644
--- a/include/linux/ceph/osd_client.h
+++ b/include/linux/ceph/osd_client.h
@@ -234,6 +234,7 @@ extern void osd_req_op_init(struct ceph_osd_req_op *op, u16 opcode);
 extern void osd_req_op_extent_init(struct ceph_osd_req_op *op, u16 opcode,
 					u64 offset, u64 length,
 					u64 truncate_size, u32 truncate_seq);
+extern void osd_req_op_extent_update(struct ceph_osd_req_op *op, u64 length);
 extern void osd_req_op_cls_init(struct ceph_osd_req_op *op, u16 opcode,
 					const char *class, const char *method,
 					const void *request_data,
diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
index 9ca693d..426ca1f 100644
--- a/net/ceph/osd_client.c
+++ b/net/ceph/osd_client.c
@@ -296,6 +296,19 @@ void osd_req_op_extent_init(struct ceph_osd_req_op *op, u16 opcode,
 }
 EXPORT_SYMBOL(osd_req_op_extent_init);

+void osd_req_op_extent_update(struct ceph_osd_req_op *op, u64 length)
+{
+	u64 previous = op->extent.length;
+
+	if (length == previous)
+		return;		/* Nothing to do */
+	BUG_ON(length > previous);
+
+	op->extent.length = length;
+	op->payload_len -= previous - length;
+}
+EXPORT_SYMBOL(osd_req_op_extent_update);
+
 void osd_req_op_cls_init(struct ceph_osd_req_op *op, u16 opcode,
 			const char *class, const char *method,
 			const void *request_data, size_t request_data_size)
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 4/9, v2] libceph: record message data length
  2013-04-04 16:19 ` [PATCH 4/9] libceph: record message data length Alex Elder
@ 2013-04-04 18:34   ` Alex Elder
  0 siblings, 0 replies; 20+ messages in thread
From: Alex Elder @ 2013-04-04 18:34 UTC (permalink / raw)
  To: ceph-devel

I found a problem whose fix belongs in this patch.
When the data field is reset in ceph_msg_last_put(),
the data_length field should be reset to zero as well.
Not doing so triggered an assertion for a re-used
message that came from a message pool.  The branch
"review/wip-3761" has been updated to reflect this
change.

					-Alex




Keep track of the length of the data portion for a message in a
separate field in the ceph_msg structure.  This information has
been maintained in wire byte order in the message header, but
that's going to change soon.

Signed-off-by: Alex Elder <elder@inktank.com>
---
v2:  Reset data_length to 0 in ceph_msg_last_put().

 include/linux/ceph/messenger.h |    4 +++-
 net/ceph/messenger.c           |   10 +++++++++-
 net/ceph/osd_client.c          |    2 +-
 3 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/include/linux/ceph/messenger.h b/include/linux/ceph/messenger.h
index 3181321..b832c0c 100644
--- a/include/linux/ceph/messenger.h
+++ b/include/linux/ceph/messenger.h
@@ -139,6 +139,7 @@ struct ceph_msg {
 	struct kvec front;              /* unaligned blobs of message */
 	struct ceph_buffer *middle;

+	size_t			data_length;
 	struct ceph_msg_data	*data;	/* data payload */

 	struct ceph_connection *con;
@@ -270,7 +271,8 @@ extern void ceph_msg_data_set_pages(struct ceph_msg *msg, struct page **pages,
 				size_t length, size_t alignment);
 extern void ceph_msg_data_set_pagelist(struct ceph_msg *msg,
 				struct ceph_pagelist *pagelist);
-extern void ceph_msg_data_set_bio(struct ceph_msg *msg, struct bio *bio);
+extern void ceph_msg_data_set_bio(struct ceph_msg *msg, struct bio *bio,
+				size_t length);

 extern struct ceph_msg *ceph_msg_new(int type, int front_len, gfp_t flags,
 				     bool can_fail);
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index ee16086..fa9b4d0 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -2981,6 +2981,7 @@ void ceph_msg_data_set_pages(struct ceph_msg *msg, struct page **pages,

 	BUG_ON(!pages);
 	BUG_ON(!length);
+	BUG_ON(msg->data_length);
 	BUG_ON(msg->data != NULL);

 	data = ceph_msg_data_create(CEPH_MSG_DATA_PAGES);
@@ -2990,6 +2991,7 @@ void ceph_msg_data_set_pages(struct ceph_msg *msg, struct page **pages,
 	data->alignment = alignment & ~PAGE_MASK;

 	msg->data = data;
+	msg->data_length = length;
 }
 EXPORT_SYMBOL(ceph_msg_data_set_pages);

@@ -3000,6 +3002,7 @@ void ceph_msg_data_set_pagelist(struct ceph_msg *msg,

 	BUG_ON(!pagelist);
 	BUG_ON(!pagelist->length);
+	BUG_ON(msg->data_length);
 	BUG_ON(msg->data != NULL);

 	data = ceph_msg_data_create(CEPH_MSG_DATA_PAGELIST);
@@ -3007,14 +3010,17 @@ void ceph_msg_data_set_pagelist(struct ceph_msg *msg,
 	data->pagelist = pagelist;

 	msg->data = data;
+	msg->data_length = pagelist->length;
 }
 EXPORT_SYMBOL(ceph_msg_data_set_pagelist);

-void ceph_msg_data_set_bio(struct ceph_msg *msg, struct bio *bio)
+void ceph_msg_data_set_bio(struct ceph_msg *msg, struct bio *bio,
+		size_t length)
 {
 	struct ceph_msg_data *data;

 	BUG_ON(!bio);
+	BUG_ON(msg->data_length);
 	BUG_ON(msg->data != NULL);

 	data = ceph_msg_data_create(CEPH_MSG_DATA_BIO);
@@ -3022,6 +3028,7 @@ void ceph_msg_data_set_bio(struct ceph_msg *msg, struct bio *bio)
 	data->bio = bio;

 	msg->data = data;
+	msg->data_length = length;
 }
 EXPORT_SYMBOL(ceph_msg_data_set_bio);

@@ -3200,6 +3207,7 @@ void ceph_msg_last_put(struct kref *kref)
 	}
 	ceph_msg_data_destroy(m->data);
 	m->data = NULL;
+	m->data_length = 0;

 	if (m->pool)
 		ceph_msgpool_put(m->pool, m);
diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
index e088792..0b4951e 100644
--- a/net/ceph/osd_client.c
+++ b/net/ceph/osd_client.c
@@ -1848,7 +1848,7 @@ static void ceph_osdc_msg_data_set(struct ceph_msg *msg,
 		ceph_msg_data_set_pagelist(msg, osd_data->pagelist);
 #ifdef CONFIG_BLOCK
 	} else if (osd_data->type == CEPH_OSD_DATA_TYPE_BIO) {
-		ceph_msg_data_set_bio(msg, osd_data->bio);
+		ceph_msg_data_set_bio(msg, osd_data->bio, osd_data->bio_length);
 #endif
 	} else {
 		BUG_ON(osd_data->type != CEPH_OSD_DATA_TYPE_NONE);
-- 
1.7.9.5



^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [PATCH 0/9]
  2013-04-04 16:16 [PATCH 0/9] Alex Elder
                   ` (8 preceding siblings ...)
  2013-04-04 16:20 ` [PATCH 9/9] ceph: build osd request message later for writepages Alex Elder
@ 2013-04-05  3:03 ` Josh Durgin
  2013-04-05 12:09   ` Alex Elder
  9 siblings, 1 reply; 20+ messages in thread
From: Josh Durgin @ 2013-04-05  3:03 UTC (permalink / raw)
  To: Alex Elder; +Cc: ceph-devel

On 04/04/2013 09:16 AM, Alex Elder wrote:
> (The following patches are available in branch "review/wip-3761"
> on the ceph-client git repository.)
>
> These are actually a few sets of patches but I'm just going to
> post them as a single series this time.
>
> 					-Alex
>
> [PATCH 1/9] ceph: use page_offset() in ceph_writepages_start()
>      Fixes a potential bug in ceph_writepages_start().
>
> [PATCH 2/9] libceph: drop ceph_osd_request->r_con_filling_msg
>      Removes a no-longer-needed field, to simplify code.
>
> [PATCH 3/9] libceph: record length of bio list with bio
> [PATCH 4/9] libceph: record message data length
>      Has each message maintain its data length, so we can
>      avoid depending on what's in the message's header.
>
> [PATCH 5/9] libceph: don't build request in ceph_osdc_new_request()
> [PATCH 6/9] ceph: define ceph_writepages_osd_request()
> [PATCH 7/9] ceph: kill ceph alloc_page_vec()
> [PATCH 8/9] libceph: hold off building osd request
> [PATCH 9/9] ceph: build osd request message later for writepages
>      Defers "building" a request message until right before
>      it's submitted to the osd client to start its execution.
>      Also stops having the length field in a message header
>      get updated by the file system code.

These all look good. The one thing I'm uncertain about is changing
the mempool allocation failure from a WARN to a BUG, but it seems
there's no good way to recover at that point.

Reviewed-by: Josh Durgin <josh.durgin@inktank.com>


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 0/9]
  2013-04-05  3:03 ` [PATCH 0/9] Josh Durgin
@ 2013-04-05 12:09   ` Alex Elder
  0 siblings, 0 replies; 20+ messages in thread
From: Alex Elder @ 2013-04-05 12:09 UTC (permalink / raw)
  To: Josh Durgin; +Cc: ceph-devel

On 04/04/2013 10:03 PM, Josh Durgin wrote:
> On 04/04/2013 09:16 AM, Alex Elder wrote:
>> (The following patches are available in branch "review/wip-3761"
>> on the ceph-client git repository.)
>>
>> These are actually a few sets of patches but I'm just going to
>> post them as a single series this time.
>>
>>                     -Alex
>>
>> [PATCH 1/9] ceph: use page_offset() in ceph_writepages_start()
>>      Fixes a potential bug in ceph_writepages_start().

. . .

>> [PATCH 9/9] ceph: build osd request message later for writepages
>>      Defers "building" a request message until right before
>>      it's submitted to the osd client to start its execution.
>>      Also stops having the length field in a message header
>>      get updated by the file system code.
> 
> These all look good. The one thing I'm uncertain about is changing
> the mempool allocation failure from a WARN to a BUG, but it seems
> there's no good way to recover at that point.

It's reality.  About 20 lines later, pages is dereferenced.
I think it's better to stop at the point of the failure
and report exactly where it occurred than to (most likely)
crash more mysteriously a little later on.

If we exhaust the mempool, it wasn't big enough, and
that's a bug in the size of the mempool or the design.
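As a rough userspace illustration of the pattern being discussed (try the normal allocator first, fall back to a fixed reserve, and treat reserve exhaustion as a hard bug), with malloc(), a static array, and assert() standing in for kmalloc(), the mempool, and BUG_ON(); all names here are illustrative stand-ins, not the kernel API:

```c
#include <assert.h>
#include <stdlib.h>

#define POOL_SLOTS 2
#define SLOT_BYTES 4096

/* Userspace stand-in for a mempool: a fixed reserve that either hands
 * out a slot or reports exhaustion (the condition BUG_ON guards). */
static char reserve[POOL_SLOTS][SLOT_BYTES];
static int reserve_used[POOL_SLOTS];

static void *pool_alloc(void)
{
	for (int i = 0; i < POOL_SLOTS; i++)
		if (!reserve_used[i]) {
			reserve_used[i] = 1;
			return reserve[i];
		}
	return NULL;		/* reserve exhausted: it was sized too small */
}

/* Mirror of the patch's logic: try the ordinary allocator first and
 * fall back to the reserve, asserting the reserve never runs dry.
 * simulate_oom forces the fallback path for demonstration. */
static void *alloc_page_array(size_t size, int simulate_oom, int *from_pool)
{
	void *pages = simulate_oom ? NULL : malloc(size);

	*from_pool = 0;
	if (!pages) {
		pages = pool_alloc();
		assert(pages);	/* analogue of BUG_ON(!pages) */
		*from_pool = 1;
	}
	return pages;
}
```

The assert() mirrors the argument above: if the reserve can ever run dry, the failure is in how the reserve was sized, so stopping immediately at the allocation site beats dereferencing a NULL pointer a few lines later.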

Thanks a lot for the review.  More on their way shortly.

					-Alex

> Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
> 


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 0/9]
  2017-04-14 21:06 kusumi.tomohiro
@ 2017-04-26 18:43 ` Jens Axboe
  0 siblings, 0 replies; 20+ messages in thread
From: Jens Axboe @ 2017-04-26 18:43 UTC (permalink / raw)
  To: kusumi.tomohiro; +Cc: fio, Tomohiro Kusumi

On Sat, Apr 15 2017, kusumi.tomohiro@gmail.com wrote:
> From: Tomohiro Kusumi <tkusumi@tuxera.com>
> 
> "Fix num2str() output when modulo != -1U" is a resend of the patch
> below, which I believe has been neither acked nor nacked.
> http://www.spinics.net/lists/fio/msg05719.html
> 
> The patch is the same, but I added a link to the comment below to the commit message.
> http://www.spinics.net/lists/fio/msg05720.html

Applied, thanks.

-- 
Jens Axboe



^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH 0/9]
@ 2017-04-14 21:06 kusumi.tomohiro
  2017-04-26 18:43 ` Jens Axboe
  0 siblings, 1 reply; 20+ messages in thread
From: kusumi.tomohiro @ 2017-04-14 21:06 UTC (permalink / raw)
  To: axboe, fio; +Cc: Tomohiro Kusumi

From: Tomohiro Kusumi <tkusumi@tuxera.com>

"Fix num2str() output when modulo != -1U" is a resend of the patch
below, which I believe has been neither acked nor nacked.
http://www.spinics.net/lists/fio/msg05719.html

The patch is the same, but I added a link to the comment below to the commit message.
http://www.spinics.net/lists/fio/msg05720.html

All the rest are new ones.

Tomohiro Kusumi (9):
  Fix num2str() output when modulo != -1U
  Drop the only local variable declaration within a for-loop (C99)
  Make lib/strntol.c a stand-alone library
  Make lib/pattern.c a stand-alone library
  Make lib/rand.c a stand-alone library
  Make lib/zipf.c a stand-alone library
  Make lib/mountcheck.c a stand-alone library
  Make oslib/strlcat.c a stand-alone library
  Make oslib/linux-dev-lookup.c a stand-alone library

 lib/mountcheck.c         |  2 +-
 lib/num2str.c            | 34 +++++++++++++++++++++-------------
 lib/pattern.c            |  9 ++++++++-
 lib/rand.c               |  2 +-
 lib/strntol.c            |  2 +-
 lib/zipf.c               |  1 -
 oslib/linux-dev-lookup.c |  3 +--
 oslib/strlcat.c          |  2 +-
 parse.c                  |  3 ++-
 9 files changed, 36 insertions(+), 22 deletions(-)

-- 
2.9.3



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 0/9]
  2014-06-25  1:00 ` Suman Anna
@ 2014-06-25  1:09   ` Suman Anna
  -1 siblings, 0 replies; 20+ messages in thread
From: Suman Anna @ 2014-06-25  1:09 UTC (permalink / raw)
  To: Tony Lindgren
  Cc: Paul Walmsley, Dave Gerlach, Jassi Brar, linux-omap,
	linux-arm-kernel, devicetree

On 06/24/2014 08:00 PM, Suman Anna wrote:
> Hi Tony, Paul,

Please ignore this, resent the same message and the series with the
proper subject.

regards
Suman

> 
> This patch series adds the minimal mailbox DT nodes for the SoCs that are
> currently missing them (OMAP4, AM335x, DRA7). It also limits the legacy
> mailbox platform device creation only for non-DT boot, and cleans up the
> legacy hwmod addresses and attributes used for creating the sub-mailbox
> devices. The sub-mailboxes in DT boot are not created until the OMAP
> mailbox DT adoption series; this is not an issue, since some of the
> other hwmod data required for using legacy-mode devices has already
> been cleaned up.
> 
> The patches are based on 3.16-rc2. The series does not have any order
> dependencies with the OMAP mailbox cleanup series [1], and can be applied
> in any order. The following shows the boot logs on various OMAP platforms
> with just these patches on top of 3.16-rc2:
>   OMAP2 (SDP2430)           : http://slexy.org/view/s21gGdJxXP
>   OMAP3 (BeagleXM)          : http://slexy.org/view/s2n8Pc83Rp
>   OMAP4 (PandaBoard)        : http://slexy.org/view/s21StNWKPz
>   OMAP5 (OMAP5 uEVM)        : http://slexy.org/view/s2y3t6HZtk
>   DRA7  (DRA7 EVM)          : http://slexy.org/view/s2qY23Mt97
>   AM33xx (BeagleBone Black) : http://slexy.org/view/s2ce8jj35O
>   AM43xx (AM437x GP EVM)    : http://slexy.org/view/s2nttmOLSq
> 
> [1] http://marc.info/?l=linux-omap&m=140365705821115&w=2
> 
> Suman Anna (9):
>   ARM: dts: OMAP4: Add mailbox node
>   ARM: dts: AM33xx: Add mailbox node
>   ARM: dts: AM4372: Correct mailbox node data
>   ARM: dts: DRA7: Add mailbox nodes
>   ARM: DRA7: hwmod_data: Add mailbox hwmod data
>   ARM: OMAP2+: Avoid mailbox legacy device creation for DT-boot
>   ARM: OMAP2: hwmod_data: Remove legacy mailbox data and addrs
>   ARM: OMAP4: hwmod_data: Remove legacy mailbox addrs
>   ARM: AM33xx: hwmod_data: Remove legacy mailbox addrs
> 
>  arch/arm/boot/dts/am33xx.dtsi                      |   7 +
>  arch/arm/boot/dts/am4372.dtsi                      |   7 +-
>  arch/arm/boot/dts/dra7.dtsi                        |  91 ++++++
>  arch/arm/boot/dts/omap4.dtsi                       |   7 +
>  arch/arm/mach-omap2/devices.c                      |   2 +-
>  arch/arm/mach-omap2/omap_hwmod_2420_data.c         |  14 -
>  arch/arm/mach-omap2/omap_hwmod_2430_data.c         |  13 -
>  .../omap_hwmod_2xxx_3xxx_interconnect_data.c       |   9 -
>  .../omap_hwmod_33xx_43xx_interconnect_data.c       |  10 -
>  arch/arm/mach-omap2/omap_hwmod_44xx_data.c         |  10 -
>  arch/arm/mach-omap2/omap_hwmod_7xx_data.c          | 305 +++++++++++++++++++++
>  arch/arm/mach-omap2/omap_hwmod_common_data.h       |   1 -
>  12 files changed, 412 insertions(+), 64 deletions(-)
> 


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH 0/9]
@ 2014-06-25  1:09   ` Suman Anna
  0 siblings, 0 replies; 20+ messages in thread
From: Suman Anna @ 2014-06-25  1:09 UTC (permalink / raw)
  To: linux-arm-kernel

On 06/24/2014 08:00 PM, Suman Anna wrote:
> Hi Tony, Paul,

Please ignore this, resent the same message and the series with the
proper subject.

regards
Suman

> 
> This patch series adds the minimal mailbox DT nodes for the SoCs that are
> currently missing them (OMAP4, AM335x, DRA7). It also limits the legacy
> mailbox platform device creation only for non-DT boot, and cleans up the
> legacy hwmod addresses and attributes used for creating the sub-mailbox
> devices. The sub-mailboxes in DT boot are not created until the OMAP
> mailbox DT adoption series; this is not an issue, since some of the
> other hwmod data required for using legacy-mode devices has already
> been cleaned up.
> 
> The patches are based on 3.16-rc2. The series does not have any order
> dependencies with the OMAP mailbox cleanup series [1], and can be applied
> in any order. The following shows the boot logs on various OMAP platforms
> with just these patches on top of 3.16-rc2:
>   OMAP2 (SDP2430)           : http://slexy.org/view/s21gGdJxXP
>   OMAP3 (BeagleXM)          : http://slexy.org/view/s2n8Pc83Rp
>   OMAP4 (PandaBoard)        : http://slexy.org/view/s21StNWKPz
>   OMAP5 (OMAP5 uEVM)        : http://slexy.org/view/s2y3t6HZtk
>   DRA7  (DRA7 EVM)          : http://slexy.org/view/s2qY23Mt97
>   AM33xx (BeagleBone Black) : http://slexy.org/view/s2ce8jj35O
>   AM43xx (AM437x GP EVM)    : http://slexy.org/view/s2nttmOLSq
> 
> [1] http://marc.info/?l=linux-omap&m=140365705821115&w=2
> 
> Suman Anna (9):
>   ARM: dts: OMAP4: Add mailbox node
>   ARM: dts: AM33xx: Add mailbox node
>   ARM: dts: AM4372: Correct mailbox node data
>   ARM: dts: DRA7: Add mailbox nodes
>   ARM: DRA7: hwmod_data: Add mailbox hwmod data
>   ARM: OMAP2+: Avoid mailbox legacy device creation for DT-boot
>   ARM: OMAP2: hwmod_data: Remove legacy mailbox data and addrs
>   ARM: OMAP4: hwmod_data: Remove legacy mailbox addrs
>   ARM: AM33xx: hwmod_data: Remove legacy mailbox addrs
> 
>  arch/arm/boot/dts/am33xx.dtsi                      |   7 +
>  arch/arm/boot/dts/am4372.dtsi                      |   7 +-
>  arch/arm/boot/dts/dra7.dtsi                        |  91 ++++++
>  arch/arm/boot/dts/omap4.dtsi                       |   7 +
>  arch/arm/mach-omap2/devices.c                      |   2 +-
>  arch/arm/mach-omap2/omap_hwmod_2420_data.c         |  14 -
>  arch/arm/mach-omap2/omap_hwmod_2430_data.c         |  13 -
>  .../omap_hwmod_2xxx_3xxx_interconnect_data.c       |   9 -
>  .../omap_hwmod_33xx_43xx_interconnect_data.c       |  10 -
>  arch/arm/mach-omap2/omap_hwmod_44xx_data.c         |  10 -
>  arch/arm/mach-omap2/omap_hwmod_7xx_data.c          | 305 +++++++++++++++++++++
>  arch/arm/mach-omap2/omap_hwmod_common_data.h       |   1 -
>  12 files changed, 412 insertions(+), 64 deletions(-)
> 

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH 0/9]
@ 2014-06-25  1:00 ` Suman Anna
  0 siblings, 0 replies; 20+ messages in thread
From: Suman Anna @ 2014-06-25  1:00 UTC (permalink / raw)
  To: Tony Lindgren
  Cc: Paul Walmsley, Dave Gerlach, Jassi Brar, linux-omap,
	linux-arm-kernel, devicetree, Suman Anna

Hi Tony, Paul,

This patch series adds the minimal mailbox DT nodes for the SoCs that are
currently missing them (OMAP4, AM335x, DRA7). It also limits the legacy
mailbox platform device creation only for non-DT boot, and cleans up the
legacy hwmod addresses and attributes used for creating the sub-mailbox
devices. The sub-mailboxes in DT boot are not created until the OMAP
mailbox DT adoption series; this is not an issue, since some of the
other hwmod data required for using legacy-mode devices has already
been cleaned up.

The patches are based on 3.16-rc2. The series does not have any order
dependencies with the OMAP mailbox cleanup series [1], and can be applied
in any order. The following shows the boot logs on various OMAP platforms
with just these patches on top of 3.16-rc2:
  OMAP2 (SDP2430)           : http://slexy.org/view/s21gGdJxXP
  OMAP3 (BeagleXM)          : http://slexy.org/view/s2n8Pc83Rp
  OMAP4 (PandaBoard)        : http://slexy.org/view/s21StNWKPz
  OMAP5 (OMAP5 uEVM)        : http://slexy.org/view/s2y3t6HZtk
  DRA7  (DRA7 EVM)          : http://slexy.org/view/s2qY23Mt97
  AM33xx (BeagleBone Black) : http://slexy.org/view/s2ce8jj35O
  AM43xx (AM437x GP EVM)    : http://slexy.org/view/s2nttmOLSq

[1] http://marc.info/?l=linux-omap&m=140365705821115&w=2

Suman Anna (9):
  ARM: dts: OMAP4: Add mailbox node
  ARM: dts: AM33xx: Add mailbox node
  ARM: dts: AM4372: Correct mailbox node data
  ARM: dts: DRA7: Add mailbox nodes
  ARM: DRA7: hwmod_data: Add mailbox hwmod data
  ARM: OMAP2+: Avoid mailbox legacy device creation for DT-boot
  ARM: OMAP2: hwmod_data: Remove legacy mailbox data and addrs
  ARM: OMAP4: hwmod_data: Remove legacy mailbox addrs
  ARM: AM33xx: hwmod_data: Remove legacy mailbox addrs

 arch/arm/boot/dts/am33xx.dtsi                      |   7 +
 arch/arm/boot/dts/am4372.dtsi                      |   7 +-
 arch/arm/boot/dts/dra7.dtsi                        |  91 ++++++
 arch/arm/boot/dts/omap4.dtsi                       |   7 +
 arch/arm/mach-omap2/devices.c                      |   2 +-
 arch/arm/mach-omap2/omap_hwmod_2420_data.c         |  14 -
 arch/arm/mach-omap2/omap_hwmod_2430_data.c         |  13 -
 .../omap_hwmod_2xxx_3xxx_interconnect_data.c       |   9 -
 .../omap_hwmod_33xx_43xx_interconnect_data.c       |  10 -
 arch/arm/mach-omap2/omap_hwmod_44xx_data.c         |  10 -
 arch/arm/mach-omap2/omap_hwmod_7xx_data.c          | 305 +++++++++++++++++++++
 arch/arm/mach-omap2/omap_hwmod_common_data.h       |   1 -
 12 files changed, 412 insertions(+), 64 deletions(-)

-- 
2.0.0


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH 0/9]
@ 2014-06-25  1:00 ` Suman Anna
  0 siblings, 0 replies; 20+ messages in thread
From: Suman Anna @ 2014-06-25  1:00 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Tony, Paul,

This patch series adds the minimal mailbox DT nodes for the SoCs that are
currently missing them (OMAP4, AM335x, DRA7). It also limits the legacy
mailbox platform device creation only for non-DT boot, and cleans up the
legacy hwmod addresses and attributes used for creating the sub-mailbox
devices. The sub-mailboxes in DT boot are not created until the OMAP
mailbox DT adoption series; this is not an issue, since some of the
other hwmod data required for using legacy-mode devices has already
been cleaned up.

The patches are based on 3.16-rc2. The series does not have any order
dependencies with the OMAP mailbox cleanup series [1], and can be applied
in any order. The following shows the boot logs on various OMAP platforms
with just these patches on top of 3.16-rc2:
  OMAP2 (SDP2430)           : http://slexy.org/view/s21gGdJxXP
  OMAP3 (BeagleXM)          : http://slexy.org/view/s2n8Pc83Rp
  OMAP4 (PandaBoard)        : http://slexy.org/view/s21StNWKPz
  OMAP5 (OMAP5 uEVM)        : http://slexy.org/view/s2y3t6HZtk
  DRA7  (DRA7 EVM)          : http://slexy.org/view/s2qY23Mt97
  AM33xx (BeagleBone Black) : http://slexy.org/view/s2ce8jj35O
  AM43xx (AM437x GP EVM)    : http://slexy.org/view/s2nttmOLSq

[1] http://marc.info/?l=linux-omap&m=140365705821115&w=2

Suman Anna (9):
  ARM: dts: OMAP4: Add mailbox node
  ARM: dts: AM33xx: Add mailbox node
  ARM: dts: AM4372: Correct mailbox node data
  ARM: dts: DRA7: Add mailbox nodes
  ARM: DRA7: hwmod_data: Add mailbox hwmod data
  ARM: OMAP2+: Avoid mailbox legacy device creation for DT-boot
  ARM: OMAP2: hwmod_data: Remove legacy mailbox data and addrs
  ARM: OMAP4: hwmod_data: Remove legacy mailbox addrs
  ARM: AM33xx: hwmod_data: Remove legacy mailbox addrs

 arch/arm/boot/dts/am33xx.dtsi                      |   7 +
 arch/arm/boot/dts/am4372.dtsi                      |   7 +-
 arch/arm/boot/dts/dra7.dtsi                        |  91 ++++++
 arch/arm/boot/dts/omap4.dtsi                       |   7 +
 arch/arm/mach-omap2/devices.c                      |   2 +-
 arch/arm/mach-omap2/omap_hwmod_2420_data.c         |  14 -
 arch/arm/mach-omap2/omap_hwmod_2430_data.c         |  13 -
 .../omap_hwmod_2xxx_3xxx_interconnect_data.c       |   9 -
 .../omap_hwmod_33xx_43xx_interconnect_data.c       |  10 -
 arch/arm/mach-omap2/omap_hwmod_44xx_data.c         |  10 -
 arch/arm/mach-omap2/omap_hwmod_7xx_data.c          | 305 +++++++++++++++++++++
 arch/arm/mach-omap2/omap_hwmod_common_data.h       |   1 -
 12 files changed, 412 insertions(+), 64 deletions(-)

-- 
2.0.0


* [PATCH 0/9]
  2011-05-17 13:06 [RFC PATCH v3] Consolidate SRAM support Nori, Sekhar
@ 2011-05-17 21:41 ` Ben Gardiner
  0 siblings, 0 replies; 20+ messages in thread
From: Ben Gardiner @ 2011-05-17 21:41 UTC (permalink / raw)
  To: linux-arm-kernel

The davinci platforms currently map their I/O regions using iotables. This
patch series converts them to use ioremap() instead.

This series is based on-top-of '[RFC PATCH v3] Consolidate SRAM support' from
Russell King.

The first patch in the series is a squash of the necessary changes as
reported by Sekhar Nori in that thread.

The davinci SRAM init is first changed to ioremap() the regions specified
by each of the soc_infos; then the iotables are each removed; then the
SRAM_VIRT definition is removed. Finally, the da850's SRAM region is
changed from the ARM local RAM region to the shared RAM region. This
change is needed to support McASP ping-pong buffers on da850. Suspend was
tested with rtcwake and was found to work.
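
The shape of the iotable-to-ioremap conversion can be sketched as
follows. This is a sketch only: the field and function names approximate
the mach-davinci soc_info layout rather than quoting it, and nothing here
beyond ioremap() itself should be read as verbatim kernel API.

```c
/* Sketch: map the SRAM region described by per-SoC data at init time,
 * instead of relying on a fixed SRAM_VIRT iotable entry.  Names other
 * than ioremap() are approximations of the mach-davinci code.
 */
static void __iomem *sram_base;

static int __init davinci_sram_init(void)
{
	struct davinci_soc_info *info = &davinci_soc_info;

	if (!info->sram_dma || !info->sram_len)
		return -ENODEV;

	/* Previously: a static virtual address set up via an iotable.
	 * Now: map exactly the region this SoC's soc_info describes.
	 */
	sram_base = ioremap(info->sram_dma, info->sram_len);
	if (!sram_base)
		return -ENOMEM;

	return 0;
}
```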

Ben Gardiner (7):
  davinci: sram: ioremap the davinci_soc_info specified sram regions
  davinci: da850: remove the SRAM_VIRT iotable entry
  davinci: dm355: remove the SRAM_VIRT iotable entry
  davinci: dm365: remove the SRAM_VIRT iotable entry
  davinci: dm644x: remove the SRAM_VIRT iotable entry
  davinci: dm646x: remove the SRAM_VIRT iotable entry
  davinci: remove definition of SRAM_VIRT

Nori, Sekhar (1):
  davinci: pm: fix compiler errors and kernel panics from sram
    consolidation

Subhasish Ghosh (1):
  davinci: da850: changed SRAM allocator to shared ram.

 arch/arm/mach-davinci/da850.c               |   10 ++--------
 arch/arm/mach-davinci/dm355.c               |    6 ------
 arch/arm/mach-davinci/dm365.c               |    6 ------
 arch/arm/mach-davinci/dm644x.c              |    6 ------
 arch/arm/mach-davinci/dm646x.c              |    6 ------
 arch/arm/mach-davinci/include/mach/common.h |    2 --
 arch/arm/mach-davinci/include/mach/da8xx.h  |    1 +
 arch/arm/mach-davinci/pm.c                  |    2 +-
 arch/arm/mach-davinci/sleep.S               |    1 +
 arch/arm/mach-davinci/sram.c                |   12 ++++++++++--
 10 files changed, 15 insertions(+), 37 deletions(-)

-- 
1.7.4.1


end of thread, other threads:[~2017-04-26 18:43 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-04-04 16:16 [PATCH 0/9] Alex Elder
2013-04-04 16:18 ` [PATCH 1/9] ceph: use page_offset() in ceph_writepages_start() Alex Elder
2013-04-04 16:18 ` [PATCH 2/9] libceph: drop ceph_osd_request->r_con_filling_msg Alex Elder
2013-04-04 16:18 ` [PATCH 3/9] libceph: record length of bio list with bio Alex Elder
2013-04-04 16:19 ` [PATCH 4/9] libceph: record message data length Alex Elder
2013-04-04 18:34   ` [PATCH 4/9, v2] " Alex Elder
2013-04-04 16:19 ` [PATCH 5/9] libceph: don't build request in ceph_osdc_new_request() Alex Elder
2013-04-04 16:19 ` [PATCH 6/9] ceph: define ceph_writepages_osd_request() Alex Elder
2013-04-04 16:19 ` [PATCH 7/9] ceph: kill ceph alloc_page_vec() Alex Elder
2013-04-04 16:20 ` [PATCH 8/9] libceph: hold off building osd request Alex Elder
2013-04-04 16:20 ` [PATCH 9/9] ceph: build osd request message later for writepages Alex Elder
2013-04-05  3:03 ` [PATCH 0/9] Josh Durgin
2013-04-05 12:09   ` Alex Elder
  -- strict thread matches above, loose matches on Subject: below --
2017-04-14 21:06 kusumi.tomohiro
2017-04-26 18:43 ` Jens Axboe
2014-06-25  1:00 Suman Anna
2014-06-25  1:00 ` Suman Anna
2014-06-25  1:09 ` Suman Anna
2014-06-25  1:09   ` Suman Anna
2011-05-17 13:06 [RFC PATCH v3] Consolidate SRAM support Nori, Sekhar
2011-05-17 21:41 ` [PATCH 0/9] Ben Gardiner
