* [Cluster-devel] [GFS2 PATCH 0/4] jhead lookup using bios
@ 2018-10-16  4:07 Abhi Das
  2018-10-16  4:07 ` [Cluster-devel] [GFS2 PATCH 1/4] gfs2: add more timing info to journal recovery process Abhi Das
                   ` (5 more replies)
  0 siblings, 6 replies; 11+ messages in thread
From: Abhi Das @ 2018-10-16  4:07 UTC (permalink / raw)
  To: cluster-devel.redhat.com

This is my latest version of this patchset, based on input from Andreas
and Steve.
We read the journal ahead sequentially in large chunks using bios, with
pagecache pages from the journal inode's mapping used for the I/O.

There's also some cleanup of the bio functions with this patchset.

xfstests ran to completion with this.

Abhi Das (4):
  gfs2: add more timing info to journal recovery process
  gfs2: changes to gfs2_log_XXX_bio
  gfs2: add a helper function to get_log_header that can be used
    elsewhere
  gfs2: read journal in large chunks to locate the head

 fs/gfs2/bmap.c       |   8 +-
 fs/gfs2/glops.c      |   1 +
 fs/gfs2/log.c        |   4 +-
 fs/gfs2/lops.c       | 240 +++++++++++++++++++++++++++++++++++++++++++--------
 fs/gfs2/lops.h       |   4 +-
 fs/gfs2/ops_fstype.c |   1 +
 fs/gfs2/recovery.c   | 178 ++++++++------------------------------
 fs/gfs2/recovery.h   |   4 +-
 fs/gfs2/super.c      |   1 +
 9 files changed, 255 insertions(+), 186 deletions(-)

-- 
2.4.11



^ permalink raw reply	[flat|nested] 11+ messages in thread

* [Cluster-devel] [GFS2 PATCH 1/4] gfs2: add more timing info to journal recovery process
  2018-10-16  4:07 [Cluster-devel] [GFS2 PATCH 0/4] jhead lookup using bios Abhi Das
@ 2018-10-16  4:07 ` Abhi Das
  2018-10-16  9:05   ` Andreas Gruenbacher
  2018-10-16  4:07 ` [Cluster-devel] [GFS2 PATCH 2/4] gfs2: changes to gfs2_log_XXX_bio Abhi Das
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 11+ messages in thread
From: Abhi Das @ 2018-10-16  4:07 UTC (permalink / raw)
  To: cluster-devel.redhat.com

Report how many milliseconds map_journal_extents and find_jhead
take.

Signed-off-by: Abhi Das <adas@redhat.com>
---
 fs/gfs2/bmap.c     | 8 ++++++--
 fs/gfs2/recovery.c | 2 ++
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c
index 5f3ea07..aaf3682 100644
--- a/fs/gfs2/bmap.c
+++ b/fs/gfs2/bmap.c
@@ -14,6 +14,7 @@
 #include <linux/gfs2_ondisk.h>
 #include <linux/crc32.h>
 #include <linux/iomap.h>
+#include <linux/ktime.h>
 
 #include "gfs2.h"
 #include "incore.h"
@@ -2248,7 +2249,9 @@ int gfs2_map_journal_extents(struct gfs2_sbd *sdp, struct gfs2_jdesc *jd)
 	unsigned int shift = sdp->sd_sb.sb_bsize_shift;
 	u64 size;
 	int rc;
+	ktime_t start, end;
 
+	start = ktime_get();
 	lblock_stop = i_size_read(jd->jd_inode) >> shift;
 	size = (lblock_stop - lblock) << shift;
 	jd->nr_extents = 0;
@@ -2268,8 +2271,9 @@ int gfs2_map_journal_extents(struct gfs2_sbd *sdp, struct gfs2_jdesc *jd)
 		lblock += (bh.b_size >> ip->i_inode.i_blkbits);
 	} while(size > 0);
 
-	fs_info(sdp, "journal %d mapped with %u extents\n", jd->jd_jid,
-		jd->nr_extents);
+	end = ktime_get();
+	fs_info(sdp, "journal %d mapped with %u extents in %lldms\n", jd->jd_jid,
+		jd->nr_extents, ktime_ms_delta(end, start));
 	return 0;
 
 fail:
diff --git a/fs/gfs2/recovery.c b/fs/gfs2/recovery.c
index 0f501f9..b0717a0 100644
--- a/fs/gfs2/recovery.c
+++ b/fs/gfs2/recovery.c
@@ -460,6 +460,8 @@ void gfs2_recover_func(struct work_struct *work)
 	if (error)
 		goto fail_gunlock_ji;
 	t_jhd = ktime_get();
+	fs_info(sdp, "jid=%u: Journal head lookup took %lldms\n", jd->jd_jid,
+		ktime_ms_delta(t_jhd, t_jlck));
 
 	if (!(head.lh_flags & GFS2_LOG_HEAD_UNMOUNT)) {
 		fs_info(sdp, "jid=%u: Acquiring the transaction lock...\n",
-- 
2.4.11




* [Cluster-devel] [GFS2 PATCH 2/4] gfs2: changes to gfs2_log_XXX_bio
  2018-10-16  4:07 [Cluster-devel] [GFS2 PATCH 0/4] jhead lookup using bios Abhi Das
  2018-10-16  4:07 ` [Cluster-devel] [GFS2 PATCH 1/4] gfs2: add more timing info to journal recovery process Abhi Das
@ 2018-10-16  4:07 ` Abhi Das
  2018-10-16  4:07 ` [Cluster-devel] [GFS2 PATCH 3/4] gfs2: add a helper function to get_log_header that can be used elsewhere Abhi Das
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Abhi Das @ 2018-10-16  4:07 UTC (permalink / raw)
  To: cluster-devel.redhat.com

Change the gfs2_log_XXX_bio family of functions so they can be used
with different bios, not just sdp->sd_log_bio.

This patch also contains some clean up suggested by Andreas.

Signed-off-by: Abhi Das <adas@redhat.com>
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
---
 fs/gfs2/log.c  |  4 ++--
 fs/gfs2/lops.c | 73 +++++++++++++++++++++++++++++++---------------------------
 fs/gfs2/lops.h |  2 +-
 3 files changed, 42 insertions(+), 37 deletions(-)

diff --git a/fs/gfs2/log.c b/fs/gfs2/log.c
index 96706a2..93a94df 100644
--- a/fs/gfs2/log.c
+++ b/fs/gfs2/log.c
@@ -734,7 +734,7 @@ void gfs2_write_log_header(struct gfs2_sbd *sdp, struct gfs2_jdesc *jd,
 	lh->lh_crc = cpu_to_be32(crc);
 
 	gfs2_log_write(sdp, page, sb->s_blocksize, 0, addr);
-	gfs2_log_flush_bio(sdp, REQ_OP_WRITE, op_flags);
+	gfs2_log_submit_bio(&sdp->sd_log_bio, REQ_OP_WRITE, op_flags);
 	log_flush_wait(sdp);
 }
 
@@ -811,7 +811,7 @@ void gfs2_log_flush(struct gfs2_sbd *sdp, struct gfs2_glock *gl, u32 flags)
 
 	gfs2_ordered_write(sdp);
 	lops_before_commit(sdp, tr);
-	gfs2_log_flush_bio(sdp, REQ_OP_WRITE, 0);
+	gfs2_log_submit_bio(&sdp->sd_log_bio, REQ_OP_WRITE, 0);
 
 	if (sdp->sd_log_head != sdp->sd_log_flush_head) {
 		log_flush_wait(sdp);
diff --git a/fs/gfs2/lops.c b/fs/gfs2/lops.c
index 4c7069b..2295042 100644
--- a/fs/gfs2/lops.c
+++ b/fs/gfs2/lops.c
@@ -228,8 +228,8 @@ static void gfs2_end_log_write(struct bio *bio)
 }
 
 /**
- * gfs2_log_flush_bio - Submit any pending log bio
- * @sdp: The superblock
+ * gfs2_log_submit_bio - Submit any pending log bio
+ * @biop: Address of the bio pointer
  * @op: REQ_OP
  * @op_flags: req_flag_bits
  *
@@ -237,74 +237,78 @@ static void gfs2_end_log_write(struct bio *bio)
  * there is no pending bio, then this is a no-op.
  */
 
-void gfs2_log_flush_bio(struct gfs2_sbd *sdp, int op, int op_flags)
+void gfs2_log_submit_bio(struct bio **biop, int op, int op_flags)
 {
-	if (sdp->sd_log_bio) {
+	struct bio *bio = *biop;
+	if (bio) {
+		struct gfs2_sbd *sdp = bio->bi_private;
 		atomic_inc(&sdp->sd_log_in_flight);
-		bio_set_op_attrs(sdp->sd_log_bio, op, op_flags);
-		submit_bio(sdp->sd_log_bio);
-		sdp->sd_log_bio = NULL;
+		bio_set_op_attrs(bio, op, op_flags);
+		submit_bio(bio);
+		*biop = NULL;
 	}
 }
 
 /**
- * gfs2_log_alloc_bio - Allocate a new bio for log writing
- * @sdp: The superblock
- * @blkno: The next device block number we want to write to
+ * gfs2_log_alloc_bio - Allocate a bio
+ * @sdp: The super block
+ * @blkno: The device block number we want to write to
+ * @end_io: The bi_end_io callback
  *
- * This should never be called when there is a cached bio in the
- * super block. When it returns, there will be a cached bio in the
- * super block which will have as many bio_vecs as the device is
- * happy to handle.
+ * Allocate a new bio, initialize it with the given parameters and return it.
  *
- * Returns: Newly allocated bio
+ * Returns: The newly allocated bio
  */
 
-static struct bio *gfs2_log_alloc_bio(struct gfs2_sbd *sdp, u64 blkno)
+static struct bio *gfs2_log_alloc_bio(struct gfs2_sbd *sdp, u64 blkno,
+				      bio_end_io_t *end_io)
 {
 	struct super_block *sb = sdp->sd_vfs;
-	struct bio *bio;
+	struct bio *bio = bio_alloc(GFP_NOIO, BIO_MAX_PAGES);
 
-	BUG_ON(sdp->sd_log_bio);
-
-	bio = bio_alloc(GFP_NOIO, BIO_MAX_PAGES);
 	bio->bi_iter.bi_sector = blkno * (sb->s_blocksize >> 9);
 	bio_set_dev(bio, sb->s_bdev);
-	bio->bi_end_io = gfs2_end_log_write;
+	bio->bi_end_io = end_io;
 	bio->bi_private = sdp;
 
-	sdp->sd_log_bio = bio;
-
 	return bio;
 }
 
 /**
  * gfs2_log_get_bio - Get cached log bio, or allocate a new one
- * @sdp: The superblock
+ * @sdp: The super block
  * @blkno: The device block number we want to write to
+ * @bio: The bio to get or allocate
+ * @op: REQ_OP
+ * @end_io: The bi_end_io callback
+ * @flush: Always flush the current bio and allocate a new one?
  *
  * If there is a cached bio, then if the next block number is sequential
  * with the previous one, return it, otherwise flush the bio to the
- * device. If there is not a cached bio, or we just flushed it, then
+ * device. If there is no cached bio, or we just flushed it, then
  * allocate a new one.
  *
  * Returns: The bio to use for log writes
  */
 
-static struct bio *gfs2_log_get_bio(struct gfs2_sbd *sdp, u64 blkno)
+static struct bio *gfs2_log_get_bio(struct gfs2_sbd *sdp, u64 blkno,
+				    struct bio **biop, int op,
+				    bio_end_io_t *end_io, bool flush)
 {
-	struct bio *bio = sdp->sd_log_bio;
-	u64 nblk;
+	struct bio *bio = *biop;
 
 	if (bio) {
+		u64 nblk;
+
 		nblk = bio_end_sector(bio);
 		nblk >>= sdp->sd_fsb2bb_shift;
-		if (blkno == nblk)
+		if (blkno == nblk && !flush)
 			return bio;
-		gfs2_log_flush_bio(sdp, REQ_OP_WRITE, 0);
+		gfs2_log_submit_bio(biop, op, 0);
 	}
 
-	return gfs2_log_alloc_bio(sdp, blkno);
+	*biop = gfs2_log_alloc_bio(sdp, blkno, end_io);
+	return *biop;
 }
 
 /**
@@ -326,11 +330,12 @@ void gfs2_log_write(struct gfs2_sbd *sdp, struct page *page,
 	struct bio *bio;
 	int ret;
 
-	bio = gfs2_log_get_bio(sdp, blkno);
+	bio = gfs2_log_get_bio(sdp, blkno, &sdp->sd_log_bio, REQ_OP_WRITE,
+			       gfs2_end_log_write, false);
 	ret = bio_add_page(bio, page, size, offset);
 	if (ret == 0) {
-		gfs2_log_flush_bio(sdp, REQ_OP_WRITE, 0);
-		bio = gfs2_log_alloc_bio(sdp, blkno);
+		bio = gfs2_log_get_bio(sdp, blkno, &sdp->sd_log_bio,
+				       REQ_OP_WRITE, gfs2_end_log_write, true);
 		ret = bio_add_page(bio, page, size, offset);
 		WARN_ON(ret == 0);
 	}
diff --git a/fs/gfs2/lops.h b/fs/gfs2/lops.h
index e494939..711c4d8 100644
--- a/fs/gfs2/lops.h
+++ b/fs/gfs2/lops.h
@@ -30,7 +30,7 @@ extern u64 gfs2_log_bmap(struct gfs2_sbd *sdp);
 extern void gfs2_log_write(struct gfs2_sbd *sdp, struct page *page,
 			   unsigned size, unsigned offset, u64 blkno);
 extern void gfs2_log_write_page(struct gfs2_sbd *sdp, struct page *page);
-extern void gfs2_log_flush_bio(struct gfs2_sbd *sdp, int op, int op_flags);
+extern void gfs2_log_submit_bio(struct bio **biop, int op, int op_flags);
 extern void gfs2_pin(struct gfs2_sbd *sdp, struct buffer_head *bh);
 
 static inline unsigned int buf_limit(struct gfs2_sbd *sdp)
-- 
2.4.11




* [Cluster-devel] [GFS2 PATCH 3/4] gfs2: add a helper function to get_log_header that can be used elsewhere
  2018-10-16  4:07 [Cluster-devel] [GFS2 PATCH 0/4] jhead lookup using bios Abhi Das
  2018-10-16  4:07 ` [Cluster-devel] [GFS2 PATCH 1/4] gfs2: add more timing info to journal recovery process Abhi Das
  2018-10-16  4:07 ` [Cluster-devel] [GFS2 PATCH 2/4] gfs2: changes to gfs2_log_XXX_bio Abhi Das
@ 2018-10-16  4:07 ` Abhi Das
  2018-10-16  9:07   ` Andreas Gruenbacher
  2018-10-16  4:07 ` [Cluster-devel] [GFS2 PATCH 4/4] gfs2: read journal in large chunks to locate the head Abhi Das
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 11+ messages in thread
From: Abhi Das @ 2018-10-16  4:07 UTC (permalink / raw)
  To: cluster-devel.redhat.com

Move and re-order the error checks and hash/crc computations into a new
function, __get_log_header(), so it can be used where buffer_heads are
not being used for the log header.

Signed-off-by: Abhi Das <adas@redhat.com>
---
 fs/gfs2/recovery.c | 53 ++++++++++++++++++++++++++++++++---------------------
 fs/gfs2/recovery.h |  2 ++
 2 files changed, 34 insertions(+), 21 deletions(-)

diff --git a/fs/gfs2/recovery.c b/fs/gfs2/recovery.c
index b0717a0..2dac430 100644
--- a/fs/gfs2/recovery.c
+++ b/fs/gfs2/recovery.c
@@ -120,6 +120,35 @@ void gfs2_revoke_clean(struct gfs2_jdesc *jd)
 	}
 }
 
+int __get_log_header(struct gfs2_sbd *sdp, const struct gfs2_log_header *lh,
+		     unsigned int blkno, struct gfs2_log_header_host *head)
+{
+	u32 hash, crc;
+
+	if (lh->lh_header.mh_magic != cpu_to_be32(GFS2_MAGIC) ||
+	    lh->lh_header.mh_type != cpu_to_be32(GFS2_METATYPE_LH) ||
+	    (blkno && be32_to_cpu(lh->lh_blkno) != blkno))
+		return 1;
+
+	hash = crc32(~0, lh, LH_V1_SIZE - 4);
+	hash = ~crc32_le_shift(hash, 4); /* assume lh_hash is zero */
+
+	if (be32_to_cpu(lh->lh_hash) != hash)
+		return 1;
+
+	crc = crc32c(~0, (void *)lh + LH_V1_SIZE + 4,
+		     sdp->sd_sb.sb_bsize - LH_V1_SIZE - 4);
+
+	if ((lh->lh_crc != 0 && be32_to_cpu(lh->lh_crc) != crc))
+		return 1;
+
+	head->lh_sequence = be64_to_cpu(lh->lh_sequence);
+	head->lh_flags = be32_to_cpu(lh->lh_flags);
+	head->lh_tail = be32_to_cpu(lh->lh_tail);
+	head->lh_blkno = be32_to_cpu(lh->lh_blkno);
+
+	return 0;
+}
 /**
  * get_log_header - read the log header for a given segment
  * @jd: the journal
@@ -137,36 +166,18 @@ void gfs2_revoke_clean(struct gfs2_jdesc *jd)
 static int get_log_header(struct gfs2_jdesc *jd, unsigned int blk,
 			  struct gfs2_log_header_host *head)
 {
-	struct gfs2_log_header *lh;
+	struct gfs2_sbd *sdp = GFS2_SB(jd->jd_inode);
 	struct buffer_head *bh;
-	u32 hash, crc;
 	int error;
 
 	error = gfs2_replay_read_block(jd, blk, &bh);
 	if (error)
 		return error;
-	lh = (void *)bh->b_data;
-
-	hash = crc32(~0, lh, LH_V1_SIZE - 4);
-	hash = ~crc32_le_shift(hash, 4);  /* assume lh_hash is zero */
-
-	crc = crc32c(~0, (void *)lh + LH_V1_SIZE + 4,
-		     bh->b_size - LH_V1_SIZE - 4);
-
-	error = lh->lh_header.mh_magic != cpu_to_be32(GFS2_MAGIC) ||
-		lh->lh_header.mh_type != cpu_to_be32(GFS2_METATYPE_LH) ||
-		be32_to_cpu(lh->lh_blkno) != blk ||
-		be32_to_cpu(lh->lh_hash) != hash ||
-		(lh->lh_crc != 0 && be32_to_cpu(lh->lh_crc) != crc);
 
+	error = __get_log_header(sdp, (const struct gfs2_log_header *)bh->b_data,
+				 blk, head);
 	brelse(bh);
 
-	if (!error) {
-		head->lh_sequence = be64_to_cpu(lh->lh_sequence);
-		head->lh_flags = be32_to_cpu(lh->lh_flags);
-		head->lh_tail = be32_to_cpu(lh->lh_tail);
-		head->lh_blkno = be32_to_cpu(lh->lh_blkno);
-	}
 	return error;
 }
 
diff --git a/fs/gfs2/recovery.h b/fs/gfs2/recovery.h
index 11fdfab..943a67c 100644
--- a/fs/gfs2/recovery.h
+++ b/fs/gfs2/recovery.h
@@ -31,6 +31,8 @@ extern int gfs2_find_jhead(struct gfs2_jdesc *jd,
 		    struct gfs2_log_header_host *head);
 extern int gfs2_recover_journal(struct gfs2_jdesc *gfs2_jd, bool wait);
 extern void gfs2_recover_func(struct work_struct *work);
+extern int __get_log_header(struct gfs2_sbd *sdp, const struct gfs2_log_header *lh,
+			    unsigned int blkno, struct gfs2_log_header_host *head);
 
 #endif /* __RECOVERY_DOT_H__ */
 
-- 
2.4.11




* [Cluster-devel] [GFS2 PATCH 4/4] gfs2: read journal in large chunks to locate the head
  2018-10-16  4:07 [Cluster-devel] [GFS2 PATCH 0/4] jhead lookup using bios Abhi Das
                   ` (2 preceding siblings ...)
  2018-10-16  4:07 ` [Cluster-devel] [GFS2 PATCH 3/4] gfs2: add a helper function to get_log_header that can be used elsewhere Abhi Das
@ 2018-10-16  4:07 ` Abhi Das
  2018-10-17  9:43   ` Christoph Hellwig
  2018-10-17  9:32 ` [Cluster-devel] [GFS2 PATCH 0/4] jhead lookup using bios Steven Whitehouse
  2018-11-12 14:29 ` Bob Peterson
  5 siblings, 1 reply; 11+ messages in thread
From: Abhi Das @ 2018-10-16  4:07 UTC (permalink / raw)
  To: cluster-devel.redhat.com

Use bio(s) to read in the journal sequentially in large chunks and
locate the head of the journal.

Signed-off-by: Abhi Das <adas@redhat.com>
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
---
 fs/gfs2/glops.c      |   1 +
 fs/gfs2/lops.c       | 167 ++++++++++++++++++++++++++++++++++++++++++++++++++-
 fs/gfs2/lops.h       |   2 +
 fs/gfs2/ops_fstype.c |   1 +
 fs/gfs2/recovery.c   | 123 -------------------------------------
 fs/gfs2/recovery.h   |   2 -
 fs/gfs2/super.c      |   1 +
 7 files changed, 171 insertions(+), 126 deletions(-)

diff --git a/fs/gfs2/glops.c b/fs/gfs2/glops.c
index c63bee9..f79ef95 100644
--- a/fs/gfs2/glops.c
+++ b/fs/gfs2/glops.c
@@ -28,6 +28,7 @@
 #include "util.h"
 #include "trans.h"
 #include "dir.h"
+#include "lops.h"
 
 struct workqueue_struct *gfs2_freeze_wq;
 
diff --git a/fs/gfs2/lops.c b/fs/gfs2/lops.c
index 2295042..568b6cc 100644
--- a/fs/gfs2/lops.c
+++ b/fs/gfs2/lops.c
@@ -18,6 +18,7 @@
 #include <linux/fs.h>
 #include <linux/list_sort.h>
 
+#include "bmap.h"
 #include "dir.h"
 #include "gfs2.h"
 #include "incore.h"
@@ -193,7 +194,6 @@ static void gfs2_end_log_write_bh(struct gfs2_sbd *sdp, struct bio_vec *bvec,
 /**
  * gfs2_end_log_write - end of i/o to the log
  * @bio: The bio
- * @error: Status of i/o request
  *
  * Each bio_vec contains either data from the pagecache or data
  * relating to the log itself. Here we iterate over the bio_vec
@@ -375,6 +375,171 @@ void gfs2_log_write_page(struct gfs2_sbd *sdp, struct page *page)
 		       gfs2_log_bmap(sdp));
 }
 
+/**
+ * gfs2_end_log_read - end I/O callback for reads from the log
+ * @bio: The bio
+ *
+ * Simply unlock the pages in the bio. The main thread will wait on them and
+ * process them in order as necessary.
+ */
+
+static void gfs2_end_log_read(struct bio *bio)
+{
+	struct gfs2_sbd *sdp = bio->bi_private;
+	struct page *page;
+	struct bio_vec *bvec;
+	int i;
+
+	if (bio->bi_status)
+		fs_err(sdp, "Error %d reading from journal\n", bio->bi_status);
+
+	bio_for_each_segment_all(bvec, bio, i)
+		unlock_page(bvec->bv_page);
+
+	bio_put(bio);
+}
+
+/**
+ * gfs2_jhead_pg_srch - Look for the journal head in a given page.
+ * @jd: The journal descriptor
+ * @page: The page to look in
+ *
+ * Returns: true if found, false otherwise.
+ */
+
+static bool gfs2_jhead_pg_srch(struct gfs2_jdesc *jd,
+			      struct gfs2_log_header_host *head,
+			      struct page *page)
+{
+	struct gfs2_sbd *sdp = GFS2_SB(jd->jd_inode);
+	struct gfs2_log_header_host uninitialized_var(lh);
+	void *kaddr = kmap_atomic(page);
+	unsigned int offset;
+	bool ret = false;
+
+	for (offset = 0; offset < PAGE_SIZE; offset += sdp->sd_sb.sb_bsize) {
+		if (!__get_log_header(sdp, kaddr + offset, 0, &lh)) {
+			if (lh.lh_sequence > head->lh_sequence)
+				*head = lh;
+			else {
+				ret = true;
+				break;
+			}
+		}
+	}
+	kunmap_atomic(kaddr);
+	return ret;
+}
+
+/**
+ * gfs2_jhead_process_page - Search/cleanup a page
+ * @jd: The journal descriptor
+ * @index: Index of the page to look into
+ * @done: If set, perform only cleanup, else search and set if found.
+ *
+ * Find the page with 'index' in the journal's mapping. Search the page for
+ * the journal head if requested (*done == false). Release refs on the
+ * page so the page cache can reclaim it (put_page() twice). We grabbed a
+ * reference on this page two times, first when we did a find_or_create_page()
+ * to obtain the page to add it to the bio and second when we do a
+ * find_get_page() here to get the page to wait on while I/O on it is being
+ * completed.
+ * This function is also used to free up a page we might've grabbed but not
+ * used. Maybe we added it to a bio, but not submitted it for I/O. Or we
+ * submitted the I/O, but we already found the jhead so we only need to drop
+ * our references to the page.
+ */
+
+static void gfs2_jhead_process_page(struct gfs2_jdesc *jd, unsigned long index,
+				    struct gfs2_log_header_host *head,
+				    bool *done)
+{
+	struct page *page;
+
+	page = find_get_page(jd->jd_inode->i_mapping, index);
+	wait_on_page_locked(page);
+
+	if (!*done)
+		*done = gfs2_jhead_pg_srch(jd, head, page);
+
+	put_page(page); /* Once for find_get_page */
+	put_page(page); /* Once more for find_or_create_page */
+}
+
+/**
+ * gfs2_find_jhead - find the head of a log
+ * @jd: The journal descriptor
+ * @head: The log descriptor for the head of the log is returned here
+ *
+ * Do a search of a journal by reading it in large chunks using bios and find
+ * the valid log entry with the highest sequence number.  (i.e. the log head)
+ *
+ * Returns: 0 on success, errno otherwise
+ */
+
+int gfs2_find_jhead(struct gfs2_jdesc *jd, struct gfs2_log_header_host *head)
+{
+	struct gfs2_sbd *sdp = GFS2_SB(jd->jd_inode);
+	struct gfs2_journal_extent *je;
+	u32 block, read_idx = 0, submit_idx = 0, index = 0;
+	int shift = PAGE_SHIFT - sdp->sd_sb.sb_bsize_shift;
+	int blocks_per_page = 1 << shift, sz, ret = 0;
+	struct bio *bio = NULL;
+	struct page *page;
+	bool done = false;
+
+	memset(head, 0, sizeof(*head));
+	if (list_empty(&jd->extent_list))
+		gfs2_map_journal_extents(sdp, jd);
+
+	list_for_each_entry(je, &jd->extent_list, list) {
+		for (block = 0; block < je->blocks; block += blocks_per_page) {
+			index = (je->lblock + block) >> shift;
+
+			page = find_or_create_page(jd->jd_inode->i_mapping,
+						   index, GFP_NOFS);
+			if (!page) {
+				ret = -ENOMEM;
+				done = true;
+				goto out;
+			}
+
+			if (bio) {
+				sz = bio_add_page(bio, page, PAGE_SIZE, 0);
+				if (sz == PAGE_SIZE)
+					goto page_added;
+				submit_idx = index;
+				submit_bio(bio);
+				bio = NULL;
+			}
+
+			bio = gfs2_log_alloc_bio(sdp,
+						 je->dblock + (index << shift),
+						 gfs2_end_log_read);
+			bio_set_op_attrs(bio, REQ_OP_READ, 0);
+			sz = bio_add_page(bio, page, PAGE_SIZE, 0);
+			gfs2_assert_warn(sdp, sz == PAGE_SIZE);
+
+page_added:
+			if (submit_idx <= read_idx + BIO_MAX_PAGES) {
+				/* Keep at least one bio in flight */
+				continue;
+			}
+
+			gfs2_jhead_process_page(jd, read_idx++, head, &done);
+			if (done)
+				goto out;  /* found */
+		}
+	}
+
+out:
+	if (bio)
+		submit_bio(bio);
+	while (read_idx <= index)
+		gfs2_jhead_process_page(jd, read_idx++, head, &done);
+	return ret;
+}
+
 static struct page *gfs2_get_log_desc(struct gfs2_sbd *sdp, u32 ld_type,
 				      u32 ld_length, u32 ld_data1)
 {
diff --git a/fs/gfs2/lops.h b/fs/gfs2/lops.h
index 711c4d8..6a4714b 100644
--- a/fs/gfs2/lops.h
+++ b/fs/gfs2/lops.h
@@ -32,6 +32,8 @@ extern void gfs2_log_write(struct gfs2_sbd *sdp, struct page *page,
 extern void gfs2_log_write_page(struct gfs2_sbd *sdp, struct page *page);
 extern void gfs2_log_submit_bio(struct bio **biop, int op, int op_flags);
 extern void gfs2_pin(struct gfs2_sbd *sdp, struct buffer_head *bh);
+extern int gfs2_find_jhead(struct gfs2_jdesc *jd,
+			   struct gfs2_log_header_host *head);
 
 static inline unsigned int buf_limit(struct gfs2_sbd *sdp)
 {
diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
index 4ec69d9..ae3ee51 100644
--- a/fs/gfs2/ops_fstype.c
+++ b/fs/gfs2/ops_fstype.c
@@ -41,6 +41,7 @@
 #include "dir.h"
 #include "meta_io.h"
 #include "trace_gfs2.h"
+#include "lops.h"
 
 #define DO 0
 #define UNDO 1
diff --git a/fs/gfs2/recovery.c b/fs/gfs2/recovery.c
index 2dac430..7389e44 100644
--- a/fs/gfs2/recovery.c
+++ b/fs/gfs2/recovery.c
@@ -182,129 +182,6 @@ static int get_log_header(struct gfs2_jdesc *jd, unsigned int blk,
 }
 
 /**
- * find_good_lh - find a good log header
- * @jd: the journal
- * @blk: the segment to start searching from
- * @lh: the log header to fill in
- * @forward: if true search forward in the log, else search backward
- *
- * Call get_log_header() to get a log header for a segment, but if the
- * segment is bad, either scan forward or backward until we find a good one.
- *
- * Returns: errno
- */
-
-static int find_good_lh(struct gfs2_jdesc *jd, unsigned int *blk,
-			struct gfs2_log_header_host *head)
-{
-	unsigned int orig_blk = *blk;
-	int error;
-
-	for (;;) {
-		error = get_log_header(jd, *blk, head);
-		if (error <= 0)
-			return error;
-
-		if (++*blk == jd->jd_blocks)
-			*blk = 0;
-
-		if (*blk == orig_blk) {
-			gfs2_consist_inode(GFS2_I(jd->jd_inode));
-			return -EIO;
-		}
-	}
-}
-
-/**
- * jhead_scan - make sure we've found the head of the log
- * @jd: the journal
- * @head: this is filled in with the log descriptor of the head
- *
- * At this point, seg and lh should be either the head of the log or just
- * before.  Scan forward until we find the head.
- *
- * Returns: errno
- */
-
-static int jhead_scan(struct gfs2_jdesc *jd, struct gfs2_log_header_host *head)
-{
-	unsigned int blk = head->lh_blkno;
-	struct gfs2_log_header_host lh;
-	int error;
-
-	for (;;) {
-		if (++blk == jd->jd_blocks)
-			blk = 0;
-
-		error = get_log_header(jd, blk, &lh);
-		if (error < 0)
-			return error;
-		if (error == 1)
-			continue;
-
-		if (lh.lh_sequence == head->lh_sequence) {
-			gfs2_consist_inode(GFS2_I(jd->jd_inode));
-			return -EIO;
-		}
-		if (lh.lh_sequence < head->lh_sequence)
-			break;
-
-		*head = lh;
-	}
-
-	return 0;
-}
-
-/**
- * gfs2_find_jhead - find the head of a log
- * @jd: the journal
- * @head: the log descriptor for the head of the log is returned here
- *
- * Do a binary search of a journal and find the valid log entry with the
- * highest sequence number.  (i.e. the log head)
- *
- * Returns: errno
- */
-
-int gfs2_find_jhead(struct gfs2_jdesc *jd, struct gfs2_log_header_host *head)
-{
-	struct gfs2_log_header_host lh_1, lh_m;
-	u32 blk_1, blk_2, blk_m;
-	int error;
-
-	blk_1 = 0;
-	blk_2 = jd->jd_blocks - 1;
-
-	for (;;) {
-		blk_m = (blk_1 + blk_2) / 2;
-
-		error = find_good_lh(jd, &blk_1, &lh_1);
-		if (error)
-			return error;
-
-		error = find_good_lh(jd, &blk_m, &lh_m);
-		if (error)
-			return error;
-
-		if (blk_1 == blk_m || blk_m == blk_2)
-			break;
-
-		if (lh_1.lh_sequence <= lh_m.lh_sequence)
-			blk_1 = blk_m;
-		else
-			blk_2 = blk_m;
-	}
-
-	error = jhead_scan(jd, &lh_1);
-	if (error)
-		return error;
-
-	*head = lh_1;
-
-	return error;
-}
-
-/**
  * foreach_descriptor - go through the active part of the log
  * @jd: the journal
  * @start: the first log header in the active region
diff --git a/fs/gfs2/recovery.h b/fs/gfs2/recovery.h
index 943a67c..4d00a92 100644
--- a/fs/gfs2/recovery.h
+++ b/fs/gfs2/recovery.h
@@ -27,8 +27,6 @@ extern int gfs2_revoke_add(struct gfs2_jdesc *jd, u64 blkno, unsigned int where)
 extern int gfs2_revoke_check(struct gfs2_jdesc *jd, u64 blkno, unsigned int where);
 extern void gfs2_revoke_clean(struct gfs2_jdesc *jd);
 
-extern int gfs2_find_jhead(struct gfs2_jdesc *jd,
-		    struct gfs2_log_header_host *head);
 extern int gfs2_recover_journal(struct gfs2_jdesc *gfs2_jd, bool wait);
 extern void gfs2_recover_func(struct work_struct *work);
 extern int __get_log_header(struct gfs2_sbd *sdp, const struct gfs2_log_header *lh,
diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
index a971862..ae38ba7 100644
--- a/fs/gfs2/super.c
+++ b/fs/gfs2/super.c
@@ -45,6 +45,7 @@
 #include "util.h"
 #include "sys.h"
 #include "xattr.h"
+#include "lops.h"
 
 #define args_neq(a1, a2, x) ((a1)->ar_##x != (a2)->ar_##x)
 
-- 
2.4.11




* [Cluster-devel] [GFS2 PATCH 1/4] gfs2: add more timing info to journal recovery process
  2018-10-16  4:07 ` [Cluster-devel] [GFS2 PATCH 1/4] gfs2: add more timing info to journal recovery process Abhi Das
@ 2018-10-16  9:05   ` Andreas Gruenbacher
  0 siblings, 0 replies; 11+ messages in thread
From: Andreas Gruenbacher @ 2018-10-16  9:05 UTC (permalink / raw)
  To: cluster-devel.redhat.com

On Tue, 16 Oct 2018 at 06:23, Abhi Das <adas@redhat.com> wrote:
>
> Tells you how many milliseconds map_journal_extents and find_jhead
> take.
>
> Signed-off-by: Abhi Das <adas@redhat.com>

Reviewed-by: Andreas Gruenbacher <agruenba@redhat.com>

Thanks,
Andreas

> ---
>  fs/gfs2/bmap.c     | 8 ++++++--
>  fs/gfs2/recovery.c | 2 ++
>  2 files changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c
> index 5f3ea07..aaf3682 100644
> --- a/fs/gfs2/bmap.c
> +++ b/fs/gfs2/bmap.c
> @@ -14,6 +14,7 @@
>  #include <linux/gfs2_ondisk.h>
>  #include <linux/crc32.h>
>  #include <linux/iomap.h>
> +#include <linux/ktime.h>
>
>  #include "gfs2.h"
>  #include "incore.h"
> @@ -2248,7 +2249,9 @@ int gfs2_map_journal_extents(struct gfs2_sbd *sdp, struct gfs2_jdesc *jd)
>         unsigned int shift = sdp->sd_sb.sb_bsize_shift;
>         u64 size;
>         int rc;
> +       ktime_t start, end;
>
> +       start = ktime_get();
>         lblock_stop = i_size_read(jd->jd_inode) >> shift;
>         size = (lblock_stop - lblock) << shift;
>         jd->nr_extents = 0;
> @@ -2268,8 +2271,9 @@ int gfs2_map_journal_extents(struct gfs2_sbd *sdp, struct gfs2_jdesc *jd)
>                 lblock += (bh.b_size >> ip->i_inode.i_blkbits);
>         } while(size > 0);
>
> -       fs_info(sdp, "journal %d mapped with %u extents\n", jd->jd_jid,
> -               jd->nr_extents);
> +       end = ktime_get();
> +       fs_info(sdp, "journal %d mapped with %u extents in %lldms\n", jd->jd_jid,
> +               jd->nr_extents, ktime_ms_delta(end, start));
>         return 0;
>
>  fail:
> diff --git a/fs/gfs2/recovery.c b/fs/gfs2/recovery.c
> index 0f501f9..b0717a0 100644
> --- a/fs/gfs2/recovery.c
> +++ b/fs/gfs2/recovery.c
> @@ -460,6 +460,8 @@ void gfs2_recover_func(struct work_struct *work)
>         if (error)
>                 goto fail_gunlock_ji;
>         t_jhd = ktime_get();
> +       fs_info(sdp, "jid=%u: Journal head lookup took %lldms\n", jd->jd_jid,
> +               ktime_ms_delta(t_jhd, t_jlck));
>
>         if (!(head.lh_flags & GFS2_LOG_HEAD_UNMOUNT)) {
>                 fs_info(sdp, "jid=%u: Acquiring the transaction lock...\n",
> --
> 2.4.11
>




* [Cluster-devel] [GFS2 PATCH 3/4] gfs2: add a helper function to get_log_header that can be used elsewhere
  2018-10-16  4:07 ` [Cluster-devel] [GFS2 PATCH 3/4] gfs2: add a helper function to get_log_header that can be used elsewhere Abhi Das
@ 2018-10-16  9:07   ` Andreas Gruenbacher
  0 siblings, 0 replies; 11+ messages in thread
From: Andreas Gruenbacher @ 2018-10-16  9:07 UTC (permalink / raw)
  To: cluster-devel.redhat.com

On Tue, 16 Oct 2018 at 06:23, Abhi Das <adas@redhat.com> wrote:
> Move and re-order the error checks and hash/crc computations into another
> function __get_log_header() so it can be used in scenarios where buffer_heads
> are not being used for the log header.
>
> Signed-off-by: Abhi Das <adas@redhat.com>

Reviewed-by: Andreas Gruenbacher <agruenba@redhat.com>

Thanks,
Andreas

> ---
>  fs/gfs2/recovery.c | 53 ++++++++++++++++++++++++++++++++---------------------
>  fs/gfs2/recovery.h |  2 ++
>  2 files changed, 34 insertions(+), 21 deletions(-)
>
> diff --git a/fs/gfs2/recovery.c b/fs/gfs2/recovery.c
> index b0717a0..2dac430 100644
> --- a/fs/gfs2/recovery.c
> +++ b/fs/gfs2/recovery.c
> @@ -120,6 +120,35 @@ void gfs2_revoke_clean(struct gfs2_jdesc *jd)
>         }
>  }
>
> +int __get_log_header(struct gfs2_sbd *sdp, const struct gfs2_log_header *lh,
> +                    unsigned int blkno, struct gfs2_log_header_host *head)
> +{
> +       u32 hash, crc;
> +
> +       if (lh->lh_header.mh_magic != cpu_to_be32(GFS2_MAGIC) ||
> +           lh->lh_header.mh_type != cpu_to_be32(GFS2_METATYPE_LH) ||
> +           (blkno && be32_to_cpu(lh->lh_blkno) != blkno))
> +               return 1;
> +
> +       hash = crc32(~0, lh, LH_V1_SIZE - 4);
> +       hash = ~crc32_le_shift(hash, 4); /* assume lh_hash is zero */
> +
> +       if (be32_to_cpu(lh->lh_hash) != hash)
> +               return 1;
> +
> +       crc = crc32c(~0, (void *)lh + LH_V1_SIZE + 4,
> +                    sdp->sd_sb.sb_bsize - LH_V1_SIZE - 4);
> +
> +       if ((lh->lh_crc != 0 && be32_to_cpu(lh->lh_crc) != crc))
> +               return 1;
> +
> +       head->lh_sequence = be64_to_cpu(lh->lh_sequence);
> +       head->lh_flags = be32_to_cpu(lh->lh_flags);
> +       head->lh_tail = be32_to_cpu(lh->lh_tail);
> +       head->lh_blkno = be32_to_cpu(lh->lh_blkno);
> +
> +       return 0;
> +}
>  /**
>   * get_log_header - read the log header for a given segment
>   * @jd: the journal
> @@ -137,36 +166,18 @@ void gfs2_revoke_clean(struct gfs2_jdesc *jd)
>  static int get_log_header(struct gfs2_jdesc *jd, unsigned int blk,
>                           struct gfs2_log_header_host *head)
>  {
> -       struct gfs2_log_header *lh;
> +       struct gfs2_sbd *sdp = GFS2_SB(jd->jd_inode);
>         struct buffer_head *bh;
> -       u32 hash, crc;
>         int error;
>
>         error = gfs2_replay_read_block(jd, blk, &bh);
>         if (error)
>                 return error;
> -       lh = (void *)bh->b_data;
> -
> -       hash = crc32(~0, lh, LH_V1_SIZE - 4);
> -       hash = ~crc32_le_shift(hash, 4);  /* assume lh_hash is zero */
> -
> -       crc = crc32c(~0, (void *)lh + LH_V1_SIZE + 4,
> -                    bh->b_size - LH_V1_SIZE - 4);
> -
> -       error = lh->lh_header.mh_magic != cpu_to_be32(GFS2_MAGIC) ||
> -               lh->lh_header.mh_type != cpu_to_be32(GFS2_METATYPE_LH) ||
> -               be32_to_cpu(lh->lh_blkno) != blk ||
> -               be32_to_cpu(lh->lh_hash) != hash ||
> -               (lh->lh_crc != 0 && be32_to_cpu(lh->lh_crc) != crc);
>
> +       error = __get_log_header(sdp, (const struct gfs2_log_header *)bh->b_data,
> +                                blk, head);
>         brelse(bh);
>
> -       if (!error) {
> -               head->lh_sequence = be64_to_cpu(lh->lh_sequence);
> -               head->lh_flags = be32_to_cpu(lh->lh_flags);
> -               head->lh_tail = be32_to_cpu(lh->lh_tail);
> -               head->lh_blkno = be32_to_cpu(lh->lh_blkno);
> -       }
>         return error;
>  }
>
> diff --git a/fs/gfs2/recovery.h b/fs/gfs2/recovery.h
> index 11fdfab..943a67c 100644
> --- a/fs/gfs2/recovery.h
> +++ b/fs/gfs2/recovery.h
> @@ -31,6 +31,8 @@ extern int gfs2_find_jhead(struct gfs2_jdesc *jd,
>                     struct gfs2_log_header_host *head);
>  extern int gfs2_recover_journal(struct gfs2_jdesc *gfs2_jd, bool wait);
>  extern void gfs2_recover_func(struct work_struct *work);
> +extern int __get_log_header(struct gfs2_sbd *sdp, const struct gfs2_log_header *lh,
> +                           unsigned int blkno, struct gfs2_log_header_host *head);
>
>  #endif /* __RECOVERY_DOT_H__ */
>
> --
> 2.4.11
>



^ permalink raw reply	[flat|nested] 11+ messages in thread

* [Cluster-devel] [GFS2 PATCH 0/4] jhead lookup using bios
  2018-10-16  4:07 [Cluster-devel] [GFS2 PATCH 0/4] jhead lookup using bios Abhi Das
                   ` (3 preceding siblings ...)
  2018-10-16  4:07 ` [Cluster-devel] [GFS2 PATCH 4/4] gfs2: read journal in large chunks to locate the head Abhi Das
@ 2018-10-17  9:32 ` Steven Whitehouse
  2018-11-12 14:29 ` Bob Peterson
  5 siblings, 0 replies; 11+ messages in thread
From: Steven Whitehouse @ 2018-10-17  9:32 UTC (permalink / raw)
  To: cluster-devel.redhat.com

Hi,

This all looks good to me, so now we just need lots of testing. Are you 
still seeing good speed ups vs the current code?

Steve.


On 16/10/18 05:07, Abhi Das wrote:
> This is my latest version of this patchset based on inputs from Andreas
> and Steve.
> We readahead the journal sequentially in large chunks using bios. Pagecache
> pages for the journal inode's mapping are used for the I/O.
>
> There's also some cleanup of the bio functions with this patchset.
>
> xfstests ran to completion with this.
>
> Abhi Das (4):
>    gfs2: add more timing info to journal recovery process
>    gfs2: changes to gfs2_log_XXX_bio
>    gfs2: add a helper function to get_log_header that can be used
>      elsewhere
>    gfs2: read journal in large chunks to locate the head
>
>   fs/gfs2/bmap.c       |   8 +-
>   fs/gfs2/glops.c      |   1 +
>   fs/gfs2/log.c        |   4 +-
>   fs/gfs2/lops.c       | 240 +++++++++++++++++++++++++++++++++++++++++++--------
>   fs/gfs2/lops.h       |   4 +-
>   fs/gfs2/ops_fstype.c |   1 +
>   fs/gfs2/recovery.c   | 178 ++++++++------------------------------
>   fs/gfs2/recovery.h   |   4 +-
>   fs/gfs2/super.c      |   1 +
>   9 files changed, 255 insertions(+), 186 deletions(-)
>




* [Cluster-devel] [GFS2 PATCH 4/4] gfs2: read journal in large chunks to locate the head
  2018-10-16  4:07 ` [Cluster-devel] [GFS2 PATCH 4/4] gfs2: read journal in large chunks to locate the head Abhi Das
@ 2018-10-17  9:43   ` Christoph Hellwig
  2018-10-17 15:19     ` Abhijith Das
  0 siblings, 1 reply; 11+ messages in thread
From: Christoph Hellwig @ 2018-10-17  9:43 UTC (permalink / raw)
  To: cluster-devel.redhat.com

> +/**
> + * gfs2_end_log_read - end I/O callback for reads from the log
> + * @bio: The bio
> + *
> + * Simply unlock the pages in the bio. The main thread will wait on them and
> + * process them in order as necessary.
> + */
> +
> +static void gfs2_end_log_read(struct bio *bio)
> +{
> +	struct gfs2_sbd *sdp = bio->bi_private;
> +	struct page *page;
> +	struct bio_vec *bvec;
> +	int i;
> +
> +	if (bio->bi_status)
> +		fs_err(sdp, "Error %d reading from journal\n", bio->bi_status);

How is error handling for this going to work?

> +			bio_set_op_attrs(bio, REQ_OP_READ, 0);

Please assign to bi_opf directly instead of using bio_set_op_attrs.




* [Cluster-devel] [GFS2 PATCH 4/4] gfs2: read journal in large chunks to locate the head
  2018-10-17  9:43   ` Christoph Hellwig
@ 2018-10-17 15:19     ` Abhijith Das
  0 siblings, 0 replies; 11+ messages in thread
From: Abhijith Das @ 2018-10-17 15:19 UTC (permalink / raw)
  To: cluster-devel.redhat.com

On Wed, Oct 17, 2018 at 4:43 AM Christoph Hellwig <hch@infradead.org> wrote:

> > +/**
> > + * gfs2_end_log_read - end I/O callback for reads from the log
> > + * @bio: The bio
> > + *
> > + * Simply unlock the pages in the bio. The main thread will wait on
> them and
> > + * process them in order as necessary.
> > + */
> > +
> > +static void gfs2_end_log_read(struct bio *bio)
> > +{
> > +     struct gfs2_sbd *sdp = bio->bi_private;
> > +     struct page *page;
> > +     struct bio_vec *bvec;
> > +     int i;
> > +
> > +     if (bio->bi_status)
> > +             fs_err(sdp, "Error %d reading from journal\n",
> bio->bi_status);
>
> How is error handling for this going to work?
>

It's not. I'll post an updated version.


>
> > +                     bio_set_op_attrs(bio, REQ_OP_READ, 0);
>
> Please assign to bi_opf directly instead of using bio_set_op_attrs.
>

Yes. Didn't realize this was deprecated. There's another place where this
is used. The updated patch will address this.
Thanks for the review!

Cheers!
--Abhi


* [Cluster-devel] [GFS2 PATCH 0/4] jhead lookup using bios
  2018-10-16  4:07 [Cluster-devel] [GFS2 PATCH 0/4] jhead lookup using bios Abhi Das
                   ` (4 preceding siblings ...)
  2018-10-17  9:32 ` [Cluster-devel] [GFS2 PATCH 0/4] jhead lookup using bios Steven Whitehouse
@ 2018-11-12 14:29 ` Bob Peterson
  5 siblings, 0 replies; 11+ messages in thread
From: Bob Peterson @ 2018-11-12 14:29 UTC (permalink / raw)
  To: cluster-devel.redhat.com

----- Original Message -----
> This is my latest version of this patchset based on inputs from Andreas
> and Steve.
> We readahead the journal sequentially in large chunks using bios. Pagecache
> pages for the journal inode's mapping are used for the I/O.
> 
> There's also some cleanup of the bio functions with this patchset.
> 
> xfstests ran to completion with this.
> 
> Abhi Das (4):
>   gfs2: add more timing info to journal recovery process
>   gfs2: changes to gfs2_log_XXX_bio
>   gfs2: add a helper function to get_log_header that can be used
>     elsewhere
>   gfs2: read journal in large chunks to locate the head
Hi,

Thanks. This patch set is now pushed to the for-next branch of the linux-gfs2 tree:

https://git.kernel.org/pub/scm/linux/kernel/git/gfs2/linux-gfs2.git/commit/fs/gfs2?h=for-next&id=cc584ff3f572370af8e1ee85883db8280ed10965
https://git.kernel.org/pub/scm/linux/kernel/git/gfs2/linux-gfs2.git/commit/fs/gfs2?h=for-next&id=5cc3cc8c5363d34098c67b31d688189a69225eba
https://git.kernel.org/pub/scm/linux/kernel/git/gfs2/linux-gfs2.git/commit/fs/gfs2?h=for-next&id=0bb829823eab09efa001c9f35cc2763ea19d8055
https://git.kernel.org/pub/scm/linux/kernel/git/gfs2/linux-gfs2.git/commit/fs/gfs2?h=for-next&id=8d52c6496a3d07fc784fe21dbbc3b48658b283e2

Regards,

Bob Peterson
Red Hat File Systems




end of thread, other threads:[~2018-11-12 14:29 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-10-16  4:07 [Cluster-devel] [GFS2 PATCH 0/4] jhead lookup using bios Abhi Das
2018-10-16  4:07 ` [Cluster-devel] [GFS2 PATCH 1/4] gfs2: add more timing info to journal recovery process Abhi Das
2018-10-16  9:05   ` Andreas Gruenbacher
2018-10-16  4:07 ` [Cluster-devel] [GFS2 PATCH 2/4] gfs2: changes to gfs2_log_XXX_bio Abhi Das
2018-10-16  4:07 ` [Cluster-devel] [GFS2 PATCH 3/4] gfs2: add a helper function to get_log_header that can be used elsewhere Abhi Das
2018-10-16  9:07   ` Andreas Gruenbacher
2018-10-16  4:07 ` [Cluster-devel] [GFS2 PATCH 4/4] gfs2: read journal in large chunks to locate the head Abhi Das
2018-10-17  9:43   ` Christoph Hellwig
2018-10-17 15:19     ` Abhijith Das
2018-10-17  9:32 ` [Cluster-devel] [GFS2 PATCH 0/4] jhead lookup using bios Steven Whitehouse
2018-11-12 14:29 ` Bob Peterson
