All of lore.kernel.org
* [RFC] [PATCHv5 0/4] reiser4: discard support: initial implementation, refactored.
@ 2014-06-20 20:39 Ivan Shapovalov
  2014-06-20 20:39 ` [RFC] [PATCHv5 1/4] reiser4: make space_allocator's check_blocks() reusable Ivan Shapovalov
                   ` (4 more replies)
  0 siblings, 5 replies; 9+ messages in thread
From: Ivan Shapovalov @ 2014-06-20 20:39 UTC (permalink / raw)
  To: reiserfs-devel; +Cc: edward.shishkin, Ivan Shapovalov

v1: - initial implementation (patches 1, 2)

v2: - cleanup, fixes discovered in debug mode
    - saner logging
    - assertions
    - enablement of discard through mount option

v3: - fixed the extent merge loop in discard_atom()

v4: - squashed fix-ups into the main patch (with the exception of reiser4_debug())
    - fixed bug in usage of division ops discovered while building on ARM

v5: - squashed mount option into the main patch
    - refactored based on discussion (see commit msg)
      - split off the blocknr_list code
      - replaced ->discard_set with ->delete_set and ->aux_delete_set

Ivan Shapovalov (4):
  reiser4: make space_allocator's check_blocks() reusable.
  reiser4: add an implementation of "block lists", split off from the discard code.
  reiser4: add reiser4_debug(): a conditional equivalent of reiser4_log().
  reiser4: discard support: initial implementation using linked lists.

 fs/reiser4/Makefile                       |   2 +
 fs/reiser4/block_alloc.c                  |  49 ++---
 fs/reiser4/block_alloc.h                  |  14 +-
 fs/reiser4/blocknrlist.c                  | 315 ++++++++++++++++++++++++++++++
 fs/reiser4/debug.h                        |   4 +
 fs/reiser4/dformat.h                      |   2 +
 fs/reiser4/discard.c                      | 247 +++++++++++++++++++++++
 fs/reiser4/discard.h                      |  31 +++
 fs/reiser4/forward.h                      |   1 +
 fs/reiser4/init_super.c                   |   2 +
 fs/reiser4/plugin/space/bitmap.c          |  84 +++++---
 fs/reiser4/plugin/space/bitmap.h          |   2 +-
 fs/reiser4/plugin/space/space_allocator.h |   4 +-
 fs/reiser4/super.h                        |   4 +-
 fs/reiser4/txnmgr.c                       | 125 +++++++++++-
 fs/reiser4/txnmgr.h                       |  63 +++++-
 fs/reiser4/znode.c                        |   9 +-
 17 files changed, 884 insertions(+), 74 deletions(-)
 create mode 100644 fs/reiser4/blocknrlist.c
 create mode 100644 fs/reiser4/discard.c
 create mode 100644 fs/reiser4/discard.h

-- 
2.0.0


^ permalink raw reply	[flat|nested] 9+ messages in thread

* [RFC] [PATCHv5 1/4] reiser4: make space_allocator's check_blocks() reusable.
  2014-06-20 20:39 [RFC] [PATCHv5 0/4] reiser4: discard support: initial implementation, refactored Ivan Shapovalov
@ 2014-06-20 20:39 ` Ivan Shapovalov
  2014-06-20 20:39 ` [RFC] [PATCHv5 2/4] reiser4: add an implementation of "block lists", split off from the discard code Ivan Shapovalov
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Ivan Shapovalov @ 2014-06-20 20:39 UTC (permalink / raw)
  To: reiserfs-devel; +Cc: edward.shishkin, Ivan Shapovalov

Make check_blocks() return a boolean value (whether the extent's state
matched our expectations) instead of asserting success and crashing the
system otherwise.
Also make it possible to check extents spanning multiple bitmap blocks.

The only user of reiser4_check_block() in its previous form has been
updated to assert on a true return value.

Thus check_blocks() can now be reused by various parts of reiser4, e.g.
by the discard subsystem, which will be added in subsequent commits.
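The multi-bitmap walk this enables can be sketched standalone. Everything below is an illustrative stand-in, not reiser4's actual types: `BITS_PER_BMAP` plays the role of bmap_bit_count() and `bmaps_touched()` mirrors how a range is split into per-bitmap pieces.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in: each "bitmap block" covers BITS_PER_BMAP
 * block numbers. */
#define BITS_PER_BMAP 4096ULL

/* Split an inclusive block range into bitmap blocks and count how many
 * of them the range touches -- the same walk that the reworked
 * reiser4_check_blocks_bitmap() performs over [bmap; end_bmap]. */
static uint64_t bmaps_touched(uint64_t start, uint64_t len)
{
	uint64_t end = start + len - 1;         /* inclusive end block */
	uint64_t first = start / BITS_PER_BMAP; /* parse_blocknr() analogue */
	uint64_t last = end / BITS_PER_BMAP;

	return last - first + 1;
}
```

A range that straddles a bitmap boundary (e.g. blocks 4095..4096) touches two bitmaps, which is exactly the case the old single-bitmap check could not handle.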

Signed-off-by: Ivan Shapovalov <intelfx100@gmail.com>
---
 fs/reiser4/block_alloc.c                  | 16 +------
 fs/reiser4/block_alloc.h                  | 14 +++---
 fs/reiser4/plugin/space/bitmap.c          | 77 ++++++++++++++++++++-----------
 fs/reiser4/plugin/space/bitmap.h          |  2 +-
 fs/reiser4/plugin/space/space_allocator.h |  4 +-
 fs/reiser4/znode.c                        |  9 ++--
 6 files changed, 68 insertions(+), 54 deletions(-)

diff --git a/fs/reiser4/block_alloc.c b/fs/reiser4/block_alloc.c
index 81ed96f..57b0836 100644
--- a/fs/reiser4/block_alloc.c
+++ b/fs/reiser4/block_alloc.c
@@ -962,26 +962,14 @@ static void used2free(reiser4_super_info_data * sbinfo, __u64 count)
 	spin_unlock_reiser4_super(sbinfo);
 }
 
-#if REISER4_DEBUG
-
 /* check "allocated" state of given block range */
-static void
+int
 reiser4_check_blocks(const reiser4_block_nr * start,
 		     const reiser4_block_nr * len, int desired)
 {
-	sa_check_blocks(start, len, desired);
+	return sa_check_blocks(start, len, desired);
 }
 
-/* check "allocated" state of given block */
-void reiser4_check_block(const reiser4_block_nr * block, int desired)
-{
-	const reiser4_block_nr one = 1;
-
-	reiser4_check_blocks(block, &one, desired);
-}
-
-#endif
-
 /* Blocks deallocation function may do an actual deallocation through space
    plugin allocation or store deleted block numbers in atom's delete_set data
    structure depend on @defer parameter. */
diff --git a/fs/reiser4/block_alloc.h b/fs/reiser4/block_alloc.h
index 689efc1..a4e98af 100644
--- a/fs/reiser4/block_alloc.h
+++ b/fs/reiser4/block_alloc.h
@@ -150,15 +150,15 @@ extern void cluster_reserved2free(int count);
 
 extern int reiser4_check_block_counters(const struct super_block *);
 
-#if REISER4_DEBUG
 
-extern void reiser4_check_block(const reiser4_block_nr *, int);
+extern int reiser4_check_blocks(const reiser4_block_nr *start,
+                                const reiser4_block_nr *len, int desired);
 
-#else
-
-#  define reiser4_check_block(beg, val)        noop
-
-#endif
+static inline int reiser4_check_block(const reiser4_block_nr *start,
+                                      int desired)
+{
+	return reiser4_check_blocks(start, NULL, desired);
+}
 
 extern int reiser4_pre_commit_hook(void);
 extern void reiser4_post_commit_hook(void);
diff --git a/fs/reiser4/plugin/space/bitmap.c b/fs/reiser4/plugin/space/bitmap.c
index 1d0fabf..5bfa71b 100644
--- a/fs/reiser4/plugin/space/bitmap.c
+++ b/fs/reiser4/plugin/space/bitmap.c
@@ -1222,29 +1222,13 @@ void reiser4_dealloc_blocks_bitmap(reiser4_space_allocator * allocator,
 	release_and_unlock_bnode(bnode);
 }
 
-/* plugin->u.space_allocator.check_blocks(). */
-void reiser4_check_blocks_bitmap(const reiser4_block_nr * start,
-				 const reiser4_block_nr * len, int desired)
+static int check_blocks_one_bitmap(bmap_nr_t bmap, bmap_off_t start_offset,
+                                    bmap_off_t end_offset, int desired)
 {
-#if REISER4_DEBUG
 	struct super_block *super = reiser4_get_current_sb();
-
-	bmap_nr_t bmap;
-	bmap_off_t start_offset;
-	bmap_off_t end_offset;
-
-	struct bitmap_node *bnode;
+	struct bitmap_node *bnode = get_bnode(super, bmap);
 	int ret;
 
-	assert("zam-622", len != NULL);
-	check_block_range(start, len);
-	parse_blocknr(start, &bmap, &start_offset);
-
-	end_offset = start_offset + *len;
-	assert("nikita-2214", end_offset <= bmap_bit_count(super->s_blocksize));
-
-	bnode = get_bnode(super, bmap);
-
 	assert("nikita-2215", bnode != NULL);
 
 	ret = load_and_lock_bnode(bnode);
@@ -1253,19 +1237,60 @@ void reiser4_check_blocks_bitmap(const reiser4_block_nr * start,
 	assert("nikita-2216", jnode_is_loaded(bnode->wjnode));
 
 	if (desired) {
-		assert("zam-623",
-		       reiser4_find_next_zero_bit(bnode_working_data(bnode),
+		ret = reiser4_find_next_zero_bit(bnode_working_data(bnode),
 						  end_offset, start_offset)
-		       >= end_offset);
+		      >= end_offset;
 	} else {
-		assert("zam-624",
-		       reiser4_find_next_set_bit(bnode_working_data(bnode),
+		ret = reiser4_find_next_set_bit(bnode_working_data(bnode),
 						 end_offset, start_offset)
-		       >= end_offset);
+		      >= end_offset;
 	}
 
 	release_and_unlock_bnode(bnode);
-#endif
+
+	return ret;
+}
+
+/* plugin->u.space_allocator.check_blocks(). */
+int reiser4_check_blocks_bitmap(const reiser4_block_nr * start,
+				 const reiser4_block_nr * len, int desired)
+{
+	struct super_block *super = reiser4_get_current_sb();
+
+	reiser4_block_nr end;
+	bmap_nr_t bmap, end_bmap;
+	bmap_off_t offset;
+	bmap_off_t end_offset;
+	const bmap_off_t max_offset = bmap_bit_count(super->s_blocksize);
+
+	if (len != NULL) {
+		check_block_range(start, len);
+		end = *start + *len - 1;
+	} else {
+		/* end is used as temporary len here */
+		check_block_range(start, &(end = 1));
+		end = *start;
+	}
+
+	parse_blocknr(start, &bmap, &offset);
+
+	if (end == *start) {
+		end_bmap = bmap;
+		end_offset = offset;
+	} else {
+		parse_blocknr(&end, &end_bmap, &end_offset);
+	}
+	++end_offset;
+
+	assert("intelfx-4", end_bmap >= bmap);
+	assert("intelfx-5", ergo(end_bmap == bmap, end_offset > offset));
+
+	for (; bmap < end_bmap; bmap++, offset = 0) {
+		if (!check_blocks_one_bitmap(bmap, offset, max_offset, desired)) {
+			return 0;
+		}
+	}
+	return check_blocks_one_bitmap(bmap, offset, end_offset, desired);
 }
 
 /* conditional insertion of @node into atom's overwrite set  if it was not there */
diff --git a/fs/reiser4/plugin/space/bitmap.h b/fs/reiser4/plugin/space/bitmap.h
index be867f1..4590498 100644
--- a/fs/reiser4/plugin/space/bitmap.h
+++ b/fs/reiser4/plugin/space/bitmap.h
@@ -19,7 +19,7 @@ extern int reiser4_alloc_blocks_bitmap(reiser4_space_allocator *,
 				       reiser4_blocknr_hint *, int needed,
 				       reiser4_block_nr * start,
 				       reiser4_block_nr * len);
-extern void reiser4_check_blocks_bitmap(const reiser4_block_nr *,
+extern int reiser4_check_blocks_bitmap(const reiser4_block_nr *,
 					const reiser4_block_nr *, int);
 extern void reiser4_dealloc_blocks_bitmap(reiser4_space_allocator *,
 					  reiser4_block_nr,
diff --git a/fs/reiser4/plugin/space/space_allocator.h b/fs/reiser4/plugin/space/space_allocator.h
index 5bfa9a3..71bfd11 100644
--- a/fs/reiser4/plugin/space/space_allocator.h
+++ b/fs/reiser4/plugin/space/space_allocator.h
@@ -29,9 +29,9 @@ static inline void sa_dealloc_blocks (reiser4_space_allocator * al, reiser4_bloc
 	reiser4_dealloc_blocks_##allocator (al, start, len);								\
 }															\
 															\
-static inline void sa_check_blocks (const reiser4_block_nr * start, const reiser4_block_nr * end, int desired) 		\
+static inline int sa_check_blocks (const reiser4_block_nr * start, const reiser4_block_nr * end, int desired) 		\
 {															\
-	reiser4_check_blocks_##allocator (start, end, desired);							        \
+	return reiser4_check_blocks_##allocator (start, end, desired);							        \
 }															\
 															\
 static inline void sa_pre_commit_hook (void)										\
diff --git a/fs/reiser4/znode.c b/fs/reiser4/znode.c
index 4ff9714..08eab3d 100644
--- a/fs/reiser4/znode.c
+++ b/fs/reiser4/znode.c
@@ -534,10 +534,11 @@ znode *zget(reiser4_tree * tree,
 
 		write_unlock_tree(tree);
 	}
-#if REISER4_DEBUG
-	if (!reiser4_blocknr_is_fake(blocknr) && *blocknr != 0)
-		reiser4_check_block(blocknr, 1);
-#endif
+
+	assert("intelfx-6",
+	       ergo(!reiser4_blocknr_is_fake(blocknr) && *blocknr != 0,
+	            reiser4_check_block(blocknr, 1)));
+
 	/* Check for invalid tree level, return -EIO */
 	if (unlikely(znode_get_level(result) != level)) {
 		warning("jmacd-504",
-- 
2.0.0



* [RFC] [PATCHv5 2/4] reiser4: add an implementation of "block lists", split off from the discard code.
  2014-06-20 20:39 [RFC] [PATCHv5 0/4] reiser4: discard support: initial implementation, refactored Ivan Shapovalov
  2014-06-20 20:39 ` [RFC] [PATCHv5 1/4] reiser4: make space_allocator's check_blocks() reusable Ivan Shapovalov
@ 2014-06-20 20:39 ` Ivan Shapovalov
  2014-06-20 20:39 ` [RFC] [PATCHv5 3/4] reiser4: add reiser4_debug(): a conditional equivalent of reiser4_log() Ivan Shapovalov
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Ivan Shapovalov @ 2014-06-20 20:39 UTC (permalink / raw)
  To: reiserfs-devel; +Cc: edward.shishkin, Ivan Shapovalov

The block list is a less memory-efficient but ordered (and thus sortable)
implementation of the same concept as the blocknr_set.
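An ordered list lets adjacent extents be coalesced. The union rule this patch's blocknr_list_entry_merge() applies to half-open ranges (merge when they overlap or touch) can be sketched standalone; the helper name and plain-pointer interface are illustrative:

```c
#include <assert.h>
#include <stdint.h>

/* Merge [start; start + len) into the range [*to_start; *to_start + *to_len)
 * if the two half-open ranges overlap or are adjacent.
 * Returns 0 on success, -1 if the ranges are disjoint. */
static int try_merge(uint64_t *to_start, uint64_t *to_len,
		     uint64_t start, uint64_t len)
{
	uint64_t end = start + len;
	uint64_t to_end = *to_start + *to_len;

	if (*to_start <= end && start <= to_end) {
		if (start < *to_start)
			*to_start = start;
		if (end > to_end)
			to_end = end;
		*to_len = to_end - *to_start;
		return 0;
	}
	return -1;
}
```

Note that `<=` (rather than `<`) in the overlap test is what makes merely touching ranges, such as [10; 15) and [15; 20), merge into one.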

Signed-off-by: Ivan Shapovalov <intelfx100@gmail.com>
---
 fs/reiser4/Makefile      |   1 +
 fs/reiser4/blocknrlist.c | 315 +++++++++++++++++++++++++++++++++++++++++++++++
 fs/reiser4/forward.h     |   1 +
 fs/reiser4/txnmgr.h      |  19 +++
 4 files changed, 336 insertions(+)
 create mode 100644 fs/reiser4/blocknrlist.c

diff --git a/fs/reiser4/Makefile b/fs/reiser4/Makefile
index ff73d43..9f07194 100644
--- a/fs/reiser4/Makefile
+++ b/fs/reiser4/Makefile
@@ -46,6 +46,7 @@ reiser4-y := \
 		   status_flags.o \
 		   init_super.o \
 		   safe_link.o \
+		   blocknrlist.o \
            \
 		   plugin/plugin.o \
 		   plugin/plugin_set.o \
diff --git a/fs/reiser4/blocknrlist.c b/fs/reiser4/blocknrlist.c
new file mode 100644
index 0000000..15cef5c
--- /dev/null
+++ b/fs/reiser4/blocknrlist.c
@@ -0,0 +1,315 @@
+/* Copyright 2001, 2002, 2003 by Hans Reiser, licensing governed by
+ * reiser4/README */
+
+/* This is a block list implementation, used to create ordered block sets
+   (at the cost of being less memory efficient than blocknr_set).
+   It is used by discard code. */
+
+#include "debug.h"
+#include "dformat.h"
+#include "txnmgr.h"
+#include "context.h"
+
+#include <linux/slab.h>
+#include <linux/list_sort.h>
+
+/**
+ * Represents an extent range [@start; @start + @len).
+ */
+struct blocknr_list_entry {
+	reiser4_block_nr start, len;
+	struct list_head link;
+};
+
+#define blocknr_list_entry(ptr) list_entry(ptr, blocknr_list_entry, link)
+
+static void blocknr_list_entry_init(blocknr_list_entry *entry)
+{
+	assert("intelfx-11", entry != NULL);
+
+	entry->start = 0;
+	entry->len = 0;
+	INIT_LIST_HEAD(&entry->link);
+}
+
+static blocknr_list_entry *blocknr_list_entry_alloc(void)
+{
+	blocknr_list_entry *entry;
+
+	entry = (blocknr_list_entry *)kmalloc(sizeof(blocknr_list_entry),
+	                                      reiser4_ctx_gfp_mask_get());
+	if (entry == NULL) {
+		return NULL;
+	}
+
+	blocknr_list_entry_init(entry);
+
+	return entry;
+}
+
+static void blocknr_list_entry_free(blocknr_list_entry *entry)
+{
+	assert("intelfx-12", entry != NULL);
+
+	kfree(entry);
+}
+
+/**
+ * Given the range in @to and the range [@start; @start + @len), if they
+ * overlap or touch, their union is calculated and saved in @to.
+ */
+static int blocknr_list_entry_merge(blocknr_list_entry *to,
+                                    reiser4_block_nr start,
+                                    reiser4_block_nr len)
+{
+	reiser4_block_nr end, to_end;
+
+	assert("intelfx-13", to != NULL);
+
+	assert("intelfx-16", to->len > 0);
+	assert("intelfx-17", len > 0);
+
+	end = start + len;
+	to_end = to->start + to->len;
+
+	if ((to->start <= end) && (start <= to_end)) {
+		reiser4_debug("discard",
+		              "Merging extents: [%llu; %llu) and [%llu; %llu)",
+		              to->start, to_end, start, end);
+
+		if (start < to->start) {
+			to->start = start;
+		}
+
+		if (end > to_end) {
+			to_end = end;
+		}
+
+		to->len = to_end - to->start;
+
+		return 0;
+	}
+
+	return -1;
+}
+
+static int blocknr_list_entry_merge_entry(blocknr_list_entry *to,
+                                          blocknr_list_entry *from)
+{
+	assert("intelfx-18", from != NULL);
+
+	return blocknr_list_entry_merge(to, from->start, from->len);
+}
+
+/**
+ * A comparison function for list_sort().
+ *
+ * "The comparison function @cmp must return a negative value if @a
+ * should sort before @b, and a positive value if @a should sort after
+ * @b. If @a and @b are equivalent, and their original relative
+ * ordering is to be preserved, @cmp must return 0."
+ */
+static int blocknr_list_entry_compare(void* priv UNUSED_ARG,
+                                      struct list_head *a, struct list_head *b)
+{
+	blocknr_list_entry *entry_a, *entry_b;
+	reiser4_block_nr entry_a_end, entry_b_end;
+
+	assert("intelfx-19", a != NULL);
+	assert("intelfx-20", b != NULL);
+
+	entry_a = blocknr_list_entry(a);
+	entry_b = blocknr_list_entry(b);
+
+	entry_a_end = entry_a->start + entry_a->len;
+	entry_b_end = entry_b->start + entry_b->len;
+
+	/* First sort by starting block numbers... */
+	if (entry_a->start < entry_b->start) {
+		return -1;
+	}
+
+	if (entry_a->start > entry_b->start) {
+		return 1;
+	}
+
+	/** Then by ending block numbers.
+	 * If @a contains @b, it will be sorted before. */
+	if (entry_a_end > entry_b_end) {
+		return -1;
+	}
+
+	if (entry_a_end < entry_b_end) {
+		return 1;
+	}
+
+	return 0;
+}
+
+void blocknr_list_init(struct list_head* blist)
+{
+	assert("intelfx-24", blist != NULL);
+
+	INIT_LIST_HEAD(blist);
+}
+
+void blocknr_list_destroy(struct list_head* blist)
+{
+	struct list_head *pos, *tmp;
+	blocknr_list_entry *entry;
+
+	assert("intelfx-25", blist != NULL);
+
+	list_for_each_safe(pos, tmp, blist) {
+		entry = blocknr_list_entry(pos);
+		list_del_init(pos);
+		blocknr_list_entry_free(entry);
+	}
+
+	assert("intelfx-48", list_empty(blist));
+}
+
+void blocknr_list_merge(struct list_head *from, struct list_head *to)
+{
+	assert("intelfx-26", from != NULL);
+	assert("intelfx-27", to != NULL);
+
+	list_splice_tail_init(from, to);
+
+	assert("intelfx-49", list_empty(from));
+}
+
+void blocknr_list_sort_and_join(struct list_head *blist)
+{
+	struct list_head *pos, *next;
+	struct blocknr_list_entry *entry, *next_entry;
+
+	assert("intelfx-50", blist != NULL);
+
+	/* Step 1. Sort the extent list. */
+	list_sort(NULL, blist, blocknr_list_entry_compare);
+
+	/* Step 2. Join adjacent extents in the list. */
+	pos = blist->next;
+	next = pos->next;
+	entry = blocknr_list_entry(pos);
+
+	for (; next != blist; next = pos->next) {
+		/** @next is a valid node at this point */
+		next_entry = blocknr_list_entry(next);
+
+		/** try to merge @next into @pos */
+		if (!blocknr_list_entry_merge_entry(entry, next_entry)) {
+			/** successful; delete the @next node.
+			 * next merge will be attempted into the same node. */
+			list_del_init(next);
+			blocknr_list_entry_free(next_entry);
+		} else {
+			/** otherwise advance @pos. */
+			pos = next;
+			entry = next_entry;
+		}
+	}
+}
+
+int blocknr_list_add_extent(txn_atom *atom,
+                            struct list_head *blist,
+                            blocknr_list_entry **new_entry,
+                            const reiser4_block_nr *start,
+                            const reiser4_block_nr *len)
+{
+	assert("intelfx-29", atom != NULL);
+	assert("intelfx-42", atom_is_protected(atom));
+	assert("intelfx-43", blist != NULL);
+	assert("intelfx-30", new_entry != NULL);
+	assert("intelfx-31", start != NULL);
+	assert("intelfx-32", len != NULL && *len > 0);
+
+	if (*new_entry == NULL) {
+		/*
+		 * Optimization: try to merge new extent into the last one.
+		 */
+		if (!list_empty(blist)) {
+			blocknr_list_entry *last_entry;
+			last_entry = blocknr_list_entry(blist->prev);
+			if (!blocknr_list_entry_merge(last_entry, *start, *len)) {
+				return 0;
+			}
+		}
+
+		/*
+		 * Otherwise, allocate a new entry and tell -E_REPEAT.
+		 * Next time we'll take the branch below.
+		 */
+		spin_unlock_atom(atom);
+		*new_entry = blocknr_list_entry_alloc();
+		return (*new_entry != NULL) ? -E_REPEAT : RETERR(-ENOMEM);
+	}
+
+	/*
+	 * The entry has been allocated beforehand, fill it and link to the list.
+	 */
+	(*new_entry)->start = *start;
+	(*new_entry)->len = *len;
+	list_add_tail(&(*new_entry)->link, blist);
+
+	return 0;
+}
+
+int blocknr_list_iterator(txn_atom *atom,
+                          struct list_head *blist,
+                          blocknr_set_actor_f actor,
+                          void *data,
+                          int delete)
+{
+	struct list_head *pos;
+	blocknr_list_entry *entry;
+	int ret = 0;
+
+	assert("intelfx-46", blist != NULL);
+	assert("intelfx-47", actor != NULL);
+
+	if (delete) {
+		struct list_head *tmp;
+
+		list_for_each_safe(pos, tmp, blist) {
+			entry = blocknr_list_entry(pos);
+
+			/*
+			 * Do not exit, delete flag is set. Instead, on the first error we
+			 * downgrade from iterating to just deleting.
+			 */
+			if (ret == 0) {
+				ret = actor(atom, &entry->start, &entry->len, data);
+			}
+
+			list_del_init(pos);
+			blocknr_list_entry_free(entry);
+		}
+
+		assert("intelfx-44", list_empty(blist));
+	} else {
+		list_for_each(pos, blist) {
+			entry = blocknr_list_entry(pos);
+
+			ret = actor(atom, &entry->start, &entry->len, data);
+
+			if (ret != 0) {
+				return ret;
+			}
+		}
+	}
+
+	return ret;
+}
+
+/* Make Linus happy.
+   Local variables:
+   c-indentation-style: "K&R"
+   mode-name: "LC"
+   c-basic-offset: 8
+   tab-width: 8
+   fill-column: 120
+   scroll-step: 1
+   End:
+*/
diff --git a/fs/reiser4/forward.h b/fs/reiser4/forward.h
index 15dbfdc..9170c2b 100644
--- a/fs/reiser4/forward.h
+++ b/fs/reiser4/forward.h
@@ -38,6 +38,7 @@ typedef struct reiser4_dir_entry_desc reiser4_dir_entry_desc;
 typedef struct reiser4_context reiser4_context;
 typedef struct carry_level carry_level;
 typedef struct blocknr_set_entry blocknr_set_entry;
+typedef struct blocknr_list_entry blocknr_list_entry;
 /* super_block->s_fs_info points to this */
 typedef struct reiser4_super_info_data reiser4_super_info_data;
 /* next two objects are fields of reiser4_super_info_data */
diff --git a/fs/reiser4/txnmgr.h b/fs/reiser4/txnmgr.h
index 034a3fe..18ca23d 100644
--- a/fs/reiser4/txnmgr.h
+++ b/fs/reiser4/txnmgr.h
@@ -485,6 +485,25 @@ extern int blocknr_set_iterator(txn_atom * atom, struct list_head * bset,
 				blocknr_set_actor_f actor, void *data,
 				int delete);
 
+/* This is the block list interface (see blocknrlist.c) */
+extern void blocknr_list_init(struct list_head *blist);
+extern void blocknr_list_destroy(struct list_head *blist);
+extern void blocknr_list_merge(struct list_head *from, struct list_head *to);
+extern void blocknr_list_sort_and_join(struct list_head *blist);
+/**
+ * The @atom should be locked.
+ */
+extern int blocknr_list_add_extent(txn_atom *atom,
+                                   struct list_head *blist,
+                                   blocknr_list_entry **new_entry,
+                                   const reiser4_block_nr *start,
+                                   const reiser4_block_nr *len);
+extern int blocknr_list_iterator(txn_atom *atom,
+                                 struct list_head *blist,
+                                 blocknr_set_actor_f actor,
+                                 void *data,
+                                 int delete);
+
 /* flush code takes care about how to fuse flush queues */
 extern void flush_init_atom(txn_atom * atom);
 extern void flush_fuse_queues(txn_atom * large, txn_atom * small);
-- 
2.0.0



* [RFC] [PATCHv5 3/4] reiser4: add reiser4_debug(): a conditional equivalent of reiser4_log().
  2014-06-20 20:39 [RFC] [PATCHv5 0/4] reiser4: discard support: initial implementation, refactored Ivan Shapovalov
  2014-06-20 20:39 ` [RFC] [PATCHv5 1/4] reiser4: make space_allocator's check_blocks() reusable Ivan Shapovalov
  2014-06-20 20:39 ` [RFC] [PATCHv5 2/4] reiser4: add an implementation of "block lists", split off from the discard code Ivan Shapovalov
@ 2014-06-20 20:39 ` Ivan Shapovalov
  2014-06-20 20:39 ` [RFC] [PATCHv5 4/4] reiser4: discard support: initial implementation using linked lists Ivan Shapovalov
  2014-06-20 22:35 ` [RFC] [PATCHv5 0/4] reiser4: discard support: initial implementation, refactored Ivan Shapovalov
  4 siblings, 0 replies; 9+ messages in thread
From: Ivan Shapovalov @ 2014-06-20 20:39 UTC (permalink / raw)
  To: reiserfs-devel; +Cc: edward.shishkin, Ivan Shapovalov

Signed-off-by: Ivan Shapovalov <intelfx100@gmail.com>
---
 fs/reiser4/debug.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/fs/reiser4/debug.h b/fs/reiser4/debug.h
index 060921f..281bf78 100644
--- a/fs/reiser4/debug.h
+++ b/fs/reiser4/debug.h
@@ -46,6 +46,9 @@
 /* version of info that only actually prints anything when _d_ebugging
     is on */
 #define dinfo(format, ...) printk(format , ## __VA_ARGS__)
+/* a conditional equivalent of reiser4_log */
+#define reiser4_debug(label, format, ...)				\
+	reiser4_log(label, format, ## __VA_ARGS__)
 /* macro to catch logical errors. Put it into `default' clause of
     switch() statement. */
 #define impossible(label, format, ...) 			\
@@ -77,6 +80,7 @@ extern void call_on_each_assert(void);
 #else
 
 #define dinfo(format, args...) noop
+#define reiser4_debug(label, format, args...) noop
 #define impossible(label, format, args...) noop
 #define assert(label, cond) noop
 #define check_me(label, expr)	((void) (expr))
-- 
2.0.0



* [RFC] [PATCHv5 4/4] reiser4: discard support: initial implementation using linked lists.
  2014-06-20 20:39 [RFC] [PATCHv5 0/4] reiser4: discard support: initial implementation, refactored Ivan Shapovalov
                   ` (2 preceding siblings ...)
  2014-06-20 20:39 ` [RFC] [PATCHv5 3/4] reiser4: add reiser4_debug(): a conditional equivalent of reiser4_log() Ivan Shapovalov
@ 2014-06-20 20:39 ` Ivan Shapovalov
  2014-06-20 22:35 ` [RFC] [PATCHv5 0/4] reiser4: discard support: initial implementation, refactored Ivan Shapovalov
  4 siblings, 0 replies; 9+ messages in thread
From: Ivan Shapovalov @ 2014-06-20 20:39 UTC (permalink / raw)
  To: reiserfs-devel; +Cc: edward.shishkin, Ivan Shapovalov

Implementation details:

- candidate extents are stored during the transaction in a linked list
- if an extent being added is adjacent to the last one, they are merged
  to avoid an extra allocation
- at commit time, the list is sorted and adjacent extents are merged
- extents are then processed to align discard requests to erase unit
  boundaries, extending them to neighbouring blocks where needed
- each erase unit is checked to be fully deallocated and then submitted
  for discard
- processing stops at the first failure (this does not fail the atom commit)
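The erase-unit alignment step above is plain arithmetic. A hedged sketch, where `granularity` and `alignment` stand in for the kernel's queue_limits fields, the helper name is hypothetical, and `start >= alignment` is assumed:

```c
#include <assert.h>
#include <stdint.h>

/* Round a block extent [start; start + len) outward to whole erase
 * units: start down, end up, both relative to the alignment offset.
 * Assumes start >= alignment (no underflow handling in this sketch). */
static void cover_with_erase_units(uint64_t start, uint64_t len,
				   uint64_t granularity, uint64_t alignment,
				   uint64_t *out_start, uint64_t *out_len)
{
	uint64_t end = start + len;
	/* round start down to an erase-unit boundary */
	uint64_t a_start = start - ((start - alignment) % granularity);
	/* round end up to an erase-unit boundary */
	uint64_t rem = (end - alignment) % granularity;
	uint64_t a_end = rem ? end + (granularity - rem) : end;

	*out_start = a_start;
	*out_len = a_end - a_start;
}
```

The neighbouring blocks pulled in by this rounding are precisely the ones that must then be re-checked against the bitmap before the erase unit may be discarded.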

For now (shortcomings):

- kernel-reported erase unit granularity and alignment offset are used
  as-is without any override (an override may make sense to mitigate
  bogus values sometimes reported by the kernel or hardware)
- processing each erase unit makes its own bitmap query to check its
  allocation status; this is suboptimal when the granularity is smaller
  than the block size (this should not matter in practice, as the
  granularity is almost never that small)

Note on candidate block collection:

Another per-atom block set, ->aux_delete_set, has been added, containing
extents deallocated without BA_DEFER (i.e. blocks of the wandered
journal). When discard is enabled, storage of both delete sets is enabled.
They are stored using blocknr_lists, then spliced and sorted before
discarding, so that all blocks deallocated during the transaction are
considered for discarding.

Otherwise, only ->delete_set is maintained, and it is stored using a
blocknr_set, which is more memory-efficient but inherently unordered (so
it cannot be used for the discard algorithm).

The only semantically significant change to existing code is that
reiser4_post_commit_hook() no longer clears ->delete_set (instead, it is
cleared either by discard_atom() or as part of the atom's destruction).
This is OK because ->delete_set is not accessed after
reiser4_post_commit_hook().
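The commit-time sort-and-merge that the ordered delete sets make possible (and the unordered blocknr_set does not) can be sketched with an array in place of the linked list. This mirrors the flow of blocknr_list_sort_and_join() from patch 2 but is not the patch's code:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

struct extent { uint64_t start, len; };

static int cmp_extent(const void *a, const void *b)
{
	const struct extent *ea = a, *eb = b;

	/* first by starting block... */
	if (ea->start != eb->start)
		return ea->start < eb->start ? -1 : 1;
	/* ...then longer extent first, so a containing extent sorts before */
	return ea->len > eb->len ? -1 : (ea->len < eb->len ? 1 : 0);
}

/* Sort @ext in place, join overlapping or touching neighbours, and
 * return the number of extents that remain. */
static size_t sort_and_join(struct extent *ext, size_t n)
{
	size_t out = 0, i;

	if (n == 0)
		return 0;
	qsort(ext, n, sizeof(*ext), cmp_extent);
	for (i = 1; i < n; i++) {
		struct extent *cur = &ext[out];

		if (ext[i].start <= cur->start + cur->len) {
			/* mergeable: extend the current extent */
			uint64_t end = ext[i].start + ext[i].len;

			if (end > cur->start + cur->len)
				cur->len = end - cur->start;
		} else {
			/* disjoint: keep it and advance */
			ext[++out] = ext[i];
		}
	}
	return out + 1;
}
```

After sorting, a single linear pass is enough to merge everything mergeable, which is why the discard path pays for an ordered container while the non-discard path keeps the cheaper blocknr_set.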

Signed-off-by: Ivan Shapovalov <intelfx100@gmail.com>
---
 fs/reiser4/Makefile              |   1 +
 fs/reiser4/block_alloc.c         |  33 ++++--
 fs/reiser4/dformat.h             |   2 +
 fs/reiser4/discard.c             | 247 +++++++++++++++++++++++++++++++++++++++
 fs/reiser4/discard.h             |  31 +++++
 fs/reiser4/init_super.c          |   2 +
 fs/reiser4/plugin/space/bitmap.c |  11 +-
 fs/reiser4/super.h               |   4 +-
 fs/reiser4/txnmgr.c              | 125 +++++++++++++++++++-
 fs/reiser4/txnmgr.h              |  44 ++++++-
 10 files changed, 478 insertions(+), 22 deletions(-)
 create mode 100644 fs/reiser4/discard.c
 create mode 100644 fs/reiser4/discard.h

diff --git a/fs/reiser4/Makefile b/fs/reiser4/Makefile
index 9f07194..f50bb96 100644
--- a/fs/reiser4/Makefile
+++ b/fs/reiser4/Makefile
@@ -47,6 +47,7 @@ reiser4-y := \
 		   init_super.o \
 		   safe_link.o \
 		   blocknrlist.o \
+		   discard.o \
            \
 		   plugin/plugin.o \
 		   plugin/plugin_set.o \
diff --git a/fs/reiser4/block_alloc.c b/fs/reiser4/block_alloc.c
index 57b0836..9e44e8b 100644
--- a/fs/reiser4/block_alloc.c
+++ b/fs/reiser4/block_alloc.c
@@ -9,6 +9,7 @@ reiser4/README */
 #include "block_alloc.h"
 #include "tree.h"
 #include "super.h"
+#include "discard.h"
 
 #include <linux/types.h>	/* for __u??  */
 #include <linux/fs.h>		/* for struct super_block  */
@@ -992,6 +993,7 @@ reiser4_dealloc_blocks(const reiser4_block_nr * start,
 	int ret;
 	reiser4_context *ctx;
 	reiser4_super_info_data *sbinfo;
+	void *new_entry = NULL;
 
 	ctx = get_current_context();
 	sbinfo = get_super_private(ctx->super);
@@ -1007,17 +1009,13 @@ reiser4_dealloc_blocks(const reiser4_block_nr * start,
 	}
 
 	if (flags & BA_DEFER) {
-		blocknr_set_entry *bsep = NULL;
-
-		/* storing deleted block numbers in a blocknr set
-		   datastructure for further actual deletion */
+		/* store deleted block numbers in the atom's deferred delete set
+		   for further actual deletion */
 		do {
 			atom = get_current_atom_locked();
 			assert("zam-430", atom != NULL);
 
-			ret =
-			    blocknr_set_add_extent(atom, &atom->delete_set,
-						   &bsep, start, len);
+			ret = atom_dset_deferred_add_extent(atom, &new_entry, start, len);
 
 			if (ret == -ENOMEM)
 				return ret;
@@ -1031,6 +1029,25 @@ reiser4_dealloc_blocks(const reiser4_block_nr * start,
 		spin_unlock_atom(atom);
 
 	} else {
+		/* store deleted block numbers in the atom's immediate delete set
+		   for further processing */
+		do {
+			atom = get_current_atom_locked();
+			assert("intelfx-51", atom != NULL);
+
+			ret = atom_dset_immediate_add_extent(atom, &new_entry, start, len);
+
+			if (ret == -ENOMEM)
+				return ret;
+
+			/* This loop might spin at most two times */
+		} while (ret == -E_REPEAT);
+
+		assert("intelfx-52", ret == 0);
+		assert("intelfx-53", atom != NULL);
+
+		spin_unlock_atom(atom);
+
 		assert("zam-425", get_current_super_private() != NULL);
 		sa_dealloc_blocks(reiser4_get_space_allocator(ctx->super),
 				  *start, *len);
@@ -1128,7 +1145,7 @@ void reiser4_post_commit_hook(void)
 
 	/* do the block deallocation which was deferred
 	   until commit is done */
-	blocknr_set_iterator(atom, &atom->delete_set, apply_dset, NULL, 1);
+	atom_dset_deferred_apply(atom, apply_dset, NULL, 0);
 
 	assert("zam-504", get_current_super_private() != NULL);
 	sa_post_commit_hook();
diff --git a/fs/reiser4/dformat.h b/fs/reiser4/dformat.h
index 7943762..7316754 100644
--- a/fs/reiser4/dformat.h
+++ b/fs/reiser4/dformat.h
@@ -14,6 +14,8 @@
 #if !defined(__FS_REISER4_DFORMAT_H__)
 #define __FS_REISER4_DFORMAT_H__
 
+#include "debug.h"
+
 #include <asm/byteorder.h>
 #include <asm/unaligned.h>
 #include <linux/types.h>
diff --git a/fs/reiser4/discard.c b/fs/reiser4/discard.c
new file mode 100644
index 0000000..f1b00ef
--- /dev/null
+++ b/fs/reiser4/discard.c
@@ -0,0 +1,247 @@
+/* Copyright 2001, 2002, 2003 by Hans Reiser, licensing governed by
+ * reiser4/README */
+
+/* TRIM/discard interoperation subsystem for reiser4. */
+
+/*
+ * This subsystem is responsible for populating an atom's ->discard_set and
+ * (later) converting it into a series of discard calls to the kernel.
+ *
+ * Discard is an in-kernel interface for notifying the storage
+ * hardware about blocks that are being logically freed by the filesystem.
+ * This is done via calling the blkdev_issue_discard() function. There are
+ * restrictions on block ranges: they should constitute at least one erase unit
+ * in length and be correspondingly aligned. Otherwise a discard request will
+ * be ignored.
+ *
+ * The erase unit size is kept in struct queue_limits as discard_granularity.
+ * The offset from the partition start to the first erase unit is kept in
+ * struct queue_limits as discard_alignment.
+ *
+ * At atom level, we log all blocks that happen to be deallocated at least once.
+ * Then we have to read the log, filter out any blocks that have since been
+ * allocated again and issue discards for everything still valid. This is what
+ * discard.[ch] is here for.
+ *
+ * The log is made up of the atom's ->delete_set and ->aux_delete_set. Simply
+ * iterating through the logged block ranges is not enough:
+ * - if a single logged range is smaller than the erase unit, then this
+ *   particular range won't be discarded even if it is surrounded by enough
+ *   free blocks to constitute a whole erase unit;
+ * - we won't be able to merge small adjacent ranges forming a range long
+ *   enough to be discarded.
+ *
+ * MECHANISM:
+ *
+ * During the transaction deallocated extents are logged as-is to a data
+ * structure (let's call it "the discard set"). On atom commit we will generate
+ * a minimal superset of the discard set, but comprised of whole erase units.
+ *
+ * For now the discard set is a linked list.
+ *
+ * So, at commit time the following actions take place:
+ * - elements of the discard set are sorted;
+ * - the discard set is iterated, merging any adjacent extents;
+ * - each resulting extent is "covered" by erase units:
+ *   - its start is rounded down to the closest erase unit boundary;
+ *   - starting from this block, extents of erase unit length are created
+ *     until the original is fully covered;
+ * - the calculated erase units are checked to be fully deallocated;
+ * - remaining (valid) erase units are then passed to blkdev_issue_discard().
+ */
+
+#include "discard.h"
+#include "context.h"
+#include "debug.h"
+#include "txnmgr.h"
+#include "super.h"
+
+#include <linux/slab.h>
+#include <linux/fs.h>
+#include <linux/blkdev.h>
+
+static int __discard_extent(struct block_device *bdev, sector_t start,
+                            sector_t len)
+{
+	assert("intelfx-21", bdev != NULL);
+
+	reiser4_debug("discard", "DISCARDING: [%llu; %llu)",
+	              (unsigned long long)start,
+	              (unsigned long long)(start + len));
+
+	return blkdev_issue_discard(bdev, start, len, reiser4_ctx_gfp_mask_get(),
+	                            0);
+}
+
+static int discard_extent(txn_atom *atom UNUSED_ARG,
+                          const reiser4_block_nr* start,
+                          const reiser4_block_nr* len,
+                          void *data UNUSED_ARG)
+{
+	struct super_block *sb = reiser4_get_current_sb();
+	struct block_device *bdev = sb->s_bdev;
+	struct queue_limits *limits = &bdev_get_queue(bdev)->limits;
+
+	sector_t extent_start_sec, extent_end_sec,
+	         unit_sec, request_start_sec = 0, request_len_sec = 0;
+	reiser4_block_nr unit_start_blk, unit_len_blk;
+	int ret, erase_unit_counter = 0;
+
+	const int sec_per_blk = sb->s_blocksize >> 9;
+
+	/* from blkdev_issue_discard():
+	 * Zero-sector (unknown) and one-sector granularities are the same.  */
+	const int granularity = max(limits->discard_granularity >> 9, 1U);
+	const int alignment = (bdev_discard_alignment(bdev) >> 9) % granularity;
+
+	/* we assume block = N * sector */
+	assert("intelfx-7", sec_per_blk > 0);
+
+	reiser4_debug("discard", "Extent {blk}: [%llu; %llu)",
+	              (unsigned long long)*start,
+	              (unsigned long long)(*start + *len));
+
+	/* convert extent to sectors */
+	extent_start_sec = *start * sec_per_blk;
+	extent_end_sec = (*start + *len) * sec_per_blk;
+
+	reiser4_debug("discard", "Extent {sec}: [%llu; %llu)",
+	              (unsigned long long)extent_start_sec,
+	              (unsigned long long)extent_end_sec);
+
+	/* round down extent start sector to an erase unit boundary */
+	unit_sec = extent_start_sec;
+	if (granularity > 1) {
+		sector_t tmp = extent_start_sec - alignment;
+		unit_sec -= sector_div(tmp, granularity);
+	}
+
+	/* iterate over erase units in the extent */
+	do {
+		/* considering erase unit:
+		 * [unit_sec; unit_sec + granularity) */
+
+		reiser4_debug("discard", "Erase unit %d {sec}: [%llu; %llu)",
+		              erase_unit_counter,
+		              (unsigned long long)unit_sec,
+		              (unsigned long long)(unit_sec + granularity));
+
+		/* calculate block range for erase unit:
+		 * [unit_start_blk; unit_start_blk+unit_len_blk) */
+		unit_start_blk = unit_sec;
+		do_div(unit_start_blk, sec_per_blk);
+
+		if (granularity > 1) {
+			unit_len_blk = unit_sec + granularity - 1;
+			do_div(unit_len_blk, sec_per_blk);
+			++unit_len_blk;
+
+			assert("intelfx-22", unit_len_blk > unit_start_blk);
+
+			unit_len_blk -= unit_start_blk;
+		} else {
+			unit_len_blk = 1;
+		}
+
+		reiser4_debug("discard", "Erase unit %d {blk}: [%llu; %llu)",
+		              erase_unit_counter,
+		              (unsigned long long)unit_start_blk,
+		              (unsigned long long)(unit_start_blk + unit_len_blk));
+
+		if (reiser4_check_blocks(&unit_start_blk, &unit_len_blk, 0)) {
+			/* OK. Add this unit to the accumulator.
+			 * We accumulate discard units so that blkdev_issue_discard()
+			 * is not called too frequently. */
+
+			reiser4_debug("discard", "Erase unit %d: OK, adding to request",
+			              erase_unit_counter);
+
+			if (request_len_sec > 0) {
+				request_len_sec += granularity;
+			} else {
+				request_start_sec = unit_sec;
+				request_len_sec = granularity;
+			}
+
+			reiser4_debug("discard",
+			              "Erase unit %d: request updated: [%llu; %llu)",
+			              erase_unit_counter,
+			              (unsigned long long)request_start_sec,
+			              (unsigned long long)(request_start_sec +
+			                                   request_len_sec));
+		} else {
+			/* This unit can't be discarded. Discard what's been accumulated
+			 * so far. */
+			if (request_len_sec > 0) {
+				ret = __discard_extent(bdev, request_start_sec, request_len_sec);
+				if (ret != 0) {
+					return ret;
+				}
+				request_len_sec = 0;
+			}
+		}
+
+		unit_sec += granularity;
+		++erase_unit_counter;
+	} while (unit_sec < extent_end_sec);
+
+	/* Discard the last accumulated request. */
+	if (request_len_sec > 0) {
+		ret = __discard_extent(bdev, request_start_sec, request_len_sec);
+		if (ret != 0) {
+			return ret;
+		}
+	}
+
+	reiser4_debug("discard", "Extent done");
+
+	return 0;
+}
+
+int discard_atom(txn_atom *atom)
+{
+	int ret;
+	struct list_head discard_set;
+
+	assert("intelfx-28", atom != NULL);
+
+	if (!reiser4_is_set(reiser4_get_current_sb(), REISER4_DISCARD)) {
+		spin_unlock_atom(atom);
+		return 0;
+	}
+
+	if (list_empty(&atom->discard.delete_set) &&
+	    list_empty(&atom->discard.aux_delete_set)) {
+		spin_unlock_atom(atom);
+		return 0;
+	}
+
+	/* Take the delete sets from the atom in order to release
+	 * the atom's spinlock. */
+	blocknr_list_init(&discard_set);
+	blocknr_list_merge(&atom->discard.delete_set, &discard_set);
+	blocknr_list_merge(&atom->discard.aux_delete_set, &discard_set);
+	spin_unlock_atom(atom);
+
+	/* Sort the discard list, joining adjacent and overlapping extents. */
+	blocknr_list_sort_and_join(&discard_set);
+
+	/* Perform actual dirty work. */
+	ret = blocknr_list_iterator(NULL, &discard_set, &discard_extent, NULL, 1);
+	if (ret != 0) {
+		return ret;
+	}
+
+	/* Let's do this again for any new extents in the atom's discard set. */
+	return -E_REPEAT;
+}
+
+/* Make Linus happy.
+   Local variables:
+   c-indentation-style: "K&R"
+   mode-name: "LC"
+   c-basic-offset: 8
+   tab-width: 8
+   fill-column: 120
+   scroll-step: 1
+   End:
+*/
diff --git a/fs/reiser4/discard.h b/fs/reiser4/discard.h
new file mode 100644
index 0000000..ea46334
--- /dev/null
+++ b/fs/reiser4/discard.h
@@ -0,0 +1,31 @@
+/* Copyright 2001, 2002, 2003 by Hans Reiser, licensing governed by
+ * reiser4/README */
+
+/* TRIM/discard interoperation subsystem for reiser4. */
+
+#if !defined(__FS_REISER4_DISCARD_H__)
+#define __FS_REISER4_DISCARD_H__
+
+#include "forward.h"
+#include "dformat.h"
+
+/**
+ * Issue discard requests for all block extents recorded in @atom's delete sets,
+ * if discard is enabled. In this case the delete sets are cleared.
+ *
+ * @atom should be locked on entry and is unlocked on exit.
+ */
+extern int discard_atom(txn_atom *atom);
+
+/* __FS_REISER4_DISCARD_H__ */
+#endif
+
+/* Make Linus happy.
+   Local variables:
+   c-indentation-style: "K&R"
+   mode-name: "LC"
+   c-basic-offset: 8
+   tab-width: 8
+   fill-column: 120
+   End:
+*/
diff --git a/fs/reiser4/init_super.c b/fs/reiser4/init_super.c
index 620a0f5..1ff8dad 100644
--- a/fs/reiser4/init_super.c
+++ b/fs/reiser4/init_super.c
@@ -494,6 +494,8 @@ int reiser4_init_super_data(struct super_block *super, char *opt_string)
 	PUSH_BIT_OPT("atomic_write", REISER4_ATOMIC_WRITE);
 	/* disable use of write barriers in the reiser4 log writer. */
 	PUSH_BIT_OPT("no_write_barrier", REISER4_NO_WRITE_BARRIER);
+	/* enable issuing of discard requests */
+	PUSH_BIT_OPT("discard", REISER4_DISCARD);
 
 	PUSH_OPT(p, opts,
 	{
diff --git a/fs/reiser4/plugin/space/bitmap.c b/fs/reiser4/plugin/space/bitmap.c
index 5bfa71b..03bc5e7 100644
--- a/fs/reiser4/plugin/space/bitmap.c
+++ b/fs/reiser4/plugin/space/bitmap.c
@@ -1263,12 +1263,16 @@ int reiser4_check_blocks_bitmap(const reiser4_block_nr * start,
 	bmap_off_t end_offset;
 	const bmap_off_t max_offset = bmap_bit_count(super->s_blocksize);
 
+	assert("intelfx-9", start != NULL);
+	assert("intelfx-10", ergo(len != NULL, *len > 0));
+
 	if (len != NULL) {
 		check_block_range(start, len);
 		end = *start + *len - 1;
 	} else {
 		/* end is used as temporary len here */
-		check_block_range(start, &(end = 1));
+		end = 1;
+		check_block_range(start, &end);
 		end = *start;
 	}
 
@@ -1283,7 +1287,7 @@ int reiser4_check_blocks_bitmap(const reiser4_block_nr * start,
 	++end_offset;
 
 	assert("intelfx-4", end_bmap >= bmap);
-	assert("intelfx-5", ergo(end_bmap == bmap, end_offset > offset));
+	assert("intelfx-5", ergo(end_bmap == bmap, end_offset >= offset));
 
 	for (; bmap < end_bmap; bmap++, offset = 0) {
 		if (!check_blocks_one_bitmap(bmap, offset, max_offset, desired)) {
@@ -1456,8 +1460,7 @@ int reiser4_pre_commit_hook_bitmap(void)
 		}
 	}
 
-	blocknr_set_iterator(atom, &atom->delete_set, apply_dset_to_commit_bmap,
-			     &blocks_freed, 0);
+	atom_dset_deferred_apply(atom, apply_dset_to_commit_bmap, &blocks_freed, 0);
 
 	blocks_freed -= atom->nr_blocks_allocated;
 
diff --git a/fs/reiser4/super.h b/fs/reiser4/super.h
index 0c73845..895c3f3 100644
--- a/fs/reiser4/super.h
+++ b/fs/reiser4/super.h
@@ -51,7 +51,9 @@ typedef enum {
 	/* enforce atomicity during write(2) */
 	REISER4_ATOMIC_WRITE = 6,
 	/* don't use write barriers in the log writer code. */
-	REISER4_NO_WRITE_BARRIER = 7
+	REISER4_NO_WRITE_BARRIER = 7,
+	/* enable issuing of discard requests */
+	REISER4_DISCARD = 8
 } reiser4_fs_flag;
 
 /*
diff --git a/fs/reiser4/txnmgr.c b/fs/reiser4/txnmgr.c
index 4950179..f27d1dc 100644
--- a/fs/reiser4/txnmgr.c
+++ b/fs/reiser4/txnmgr.c
@@ -233,6 +233,7 @@ year old --- define all technical terms used.
 #include "vfs_ops.h"
 #include "inode.h"
 #include "flush.h"
+#include "discard.h"
 
 #include <asm/atomic.h>
 #include <linux/types.h>
@@ -404,9 +405,10 @@ static void atom_init(txn_atom * atom)
 	INIT_LIST_HEAD(&atom->atom_link);
 	INIT_LIST_HEAD(&atom->fwaitfor_list);
 	INIT_LIST_HEAD(&atom->fwaiting_list);
-	blocknr_set_init(&atom->delete_set);
 	blocknr_set_init(&atom->wandered_map);
 
+	atom_dset_init(atom);
+
 	init_atom_fq_parts(atom);
 }
 
@@ -798,9 +800,10 @@ static void atom_free(txn_atom * atom)
 	       (atom->stage == ASTAGE_INVALID || atom->stage == ASTAGE_DONE));
 	atom->stage = ASTAGE_FREE;
 
-	blocknr_set_destroy(&atom->delete_set);
 	blocknr_set_destroy(&atom->wandered_map);
 
+	atom_dset_destroy(atom);
+
 	assert("jmacd-16", atom_isclean(atom));
 
 	spin_unlock_atom(atom);
@@ -1086,6 +1089,17 @@ static int commit_current_atom(long *nr_submitted, txn_atom ** atom)
 	if (ret < 0)
 		reiser4_panic("zam-597", "write log failed (%ld)\n", ret);
 
+	/* process and issue discard requests */
+	do {
+		spin_lock_atom(*atom);
+		ret = discard_atom(*atom);
+	} while (ret == -E_REPEAT);
+
+	if (ret) {
+		warning("intelfx-8", "discard atom failed (%ld)", ret);
+		ret = 0; /* the discard is optional, don't fail the commit */
+	}
+
 	/* The atom->ovrwr_nodes list is processed under commit mutex held
 	   because of bitmap nodes which are captured by special way in
 	   reiser4_pre_commit_hook_bitmap(), that way does not include
@@ -2938,9 +2952,11 @@ static void capture_fuse_into(txn_atom * small, txn_atom * large)
 	large->flags |= small->flags;
 
 	/* Merge blocknr sets. */
-	blocknr_set_merge(&small->delete_set, &large->delete_set);
 	blocknr_set_merge(&small->wandered_map, &large->wandered_map);
 
+	/* Merge delete sets. */
+	atom_dset_merge(small, large);
+
 	/* Merge allocated/deleted file counts */
 	large->nr_objects_deleted += small->nr_objects_deleted;
 	large->nr_objects_created += small->nr_objects_created;
@@ -3064,9 +3080,7 @@ reiser4_block_nr txnmgr_count_deleted_blocks(void)
 	list_for_each_entry(atom, &tmgr->atoms_list, atom_link) {
 		spin_lock_atom(atom);
 		if (atom_isopen(atom))
-			blocknr_set_iterator(
-				atom, &atom->delete_set,
-				count_deleted_blocks_actor, &result, 0);
+			atom_dset_deferred_apply(atom, count_deleted_blocks_actor, &result, 0);
 		spin_unlock_atom(atom);
 	}
 	spin_unlock_txnmgr(tmgr);
@@ -3074,6 +3088,105 @@ reiser4_block_nr txnmgr_count_deleted_blocks(void)
 	return result;
 }
 
+void atom_dset_init(txn_atom *atom)
+{
+	if (reiser4_is_set(reiser4_get_current_sb(), REISER4_DISCARD)) {
+		blocknr_list_init(&atom->discard.delete_set);
+		blocknr_list_init(&atom->discard.aux_delete_set);
+	} else {
+		blocknr_set_init(&atom->nodiscard.delete_set);
+	}
+}
+
+void atom_dset_destroy(txn_atom *atom)
+{
+	if (reiser4_is_set(reiser4_get_current_sb(), REISER4_DISCARD)) {
+		blocknr_list_destroy(&atom->discard.delete_set);
+		blocknr_list_destroy(&atom->discard.aux_delete_set);
+	} else {
+		blocknr_set_destroy(&atom->nodiscard.delete_set);
+	}
+}
+
+void atom_dset_merge(txn_atom *from, txn_atom *to)
+{
+	if (reiser4_is_set(reiser4_get_current_sb(), REISER4_DISCARD)) {
+		blocknr_list_merge(&from->discard.delete_set, &to->discard.delete_set);
+		blocknr_list_merge(&from->discard.aux_delete_set, &to->discard.aux_delete_set);
+	} else {
+		blocknr_set_merge(&from->nodiscard.delete_set, &to->nodiscard.delete_set);
+	}
+}
+
+int atom_dset_deferred_apply(txn_atom *atom,
+                             blocknr_set_actor_f actor,
+                             void *data,
+                             int delete)
+{
+	int ret;
+
+	if (reiser4_is_set(reiser4_get_current_sb(), REISER4_DISCARD)) {
+		ret = blocknr_list_iterator(atom,
+		                            &atom->discard.delete_set,
+		                            actor,
+		                            data,
+		                            delete);
+	} else {
+		ret = blocknr_set_iterator(atom,
+		                           &atom->nodiscard.delete_set,
+		                           actor,
+		                           data,
+		                           delete);
+	}
+
+	return ret;
+}
+
+int atom_dset_deferred_add_extent(txn_atom *atom,
+                                  void **new_entry,
+                                  const reiser4_block_nr *start,
+                                  const reiser4_block_nr *len)
+{
+	int ret;
+
+	if (reiser4_is_set(reiser4_get_current_sb(), REISER4_DISCARD)) {
+		ret = blocknr_list_add_extent(atom,
+		                              &atom->discard.delete_set,
+		                              (blocknr_list_entry**)new_entry,
+		                              start,
+		                              len);
+	} else {
+		ret = blocknr_set_add_extent(atom,
+		                             &atom->nodiscard.delete_set,
+		                             (blocknr_set_entry**)new_entry,
+		                             start,
+		                             len);
+	}
+
+	return ret;
+}
+
+int atom_dset_immediate_add_extent(txn_atom *atom,
+                                   void **new_entry,
+                                   const reiser4_block_nr *start,
+                                   const reiser4_block_nr *len)
+{
+	int ret;
+
+	if (reiser4_is_set(reiser4_get_current_sb(), REISER4_DISCARD)) {
+		ret = blocknr_list_add_extent(atom,
+		                              &atom->discard.aux_delete_set,
+		                              (blocknr_list_entry**)new_entry,
+		                              start,
+		                              len);
+	} else {
+		/* no-op */
+		ret = 0;
+	}
+
+	return ret;
+}
+
 /*
  * Local variables:
  * c-indentation-style: "K&R"
diff --git a/fs/reiser4/txnmgr.h b/fs/reiser4/txnmgr.h
index 18ca23d..02fc938 100644
--- a/fs/reiser4/txnmgr.h
+++ b/fs/reiser4/txnmgr.h
@@ -245,9 +245,26 @@ struct txn_atom {
 	/* Start time. */
 	unsigned long start_time;
 
-	/* The atom's delete set. It collects block numbers of the nodes
-	   which were deleted during the transaction. */
-	struct list_head delete_set;
+	/* The atom's delete sets.
+	   "nodiscard" holds a blocknr_set instance and is used when discard is disabled.
+	   "discard" holds blocknr_list instances and is used when discard is enabled. */
+	union {
+		struct {
+		/* The atom's delete set. It collects block numbers of the nodes
+		   which were deleted during the transaction. */
+			struct list_head delete_set;
+		} nodiscard;
+
+		struct {
+			/* The atom's delete set. It collects block numbers which were
+			   deallocated with BA_DEFER, i. e. of ordinary nodes. */
+			struct list_head delete_set;
+
+			/* The atom's auxiliary delete set. It collects block numbers
+			   which were deallocated without BA_DEFER, i. e. immediately. */
+			struct list_head aux_delete_set;
+		} discard;
+	};
 
 	/* The atom's wandered_block mapping. */
 	struct list_head wandered_map;
@@ -504,6 +521,27 @@ extern int blocknr_list_iterator(txn_atom *atom,
                                  void *data,
                                  int delete);
 
+/* These are wrappers for accessing and modifying atom's delete lists,
+   depending on whether discard is enabled or not.
+   If it is enabled, both deferred and immediate delete lists are maintained,
+   and (less memory-efficient) blocknr_lists are used for storage. Otherwise,
+   only the deferred delete list is maintained and a blocknr_set is used for
+   its storage. */
+extern void atom_dset_init(txn_atom *atom);
+extern void atom_dset_destroy(txn_atom *atom);
+extern void atom_dset_merge(txn_atom *from, txn_atom *to);
+extern int atom_dset_deferred_apply(txn_atom *atom,
+                                    blocknr_set_actor_f actor,
+                                    void *data,
+                                    int delete);
+extern int atom_dset_deferred_add_extent(txn_atom *atom,
+                                         void **new_entry,
+                                         const reiser4_block_nr *start,
+                                         const reiser4_block_nr *len);
+extern int atom_dset_immediate_add_extent(txn_atom *atom,
+                                          void **new_entry,
+                                          const reiser4_block_nr *start,
+                                          const reiser4_block_nr *len);
+
 /* flush code takes care about how to fuse flush queues */
 extern void flush_init_atom(txn_atom * atom);
 extern void flush_fuse_queues(txn_atom * large, txn_atom * small);
-- 
2.0.0


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* Re: [RFC] [PATCHv5 0/4] reiser4: discard support: initial implementation, refactored.
  2014-06-20 20:39 [RFC] [PATCHv5 0/4] reiser4: discard support: initial implementation, refactored Ivan Shapovalov
                   ` (3 preceding siblings ...)
  2014-06-20 20:39 ` [RFC] [PATCHv5 4/4] reiser4: discard support: initial implementation using linked lists Ivan Shapovalov
@ 2014-06-20 22:35 ` Ivan Shapovalov
  2014-06-21 11:20   ` Edward Shishkin
  4 siblings, 1 reply; 9+ messages in thread
From: Ivan Shapovalov @ 2014-06-20 22:35 UTC (permalink / raw)
  To: reiserfs-devel; +Cc: edward.shishkin

[-- Attachment #1: Type: text/plain, Size: 2337 bytes --]

On Saturday 21 June 2014 at 00:39:54, Ivan Shapovalov wrote:	
> v1: - initial implementation (patches 1, 2)
> 
> v2: - cleanup, fixes discovered in debug mode
>     - saner logging
>     - assertions
>     - enablement of discard through mount option
> 
> v3: - fixed the extent merge loop in discard_atom()
> 
> v4: - squashed fix-ups into the main patch (with exception of reiser4_debug())
>     - fixed bug in usage of division ops discovered while building on ARM
> 
> v5: - squashed mount option into the main patch
>     - refactor based on discussion (see commit msg)
>       - splitted off blocknr_list code
>       - replaced ->discard_set with ->delete_set and ->aux_delete_set
> 
> Ivan Shapovalov (4):
>   reiser4: make space_allocator's check_blocks() reusable.
>   reiser4: add an implementation of "block lists", splitted off the discard code.
>   reiser4: add reiser4_debug(): a conditional equivalent of reiser4_log().
>   reiser4: discard support: initial implementation using linked lists.
> 
>  fs/reiser4/Makefile                       |   2 +
>  fs/reiser4/block_alloc.c                  |  49 ++---
>  fs/reiser4/block_alloc.h                  |  14 +-
>  fs/reiser4/blocknrlist.c                  | 315 ++++++++++++++++++++++++++++++
>  fs/reiser4/debug.h                        |   4 +
>  fs/reiser4/dformat.h                      |   2 +
>  fs/reiser4/discard.c                      | 247 +++++++++++++++++++++++
>  fs/reiser4/discard.h                      |  31 +++
>  fs/reiser4/forward.h                      |   1 +
>  fs/reiser4/init_super.c                   |   2 +
>  fs/reiser4/plugin/space/bitmap.c          |  84 +++++---
>  fs/reiser4/plugin/space/bitmap.h          |   2 +-
>  fs/reiser4/plugin/space/space_allocator.h |   4 +-
>  fs/reiser4/super.h                        |   4 +-
>  fs/reiser4/txnmgr.c                       | 125 +++++++++++-
>  fs/reiser4/txnmgr.h                       |  63 +++++-
>  fs/reiser4/znode.c                        |   9 +-
>  17 files changed, 884 insertions(+), 74 deletions(-)
>  create mode 100644 fs/reiser4/blocknrlist.c
>  create mode 100644 fs/reiser4/discard.c
>  create mode 100644 fs/reiser4/discard.h

Also I would like if this code could be given a review. :)

Thanks,
-- 
Ivan Shapovalov / intelfx /

[-- Attachment #2: This is a digitally signed message part. --]
[-- Type: application/pgp-signature, Size: 213 bytes --]

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [RFC] [PATCHv5 0/4] reiser4: discard support: initial implementation, refactored.
  2014-06-20 22:35 ` [RFC] [PATCHv5 0/4] reiser4: discard support: initial implementation, refactored Ivan Shapovalov
@ 2014-06-21 11:20   ` Edward Shishkin
  2014-06-21 20:15     ` Ivan Shapovalov
  0 siblings, 1 reply; 9+ messages in thread
From: Edward Shishkin @ 2014-06-21 11:20 UTC (permalink / raw)
  To: Ivan Shapovalov; +Cc: reiserfs-devel

On 06/21/2014 12:35 AM, Ivan Shapovalov wrote:
> On Saturday 21 June 2014 at 00:39:54, Ivan Shapovalov wrote:	
>> v1: - initial implementation (patches 1, 2)
>>
>> v2: - cleanup, fixes discovered in debug mode
>>      - saner logging
>>      - assertions
>>      - enablement of discard through mount option
>>
>> v3: - fixed the extent merge loop in discard_atom()
>>
>> v4: - squashed fix-ups into the main patch (with exception of reiser4_debug())
>>      - fixed bug in usage of division ops discovered while building on ARM
>>
>> v5: - squashed mount option into the main patch
>>      - refactor based on discussion (see commit msg)
>>        - splitted off blocknr_list code
>>        - replaced ->discard_set with ->delete_set and ->aux_delete_set
>>
>> Ivan Shapovalov (4):
>>    reiser4: make space_allocator's check_blocks() reusable.
>>    reiser4: add an implementation of "block lists", splitted off the discard code.
>>    reiser4: add reiser4_debug(): a conditional equivalent of reiser4_log().
>>    reiser4: discard support: initial implementation using linked lists.
>>
>>   fs/reiser4/Makefile                       |   2 +
>>   fs/reiser4/block_alloc.c                  |  49 ++---
>>   fs/reiser4/block_alloc.h                  |  14 +-
>>   fs/reiser4/blocknrlist.c                  | 315 ++++++++++++++++++++++++++++++
>>   fs/reiser4/debug.h                        |   4 +
>>   fs/reiser4/dformat.h                      |   2 +
>>   fs/reiser4/discard.c                      | 247 +++++++++++++++++++++++
>>   fs/reiser4/discard.h                      |  31 +++
>>   fs/reiser4/forward.h                      |   1 +
>>   fs/reiser4/init_super.c                   |   2 +
>>   fs/reiser4/plugin/space/bitmap.c          |  84 +++++---
>>   fs/reiser4/plugin/space/bitmap.h          |   2 +-
>>   fs/reiser4/plugin/space/space_allocator.h |   4 +-
>>   fs/reiser4/super.h                        |   4 +-
>>   fs/reiser4/txnmgr.c                       | 125 +++++++++++-
>>   fs/reiser4/txnmgr.h                       |  63 +++++-
>>   fs/reiser4/znode.c                        |   9 +-
>>   17 files changed, 884 insertions(+), 74 deletions(-)
>>   create mode 100644 fs/reiser4/blocknrlist.c
>>   create mode 100644 fs/reiser4/discard.c
>>   create mode 100644 fs/reiser4/discard.h
> Also I would like if this code could be given a review. :)

Great! Looks nice to me, thanks!
There are 2 issues, though...

1) kmalloc/kfree of a huge number of 32-byte chunks (blocknr_list entries)
is suboptimal. There is a special low-level memory allocator for such
purposes. Take a look at how we initialize the so-called "slab caches" for
jnodes (_jnode_slab), atoms (_atom_slab), etc, and allocate memory from
them (kmem_cache_alloc()).

2) A lot of blocknr_list entries are allocated at flush time, when the
high-level allocator (txmod.c) makes "relocation decisions" (especially
when txmod=wa). The problem is that the flush (with the following commit)
is usually the file system's response to memory pressure notifications,
when additional memory allocation is not desirable.

I think that with (1) fixed we'll include the discard support (if
everything is OK in the next 1-2 weeks).

As to (2): that is a common problem of all Linux subsystems which need
memory to free memory. It is unresolvable; however, we can improve the
situation. It would be nice to implement a per-atom pool of memory (as a
list of kmalloc-ed buffers with "cursors") with an optional possibility
to pre-allocate 1-2 such buffers at atom initialization time. But this is
for the future...

I don't see other urgent improvements. Yes, overall scalability of
rb-trees is better, as we found; however, merging rb-trees is more
expensive, plus atom fusion is not a background process, so it can lead
to a performance drop. There are rb-trees with fingers, but I haven't
seen their implementation in C (it may not be so simple).

Thanks!
Edward.


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [RFC] [PATCHv5 0/4] reiser4: discard support: initial implementation, refactored.
  2014-06-21 11:20   ` Edward Shishkin
@ 2014-06-21 20:15     ` Ivan Shapovalov
  2014-06-22  0:17       ` Ivan Shapovalov
  0 siblings, 1 reply; 9+ messages in thread
From: Ivan Shapovalov @ 2014-06-21 20:15 UTC (permalink / raw)
  To: reiserfs-devel; +Cc: Edward Shishkin

[-- Attachment #1: Type: text/plain, Size: 2313 bytes --]

On Saturday 21 June 2014 at 13:20:18, Edward Shishkin wrote:	
> On 06/21/2014 12:35 AM, Ivan Shapovalov wrote:
> > [...]
> > Also I would like if this code could be given a review. :)
> 
> Great! Looks nice for me, thanks!
> There are 2 issues, though...
> 
> 1) kmalloc/kfree a huge number of 32-byte chunks (blocknr_list entries) is
> suboptimal. There is a special low-level memory allocator for such purposes.
> Take a look how we initialize so-called "slab cache" for jnodes 
> (_jnode_slab),
> atoms (_atom_slab), etc, and allocate memory for them (kmem_cache_alloc()).
> 
> 2) A lot of blocknr_list entries are allocated at flush time, when the 
> high-level
> allocator (txmod.c) makes "relocation decisions" (especially when txmod=wa).
> The problem is that the flush (with the following commit) usually is the 
> file system
> response to memory pressure notifications, when additional memory allocation
> is not desirable.
> 
> I think that with the fixed (1) we'll include the discard support (if 
> everything will
> be OK in the next 1-2 weeks).
> 
> As to (2): that is a common problem of all Linux subsystems which want 
> memory
> to free memory. It is unresolvable, however, we can improve the 
> situation. It
> would be nice to implement a per-atom pool of memory (as a list of 
> kmalloc-ed
> buffers with "cursors") with an optional possibility to pre-allocate 1-2 
> such buffers
> at atom initialization time. But this is for the future...
> 
> I don't see other urgent improvements. Yes, overall scalability of 
> rb-trees is better,
> as we found, however, merging rb-trees is more expensive, plus atom's fusion
> is not a background process, so it can lead to performance drop. There are
> rb-trees with fingers, however I haven't seen their implementation on C 
> language
> (it can be not so simple).
> 
> Thanks!
> Edward.
> 

Thanks for the review!

(1) surely seems trivial. Do we need something similar for blocknr_set as well?

For (2) I don't quite understand you. How such a pool should be organized?

Do you mean `blocknr_list_entry **pool` of size N * sizeof(void*) filled with
kmem_cache_alloc()'d pointers, or just `blocknr_list_entry *pool` of size
N * sizeof(blocknr_list_entry)?

-- 
Ivan Shapovalov / intelfx /

[-- Attachment #2: This is a digitally signed message part. --]
[-- Type: application/pgp-signature, Size: 213 bytes --]

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [RFC] [PATCHv5 0/4] reiser4: discard support: initial implementation, refactored.
  2014-06-21 20:15     ` Ivan Shapovalov
@ 2014-06-22  0:17       ` Ivan Shapovalov
  0 siblings, 0 replies; 9+ messages in thread
From: Ivan Shapovalov @ 2014-06-22  0:17 UTC (permalink / raw)
  To: reiserfs-devel; +Cc: Edward Shishkin

[-- Attachment #1: Type: text/plain, Size: 2647 bytes --]

On Sunday 22 June 2014 at 00:15:49, Ivan Shapovalov wrote:	
> On Saturday 21 June 2014 at 13:20:18, Edward Shishkin wrote:	
> > On 06/21/2014 12:35 AM, Ivan Shapovalov wrote:
> > > [...]
> > > Also I would like if this code could be given a review. :)
> > 
> > Great! Looks nice for me, thanks!
> > There are 2 issues, though...
> > 
> > 1) kmalloc/kfree a huge number of 32-byte chunks (blocknr_list entries) is
> > suboptimal. There is a special low-level memory allocator for such purposes.
> > Take a look how we initialize so-called "slab cache" for jnodes 
> > (_jnode_slab),
> > atoms (_atom_slab), etc, and allocate memory for them (kmem_cache_alloc()).
> > 
> > 2) A lot of blocknr_list entries are allocated at flush time, when the 
> > high-level
> > allocator (txmod.c) makes "relocation decisions" (especially when txmod=wa).
> > The problem is that the flush (with the following commit) usually is the 
> > file system
> > response to memory pressure notifications, when additional memory allocation
> > is not desirable.
> > 
> > I think that with the fixed (1) we'll include the discard support (if 
> > everything will
> > be OK in the next 1-2 weeks).
> > 
> > As to (2): that is a common problem of all Linux subsystems which need
> > memory in order to free memory. It cannot be fully resolved; however, we
> > can improve the situation. It would be nice to implement a per-atom memory
> > pool (as a list of kmalloc-ed buffers with "cursors") with an optional
> > possibility to pre-allocate 1-2 such buffers at atom initialization time.
> > But that is for the future...
> > 
> > I don't see other urgent improvements. Yes, as we found, the overall
> > scalability of rb-trees is better; however, merging rb-trees is more
> > expensive, and since atom fusion is not a background process, this can
> > lead to a performance drop. There are rb-trees with fingers, but I haven't
> > seen a C implementation of them (it may not be so simple).
> > 
> > Thanks!
> > Edward.
> > 
> 
> Thanks for the review!
> 
> (1) surely seems trivial. Do we need something similar for blocknr_set as well?

...I suppose yes, because there is at least one blocknr_set_entry per
txn_atom, while the latter is already allocated from a slab cache.

-- 
Ivan Shapovalov / intelfx /

> 
> For (2) I don't quite understand you. How such a pool should be organized?
> 
> Do you mean `blocknr_list_entry **pool` of size N * sizeof(void*) filled with
> kmem_cache_alloc()'d pointers, or just `blocknr_list_entry *pool` of size
> N * sizeof(blocknr_list_entry)?
> 
> 

[-- Attachment #2: This is a digitally signed message part. --]
[-- Type: application/pgp-signature, Size: 213 bytes --]


end of thread, other threads:[~2014-06-22  0:17 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-06-20 20:39 [RFC] [PATCHv5 0/4] reiser4: discard support: initial implementation, refactored Ivan Shapovalov
2014-06-20 20:39 ` [RFC] [PATCHv5 1/4] reiser4: make space_allocator's check_blocks() reusable Ivan Shapovalov
2014-06-20 20:39 ` [RFC] [PATCHv5 2/4] reiser4: add an implementation of "block lists", splitted off the discard code Ivan Shapovalov
2014-06-20 20:39 ` [RFC] [PATCHv5 3/4] reiser4: add reiser4_debug(): a conditional equivalent of reiser4_log() Ivan Shapovalov
2014-06-20 20:39 ` [RFC] [PATCHv5 4/4] reiser4: discard support: initial implementation using linked lists Ivan Shapovalov
2014-06-20 22:35 ` [RFC] [PATCHv5 0/4] reiser4: discard support: initial implementation, refactored Ivan Shapovalov
2014-06-21 11:20   ` Edward Shishkin
2014-06-21 20:15     ` Ivan Shapovalov
2014-06-22  0:17       ` Ivan Shapovalov
