* [PATCH 0/3] Bcache: version 4
@ 2010-05-01  0:12 Kent Overstreet
  2010-05-01 13:01 ` Valdis.Kletnieks
  2010-05-04 10:14 ` Andi Kleen
  0 siblings, 2 replies; 5+ messages in thread
From: Kent Overstreet @ 2010-05-01  0:12 UTC (permalink / raw)
  To: linux-kernel

I've got some documentation incorporated since the last posting. The
user documentation should be sufficient; the code could probably use
more but it's hard for me to say what, so I'll try and add whatever
people find unclear.

Most of the basic functionality is now there; the most visible thing is
it's now correctly saving all the metadata, so you can unload a cache
and then reload it, and everything will still be there. I plan on
having read/write in the next version; barring the unexpected, version 5
should be good enough for people to start playing with.

The performance issues I was seeing that I posted about in the last
version completely vanished when I tested it outside of kvm - there was
no visible overhead. I don't know what's going on with kvm, it must be
triggering a pathological corner case somewhere - performance varies
wildly for no good reason. Unfortunately, I don't have the hardware to
do any real performance testing, but from what I've seen so far it's
plenty fast.

The program to make a cache device is attached; the rest is split out more
or less by function. There are more comments along with the hooks patch.


#define _XOPEN_SOURCE 500

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>

static const char bcache_magic[] = {
	0xc6, 0x85, 0x73, 0xf6, 0x4e, 0x1a, 0x45, 0xca,
	0x82, 0x65, 0xf5, 0x7f, 0x48, 0xba, 0x6d, 0x81 };

struct cache_sb {
	uint8_t  magic[16];
	uint32_t version;
	uint16_t block_size;		/* sectors */
	uint16_t bucket_size;		/* sectors */
	uint32_t journal_start;		/* buckets */
	uint32_t first_bucket;		/* start of data */
	uint64_t nbuckets;		/* device size */
	uint64_t btree_root;
	uint16_t btree_level;
};

struct bucket_disk {
	uint16_t	priority;
	uint8_t		generation;
} __attribute__((packed));

struct btree_node_header {
	uint32_t	csum;
	uint32_t	nkeys;
	uint64_t	random;
};

char zero[4096];

int main(int argc, char **argv)
{
	int fd, random, i;
	struct stat statbuf;
	struct cache_sb sb;
	struct bucket_disk b;
	struct btree_node_header n = { .nkeys = 0, };

	if (argc < 2) {
		fprintf(stderr, "Please supply a device\n");
		return 1;
	}

	fd = open(argv[1], O_RDWR);
	if (fd < 0) {
		perror("Can't open dev");
		return 1;
	}

	random = open("/dev/urandom", O_RDONLY);
	if (random < 0) {
		perror("Can't open urandom");
		return 1;
	}

	if (fstat(fd, &statbuf)) {
		perror("stat error");
		return 1;
	}

	memset(&sb, 0, sizeof(sb));	/* don't write uninitialized padding */
	memcpy(sb.magic, bcache_magic, 16);
	sb.version = 0;
	sb.block_size = 8;
	sb.bucket_size = 32;
	sb.nbuckets = statbuf.st_size / (sb.bucket_size * 512);

	do
		sb.first_bucket = ((--sb.nbuckets * sizeof(struct bucket_disk))
				   + 4096 * 3) / (sb.bucket_size * 512) + 1;
	while ((sb.nbuckets + sb.first_bucket) * sb.bucket_size * 512
	       > statbuf.st_size);

	sb.journal_start = sb.first_bucket;

	sb.btree_root = sb.first_bucket * sb.bucket_size;
	sb.btree_level = 0;

	printf("block_size:		%u\n"
	       "bucket_size:		%u\n"
	       "journal_start:		%u\n"
	       "first_bucket:		%u\n"
	       "nbuckets:		%ju\n",
	       sb.block_size,
	       sb.bucket_size,
	       sb.journal_start,
	       sb.first_bucket,
	       (uintmax_t) sb.nbuckets);

	/* Zero out priorities */
	lseek(fd, 4096, SEEK_SET);
	for (i = 8; i < sb.first_bucket * sb.bucket_size; i++)
		if (write(fd, zero, 512) != 512)
			goto err;

	if (pwrite(fd, &sb, sizeof(sb), 4096) != sizeof(sb))
		goto err;

	b.priority = ~0;
	b.generation = 1;
	if (pwrite(fd, &b, sizeof(b), 4096 * 3) != sizeof(b))
		goto err;

	if (read(random, &n.random, 8) != 8)
		goto err;

	if (pwrite(fd, &n, sizeof(n), sb.btree_root * 512) != sizeof(n))
		goto err;

	return 0;
err:
	perror("write error");
	return 1;
}

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [PATCH 0/3] Bcache: version 4
  2010-05-01  0:12 [PATCH 0/3] Bcache: version 4 Kent Overstreet
@ 2010-05-01 13:01 ` Valdis.Kletnieks
  2010-05-01 18:43   ` Kent Overstreet
  2010-05-04 10:14 ` Andi Kleen
  1 sibling, 1 reply; 5+ messages in thread
From: Valdis.Kletnieks @ 2010-05-01 13:01 UTC (permalink / raw)
  To: Kent Overstreet; +Cc: linux-kernel


On Fri, 30 Apr 2010 16:12:13 -0800, Kent Overstreet said:

> Most of the basic functionality is now there; the most visible thing is
> it's now correctly saving all the metadata, so you can unload a cache
> and then reload it, and everything will still be there.

If you unload a cache and then reload it, what prevents it from serving
up now-stale data from an extent that was modified while the cache was
unloaded?

(Telling me "Get some caffeine, it's about halfway down in patch 2" or
"We'll add that in Version 6 of the patch" are both acceptable answers :)



* Re: [PATCH 0/3] Bcache: version 4
  2010-05-01 13:01 ` Valdis.Kletnieks
@ 2010-05-01 18:43   ` Kent Overstreet
  0 siblings, 0 replies; 5+ messages in thread
From: Kent Overstreet @ 2010-05-01 18:43 UTC (permalink / raw)
  To: Valdis.Kletnieks; +Cc: linux-kernel

On 05/01/2010 05:01 AM, Valdis.Kletnieks@vt.edu wrote:
> On Fri, 30 Apr 2010 16:12:13 -0800, Kent Overstreet said:
>
>> Most of the basic functionality is now there; the most visible thing is
>> it's now correctly saving all the metadata, so you can unload a cache
>> and then reload it, and everything will still be there.
>
> If you unload a cache and then reload it, what prevents it from serving
> up now-stale data from an extent that was modified while the cache was
> unloaded?
>
> (Telling me "Get some caffeine, it's about halfway down in patch 2" or
> "We'll add that in Version 6 of the patch" are both acceptable answers :)

The plan is to check whether any devices are open read/write on cache load 
and unload, and invalidate their cached data if so. Not implemented yet, 
though.


* Re: [PATCH 0/3] Bcache: version 4
  2010-05-01  0:12 [PATCH 0/3] Bcache: version 4 Kent Overstreet
  2010-05-01 13:01 ` Valdis.Kletnieks
@ 2010-05-04 10:14 ` Andi Kleen
  2010-05-04 23:47   ` Kent Overstreet
  1 sibling, 1 reply; 5+ messages in thread
From: Andi Kleen @ 2010-05-04 10:14 UTC (permalink / raw)
  To: Kent Overstreet; +Cc: linux-kernel

Kent Overstreet <kent.overstreet@gmail.com> writes:

> I've got some documentation incorporated since the last posting. The
> user documentation should be sufficient; the code could probably use
> more but it's hard for me to say what, so I'll try and add whatever
> people find unclear.

I read all of this email now and I still have no clue what exactly
a 'bcache' is and why anyone would want one (and if one needs
a large stick to handle it or not)  

Normally the 0/x series of a patch kit is supposed to contain that
information.

I know it's probably obvious to you, but it's not to most other readers.

Consider adding some introduction? 

-Andi

-- 
ak@linux.intel.com -- Speaking for myself only.


* Re: [PATCH 0/3] Bcache: version 4
  2010-05-04 10:14 ` Andi Kleen
@ 2010-05-04 23:47   ` Kent Overstreet
  0 siblings, 0 replies; 5+ messages in thread
From: Kent Overstreet @ 2010-05-04 23:47 UTC (permalink / raw)
  To: Andi Kleen; +Cc: linux-kernel

On 05/04/2010 02:14 AM, Andi Kleen wrote:
> I read all of this email now and I still have no clue what exactly
> a 'bcache' is and why anyone would want one (and if one needs
> a large stick to handle it or not)
>
> Normally the 0/x series of a patch kit is supposed to contain that
> information.
>
> I know it's probably obvious to you, but it's not to most other readers.
>
> Consider adding some introduction?

Ah, whoops. "Block cache" - it uses one block device to cache another; 
it's intended for SSDs. It's summarized decently in the documentation 
patch, which, if I'd thought harder, would've been the first email...
http://thread.gmane.org/gmane.linux.kernel/979977
Second patch also has relevant commentary:
http://thread.gmane.org/gmane.linux.kernel/979978
http://thread.gmane.org/gmane.linux.kernel/979979

Currently all the necessary functionality for read-only is close to 
done; I'm now doing stress testing. I'm hoping to have it Stable For Me 
and read/write done in a week or two, at which point it should be ready 
for people to start playing with.


