* RFC: Handling GSS rotated data
@ 2012-04-04 22:28 Kevin Coffman
  2012-04-10 21:56 ` J. Bruce Fields
  0 siblings, 1 reply; 10+ messages in thread
From: Kevin Coffman @ 2012-04-04 22:28 UTC (permalink / raw)
  To: NFS list; +Cc: Steve Dickson

[-- Attachment #1: Type: text/plain, Size: 1022 bytes --]

This bug points out a deficiency in the GSS code previously added to the kernel:

https://bugzilla.redhat.com/show_bug.cgi?id=796992

The spec (RFC 4121 section 4.2.5) says, "The receiver MUST be able to
interpret all possible rotation count values, including rotation
counts greater than the length of the token."  Note that an
implementation is never required to send rotated data.  However it is
required to be able to handle receiving rotated data.  Windows is the
only implementation that I am aware of that currently sends tokens
with rotated data.
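
For illustration only, here is roughly what the requirement amounts to
on a flat buffer -- a user-space sketch, not kernel code, since the real
implementation has to operate on an xdr_buf rather than on contiguous
memory:

#include <stdlib.h>
#include <string.h>
#include <stdint.h>

/*
 * Sketch: rotate 'len' bytes at 'data' left by 'rrc' positions,
 * reducing an oversized rotation count modulo the length first, as
 * RFC 4121 section 4.2.5 requires receivers to handle.
 */
static int rotate_left_flat(uint8_t *data, size_t len, uint32_t rrc)
{
	size_t shift = len ? rrc % len : 0;	/* rrc >= len must still work */
	uint8_t *tmp;

	if (shift == 0)
		return 0;
	tmp = malloc(shift);
	if (!tmp)
		return -1;
	memcpy(tmp, data, shift);			/* save the prefix */
	memmove(data, data + shift, len - shift);	/* slide the rest down */
	memcpy(data + len - shift, tmp, shift);		/* saved prefix goes on the end */
	free(tmp);
	return 0;
}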

Attached is a patch (with way too much debugging) to handle the
rotated data we have seen from Microsoft clients.  Admittedly, it does
not handle all cases, which is required to be fully compliant with the
spec.  I will not have the time to devote to making it fully
compliant.  I submit this patch as an RFC and for someone else to
complete!  Also note that I may have over-complicated things!!

I apologize for the attachment rather than putting it inline.

K.C.

[-- Attachment #2: gss-wrap-rotate.patch --]
[-- Type: application/octet-stream, Size: 7488 bytes --]

diff --git a/net/sunrpc/auth_gss/gss_krb5_wrap.c b/net/sunrpc/auth_gss/gss_krb5_wrap.c
index 2763e3e..b824067 100644
--- a/net/sunrpc/auth_gss/gss_krb5_wrap.c
+++ b/net/sunrpc/auth_gss/gss_krb5_wrap.c
@@ -384,7 +384,6 @@ gss_unwrap_kerberos_v1(struct krb5_ctx *kctx, int offset, struct xdr_buf *buf)
  * We cannot currently handle tokens with rotated data.  We need a
  * generalized routine to rotate the data in place.  It is anticipated
  * that we won't encounter rotated data in the general case.
- */
 static u32
 rotate_left(struct krb5_ctx *kctx, u32 offset, struct xdr_buf *buf, u16 rrc)
 {
@@ -397,6 +396,139 @@ rotate_left(struct krb5_ctx *kctx, u32 offset, struct xdr_buf *buf, u16 rrc)
 		"rrc %u, realrrc %u\n", __func__, rrc, realrrc);
 	return 1;
 }
+ */
+
+#include <linux/ctype.h>
+void print_xdrbuf(const char *msg, struct xdr_buf *x);
+void print_xdrbuf_data(const char *msg, struct xdr_buf *x, unsigned int offset);
+
+#define LOCAL_BUF_LEN 32
+
+static u32
+rotate_left(struct krb5_ctx *kctx, u32 offset, struct xdr_buf *buf, u16 rrc)
+{
+	unsigned int realhead, realpage, realtail;
+	unsigned int remainder;
+	unsigned int rot_len = buf->len - offset - 16;
+	unsigned int realrrc = rrc % rot_len;
+	struct kvec *head, *tail;
+	void *headstart;    /* start of data past rpc header */
+	u32 err;
+
+	printk("%s: rrc %u, buf->len %u, offset %u, rot_len %u, realrrc %u\n",
+		__func__, rrc, buf->len, offset, rot_len, realrrc);
+	print_xdrbuf("in rotate_left", buf);
+	print_xdrbuf_data("BEFORE: ", buf, offset);
+
+	if (realrrc == 0)
+		return 0;
+
+	/*
+	 * At the time this function is called, the only length
+	 * values we can absolutely trust are buf->len (the length
+	 * of data actually read off the network) and buf->buflen
+	 * (the total available buffer space).
+	 *
+	 * The head, tail, and page lengths are the available
+	 * buffer space for each, not how much data is actually
+	 * available in each.  We know that data is read to fill
+	 * up the head, then page data, and then tail; so we can
+	 * calculate the *real* lengths of each.
+	 */
+	realhead = realpage = realtail = 0;
+	remainder = buf->len;
+	head = &buf->head[0];
+	tail = &buf->tail[0];
+
+	if (remainder) {
+		realhead = min_t(unsigned int, buf->len, head->iov_len);
+		remainder -= realhead;
+	}
+	if (remainder) {
+		realpage = min_t(unsigned int, remainder, buf->page_len);
+		remainder -= realpage;
+	}
+	if (remainder) {
+		realtail = min_t(unsigned int, remainder, tail->iov_len);
+		remainder -= realtail;
+	}
+	BUG_ON(remainder);
+	printk("%s: realhead %u, realpage %u, realtail %u\n",
+		__func__, realhead, realpage, realtail);
+
+	headstart = head->iov_base + offset + 16;
+
+	if (realpage == 0 && realtail == 0) {
+		/*
+		 * All the data is in the head.  Therefore all
+		 * the data to be moved is in the head
+		 */
+		char localbuf[LOCAL_BUF_LEN];
+		unsigned int amtleft = realrrc;
+		unsigned int thismove;
+		printk("%s: We are lucky! (moving %u within head)\n",
+			__func__, realrrc);
+		while (amtleft != 0) {
+			thismove = min_t(unsigned int, amtleft, LOCAL_BUF_LEN);
+			memcpy(localbuf, headstart, thismove);
+			memmove(headstart, headstart + thismove,
+				realhead - offset - 16 - thismove);
+			memcpy(head->iov_base + realhead - thismove,
+				localbuf, thismove);
+			amtleft -= thismove;
+		}
+	} else if (realhead - offset - 16 < realrrc) {
+		/*
+		 * The part to be moved is split between the head and
+		 * page data or tail
+		 */
+		printk("%s: We are not lucky.  All the realrrc data (%u) "
+			" is not in the head!\n", __func__, realrrc);
+		return 1;
+	} else if (realtail != 0 && tail->iov_len - realtail >= realrrc) {
+		/*
+		 * We are lucky enough to simply move rrc bytes
+		 * from the head to the end of the tail
+		 */
+		printk("%s: We are lucky! (moving %u from head to tail)\n",
+			__func__, realrrc);
+		memcpy(tail->iov_base + realtail,
+		       head->iov_base + offset + 16,
+		       realrrc);
+		tail->iov_len = realtail + realrrc;
+		memmove(headstart, headstart + realrrc,
+			realhead - offset - 16 - realrrc);
+		head->iov_len -= realrrc;
+	} else if (realpage != 0 && realtail == 0) {
+		/*
+		 * There is page data and no existing tail
+		 */
+		printk("%s: We are lucky! (moving %u from head to pages)\n",
+			__func__, realrrc);
+		buf->page_len += realrrc;
+		err = write_bytes_to_xdr_buf(buf, realhead + realpage,
+					     headstart, realrrc);
+		if (err) {
+			printk("%s: We are not lucky getting err %d while "
+				"writing %u bytes to xdr_buf\n",
+				__func__, err, realrrc);
+			return 1;
+		}
+		memmove(headstart, headstart + realrrc,
+			realhead - offset - 16 - realrrc);
+		head->iov_len -= realrrc;
+	} else {
+		/*
+		 * None of the quick fixes works.  Do it the "hard way".
+		 */
+		printk("%s: We are not lucky with realrrc %u\n",
+			__func__, realrrc);
+		return 1;
+	}
+
+	print_xdrbuf_data("AFTER : ", buf, offset);
+	return 0;
+}
 
 static u32
 gss_wrap_kerberos_v2(struct krb5_ctx *kctx, u32 offset,
@@ -585,3 +717,74 @@ gss_unwrap_kerberos(struct gss_ctx *gctx, int offset, struct xdr_buf *buf)
 	}
 }
 
+void
+print_hexl(const char *title, u8 *p, u_int length, u_int offset)
+{
+	u_int i, j, jm;
+	u8 c, *cp;
+
+	printk("RPC:       %s: length %d: %p\n", title, length, p);
+	cp = (u8 *) p;
+
+	for (i = 0; i < length; i += 0x10) {
+	        printk("  %04x: ", (u_int)(i + offset));
+	        jm = length - i;
+	        jm = jm > 16 ? 16 : jm;
+
+	        for (j = 0; j < jm; j++) {
+	                if ((j % 2) == 1)
+	                        printk("%02x ", (u_int)cp[i+j]);
+	                else
+	                        printk("%02x", (u_int)cp[i+j]);
+	        }
+	        for (; j < 16; j++) {
+	                if ((j % 2) == 1)
+	                        printk("   ");
+	                else
+	                        printk("  ");
+	        }
+	        printk(" ");
+
+	        for (j = 0; j < jm; j++) {
+	                c = cp[i+j];
+	                c = isprint(c) ? c : '.';
+	                printk("%c", c);
+	        }
+	        printk("\n");
+	}
+	printk("\n");
+}
+EXPORT_SYMBOL(print_hexl);
+
+void print_xdrbuf(const char *msg, struct xdr_buf *x)
+{
+	printk("---------- xdr_buf at %p: %s ----------\n", x, msg);
+	printk("  head[0].iov_base %p, iov_len %zu\n",
+	        x->head[0].iov_base, x->head[0].iov_len);
+	printk("  tail[0].iov_base %p, iov_len %zu\n",
+	        x->tail[0].iov_base, x->tail[0].iov_len);
+	printk("  pages %p\n", x->pages);
+	printk("  page_base %u, page_len %u, flags %u\n",
+	        x->page_base, x->page_len, x->flags);
+	printk("  buflen %u, len %u\n", x->buflen, x->len);
+}
+EXPORT_SYMBOL(print_xdrbuf);
+
+//extern void print_hexl(const char *title, u8 *p, u_int length, u_int offset);
+void print_xdrbuf_data(const char *msg, struct xdr_buf *x, unsigned int offset)
+{
+	printk("========== xdr_buf %p offset %u: %s ==========\n",
+	        x, offset, msg);
+	if (offset < x->head[0].iov_len)
+	        print_hexl("head data", x->head[0].iov_base + offset,
+	                                x->head[0].iov_len - offset, 0);
+	if (x->page_len) {
+	        int plen = (x->page_len > 512) ? 512 : x->page_len;
+	        print_hexl("page data",
+	                page_address(x->pages[0]) + x->page_base, plen, 0);
+	}
+	if (x->tail[0].iov_base != NULL && x->tail[0].iov_len != 0)
+	        print_hexl("tail data", x->tail[0].iov_base,
+	                                x->tail[0].iov_len, 0);
+}
+EXPORT_SYMBOL(print_xdrbuf_data);


* Re: RFC: Handling GSS rotated data
  2012-04-04 22:28 RFC: Handling GSS rotated data Kevin Coffman
@ 2012-04-10 21:56 ` J. Bruce Fields
  2012-04-10 23:02   ` J. Bruce Fields
  0 siblings, 1 reply; 10+ messages in thread
From: J. Bruce Fields @ 2012-04-10 21:56 UTC (permalink / raw)
  To: Kevin Coffman; +Cc: NFS list, Steve Dickson

On Wed, Apr 04, 2012 at 06:28:47PM -0400, Kevin Coffman wrote:
> This bug points out a deficiency in the GSS code previously added to the kernel:
> 
> https://bugzilla.redhat.com/show_bug.cgi?id=796992
> 
> The spec (RFC 4121 section 4.2.5) says, "The receiver MUST be able to
> interpret all possible rotation count values, including rotation
> counts greater than the length of the token."  Note that an
> implementation is never required to send rotated data.  However it is
> required to be able to handle receiving rotated data.  Windows is the
> only implementation that I am aware of that currently sends tokens
> with rotated data.
> 
> Attached is a patch (with way too much debugging) to handle the
> rotated data we have seen from Microsoft clients.  Admittedly, it does
> not handle all cases, which is required to be fully compliant with the
> spec.  I will not have the time to devote to making it fully
> compliant.  I submit this patch as an RFC and for someone else to
> complete!  Also note that I may have over-complicated things!!

Thanks!

But I'm not going to agree to implementing another subset of the
cases--we already fell into that trap once, let's not do it again.

Even if it means we have to do something slow and stupid as the first
pass, I'd rather have something complete....

--b.

diff --git a/net/sunrpc/auth_gss/gss_krb5_wrap.c b/net/sunrpc/auth_gss/gss_krb5_wrap.c
index 2763e3e..b824067 100644
--- a/net/sunrpc/auth_gss/gss_krb5_wrap.c
+++ b/net/sunrpc/auth_gss/gss_krb5_wrap.c
@@ -384,7 +384,6 @@ gss_unwrap_kerberos_v1(struct krb5_ctx *kctx, int offset, struct xdr_buf *buf)
  * We cannot currently handle tokens with rotated data.  We need a
  * generalized routine to rotate the data in place.  It is anticipated
  * that we won't encounter rotated data in the general case.
- */
 static u32
 rotate_left(struct krb5_ctx *kctx, u32 offset, struct xdr_buf *buf, u16 rrc)
 {
@@ -397,6 +396,139 @@ rotate_left(struct krb5_ctx *kctx, u32 offset, struct xdr_buf *buf, u16 rrc)
 		"rrc %u, realrrc %u\n", __func__, rrc, realrrc);
 	return 1;
 }
+ */
+
+#include <linux/ctype.h>
+void print_xdrbuf(const char *msg, struct xdr_buf *x);
+void print_xdrbuf_data(const char *msg, struct xdr_buf *x, unsigned int offset);
+
+#define LOCAL_BUF_LEN 32
+
+static u32
+rotate_left(struct krb5_ctx *kctx, u32 offset, struct xdr_buf *buf, u16 rrc)
+{
+	unsigned int realhead, realpage, realtail;
+	unsigned int remainder;
+	unsigned int rot_len = buf->len - offset - 16;
+	unsigned int realrrc = rrc % rot_len;
+	struct kvec *head, *tail;
+	void *headstart;    /* start of data past rpc header */
+	u32 err;
+
+	printk("%s: rrc %u, buf->len %u, offset %u, rot_len %u, realrrc %u\n",
+		__func__, rrc, buf->len, offset, rot_len, realrrc);
+	print_xdrbuf("in rotate_left", buf);
+	print_xdrbuf_data("BEFORE: ", buf, offset);
+
+	if (realrrc == 0)
+		return 0;
+
+	/*
+	 * At the time this function is called, the only length
+	 * values we can absolutely trust are buf->len (the length
+	 * of data actually read off the network) and buf->buflen
+	 * (the total available buffer space).
+	 *
+	 * The head, tail, and page lengths are the available
+	 * buffer space for each, not how much data is actually
+	 * available in each.  We know that data is read to fill
+	 * up the head, then page data, and then tail; so we can
+	 * calculate the *real* lengths of each.
+	 */
+	realhead = realpage = realtail = 0;
+	remainder = buf->len;
+	head = &buf->head[0];
+	tail = &buf->tail[0];
+
+	if (remainder) {
+		realhead = min_t(unsigned int, buf->len, head->iov_len);
+		remainder -= realhead;
+	}
+	if (remainder) {
+		realpage = min_t(unsigned int, remainder, buf->page_len);
+		remainder -= realpage;
+	}
+	if (remainder) {
+		realtail = min_t(unsigned int, remainder, tail->iov_len);
+		remainder -= realtail;
+	}
+	BUG_ON(remainder);
+	printk("%s: realhead %u, realpage %u, realtail %u\n",
+		__func__, realhead, realpage, realtail);
+
+	headstart = head->iov_base + offset + 16;
+
+	if (realpage == 0 && realtail == 0) {
+		/*
+		 * All the data is in the head.  Therefore all
+		 * the data to be moved is in the head
+		 */
+		char localbuf[LOCAL_BUF_LEN];
+		unsigned int amtleft = realrrc;
+		unsigned int thismove;
+		printk("%s: We are lucky! (moving %u within head)\n",
+			__func__, realrrc);
+		while (amtleft != 0) {
+			thismove = min_t(unsigned int, amtleft, LOCAL_BUF_LEN);
+			memcpy(localbuf, headstart, thismove);
+			memmove(headstart, headstart + thismove,
+				realhead - offset - 16 - thismove);
+			memcpy(head->iov_base + realhead - thismove,
+				localbuf, thismove);
+			amtleft -= thismove;
+		}
+	} else if (realhead - offset - 16 < realrrc) {
+		/*
+		 * The part to be moved is split between the head and
+		 * page data or tail
+		 */
+		printk("%s: We are not lucky.  All the realrrc data (%u) "
+			" is not in the head!\n", __func__, realrrc);
+		return 1;
+	} else if (realtail != 0 && tail->iov_len - realtail >= realrrc) {
+		/*
+		 * We are lucky enough to simply move rrc bytes
+		 * from the head to the end of the tail
+		 */
+		printk("%s: We are lucky! (moving %u from head to tail)\n",
+			__func__, realrrc);
+		memcpy(tail->iov_base + realtail,
+		       head->iov_base + offset + 16,
+		       realrrc);
+		tail->iov_len = realtail + realrrc;
+		memmove(headstart, headstart + realrrc,
+			realhead - offset - 16 - realrrc);
+		head->iov_len -= realrrc;
+	} else if (realpage != 0 && realtail == 0) {
+		/*
+		 * There is page data and no existing tail
+		 */
+		printk("%s: We are lucky! (moving %u from head to pages)\n",
+			__func__, realrrc);
+		buf->page_len += realrrc;
+		err = write_bytes_to_xdr_buf(buf, realhead + realpage,
+					     headstart, realrrc);
+		if (err) {
+			printk("%s: We are not lucky getting err %d while "
+				"writing %u bytes to xdr_buf\n",
+				__func__, err, realrrc);
+			return 1;
+		}
+		memmove(headstart, headstart + realrrc,
+			realhead - offset - 16 - realrrc);
+		head->iov_len -= realrrc;
+	} else {
+		/*
+		 * None of the quick fixes works.  Do it the "hard way".
+		 */
+		printk("%s: We are not lucky with realrrc %u\n",
+			__func__, realrrc);
+		return 1;
+	}
+
+	print_xdrbuf_data("AFTER : ", buf, offset);
+	return 0;
+}
 
 static u32
 gss_wrap_kerberos_v2(struct krb5_ctx *kctx, u32 offset,
@@ -585,3 +717,74 @@ gss_unwrap_kerberos(struct gss_ctx *gctx, int offset, struct xdr_buf *buf)
 	}
 }
 
+void
+print_hexl(const char *title, u8 *p, u_int length, u_int offset)
+{
+	u_int i, j, jm;
+	u8 c, *cp;
+
+	printk("RPC:       %s: length %d: %p\n", title, length, p);
+	cp = (u8 *) p;
+
+	for (i = 0; i < length; i += 0x10) {
+	        printk("  %04x: ", (u_int)(i + offset));
+	        jm = length - i;
+	        jm = jm > 16 ? 16 : jm;
+
+	        for (j = 0; j < jm; j++) {
+	                if ((j % 2) == 1)
+	                        printk("%02x ", (u_int)cp[i+j]);
+	                else
+	                        printk("%02x", (u_int)cp[i+j]);
+	        }
+	        for (; j < 16; j++) {
+	                if ((j % 2) == 1)
+	                        printk("   ");
+	                else
+	                        printk("  ");
+	        }
+	        printk(" ");
+
+	        for (j = 0; j < jm; j++) {
+	                c = cp[i+j];
+	                c = isprint(c) ? c : '.';
+	                printk("%c", c);
+	        }
+	        printk("\n");
+	}
+	printk("\n");
+}
+EXPORT_SYMBOL(print_hexl);
+
+void print_xdrbuf(const char *msg, struct xdr_buf *x)
+{
+	printk("---------- xdr_buf at %p: %s ----------\n", x, msg);
+	printk("  head[0].iov_base %p, iov_len %zu\n",
+	        x->head[0].iov_base, x->head[0].iov_len);
+	printk("  tail[0].iov_base %p, iov_len %zu\n",
+	        x->tail[0].iov_base, x->tail[0].iov_len);
+	printk("  pages %p\n", x->pages);
+	printk("  page_base %u, page_len %u, flags %u\n",
+	        x->page_base, x->page_len, x->flags);
+	printk("  buflen %u, len %u\n", x->buflen, x->len);
+}
+EXPORT_SYMBOL(print_xdrbuf);
+
+//extern void print_hexl(const char *title, u8 *p, u_int length, u_int offset);
+void print_xdrbuf_data(const char *msg, struct xdr_buf *x, unsigned int offset)
+{
+	printk("========== xdr_buf %p offset %u: %s ==========\n",
+	        x, offset, msg);
+	if (offset < x->head[0].iov_len)
+	        print_hexl("head data", x->head[0].iov_base + offset,
+	                                x->head[0].iov_len - offset, 0);
+	if (x->page_len) {
+	        int plen = (x->page_len > 512) ? 512 : x->page_len;
+	        print_hexl("page data",
+	                page_address(x->pages[0]) + x->page_base, plen, 0);
+	}
+	if (x->tail[0].iov_base != NULL && x->tail[0].iov_len != 0)
+	        print_hexl("tail data", x->tail[0].iov_base,
+	                                x->tail[0].iov_len, 0);
+}
+EXPORT_SYMBOL(print_xdrbuf_data);


* Re: RFC: Handling GSS rotated data
  2012-04-10 21:56 ` J. Bruce Fields
@ 2012-04-10 23:02   ` J. Bruce Fields
  2012-04-11  0:56     ` Kevin Coffman
  0 siblings, 1 reply; 10+ messages in thread
From: J. Bruce Fields @ 2012-04-10 23:02 UTC (permalink / raw)
  To: Kevin Coffman; +Cc: NFS list, Steve Dickson

On Tue, Apr 10, 2012 at 05:56:33PM -0400, bfields wrote:
> On Wed, Apr 04, 2012 at 06:28:47PM -0400, Kevin Coffman wrote:
> > This bug points out a deficiency in the GSS code previously added to the kernel:
> > 
> > https://bugzilla.redhat.com/show_bug.cgi?id=796992
> > 
> > The spec (RFC 4121 section 4.2.5) says, "The receiver MUST be able to
> > interpret all possible rotation count values, including rotation
> > counts greater than the length of the token."  Note that an
> > implementation is never required to send rotated data.  However it is
> > required to be able to handle receiving rotated data.  Windows is the
> > only implementation that I am aware of that currently sends tokens
> > with rotated data.
> > 
> > Attached is a patch (with way too much debugging) to handle the
> > rotated data we have seen from Microsoft clients.  Admittedly, it does
> > not handle all cases, which is required to be fully compliant with the
> > spec.  I will not have the time to devote to making it fully
> > compliant.  I submit this patch as an RFC and for someone else to
> > complete!  Also note that I may have over-complicated things!!
> 
> Thanks!
> 
> But I'm not going to agree to implementing another subset of the
> cases--we already fell into that trap once, let's not do it again.
> 
> Even if it means we have to do something slow and stupid as the first
> pass, I'd rather have something complete....

By the way, did the cases you saw from Microsoft all have a small
rotation value?

So, one approach: xdr_buf_subsegment() can calculate a new xdr_buf that
has the "correct" head, tail, and page lengths (making
realhead/realpage/realtail unnecessary).

And read_bytes_from_xdr_buf()/write_bytes_to_xdr_buf() already know
how to do scatter/gather copies from/to an xdr_buf.

And then we can use those to repeatedly shift by up to LOCAL_BUF_LEN
bytes until we've rotated the right amount.  Assuming the typical case
is a small (less than LOCAL_BUF_LEN) rotation value, that will only
require one pass.  So maybe it's not too horrible.

So, something like:

	rrc = rrc % buf->len;
	xdr_buf_subsegment(buf, subbuf, offset, buf->len);
	while (shifted < rrc) {
		this_shift = min(rrc, LOCAL_BUF_LEN);
		shift_buf_a_little(subbuf, this_shift);
		shifted += this_shift;
	}

void shift_buf_a_little(buf, shift)
{
	char head[LOCAL_BUF_LEN];
	
	BUG_ON(shift > LOCAL_BUF_LEN);

	read_bytes_from_xdr_buf(buf, 0, head, shift);
	for (i=0; i + shift < buf->len; i += shift) {
		char tmp[LOCAL_BUF_LEN];
		this_len = min(shift, buf->len - i);
		read_bytes_from_xdr_buf(buf, i+shift, tmp, this_len);
		write_bytes_to_xdr_buf(buf, i, tmp, this_len);
	}
	write_bytes_to_xdr_buf(buf, buf->len - shift, head, shift);
}

?

Uh, but I'm not certain of that second function.
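
If it helps, the same idea can at least be sanity-checked in user space
on a flat array -- just a sketch, assuming read/write_bytes_* behave
like plain memcpy on contiguous memory (which the real scatter/gather
versions of course don't):

#include <assert.h>
#include <string.h>

#define LOCAL_BUF_LEN 32u

/* Flat-array stand-in for shift_buf_a_little(): rotate 'len' bytes
 * left by 'shift', where shift <= LOCAL_BUF_LEN. */
static void shift_flat_a_little(unsigned char *buf, unsigned int len,
				unsigned int shift)
{
	unsigned char head[LOCAL_BUF_LEN], tmp[LOCAL_BUF_LEN];
	unsigned int i, this_len;

	assert(shift <= LOCAL_BUF_LEN);
	memcpy(head, buf, shift);		/* save the first 'shift' bytes */
	for (i = 0; i + shift < len; i += this_len) {
		this_len = LOCAL_BUF_LEN;
		if (this_len > len - (i + shift))
			this_len = len - (i + shift);
		memcpy(tmp, buf + i + shift, this_len);
		memcpy(buf + i, tmp, this_len);	/* move a chunk down by 'shift' */
	}
	memcpy(buf + len - shift, head, shift);	/* saved bytes go on the end */
}

int main(void)
{
	unsigned char buf[100], want[100];
	unsigned int i, rrc = 28, shifted = 0;	/* arbitrary example rotation */

	for (i = 0; i < sizeof(buf); i++) {
		buf[i] = i;
		want[i] = (i + rrc) % sizeof(buf);	/* expected left rotation */
	}
	while (shifted < rrc) {
		unsigned int this_shift = rrc - shifted;

		if (this_shift > LOCAL_BUF_LEN)
			this_shift = LOCAL_BUF_LEN;
		shift_flat_a_little(buf, sizeof(buf), this_shift);
		shifted += this_shift;
	}
	assert(memcmp(buf, want, sizeof(buf)) == 0);
	return 0;
}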

--b.

> 
> --b.
> 
> diff --git a/net/sunrpc/auth_gss/gss_krb5_wrap.c b/net/sunrpc/auth_gss/gss_krb5_wrap.c
> index 2763e3e..b824067 100644
> --- a/net/sunrpc/auth_gss/gss_krb5_wrap.c
> +++ b/net/sunrpc/auth_gss/gss_krb5_wrap.c
> @@ -384,7 +384,6 @@ gss_unwrap_kerberos_v1(struct krb5_ctx *kctx, int offset, struct xdr_buf *buf)
>   * We cannot currently handle tokens with rotated data.  We need a
>   * generalized routine to rotate the data in place.  It is anticipated
>   * that we won't encounter rotated data in the general case.
> - */
>  static u32
>  rotate_left(struct krb5_ctx *kctx, u32 offset, struct xdr_buf *buf, u16 rrc)
>  {
> @@ -397,6 +396,139 @@ rotate_left(struct krb5_ctx *kctx, u32 offset, struct xdr_buf *buf, u16 rrc)
>  		"rrc %u, realrrc %u\n", __func__, rrc, realrrc);
>  	return 1;
>  }
> + */
> +
> +#include <linux/ctype.h>
> +void print_xdrbuf(const char *msg, struct xdr_buf *x);
> +void print_xdrbuf_data(const char *msg, struct xdr_buf *x, unsigned int offset);
> +
> +#define LOCAL_BUF_LEN 32
> +
> +static u32
> +rotate_left(struct krb5_ctx *kctx, u32 offset, struct xdr_buf *buf, u16 rrc)
> +{
> +	unsigned int realhead, realpage, realtail;
> +	unsigned int remainder;
> +	unsigned int rot_len = buf->len - offset - 16;
> +	unsigned int realrrc = rrc % rot_len;
> +	struct kvec *head, *tail;
> +	void *headstart;    /* start of data past rpc header */
> +	u32 err;
> +
> +	printk("%s: rrc %u, buf->len %u, offset %u, rot_len %u, realrrc %u\n",
> +		__func__, rrc, buf->len, offset, rot_len, realrrc);
> +	print_xdrbuf("in rotate_left", buf);
> +	print_xdrbuf_data("BEFORE: ", buf, offset);
> +
> +	if (realrrc == 0)
> +		return 0;
> +
> +	/*
> +	 * At the time this function is called, the only length
> +	 * values we can absolutely trust are buf->len (the length
> +	 * of data actually read off the network) and buf->buflen
> +	 * (the total available buffer space).
> +	 *
> +	 * The head, tail, and page lengths are the available
> +	 * buffer space for each, not how much data is actually
> +	 * available in each.  We know that data is read to fill
> +	 * up the head, then page data, and then tail; so we can
> +	 * calculate the *real* lengths of each.
> +	 */
> +	realhead = realpage = realtail = 0;
> +	remainder = buf->len;
> +	head = &buf->head[0];
> +	tail = &buf->tail[0];
> +
> +	if (remainder) {
> +		realhead = min_t(unsigned int, buf->len, head->iov_len);
> +		remainder -= realhead;
> +	}
> +	if (remainder) {
> +		realpage = min_t(unsigned int, remainder, buf->page_len);
> +		remainder -= realpage;
> +	}
> +	if (remainder) {
> +		realtail = min_t(unsigned int, remainder, tail->iov_len);
> +		remainder -= realtail;
> +	}
> +	BUG_ON(remainder);
> +	printk("%s: realhead %u, realpage %u, realtail %u\n",
> +		__func__, realhead, realpage, realtail);
> +
> +	headstart = head->iov_base + offset + 16;
> +
> +	if (realpage == 0 && realtail == 0) {
> +		/*
> +		 * All the data is in the head.  Therefore all
> +		 * the data to be moved is in the head
> +		 */
> +		char localbuf[LOCAL_BUF_LEN];
> +		unsigned int amtleft = realrrc;
> +		unsigned int thismove;
> +		printk("%s: We are lucky! (moving %u within head)\n",
> +			__func__, realrrc);
> +		while (amtleft != 0) {
> +			thismove = min_t(unsigned int, amtleft, LOCAL_BUF_LEN);
> +			memcpy(localbuf, headstart, thismove);
> +			memmove(headstart, headstart + thismove,
> +				realhead - offset - 16 - thismove);
> +			memcpy(head->iov_base + realhead - thismove,
> +				localbuf, thismove);
> +			amtleft -= thismove;
> +		}
> +	} else if (realhead - offset - 16 < realrrc) {
> +		/*
> +		 * The part to be moved is split between the head and
> +		 * page data or tail
> +		 */
> +		printk("%s: We are not lucky.  All the realrrc data (%u) "
> +			" is not in the head!\n", __func__, realrrc);
> +		return 1;
> +	} else if (realtail != 0 && tail->iov_len - realtail >= realrrc) {
> +		/*
> +		 * We are lucky enough to simply move rrc bytes
> +		 * from the head to the end of the tail
> +		 */
> +		printk("%s: We are lucky! (moving %u from head to tail)\n",
> +			__func__, realrrc);
> +		memcpy(tail->iov_base + realtail,
> +		       head->iov_base + offset + 16,
> +		       realrrc);
> +		tail->iov_len = realtail + realrrc;
> +		memmove(headstart, headstart + realrrc,
> +			realhead - offset - 16 - realrrc);
> +		head->iov_len -= realrrc;
> +	} else if (realpage != 0 && realtail == 0) {
> +		/*
> +		 * There is page data and no existing tail
> +		 */
> +		printk("%s: We are lucky! (moving %u from head to pages)\n",
> +			__func__, realrrc);
> +		buf->page_len += realrrc;
> +		err = write_bytes_to_xdr_buf(buf, realhead + realpage,
> +					     headstart, realrrc);
> +		if (err) {
> +			printk("%s: We are not lucky getting err %d while "
> +				"writing %u bytes to xdr_buf\n",
> +				__func__, err, realrrc);
> +			return 1;
> +		}
> +		memmove(headstart, headstart + realrrc,
> +			realhead - offset - 16 - realrrc);
> +		head->iov_len -= realrrc;
> +	} else {
> +		/*
> +		 * None of the quick fixes works.  Do it the "hard way".
> +		 */
> +		printk("%s: We are not lucky with realrrc %u\n",
> +			__func__, realrrc);
> +		return 1;
> +	}
> +
> +	print_xdrbuf_data("AFTER : ", buf, offset);
> +	return 0;
> +}
>  
>  static u32
>  gss_wrap_kerberos_v2(struct krb5_ctx *kctx, u32 offset,
> @@ -585,3 +717,74 @@ gss_unwrap_kerberos(struct gss_ctx *gctx, int offset, struct xdr_buf *buf)
>  	}
>  }
>  
> +void
> +print_hexl(const char *title, u8 *p, u_int length, u_int offset)
> +{
> +	u_int i, j, jm;
> +	u8 c, *cp;
> +
> +	printk("RPC:       %s: length %d: %p\n", title, length, p);
> +	cp = (u8 *) p;
> +
> +	for (i = 0; i < length; i += 0x10) {
> +	        printk("  %04x: ", (u_int)(i + offset));
> +	        jm = length - i;
> +	        jm = jm > 16 ? 16 : jm;
> +
> +	        for (j = 0; j < jm; j++) {
> +	                if ((j % 2) == 1)
> +	                        printk("%02x ", (u_int)cp[i+j]);
> +	                else
> +	                        printk("%02x", (u_int)cp[i+j]);
> +	        }
> +	        for (; j < 16; j++) {
> +	                if ((j % 2) == 1)
> +	                        printk("   ");
> +	                else
> +	                        printk("  ");
> +	        }
> +	        printk(" ");
> +
> +	        for (j = 0; j < jm; j++) {
> +	                c = cp[i+j];
> +	                c = isprint(c) ? c : '.';
> +	                printk("%c", c);
> +	        }
> +	        printk("\n");
> +	}
> +	printk("\n");
> +}
> +EXPORT_SYMBOL(print_hexl);
> +
> +void print_xdrbuf(const char *msg, struct xdr_buf *x)
> +{
> +	printk("---------- xdr_buf at %p: %s ----------\n", x, msg);
> +	printk("  head[0].iov_base %p, iov_len %zu\n",
> +	        x->head[0].iov_base, x->head[0].iov_len);
> +	printk("  tail[0].iov_base %p, iov_len %zu\n",
> +	        x->tail[0].iov_base, x->tail[0].iov_len);
> +	printk("  pages %p\n", x->pages);
> +	printk("  page_base %u, page_len %u, flags %u\n",
> +	        x->page_base, x->page_len, x->flags);
> +	printk("  buflen %u, len %u\n", x->buflen, x->len);
> +}
> +EXPORT_SYMBOL(print_xdrbuf);
> +
> +//extern void print_hexl(const char *title, u8 *p, u_int length, u_int offset);
> +void print_xdrbuf_data(const char *msg, struct xdr_buf *x, unsigned int offset)
> +{
> +	printk("========== xdr_buf %p offset %u: %s ==========\n",
> +	        x, offset, msg);
> +	if (offset < x->head[0].iov_len)
> +	        print_hexl("head data", x->head[0].iov_base + offset,
> +	                                x->head[0].iov_len - offset, 0);
> +	if (x->page_len) {
> +	        int plen = (x->page_len > 512) ? 512 : x->page_len;
> +	        print_hexl("page data",
> +	                page_address(x->pages[0]) + x->page_base, plen, 0);
> +	}
> +	if (x->tail[0].iov_base != NULL && x->tail[0].iov_len != 0)
> +	        print_hexl("tail data", x->tail[0].iov_base,
> +	                                x->tail[0].iov_len, 0);
> +}
> +EXPORT_SYMBOL(print_xdrbuf_data);


* Re: RFC: Handling GSS rotated data
  2012-04-10 23:02   ` J. Bruce Fields
@ 2012-04-11  0:56     ` Kevin Coffman
  2012-04-12  0:12       ` J. Bruce Fields
  0 siblings, 1 reply; 10+ messages in thread
From: Kevin Coffman @ 2012-04-11  0:56 UTC (permalink / raw)
  To: J. Bruce Fields; +Cc: NFS list, Steve Dickson

On Tue, Apr 10, 2012 at 7:02 PM, J. Bruce Fields <bfields@fieldses.org> wrote:
> On Tue, Apr 10, 2012 at 05:56:33PM -0400, bfields wrote:
>> On Wed, Apr 04, 2012 at 06:28:47PM -0400, Kevin Coffman wrote:
>> > This bug points out a deficiency in the GSS code previously added to the kernel:
>> >
>> > https://bugzilla.redhat.com/show_bug.cgi?id=796992
>> >
>> > The spec (RFC 4121 section 4.2.5) says, "The receiver MUST be able to
>> > interpret all possible rotation count values, including rotation
>> > counts greater than the length of the token."  Note that an
>> > implementation is never required to send rotated data.  However it is
>> > required to be able to handle receiving rotated data.  Windows is the
>> > only implementation that I am aware of that currently sends tokens
>> > with rotated data.
>> >
>> > Attached is a patch (with way too much debugging) to handle the
>> > rotated data we have seen from Microsoft clients.  Admittedly, it does
>> > not handle all cases, which is required to be fully compliant with the
>> > spec.  I will not have the time to devote to making it fully
>> > compliant.  I submit this patch as an RFC and for someone else to
>> > complete!  Also note that I may have over-complicated things!!
>>
>> Thanks!
>>
>> But I'm not going to agree to implementing another subset of the
>> cases--we already fell into that trap once, let's not do it again.
>>
>> Even if it means we have to do something slow and stupid as the first
>> pass, I'd rather have something complete....
>
> By the way, did the cases you saw from Microsoft all have a small
> rotation value?
>
> So, one approach: xdr_buf_subsegment() can calculate a new xdr_buf that
> has the "correct" head, tail, and page lengths (making
> realhead/realpage/realtail unnecessary).
>
> And read_bytes_from_xdr_buf()/write_bytes_to_xdr_buf() already know
> how to do scatter/gather copies from/to an xdr_buf.
>
> And then we can use those to repeatedly shift by up to LOCAL_BUF_LEN
> bytes until we've rotated the right amount.  Assuming the typical case
> is a small (less than LOCAL_BUF_LEN) rotation value, that will only
> require one pass.  So maybe it's not too horrible.
>
> So, something like:
>
>        rrc = rrc % buf->len;
>        xdr_buf_subsegment(buf, subbuf, offset, buf->len);
>        while (shifted < rrc) {
>                this_shift = min(rrc, LOCAL_BUF_LEN);
>                shift_buf_a_little(subbuf, this_shift);
>                shifted += this_shift;
>        }
>
> void shift_buf_a_little(buf, shift)
> {
>        char head[LOCAL_BUF_LEN];
>
>        BUG_ON(shift > LOCAL_BUF_LEN);
>
>        read_bytes_from_xdr_buf(buf, 0, head, shift);
>        for (i=0; i + shift < buf->len; i += shift) {
>                char tmp[LOCAL_BUF_LEN];
>                this_len = min(shift, buf->len - i);
>                read_bytes_from_xdr_buf(buf, i+shift, tmp, this_len);
>                write_bytes_to_xdr_buf(buf, i, tmp, this_len);
>        }
>        write_bytes_to_xdr_buf(buf, buf->len - shift, head, shift);
> }
>
> ?
>
> Uh, but I'm not certain of that second function.
>
> --b.

The rotation values we've seen from Microsoft clients/servers for AES
have been 28.  Your code definitely looks simpler!

K.C.


* Re: RFC: Handling GSS rotated data
  2012-04-11  0:56     ` Kevin Coffman
@ 2012-04-12  0:12       ` J. Bruce Fields
  2012-04-12  0:19         ` J. Bruce Fields
  2012-04-12  1:06         ` Jim Rees
  0 siblings, 2 replies; 10+ messages in thread
From: J. Bruce Fields @ 2012-04-12  0:12 UTC (permalink / raw)
  To: Kevin Coffman; +Cc: NFS list, Steve Dickson

On Tue, Apr 10, 2012 at 08:56:56PM -0400, Kevin Coffman wrote:
> The rotation values we've seen from Microsoft clients/servers for AES
> have been 28.  Your code definitely looks simpler!

So, here's something that actually compiles.  I'll need someone else to
test, though, and I won't be at all surprised if it needs some
fixing....

--b.

commit cf3f40cb1fdda74abf76614f03eb00f5d6f35b54
Author: J. Bruce Fields <bfields@redhat.com>
Date:   Wed Apr 11 20:08:45 2012 -0400

    rpc: handle rotated gss data for Windows interoperability
    
    The data in Kerberos gss tokens can be rotated.  But we were lazy and
    rejected any nonzero rotation value.  It wasn't necessary for the
    implementations we were testing against at the time.
    
    But it appears that Windows does use a nonzero value here.
    
    So, implement rotation to bring ourselves into compliance with the
    spec and to interoperate with Windows.
    
    Signed-off-by: J. Bruce Fields <bfields@redhat.com>

diff --git a/net/sunrpc/auth_gss/gss_krb5_wrap.c b/net/sunrpc/auth_gss/gss_krb5_wrap.c
index 38f388c..2070681 100644
--- a/net/sunrpc/auth_gss/gss_krb5_wrap.c
+++ b/net/sunrpc/auth_gss/gss_krb5_wrap.c
@@ -381,21 +381,47 @@ gss_unwrap_kerberos_v1(struct krb5_ctx *kctx, int offset, struct xdr_buf *buf)
 }
 
 /*
- * We cannot currently handle tokens with rotated data.  We need a
- * generalized routine to rotate the data in place.  It is anticipated
- * that we won't encounter rotated data in the general case.
+ * We can shift data by up to LOCAL_BUF_LEN bytes in a pass.  If we need
+ * to do more than that, we shift repeatedly.  Kevin Coffman reports
+ * seeing 28 bytes as the value used by Microsoft clients and servers
+ * with AES, so this constant is chosen to allow handling 28 in one pass
+ * without using two much stack space.
+ *
+ * If that proves to a problem perhaps we could use a more clever
+ * algorithm.
  */
-static u32
-rotate_left(struct krb5_ctx *kctx, u32 offset, struct xdr_buf *buf, u16 rrc)
+#define LOCAL_BUF_LEN 32u
+
+void shift_buf_a_little(struct xdr_buf *buf, unsigned int shift)
 {
-	unsigned int realrrc = rrc % (buf->len - offset - GSS_KRB5_TOK_HDR_LEN);
+	char head[LOCAL_BUF_LEN];
+	unsigned int this_len, i;
 
-	if (realrrc == 0)
-		return 0;
+	BUG_ON(shift > LOCAL_BUF_LEN);
 
-	dprintk("%s: cannot process token with rotated data: "
-		"rrc %u, realrrc %u\n", __func__, rrc, realrrc);
-	return 1;
+	read_bytes_from_xdr_buf(buf, 0, head, shift);
+	for (i=0; i + shift < buf->len; i += shift) {
+		char tmp[LOCAL_BUF_LEN];
+		this_len = min(shift, buf->len - i);
+		read_bytes_from_xdr_buf(buf, i+shift, tmp, this_len);
+		write_bytes_to_xdr_buf(buf, i, tmp, this_len);
+	}
+	write_bytes_to_xdr_buf(buf, buf->len - shift, head, shift);
+}
+
+void rotate_left(struct krb5_ctx *kctx, u32 offset, struct xdr_buf *buf, unsigned int rrc)
+{
+	struct xdr_buf subbuf;
+	int shifted = 0;
+	int this_shift;
+
+	rrc %= buf->len;
+	xdr_buf_subsegment(buf, &subbuf, offset, buf->len);
+	while (shifted < rrc) {
+		this_shift = min(rrc, LOCAL_BUF_LEN);
+		shift_buf_a_little(&subbuf, this_shift);
+		shifted += this_shift;
+	}
 }
 
 static u32
@@ -495,11 +521,8 @@ gss_unwrap_kerberos_v2(struct krb5_ctx *kctx, int offset, struct xdr_buf *buf)
 
 	seqnum = be64_to_cpup((__be64 *)(ptr + 8));
 
-	if (rrc != 0) {
-		err = rotate_left(kctx, offset, buf, rrc);
-		if (err)
-			return GSS_S_FAILURE;
-	}
+	if (rrc != 0)
+		rotate_left(kctx, offset, buf, rrc);
 
 	err = (*kctx->gk5e->decrypt_v2)(kctx, offset, buf,
 					&headskip, &tailskip);


* Re: RFC: Handling GSS rotated data
  2012-04-12  0:12       ` J. Bruce Fields
@ 2012-04-12  0:19         ` J. Bruce Fields
  2012-05-01 23:52           ` J. Bruce Fields
  2012-04-12  1:06         ` Jim Rees
  1 sibling, 1 reply; 10+ messages in thread
From: J. Bruce Fields @ 2012-04-12  0:19 UTC (permalink / raw)
  To: Kevin Coffman; +Cc: NFS list, Steve Dickson, aglo

On Wed, Apr 11, 2012 at 08:12:24PM -0400, J. Bruce Fields wrote:
> On Tue, Apr 10, 2012 at 08:56:56PM -0400, Kevin Coffman wrote:
> > The rotation values we've seen from Microsoft clients/servers for AES
> > have been 28.  Your code definitely looks simpler!
> 
> So, here's something that actually compiles.  I'll need someone else to
> test, though, and I won't be at all surprised if it needs some
> fixing....

(Adding Olga to cc.)

--b.

> 
> --b.
> 
> commit cf3f40cb1fdda74abf76614f03eb00f5d6f35b54
> Author: J. Bruce Fields <bfields@redhat.com>
> Date:   Wed Apr 11 20:08:45 2012 -0400
> 
>     rpc: handle rotated gss data for Windows interoperability
>     
>     The data in Kerberos gss tokens can be rotated.  But we were lazy and
>     rejected any nonzero rotation value.  It wasn't necessary for the
>     implementations we were testing against at the time.
>     
>     But it appears that Windows does use a nonzero value here.
>     
>     So, implement rotation to bring ourselves into compliance with the
>     spec and to interoperate with Windows.
>     
>     Signed-off-by: J. Bruce Fields <bfields@redhat.com>
> 
> diff --git a/net/sunrpc/auth_gss/gss_krb5_wrap.c b/net/sunrpc/auth_gss/gss_krb5_wrap.c
> index 38f388c..2070681 100644
> --- a/net/sunrpc/auth_gss/gss_krb5_wrap.c
> +++ b/net/sunrpc/auth_gss/gss_krb5_wrap.c
> @@ -381,21 +381,47 @@ gss_unwrap_kerberos_v1(struct krb5_ctx *kctx, int offset, struct xdr_buf *buf)
>  }
>  
>  /*
> - * We cannot currently handle tokens with rotated data.  We need a
> - * generalized routine to rotate the data in place.  It is anticipated
> - * that we won't encounter rotated data in the general case.
> + * We can shift data by up to LOCAL_BUF_LEN bytes in a pass.  If we need
> + * to do more than that, we shift repeatedly.  Kevin Coffman reports
> + * seeing 28 bytes as the value used by Microsoft clients and servers
> + * with AES, so this constant is chosen to allow handling 28 in one pass
> + * without using two much stack space.
> + *
> + * If that proves to a problem perhaps we could use a more clever
> + * algorithm.
>   */
> -static u32
> -rotate_left(struct krb5_ctx *kctx, u32 offset, struct xdr_buf *buf, u16 rrc)
> +#define LOCAL_BUF_LEN 32u
> +
> +void shift_buf_a_little(struct xdr_buf *buf, unsigned int shift)
>  {
> -	unsigned int realrrc = rrc % (buf->len - offset - GSS_KRB5_TOK_HDR_LEN);
> +	char head[LOCAL_BUF_LEN];
> +	unsigned int this_len, i;
>  
> -	if (realrrc == 0)
> -		return 0;
> +	BUG_ON(shift > LOCAL_BUF_LEN);
>  
> -	dprintk("%s: cannot process token with rotated data: "
> -		"rrc %u, realrrc %u\n", __func__, rrc, realrrc);
> -	return 1;
> +	read_bytes_from_xdr_buf(buf, 0, head, shift);
> +	for (i=0; i + shift < buf->len; i += shift) {
> +		char tmp[LOCAL_BUF_LEN];
> +		this_len = min(shift, buf->len - i);
> +		read_bytes_from_xdr_buf(buf, i+shift, tmp, this_len);
> +		write_bytes_to_xdr_buf(buf, i, tmp, this_len);
> +	}
> +	write_bytes_to_xdr_buf(buf, buf->len - shift, head, shift);
> +}
> +
> +void rotate_left(struct krb5_ctx *kctx, u32 offset, struct xdr_buf *buf, unsigned int rrc)
> +{
> +	struct xdr_buf subbuf;
> +	int shifted = 0;
> +	int this_shift;
> +
> +	rrc %= buf->len;
> +	xdr_buf_subsegment(buf, &subbuf, offset, buf->len);
> +	while (shifted < rrc) {
> +		this_shift = min(rrc, LOCAL_BUF_LEN);
> +		shift_buf_a_little(&subbuf, this_shift);
> +		shifted += this_shift;
> +	}
>  }
>  
>  static u32
> @@ -495,11 +521,8 @@ gss_unwrap_kerberos_v2(struct krb5_ctx *kctx, int offset, struct xdr_buf *buf)
>  
>  	seqnum = be64_to_cpup((__be64 *)(ptr + 8));
>  
> -	if (rrc != 0) {
> -		err = rotate_left(kctx, offset, buf, rrc);
> -		if (err)
> -			return GSS_S_FAILURE;
> -	}
> +	if (rrc != 0)
> +		rotate_left(kctx, offset, buf, rrc);
>  
>  	err = (*kctx->gk5e->decrypt_v2)(kctx, offset, buf,
>  					&headskip, &tailskip);


* Re: RFC: Handling GSS rotated data
  2012-04-12  0:12       ` J. Bruce Fields
  2012-04-12  0:19         ` J. Bruce Fields
@ 2012-04-12  1:06         ` Jim Rees
  2012-04-12 16:46           ` J. Bruce Fields
  1 sibling, 1 reply; 10+ messages in thread
From: Jim Rees @ 2012-04-12  1:06 UTC (permalink / raw)
  To: J. Bruce Fields; +Cc: Kevin Coffman, NFS list, Steve Dickson

J. Bruce Fields wrote:

  + * without using two much stack space.

"two" should be "too".

Sorry to violate the nfs list convention of quoting the entire patch in my
reply but it seems unnecessary in this case.


* Re: RFC: Handling GSS rotated data
  2012-04-12  1:06         ` Jim Rees
@ 2012-04-12 16:46           ` J. Bruce Fields
  0 siblings, 0 replies; 10+ messages in thread
From: J. Bruce Fields @ 2012-04-12 16:46 UTC (permalink / raw)
  To: Jim Rees; +Cc: Kevin Coffman, NFS list, Steve Dickson

On Wed, Apr 11, 2012 at 09:06:56PM -0400, Jim Rees wrote:
> J. Bruce Fields wrote:
> 
>   + * without using two much stack space.
> 
> "two" should be "too".
> 
> Sorry to violate the nfs list convention of quoting the entire patch in my
> reply but it seems unnecessary in this case.

We'll refer your case to the local netiquette committee for a judgement.

--b.


* Re: RFC: Handling GSS rotated data
  2012-04-12  0:19         ` J. Bruce Fields
@ 2012-05-01 23:52           ` J. Bruce Fields
  2012-05-02 22:08             ` J. Bruce Fields
  0 siblings, 1 reply; 10+ messages in thread
From: J. Bruce Fields @ 2012-05-01 23:52 UTC (permalink / raw)
  To: Kevin Coffman; +Cc: NFS list, Steve Dickson, aglo

On Wed, Apr 11, 2012 at 08:19:22PM -0400, J. Bruce Fields wrote:
> On Wed, Apr 11, 2012 at 08:12:24PM -0400, J. Bruce Fields wrote:
> > On Tue, Apr 10, 2012 at 08:56:56PM -0400, Kevin Coffman wrote:
> > > The rotation values we've seen from Microsoft clients/servers for AES
> > > have been 28.  Your code definitely looks simpler!
> > 
> > So, here's something that actually compiles.  I'll need someone else to
> > test, though, and I won't be at all surprised if it needs some
> > fixing....

By the way, handing it off to Olga for testing this afternoon I gave it
some more review and found some bugs; this is the version she's testing.

--b.

commit 52980de302070f9c182d18a803a51196617c509f
Author: J. Bruce Fields <bfields@redhat.com>
Date:   Wed Apr 11 20:08:45 2012 -0400

    rpc: handle rotated gss data for Windows interoperability
    
    The data in Kerberos gss tokens can be rotated.  But we were lazy and
    rejected any nonzero rotation value.  It wasn't necessary for the
    implementations we were testing against at the time.
    
    But it appears that Windows does use a nonzero value here.
    
    So, implement rotation to bring ourselves into compliance with the spec
    and to interoperate with Windows.
    
    Signed-off-by: J. Bruce Fields <bfields@redhat.com>

diff --git a/net/sunrpc/auth_gss/gss_krb5_wrap.c b/net/sunrpc/auth_gss/gss_krb5_wrap.c
index 38f388c..107c452 100644
--- a/net/sunrpc/auth_gss/gss_krb5_wrap.c
+++ b/net/sunrpc/auth_gss/gss_krb5_wrap.c
@@ -381,21 +381,53 @@ gss_unwrap_kerberos_v1(struct krb5_ctx *kctx, int offset, struct xdr_buf *buf)
 }
 
 /*
- * We cannot currently handle tokens with rotated data.  We need a
- * generalized routine to rotate the data in place.  It is anticipated
- * that we won't encounter rotated data in the general case.
+ * We can shift data by up to LOCAL_BUF_LEN bytes in a pass.  If we need
+ * to do more than that, we shift repeatedly.  Kevin Coffman reports
+ * seeing 28 bytes as the value used by Microsoft clients and servers
+ * with AES, so this constant is chosen to allow handling 28 in one pass
+ * without using too much stack space.
+ *
+ * If that proves to a problem perhaps we could use a more clever
+ * algorithm.
  */
-static u32
-rotate_left(struct krb5_ctx *kctx, u32 offset, struct xdr_buf *buf, u16 rrc)
+#define LOCAL_BUF_LEN 32u
+
+static void rotate_buf_a_little(struct xdr_buf *buf, unsigned int shift)
 {
-	unsigned int realrrc = rrc % (buf->len - offset - GSS_KRB5_TOK_HDR_LEN);
+	char head[LOCAL_BUF_LEN];
+	char tmp[LOCAL_BUF_LEN];
+	unsigned int this_len, i;
+
+	BUG_ON(shift > LOCAL_BUF_LEN);
 
-	if (realrrc == 0)
-		return 0;
+	read_bytes_from_xdr_buf(buf, 0, head, shift);
+	for (i = 0; i + shift < buf->len; i += LOCAL_BUF_LEN) {
+		this_len = min(LOCAL_BUF_LEN, buf->len - (i + shift));
+		read_bytes_from_xdr_buf(buf, i+shift, tmp, this_len);
+		write_bytes_to_xdr_buf(buf, i, tmp, this_len);
+	}
+	write_bytes_to_xdr_buf(buf, buf->len - shift, head, shift);
+}
 
-	dprintk("%s: cannot process token with rotated data: "
-		"rrc %u, realrrc %u\n", __func__, rrc, realrrc);
-	return 1;
+static void _rotate_left(struct xdr_buf *buf, unsigned int shift)
+{
+	int shifted = 0;
+	int this_shift;
+
+	shift %= buf->len;
+	while (shifted < shift) {
+		this_shift = min(shift - shifted, LOCAL_BUF_LEN);
+		rotate_buf_a_little(buf, this_shift);
+		shifted += this_shift;
+	}
+}
+
+static void rotate_left(u32 base, struct xdr_buf *buf, unsigned int shift)
+{
+	struct xdr_buf subbuf;
+
+	xdr_buf_subsegment(buf, &subbuf, base, buf->len - base);
+	_rotate_left(&subbuf, shift);
 }
 
 static u32
@@ -495,11 +527,8 @@ gss_unwrap_kerberos_v2(struct krb5_ctx *kctx, int offset, struct xdr_buf *buf)
 
 	seqnum = be64_to_cpup((__be64 *)(ptr + 8));
 
-	if (rrc != 0) {
-		err = rotate_left(kctx, offset, buf, rrc);
-		if (err)
-			return GSS_S_FAILURE;
-	}
+	if (rrc != 0)
+		rotate_left(offset + 16, buf, rrc);
 
 	err = (*kctx->gk5e->decrypt_v2)(kctx, offset, buf,
 					&headskip, &tailskip);


* Re: RFC: Handling GSS rotated data
  2012-05-01 23:52           ` J. Bruce Fields
@ 2012-05-02 22:08             ` J. Bruce Fields
  0 siblings, 0 replies; 10+ messages in thread
From: J. Bruce Fields @ 2012-05-02 22:08 UTC (permalink / raw)
  To: Kevin Coffman; +Cc: NFS list, Steve Dickson, aglo

On Tue, May 01, 2012 at 07:52:10PM -0400, J. Bruce Fields wrote:
> On Wed, Apr 11, 2012 at 08:19:22PM -0400, J. Bruce Fields wrote:
> > On Wed, Apr 11, 2012 at 08:12:24PM -0400, J. Bruce Fields wrote:
> > > On Tue, Apr 10, 2012 at 08:56:56PM -0400, Kevin Coffman wrote:
> > > > The rotation values we've seen from Microsoft clients/servers for AES
> > > > have been 28.  Your code definitely looks simpler!
> > > 
> > > So, here's something that actually compiles.  I'll need someone else to
> > > test, though, and I won't be at all surprised if it needs some
> > > fixing....
> 
> By the way, handing it off to Olga for testing this afternoon I gave it
> some more review and found some bugs; this is the version she's testing.

Looks like this is passing tests, so I'm committing for 3.5.  Thanks to
Olga for testing and to Kevin for the first implementation.

--b.

