From: David Woodhouse <dwmw2@infradead.org>
To: Avery Pennarun <apenwarr@gmail.com>
Cc: "Jeffrey Hundstad" <jeffrey.hundstad@mnsu.edu>,
	"viresh kumar" <viresh.kumar@st.com>,
	"Felipe Contreras" <felipe.contreras@gmail.com>,
	"git@vger.kernel.org" <git@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"Justin P. Mattock" <justinmattock@gmail.com>,
	"Uwe Kleine-König" <u.kleine-koenig@pengutronix.de>,
	"Valeo de Vries" <valeo@valeo.co.cc>,
	"Linus Walleij" <linus.ml.walleij@gmail.com>,
	"Matti Aarnio" <matti.aarnio@zmailer.org>,
	mihai.dontu@gmail.com, richardcochran@gmail.com, "Gadiyar,
	Anand" <gadiyar@ti.com>
Subject: Re: Query: Patches break with Microsoft exchange server.
Date: Wed, 11 Aug 2010 17:30:34 +0100	[thread overview]
Message-ID: <1281544234.5107.25.camel@localhost> (raw)
In-Reply-To: <AANLkTi=9xVdfXJXpkNPUMahc7AsbxjVbZFSxeBrzvbmS@mail.gmail.com>

On Wed, 2010-08-11 at 12:18 -0400, Avery Pennarun wrote:
> 
> Out of curiosity, why fall back to one chunk at a time?  It seems to
> me that IMAP should be able to still support multiple outstanding
> requests in that case, but you'd just get errors on the latter chunks.
> 
> Is it just that there was no point optimizing the workaround case?

There wasn't a lot of point in optimising it.

The current logic, shown in the patch I referenced, is to keep fetching
new chunks while the stream position matches the end of the previous
chunk we attempted to fetch.
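
As a rough sketch (Python, with the chunk size and helper names made up,
not the actual patch), the sequential fallback amounts to:

```python
CHUNK = 4096  # hypothetical chunk size


def fetch_body(fetch_chunk):
    """Fetch a message one chunk at a time.

    fetch_chunk(offset, length) models an IMAP partial fetch,
    BODY[]<offset.length>, and returns at most `length` bytes.
    """
    buf = b""
    offset = 0
    while True:
        data = fetch_chunk(offset, CHUNK)
        buf += data
        offset += len(data)
        # A full chunk means the stream position still matches the end
        # of the chunk we just attempted, so there may be more data; a
        # short (or empty) chunk is the real end of the message,
        # whatever RFC822.SIZE claimed.
        if len(data) < CHUNK:
            return buf
```

The point being that the loop never needs to trust RFC822.SIZE at all:
the first short reply is the real end of the message.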

To handle multiple outstanding requests, especially if they can be
satisfied out-of-order, would have been more complex because the stream
position (in the 'really_fetched' variable) wouldn't necessarily match
anything interesting. We'd have to keep more state, and the whole thing
would get a lot more intrusive.

Also, for the common case where the server isn't broken and the mail
size happens not to fall on a chunk boundary, the current implementation
results in no extra fetch requests. Doing otherwise would either mean
extra fetch requests even for this common case, or would mean even more
complexity to 'catch up' by issuing additional fetch requests as soon as
we realise the server lied about RFC822.SIZE (which is when we receive
the last chunk, and it runs over the size we expected).
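
To put numbers on that (a toy model; the 4KiB chunk size is made up):
counting round trips for the sequential scheme shows the extra request
only appears when the message size lands exactly on a chunk boundary.

```python
import math

CHUNK = 4096  # hypothetical chunk size


def count_fetches(real_size):
    """Round trips used by the sequential scheme: request one chunk at
    a time and stop at the first short (or empty) reply."""
    fetches, offset = 0, 0
    while True:
        fetches += 1
        got = min(CHUNK, real_size - offset)
        offset += got
        if got < CHUNK:
            return fetches


# Size not on a chunk boundary: same count as the ideal ceil(size/CHUNK)
assert count_fetches(10000) == math.ceil(10000 / CHUNK) == 3
# Size exactly on a boundary: one extra, empty, fetch at the end
assert count_fetches(8192) == 8192 // CHUNK + 1 == 3
```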

It may be that there's a neat and simple way to handle all of the above,
and if so then patches would be welcome -- but personally, I just
couldn't be bothered to think too hard about it. There were more
pressing matters to attend to, like implementing QRESYNC support.

-- 
David Woodhouse                            Open Source Technology Centre
David.Woodhouse@intel.com                              Intel Corporation


