On Monday 20 January 2020 16:43:21 David Laight wrote:
> From: Pali Rohár
> > Sent: 20 January 2020 16:27
> ...
> > > Unfortunately there is neither a 1:1 mapping of all possible byte
> > > sequences to wchar_t (or unicode code points),
> >
> > I was talking about valid UTF-8 sequences (invalid, ill-formed ones
> > are out of the game and for sure would always cause problems).
>
> Except that they are always likely to happen.

As I wrote before, the Linux kernel does not allow such sequences, so
userspace gets an error when it tries to store garbage.

> I've been pissed off by programs crashing because they assume that
> an input string (eg an email) is UTF-8 but happens to contain a single
> 0xa3 byte in the otherwise 7-bit data.
>
> The standard ought to have defined a translation for such sequences
> and just a 'warning' from the function(s) that unexpected bytes were
> processed.

There is an informative part of the standard describing how to replace
the invalid part of a sequence with the Unicode code point U+FFFD. So
if you need to "process any byte sequence as UTF-8", there is a
standardized way to convert it into one exact sequence of Unicode code
points. This is what email programs should do, and the non-broken ones
are already doing it. (A minimal decoding sketch is at the end of this
mail.)

> > > nor a 1:1 mapping of all possible wchar_t values to UTF-8.
> >
> > This is not true. There is exactly one way to convert a sequence of
> > Unicode code points to UTF-8. UTF is the Unicode Transformation
> > Format and it has an exact definition of how Unicode is transformed.
>
> But a wchar_t can hold lots of values that aren't Unicode code points.
> Prior to the 2003 changes half of the 2^32 values could be converted.
> Afterwards only a small fraction.

In the kernel, wchar_t can hold only a subset of Unicode code points,
up to U+FFFF (2^16-1). Halves of surrogate pairs are not valid Unicode
code points, but as stated they are used in MS FAT. So anything which
can be put into the kernel's wchar_t is valid for FAT.

> > If you have a valid UTF-8 sequence then it describes one exact
> > sequence of Unicode code points. And if you have a sequence
> > (ordinals) of Unicode code points there is exactly one
> > representation of it in UTF-8.
> >
> > I would suggest you read the Unicode standard, section 2.5
> > Encoding Forms.
>
> That all assumes everyone is playing the correct game

And why should we not play the correct game? On input we have UTF and
internally we work with Unicode. Unicode code points do not leak from
the kernel, so we can play the correct game and assume that our code in
the kernel is correct (and if not, we can fix it). Plus, when
communicating with the outside world, we just check that the input data
are valid (which we already do for UTF-8 user input). So I do not see
any problem there. (A sketch of the one and only UTF-8 encoding is also
at the end of this mail.)

> > > Really both need to be defined - even for otherwise 'invalid'
> > > sequences.
> > >
> > > Even the 16-bit values above 0xd000 can appear on their own in
> > > windows filesystems (according to wikipedia).
> >
> > If you are talking about UTF-16 (which is _not_ 16-bit as you
> > wrote), look at my previous email:
>
> UTF-16 is a sequence of 16-bit values....

No, that is not true. UTF-16 is a sequence of either 16-bit or 32-bit
values, with further restrictions. UTF-16 is a variable-length encoding.
(Also sketched at the end.)

> It can contain 0xd000 to 0xffff (usually in pairs) but they aren't
> UTF-8 codepoints.
>
> 	David

-- 
Pali Rohár
pali.rohar@gmail.com
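
PS: To make the U+FFFD point concrete, here is a minimal user-space
sketch (my own code and function names, not the kernel's nls helpers)
which decodes a byte buffer as UTF-8 and substitutes U+FFFD for each
byte that cannot start a valid sequence. It uses a simple per-byte
replacement policy; the standard's recommended "maximal subpart"
practice is a slightly more refined variant of the same idea.

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

#define REPLACEMENT 0xFFFDu	/* U+FFFD REPLACEMENT CHARACTER */

/* Decode one code point starting at s[0]; return bytes consumed. */
static size_t decode_utf8(const unsigned char *s, size_t len, uint32_t *cp)
{
	if (s[0] < 0x80) {				/* 1 byte (ASCII) */
		*cp = s[0];
		return 1;
	}
	if ((s[0] & 0xE0) == 0xC0 && len >= 2 &&
	    (s[1] & 0xC0) == 0x80) {			/* 2 bytes */
		uint32_t v = ((s[0] & 0x1Fu) << 6) | (s[1] & 0x3Fu);
		if (v >= 0x80) {			/* reject overlong */
			*cp = v;
			return 2;
		}
	}
	if ((s[0] & 0xF0) == 0xE0 && len >= 3 &&
	    (s[1] & 0xC0) == 0x80 && (s[2] & 0xC0) == 0x80) { /* 3 bytes */
		uint32_t v = ((s[0] & 0x0Fu) << 12) |
			     ((s[1] & 0x3Fu) << 6) | (s[2] & 0x3Fu);
		if (v >= 0x800 && (v < 0xD800 || v > 0xDFFF)) {
			*cp = v;			/* no surrogates */
			return 3;
		}
	}
	if ((s[0] & 0xF8) == 0xF0 && len >= 4 &&
	    (s[1] & 0xC0) == 0x80 && (s[2] & 0xC0) == 0x80 &&
	    (s[3] & 0xC0) == 0x80) {			/* 4 bytes */
		uint32_t v = ((s[0] & 0x07u) << 18) | ((s[1] & 0x3Fu) << 12) |
			     ((s[2] & 0x3Fu) << 6) | (s[3] & 0x3Fu);
		if (v >= 0x10000 && v <= 0x10FFFF) {	/* reject overlong */
			*cp = v;
			return 4;
		}
	}
	*cp = REPLACEMENT;				/* invalid byte */
	return 1;
}

int main(void)
{
	/* "abc", then that stray 0xa3 byte in otherwise 7-bit data */
	const unsigned char in[] = { 'a', 'b', 'c', 0xa3, 'd' };
	size_t i = 0;

	while (i < sizeof(in)) {
		uint32_t cp;

		i += decode_utf8(in + i, sizeof(in) - i, &cp);
		printf("U+%04X\n", (unsigned)cp);
	}
	return 0;
}

This prints U+0061 U+0062 U+0063 U+FFFD U+0064, i.e. one exact code
point sequence even for input containing garbage.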
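And the inverse direction, for the "exactly one representation" point:
a sketch (same caveats, illustrative code only) which encodes a code
point to its single valid UTF-8 form. Because each length range is
disjoint, overlong encodings cannot be produced, and surrogate values
are rejected as the standard requires.

#include <stdio.h>
#include <stdint.h>

/* Return bytes written (1..4), or 0 if cp is not a valid code point. */
static int encode_utf8(uint32_t cp, unsigned char out[4])
{
	if (cp < 0x80) {			/* 1 byte */
		out[0] = (unsigned char)cp;
		return 1;
	}
	if (cp < 0x800) {			/* 2 bytes */
		out[0] = (unsigned char)(0xC0 | (cp >> 6));
		out[1] = (unsigned char)(0x80 | (cp & 0x3F));
		return 2;
	}
	if (cp < 0x10000) {			/* 3 bytes */
		if (cp >= 0xD800 && cp <= 0xDFFF)
			return 0;		/* surrogates excluded */
		out[0] = (unsigned char)(0xE0 | (cp >> 12));
		out[1] = (unsigned char)(0x80 | ((cp >> 6) & 0x3F));
		out[2] = (unsigned char)(0x80 | (cp & 0x3F));
		return 3;
	}
	if (cp <= 0x10FFFF) {			/* 4 bytes */
		out[0] = (unsigned char)(0xF0 | (cp >> 18));
		out[1] = (unsigned char)(0x80 | ((cp >> 12) & 0x3F));
		out[2] = (unsigned char)(0x80 | ((cp >> 6) & 0x3F));
		out[3] = (unsigned char)(0x80 | (cp & 0x3F));
		return 4;
	}
	return 0;
}

int main(void)
{
	unsigned char b[4];
	int i, n = encode_utf8(0x20AC, b);	/* U+20AC -> E2 82 AC */

	for (i = 0; i < n; i++)
		printf("%02X ", b[i]);
	printf("\n");
	return 0;
}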
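Finally, the variable-length nature of UTF-16: code points up to U+FFFF
take one 16-bit unit, anything above takes a surrogate pair, i.e. two
units forming one 32-bit value. Again just an illustrative sketch; note
that it rejects lone surrogate values as the standard requires, whereas
FAT on disk may contain them anyway, which is exactly my point about the
kernel's wchar_t above.

#include <stdio.h>
#include <stdint.h>

/* Return 16-bit units written (1 or 2), or 0 if cp is not encodable. */
static int encode_utf16(uint32_t cp, uint16_t out[2])
{
	if (cp >= 0xD800 && cp <= 0xDFFF)
		return 0;			/* lone surrogate value */
	if (cp <= 0xFFFF) {			/* BMP: one unit */
		out[0] = (uint16_t)cp;
		return 1;
	}
	if (cp <= 0x10FFFF) {			/* surrogate pair */
		cp -= 0x10000;
		out[0] = (uint16_t)(0xD800 | (cp >> 10));	/* high */
		out[1] = (uint16_t)(0xDC00 | (cp & 0x3FF));	/* low */
		return 2;
	}
	return 0;
}

int main(void)
{
	uint16_t u[2];
	int n;

	n = encode_utf16(0x20AC, u);	/* U+20AC: one 16-bit unit */
	printf("%d unit(s): %04X\n", n, u[0]);

	n = encode_utf16(0x1F600, u);	/* U+1F600: two units */
	printf("%d unit(s): %04X %04X\n", n, u[0], u[1]);
	return 0;
}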