I was planning on adding VISCII support as well, but Python's encode()
method apparently has no support for it, so I cannot generate the
proper statistics data with the current version of the script.
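For reference, here is a quick way to confirm the missing codec
(assuming a standard CPython install without third-party codec
packages):

```python
import codecs

# Probe whether the running Python ships a VISCII codec; current CPython
# versions do not, so this prints "unknown encoding: viscii".
try:
    codecs.lookup("viscii")
    print("VISCII codec available")
except LookupError as err:
    print(err)
```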
I disable only ISO-8859-15, which is identical to ISO-8859-1 for all
Spanish letters. Unfortunately, the illegal codepoints are the same
too. The difference would likely have to be made on symbols (like the
euro sign), but our current algorithm does nothing about symbols when
comparing charsets.
Text from https://es.wikipedia.org/wiki/España
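To make the overlap concrete, here is a small sketch (not part of the
test data) that prints the only byte values where the two charsets
disagree; none of them is a Spanish letter:

```python
# Compare the ISO-8859-1 and ISO-8859-15 decoding of every high byte.
# Both charsets define all byte values, so decoding never fails here.
for byte in range(0xA0, 0x100):
    latin1 = bytes([byte]).decode("iso-8859-1")
    latin9 = bytes([byte]).decode("iso-8859-15")
    if latin1 != latin9:
        print(f"0x{byte:02X}: ISO-8859-1 {latin1!r} vs ISO-8859-15 {latin9!r}")
```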
ISO-8859-2 and Windows-1250 are identical for all letters of the
Hungarian alphabet, so for most texts it is not an error to return
one charset or the other.
What could make the difference is, for instance, that Windows-1250 has
symbols (curly quotes, dashes, the euro sign…) where ISO-8859-2 only
has control characters.
Since control characters now have a negative impact on confidence,
texts with such symbols would tend towards a Windows-1250 decision.
The new test file has such quote symbols.
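As an illustration (a sketch, not taken from the test suite), here are
a few byte values where Windows-1250 carries typographic symbols while
ISO-8859-2 only has C1 control characters:

```python
import unicodedata

# Windows-1250 assigns these bytes to the euro sign, curly quotes and
# dashes; ISO-8859-2 decodes them to control characters (category 'Cc').
for byte in (0x80, 0x84, 0x93, 0x94, 0x96, 0x97):
    cp1250 = bytes([byte]).decode("windows-1250")
    latin2 = bytes([byte]).decode("iso-8859-2")
    print(f"0x{byte:02X}: cp1250={cp1250!r}  iso-8859-2 category={unicodedata.category(latin2)}")
```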
ISO-8859-11 is identical to TIS-620 except for the added non-breaking
space character.
As a consequence, our detection will always return TIS-620, except in
the exceptional case where a text contains a non-breaking space.
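A minimal check of that single difference, relying on Python's codec
tables (which treat 0xA0 as undefined in TIS-620):

```python
# Byte 0xA0 is the only position where the two charsets differ:
# ISO-8859-11 defines it as a non-breaking space, TIS-620 does not.
print(repr(b"\xa0".decode("iso8859-11")))   # '\xa0', the non-breaking space
try:
    b"\xa0".decode("tis-620")
except UnicodeDecodeError as err:
    print(err)                              # 0xA0 is undefined in TIS-620
```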
The previous technical text about the charsets themselves was not
relevant for identifying a language. In particular, the special
characters that differ between ISO-8859-1 and ISO-8859-15 were used on
their own, outside of any character sequence context. Therefore,
without language understanding, they could just as well have been the
ISO-8859-15 letters or the ISO-8859-1 symbols at the corresponding
codepoints.
Replacing with text from this Wikipedia page:
https://fr.wikipedia.org/wiki/Œuf_(cuisine)
This uses some of these same characters (in particular 'œ'), but within
contextual character sequences, which makes it relevant for our algorithm.
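For instance (a toy example, not part of the test file), the word 'œuf'
only forms a plausible letter sequence under the ISO-8859-15 reading of
byte 0xBD:

```python
# 0xBD is the letter 'œ' in ISO-8859-15 but the symbol '½' in ISO-8859-1.
word = b"\xbduf"                    # "œuf" encoded in ISO-8859-15
print(word.decode("iso-8859-15"))   # œuf  -> letters in sequence
print(word.decode("iso-8859-1"))    # ½uf  -> a symbol stuck to letters
```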
Mostly generated with a script from Wikipedia data (only the typical
positive ratio was slightly modified).
This is a first test before adding my generation script to the main tree.
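Since the script itself is not in the tree yet, here is only a rough
sketch of the kind of processing involved, with purely hypothetical
file names and interface (the real script may work quite differently):

```python
# Hypothetical sketch: count byte frequencies of Wikipedia text once it
# is encoded in the target charset, as raw material for statistics data.
# "wikipedia-extract.txt" and the charset choice are illustrative only.
from collections import Counter

def byte_frequencies(text: str, charset: str) -> Counter:
    data = text.encode(charset, errors="ignore")
    return Counter(data)

with open("wikipedia-extract.txt", encoding="utf-8") as f:
    freq = byte_frequencies(f.read(), "iso-8859-15")

for byte, count in freq.most_common(10):
    print(f"0x{byte:02X}: {count}")
```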
Some charsets are simply not supported (e.g. fr:iso-8859-1), some are
temporarily deactivated (e.g. hu:iso-8859-2), and some are wrongly
detected as closely related charsets.
These were broken (or inefficient) from the start, and there is no
need to pollute the `make test` output with them, which could make us
miss actual regressions when they occur. So let's hide them away for
now until we can improve the situation.
I realize that the language a text has been written in is very
important, since it completely changes the character distribution.
Our test files should take this into account, and we should create
several test files in different languages for encodings that are used
across various languages.
Taken from the French Wikipedia page about ISO-8859-1:
https://fr.wikipedia.org/wiki/ISO_8859-1
... and the Hungarian Wikipedia page about ISO-8859-2:
https://hu.wikipedia.org/wiki/ISO/IEC_8859-2
We don't have support for ISO-8859-1, and both of these files are
detected as "WINDOWS-1252" (which is acceptable for iso-8859-1.txt
since Windows-1252 is a superset of ISO-8859-1). ISO-8859-2 support is
disabled because the ISO-8859-1 file would otherwise be detected as
ISO-8859-2, which would be a clear error.
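The superset claim is easy to verify with Python codecs: Windows-1252
only reassigns the 0x80-0x9F control range and matches ISO-8859-1
everywhere else:

```python
# Outside the 0x80-0x9F range, every byte decodes to the same character
# under ISO-8859-1 and Windows-1252.
for byte in list(range(0x00, 0x80)) + list(range(0xA0, 0x100)):
    assert bytes([byte]).decode("iso-8859-1") == bytes([byte]).decode("windows-1252")
print("identical outside 0x80-0x9F")
```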
Contains text taken from the Korean Wikipedia page about EUC-KR:
https://ko.wikipedia.org/wiki/EUC-KR
I added it as a subtitle-like file because, as the original Mozilla
paper says: "The input text may contain extraneous noises which have no
relation to its encoding, e.g. HTML tags, non-native words".
Therefore I feel it is important to have test files that are a bit
noisy when possible, in order to test our algorithm's resistance to noise.