109 Commits

Author SHA1 Message Date
Jehan
bdd71d88f8 script: improve create-table.py a bit and regenerate the Georgian charsets.
- Avoid trailing whitespace.
- Print which tool and version were used for the generation (to help
  with future debugging in case of discrepancies between versions or
  implementations). See the sketch below.
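
A minimal sketch of emitting such a provenance header (illustrative; this is not the actual create-table.py output format):

```python
# Illustrative only: record which tools produced a generated table.
import platform
import subprocess

# First line of `iconv --version` (GNU iconv reports its version there).
iconv_version = subprocess.run(
    ['iconv', '--version'], capture_output=True, text=True
).stdout.splitlines()[0]

print('# Generated with Python {} and {}'.format(
    platform.python_version(), iconv_version))
```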
2022-12-20 14:38:51 +01:00
Jehan
7875272a8c script, src, test: new Georgian support.
For charsets UTF-8, GEORGIAN-ACADEMY and GEORGIAN-PS. The 2 GEORGIAN-*
sets were generated thanks to the new create-table.py script.

Test text comes from the 'ვირზაზუნა' page of Wikipedia in Georgian.
2022-12-20 14:28:29 +01:00
Jehan
c843d23a17 script: new create-table script.
I wanted to add new tables, GEORGIAN-ACADEMY and GEORGIAN-PS, for
which I could find no listing anywhere, even though iconv supports
them (core Python does not).
I could find info on these in the libiconv source
(./lib/georgian_academy.h and ./lib/georgian_ps.h), though rather than
trying to read those, I thought I should just work the other way
around: build a table back from the return value of the iconv API (or
Python decode() when relevant).

So this script is able to generate tables in the format used under
script/charsets/, from either Python decode() or iconv. It will be
very useful!
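
The round-trip idea can be sketched like this (a simplified illustration, not the actual create-table.py code; it shells out to the iconv command-line tool and treats a failed or empty conversion as an illegal byte):

```python
import subprocess

def byte_to_codepoint(byte, charset):
    """Decode one byte through iconv; return its code point, or None.

    Assumes a successful conversion yields exactly one character.
    """
    result = subprocess.run(
        ['iconv', '-f', charset, '-t', 'UTF-8'],
        input=bytes([byte]), capture_output=True)
    if result.returncode != 0 or not result.stdout:
        return None  # byte is illegal in this charset
    return ord(result.stdout.decode('utf-8'))

# Build a full 256-entry table, e.g. for GEORGIAN-PS.
table = {byte: byte_to_codepoint(byte, 'GEORGIAN-PS')
         for byte in range(256)}
```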
2022-12-20 12:03:19 +01:00
Jehan
419a971e6a script: update the README. 2022-12-20 01:56:24 +01:00
Jehan
d40e5868d5 script, src, test: adding Catalan support.
For UTF-8, ISO-8859-1 and WINDOWS-1252 support.

The test for UTF-8 and ISO-8859-1 is taken from the 'Marmota' page on
Wikipedia in Catalan. The test for WINDOWS-1252 is taken from the
'Unió_Europea' page. Since ISO-8859-1 and WINDOWS-1252 are very
similar for most letters (in particular the ones used in Catalan), I
differentiated the tests with a text containing the '€' symbol, which
sits on an unused spot in ISO-8859-1.
2022-12-20 01:46:15 +01:00
Jehan
0fe51d3851 Issue #21: Greek CP737 support.
It actually breaks the "zh:big5" test, so I'm going to hold off a bit.
Adding more language and charset support is slowly starting to show
the limitations of our legacy multi-byte charset support, since I
haven't really touched that code since the original implementation
from Mozilla.

It might be time to start reviewing these parts of the code.

The test file contents come from the 'Μαρμότα' page on Wikipedia in
Greek (though since 2 letters are missing from this encoding, despite
its popularity for Greek, I had to be careful to choose pieces of text
without those letters).
2022-12-18 22:33:12 +01:00
Jehan
a82139b3bd script: fix a notice message.
Probably broken in commit db836fa (where I changed a bunch of print()
calls to sys.stderr.write()).
2022-12-18 22:24:55 +01:00
Jehan
d4ef245fdc script: add a requirements.txt for our generation script.
It will make it easier to follow any dependency change, as it is kind
of a standard file in Python projects. Of course, it's not a
dependency of uchardet itself, only of the generation script (so for
developers only), which is why I put it inside the script/ folder.
2022-12-18 17:27:38 +01:00
Jehan
db836fad63 script, src: generate more code for language and sequence model listing.
Right now, each time we add new language or charset support, we have
too many pieces of code to remember to edit. The script
script/BuildLangModel.py will now take care of the main parts: listing
the sequence models, listing the generic language models and computing
the counts for each listing.

Furthermore, the script will now end with a TODO list of the parts
which still have to be done manually (2 functions to edit and a
CMakeLists).

Finally, the script now accepts a list of languages rather than having
to be run with languages one by one. It also accepts 2 special codes:
"none", which retrains none of the languages but only re-generates the
generated listings; and "all", which retrains all models (useful in
particular when we change the model formats or usage and want to
regenerate everything).
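
A rough sketch of how such special codes can be handled (names here are illustrative, not the actual BuildLangModel.py variables):

```python
import sys

ALL_LANGUAGES = ['be', 'bg', 'mk', 'ru', 'sr', 'uk']  # example list

args = sys.argv[1:]
if args == ['all']:
    to_retrain = ALL_LANGUAGES   # retrain every model
elif args == ['none']:
    to_retrain = []              # only regenerate the listings
else:
    to_retrain = args            # retrain just the given languages

for lang in to_retrain:
    print('Retraining model for {}...'.format(lang))
# In all cases, the generated listings are rewritten afterwards.
```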
2022-12-18 17:23:34 +01:00
Jehan
abd123e07d script, src, test: add Serbian support.
For UTF-8, ISO-8859-5 and WINDOWS-1251.

Test files' contents come from page 'Мрмот' on Wikipedia in Serbian.
2022-12-17 22:47:54 +01:00
Jehan
d00d4d52b7 src, script: add Macedonian support.
For UTF-8, ISO-8859-5, WINDOWS-1251 and IBM855 encodings.

Test files' contents come from page 'Хибернација' on Wikipedia in
Macedonian.
2022-12-17 22:47:54 +01:00
Jehan
41d309e8a2 script, src: regenerate Russian models and add UTF-8/Russian support.
This fixes the broken Russian test in Windows-1251, which once again
gets a much better score with Russian. This also adds UTF-8 support.

As with Bulgarian, I wonder why I had not regenerated this earlier.

The new UTF-8 test comes from the 'Сурки' page of Wikipedia in Russian.

Note that this now broke the zh:gb18030 test (the score for KOI8-R /
ru (0.766388) beats GB18030 / zh (0.700000)). I think I'll have to
look a bit closer at our dedicated GB18030 prober.
2022-12-17 21:41:11 +01:00
Jehan
60dcec8a82 script, src, test: add Ukrainian support.
UTF-8 and Windows-1251 support for now.

This actually breaks the ru:windows-1251 test but, as with Bulgarian,
I never generated the Russian models with my scripts, so the models we
currently use are quite outdated. It will obviously be a lot better
once we have new Russian models.

The test file contents come from the 'Бабак' page on Wikipedia in
Ukrainian.
2022-12-17 21:40:56 +01:00
Jehan
0fffc109b5 script, src, test: adding Belarusian support.
Support for UTF-8, Windows-1251 and ISO-8859-5.
The test contents come from the 'Суркі' page on Wikipedia in Belarusian.
2022-12-17 19:13:03 +01:00
Jehan
ffb94e4a9d script, src, test: Bulgarian language models added.
Not sure why we had Bulgarian support but never updated it (i.e. never
with the model generation script, or so it seems), especially with the
generic language models, which allow UTF-8/Bulgarian support. Maybe I
tested it some time ago and it was getting bad results? Anyway, now
with all the recent updates on the confidence computation, I get very
good detection scores.

So adding support for UTF-8/Bulgarian and rebuilding other models too.

Also adding a test for ISO-8859-5/Bulgarian (we already had support, but
no test files).

The 2 new test files are text from the 'Мармоти' page on Wikipedia in
Bulgarian.
2022-12-17 18:41:00 +01:00
Jehan
5e25e93da7 script: add error handling for when iconv fails to convert a codepoint.
It can happen when our character set table is wrong, but it can also
happen when iconv itself has an incomplete charset table. For
instance, I was trying to implement IBM880 for #29, but iconv was
missing a few codepoints: it seems to think that 0x45 (є), 0x55 (ў)
and 0x74 (Ў) (and possibly others) are illegal in IBM880, while the
information we have seems to say they are valid.
And Python does not support this character set at all.

This check will help discover the issue earlier (rather than breaking
a few lines later because `iconv` failed and returned an empty string,
making ord() fail with a TypeError exception).
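
A sketch of that kind of early check (illustrative names, not the actual script code):

```python
import sys

def checked_ord(converted, codepoint, charset):
    """Fail with a clear message instead of a TypeError from ord()."""
    if not converted:
        sys.stderr.write(
            'Error: iconv could not convert U+{:04X} to {}; either our '
            'charset table is wrong or iconv is incomplete.\n'.format(
                codepoint, charset))
        sys.exit(1)
    return ord(converted)
```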

See: https://gitlab.freedesktop.org/uchardet/uchardet/-/issues/29#note_1691847
2022-12-17 18:00:22 +01:00
Jehan
0974920bdd Issue #22: Hebrew CP862 support.
Added in both visual and logical order since Wikipedia says:

> Hebrew text encoded using code page 862 was usually stored in visual
> order; nevertheless, a few DOS applications, notably a word processor
> named EinsteinWriter, stored Hebrew in logical order.

I am not using the nsHebrewProber wrapper (nameProber) for this new
support, because I am really unsure it is of any use. Our statistical
code based on letter and sequence usage should already be more than
enough to detect both variants of Hebrew encoding, and my testing
shows that so far (with pretty outstanding scores on actual Hebrew
tests while all the other probers return bad scores). This will have
to be studied a bit more later, and maybe the whole nsHebrewProber
might be deleted, even for the Windows-1255 charset.

I'm also cleaning up the nsSBCSGroupProber::nsSBCSGroupProber() code a
bit by incrementing a single index, instead of maintaining the indexes
by hand (otherwise, each time we add probers in the middle to keep
them logically grouped by language, we have to manually increment
dozens of following probers).
2022-12-16 23:27:52 +01:00
Jehan
e6e51d9fe8 src: all language models now rebuilt after the fix. 2022-12-15 14:31:55 +01:00
Jehan
362086bf56 script: fix BuildLangModel.py. 2022-12-15 14:31:10 +01:00
Jehan
6bb1b3e101 scripts: all language models rebuilt with the new ratio data. 2022-12-14 20:16:44 +01:00
Jehan
e311b64cd9 script: model-building script updated to produce the 2 new ratios…
… introduced in previous commit.
2022-12-14 20:15:34 +01:00
Jehan
7f386d922e script, src: rebuild the English model.
The previous model was quite obviously wrong: all letters had the same
probability, even non-ASCII ones! Anyway, this new model does make the
unit tests a tiny bit better, though English detection is still weak
(I have more ideas which I want to experiment with to improve this).
2022-12-14 00:36:02 +01:00
Jehan
b5b75b81ce script, src: rebuild the Danish model.
Now that the main branch has IBM865 support and I rebased, this
feature branch for the new API got broken too.
2022-12-14 00:24:53 +01:00
Jehan
0be80a21db script, src: update Norwegian model with the new language features.
As I just rebased my branch for the new language detection API, I
needed to re-generate the Norwegian language models. Unfortunately it
doesn't detect UTF-8 Norwegian text, though it is not far off (it is
detected as the second candidate with a high 91% confidence,
unfortunately beaten by Danish UTF-8 at 94% confidence!).

Note that I also updated the alphabet list for Norwegian, as there
were too many letters in there (according to Wikipedia at least), so
even when training a model, we had some characters missing from the
training set.
2022-12-14 00:24:53 +01:00
Jehan
784f614c84 script: further fixing BuildLangModel.py. 2022-12-14 00:24:53 +01:00
Jehan
6365cad4fd script: improve the management of the use_ascii option a bit. 2022-12-14 00:24:53 +01:00
Jehan
81b83fffa9 script: work around recent issue of python wikipedia module.
Adding `auto_suggest=False` to the wikipedia.page() call, because this
auto-suggest is completely broken, searching for "mar ot" instead of
"marmot" or "ground hug" instead of "Groundhog" (this one is extra
funny but not so useful!). I actually wonder why it even needs to
suggest anything when the Wikipedia pages do actually exist! Anyway,
the script BuildLangModel.py was very broken because of this; now it's
better.
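
For reference, the workaround boils down to one extra argument to the goldsmith/wikipedia module (the page title here is just an example):

```python
import wikipedia

wikipedia.set_lang('en')
# Without auto_suggest=False, the module may "correct" the title to
# something like "ground hug" even though the page exists.
page = wikipedia.page('Groundhog', auto_suggest=False)
print(page.title)
```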

See: https://github.com/goldsmith/Wikipedia/issues/295

Also printing the error message when we discard a page, which helps
debugging.
2022-12-14 00:24:53 +01:00
Jehan
bfa4b10d4d script, src: add English language model.
English detection is still quite crappy, so I don't add a unit test
yet. I believe the detection being bad is mostly due to too much
shortcutting we do to go "fast". I should probably review this whole
part of the logic as well.
2022-12-14 00:24:53 +01:00
Jehan
8e2cf7b81b script: generate more complete frequent characters when range is set.
The early version used to stop earlier, assuming frequent-character
ranges were used only for language scripts with a lot of characters
(such as Korean, or even more so Japanese or Chinese), hence it was
not efficient to keep data for them all. Since we now use a separate
language detector for CJK, the remaining scripts (so far) have a
usable range of characters. Therefore it is much preferred to keep as
much data as possible on these.

This allowed redoing the Thai model (cf. previous commit) with more
data, hence getting a much better language confidence on Thai texts.
2022-12-14 00:24:53 +01:00
Jehan
314f062c70 script, src: regenerate the Thai model.
With all the changes we made, regenerate the Thai model, which was of
poor quality. This new one is much better.
2022-12-14 00:24:53 +01:00
Jehan
41fec68674 src, script: fix the order of characters for Vietnamese.
Cf. commit 872294d.
2022-12-14 00:24:53 +01:00
Jehan
338a51564a src, script: add concept of alphabet_mapping in language models.
This makes it possible to handle cases where some characters are
actually alternatives/variants of another: the same word can be
written with either variant, and both are considered correct and
equivalent. Browsing the Slovenian Wikipedia a bit, it looks like they
only use them in titles there.
I use this for the first time on characters with diacritics in
Slovene. Indeed these are so rarely used that they would hardly show
in the stats and, worse, any sequence using them in a tested text
would likely count as negative sequences, hence dropping the
confidence in Slovenian. As a consequence, various Slovene texts would
show up as Slovak, which is close enough and commonly uses the same
characters with diacritics.
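
A hypothetical illustration of the idea (the exact characters and the data layout in the language definition files are assumptions, not the actual format):

```python
# Map rare variant characters onto their base letter, so sequences
# using them are counted as the base letter instead of turning into
# negative sequences.
alphabet_mapping = {
    'é': 'e',  # illustrative: accented vowels rarely used outside titles
    'á': 'a',
    'í': 'i',
}

def normalize(text, mapping=alphabet_mapping):
    """Replace each variant character by its canonical equivalent."""
    return ''.join(mapping.get(ch, ch) for ch in text)
```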
2022-12-14 00:24:53 +01:00
Jehan
ba7d72e3b0 script: regenerate Slovak and Slovene with better alphabet support.
I was missing some characters, especially in the Slovak alphabet.
Conversely, the Slovene alphabet does not use 4 letters of the common
ASCII alphabet.
2022-12-14 00:24:53 +01:00
Jehan
adb158b058 script: fix a stupid bug that gave the same ratio to all frequent characters.
Argh! How did I miss this!
2022-12-14 00:24:53 +01:00
Jehan
19737886fe script, src: regenerate the Vietnamese model.
The alphabet was not complete and thus the confidence was a bit too
low. For instance, the VISCII test case's confidence bumped from
0.643401 to 0.696346 and the UTF-8 test case bumped from 0.863777 to
0.99. Only the Windows-1258 test case is slightly worse, from 0.532846
to 0.532098. But the overall recognition gain is obvious anyway.
2022-12-14 00:24:53 +01:00
Jehan
b7acffc806 script, src: remove generated statistics data for Korean. 2022-12-14 00:24:53 +01:00
Jehan
a1b186fa8b src: add Hindi/UTF-8 support. 2022-12-14 00:23:13 +01:00
Jehan
a98cdcd88f script: fix a bit BuildLangModel.py when use_ascii is True.
In particular, I prepare the ground for English detection. I am not
pushing actual English models yet, because detection is not efficient
enough yet. I will do so when I can handle English confidence better.
2022-12-14 00:23:13 +01:00
Jehan
629bc879f3 script, src: add generic Korean model.
Until now, Korean charsets had their own probers, as there is no
single-byte encoding for writing Korean. I now added a Korean model
just for the generic character and sequence statistics.

I also improved the generation script (script/BuildLangModel.py) to
allow for languages without single-byte charset generation and to
provide meaningful statistics even when the language script has a lot
of characters (so we can't have a full sequence combination array; it
would be just too much data). It's not perfect yet. For instance, our
UTF-8 Korean test file ends up with a confidence of 0.38503, which is
low for obvious Korean text. Still, it works (correctly detected, with
top confidence compared to the others) and is a first step toward
better detection confidence.
2022-12-14 00:23:13 +01:00
Jehan
ded948ce15 script, src: generate the Hebrew models.
The Hebrew model had never been regenerated by my scripts. I have now
added the base generation files.

Note that I added 2 charsets, ISO-8859-8 and WINDOWS-1255, but they
are nearly identical. One of the differences is that the generic
currency sign is replaced by the sheqel sign (the Israeli currency) in
Windows-1255. And though the latter lost the "double low line",
apparently some Yiddish characters were added. Basically it looks like
most Hebrew text would work fine with the same confidence on both
charsets, and detecting both is likely irrelevant. So I keep the
charset file for ISO-8859-8, but won't actually use it.

The good news is that Hebrew is now also recognized in UTF-8 text,
thanks to the new code and the newly generated language model.
2022-12-14 00:23:13 +01:00
Jehan
eb8308d50a src, script: regenerate all existing language models.
Now making sure that we have a generic language model working with
UTF-8 for all 26 languages which had single-byte encoding support
until now.
2022-12-14 00:23:13 +01:00
Jehan
b70b1ebf88 Rebuild a bunch of language models.
Adding a generic language model (see coming commit), which uses the
same data as the specific single-byte encoding statistics models,
except that it applies them to Unicode code points.
For this to work, instead of the CharToOrderMap, which mapped directly
from encoded bytes (always 256 values) to orders, we now add an array
of frequent characters, mapping Unicode code points to frequency
orders (which can be used on the same sequence mapping array).

This of course means that each prober where we want to use these
generic models will have to implement its own byte-to-code-point
decoder, as this is per-encoding logic anyway. This will come in a
subsequent commit.
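
The data-structure idea, sketched in Python for clarity (the real code is C++, and the names and values here are illustrative):

```python
import bisect

# (code point, frequency order) pairs, sorted by code point, replacing
# the fixed 256-entry CharToOrderMap for generic models.
FREQUENT_CHARS = [(0x0430, 0), (0x0435, 2), (0x043E, 1)]  # illustrative
CODE_POINTS = [cp for cp, _ in FREQUENT_CHARS]

def unicode_to_order(codepoint):
    """Binary-search a decoded code point; the returned order can index
    the same sequence mapping array as the single-byte models."""
    i = bisect.bisect_left(CODE_POINTS, codepoint)
    if i < len(CODE_POINTS) and CODE_POINTS[i] == codepoint:
        return FREQUENT_CHARS[i][1]
    return None  # not a frequent character
```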
2022-12-14 00:23:13 +01:00
Jehan
c550af99a7 script: update BuildLangModel.py to the updated SequenceModel struct.
In particular, there is now a language code member.
2022-12-14 00:23:13 +01:00
Jehan
388777be51 script, src, test: add IBM865 support for Danish.
The newly added IBM865 charset (for Norwegian) can also be used for
Danish.

By the way, I fixed `script/charsets/ibm865.py`: Danish uses the 'da'
ISO 639-1 code, not 'dk' (which is sometimes used in other codes for
Denmark, such as the ISO 3166 country code and the internet TLD, but
not for the language itself).

For the test, adding some text from the top article of the day on the
Danish Wikipedia, which was about Jimi Hendrix. And that's cool! 🎸 ;-)
2022-11-30 19:57:52 +01:00
Jehan
5aa628272b script: fix small issues with commits e41e8a4 and 8d15d6b. 2022-11-30 19:24:28 +01:00
Martin T. H. Sandsmark
099a9a4fd6 Add Norwegian support 2022-11-30 19:09:09 +01:00
Martin T. H. Sandsmark
e41e8a47e4 improve model building script a bit 2022-11-30 19:09:09 +01:00
Martin T. H. Sandsmark
8d15d6b557 make the logfile usable 2022-11-30 19:09:09 +01:00
Jehan
98bc2f31ef Issue #8: have BuildLangModel.py add ending newline to generated source. 2020-04-22 22:57:25 +02:00
Jehan
119fed7e8d LangModels: add Swedish support.
Encodings: ISO-8859-1, ISO-8859-4, ISO-8859-9, ISO-8859-15 and
WINDOWS-1252.
Test text from https://sv.wikipedia.org/wiki/Mölle
2016-09-28 22:42:13 +02:00