… and rebuild of models.
The scores are really not bad now: 0.896026 for Norwegian and 0.877947
for Danish. It looks like the last confidence computation changes I did
are really bearing fruit!
I have had this test file locally for some time, but it was always
failing, the text being recognized as other languages. Thanks to the
recent confidence improvements with the new frequent/rare ratios, it is
finally detected as English by uchardet!
In addition to the "frequent characters" concept, we add 2
sub-categories: the "very frequent characters" and the "rare
characters". The former are usually just a few characters which are used
most of the time (like 3 or 4 characters used 40% of the time!), whereas
the latter are often a dozen or more characters which, all together, are
barely used a few percent of the time.
We use this additional concept to help distinguish very similar
languages, or languages whose frequent characters are a subset of
the ones from another language (typically English, whose alphabet is a
subset of many other European languages).
mTypicalPositiveRatio is gotten rid of, as it was barely of any use
anyway (it was 0.99-something for nearly all languages!). Instead we get
these 2 new ratios, veryFreqRatio and lowFreqRatio, and of course the
associated order counts to know which characters are in these sets.
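To give an idea, the new model fields could look roughly like this (only
veryFreqRatio and lowFreqRatio are actual names from this change; the
other fields and the values are just illustrative):

    // Hypothetical sketch of the new per-language model fields.
    struct LanguageModelSketch
    {
      int   veryFreqCount;  // how many frequent characters are "very frequent"
      float veryFreqRatio;  // e.g. ~0.4: 3 or 4 characters cover 40% of the text
      int   lowFreqOrder;   // frequency order from which characters count as "rare"
      float lowFreqRatio;   // e.g. ~0.03: the rare tail covers only a few percent
    };

    // Made-up example values matching the description above.
    static const LanguageModelSketch kExampleModel = { 4, 0.41f, 50, 0.03f };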
The previous model was most obviously wrong: all letters had the same
probability, even non-ASCII ones! Anyway this new model does make the
unit tests a tiny bit better, though English detection is still weak (I
have more concepts which I want to experiment with to improve this).
The text is currently recognized as Danish/UTF-8 with a 0.958 score,
though Norwegian/UTF-8 is indeed the second candidate with 0.911 (the
third candidate, Swedish/UTF-8, is far behind with 0.815). Before
wasting time tweaking models, there are more basic conceptual changes
that I want to implement first (they might be enough to change the
results!). So let's skip this test for now.
We were experiencing a segmentation fault when processing long texts
because we ended up trying to access out-of-range data (from
codePointBuffer). We now check when this is about to happen and process
the data so far, resetting the index before adding more code points.
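Roughly, the guard looks like this (codePointBuffer is the buffer from
this commit; the size and function names here are just illustrative):

    #include <cstddef>

    static const std::size_t kCodePointBufferSize = 1024;
    static int codePointBuffer[kCodePointBufferSize];
    static std::size_t codePointBufferIdx = 0;

    static void ProcessCodePoints()
    {
      // ...update the statistics from codePointBuffer[0..codePointBufferIdx)...
    }

    static void AddCodePoint(int codePoint)
    {
      if (codePointBufferIdx >= kCodePointBufferSize)
      {
        // Long text: consume the buffered data and start over instead
        // of indexing out of range.
        ProcessCodePoints();
        codePointBufferIdx = 0;
      }
      codePointBuffer[codePointBufferIdx++] = codePoint;
    }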
As I just rebased my branch about the new language detection API, I
needed to re-generate the Norwegian language models. Unfortunately it
still doesn't detect UTF-8 Norwegian text, though it is not far off
(Norwegian comes out as the second candidate with a high 91% confidence,
unfortunately beaten by Danish UTF-8 with 94%).
Note that I also updated the alphabet list for Norwegian, as there were
too many letters in there (according to Wikipedia at least), so even
when training a model, some of the listed characters were missing from
the training set.
Adding `auto_suggest=False` to the wikipedia.page() call because this
auto-suggest is completely broken, searching for "mar ot" instead of
"marmot" or "ground hug" instead of "Groundhog" (this one is extra funny
but not so useful!). I actually wonder why it even needs to suggest
anything when the Wikipedia pages do actually exist! Anyway the script
BuildLangModel.py was very broken because of this; it now behaves better.
See: https://github.com/goldsmith/Wikipedia/issues/295
Also printing the error message when we discard a page, which helps
debugging.
English detection is still quite crappy, so I am not adding a unit test
yet. I believe the bad detection is mostly due to too much shortcutting
that we do in order to go "fast". I should probably review this whole
part of the logic as well.
Some languages are not meant to have multibyte characters. For instance,
English would typically have none. Yet you can still have UTF-8 English
text (with a few special characters, or foreign words…). So anyway let's
make it less of a deal breaker.
To be even fairer, the whole logic is of course biased, and I believe
that eventually we should get rid of these lines of code dropping the
confidence based on the number of multibyte characters. This is a
ridiculous rule (we base our whole logic on language statistics and
suddenly we add some weird rule with a completely arbitrary number). But
for now, I'll keep this as-is until we make the whole library even more
robust.
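To illustrate the direction (this is only a sketch with made-up names
and factors, not the actual code): instead of a hard penalty, the
confidence could simply be damped in proportion to the unexpected
multibyte characters, so a mostly-ASCII English UTF-8 text with a few
special characters can still win.

    static float DampConfidence(float confidence,
                                unsigned int multibyteChars,
                                unsigned int totalChars)
    {
      if (totalChars == 0)
        return confidence;
      float unexpectedRatio = (float) multibyteChars / (float) totalChars;
      // Arbitrary softening factor; the point is that it is no longer
      // a deal breaker.
      return confidence * (1.0f - 0.5f * unexpectedRatio);
    }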
realpath() doesn't exist on Windows. Replace it with _fullpath(), which
does the same thing as far as I can see (at least for creating an
absolute path; it doesn't seem to canonicalize the path, or at least the
docs don't say so, but since we control the arguments from our CMake
script, it's not a big problem anyway).
This fixes the Windows CI build, which was failing with:
> undefined reference to `realpath'
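For reference, the replacement boils down to something like this (a
simplified sketch, not the exact code in the tree):

    #include <stdio.h>
    #include <stdlib.h>

    #ifdef _WIN32
    // _fullpath() and _MAX_PATH come from <stdlib.h> on Windows.
    #define ABS_PATH_MAX _MAX_PATH
    #define RESOLVE_PATH(rel, abs) _fullpath((abs), (rel), ABS_PATH_MAX)
    #else
    #include <limits.h>
    #define ABS_PATH_MAX PATH_MAX
    #define RESOLVE_PATH(rel, abs) realpath((rel), (abs))
    #endif

    int main(int argc, char **argv)
    {
      char absolute[ABS_PATH_MAX];

      if (argc > 1 && RESOLVE_PATH(argv[1], absolute))
        printf("%s\n", absolute);
      return 0;
    }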
Without this, we would always return the same language once a shortcut
one had been detected, even after resetting. For instance, the issue
happened in the uchardet CLI tool.
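The gist of the fix, as an illustrative sketch with made-up member
names:

    class SomeProberSketch
    {
    public:
      void Reset()
      {
        mState             = eDetecting;
        mShortcutCandidate = nullptr;  // this clearing is what was missing
      }

    private:
      enum State { eDetecting, eFoundIt, eNotMe };
      State       mState = eDetecting;
      const char *mShortcutCandidate = nullptr;  // language kept after a shortcut
    };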
Just commenting it out for now. This is just not good enough and could
take over the detection when other probers have low (yet reasonable)
confidence, returning an ugly WINDOWS-1252 with no language detection. I
think we should even get rid of it completely. For now, I just comment
it out temporarily and will see with further experiments.
Nearly the same algorithm on both pieces of code now. I reintroduced
mTypicalPositiveRatio now that our models actually give the right ratio
(not the meaningless "first 512" stuff anymore).
Among the remaining differences, the last computation is the ratio of
frequent characters over all characters. For the generic detector, we
use the frequent + out-of-range sum instead, which works much better. I
think that Unicode text is much more prone to having characters outside
your expected range while still being meaningful characters; even
control characters are much more meaningful in Unicode.
So a ratio over the whole character count would make the confidence much
too low.
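As an illustration of the difference (names are made up, the real
computation has more to it):

    // Single-byte prober: frequent hits over everything seen.
    static float SingleByteRatio(unsigned int freqChars, unsigned int totalChars)
    {
      return totalChars ? (float) freqChars / (float) totalChars : 0.0f;
    }

    // Generic Unicode detector: frequent hits over frequent + out-of-range
    // letters only, so neutral characters don't unfairly lower the confidence.
    static float GenericRatio(unsigned int freqChars, unsigned int outOfRangeChars)
    {
      unsigned int letters = freqChars + outOfRangeChars;
      return letters ? (float) freqChars / (float) letters : 0.0f;
    }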
Anyway this confidence algorithm is already better. We seem to approach
much nicer confidence at each iteration, very satisfying!
The early version used to stop sooner, assuming frequent-character
ranges were only used for language scripts with a lot of characters
(such as Korean, or even more so Japanese or Chinese), hence it was not
efficient to keep data for them all. Since we now use a separate
language detector for CJK, the remaining scripts (so far) have a
manageable range of characters. Therefore it is much preferred to keep
as much data as possible on these.
This allowed redoing the Thai model (cf. previous commit) with more
data, hence getting much better language confidence on Thai texts.
This allows handling cases where some characters are actually
alternatives/variants of another one. For instance, the same word can be
written with either variant, both being considered correct and
equivalent. Browsing the Slovenian Wikipedia a bit, it looks like they
only use them in titles there.
I use this for the first time on the characters with diacritics in
Slovene. Indeed these are so rarely used that they would hardly show in
the stats and, worse, any sequence using them in a tested text would
likely show up as negative sequences, hence dropping the confidence in
Slovenian. As a consequence, various Slovene texts would show up as
Slovak, which is close enough and commonly uses the same characters with
diacritics.
The alphabet was not complete and thus confidence was a bit too low.
For instance the VISCII test case's confidence bumped from 0.643401 to
0.696346 and the UTF-8 test case bumped from 0.863777 to 0.99.
Only the Windows-1258 test case is slightly worse, going from 0.532846
to 0.532098. But the overall recognition gain is obvious anyway.
In the extreme case of more mCtrlChar than mTotalChar (since the latter
does not include control characters), we end up with a negative value,
which as an unsigned int becomes a huge integer. So precisely because
the confidence was so bad that it would have been negative, we ended up
with a huge confidence.
We had this case with our Japanese UTF-8 test file, which ended up
identified as French ISO-8859-1. So I just cast the uint to float early
on in order to avoid such a pitfall.
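Here is a minimal illustration of the pitfall and of the fix (the real
confidence formula is more involved):

    #include <cstdio>

    static float Buggy(unsigned int mTotalChar, unsigned int mCtrlChar)
    {
      // 3 - 5 wraps to a huge unsigned value before the conversion.
      return (float) (mTotalChar - mCtrlChar);
    }

    static float Fixed(unsigned int mTotalChar, unsigned int mCtrlChar)
    {
      // Cast early and subtract in float: 3 - 5 stays -2 (bad, but not huge).
      return (float) mTotalChar - (float) mCtrlChar;
    }

    int main()
    {
      std::printf("buggy: %f\nfixed: %f\n", Buggy(3, 5), Fixed(3, 5));
      return 0;
    }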
Now all our test cases succeed again, this time with full UTF-8+language
support! Wouhou!
I was pondering improving the logic of the LanguageModel contents in
order to better handle languages with a huge number of characters (far
too many to keep a full frequent list while keeping reasonable memory
consumption and speed).
But then I realized that this happens for languages which have their own
set of characters anyway.
For instance, modern Korean is nearly all hangul. Of course, we can find
some Chinese characters here and there, but nothing which should really
break the confidence if we base it on the hangul ratio. Of course, if
some day we want to go further and detect older Korean, we will have to
improve the logic a bit with some statistics, though I wonder whether
limiting ourselves to character frequency wouldn't be enough here
(sequence frequency is maybe overboard). To be tested.
In any case, this new class gives much more relevant confidence on
Korean texts, compared to the statistical data we previously generated.
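As an illustration of the idea, a hangul-ratio confidence could look
like this (a made-up sketch, not the actual class; it only checks the
main Hangul Syllables block):

    static bool IsHangul(unsigned int cp)
    {
      // Hangul Syllables block (the Jamo blocks are omitted here).
      return cp >= 0xAC00 && cp <= 0xD7A3;
    }

    static float KoreanConfidence(const unsigned int *codePoints,
                                  unsigned int count)
    {
      unsigned int hangul = 0, letters = 0;
      for (unsigned int i = 0; i < count; i++)
      {
        // A real implementation would skip punctuation, digits, etc.
        letters++;
        if (IsHangul(codePoints[i]))
          hangul++;
      }
      return letters ? (float) hangul / (float) letters : 0.0f;
    }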
For Japanese, it is a mix of kana and Chinese characters. A modern full
text cannot exist without a lot of kana (probably only old texts or very
short ones, such as titles, could contain only Chinese characters). We
would still want to add a bit of statistics to correctly differentiate a
Japanese text with a lot of Chinese characters in it from a Chinese text
which quotes a few Japanese phrases. It will have to be improved, but
for now it works fairly well.
A last case where we would want to play with statistics might be
differentiating between regional variants, for instance Simplified
Chinese, Taiwanese or Hong Kong Chinese… More to experiment with later
on. It's already a good first step for UTF-8 support with language
detection!
Basically, since we exclude non-letters (control chars, punctuation,
spaces, separators, emoticons and whatnot), we consider any remaining
character as an off-script letter (we may have forgotten some cases, but
so far it looks promising). Hence it is normal to also count a
combination with these (i.e. 2 off-script letters, or 1 frequent letter
+ 1 off-script letter in any order) as a sequence. Doing so further
drops the confidence of any text having too many of these. As a
consequence, it widens again the gap between the first and second
contender, which seems to really show it works.
Detect various blocks of characters for punctuation, symbols, emoticons
and whatnot. These are considered kind of neutral for the confidence
(because it's normal to have punctuation, and various texts nowadays are
expected to contain emoticons or various symbols).
What is of interest is all the rest, which will then be considered as
out-of-range characters (likely characters from other scripts) and will
therefore drop the confidence.
The confidence now takes into account the ratio of all in-range
characters (script letters + various neutral characters) and the ratio
of frequent letters within all letters (script letters + out-of-range
characters).
This improved algorithm makes for much more efficient detection, as it
bumped most confidences in our unit tests and usually increased the gap
between the first and second contender.
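As a sketch of what this confidence looks like (names are illustrative,
and combining the two ratios with a simple product is just one
possibility):

    static float CombinedConfidence(unsigned int freqLetters,
                                    unsigned int scriptLetters,
                                    unsigned int neutralChars,
                                    unsigned int outOfRangeChars)
    {
      unsigned int total   = scriptLetters + neutralChars + outOfRangeChars;
      unsigned int letters = scriptLetters + outOfRangeChars;
      if (total == 0 || letters == 0)
        return 0.0f;

      // Ratio of in-range characters (script letters + neutral characters).
      float inRangeRatio = (float) (scriptLetters + neutralChars) / (float) total;
      // Ratio of frequent letters among all letters.
      float freqRatio    = (float) freqLetters / (float) letters;

      return inRangeRatio * freqRatio;
    }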
In particular, this prepares the ground for English detection. I am not
pushing actual English models yet, because detection is not efficient
enough. I will do so once I am able to compute a better English
confidence.
Until now, Korean charsets had their own probers, as there is no
single-byte encoding for writing Korean. I now added a Korean model just
for the generic character and sequence statistics.
I also improved the generation script (script/BuildLangModel.py) to
allow for languages without single-byte charset generation and to
provide meaningful statistics even when the language script has a lot of
characters (so that we can't have a full sequence combination array:
just too much data). It's not perfect yet. For instance our UTF-8 Korean
test file ends up with a confidence of 0.38503, which is low for obvious
Korean text. Still it works (correctly detected, with top confidence
compared to the others) and is a first step toward further improving
detection confidence.
This prober comes from MR !1 on the main branch, though it was too
aggressive back then and could not get merged. On the improved API
branch, it doesn't detect other tests as Johab anymore.
Also fixing it to work with the new API.
Finally adding a Johab/ko unit test.
The Hebrew Model had never been regenerated by my scripts. I now added
the base generation files.
Note that I added 2 charsets, ISO-8859-8 and WINDOWS-1255, but they are
nearly identical. One of the differences is that the generic currency
sign is replaced by the sheqel sign (Israel's currency) in Windows-1255.
And though this one lost the "double low line", apparently some Yiddish
characters were added. Basically it looks like most Hebrew texts would
work fine with the same confidence on both charsets, and detecting both
is likely irrelevant. So I keep the charset file for ISO-8859-8, but
won't actually use it.
The good part is that Hebrew is now also recognized in UTF-8 text,
thanks to the new code and the newly generated language model.
Taken from random pages for each of these languages.
I now have a test for each of the 26 supported (UTF-8, language)
couples. These all work fine and are detected with the right encoding
and language.
Some probers are based on character distribution analysis. Though this
is still relevant detection logic, we also know that it is a lot less
subtle than sequence distribution.
Therefore let's give a good confidence to a text passing such analysis,
yet not a near-perfect one, thus leaving some chance to other probers.
In particular, we can definitely consider that if some text gets over
0.7 on sequence distribution analysis, this is a very likely candidate.
I had the case with the Finnish UTF-8 test, which was passing (UTF-8,
Finnish) detection with a staggering 0.86 confidence, yet was overridden
by UHC (EUC-KR). This used not to be a problem when nsMBCSGroupProber
would check the UTF-8 prober first and stop there with just some basic
encoding detection. Now that we go further and return all relevant
candidates, a simpler detection algorithm which always returns too good
a confidence is not the best idea.
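The idea, as an illustrative sketch (the cap value is made up):

    // A prober based on character distribution alone reports a good but
    // never near-perfect confidence, so a sequence-based candidate above
    // ~0.7 can still win the final comparison.
    static float CapDistributionConfidence(float rawConfidence)
    {
      const float kMaxCharDistributionConfidence = 0.85f;
      return rawConfidence < kMaxCharDistributionConfidence
               ? rawConfidence
               : kMaxCharDistributionConfidence;
    }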
I had the case with the Czech test, which was considered Irish after
being shortcut far too early, after only 16 characters. The confidence
was just barely above 0.5 for Irish (and barely below for Czech).
By adding a threshold (at least 256 characters), we give the engine a
bit of relevant data to actually make an informed decision. By then, the
Czech detection was above 0.7, whereas the Irish one was at 0.6.
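The threshold logic boils down to something like this (names are
illustrative; the commit requires at least 256 characters):

    static const unsigned int kEnoughCharsForShortcut = 256;

    // Don't take the positive shortcut before enough characters have
    // been analysed, even if the confidence is already above the bar.
    static bool MayShortcut(unsigned int analysedChars, float confidence,
                            float positiveShortcutThreshold)
    {
      return analysedChars >= kEnoughCharsForShortcut &&
             confidence >= positiveShortcutThreshold;
    }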