14 Commits

Author SHA1 Message Date
Jehan
6436e1dd47 src: improve confidence computation (generic and single-byte charset).
Nearly the same algorithm in both pieces of code now. I reintroduced
mTypicalPositiveRatio now that our models actually give the right ratio
(not the meaningless "first 512" value anymore).
Among the remaining differences, the last computation is the ratio of
frequent characters over all characters. For the generic detector, we
use the frequent+out sum instead, which works much better. I think that
Unicode text is much more prone to contain characters outside the
expected range while still being meaningful; even control characters
carry much more meaning in Unicode. So a ratio based on frequent
characters alone would yield a confidence that is much too low.

Anyway, this confidence algorithm is already better. We seem to reach
much nicer confidence values with each iteration; very satisfying!
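A minimal sketch of the idea, assuming illustrative counter names (mFreqChar, mOutChar, mTotalChar) and an illustrative formula; the actual code may differ:

    // Sketch only: how the frequent-character ratio might be folded into
    // the confidence for the two kinds of probers discussed above.
    float sketch_confidence(bool  genericDetector,
                            float sequenceScore,        // from the language model
                            float typicalPositiveRatio, // mTypicalPositiveRatio
                            float freqChar,             // frequent characters seen
                            float outChar,              // characters outside the expected range
                            float totalChar)            // all counted characters
    {
        float confidence = sequenceScore / typicalPositiveRatio;

        if (genericDetector)
            // Unicode text legitimately uses many codepoints outside the
            // "frequent" range, so count them as meaningful too.
            confidence *= (freqChar + outChar) / totalChar;
        else
            // Single-byte charsets: only frequent characters raise confidence.
            confidence *= freqChar / totalChar;

        return confidence > 1.0f ? 1.0f : confidence;
    }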
2022-12-14 00:24:53 +01:00
Jehan
9d29c3e26f src: fix negative confidence wrapping around because of unsigned int.
In the extreme case of more mCtrlChar than mTotalChar (since the latter
does not include control characters), we end up with a negative value,
which as an unsigned int wraps around to a huge integer. So because the
confidence was so bad that it should have been negative, we ended up
with a huge confidence.

We had this case with our Japanese UTF-8 test file, which ended up
identified as French ISO-8859-1. So I just cast the uint to float early
on in order to avoid this pitfall.

Now all our test cases succeed again, this time with full UTF-8+language
support! Woohoo!
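A standalone illustration of the pitfall and of the early cast; the counter values and the divisor are made up, only the arithmetic matters:

    #include <cstdio>

    int main(void)
    {
        unsigned int totalChar = 3; // does not include control characters
        unsigned int ctrlChar  = 5; // more control chars than counted chars

        // Pitfall: the subtraction happens in unsigned arithmetic and wraps
        // around to a huge value instead of going negative.
        float wrong = (totalChar - ctrlChar) / 10.0f;

        // Fix as described above: cast to float *before* subtracting, so a
        // genuinely bad score stays negative (and can then be clamped to 0).
        float fixed = ((float) totalChar - (float) ctrlChar) / 10.0f;

        std::printf("wrong = %g, fixed = %g\n", wrong, fixed);
        return 0;
    }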
2022-12-14 00:24:53 +01:00
Jehan
2127f4fc0d src: allow for nsCharSetProber to return several candidates.
No functional change yet because all probers still return 1 candidate,
but we now add a GetCandidates() method returning the number of
candidates.
GetCharSetName(), GetLanguage() and GetConfidence() now take a parameter
which is the candidate index (which must be below the return value of
GetCandidates()). We can now consider that nsCharSetProber computes a
(charset, language) pair and that the confidence applies to this
specific pair, not just to charset detection.
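Roughly how a caller could walk the candidates with the new interface; the return types and printing are assumptions here, the real signatures live in nsCharSetProber.h:

    #include <cstdio>
    #include "nsCharSetProber.h" // assumed header for the prober interface

    void DumpCandidates(nsCharSetProber *prober)
    {
        int n = prober->GetCandidates();
        for (int i = 0; i < n; i++)
        {
            // Each candidate is a (charset, language) pair with its own confidence.
            const char *lang = prober->GetLanguage(i);
            std::printf("%s / %s: %f\n",
                        prober->GetCharSetName(i),
                        lang ? lang : "(unknown)",
                        prober->GetConfidence(i));
        }
    }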
2022-12-14 00:23:13 +01:00
Jehan
5257fc1abf Using the generic language detector in UTF-8 detection.
Now the UTF-8 prober not only detects valid UTF-8, but also detects the
most probable language. Using the data generated two commits earlier,
this works very well.

This is still basic and will require more improvements. In particular,
nsUTF8Prober should now return an array of ("UTF-8", language) candidate
pairs, and nsMBCSGroupProber should itself forward these candidates
along with the candidates from the other multi-byte detectors. This way,
the public-facing API would expose more probable candidates, in case the
algorithm is slightly wrong.

Also, the UTF-8 confidence is currently stupidly high as soon as we
consider it to be right. We should likely weigh it with language
detection (in particular, if no language is detected, this should
severely weigh down UTF-8 detection; not to 0, but high enough to remain
a fallback in case no other encoding+language is valid and low enough to
give other good candidate pairs a chance).
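One possible shape for such a weighting, purely speculative and not implemented in this commit; the floor value is arbitrary:

    // Combine the structural "valid UTF-8" confidence with the confidence of
    // the best detected language, never dropping to zero so that UTF-8 stays
    // available as a fallback.
    float weighted_utf8_confidence(float utf8Confidence, float bestLangConfidence)
    {
        const float floor = 0.3f; // arbitrary floor, keeps UTF-8 as a fallback
        if (bestLangConfidence <= 0.0f)
            return utf8Confidence * floor;
        return utf8Confidence * (floor + (1.0f - floor) * bestLangConfidence);
    }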
2022-12-14 00:23:13 +01:00
Jehan
5a949265d5 src: new API to get the detected language.
This doesn't work for all probers yet, in particular not for the most
generic probers (such as UTF-8) or WINDOWS-1252. These will return NULL.
It's still a good first step.

Right now, it returns the 2-character language code from ISO 639-1. A
project using this API could easily get the English language name from
the XML/JSON files provided by the iso-codes project, which also makes
it easy to localize the language name into other languages through
gettext (this is what we do in GIMP for instance). I am not adding any
dependency though, and leave it to downstream projects to implement this.

I was also wondering whether we want to support region information for
cases where it would make sense. I especially wondered about it for
Chinese encodings, as some of them seem quite specific to a region
(according to Wikipedia at least). For the time being though, these just
return "zh". We'll see later if it makes sense to be more accurate
(maybe depending on reports?).
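For illustration, a downstream project might map the returned code to a display name along these lines; this hypothetical helper hardcodes a tiny table where real code would parse the iso-codes data and could localize the names via gettext:

    #include <map>
    #include <string>

    // Hypothetical downstream helper: map an ISO 639-1 code to an English name.
    std::string language_name(const std::string &iso639_1)
    {
        static const std::map<std::string, std::string> names = {
            { "de", "German" }, { "fr", "French" },
            { "ja", "Japanese" }, { "zh", "Chinese" },
        };
        auto it = names.find(iso639_1);
        return it != names.end() ? it->second : iso639_1; // fall back to the raw code
    }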
2022-12-14 00:23:13 +01:00
Jehan
50743e16f8 src: minor indentation fix. 2017-05-14 21:35:11 +02:00
Jehan
e0eec3bae8 src: give a little weight to "probable sequences".
Up to now, we were only considering positive sequences, which are the
2-character sequences that occur the most. Yet our data gathers 4
categories of sequences (the last one being called "negative", since
those never occurred in our data).
I will call the category just below positive "probable" sequences: they
may happen, yet not often. The remaining category could be called
"neutral".
This seems to fix the detection of a user's subtitle example without
breaking any of our current unit tests.
I should probably still review this whole logic in more detail later.
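As a sketch, the weighting could look like the following; the category names and the 0.25 weight are illustrative, not necessarily what the code uses:

    // Positive sequences count fully, "probable" sequences count a little,
    // neutral and negative ones not at all.
    enum SequenceCategory { NEGATIVE_CAT, NEUTRAL_CAT, PROBABLE_CAT, POSITIVE_CAT };

    float weighted_sequence_score(const unsigned int seqCounters[4])
    {
        return seqCounters[POSITIVE_CAT] + 0.25f * seqCounters[PROBABLE_CAT];
    }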
2016-05-25 17:38:20 +02:00
Jehan
4287d3accc src: trailing whitespace removed. 2016-05-25 16:07:17 +02:00
Jehan
55b4f23971 Single Byte charsets: high ctrl character ratio lowers confidence.
Control characters are not an error per se. Nevertheless, they are clearly not
frequent in single-byte charset texts, so it is only normal for them to lower
the confidence in a charset. In particular, a higher ctrl-per-letter ratio means
a lower confidence.
This fixes, for instance, our Windows-1252 German test (otherwise detected as
ISO-8859-1).
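A sketch of such a penalty, with illustrative counter names; the exact formula in nsSBCharSetProber may differ:

    // The more control characters per counted character, the lower the
    // confidence.
    float apply_ctrl_penalty(float confidence,
                             unsigned int ctrlChar,
                             unsigned int totalChar)
    {
        if (totalChar > 0)
            confidence -= confidence * ((float) ctrlChar / (float) totalChar);
        return confidence < 0.0f ? 0.0f : confidence;
    }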
2015-12-04 00:04:43 +01:00
Jehan
c4fa728e7a Merge branch 'master' of https://github.com/lovasoa/uchardet into lovasoa-master
Let's shortcut single-byte charset detection on invalid codepoints.
Merging and fixing the contributor's commit conflicts after the code
redesign: in particular, we added an illegal-character concept (illegal
characters were previously mixed with control characters in the current
charmaps, yet control characters are NOT to be considered invalid) and
constants instead of hardcoded numbers ('ILL' rather than 255).
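Illustration of the distinction (the ILL value comes from the message above; the CTR value and the map excerpt are made up):

    #define ILL 255 /* byte value not used at all by this charset */
    #define CTR 254 /* control character: legal input, just penalized */

    /* Excerpt-style illustration of a char-to-order map using the new
     * constants. */
    static const unsigned char example_CharToOrderMap[] = {
        CTR, CTR, CTR, CTR, /* 0x00 - 0x03: control characters */
        ILL, ILL,           /* byte values unused by this charset */
    };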
2015-12-03 19:26:19 +01:00
Jehan
4f1c3ff85e nsSBCharSetProber: multiply confidence by ratio of positive seqs per chars.
If all sequences in a text are positive sequences, the ratio of positive
sequences cannot tell the difference between 2 very close charsets.
A ratio of positive sequences per letter, on the other hand, can break
a tie between 2 encodings. If, when adding a letter, the number of
positive sequences does not increase, the confidence will decrease
(corresponding to the fact that it was likely not a letter).
On the other hand, if the number of positive sequences increases, so
does the confidence.
For instance, this fixes wrong detections between ISO-8859-1 and
ISO-8859-15: when letters only available in ISO-8859-15 appear in a
text, we expect the confidence to tilt towards the close yet slightly
different ISO-8859-15.
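A sketch of the tie-breaker, with illustrative names; the real code may scale or clamp differently:

    // Multiply the confidence by the ratio of positive sequences per
    // character, so letters that form no positive sequence under the wrong
    // charset pull its confidence down.
    float apply_positive_seq_ratio(float confidence,
                                   unsigned int positiveSeqs,
                                   unsigned int totalChar)
    {
        if (totalChar > 0)
            confidence *= (float) positiveSeqs / (float) totalChar;
        return confidence;
    }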
2015-11-30 19:52:07 +01:00
Jehan
dbb4c1d2ff nsSBCharSetProber: replace the fixed 64 SAMPLE_SIZE...
... with per-language model "frequent character" count.
2015-11-29 23:51:55 +01:00
Ophir LOJKINE
5ef60164fc Stop detection early on control characters 2015-11-24 22:07:41 +03:00
BYVoid
3601900164 Initial release. 2011-07-10 15:04:42 +08:00