Aug-11-2022, 05:26 PM
Hi,
I’m using pytesseract with a 99+% success rate.
The 1% of failed sentences are often caused by a drawing printed somewhere in the left margin.
I tested several things, like cropping, but:
when I invert the image (white letters on black instead of black on white), the problem seems to go away in some cases.
Has this been observed/documented before, or is it just a coincidence?
Hence the question: does OCR work better on inverted B&W images?
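For reference, here is the kind of inversion I mean, sketched with Pillow's `ImageOps.invert` (the `pytesseract` call is shown as a comment, since it needs the Tesseract binary installed; the tiny test image is just for illustration):

```python
from PIL import Image, ImageOps

def invert_for_ocr(img: Image.Image) -> Image.Image:
    """Return an inverted grayscale copy: black-on-white becomes white-on-black."""
    return ImageOps.invert(img.convert("L"))

# Illustration with a 2x1 grayscale image: one "ink" pixel, one background pixel.
img = Image.new("L", (2, 1))
img.putpixel((0, 0), 0)    # black "ink" pixel
img.putpixel((1, 0), 255)  # white background pixel

inv = invert_for_ocr(img)
# The ink pixel is now white (255) and the background black (0).

# With Tesseract installed, the inverted page would then be OCR'd as usual:
# import pytesseract
# text = pytesseract.image_to_string(inv)
```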
thx,
Paul
It is more important to do the right thing, than to do the thing right.(P.Drucker)
Better is the enemy of good. (Montesquieu) = French version for 'kiss'.