Tesseract-OCR helloworld

2021-08-03 18:16:48

Ubuntu installation

sudo apt install tesseract-ocr
pip install pytesseract
# On a Jetson Nano, numpy may crash with "Illegal instruction";
# adding this line to ~/.bashrc (sudo vim ~/.bashrc) works around it:
# export OPENBLAS_CORETYPE=ARMV8
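
To verify the installation from Python, pytesseract can report the version of the Tesseract binary it found (a quick sanity check before writing any OCR code):

import pytesseract

# Prints the version of the tesseract binary pytesseract located;
# raises TesseractNotFoundError if the binary is not on PATH.
print(pytesseract.get_tesseract_version())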

Python test

import cv2
import numpy as np
import pytesseract

def ocr_tesseract(path):
    # cv2.imread returns a BGR array, so convert with COLOR_BGR2GRAY.
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # With THRESH_OTSU the passed threshold (128) is ignored; Otsu's method
    # picks one automatically and returns it as the first value.
    _, img_bin = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    # Invert so the text is white on a black background for the morphology steps.
    inverted = cv2.bitwise_not(img_bin)
    # Erode then dilate (a morphological opening) to remove small speckle noise.
    kernel = np.ones((2, 1), np.uint8)
    opened = cv2.erode(inverted, kernel, iterations=1)
    opened = cv2.dilate(opened, kernel, iterations=1)
    return pytesseract.image_to_string(opened)

if __name__ == '__main__':
    print(ocr_tesseract("./test.jpg"))
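
Beyond plain text, pytesseract can also return per-word bounding boxes and confidences via image_to_data; a minimal sketch (reusing the same test image):

import cv2
import pytesseract
from pytesseract import Output

img = cv2.imread("./test.jpg")
# image_to_data returns word-level boxes and confidences as a dict of lists.
data = pytesseract.image_to_data(img, output_type=Output.DICT)
for text, conf, x, y, w, h in zip(data["text"], data["conf"],
                                  data["left"], data["top"],
                                  data["width"], data["height"]):
    if text.strip():  # skip whitespace-only rows
        print(f"{text!r} conf={conf} box=({x},{y},{w},{h})")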

Windows installation

https://github.com/UB-Mannheim/tesseract/wiki
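
On Windows, pytesseract usually cannot find tesseract.exe on its own, so point it at the install location after running the installer above (the path below is the installer's default and may differ on your machine):

import pytesseract

# Adjust to wherever the UB-Mannheim installer placed tesseract.exe.
pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"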

GitHub official page

https://github.com/tesseract-ocr/tesseract/

Google Cloud

https://cloud.google.com/vision/docs/ocr
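
For comparison, a minimal Google Cloud Vision OCR call might look like the sketch below (it assumes the google-cloud-vision package is installed and GOOGLE_APPLICATION_CREDENTIALS is configured; see the linked docs for the authoritative example):

from google.cloud import vision

def ocr_google_vision(path):
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    # text_detection returns all detected text; the first annotation
    # holds the full concatenated string.
    response = client.text_detection(image=image)
    if response.error.message:
        raise RuntimeError(response.error.message)
    return response.text_annotations[0].description if response.text_annotations else ""

print(ocr_google_vision("./test.jpg"))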

Chinese recognition

https://bbs.huaweicloud.com/blogs/143914
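
To recognize Chinese, install the simplified-Chinese language data first (on Ubuntu: sudo apt install tesseract-ocr-chi-sim) and pass lang='chi_sim'; a minimal sketch (the image path is a placeholder):

import cv2
import pytesseract

img = cv2.imread("./chinese.jpg")  # placeholder path
# lang='chi_sim' selects the simplified-Chinese traineddata;
# use 'chi_tra' for traditional Chinese, or 'chi_sim+eng' for mixed text.
print(pytesseract.image_to_string(img, lang="chi_sim"))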

test.jpg — recognized text:
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decode
configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple
network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on
two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to
train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including
ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU
score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the
Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.

[Finished in 2.6s]
