feat: switch backend to PaddleOCR-NCNN; switch project to CMake

1. Migrated the project backend to the PaddleOCR-NCNN algorithm; it has passed basic compatibility tests.
2. The project is now organized with CMake; to better accommodate third-party libraries, a QMake project is no longer provided.
3. Reorganized the rights/notice files and the code tree to minimize the risk of infringement.

Log: switch backend to PaddleOCR-NCNN; switch project to CMake
Change-Id: I4d5d2c5d37505a4a24b389b1a4c5d12f17bfa38c
This commit is contained in:
wangzhengyang
2022-05-10 09:54:44 +08:00
parent ecdd171c6f
commit 718c41634f
10018 changed files with 3593797 additions and 186748 deletions

@@ -0,0 +1,95 @@
Getting Started with Images {#tutorial_js_image_display}
===========================
Goals
-----
- Learn how to read an image and how to display it on a web page.
Read an image
-------------
OpenCV.js stores images as objects of type cv.Mat. We use the HTML canvas element to transfer pixel
data between cv.Mat and the web page, in both directions. The ImageData interface can represent or
set the underlying pixel data of an area of a canvas element.
@note Please refer to canvas docs for more details.
First, create an ImageData object from the canvas:
@code{.js}
let canvas = document.getElementById(canvasInputId);
let ctx = canvas.getContext('2d');
let imgData = ctx.getImageData(0, 0, canvas.width, canvas.height);
@endcode
Then, use cv.matFromImageData to construct a cv.Mat:
@code{.js}
let src = cv.matFromImageData(imgData);
@endcode
@note Because canvas only supports 8-bit RGBA images with continuous storage, the cv.Mat type is cv.CV_8UC4.
This differs from native OpenCV, where images returned and shown by the native **imread** and
**imshow** have their channels stored in BGR order.
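As an aside on the RGBA layout, the ImageData pixel buffer is a flat array holding four entries per pixel. A minimal sketch of how a pixel's channels are located (the helper name `pixelOffset` is ours for illustration, not part of any API):

```javascript
// ImageData stores pixels row-major as a flat RGBA array.
// The four channel values of the pixel at (x, y) start at this offset.
function pixelOffset(x, y, width) {
  return (y * width + x) * 4; // 4 entries per pixel: R, G, B, A
}

// e.g. reading the red channel of pixel (x, y):
// let r = imgData.data[pixelOffset(x, y, imgData.width)];
```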
Display an image
----------------
First, convert the type of src to cv.CV_8UC4:
@code{.js}
let dst = new cv.Mat();
// scale and shift are used to map the data to [0, 255].
src.convertTo(dst, cv.CV_8U, scale, shift);
// *** is GRAY, RGB, or RGBA, according to src.channels() is 1, 3 or 4.
cv.cvtColor(dst, dst, cv.COLOR_***2RGBA);
@endcode
Then, create an ImageData object from dst:
@code{.js}
let imgData = new ImageData(new Uint8ClampedArray(dst.data), dst.cols, dst.rows);
@endcode
Finally, display it:
@code{.js}
let canvas = document.getElementById(canvasOutputId);
let ctx = canvas.getContext('2d');
ctx.clearRect(0, 0, canvas.width, canvas.height);
canvas.width = imgData.width;
canvas.height = imgData.height;
ctx.putImageData(imgData, 0, 0);
@endcode
In OpenCV.js
------------
OpenCV.js implements image reading and showing using the method above.
We use **cv.imread (imageSource)** to read an image from an HTML canvas or img element.
@param imageSource canvas element or id, or img element or id.
@return mat with channels stored in RGBA order.
We use **cv.imshow (canvasSource, mat)** to display it. The function may scale the mat,
depending on its depth:
- If the mat is 8-bit unsigned, it is displayed as is.
- If the mat is 16-bit unsigned or 32-bit integer, the pixels are divided by 256. That
is, the value range [0,255*256] is mapped to [0,255].
- If the mat is 32-bit floating-point, the pixel values are multiplied by 255. That is,
the value range [0,1] is mapped to [0,255].
@param canvasSource canvas element or id.
@param mat mat to be shown.
The image reading and showing code above can be simplified as below.
@code{.js}
let img = cv.imread(imageSource);
cv.imshow(canvasOutput, img);
img.delete();
@endcode
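The depth-dependent scaling that cv.imshow applies can be sketched as a plain function. This is only an illustration of the mapping rules listed above, not OpenCV.js's actual implementation (which operates on whole cv.Mat objects):

```javascript
// Sketch of the per-pixel value mapping cv.imshow applies for each depth.
// Depth names mirror the OpenCV constants; the function itself is illustrative.
function mapToDisplayRange(value, depth) {
  switch (depth) {
    case 'CV_8U':  return value;       // 8-bit unsigned: displayed as is
    case 'CV_16U':
    case 'CV_32S': return value / 256; // [0, 255*256] -> [0, 255]
    case 'CV_32F': return value * 255; // [0, 1] -> [0, 255]
    default: throw new Error('unsupported depth: ' + depth);
  }
}
```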
Try it
------
\htmlonly
<iframe src="../../js_image_display.html" width="100%"
onload="this.style.height=this.contentDocument.body.scrollHeight +'px';">
</iframe>
\endhtmlonly


@@ -0,0 +1,14 @@
GUI Features {#tutorial_js_table_of_contents_gui}
============
- @subpage tutorial_js_image_display
Learn to load an image and display it on a web page
- @subpage tutorial_js_video_display
Learn to capture video from a camera and play it
- @subpage tutorial_js_trackbar
Create a trackbar to control certain parameters


@@ -0,0 +1,73 @@
Add a Trackbar to Your Application {#tutorial_js_trackbar}
==================================
Goal
----
- Use HTML DOM Input Range Object to add a trackbar to your application.
Code Demo
---------
Here, we will create a simple application that blends two images. We will let the user enter the
weight by using the trackbar.
First, we need to create three canvas elements: two for input and one for output. Please refer to
the tutorial @ref tutorial_js_image_display.
@code{.js}
let src1 = cv.imread('canvasInput1');
let src2 = cv.imread('canvasInput2');
@endcode
Then, we use the HTML DOM Input Range Object to implement the trackbar, as shown below.
![](images/Trackbar_Tutorial_Range.png)
@note &lt;input&gt; elements with type="range" are not supported in Internet Explorer 9 and earlier versions.
You can create an &lt;input&gt; element with type="range" with the document.createElement() method:
@code{.js}
let x = document.createElement('INPUT');
x.setAttribute('type', 'range');
@endcode
You can access an &lt;input&gt; element with type="range" with getElementById():
@code{.js}
let x = document.getElementById('myRange');
@endcode
To work as a trackbar, the range element needs a name, a default value, minimum and maximum values,
a step, and a callback function that is executed every time the trackbar value changes. The callback
function always has a default argument, which is the trackbar position. Additionally, a text element to
display the trackbar value is useful. In our case, we can create the trackbar as below:
@code{.html}
Weight: <input type="range" id="trackbar" value="50" min="0" max="100" step="1" oninput="callback()">
<input type="text" id="weightValue" size="3" value="50"/>
@endcode
Finally, we can use the trackbar value in the callback function, blend the two images, and display the result.
@code{.js}
let weightValue = document.getElementById('weightValue');
let trackbar = document.getElementById('trackbar');
weightValue.setAttribute('value', trackbar.value);
let alpha = trackbar.value/trackbar.max;
let beta = ( 1.0 - alpha );
let src1 = cv.imread('canvasInput1');
let src2 = cv.imread('canvasInput2');
let dst = new cv.Mat();
cv.addWeighted( src1, alpha, src2, beta, 0.0, dst, -1);
cv.imshow('canvasOutput', dst);
dst.delete();
src1.delete();
src2.delete();
@endcode
@sa cv.addWeighted
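The per-pixel arithmetic behind the blend can be written out as a plain function. This is a sketch of what cv.addWeighted computes elementwise (dst = src1\*alpha + src2\*beta + gamma), with the weights derived from the trackbar exactly as in the callback above; the helper name is ours for illustration:

```javascript
// Elementwise blend that cv.addWeighted performs, shown for one value.
// Illustrative only; the real function operates on whole cv.Mat objects.
function blendValue(v1, v2, trackbarValue, trackbarMax, gamma = 0) {
  const alpha = trackbarValue / trackbarMax; // weight of the first image
  const beta = 1.0 - alpha;                  // weight of the second image
  return v1 * alpha + v2 * beta + gamma;
}
```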
Try it
------
\htmlonly
<iframe src="../../js_trackbar.html" width="100%"
onload="this.style.height=this.contentDocument.body.scrollHeight +'px';">
</iframe>
\endhtmlonly


@@ -0,0 +1,105 @@
Getting Started with Videos {#tutorial_js_video_display}
===========================
Goal
----
- Learn to capture video from a camera and display it.
Capture video from camera
-------------------------
Often, we have to capture a live stream with a camera. In OpenCV.js, we use [WebRTC](https://webrtc.org/)
and the HTML canvas element to implement this. Let's capture a video from the camera (built-in
or USB), convert it into grayscale video and display it.
To capture a video, you need to add some HTML elements to the web page:
- a &lt;video&gt; to display video from camera directly
- a &lt;canvas&gt; to transfer video to canvas ImageData frame-by-frame
- another &lt;canvas&gt; to display the video OpenCV.js gets
First, we use WebRTC navigator.mediaDevices.getUserMedia to get the media stream.
@code{.js}
let video = document.getElementById("videoInput"); // video is the id of video tag
navigator.mediaDevices.getUserMedia({ video: true, audio: false })
.then(function(stream) {
video.srcObject = stream;
video.play();
})
.catch(function(err) {
console.log("An error occurred! " + err);
});
@endcode
@note This step is unnecessary when you capture video from a video file. But note that the
HTML video element only supports the video formats Ogg (Theora), WebM (VP8/VP9) and MP4 (H.264).
Playing video
-------------
Now the browser has the camera stream. We then use the CanvasRenderingContext2D.drawImage() method
of the Canvas 2D API to draw the video onto the canvas, and finally use the method from @ref tutorial_js_image_display
to read and display the image in the canvas. To play the video, cv.imshow() should be executed every delay
milliseconds; we recommend the setTimeout() method. If the video is 30fps, the delay
should be (1000/30 - processing_time) milliseconds.
@code{.js}
let canvasFrame = document.getElementById("canvasFrame"); // canvasFrame is the id of <canvas>
let context = canvasFrame.getContext("2d");
let src = new cv.Mat(height, width, cv.CV_8UC4);
let dst = new cv.Mat(height, width, cv.CV_8UC1);
const FPS = 30;
function processVideo() {
let begin = Date.now();
context.drawImage(video, 0, 0, width, height);
src.data.set(context.getImageData(0, 0, width, height).data);
cv.cvtColor(src, dst, cv.COLOR_RGBA2GRAY);
cv.imshow("canvasOutput", dst); // canvasOutput is the id of another <canvas>;
// schedule next one.
let delay = 1000/FPS - (Date.now() - begin);
setTimeout(processVideo, delay);
}
// schedule first one.
setTimeout(processVideo, 0);
@endcode
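The scheduling arithmetic above can be isolated into a small helper. A sketch under our own naming (the clamping to zero is our addition; the tutorial code passes the raw difference to setTimeout, which already treats negative delays as 0):

```javascript
// Delay before scheduling the next frame so that processing time is
// compensated and the target frame rate is maintained.
function nextFrameDelay(fps, processingMs) {
  const delay = 1000 / fps - processingMs;
  return Math.max(0, delay); // never return a negative delay
}
```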
OpenCV.js implements **cv.VideoCapture (videoSource)** using the method above. You do not need to
add the hidden canvas element manually.
@param videoSource the video id or element.
@return cv.VideoCapture instance
We use **read (image)** to get one frame of the video. For performance reasons, the image should be
constructed with the cv.CV_8UC4 type and the same size as the video.
@param image image with cv.CV_8UC4 type and same size as the video.
The video playing code above can be simplified as below.
@code{.js}
let src = new cv.Mat(height, width, cv.CV_8UC4);
let dst = new cv.Mat(height, width, cv.CV_8UC1);
let cap = new cv.VideoCapture(videoSource);
const FPS = 30;
function processVideo() {
let begin = Date.now();
cap.read(src);
cv.cvtColor(src, dst, cv.COLOR_RGBA2GRAY);
cv.imshow("canvasOutput", dst);
// schedule next one.
let delay = 1000/FPS - (Date.now() - begin);
setTimeout(processVideo, delay);
}
// schedule first one.
setTimeout(processVideo, 0);
@endcode
@note Remember to delete src and dst after you stop processing.
Try it
------
\htmlonly
<iframe src="../../js_video_display.html" width="100%"
onload="this.style.height=this.contentDocument.body.scrollHeight +'px';">
</iframe>
\endhtmlonly