From ec262830d8db7676ddc6efc2cbe4fe296fe082fe Mon Sep 17 00:00:00 2001 From: Xnuk Shuman Date: Mon, 12 Sep 2022 22:07:45 +0900 Subject: [PATCH 1/7] [cv2] `add` and `subtract` can accept numeric values --- cv2/__init__.pyi | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/cv2/__init__.pyi b/cv2/__init__.pyi index 0b3a452b..08ee7a28 100644 --- a/cv2/__init__.pyi +++ b/cv2/__init__.pyi @@ -1694,7 +1694,7 @@ def adaptiveThreshold(src: Mat, maxValue, adaptiveMethod, thresholdType, blockSi 'adaptiveThreshold(src, maxValue, adaptiveMethod, thresholdType, blockSize, C[, dst]) -> dst\n. @brief Applies an adaptive threshold to an array.\n. \n. The function transforms a grayscale image to a binary image according to the formulae:\n. - **THRESH_BINARY**\n. \\f[dst(x,y) = \\fork{\\texttt{maxValue}}{if \\(src(x,y) > T(x,y)\\)}{0}{otherwise}\\f]\n. - **THRESH_BINARY_INV**\n. \\f[dst(x,y) = \\fork{0}{if \\(src(x,y) > T(x,y)\\)}{\\texttt{maxValue}}{otherwise}\\f]\n. where \\f$T(x,y)\\f$ is a threshold calculated individually for each pixel (see adaptiveMethod parameter).\n. \n. The function can process the image in-place.\n. \n. @param src Source 8-bit single-channel image.\n. @param dst Destination image of the same size and the same type as src.\n. @param maxValue Non-zero value assigned to the pixels for which the condition is satisfied\n. @param adaptiveMethod Adaptive thresholding algorithm to use, see #AdaptiveThresholdTypes.\n. The #BORDER_REPLICATE | #BORDER_ISOLATED is used to process boundaries.\n. @param thresholdType Thresholding type that must be either #THRESH_BINARY or #THRESH_BINARY_INV,\n. see #ThresholdTypes.\n. @param blockSize Size of a pixel neighborhood that is used to calculate a threshold value for the\n. pixel: 3, 5, 7, and so on.\n. @param C Constant subtracted from the mean or weighted mean (see the details below). Normally, it\n. is positive but may be zero or negative as well.\n. \n. @sa threshold, blur, GaussianBlur' ... -def add(src1: Mat, src2: Mat, dts: Mat = ..., mask: Mat = ..., dtype=...) -> typing.Any: +def add(src1: typing.Union[Mat, float, int], src2: typing.Union[Mat, float, int], dts: Mat = ..., mask: Mat = ..., dtype=...) -> typing.Any: 'add(src1, src2[, dst[, mask[, dtype]]]) -> dst\n. @brief Calculates the per-element sum of two arrays or an array and a scalar.\n. \n. The function add calculates:\n. - Sum of two arrays when both input arrays have the same size and the same number of channels:\n. \\f[\\texttt{dst}(I) = \\texttt{saturate} ( \\texttt{src1}(I) + \\texttt{src2}(I)) \\quad \\texttt{if mask}(I) \\ne0\\f]\n. - Sum of an array and a scalar when src2 is constructed from Scalar or has the same number of\n. elements as `src1.channels()`:\n. \\f[\\texttt{dst}(I) = \\texttt{saturate} ( \\texttt{src1}(I) + \\texttt{src2} ) \\quad \\texttt{if mask}(I) \\ne0\\f]\n. - Sum of a scalar and an array when src1 is constructed from Scalar or has the same number of\n. elements as `src2.channels()`:\n. \\f[\\texttt{dst}(I) = \\texttt{saturate} ( \\texttt{src1} + \\texttt{src2}(I) ) \\quad \\texttt{if mask}(I) \\ne0\\f]\n. where `I` is a multi-dimensional index of array elements. In case of multi-channel arrays, each\n. channel is processed independently.\n. \n. The first function in the list above can be replaced with matrix expressions:\n. @code{.cpp}\n. dst = src1 + src2;\n. dst += src1; // equivalent to add(dst, src1, dst);\n. @endcode\n. The input arrays and the output array can all have the same or different depths. For example, you\n. 
can add a 16-bit unsigned array to a 8-bit signed array and store the sum as a 32-bit\n. floating-point array. Depth of the output array is determined by the dtype parameter. In the second\n. and third cases above, as well as in the first case, when src1.depth() == src2.depth(), dtype can\n. be set to the default -1. In this case, the output array will have the same depth as the input\n. array, be it src1, src2 or both.\n. @note Saturation is not applied when the output array has the depth CV_32S. You may even get\n. result of an incorrect sign in the case of overflow.\n. @param src1 first input array or a scalar.\n. @param src2 second input array or a scalar.\n. @param dst output array that has the same size and number of channels as the input array(s); the\n. depth is defined by dtype or src1/src2.\n. @param mask optional operation mask - 8-bit single channel array, that specifies elements of the\n. output array to be changed.\n. @param dtype optional depth of the output array (see the discussion below).\n. @sa subtract, addWeighted, scaleAdd, Mat::convertTo' ... @@ -3010,7 +3010,7 @@ def stylization(src: Mat, dts: Mat = ..., sigma_s=..., sigma_r=...) -> typing.An 'stylization(src[, dst[, sigma_s[, sigma_r]]]) -> dst\n. @brief Stylization aims to produce digital imagery with a wide variety of effects not focused on\n. photorealism. Edge-aware filters are ideal for stylization, as they can abstract regions of low\n. contrast while preserving, or enhancing, high-contrast features.\n. \n. @param src Input 8-bit 3-channel image.\n. @param dst Output image with the same size and type as src.\n. @param sigma_s %Range between 0 to 200.\n. @param sigma_r %Range between 0 to 1.' ... -def subtract(src1: Mat, src2: Mat, dts: Mat = ..., mask: Mat = ..., dtype=...) -> typing.Any: +def subtract(src1: typing.Union[Mat, int, float], src2: typing.Union[Mat, int, float], dts: Mat = ..., mask: Mat = ..., dtype=...) -> typing.Any: 'subtract(src1, src2[, dst[, mask[, dtype]]]) -> dst\n. @brief Calculates the per-element difference between two arrays or array and a scalar.\n. \n. The function subtract calculates:\n. - Difference between two arrays, when both input arrays have the same size and the same number of\n. channels:\n. \\f[\\texttt{dst}(I) = \\texttt{saturate} ( \\texttt{src1}(I) - \\texttt{src2}(I)) \\quad \\texttt{if mask}(I) \\ne0\\f]\n. - Difference between an array and a scalar, when src2 is constructed from Scalar or has the same\n. number of elements as `src1.channels()`:\n. \\f[\\texttt{dst}(I) = \\texttt{saturate} ( \\texttt{src1}(I) - \\texttt{src2} ) \\quad \\texttt{if mask}(I) \\ne0\\f]\n. - Difference between a scalar and an array, when src1 is constructed from Scalar or has the same\n. number of elements as `src2.channels()`:\n. \\f[\\texttt{dst}(I) = \\texttt{saturate} ( \\texttt{src1} - \\texttt{src2}(I) ) \\quad \\texttt{if mask}(I) \\ne0\\f]\n. - The reverse difference between a scalar and an array in the case of `SubRS`:\n. \\f[\\texttt{dst}(I) = \\texttt{saturate} ( \\texttt{src2} - \\texttt{src1}(I) ) \\quad \\texttt{if mask}(I) \\ne0\\f]\n. where I is a multi-dimensional index of array elements. In case of multi-channel arrays, each\n. channel is processed independently.\n. \n. The first function in the list above can be replaced with matrix expressions:\n. @code{.cpp}\n. dst = src1 - src2;\n. dst -= src1; // equivalent to subtract(dst, src1, dst);\n. @endcode\n. The input arrays and the output array can all have the same or different depths. For example, you\n. 
can subtract to 8-bit unsigned arrays and store the difference in a 16-bit signed array. Depth of\n. the output array is determined by dtype parameter. In the second and third cases above, as well as\n. in the first case, when src1.depth() == src2.depth(), dtype can be set to the default -1. In this\n. case the output array will have the same depth as the input array, be it src1, src2 or both.\n. @note Saturation is not applied when the output array has the depth CV_32S. You may even get\n. result of an incorrect sign in the case of overflow.\n. @param src1 first input array or a scalar.\n. @param src2 second input array or a scalar.\n. @param dst output array of the same size and the same number of channels as the input array.\n. @param mask optional operation mask; this is an 8-bit single channel array that specifies elements\n. of the output array to be changed.\n. @param dtype optional depth of the output array\n. @sa add, addWeighted, scaleAdd, Mat::convertTo' ... From 431294c01161c64b4c4aa93d0352d8eb5e075ea6 Mon Sep 17 00:00:00 2001 From: Xnuk Shuman Date: Tue, 13 Sep 2022 01:45:20 +0900 Subject: [PATCH 2/7] [cv2] `normalize` receives snake_cased kwargs --- cv2/__init__.pyi | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/cv2/__init__.pyi b/cv2/__init__.pyi index 08ee7a28..9b65a1e8 100644 --- a/cv2/__init__.pyi +++ b/cv2/__init__.pyi @@ -2725,7 +2725,7 @@ def norm(src1: Mat, src2: Mat, normType: int = ..., mask: Mat = ...) -> float: 'norm(src1, src2[, normType[, mask]]) -> retval\n. @brief Calculates the absolute norm of an array.\n. \n. This version of #norm calculates the absolute norm of src1. The type of norm to calculate is specified using #NormTypes.\n. \n. As example for one array consider the function \\f$r(x)= \\begin{pmatrix} x \\\\ 1-x \\end{pmatrix}, x \\in [-1;1]\\f$.\n. The \\f$ L_{1}, L_{2} \\f$ and \\f$ L_{\\infty} \\f$ norm for the sample value \\f$r(-1) = \\begin{pmatrix} -1 \\\\ 2 \\end{pmatrix}\\f$\n. is calculated as follows\n. \\f{align*}\n. \\| r(-1) \\|_{L_1} &= |-1| + |2| = 3 \\\\\n. \\| r(-1) \\|_{L_2} &= \\sqrt{(-1)^{2} + (2)^{2}} = \\sqrt{5} \\\\\n. \\| r(-1) \\|_{L_\\infty} &= \\max(|-1|,|2|) = 2\n. \\f}\n. and for \\f$r(0.5) = \\begin{pmatrix} 0.5 \\\\ 0.5 \\end{pmatrix}\\f$ the calculation is\n. \\f{align*}\n. \\| r(0.5) \\|_{L_1} &= |0.5| + |0.5| = 1 \\\\\n. \\| r(0.5) \\|_{L_2} &= \\sqrt{(0.5)^{2} + (0.5)^{2}} = \\sqrt{0.5} \\\\\n. \\| r(0.5) \\|_{L_\\infty} &= \\max(|0.5|,|0.5|) = 0.5.\n. \\f}\n. The following graphic shows all values for the three norm functions \\f$\\| r(x) \\|_{L_1}, \\| r(x) \\|_{L_2}\\f$ and \\f$\\| r(x) \\|_{L_\\infty}\\f$.\n. It is notable that the \\f$ L_{1} \\f$ norm forms the upper and the \\f$ L_{\\infty} \\f$ norm forms the lower border for the example function \\f$ r(x) \\f$.\n. ![Graphs for the different norm functions from the above example](pics/NormTypes_OneArray_1-2-INF.png)\n. \n. When the mask parameter is specified and it is not empty, the norm is\n. \n. If normType is not specified, #NORM_L2 is used.\n. calculated only over the region specified by the mask.\n. \n. Multi-channel input arrays are treated as single-channel arrays, that is,\n. the results for all channels are combined.\n. \n. Hamming norms can only be calculated with CV_8U depth arrays.\n. \n. @param src1 first input array.\n. @param normType type of the norm (see #NormTypes).\n. @param mask optional operation mask; it must have the same size as src1 and CV_8UC1 type.\n\n\n\nnorm(src1, src2[, normType[, mask]]) -> retval\n. 
@brief Calculates an absolute difference norm or a relative difference norm.\n. \n. This version of cv::norm calculates the absolute difference norm\n. or the relative difference norm of arrays src1 and src2.\n. The type of norm to calculate is specified using #NormTypes.\n. \n. @param src1 first input array.\n. @param src2 second input array of the same size and the same type as src1.\n. @param normType type of the norm (see #NormTypes).\n. @param mask optional operation mask; it must have the same size as src1 and CV_8UC1 type.' ... -def normalize(src: Mat, dts: Mat, alpha=..., beta=..., normType: int = ..., dtype=..., mask: Mat = ...) -> Mat: +def normalize(src: Mat, dts: Mat, alpha=..., beta=..., norm_type: int = ..., dtype=..., mask: Mat = ...) -> Mat: 'normalize(src, dst[, alpha[, beta[, normType[, dtype[, mask]]]]]) -> dst\n. @brief Normalizes the norm or value range of an array.\n. \n. The function cv::normalize normalizes scale and shift the input array elements so that\n. \\f[\\| \\texttt{dst} \\| _{L_p}= \\texttt{alpha}\\f]\n. (where p=Inf, 1 or 2) when normType=NORM_INF, NORM_L1, or NORM_L2, respectively; or so that\n. \\f[\\min _I \\texttt{dst} (I)= \\texttt{alpha} , \\, \\, \\max _I \\texttt{dst} (I)= \\texttt{beta}\\f]\n. \n. when normType=NORM_MINMAX (for dense arrays only). The optional mask specifies a sub-array to be\n. normalized. This means that the norm or min-n-max are calculated over the sub-array, and then this\n. sub-array is modified to be normalized. If you want to only use the mask to calculate the norm or\n. min-max but modify the whole array, you can use norm and Mat::convertTo.\n. \n. In case of sparse matrices, only the non-zero values are analyzed and transformed. Because of this,\n. the range transformation for sparse matrices is not allowed since it can shift the zero level.\n. \n. Possible usage with some positive example data:\n. @code{.cpp}\n. vector positiveData = { 2.0, 8.0, 10.0 };\n. vector normalizedData_l1, normalizedData_l2, normalizedData_inf, normalizedData_minmax;\n. \n. // Norm to probability (total count)\n. // sum(numbers) = 20.0\n. // 2.0 0.1 (2.0/20.0)\n. // 8.0 0.4 (8.0/20.0)\n. // 10.0 0.5 (10.0/20.0)\n. normalize(positiveData, normalizedData_l1, 1.0, 0.0, NORM_L1);\n. \n. // Norm to unit vector: ||positiveData|| = 1.0\n. // 2.0 0.15\n. // 8.0 0.62\n. // 10.0 0.77\n. normalize(positiveData, normalizedData_l2, 1.0, 0.0, NORM_L2);\n. \n. // Norm to max element\n. // 2.0 0.2 (2.0/10.0)\n. // 8.0 0.8 (8.0/10.0)\n. // 10.0 1.0 (10.0/10.0)\n. normalize(positiveData, normalizedData_inf, 1.0, 0.0, NORM_INF);\n. \n. // Norm to range [0.0;1.0]\n. // 2.0 0.0 (shift to left border)\n. // 8.0 0.75 (6.0/8.0)\n. // 10.0 1.0 (shift to right border)\n. normalize(positiveData, normalizedData_minmax, 1.0, 0.0, NORM_MINMAX);\n. @endcode\n. \n. @param src input array.\n. @param dst output array of the same size as src .\n. @param alpha norm value to normalize to or the lower range boundary in case of the range\n. normalization.\n. @param beta upper range boundary in case of the range normalization; it is not used for the norm\n. normalization.\n. @param normType normalization type (see cv::NormTypes).\n. @param dtype when negative, the output array has the same type as src; otherwise, it has the same\n. number of channels as src and the depth =CV_MAT_DEPTH(dtype).\n. @param mask optional operation mask.\n. @sa norm, Mat::convertTo, SparseMat::convertTo' ... 
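A usage sketch of what the two changes above are meant to allow (scalar operands for `add`/`subtract` from patch 1, and the snake_cased `norm_type` keyword from patch 2). These calls should now type-check against the updated stubs; the array contents and values are hypothetical:

    import cv2
    import numpy as np

    img = np.arange(16, dtype=np.uint8).reshape(4, 4)  # hypothetical single-channel image

    # Scalar second operands, previously rejected by the `src2: Mat` annotation.
    brighter = cv2.add(img, 10)
    darker = cv2.subtract(img, 10)

    # Keyword call using the snake_cased name the binding actually accepts.
    out = np.zeros_like(img)
    cv2.normalize(img, out, alpha=0, beta=255, norm_type=cv2.NORM_MINMAX)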
From e7f948bc587837d8b8e08848f5e32548551df19e Mon Sep 17 00:00:00 2001 From: Xnuk Shuman Date: Tue, 13 Sep 2022 02:01:37 +0900 Subject: [PATCH 3/7] [cv2] fix hconcat/vconcat --- cv2/__init__.pyi | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-) diff --git a/cv2/__init__.pyi b/cv2/__init__.pyi index 9b65a1e8..7ab288e5 100644 --- a/cv2/__init__.pyi +++ b/cv2/__init__.pyi @@ -2508,10 +2508,13 @@ def haveOpenVX() -> typing.Any: 'haveOpenVX() -> retval\n.' ... -def hconcat(src: Mat, dts: Mat = ...) -> typing.Any: +def hconcat(src: list[Mat], dts: Mat = ...) -> Mat: 'hconcat(src[, dst]) -> dst\n. @overload\n. @code{.cpp}\n. std::vector matrices = { cv::Mat(4, 1, CV_8UC1, cv::Scalar(1)),\n. cv::Mat(4, 1, CV_8UC1, cv::Scalar(2)),\n. cv::Mat(4, 1, CV_8UC1, cv::Scalar(3)),};\n. \n. cv::Mat out;\n. cv::hconcat( matrices, out );\n. //out:\n. //[1, 2, 3;\n. // 1, 2, 3;\n. // 1, 2, 3;\n. // 1, 2, 3]\n. @endcode\n. @param src input array or vector of matrices. all of the matrices must have the same number of rows and the same depth.\n. @param dst output array. It has the same number of rows and depth as the src, and the sum of cols of the src.\n. same depth.' ... +def hconcat(src1: Mat, src2: Mat, dts: Mat = ...) -> Mat: + ... + def idct(src: Mat, dts: Mat = ..., flags: int = ...) -> typing.Any: 'idct(src[, dst[, flags]]) -> dst\n. @brief Calculates the inverse Discrete Cosine Transform of a 1D or 2D array.\n. \n. idct(src, dst, flags) is equivalent to dct(src, dst, flags | DCT_INVERSE).\n. @param src input floating-point single-channel array.\n. @param dst output array of the same size and type as src.\n. @param flags operation flags.\n. @sa dct, dft, idft, getOptimalDFTSize' ... @@ -3066,10 +3069,13 @@ def validateDisparity(disparity, cost, minDisparity, numberOfDisparities, disp12 'validateDisparity(disparity, cost, minDisparity, numberOfDisparities[, disp12MaxDisp]) -> disparity\n.' ... -def vconcat(src: Mat, dts: Mat = ...) -> typing.Any: +def vconcat(src: list[Mat], dts: Mat = ...) -> Mat: 'vconcat(src[, dst]) -> dst\n. @overload\n. @code{.cpp}\n. std::vector matrices = { cv::Mat(1, 4, CV_8UC1, cv::Scalar(1)),\n. cv::Mat(1, 4, CV_8UC1, cv::Scalar(2)),\n. cv::Mat(1, 4, CV_8UC1, cv::Scalar(3)),};\n. \n. cv::Mat out;\n. cv::vconcat( matrices, out );\n. //out:\n. //[1, 1, 1, 1;\n. // 2, 2, 2, 2;\n. // 3, 3, 3, 3]\n. @endcode\n. @param src input array or vector of matrices. all of the matrices must have the same number of cols and the same depth\n. @param dst output array. It has the same number of cols and depth as the src, and the sum of rows of the src.\n. same depth.' ... +def vconcat(src1: Mat, src2: Mat, dts: Mat = ...) -> Mat: + ... + def waitKey(delay=...) -> typing.Any: 'waitKey([, delay]) -> retval\n. @brief Waits for a pressed key.\n. \n. The function waitKey waits for a key event infinitely (when \\f$\\texttt{delay}\\leq 0\\f$ ) or for delay\n. milliseconds, when it is positive. Since the OS has a minimum time between switching threads, the\n. function will not wait exactly delay ms, it will wait at least delay ms, depending on what else is\n. running on your computer at that time. It returns the code of the pressed key or -1 if no key was\n. pressed before the specified time had elapsed.\n. \n. @note\n. \n. This function is the only method in HighGUI that can fetch and handle events, so it needs to be\n. called periodically for normal event processing unless HighGUI is used within an environment that\n. takes care of event processing.\n. \n. @note\n. \n. 
The function only works if there is at least one HighGUI window created and the window is active.\n. If there are several HighGUI windows, any of them can be active.\n. \n. @param delay Delay in milliseconds. 0 is the special value that means "forever".' ... From 2864defbf851f5e2cb8d7e40818933ba8c887a01 Mon Sep 17 00:00:00 2001 From: Xnuk Shuman Date: Tue, 13 Sep 2022 02:06:03 +0900 Subject: [PATCH 4/7] [cv2] use `|` instead of typing.Union --- cv2/__init__.pyi | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/cv2/__init__.pyi b/cv2/__init__.pyi index 7ab288e5..a3d845e7 100644 --- a/cv2/__init__.pyi +++ b/cv2/__init__.pyi @@ -1694,7 +1694,7 @@ def adaptiveThreshold(src: Mat, maxValue, adaptiveMethod, thresholdType, blockSi 'adaptiveThreshold(src, maxValue, adaptiveMethod, thresholdType, blockSize, C[, dst]) -> dst\n. @brief Applies an adaptive threshold to an array.\n. \n. The function transforms a grayscale image to a binary image according to the formulae:\n. - **THRESH_BINARY**\n. \\f[dst(x,y) = \\fork{\\texttt{maxValue}}{if \\(src(x,y) > T(x,y)\\)}{0}{otherwise}\\f]\n. - **THRESH_BINARY_INV**\n. \\f[dst(x,y) = \\fork{0}{if \\(src(x,y) > T(x,y)\\)}{\\texttt{maxValue}}{otherwise}\\f]\n. where \\f$T(x,y)\\f$ is a threshold calculated individually for each pixel (see adaptiveMethod parameter).\n. \n. The function can process the image in-place.\n. \n. @param src Source 8-bit single-channel image.\n. @param dst Destination image of the same size and the same type as src.\n. @param maxValue Non-zero value assigned to the pixels for which the condition is satisfied\n. @param adaptiveMethod Adaptive thresholding algorithm to use, see #AdaptiveThresholdTypes.\n. The #BORDER_REPLICATE | #BORDER_ISOLATED is used to process boundaries.\n. @param thresholdType Thresholding type that must be either #THRESH_BINARY or #THRESH_BINARY_INV,\n. see #ThresholdTypes.\n. @param blockSize Size of a pixel neighborhood that is used to calculate a threshold value for the\n. pixel: 3, 5, 7, and so on.\n. @param C Constant subtracted from the mean or weighted mean (see the details below). Normally, it\n. is positive but may be zero or negative as well.\n. \n. @sa threshold, blur, GaussianBlur' ... -def add(src1: typing.Union[Mat, float, int], src2: typing.Union[Mat, float, int], dts: Mat = ..., mask: Mat = ..., dtype=...) -> typing.Any: +def add(src1: Mat | float | int, src2: Mat | float | int, dts: Mat = ..., mask: Mat = ..., dtype=...) -> typing.Any: 'add(src1, src2[, dst[, mask[, dtype]]]) -> dst\n. @brief Calculates the per-element sum of two arrays or an array and a scalar.\n. \n. The function add calculates:\n. - Sum of two arrays when both input arrays have the same size and the same number of channels:\n. \\f[\\texttt{dst}(I) = \\texttt{saturate} ( \\texttt{src1}(I) + \\texttt{src2}(I)) \\quad \\texttt{if mask}(I) \\ne0\\f]\n. - Sum of an array and a scalar when src2 is constructed from Scalar or has the same number of\n. elements as `src1.channels()`:\n. \\f[\\texttt{dst}(I) = \\texttt{saturate} ( \\texttt{src1}(I) + \\texttt{src2} ) \\quad \\texttt{if mask}(I) \\ne0\\f]\n. - Sum of a scalar and an array when src1 is constructed from Scalar or has the same number of\n. elements as `src2.channels()`:\n. \\f[\\texttt{dst}(I) = \\texttt{saturate} ( \\texttt{src1} + \\texttt{src2}(I) ) \\quad \\texttt{if mask}(I) \\ne0\\f]\n. where `I` is a multi-dimensional index of array elements. In case of multi-channel arrays, each\n. channel is processed independently.\n. \n. 
The first function in the list above can be replaced with matrix expressions:\n. @code{.cpp}\n. dst = src1 + src2;\n. dst += src1; // equivalent to add(dst, src1, dst);\n. @endcode\n. The input arrays and the output array can all have the same or different depths. For example, you\n. can add a 16-bit unsigned array to a 8-bit signed array and store the sum as a 32-bit\n. floating-point array. Depth of the output array is determined by the dtype parameter. In the second\n. and third cases above, as well as in the first case, when src1.depth() == src2.depth(), dtype can\n. be set to the default -1. In this case, the output array will have the same depth as the input\n. array, be it src1, src2 or both.\n. @note Saturation is not applied when the output array has the depth CV_32S. You may even get\n. result of an incorrect sign in the case of overflow.\n. @param src1 first input array or a scalar.\n. @param src2 second input array or a scalar.\n. @param dst output array that has the same size and number of channels as the input array(s); the\n. depth is defined by dtype or src1/src2.\n. @param mask optional operation mask - 8-bit single channel array, that specifies elements of the\n. output array to be changed.\n. @param dtype optional depth of the output array (see the discussion below).\n. @sa subtract, addWeighted, scaleAdd, Mat::convertTo' ... @@ -3013,7 +3013,7 @@ def stylization(src: Mat, dts: Mat = ..., sigma_s=..., sigma_r=...) -> typing.An 'stylization(src[, dst[, sigma_s[, sigma_r]]]) -> dst\n. @brief Stylization aims to produce digital imagery with a wide variety of effects not focused on\n. photorealism. Edge-aware filters are ideal for stylization, as they can abstract regions of low\n. contrast while preserving, or enhancing, high-contrast features.\n. \n. @param src Input 8-bit 3-channel image.\n. @param dst Output image with the same size and type as src.\n. @param sigma_s %Range between 0 to 200.\n. @param sigma_r %Range between 0 to 1.' ... -def subtract(src1: typing.Union[Mat, int, float], src2: typing.Union[Mat, int, float], dts: Mat = ..., mask: Mat = ..., dtype=...) -> typing.Any: +def subtract(src1: Mat | float | int, src2: Mat | float | int, dts: Mat = ..., mask: Mat = ..., dtype=...) -> typing.Any: 'subtract(src1, src2[, dst[, mask[, dtype]]]) -> dst\n. @brief Calculates the per-element difference between two arrays or array and a scalar.\n. \n. The function subtract calculates:\n. - Difference between two arrays, when both input arrays have the same size and the same number of\n. channels:\n. \\f[\\texttt{dst}(I) = \\texttt{saturate} ( \\texttt{src1}(I) - \\texttt{src2}(I)) \\quad \\texttt{if mask}(I) \\ne0\\f]\n. - Difference between an array and a scalar, when src2 is constructed from Scalar or has the same\n. number of elements as `src1.channels()`:\n. \\f[\\texttt{dst}(I) = \\texttt{saturate} ( \\texttt{src1}(I) - \\texttt{src2} ) \\quad \\texttt{if mask}(I) \\ne0\\f]\n. - Difference between a scalar and an array, when src1 is constructed from Scalar or has the same\n. number of elements as `src2.channels()`:\n. \\f[\\texttt{dst}(I) = \\texttt{saturate} ( \\texttt{src1} - \\texttt{src2}(I) ) \\quad \\texttt{if mask}(I) \\ne0\\f]\n. - The reverse difference between a scalar and an array in the case of `SubRS`:\n. \\f[\\texttt{dst}(I) = \\texttt{saturate} ( \\texttt{src2} - \\texttt{src1}(I) ) \\quad \\texttt{if mask}(I) \\ne0\\f]\n. where I is a multi-dimensional index of array elements. In case of multi-channel arrays, each\n. 
channel is processed independently.\n. \n. The first function in the list above can be replaced with matrix expressions:\n. @code{.cpp}\n. dst = src1 - src2;\n. dst -= src1; // equivalent to subtract(dst, src1, dst);\n. @endcode\n. The input arrays and the output array can all have the same or different depths. For example, you\n. can subtract to 8-bit unsigned arrays and store the difference in a 16-bit signed array. Depth of\n. the output array is determined by dtype parameter. In the second and third cases above, as well as\n. in the first case, when src1.depth() == src2.depth(), dtype can be set to the default -1. In this\n. case the output array will have the same depth as the input array, be it src1, src2 or both.\n. @note Saturation is not applied when the output array has the depth CV_32S. You may even get\n. result of an incorrect sign in the case of overflow.\n. @param src1 first input array or a scalar.\n. @param src2 second input array or a scalar.\n. @param dst output array of the same size and the same number of channels as the input array.\n. @param mask optional operation mask; this is an 8-bit single channel array that specifies elements\n. of the output array to be changed.\n. @param dtype optional depth of the output array\n. @sa add, addWeighted, scaleAdd, Mat::convertTo' ... From 1c7f53e5516784ee052d0a92c60a4ea96e03fa9b Mon Sep 17 00:00:00 2001 From: Xnuk Shuman Date: Tue, 13 Sep 2022 03:14:09 +0900 Subject: [PATCH 5/7] [cv2] tried @ overload but it seems not working --- cv2/__init__.pyi | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/cv2/__init__.pyi b/cv2/__init__.pyi index a3d845e7..df50466d 100644 --- a/cv2/__init__.pyi +++ b/cv2/__init__.pyi @@ -2508,10 +2508,12 @@ def haveOpenVX() -> typing.Any: 'haveOpenVX() -> retval\n.' ... +@overload def hconcat(src: list[Mat], dts: Mat = ...) -> Mat: 'hconcat(src[, dst]) -> dst\n. @overload\n. @code{.cpp}\n. std::vector matrices = { cv::Mat(4, 1, CV_8UC1, cv::Scalar(1)),\n. cv::Mat(4, 1, CV_8UC1, cv::Scalar(2)),\n. cv::Mat(4, 1, CV_8UC1, cv::Scalar(3)),};\n. \n. cv::Mat out;\n. cv::hconcat( matrices, out );\n. //out:\n. //[1, 2, 3;\n. // 1, 2, 3;\n. // 1, 2, 3;\n. // 1, 2, 3]\n. @endcode\n. @param src input array or vector of matrices. all of the matrices must have the same number of rows and the same depth.\n. @param dst output array. It has the same number of rows and depth as the src, and the sum of cols of the src.\n. same depth.' ... +@overload def hconcat(src1: Mat, src2: Mat, dts: Mat = ...) -> Mat: ... @@ -3069,10 +3071,12 @@ def validateDisparity(disparity, cost, minDisparity, numberOfDisparities, disp12 'validateDisparity(disparity, cost, minDisparity, numberOfDisparities[, disp12MaxDisp]) -> disparity\n.' ... +@overload def vconcat(src: list[Mat], dts: Mat = ...) -> Mat: 'vconcat(src[, dst]) -> dst\n. @overload\n. @code{.cpp}\n. std::vector matrices = { cv::Mat(1, 4, CV_8UC1, cv::Scalar(1)),\n. cv::Mat(1, 4, CV_8UC1, cv::Scalar(2)),\n. cv::Mat(1, 4, CV_8UC1, cv::Scalar(3)),};\n. \n. cv::Mat out;\n. cv::vconcat( matrices, out );\n. //out:\n. //[1, 1, 1, 1;\n. // 2, 2, 2, 2;\n. // 3, 3, 3, 3]\n. @endcode\n. @param src input array or vector of matrices. all of the matrices must have the same number of cols and the same depth\n. @param dst output array. It has the same number of cols and depth as the src, and the sum of rows of the src.\n. same depth.' ... +@overload def vconcat(src1: Mat, src2: Mat, dts: Mat = ...) -> Mat: ... 
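Two notes on the hconcat/vconcat changes (patch 3) and the @overload attempt (patch 5) above. At runtime the list form is the one the binding documents (`hconcat(src[, dst])`), e.g. `cv2.hconcat([a, b])` and `cv2.vconcat([a, b])`; the two-Mat form is this patch's addition. And one common reason a bare `@overload` decorator "seems not working" in a stub is simply that the name is not resolved: type checkers only recognize `typing.overload` (or a name imported from `typing`), and every variant must carry the decorator. A minimal sketch under that assumption, with a placeholder `Mat` alias so the snippet stands alone (`dts` mirrors the stub's existing parameter spelling):

    from typing import overload

    import numpy as np

    Mat = np.ndarray  # placeholder for the stub's Mat alias

    @overload
    def hconcat(src: list[Mat], dts: Mat = ...) -> Mat: ...
    @overload
    def hconcat(src1: Mat, src2: Mat, dts: Mat = ...) -> Mat: ...

In a .pyi only the decorated variants are needed; no undecorated implementation has to follow them.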
From c37636f916bb83d95b5b5217a01feb09c89ca02a Mon Sep 17 00:00:00 2001 From: Xnuk Shuman Date: Tue, 13 Sep 2022 03:15:29 +0900 Subject: [PATCH 6/7] [cv2] rectangle can accept `Rect(x, y, width, height)` --- cv2/__init__.pyi | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/cv2/__init__.pyi b/cv2/__init__.pyi index df50466d..284ed4df 100644 --- a/cv2/__init__.pyi +++ b/cv2/__init__.pyi @@ -2815,10 +2815,15 @@ def recoverPose(E, points1, points2, cameraMatrix, R=..., t=..., mask: Mat = ... "recoverPose(E, points1, points2, cameraMatrix[, R[, t[, mask]]]) -> retval, R, t, mask\n. @brief Recovers the relative camera rotation and the translation from an estimated essential\n. matrix and the corresponding points in two images, using cheirality check. Returns the number of\n. inliers that pass the check.\n. \n. @param E The input essential matrix.\n. @param points1 Array of N 2D points from the first image. The point coordinates should be\n. floating-point (single or double precision).\n. @param points2 Array of the second image points of the same size and format as points1 .\n. @param cameraMatrix Camera matrix \\f$A = \\vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\\f$ .\n. Note that this function assumes that points1 and points2 are feature points from cameras with the\n. same camera matrix.\n. @param R Output rotation matrix. Together with the translation vector, this matrix makes up a tuple\n. that performs a change of basis from the first camera's coordinate system to the second camera's\n. coordinate system. Note that, in general, t can not be used for this tuple, see the parameter\n. described below.\n. @param t Output translation vector. This vector is obtained by @ref decomposeEssentialMat and\n. therefore is only known up to scale, i.e. t is the direction of the translation vector and has unit\n. length.\n. @param mask Input/output mask for inliers in points1 and points2. If it is not empty, then it marks\n. inliers in points1 and points2 for then given essential matrix E. Only these inliers will be used to\n. recover pose. In the output mask only inliers which pass the cheirality check.\n. \n. This function decomposes an essential matrix using @ref decomposeEssentialMat and then verifies\n. possible pose hypotheses by doing cheirality check. The cheirality check means that the\n. triangulated 3D points should have positive depth. Some details can be found in @cite Nister03.\n. \n. This function can be used to process the output E and mask from @ref findEssentialMat. In this\n. scenario, points1 and points2 are the same input for findEssentialMat.:\n. @code\n. // Example. Estimation of fundamental matrix using the RANSAC algorithm\n. int point_count = 100;\n. vector points1(point_count);\n. vector points2(point_count);\n. \n. // initialize the points here ...\n. for( int i = 0; i < point_count; i++ )\n. {\n. points1[i] = ...;\n. points2[i] = ...;\n. }\n. \n. // cametra matrix with both focal lengths = 1, and principal point = (0, 0)\n. Mat cameraMatrix = Mat::eye(3, 3, CV_64F);\n. \n. Mat E, R, t, mask;\n. \n. E = findEssentialMat(points1, points2, cameraMatrix, RANSAC, 0.999, 1.0, mask);\n. recoverPose(E, points1, points2, cameraMatrix, R, t, mask);\n. @endcode\n\n\n\nrecoverPose(E, points1, points2[, R[, t[, focal[, pp[, mask]]]]]) -> retval, R, t, mask\n. @overload\n. @param E The input essential matrix.\n. @param points1 Array of N 2D points from the first image. The point coordinates should be\n. floating-point (single or double precision).\n. 
@param points2 Array of the second image points of the same size and format as points1 .\n. @param R Output rotation matrix. Together with the translation vector, this matrix makes up a tuple\n. that performs a change of basis from the first camera's coordinate system to the second camera's\n. coordinate system. Note that, in general, t can not be used for this tuple, see the parameter\n. description below.\n. @param t Output translation vector. This vector is obtained by @ref decomposeEssentialMat and\n. therefore is only known up to scale, i.e. t is the direction of the translation vector and has unit\n. length.\n. @param focal Focal length of the camera. Note that this function assumes that points1 and points2\n. are feature points from cameras with same focal length and principal point.\n. @param pp principal point of the camera.\n. @param mask Input/output mask for inliers in points1 and points2. If it is not empty, then it marks\n. inliers in points1 and points2 for then given essential matrix E. Only these inliers will be used to\n. recover pose. In the output mask only inliers which pass the cheirality check.\n. \n. This function differs from the one above that it computes camera matrix from focal length and\n. principal point:\n. \n. \\f[A =\n. \\begin{bmatrix}\n. f & 0 & x_{pp} \\\\\n. 0 & f & y_{pp} \\\\\n. 0 & 0 & 1\n. \\end{bmatrix}\\f]\n\n\n\nrecoverPose(E, points1, points2, cameraMatrix, distanceThresh[, R[, t[, mask[, triangulatedPoints]]]]) -> retval, R, t, mask, triangulatedPoints\n. @overload\n. @param E The input essential matrix.\n. @param points1 Array of N 2D points from the first image. The point coordinates should be\n. floating-point (single or double precision).\n. @param points2 Array of the second image points of the same size and format as points1.\n. @param cameraMatrix Camera matrix \\f$A = \\vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\\f$ .\n. Note that this function assumes that points1 and points2 are feature points from cameras with the\n. same camera matrix.\n. @param R Output rotation matrix. Together with the translation vector, this matrix makes up a tuple\n. that performs a change of basis from the first camera's coordinate system to the second camera's\n. coordinate system. Note that, in general, t can not be used for this tuple, see the parameter\n. description below.\n. @param t Output translation vector. This vector is obtained by @ref decomposeEssentialMat and\n. therefore is only known up to scale, i.e. t is the direction of the translation vector and has unit\n. length.\n. @param distanceThresh threshold distance which is used to filter out far away points (i.e. infinite\n. points).\n. @param mask Input/output mask for inliers in points1 and points2. If it is not empty, then it marks\n. inliers in points1 and points2 for then given essential matrix E. Only these inliers will be used to\n. recover pose. In the output mask only inliers which pass the cheirality check.\n. @param triangulatedPoints 3D points which were reconstructed by triangulation.\n. \n. This function differs from the one above that it outputs the triangulated 3D point that are used for\n. the cheirality check." ... -def rectangle(img: Mat, pt1, pt2, color, thickness=..., lineType=..., shift=...) -> typing.Any: +@overload +def rectangle(img: Mat, pt1: typing.Tuple[int, int], pt2: typing.Tuple[int, int], color, thickness=..., lineType=..., shift=...) -> typing.Any: 'rectangle(img, pt1, pt2, color[, thickness[, lineType[, shift]]]) -> img\n. 
@brief Draws a simple, thick, or filled up-right rectangle.\n. \n. The function cv::rectangle draws a rectangle outline or a filled rectangle whose two opposite corners\n. are pt1 and pt2.\n. \n. @param img Image.\n. @param pt1 Vertex of the rectangle.\n. @param pt2 Vertex of the rectangle opposite to pt1 .\n. @param color Rectangle color or brightness (grayscale image).\n. @param thickness Thickness of lines that make up the rectangle. Negative values, like #FILLED,\n. mean that the function has to draw a filled rectangle.\n. @param lineType Type of the line. See #LineTypes\n. @param shift Number of fractional bits in the point coordinates.\n\n\n\nrectangle(img, rec, color[, thickness[, lineType[, shift]]]) -> img\n. @overload\n. \n. use `rec` parameter as alternative specification of the drawn rectangle: `r.tl() and\n. r.br()-Point(1,1)` are opposite corners' ... +@overload +def rectangle(img: Mat, rec: typing.Tuple[int, int, int, int], color, thickness=..., lineType=..., shift=...) -> typing.Any: + ... + def rectify3Collinear(cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2, cameraMatrix3, distCoeffs3, imgpt1, imgpt3, imageSize, R12, T12, R13, T13, alpha, newImgSize, flags: int, R1=..., R2=..., R3=..., P1=..., P2=..., P3=..., Q=...) -> typing.Any: 'rectify3Collinear(cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2, cameraMatrix3, distCoeffs3, imgpt1, imgpt3, imageSize, R12, T12, R13, T13, alpha, newImgSize, flags[, R1[, R2[, R3[, P1[, P2[, P3[, Q]]]]]]]) -> retval, R1, R2, R3, P1, P2, P3, Q, roi1, roi2\n.' ... From 1a4ffd57c96fefa2e3f3117326b831174396a4a3 Mon Sep 17 00:00:00 2001 From: Erik De Bonte Date: Thu, 25 May 2023 10:37:54 -0700 Subject: [PATCH 7/7] Apply suggestions from code review Co-authored-by: Bill Schnurr --- cv2/__init__.pyi | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/cv2/__init__.pyi b/cv2/__init__.pyi index 284ed4df..f654ebf5 100644 --- a/cv2/__init__.pyi +++ b/cv2/__init__.pyi @@ -1694,7 +1694,7 @@ def adaptiveThreshold(src: Mat, maxValue, adaptiveMethod, thresholdType, blockSi 'adaptiveThreshold(src, maxValue, adaptiveMethod, thresholdType, blockSize, C[, dst]) -> dst\n. @brief Applies an adaptive threshold to an array.\n. \n. The function transforms a grayscale image to a binary image according to the formulae:\n. - **THRESH_BINARY**\n. \\f[dst(x,y) = \\fork{\\texttt{maxValue}}{if \\(src(x,y) > T(x,y)\\)}{0}{otherwise}\\f]\n. - **THRESH_BINARY_INV**\n. \\f[dst(x,y) = \\fork{0}{if \\(src(x,y) > T(x,y)\\)}{\\texttt{maxValue}}{otherwise}\\f]\n. where \\f$T(x,y)\\f$ is a threshold calculated individually for each pixel (see adaptiveMethod parameter).\n. \n. The function can process the image in-place.\n. \n. @param src Source 8-bit single-channel image.\n. @param dst Destination image of the same size and the same type as src.\n. @param maxValue Non-zero value assigned to the pixels for which the condition is satisfied\n. @param adaptiveMethod Adaptive thresholding algorithm to use, see #AdaptiveThresholdTypes.\n. The #BORDER_REPLICATE | #BORDER_ISOLATED is used to process boundaries.\n. @param thresholdType Thresholding type that must be either #THRESH_BINARY or #THRESH_BINARY_INV,\n. see #ThresholdTypes.\n. @param blockSize Size of a pixel neighborhood that is used to calculate a threshold value for the\n. pixel: 3, 5, 7, and so on.\n. @param C Constant subtracted from the mean or weighted mean (see the details below). Normally, it\n. is positive but may be zero or negative as well.\n. \n. @sa threshold, blur, GaussianBlur' ... 
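A usage sketch for the `rectangle` overloads from patch 6 (the canvas, coordinates, and colors are hypothetical). The 4-tuple form passes `(x, y, width, height)` as a single `rec` argument, matching the `rectangle(img, rec, color[, ...])` variant quoted in the docstring above:

    import cv2
    import numpy as np

    canvas = np.zeros((100, 100, 3), dtype=np.uint8)

    # Two-corner form: pt1 and pt2 are opposite vertices.
    cv2.rectangle(canvas, (10, 10), (60, 40), (0, 255, 0), thickness=2)

    # Rect form: one (x, y, width, height) tuple instead of two points.
    cv2.rectangle(canvas, (10, 50, 50, 30), (0, 0, 255), thickness=cv2.FILLED)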
-def add(src1: Mat | float | int, src2: Mat | float | int, dts: Mat = ..., mask: Mat = ..., dtype=...) -> typing.Any: +def add(src1: Mat | float, src2: Mat | float, dts: Mat = ..., mask: Mat = ..., dtype=...) -> typing.Any: 'add(src1, src2[, dst[, mask[, dtype]]]) -> dst\n. @brief Calculates the per-element sum of two arrays or an array and a scalar.\n. \n. The function add calculates:\n. - Sum of two arrays when both input arrays have the same size and the same number of channels:\n. \\f[\\texttt{dst}(I) = \\texttt{saturate} ( \\texttt{src1}(I) + \\texttt{src2}(I)) \\quad \\texttt{if mask}(I) \\ne0\\f]\n. - Sum of an array and a scalar when src2 is constructed from Scalar or has the same number of\n. elements as `src1.channels()`:\n. \\f[\\texttt{dst}(I) = \\texttt{saturate} ( \\texttt{src1}(I) + \\texttt{src2} ) \\quad \\texttt{if mask}(I) \\ne0\\f]\n. - Sum of a scalar and an array when src1 is constructed from Scalar or has the same number of\n. elements as `src2.channels()`:\n. \\f[\\texttt{dst}(I) = \\texttt{saturate} ( \\texttt{src1} + \\texttt{src2}(I) ) \\quad \\texttt{if mask}(I) \\ne0\\f]\n. where `I` is a multi-dimensional index of array elements. In case of multi-channel arrays, each\n. channel is processed independently.\n. \n. The first function in the list above can be replaced with matrix expressions:\n. @code{.cpp}\n. dst = src1 + src2;\n. dst += src1; // equivalent to add(dst, src1, dst);\n. @endcode\n. The input arrays and the output array can all have the same or different depths. For example, you\n. can add a 16-bit unsigned array to a 8-bit signed array and store the sum as a 32-bit\n. floating-point array. Depth of the output array is determined by the dtype parameter. In the second\n. and third cases above, as well as in the first case, when src1.depth() == src2.depth(), dtype can\n. be set to the default -1. In this case, the output array will have the same depth as the input\n. array, be it src1, src2 or both.\n. @note Saturation is not applied when the output array has the depth CV_32S. You may even get\n. result of an incorrect sign in the case of overflow.\n. @param src1 first input array or a scalar.\n. @param src2 second input array or a scalar.\n. @param dst output array that has the same size and number of channels as the input array(s); the\n. depth is defined by dtype or src1/src2.\n. @param mask optional operation mask - 8-bit single channel array, that specifies elements of the\n. output array to be changed.\n. @param dtype optional depth of the output array (see the discussion below).\n. @sa subtract, addWeighted, scaleAdd, Mat::convertTo' ... @@ -3020,7 +3020,7 @@ def stylization(src: Mat, dts: Mat = ..., sigma_s=..., sigma_r=...) -> typing.An 'stylization(src[, dst[, sigma_s[, sigma_r]]]) -> dst\n. @brief Stylization aims to produce digital imagery with a wide variety of effects not focused on\n. photorealism. Edge-aware filters are ideal for stylization, as they can abstract regions of low\n. contrast while preserving, or enhancing, high-contrast features.\n. \n. @param src Input 8-bit 3-channel image.\n. @param dst Output image with the same size and type as src.\n. @param sigma_s %Range between 0 to 200.\n. @param sigma_r %Range between 0 to 1.' ... -def subtract(src1: Mat | float | int, src2: Mat | float | int, dts: Mat = ..., mask: Mat = ..., dtype=...) -> typing.Any: +def subtract(src1: Mat | float, src2: Mat | float, dts: Mat = ..., mask: Mat = ..., dtype=...) -> typing.Any: 'subtract(src1, src2[, dst[, mask[, dtype]]]) -> dst\n. 
@brief Calculates the per-element difference between two arrays or array and a scalar.\n. \n. The function subtract calculates:\n. - Difference between two arrays, when both input arrays have the same size and the same number of\n. channels:\n. \\f[\\texttt{dst}(I) = \\texttt{saturate} ( \\texttt{src1}(I) - \\texttt{src2}(I)) \\quad \\texttt{if mask}(I) \\ne0\\f]\n. - Difference between an array and a scalar, when src2 is constructed from Scalar or has the same\n. number of elements as `src1.channels()`:\n. \\f[\\texttt{dst}(I) = \\texttt{saturate} ( \\texttt{src1}(I) - \\texttt{src2} ) \\quad \\texttt{if mask}(I) \\ne0\\f]\n. - Difference between a scalar and an array, when src1 is constructed from Scalar or has the same\n. number of elements as `src2.channels()`:\n. \\f[\\texttt{dst}(I) = \\texttt{saturate} ( \\texttt{src1} - \\texttt{src2}(I) ) \\quad \\texttt{if mask}(I) \\ne0\\f]\n. - The reverse difference between a scalar and an array in the case of `SubRS`:\n. \\f[\\texttt{dst}(I) = \\texttt{saturate} ( \\texttt{src2} - \\texttt{src1}(I) ) \\quad \\texttt{if mask}(I) \\ne0\\f]\n. where I is a multi-dimensional index of array elements. In case of multi-channel arrays, each\n. channel is processed independently.\n. \n. The first function in the list above can be replaced with matrix expressions:\n. @code{.cpp}\n. dst = src1 - src2;\n. dst -= src1; // equivalent to subtract(dst, src1, dst);\n. @endcode\n. The input arrays and the output array can all have the same or different depths. For example, you\n. can subtract to 8-bit unsigned arrays and store the difference in a 16-bit signed array. Depth of\n. the output array is determined by dtype parameter. In the second and third cases above, as well as\n. in the first case, when src1.depth() == src2.depth(), dtype can be set to the default -1. In this\n. case the output array will have the same depth as the input array, be it src1, src2 or both.\n. @note Saturation is not applied when the output array has the depth CV_32S. You may even get\n. result of an incorrect sign in the case of overflow.\n. @param src1 first input array or a scalar.\n. @param src2 second input array or a scalar.\n. @param dst output array of the same size and the same number of channels as the input array.\n. @param mask optional operation mask; this is an 8-bit single channel array that specifies elements\n. of the output array to be changed.\n. @param dtype optional depth of the output array\n. @sa add, addWeighted, scaleAdd, Mat::convertTo' ...
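On the final review change narrowing `Mat | float | int` to `Mat | float`: under PEP 484's numeric tower, type checkers already accept an `int` argument where `float` is annotated, so the explicit `int` member was redundant. A minimal illustration (the function name is made up for the example):

    def scale(value: float) -> float:
        return value * 0.5

    scale(3)    # accepted: int is implicitly compatible with a float annotation
    scale(3.0)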