Short summary
We have a few test cases that pass when I think they shouldn't. I guess it all depends on how NutJS does image comparison, but I was hoping some light could be shed on why the images below return success. The only way I can get them to fail is with a confidence level of 1, which is pretty much impossible to use across machines. But lowering it to e.g. 0.94 makes the comparison pass, which I don't think it should.
Desired execution environment / tested on
- Virtual machine
- Docker container
- Dev/Host system
node version: v20.11.0
OS type and version: Mac and Windows
Full code sample related to question
Example 1:
Expected image:
Search region image during test execution:
An await nut.screen.waitFor(nut.imageResource(imageToFind), fluentWaitTimeout, 200, searchParams) call with a confidence level of 0.92 reported that it found the expected image.
If I take a screenshot of the found region, it looks like this:
To me it looks like the search succeeded just because most of the white pixels are the same. It shouldn't succeed at 0.92.
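For reference, the call looks roughly like this. A minimal sketch; the image path, timeout value, and the OptionalSearchParameters constructor shape are my assumptions:

```js
const nut = require("@nut-tree/nut-js");

// Placeholder values for illustration
const imageToFind = "expected.png";
const fluentWaitTimeout = 10000;

async function waitForImage() {
  // Assumption: OptionalSearchParameters(searchRegion, confidence) as in nut.js v2.
  // Passing undefined as the search region searches the whole screen.
  const searchParams = new nut.OptionalSearchParameters(undefined, 0.92);

  const match = await nut.screen.waitFor(
    nut.imageResource(imageToFind),
    fluentWaitTimeout,
    200, // re-check interval in ms
    searchParams
  );
  console.log(`Found match at ${match}`);
}
```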
Example 2:
Expected image:
Search region image:
With a confidence level of 0.94 it succeeds; it fails with 0.95 and above. The two images look pretty different, but my guess is that it passes because the color composition of the expected image and the search region is very similar.
I hope I am clear enough in my concerns, because this could lead to tests passing that shouldn't pass, and keeping the confidence level very high at all times will not necessarily be doable. I am hoping there is a way for me to override how images are matched, but I also need to understand how the current matching works.
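As far as I can tell, nut.js routes image searches through a pluggable image-finder provider, so in principle a custom matcher can be registered. A skeleton sketch, assuming the finder interface shape (findMatch/findMatches) that the matcher plugins implement; treat the exact method signatures as assumptions:

```js
const { providerRegistry } = require("@nut-tree/nut-js");

// Assumption: custom finders implement the same interface the matcher
// plugins (e.g. template-matcher, nl-matcher) implement, and are made
// active via providerRegistry.registerImageFinder.
class MyImageFinder {
  async findMatch(matchRequest) {
    // The match request carries the screen capture, the template image
    // and the confidence threshold. Return a match result with a score
    // and the matched region, or reject if nothing matches.
    throw new Error("not implemented");
  }

  async findMatches(matchRequest) {
    // Same idea, but return every region that matches.
    throw new Error("not implemented");
  }
}

providerRegistry.registerImageFinder(new MyImageFinder());
```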
E.g. if we just count pixels by color and check whether the counts are in range, that can lead to false positives.
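To make that concern concrete, here is a toy comparison (my illustration of the failure mode, not how nut.js actually matches): it treats two images as equal whenever their color histograms match, and scores two clearly different layouts as a perfect match.

```js
// Two 3x3 "images", encoded as flat arrays of color codes.
// Same color counts (6x white, 3x blue), completely different layout.
const expected = ["W", "W", "W", "B", "B", "B", "W", "W", "W"];
const onScreen = ["B", "W", "W", "W", "W", "B", "W", "W", "B"];

// Histogram "similarity": fraction of pixels whose color counts overlap.
function histogramSimilarity(a, b) {
  const count = (img) =>
    img.reduce((acc, px) => acc.set(px, (acc.get(px) || 0) + 1), new Map());
  const ca = count(a);
  const cb = count(b);
  let overlap = 0;
  for (const [color, n] of ca) {
    overlap += Math.min(n, cb.get(color) || 0);
  }
  return overlap / a.length;
}

// Pixel-wise similarity: fraction of pixels that agree position by position.
function pixelSimilarity(a, b) {
  return a.filter((px, i) => px === b[i]).length / a.length;
}

console.log(histogramSimilarity(expected, onScreen)); // 1.0 — "perfect match"
console.log(pixelSimilarity(expected, onScreen));     // ~0.56 — clearly not
```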
Thanks!
PS: Yes, I know how to work around this: basically, compare smaller pieces in smaller regions (see the sketch below). The problem, though, is that you often take an expected screenshot and the test passes; a month later it still passes even though it should fail, and you may not notice until somebody manually looks at the screen.
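For completeness, the workaround looks roughly like this; the coordinates and image name are placeholders, and the OptionalSearchParameters constructor shape is again my assumption:

```js
const nut = require("@nut-tree/nut-js");

async function checkButton() {
  // Hypothetical coordinates: confine the search to the small area where
  // the widget should be, so a similar color composition elsewhere on the
  // screen cannot produce a match.
  const buttonArea = new nut.Region(100, 200, 300, 80);

  // Assumption: OptionalSearchParameters(searchRegion, confidence) as in nut.js v2.
  const searchParams = new nut.OptionalSearchParameters(buttonArea, 0.95);

  return nut.screen.waitFor(
    nut.imageResource("small-button.png"),
    10000,
    200,
    searchParams
  );
}
```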
We do not set any special options for the nl-matcher. All we do is require it and do an await nut.screen.waitFor(nut.imageResource(imageToFind), fluentWaitTimeout, 200, searchParams);
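Spelled out, the setup looks like this. The package path is assumed from the plugin name, and the placeholder values stand in for what our suite defines elsewhere:

```js
const nut = require("@nut-tree/nut-js");

// Assumption: like the other nut-tree matcher plugins, requiring the
// package registers it as the active image finder as a side effect.
require("@nut-tree/nl-matcher");

const imageToFind = "expected.png"; // placeholder
const fluentWaitTimeout = 10000;    // placeholder

async function run(searchParams) {
  return nut.screen.waitFor(
    nut.imageResource(imageToFind),
    fluentWaitTimeout,
    200,
    searchParams
  );
}
```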