Refactor tests to use pytest parametrized #89
Conversation
out_channels = [1, 16]
hw_strides = [[1, 1], [2, 2]]
paddings = ["VALID", "SAME"]
def _get_args_list():
I much prefer the API of https://docs.pytest.org/en/latest/parametrize.html#parametrize, which makes things like this a lot easier. But I don't think it will work together with tf.test.TestCase.
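For context, the `pytest.mark.parametrize` API referred to above could replace the hand-rolled argument grid from the diff roughly like this (a hypothetical sketch; the real test body and names would differ):

```python
import pytest

# Sketch: the argument grid from the diff above, expressed with stacked
# pytest.mark.parametrize decorators instead of a _get_args_list() helper.
# pytest generates all 2 * 2 * 2 = 8 combinations automatically.
@pytest.mark.parametrize("out_channel", [1, 16])
@pytest.mark.parametrize("hw_strides", [[1, 1], [2, 2]])
@pytest.mark.parametrize("padding", ["VALID", "SAME"])
def test_bconv2d(out_channel, hw_strides, padding):
    # Placeholder body; the real test would build and run the binary
    # convolution with these parameters.
    assert padding in ("VALID", "SAME")
```

The drawback noted above is real: these decorators work on plain functions, not on `tf.test.TestCase` methods.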
Force-pushed from e37d8f0 to 978ec71.
TF 1.x tests still run on Python 3.4 😢, so my attempts to simplify this further failed.
Shall we drop TF 1 support entirely? What is the status of Larq with respect to TF 1 support?
I'd be in favour. We could also support only 1.15+, which uses the same Docker container as TF 2.
We still support TF 1 and there are no plans to drop it just yet. Though I am happy to re-evaluate if it makes our life difficult.
For now, the compute engine is mostly meant for tflite. Until we start adding CUDA operations, the TF side of the compute engine only exists to be able to define a model and convert it to TF lite. And TF lite is kind of independent of TF1 vs TF2, it is just its own version. I think it will still take a long time before we start having a serious binary CUDA kernel for TF, so I don't think it's worth all the extra effort now. The main "product" for users is the tflite part.
Since this last case is so rare (I think), I vote to drop TF 1 support in the compute engine.
I don't know.
I'd always vote to drop support for anything that makes maintenance difficult. Would upgrading the TF 1.x tests to TF 1.15.0 solve our issues, since it uses the same Docker container?
I'm not sure what the current issues are. From your comment I understood that TF 1.x needs older Python versions? Does TF 1.15 work with newer Python versions?
Just the default custom-op container for 1.13 and 1.14 is different and doesn't include a modern version of Python. We run TF 1 with Python 3.7 in larq. If I recall correctly, with 1.15 they switched to the
I switched the test runner from
Then we should definitely switch to tf 1.15 with the new container. I think we should do that.
I don't know if there is a good reason to use
Either that or require the user to install the desired TF version beforehand.
Yes, it provides Tensor support for
We could handle that like we do in the TFLite test with running |
I like that even more. The user should install the TensorFlow version of their choice; this should not be done by the configure script. (This will even speed up the ARM GitHub Actions, because they don't require TensorFlow to be installed at all.)
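Under this approach, the configure script would only verify that some TensorFlow is importable rather than installing one itself. A minimal sketch (the function name is hypothetical):

```python
import importlib.util

def tensorflow_installed() -> bool:
    """Return True if any TensorFlow distribution is importable."""
    return importlib.util.find_spec("tensorflow") is not None

# A configure script could call this and, when it returns False, ask the
# user to install their preferred version themselves, e.g.
# `pip install tensorflow==1.15.0` or a 2.x release.
```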
The assert can be replaced by the pytest variants. I think I remember seeing something like
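For reference, the usual pytest-style replacements look roughly like this (a sketch; the truncated comment above may have had a different helper in mind):

```python
import numpy as np
import pytest

# unittest-style self.assertEqual(...) becomes a plain assert,
# and pytest rewrites it to give a detailed failure message:
assert [1, 16] == [1, 16]

# Approximate float comparison via pytest.approx:
assert 0.1 + 0.2 == pytest.approx(0.3)

# For arrays (replacing tf.test.TestCase's assertAllClose),
# NumPy's testing helpers work well:
np.testing.assert_allclose(np.array([0.1 + 0.2]), np.array([0.3]))
```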
Sounds good. I'm in favor if this means we can use more recent Python tools.
I am exploring some ideas in larq/larq#313.
I just realized that we already use the
If we want to run the test in both eager and graph mode, we can reuse the fixtures introduced in larq/larq#313.
Keep in mind that this doesn't switch the CLI command to
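The fixture pattern in question follows pytest's parametrized-fixture mechanism; a hypothetical sketch of the shape (not the actual larq/larq#313 fixtures, which would enter TensorFlow's eager or graph execution context in the marked placeholder):

```python
import contextlib
import pytest

@contextlib.contextmanager
def execution_mode(mode):
    # Placeholder: the real fixtures would switch TensorFlow into
    # eager or graph execution here before yielding.
    yield mode

# Every test that requests this fixture runs once per param,
# i.e. once in "eager" and once in "graph" mode.
@pytest.fixture(params=["eager", "graph"])
def mode(request):
    with execution_mode(request.param) as m:
        yield m

def test_runs_in_both_modes(mode):
    assert mode in ("eager", "graph")
```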
Just a note: I saw that this
Another note: it seems we have to increase the Python test timeout limit, because the unit test is failing. (That makes sense, because it is a rather extensive test of all the bconv2d options.)
Looks good. Can be merged after marking bconv2d_tests as "large".
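Marking the test as large would presumably be a one-attribute change in the Bazel BUILD file (the target and source names below are assumed from the comment; `size` is Bazel's standard test-size attribute, and `"large"` raises the default per-test timeout to 900 seconds):

```python
py_test(
    name = "bconv2d_tests",        # target name assumed from the review comment
    size = "large",                # default timeout: 60s (small) -> 900s (large)
    srcs = ["bconv2d_tests.py"],   # source file name assumed
)
```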
This should simplify some of the logic of the tests. I can change the TFLite test once #88 is merged.