Update to Python 3.10 and SLEAP v1.5.0 #1841
base: develop
Conversation
Walkthrough

This pull request introduces comprehensive updates to the SLEAP project, focusing on the Python version upgrade to 3.10, dependency management, and various code improvements. The changes span multiple configuration files, GitHub workflows, and source code modules. Key modifications include updating environment configurations, migrating dependencies, implementing lazy loading for OpenCV, and enhancing error handling and user interfaces.
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant SLEAP
    participant Dependencies
    participant OpenCV
    User->>SLEAP: Install/Update
    SLEAP->>Dependencies: Resolve Dependencies
    Dependencies->>SLEAP: Validate Python 3.10
    SLEAP->>OpenCV: Lazy Load
    OpenCV-->>SLEAP: Load on First Use
    SLEAP->>User: Ready to Use
```
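One walkthrough item, lazy loading for OpenCV, defers the slow `import cv2` until the module is first touched; the environment files in this PR pull in the `lazy_loader` package for this. Below is a stdlib-only sketch of the same pattern using `importlib.util.LazyLoader`, with `json` standing in for `cv2` so the example runs without OpenCV installed. This is an illustration of the technique, not the actual SLEAP implementation.

```python
import importlib.util
import sys

def lazy_import(name):
    """Return a module whose body executes only on first attribute access."""
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    # Register before exec so later `import name` statements reuse this object.
    sys.modules[name] = module
    loader.exec_module(module)
    return module

# Stand-in for cv2: nothing heavy happens until an attribute is accessed.
cv2 = lazy_import("json")
print(cv2.dumps({"lazy": True}))  # → {"lazy": true}
```

This is why `import sleap` can stay fast while still exposing OpenCV-backed functionality on demand.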
Codecov Report

All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

```
@@             Coverage Diff             @@
##           develop    #1841      +/-   ##
===========================================
+ Coverage    73.30%   75.62%   +2.31%
===========================================
  Files          134      133       -1
  Lines        24087    24626     +539
===========================================
+ Hits         17658    18624     +966
+ Misses        6429     6002     -427
```

☔ View full report in Codecov by Sentry.
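As a sanity check, the percentages in the report follow directly from Hits / Lines (Codecov appears to truncate rather than round, hence 75.62% for a ratio of roughly 75.627%). A quick recomputation:

```python
# Recompute the coverage figures from the Hits and Lines rows above.
old_cov = 17658 / 24087 * 100   # develop
new_cov = 18624 / 24626 * 100   # this PR
delta = new_cov - old_cov
print(f"{old_cov:.2f}% -> {new_cov:.2f}% (delta {delta:+.2f}%)")
```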
Remove python version constraint (implicitly stated in environment yml)
PySide6 doesn't need this set and prints a warning message if it is set.
* Lower-bound albumentations version to handle NaN keypoints
* Reset tab width to 2 in environment ymls

Co-authored-by: Elizabeth <106755962+eberrigan@users.noreply.github.com>

* Bump version to 1.4.1a3
* Trigger workflows
* Force full viewport rendering
* Remove env flag workaround for Qt bug
* Fix threading errors
* More threading cleanup
* networkx backward compatibility
* Remove rendering mode hack
* Disable albumentations update check
* Add safer thread/timer stopping
* networkx backwards compatibility
* networkx backwards compatibility
* Update tf.data APIs
* Use legacy optimizer on Apple Silicon
* Lazy loading for OpenCV
* Fix backwards compatibility with tensorflow
* Lint
* More lint

Co-authored-by: Talmo Pereira <talmo@salk.edu>
Actionable comments posted: 9
🔭 Outside diff range comments (5)
docs/installation.md (1)
Line range hint 1-450: Missing documentation updates for dependency changes

The PR objectives mention several significant dependency changes that are not reflected in the documentation:
- Upgrade from PySide 2 to PySide 6
- Migration from scikit-video to imageio.v2
- Removal of QtCharts in favor of matplotlib
Please update the documentation to:
- Reflect these dependency changes
- Add any new requirements or prerequisites
- Update any related troubleshooting steps
sleap/nn/tracking.py (2)
Line range hint 114-116: Return a tuple to match the annotated return type

In get_shifted_instances_from_earlier_time, the method is annotated to return a tuple (np.ndarray, List[InstanceType]), but currently returns a list [ref_img, ref_instances]. Return a tuple to match the annotation.

Apply this diff to return a tuple:

```diff
- return [ref_img, ref_instances]
+ return (ref_img, ref_instances)
```

Line range hint 361-363: Return a tuple to match the annotated return type

In get_shifted_instances_from_earlier_time, within FlowMaxTracksCandidateMaker, ensure that the method returns a tuple as per its signature.

Apply this diff to return a tuple:

```diff
- return [ref_img, ref_instances]
+ return (ref_img, ref_instances)
```

sleap/__init__.py (1)
Line range hint 12-25: Address unused imports and consider defining __all__

The static analysis tool indicates multiple unused imports. Since this is an __init__.py file, these imports are likely intended for public API exposure.

Add an __all__ declaration to explicitly define the public API:

```diff
+__all__ = [
+    "__version__",
+    "versions",
+    "Video",
+    "load_video",
+    "LabeledFrame",
+    "Instance",
+    "PredictedInstance",
+    "Track",
+    "Skeleton",
+    "Labels",
+    "load_file",
+]
```

🧰 Tools
🪛 Ruff (0.8.2)

12-12: sleap.version.__version__ imported but unused; consider removing, adding to __all__, or using a redundant alias (F401)
12-12: sleap.version.versions imported but unused; consider removing, adding to __all__, or using a redundant alias (F401)
14-14: sleap.io.video.Video imported but unused; consider removing, adding to __all__, or using a redundant alias (F401)
14-14: sleap.io.video.load_video imported but unused; consider removing, adding to __all__, or using a redundant alias (F401)
15-15: sleap.instance.LabeledFrame imported but unused; consider removing, adding to __all__, or using a redundant alias (F401)
15-15: sleap.instance.Instance imported but unused; consider removing, adding to __all__, or using a redundant alias (F401)
15-15: sleap.instance.PredictedInstance imported but unused; consider removing, adding to __all__, or using a redundant alias (F401)
15-15: sleap.instance.Track imported but unused; consider removing, adding to __all__, or using a redundant alias (F401)
16-16: sleap.skeleton.Skeleton imported but unused; consider removing, adding to __all__, or using a redundant alias (F401)
17-17: sleap.io.dataset.Labels imported but unused; consider removing, adding to __all__, or using a redundant alias (F401)
17-17: sleap.io.dataset.load_file imported but unused; consider removing, adding to __all__, or using a redundant alias (F401)
19-19: sleap.nn imported but unused; consider removing, adding to __all__, or using a redundant alias (F401)
20-20: sleap.nn.data.pipelines imported but unused; consider removing, adding to __all__, or using a redundant alias (F401)
21-21: sleap.nn.inference imported but unused; consider removing, adding to __all__, or using a redundant alias (F401)
.github/workflows/build_pypi_ci.yml (1)

Line range hint 127-127: Fix shell scripting best practices

The $BUILD_PATH variable should be quoted to prevent word splitting and globbing issues.

```diff
-conda create -y -n sleap_test -c file://$BUILD_PATH -c sleap/label/dev -c conda-forge -c nvidia -c anaconda sleap
+conda create -y -n sleap_test -c "file://${BUILD_PATH}" -c sleap/label/dev -c conda-forge -c nvidia -c anaconda sleap
```

Also applies to: 136-136
♻️ Duplicate comments (1)

sleap/nn/training.py (1)

271-271: 🛠️ Refactor suggestion: Use platform.machine() instead of platform.platform() to detect Apple Silicon

To reliably detect Apple Silicon systems, use platform.machine() == "arm64" instead of checking whether 'arm64' appears in platform.platform().

Apply this diff to improve the detection:

```diff
- is_apple_silicon = "arm64" in platform.platform()
+ is_apple_silicon = platform.machine() == "arm64"
```
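To see why the suggestion holds, compare the two calls: on an M-series Mac, platform.platform() returns a composite string along the lines of "macOS-13.4-arm64-arm-64bit" whose format varies across releases, while platform.machine() returns just "arm64". A small sketch follows; note the platform.system() guard is an extra assumption beyond the diff, added here to avoid matching arm64 Linux hosts:

```python
import platform

# platform.machine() reports the CPU architecture directly ("arm64" on Apple
# Silicon under macOS, "x86_64" on Intel), while platform.platform() is a
# composite OS description, making substring checks against it fragile.
is_apple_silicon = platform.system() == "Darwin" and platform.machine() == "arm64"
print(platform.system(), platform.machine(), is_apple_silicon)
```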
🧹 Nitpick comments (18)

sleap/nn/data/dataset_ops.py (1)

144-153: Fix indentation in the batching logic

The new batching logic is well-structured and provides better compatibility by using the native ragged_batch when available. However, the indentation is inconsistent.

Apply this diff to fix the indentation:

```python
if hasattr(ds_output, "ragged_batch"):
    ds_output = ds_output.ragged_batch(
        batch_size=self.batch_size, drop_remainder=self.drop_remainder
    )
else:
    ds_output = ds_output.apply(
        tf.data.experimental.dense_to_ragged_batch(
            batch_size=self.batch_size, drop_remainder=self.drop_remainder
        )
    )
```

.conda/sleap_deactivate.sh (1)
3-6: LGTM! This implementation should be the reference

This is the most complete implementation among the platform-specific scripts. Consider using it as a reference to ensure consistency across platforms, particularly for the Mac version, which is missing the XLA_FLAGS reset.
.conda/sleap_activate.bat (1)
1-6: Add error handling and environment validation

While the script correctly manages environment variables, consider adding basic error handling and environment validation:

```diff
 @REM Remember the old library path for when we deactivate
+if not defined CONDA_PREFIX (
+    echo Error: CONDA_PREFIX is not set. Are you in a conda environment?
+    exit /b 1
+)
 set SLEAP_OLD_XLA_FLAGS=%XLA_FLAGS%
 set SLEAP_OLD_NO_ALBUMENTATIONS_UPDATE=%NO_ALBUMENTATIONS_UPDATE%
 @REM Help XLA find CUDA
 set XLA_FLAGS=--xla_gpu_cuda_data_dir=%CONDA_PREFIX%
 set NO_ALBUMENTATIONS_UPDATE=1
+@REM Verify XLA_FLAGS was set correctly
+if not defined XLA_FLAGS (
+    echo Warning: Failed to set XLA_FLAGS
+)
```

.conda/sleap_activate.sh (1)
3-11: Enhance error handling and path validation

While the script handles environment variables correctly, it could benefit from better error handling and path validation:

```diff
+# Exit on error
+set -e
+
+# Verify we're in a conda environment
+if [ -z "$CONDA_PREFIX" ]; then
+  echo "Error: CONDA_PREFIX is not set. Are you in a conda environment?"
+  exit 1
+fi
+
 # Remember the old variables for when we deactivate
 export SLEAP_OLD_LD_LIBRARY_PATH=$LD_LIBRARY_PATH
 export SLEAP_OLD_XLA_FLAGS=$XLA_FLAGS
 # Help CUDA find GPUs!
-export LD_LIBRARY_PATH=$CONDA_PREFIX/lib:$LD_LIBRARY_PATH
+# Use : as separator only if LD_LIBRARY_PATH is set
+export LD_LIBRARY_PATH=${CONDA_PREFIX}/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
 # Help XLA find CUDA
 export XLA_FLAGS=--xla_gpu_cuda_data_dir=$CONDA_PREFIX
 # Disable annoying albumentations message
 export NO_ALBUMENTATIONS_UPDATE=1
+
+# Verify critical paths exist
+if [ ! -d "${CONDA_PREFIX}/lib" ]; then
+  echo "Warning: ${CONDA_PREFIX}/lib directory not found"
+fi
```

dev_requirements.txt (1)
22-22: Consider relaxing the click version constraint

The pinned version of click (8.0.4) might be unnecessarily restrictive. Consider using a minimum version constraint instead, as click maintains good backward compatibility.

```diff
-click==8.0.4
+click>=8.0.4
```

.conda_mac/build.sh (1)
11-12: Consider adding error handling for the setup.py installation

The installation command should include error handling so the process fails gracefully if there are Python 3.10 compatibility issues.

```diff
-python setup.py install --single-version-externally-managed --record=record.txt
+if ! python setup.py install --single-version-externally-managed --record=record.txt; then
+    echo "Error: Failed to install package"
+    exit 1
+fi
```

sleap/__init__.py (1)
5-6: Document the albumentations update check disable

Add a more detailed comment explaining why the update check is disabled and its implications.

```diff
-# Disable albumentations update check before imports.
+# Disable albumentations update check before imports to prevent unnecessary network
+# requests and potential delays during import. This is especially important when
+# running in environments with limited or no internet connectivity.
```

.conda/bld.bat (1)
16-17: Document the source of the activation script implementation

Add a reference to the specific documentation version for future maintenance.

```diff
-@REM Copied from https://docs.conda.io/projects/conda-build/en/latest/resources/activate-scripts.html
+@REM Implementation based on conda-build documentation (v3.24.0):
+@REM https://docs.conda.io/projects/conda-build/en/3.24.0/resources/activate-scripts.html
```

.github/workflows/website.yml (1)
11-11: Consider using a more generic branch pattern

The current branch pattern liezl/bump-to-1.4.1a3-py310 is very specific. Consider using a more generic pattern like release/* or feature/* for better maintainability.

environment_no_cuda.yml (1)
15-16: Consider adding upper bounds for critical dependencies

Several dependencies lack upper version bounds, which could lead to compatibility issues in the future: attrs, cattrs, numpy, scipy, scikit-learn, albumentations.

While lower bounds are specified, adding upper bounds would help prevent potential breaking changes from future updates. Consider adding upper bounds for critical dependencies, similar to how PySide6 is constrained.
Also applies to: 18-18, 20-23, 29-29, 36-36, 38-38, 43-43, 45-45
environment.yml (1)
6-9: Verify CUDA stack compatibility

The CUDA stack versions look correct for TensorFlow 2.9.2:
- cudatoolkit 11.3.1
- cuDNN 8.2.1
- cuda-nvcc 11.3
However, the comment about "==" resulting in package not found for cuDNN should be documented more clearly.
Consider updating the comment to explain why a single "=" is used for cuDNN instead of "==".
Also applies to: 47-50
tests/io/test_videowriter.py (1)
Line range hint 9-24: Consider enhancing test coverage

While the current tests validate basic functionality, consider these improvements:
- Add cleanup of temporary video files in teardown
- Verify video content/integrity after writing
- Add error cases (e.g., invalid dimensions, fps)
Would you like me to provide example implementations for these test improvements?
Also applies to: 26-41, 43-63, 65-97
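The three suggested improvements can be sketched with the standard library alone. DummyVideoWriter below is a hypothetical stand-in, not the real sleap.io.videowriter API, used so that the pattern (temporary-file cleanup, a content check after writing, and error cases) is runnable as-is:

```python
import os
import tempfile

class DummyVideoWriter:
    """Hypothetical stand-in for the project's video writer; validates inputs
    and writes placeholder bytes so the test pattern below is runnable."""
    def __init__(self, filename, height, width, fps):
        if height <= 0 or width <= 0 or fps <= 0:
            raise ValueError("height, width, and fps must be positive")
        with open(filename, "wb") as f:
            f.write(b"\x00" * (height * width))  # placeholder frame data

def test_writer_roundtrip():
    # 1) Cleanup: TemporaryDirectory removes files even if assertions fail.
    with tempfile.TemporaryDirectory() as tmpdir:
        out = os.path.join(tmpdir, "clip.mp4")
        DummyVideoWriter(out, height=32, width=32, fps=30)
        # 2) Content/integrity: the file exists and has the expected size.
        assert os.path.exists(out) and os.path.getsize(out) == 32 * 32

def test_writer_rejects_bad_args():
    # 3) Error cases: invalid dimensions or fps should raise.
    with tempfile.TemporaryDirectory() as tmpdir:
        out = os.path.join(tmpdir, "clip.mp4")
        for bad in [(0, 32, 30), (32, -1, 30), (32, 32, 0)]:
            try:
                DummyVideoWriter(out, *bad)
            except ValueError:
                continue
            raise AssertionError(f"expected ValueError for {bad}")

test_writer_roundtrip()
test_writer_rejects_bad_args()
print("all checks passed")
```

Porting this shape onto the real writer would mainly mean replacing DummyVideoWriter with the actual class and checking frame counts instead of raw byte sizes.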
.conda_mac/meta.yaml (1)
32-32: Fix trailing whitespace

Remove trailing spaces after host:.

🧰 Tools
🪛 yamllint (1.35.1)
[error] 32-32: trailing spaces (trailing-spaces)
.conda/meta.yaml (1)
56-56: Different TensorFlow versions between Mac and Linux builds

The Mac build uses TensorFlow 2.12.0 while the Linux build uses 2.9.2. This difference should be documented in the PR description for maintainability.
Also applies to: 91-91
.github/workflows/build_pypi_ci.yml (1)
100-105: Enhance package installation robustness

The OpenGL libraries installation addresses the OpenCV issue. Consider making the installation more robust:

```diff
 sudo apt-get update
-sudo apt-get install libglapi-mesa libegl-mesa0 libegl1 libopengl0 libgl1-mesa-glx
+DEBIAN_FRONTEND=noninteractive sudo apt-get install -y --no-install-recommends \
+  libglapi-mesa \
+  libegl-mesa0 \
+  libegl1 \
+  libopengl0 \
+  libgl1-mesa-glx
```

🧰 Tools
🪛 yamllint (1.35.1)
[error] 101-101: trailing spaces (trailing-spaces)
.github/workflows/build_manual.yml (1)
Line range hint 101-114: Consider enhancing environment info verification

The environment info printing is helpful for debugging. Consider adding a version comparison to ensure the correct Python version is being used.

```diff
 python --version
+if [[ $(python -c 'import sys; print(sys.version_info[0:2] >= (3,10))') != "True" ]]; then
+  echo "Error: Python version must be >= 3.10"
+  exit 1
+fi
```
233-234
: Improved window closing logic with proper cleanup.The changes enhance the window closing behavior by:
- Using a clear flag to control window closing
- Adding proper cleanup of player resources
- Maintaining the existing save/discard/cancel functionality
This is a good improvement that ensures proper resource cleanup and maintains user control over unsaved changes.
Consider adding error handling around the player cleanup call to ensure the window can still close even if cleanup fails:
if will_accept: - self.player.cleanup() - event.accept() + try: + self.player.cleanup() + except Exception as e: + logger.error(f"Error during player cleanup: {e}") + finally: + event.accept()Also applies to: 237-237, 254-254, 259-263
.conda_mac/README.md (1)
1-1: Fix capitalization of "Apple Silicon"

Update the product name capitalization to match Apple's official branding.

```diff
-This folder defines the conda package build for Apple silicon Macs.
+This folder defines the conda package build for Apple Silicon Macs.
```

🧰 Tools
🪛 LanguageTool
[grammar] ~1-~1: Did you mean the proper noun "Apple Silicon"? (APPLE_PRODUCTS)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (59)
- .conda/bld.bat (1 hunks)
- .conda/condarc.yaml (1 hunks)
- .conda/meta.yaml (2 hunks)
- .conda/sleap_activate.bat (1 hunks)
- .conda/sleap_activate.sh (1 hunks)
- .conda/sleap_deactivate.bat (1 hunks)
- .conda/sleap_deactivate.sh (1 hunks)
- .conda_mac/README.md (1 hunks)
- .conda_mac/build.sh (1 hunks)
- .conda_mac/condarc.yaml (1 hunks)
- .conda_mac/meta.yaml (1 hunks)
- .conda_mac/sleap_activate.sh (1 hunks)
- .conda_mac/sleap_deactivate.sh (1 hunks)
- .github/ISSUE_TEMPLATE/bug_report.md (1 hunks)
- .github/workflows/build.yml (4 hunks)
- .github/workflows/build_conda_ci.yml (1 hunks)
- .github/workflows/build_manual.yml (5 hunks)
- .github/workflows/build_pypi_ci.yml (4 hunks)
- .github/workflows/ci.yml (2 hunks)
- .github/workflows/website.yml (1 hunks)
- .gitignore (1 hunks)
- dev_requirements.txt (1 hunks)
- docs/conf.py (1 hunks)
- docs/installation.md (4 hunks)
- environment.yml (1 hunks)
- environment_mac.yml (1 hunks)
- environment_no_cuda.yml (1 hunks)
- pypi_requirements.txt (1 hunks)
- requirements.txt (0 hunks)
- sleap/__init__.py (1 hunks)
- sleap/gui/app.py (2 hunks)
- sleap/gui/commands.py (1 hunks)
- sleap/gui/dialogs/importvideos.py (1 hunks)
- sleap/gui/dialogs/missingfiles.py (0 hunks)
- sleap/gui/widgets/video.py (4 hunks)
- sleap/info/feature_suggestions.py (1 hunks)
- sleap/io/dataset.py (2 hunks)
- sleap/io/video.py (1 hunks)
- sleap/io/videowriter.py (1 hunks)
- sleap/io/visuals.py (1 hunks)
- sleap/nn/callbacks.py (1 hunks)
- sleap/nn/data/confidence_maps.py (3 hunks)
- sleap/nn/data/dataset_ops.py (2 hunks)
- sleap/nn/data/edge_maps.py (1 hunks)
- sleap/nn/data/general.py (4 hunks)
- sleap/nn/data/grouping.py (1 hunks)
- sleap/nn/data/identity.py (2 hunks)
- sleap/nn/data/inference.py (5 hunks)
- sleap/nn/data/instance_centroids.py (1 hunks)
- sleap/nn/data/instance_cropping.py (2 hunks)
- sleap/nn/data/normalization.py (1 hunks)
- sleap/nn/data/resizing.py (3 hunks)
- sleap/nn/tracking.py (1 hunks)
- sleap/nn/training.py (1 hunks)
- sleap/skeleton.py (2 hunks)
- sleap/version.py (1 hunks)
- tests/io/test_videowriter.py (1 hunks)
- tests/io/test_visuals.py (1 hunks)
- tests/test_skeleton.py (2 hunks)
💤 Files with no reviewable changes (2)
- requirements.txt
- sleap/gui/dialogs/missingfiles.py
✅ Files skipped from review due to trivial changes (19)
- .gitignore
- .github/workflows/build_conda_ci.yml
- sleap/version.py
- .conda_mac/sleap_activate.sh
- sleap/nn/data/normalization.py
- sleap/nn/data/general.py
- sleap/nn/data/resizing.py
- sleap/nn/data/edge_maps.py
- sleap/nn/data/inference.py
- sleap/nn/data/instance_cropping.py
- .github/ISSUE_TEMPLATE/bug_report.md
- docs/conf.py
- sleap/nn/data/grouping.py
- sleap/io/video.py
- sleap/skeleton.py
- sleap/nn/data/confidence_maps.py
- sleap/gui/dialogs/importvideos.py
- sleap/nn/data/instance_centroids.py
- sleap/info/feature_suggestions.py
🚧 Files skipped from review as they are similar to previous changes (3)
- .conda_mac/condarc.yaml
- .github/workflows/ci.yml
- sleap/io/videowriter.py
👮 Files not reviewed due to content moderation or server errors (3)
- sleap/gui/widgets/video.py
- sleap/io/dataset.py
- sleap/gui/commands.py
🧰 Additional context used
🪛 yamllint (1.35.1)

.github/workflows/build_pypi_ci.yml
[error] 87-87: trailing spaces (trailing-spaces)
[error] 101-101: trailing spaces (trailing-spaces)
[error] 114-114: trailing spaces (trailing-spaces)

.conda_mac/meta.yaml
[error] 32-32: trailing spaces (trailing-spaces)
🪛 actionlint (1.7.4)

.github/workflows/build.yml
114-114: shellcheck reported issue in this script: SC2086:info:2:41: Double quote to prevent globbing and word splitting (shellcheck)
123-123: shellcheck reported issue in this script: SC2086:info:2:41: Double quote to prevent globbing and word splitting (shellcheck)

.github/workflows/build_manual.yml
127-127: shellcheck reported issue in this script: SC2086:info:2:41: Double quote to prevent globbing and word splitting (shellcheck)
136-136: shellcheck reported issue in this script: SC2086:info:2:41: Double quote to prevent globbing and word splitting (shellcheck)
🪛 Ruff (0.8.2)
sleap/nn/callbacks.py
7-7: qtpy imported but unused. Remove unused import: qtpy (F401)
🪛 LanguageTool

pypi_requirements.txt
[uncategorized] ~36-~36: When 'Mac-specific' is used as a modifier, it is usually spelled with a hyphen. (SPECIFIC_HYPHEN)
[style] ~44-~44: Using many exclamation marks might seem excessive (in this case: 3 exclamation marks for a text that's 1434 characters long). (EN_EXCESSIVE_EXCLAMATION)

.conda_mac/README.md
[grammar] ~1-~1: Did you mean the proper noun "Apple Silicon"? (APPLE_PRODUCTS)
🔇 Additional comments (34)
sleap/nn/data/identity.py (2)
109-109: LGTM! Good migration to stable TensorFlow API.

The update from tf.data.experimental.AUTOTUNE to tf.data.AUTOTUNE aligns with TensorFlow's stable API.

Let's check for any remaining experimental AUTOTUNE usage in the codebase:

✅ Verification successful: all AUTOTUNE usages are using the stable API

The search results confirm that all instances of AUTOTUNE in the codebase are already using the stable tf.data.AUTOTUNE API. No instances of the experimental API (tf.data.experimental.AUTOTUNE) were found. The migration is complete and consistent across the codebase.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:
```shell
#!/bin/bash
# Search for any remaining usage of experimental AUTOTUNE
rg "tf\.data\.experimental\.AUTOTUNE" -l
```
Length of output: 42

Script:
```shell
#!/bin/bash
# Broader search for AUTOTUNE usage to ensure completeness
rg "AUTOTUNE" -A 2 -B 2
```
Length of output: 8283
200-200: LGTM! Consistent API usage.

The change maintains consistency with the earlier update to use tf.data.AUTOTUNE.

Let's verify the TensorFlow version compatibility:

✅ Verification successful: the TensorFlow usage is compatible and consistent.

The codebase shows consistent TensorFlow usage with version requirements properly specified:
- Main requirement: tensorflow==2.9.2 for non-ARM platforms
- Special handling for ARM64 macOS: tensorflow-macos >=2.10.0,<2.13.0
- The tf.data.AUTOTUNE feature is available across all these supported versions

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:
```shell
#!/bin/bash
# Check TensorFlow version requirements in setup files
rg "tensorflow[>=<]" --glob "setup*.{py,cfg}"
# Check for any TensorFlow version specifications in environment files
rg "tensorflow[>=<]" --glob "*environment*.{yml,yaml}"
```
Length of output: 104

Script:
```shell
#!/bin/bash
# Check for any tensorflow imports or version requirements in the codebase
rg "tensorflow" --type python -A 2 -B 2
# Also check for any requirements files that might specify tensorflow
fd "requirements" --extension txt --extension in
```
Length of output: 204

Script:
```shell
#!/bin/bash
# Check all requirements files for tensorflow specifications
cat requirements.txt dev_requirements.txt jupyter_requirements.txt pypi_requirements.txt | grep -i "tensorflow"
# Try searching for tensorflow imports without file type specification
rg "import.*tensorflow" -A 2 -B 2
rg "tensorflow.*import" -A 2 -B 2
```
Length of output: 19099
sleap/nn/data/dataset_ops.py (3)
141-141: LGTM: Updated to use stable TensorFlow API

The change from tf.data.experimental.AUTOTUNE to tf.data.AUTOTUNE aligns with TensorFlow's stable API, as the feature has graduated from experimental status.

158-158: LGTM: Consistent API update

The change to tf.data.AUTOTUNE maintains consistency with the stable TensorFlow API.

244-249: LGTM: Enhanced prefetching with automatic optimization

The update to use tf.data.AUTOTUNE as the default buffer size allows TensorFlow to automatically optimize the prefetch buffer size for better performance.

Let's verify that all AUTOTUNE references have been consistently updated:

✅ Verification successful: AUTOTUNE usage is consistent across the codebase

The verification shows that all AUTOTUNE references in the codebase are using the modern tf.data.AUTOTUNE API consistently. There are no remaining instances of the experimental API (tf.data.experimental.AUTOTUNE), and the new usage aligns with TensorFlow's current best practices for performance optimization.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:
```shell
#!/bin/bash
# Description: Check for any remaining experimental AUTOTUNE usage
rg "tf\.data\.experimental\.AUTOTUNE" --type py
# Description: Verify the new AUTOTUNE usage
rg "tf\.data\.AUTOTUNE" --type py
```
Length of output: 2975
.conda/condarc.yaml (1)
2-2: Verify channel priorities and availability

Ensure that the new channel sleap-deps is correctly configured and accessible. Also, verify that the removal of sleap and other channels does not impact dependency resolution.

Run the following script to check the availability of the channels:

.conda/sleap_deactivate.bat (1)

1-3: LGTM! Implementation is complete and follows Windows conventions

The script correctly handles both environment variables and uses proper Windows batch file syntax.
dev_requirements.txt (2)
24-25: LGTM: Well-documented attrs dependency

The comment clearly explains the attrs version constraint, which is necessary for cattrs compatibility.

13-14: Verify myst-nb and myst-parser compatibility with Python 3.10

The pinned versions are explained by the attrs dependency, but we should verify Python 3.10 compatibility.

✅ Verification successful: both packages are compatible with Python 3.10

myst-parser 4.0.0 requires Python >=3.10 and myst-nb 1.1.1 requires Python >=3.9, making both packages fully compatible with Python 3.10.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:
```shell
#!/bin/bash
# Check Python version compatibility for myst-nb and myst-parser
curl -s https://pypi.org/pypi/myst-nb/1.1.1/json | jq '.info.requires_python'
curl -s https://pypi.org/pypi/myst-parser/4.0.0/json | jq '.info.requires_python'
```
Length of output: 176
pypi_requirements.txt (3)
7-8: LGTM: Version updates for attrs and cattrs are appropriate.

The update to attrs>=23.1.0 and cattrs>=23.2.0 aligns with Python 3.10 compatibility requirements.

34-39: Review platform-specific TensorFlow configuration.

The TensorFlow configuration looks appropriate with platform-specific constraints:
- Regular platforms: tensorflow==2.9.2
- Silicon Mac: tensorflow-macos >=2.10.0,<2.13.0 with metal support

🧰 Tools
🪛 LanguageTool
[uncategorized] ~36-~36: When 'Mac-specific' is used as a modifier, it is usually spelled with a hyphen. (SPECIFIC_HYPHEN)

12-14: Verify numpy version constraint impact.

The numpy version constraint >=1.16.5,<2.0 is quite broad. While it satisfies scipy's requirement, we should verify compatibility with other dependencies.

✅ Verification successful

The numpy version constraint is appropriate and should be maintained.

The current numpy version constraint >=1.16.5,<2.0 is well-justified because:
- It satisfies the explicit scipy dependency requirement (numpy >=1.16.5)
- The codebase shows extensive use of numpy's core functionality without any advanced features requiring newer versions
- The same version constraint is consistently used across all environment files (environment.yml, environment_mac.yml, environment_no_cuda.yml)
- The upper bound <2.0 provides stability by preventing potential breaking changes from major version updates

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:
```shell
#!/bin/bash
# Check compatibility of numpy version with other dependencies
rg -l "numpy\s*(>=|==|<=|!=|~=)" | while read -r file; do
  echo "=== $file ==="
  rg "numpy\s*(>=|==|<=|!=|~=)" "$file"
done
```
Length of output: 857

Script:
```shell
#!/bin/bash
# Search for imports and usage of numpy in Python files
rg -l "import.*numpy|from.*numpy" | while read -r file; do
  echo "=== $file ==="
  rg "import.*numpy|from.*numpy" "$file"
done
# Also check setup.py and requirements files for other dependencies that might require numpy
rg -g "setup.py|*requirements*.txt|*.yml" "numpy|scipy" -A 2 -B 2
```
Length of output: 23826
environment_mac.yml (4)
28-29
: LGTM: PySide6 and Python version constraints are well-documented.
The version constraints are appropriate and well-documented:
- PySide6:
>=6.5.0,<6.8.0
with clear rationale about QtCharts and video display issues - Python:
>=3.10.0,<3.11.0
aligns with PR objectives
35-37
: LGTM: Scientific package updates are compatible.
The version constraints for scipy and scikit-learn are appropriate for Python 3.10:
- scipy:
>=1.7.0
- scikit-learn:
>=1.0.0
6-8
: Verify channel priority order.
The channel order (sleap-deps > conda-forge > anaconda) might affect package resolution. Ensure this order is intentional and consistent with other environment files.
✅ Verification successful
Channel priority order is consistent with project requirements
The channel order in environment_mac.yml
(sleap-deps > conda-forge > anaconda) aligns with the project's channel configuration pattern. This is evidenced by similar ordering in other environment files:
environment.yml
andenvironment_no_cuda.yml
both prioritize sleap-deps and conda-forge channels- Only
environment_build.yml
differs as it's specifically for build purposes - The presence of sleap-deps as the highest priority channel is consistent across all non-build environment files
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check channel configuration across environment files
fd -e yml -e yaml | while read -r file; do
echo "=== $file ==="
rg "channels:" -A 5 "$file"
done
Length of output: 2501
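To make the resolution order concrete, the channel block these environment files share can be sketched as follows (illustrative only; the `channel_priority` setting is an assumption, not something confirmed by the files above):

```yaml
# conda consults channels top-down: a package found in sleap-deps is
# preferred over the same package in conda-forge or anaconda.
channels:
  - sleap-deps
  - conda-forge
  - anaconda
channel_priority: strict  # assumption: not verified in the repo's condarc
```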
41-43
: Verify albumentations compatibility with new dependencies.
The addition of albumentations and lazy_loader looks good, but we should verify compatibility with the updated numpy and scipy versions.
✅ Verification successful
Let me gather more information about the numpy and scipy version constraints to verify compatibility.
Compatibility between albumentations and numpy/scipy versions is verified
The environment files show consistent version constraints:
- numpy >=1.16.5
- scipy >=1.7.0
- albumentations >=1.4.15
These versions are compatible as:
- The numpy and scipy versions are minimum constraints that align with Python 3.10 requirements
- Albumentations 1.4.15+ works with these numpy/scipy versions
- The codebase shows clean integration with albumentations without version-specific dependencies
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check for any version constraints or compatibility issues
rg -l "albumentations|lazy[_-]loader" | while read -r file; do
echo "=== $file ==="
rg "albumentations|lazy[_-]loader" -A 2 -B 2 "$file"
done
Length of output: 5710
Script:
#!/bin/bash
# Check numpy and scipy version constraints in environment files and requirements
echo "=== Checking numpy/scipy versions in environment files ==="
rg "numpy|scipy" environment*.yml
echo -e "\n=== Checking albumentations requirements ==="
rg -A 5 -B 5 "dependencies" environment*.yml | grep -E "numpy|scipy|albumentations"
echo -e "\n=== Checking if there are any explicit numpy/scipy imports with albumentations ==="
rg -B 2 -A 2 "import.*albumentations"
Length of output: 1610
environment_no_cuda.yml (1)
29-30
: Verify TensorFlow compatibility with Python 3.10
The specified TensorFlow version (2.9.2) is compatible with Python 3.10, but it's worth noting that this is one of the last TensorFlow versions supporting Python 3.10. Future updates might be constrained by this combination.
Consider documenting this version constraint in the project's documentation to help maintainers understand the rationale behind these specific versions.
Also applies to: 40-41
✅ Verification successful
The previous attempt with pip commands failed as pip is not available in the sandbox. Let's try to find any documentation or configuration files that might provide information about the version constraints and compatibility requirements.
Let's examine the actual content of these environment files and any related documentation to understand if there are documented reasons for these specific version constraints.
Python and TensorFlow versions are correctly constrained based on compatibility requirements
The version constraints are well-documented and justified in the environment files:
- The Python 3.10 constraint is tied to compatibility with key dependencies:
  - "scipy >=1.7.0 # Python 3.10 is compatible with scipy >=1.7.0"
  - "scikit-learn >=1.0.0 # Python 3.10 is compatible with scikit-learn >=1.0.0"
- TensorFlow 2.9.2 is specifically pinned across non-Mac environments, with special handling for Apple Silicon (2.12.0), showing intentional version management for different platforms.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Verify TensorFlow compatibility with Python versions
pip debug --verbose | grep -A 5 "Compatible Python versions"
pip index versions tensorflow | grep "2.9.2"
Length of output: 295
Script:
#!/bin/bash
# Let's check if there are any configuration files, documentation, or test files
# that might provide insights about the Python and TensorFlow version constraints
fd -e md -e txt -e rst -e py -e yml -e yaml . | xargs rg -l "tensorflow.*2\.9|python.*3\.10"
# Also search for any comments or documentation about version constraints
rg -i "tensorflow.*compatibility|python.*version|version.*constraint"
Length of output: 885
Script:
#!/bin/bash
# Let's examine the content of the environment files to understand the version constraints
cat environment.yml environment_no_cuda.yml environment_mac.yml pypi_requirements.txt
# Also check if there are any documented reasons for version constraints
rg -A 3 -B 3 "3\.10|2\.9\.2" docs/
Length of output: 13478
tests/io/test_videowriter.py (2)
2-4
: LGTM: Lazy loading implementation looks good
The lazy loading implementation for cv2 is correct and aligns with the project's optimization goals.
Line range hint 65-97
: Verify frame resize compatibility
The test for odd-sized frames uses cv2.resize directly. Since cv2 is now lazy-loaded, we should verify that the resize operation works correctly in this context.
✅ Verification successful
Let me check how cv2 is imported in these files to understand the lazy loading implementation.
Let me try a different approach to check the imports and lazy loading implementation.
Let me check the implementation of lazy loading in the video-related files to ensure consistency.
Lazy loading is properly implemented for cv2.resize usage
The test file correctly uses lazy loading for cv2 with cv2 = lazy_loader.load("cv2") at the top of the file. This is consistent with the implementation in both video.py and videowriter.py. The resize operation will work correctly in this context, as the lazy loading ensures cv2 is available when needed.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for other cv2.resize usages that might need similar lazy loading
rg "cv2\.resize" --type py
Length of output: 676
Script:
#!/bin/bash
# Check how cv2 is imported in files that use cv2.resize
rg "import.*cv2|from.*cv2" -A 2 --type py sleap/nn/tracking.py sleap/io/visuals.py sleap/info/feature_suggestions.py sleap/gui/dialogs/importvideos.py tests/io/test_videowriter.py
Length of output: 181
Script:
#!/bin/bash
# First, let's check the content of test_videowriter.py to see its imports
cat tests/io/test_videowriter.py
# Then check if there's a lazy loading implementation for cv2
rg -l "cv2" sleap/io/
Length of output: 3090
Script:
#!/bin/bash
# Check how cv2 is imported in the video-related files
cat sleap/io/video.py | head -n 20
cat sleap/io/videowriter.py | head -n 20
Length of output: 805
.conda_mac/meta.yaml (2)
26-27
: Python version constraint aligns with PR objectives
The update to Python 3.10 is consistent with the PR's goal to upgrade from Python 3.7.12. The upper bound constraint <3.11.0 is appropriate given the TensorFlow compatibility requirements mentioned in the PR description.
57-59
: Verify tensorflow and albumentations compatibility
The specified versions:
- tensorflow 2.12.0: Newer than mentioned in PR (2.7.0)
- albumentations >=1.4.15: Added with comment about handling NaN keypoints
✅ Verification successful
TensorFlow and albumentations versions are compatible and justified
The specified versions are appropriate because:
- TensorFlow 2.12.0 supports Python >=3.8, which is compatible with the project
- The codebase already works with TensorFlow 2.6+ as mentioned in docs/help.md
- The albumentations >=1.4.15 requirement is consistently specified across environment files with the comment about NaN keypoints
- There's evidence of NaN keypoint handling in the codebase (sleap/nn/tracker/components.py)
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check TensorFlow compatibility with Python 3.10
curl -s "https://pypi.org/pypi/tensorflow/2.12.0/json" | jq '.info.requires_python'
# Check albumentations release notes for NaN keypoint handling
curl -s "https://pypi.org/pypi/albumentations/1.4.15/json" | jq '.info.summary'
Length of output: 305
Script:
#!/bin/bash
# Check if there are any specific tensorflow version requirements in the codebase
rg -i "tensorflow.*version|version.*tensorflow" -g "!{*.lock,*.json,*.yaml,*.txt}"
# Check for any NaN keypoint handling in the codebase
rg -i "nan.*keypoint|keypoint.*nan" -A 2
# Look for any albumentations usage patterns
ast-grep --pattern 'import albumentations'
Length of output: 2275
.conda/meta.yaml (2)
98-105
: New test section with headless OpenCV
The addition of headless OpenCV test requirement is well-documented with a reference to the relevant issue. The commented-out import test has been moved to the build workflow as noted.
Line range hint 25-28
: Verify CUDA toolkit compatibility
The CUDA dependencies are pinned to specific versions:
- cudatoolkit: 11.3.1
- cudnn: 8.2.1
- cuda-nvcc: 11.3
These versions should be compatible with TensorFlow 2.9.2.
✅ Verification successful
Since the PyPI API didn't provide the information we need, let's examine both meta.yaml files to understand the context of these CUDA dependencies and their relationship with TensorFlow versions. This will help us verify if the CUDA toolkit versions are appropriately configured for the respective TensorFlow versions.
CUDA toolkit configuration is correctly aligned with TensorFlow versions
The CUDA dependencies in .conda/meta.yaml (cudatoolkit 11.3.1, cudnn 8.2.1, cuda-nvcc 11.3) are correctly configured for TensorFlow 2.9.2. This is evident from:
- The configuration follows a common pattern where TensorFlow 2.9.x is compatible with CUDA 11.3
- The same CUDA dependencies are consistently specified in both the build and run requirements
- The .conda_mac/meta.yaml appropriately omits CUDA dependencies for macOS while using a newer TensorFlow 2.12.0
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check TensorFlow 2.9.2 CUDA compatibility
curl -s "https://pypi.org/pypi/tensorflow/2.9.2/json" | jq '.info.requires_dist[]' | grep -i cuda
Length of output: 97
Script:
#!/bin/bash
# Let's check the TensorFlow version and CUDA dependencies in both meta.yaml files
echo "=== .conda/meta.yaml ==="
cat .conda/meta.yaml | grep -A 5 "cudatoolkit\|tensorflow"
echo -e "\n=== .conda_mac/meta.yaml ==="
cat .conda_mac/meta.yaml | grep -A 5 "cudatoolkit\|tensorflow"
Length of output: 1481
tests/io/test_visuals.py (2)
4-6
: Approve lazy loading implementation for cv2
The change to lazy load cv2 is a good optimization that:
- Reduces initial import time
- Maintains functionality
- Follows modern Python practices
Line range hint 83-106
: Verify cv2 lazy loading in test scenarios
The test uses cv2 for video capture operations. Ensure lazy loading doesn't affect test reliability, especially in the background color verification tests.
✅ Verification successful
Lazy loading of cv2 is properly handled in test files
The test files are correctly using the same lazy loading pattern as the main codebase. Both test_visuals.py and test_videowriter.py use lazy_loader.load("cv2"), which maintains consistency with the application code. This approach ensures reliable test execution and proper initialization of cv2 when needed.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for other cv2 usage in tests
rg "cv2\." "tests/" -l
# Check for any existing cv2 import related issues
rg "ImportError.*cv2" "tests/"
Length of output: 130
Script:
#!/bin/bash
# Check cv2 import patterns in test files
rg "import.*cv2" tests/ -A 2
# Check if there's any explicit lazy loading mechanism for cv2
rg "lazy.*cv2|import.*lazy.*cv2" -i
# Look for any cv2 initialization or setup in test fixtures
ast-grep --pattern 'def setup$_($$$) { $$$ cv2 $$$ }'
Length of output: 569
.github/workflows/build_pypi_ci.yml (2)
14-14
: LGTM: Workflow file included in trigger paths
Including the workflow file in the trigger paths ensures the workflow runs when its configuration changes.
29-29
: Verify TensorFlow compatibility with Python 3.10
The Python version update to 3.10 aligns with the PR objectives. However, as mentioned in the PR description, TensorFlow 2.7.0 is only compatible with Python 3.7-3.9.
.github/workflows/build_manual.yml (2)
9-17
: LGTM: Well-structured trigger paths
The trigger paths are comprehensive and properly exclude README files while including all necessary build-related files.
127-127
: Fix shell scripting best practices
Same issue as identified in build_pypi_ci.yml regarding the unquoted $BUILD_PATH.
Also applies to: 136-136
🧰 Tools
🪛 actionlint (1.7.4)
127-127: shellcheck reported issue in this script: SC2086:info:2:41: Double quote to prevent globbing and word splitting
(shellcheck)
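For context on the SC2086 finding, here is a minimal illustration of why the unquoted expansion matters (the path is hypothetical; `BUILD_PATH` in the workflows may never contain spaces, but quoting makes that a non-issue):

```sh
#!/bin/sh
# Unquoted variables undergo word splitting: a value containing a space
# becomes two separate arguments, which is what SC2086 warns about.
BUILD_PATH="build output/dir"   # hypothetical value with a space

set -- $BUILD_PATH              # unquoted: splits into 2 words
unquoted=$#

set -- "$BUILD_PATH"            # quoted: stays a single word
quoted=$#

echo "unquoted=$unquoted quoted=$quoted"   # prints: unquoted=2 quoted=1
```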
.github/workflows/build.yml (2)
21-21
: Verify TensorFlow compatibility with Python 3.10
Same concern as identified in build_pypi_ci.yml regarding TensorFlow 2.7.0 compatibility with Python 3.10.
Also applies to: 25-25
114-114
: Fix shell scripting best practices
Same issue as identified in build_pypi_ci.yml regarding the unquoted $BUILD_PATH.
Also applies to: 123-123
🧰 Tools
🪛 actionlint (1.7.4)
114-114: shellcheck reported issue in this script: SC2086:info:2:41: Double quote to prevent globbing and word splitting
(shellcheck)
tests/test_skeleton.py (1)
19-19
: Verify networkx API compatibility.
The addition of the edges="links" parameter to json_graph.node_link_data calls appears to be related to networkx API changes. Let's verify this change is required for Python 3.10 compatibility.
Also applies to: 43-43
✅ Verification successful
Let me gather more information about the networkx version and API changes.
Let me check the networkx version history and documentation to verify this API change.
Based on the gathered information, I can now provide a final response:
The edges="links" parameter addition is consistent with the networkx API
The codebase shows consistent usage of the edges="links" parameter in both test and implementation files (sleap/skeleton.py and tests/test_skeleton.py). From requirements.txt, networkx is specified without version constraints, indicating flexibility with newer versions. The parameter is part of the standard networkx API for converting between graph formats, where "links" is the conventional name for edge data in the node-link format.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check networkx version requirements and API changes
# Check if networkx is listed in environment files
rg -A 5 "networkx" environment.yml .conda*/environment.yml
# Check networkx API changes in the codebase
ast-grep --pattern 'json_graph.node_link_data($_)'
Length of output: 540
Script:
#!/bin/bash
# Check networkx imports and version requirements
rg -A 3 "import.*networkx"
# Search for all node_link_data usage
rg "node_link_data" -A 3
# Check setup.py or requirements.txt for version constraints
fd "setup.py|requirements.txt" --exec cat {}
Length of output: 7715
Script:
#!/bin/bash
# Get the git history for these specific files to see when this change was introduced
git log -p tests/test_skeleton.py sleap/skeleton.py | grep -A 5 "node_link_data"
# Look for any migration notes or version compatibility comments
rg -i "networkx.*version|networkx.*compat"
Length of output: 44
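For readers unfamiliar with the format, the node-link dictionary that `node_link_data` produces keeps edge records under a "links" key. A minimal hand-written sketch of that shape (node names are hypothetical, not taken from the SLEAP skeleton files):

```python
# Sketch of the node-link format controlled by the edges="links" argument:
# nodes live under "nodes" and edge records under "links".
node_link = {
    "directed": True,
    "multigraph": True,
    "graph": {"name": "example-skeleton"},
    "nodes": [{"id": "head"}, {"id": "thorax"}],
    "links": [{"source": "head", "target": "thorax"}],
}

# Round-tripping a skeleton relies on these keys being stable.
assert "links" in node_link
assert node_link["links"][0] == {"source": "head", "target": "thorax"}
```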
sleap/io/visuals.py (1)
19-21
: LGTM: Lazy loading of OpenCV improves startup performance.
The implementation of lazy loading for OpenCV is a good practice that:
- Improves startup time by loading OpenCV only when needed
- Reduces memory usage when OpenCV functions are not used
- Aligns with the PR objectives for dependency management
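For readers unfamiliar with the pattern, the deferred-import idea can be sketched with the standard library's `importlib.util.LazyLoader` (an analogue of `lazy_loader.load`, not SLEAP's actual implementation; `fractions` stands in for cv2 here):

```python
import importlib.util
import sys

def lazy_import(name: str):
    """Return a module whose body executes only on first attribute access."""
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)  # registers the deferred module; no code runs yet
    return module

frac = lazy_import("fractions")           # nothing executed yet
assert str(frac.Fraction(1, 2)) == "1/2"  # first access triggers the real import
```

The payoff is the same as with lazy_loader: the import cost of a heavy module like cv2 is paid only if and when it is actually used.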
.conda_mac/README.md (1)
Line range hint 9-11
: Verify conda channels for updated dependencies
Given the PR's transition to Python 3.10 and new dependencies like PySide 6, let's verify if the conda channels are sufficient.
✅ Verification successful
Let me generate another verification script to check the conda channels configuration and package availability in a different way.
This script will help us verify:
- The configured conda channels in condarc.yaml
- The package requirements and their versions in meta.yaml
- The build instructions in README.md to ensure channel configuration is properly documented
This information will help us determine if the channels are properly configured and if all required packages are accessible through these channels.
Conda channels are properly configured for Python 3.10 and new dependencies
The verification shows that:
- The conda channels are correctly configured in condarc.yaml with the required channels:
  - sleap-deps (for tensorflow)
  - conda-forge (for most packages)
  - anaconda
- The build instructions in README.md properly include the necessary channels via -c conda-forge -c anaconda
- All packages in meta.yaml are explicitly prefixed with their channels (conda-forge, sleap-deps)
- Version constraints are properly specified for Python 3.10 compatibility:
  - Python: >=3.10.0,<3.11.0
  - scipy: >=1.7.0 (Python 3.10 compatible)
  - scikit-learn: >=1.0.0 (Python 3.10 compatible)
  - PySide6: >=6.5.0,<6.8.0
The channel configuration is sufficient for all dependencies, including the transition to Python 3.10 and PySide6.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check if all required packages are available in the specified channels
# Extract package requirements from meta.yaml
echo "Checking package availability in conda channels..."
if [ -f ".conda_mac/meta.yaml" ]; then
# Extract requirements and check each package
grep "requirements:" -A 50 .conda_mac/meta.yaml | grep -E "^\s+-\s+.*" | while read -r pkg; do
pkg=$(echo "$pkg" | sed 's/^\s*-\s*//' | sed 's/\s*>=.*$//' | sed 's/\s*==.*$//')
echo "Checking $pkg..."
conda search -c conda-forge -c anaconda "$pkg" --json | jq -r '.[]|select(.name=="'"$pkg"'").version' | head -n1
done
fi
# Check if sleap-deps channel is needed
if grep -q "sleap-deps" .conda_mac/condarc.yaml 2>/dev/null; then
echo "Note: sleap-deps channel found in condarc.yaml but not in build instructions"
fi
Length of output: 21642
Script:
#!/bin/bash
# Check conda channels configuration and package availability
# Check condarc.yaml content
echo "=== Checking condarc.yaml configuration ==="
cat .conda_mac/condarc.yaml
# Check meta.yaml content
echo -e "\n=== Checking meta.yaml configuration ==="
cat .conda_mac/meta.yaml
# Check if there are any build instructions in README
echo -e "\n=== Checking README for build instructions ==="
cat .conda_mac/README.md
Length of output: 4331
🧰 Tools
🪛 LanguageTool
[grammar] ~1-~1: Did you mean the proper noun “Apple Silicon”?
Context: ...der defines the conda package build for Apple silicon Macs. To build, first go to the base r...
(APPLE_PRODUCTS)
@@ -240,7 +240,7 @@ SLEAP can be installed three different ways: via {ref}`conda package<condapackag
     ```
 3. Finally, we can perform the `pip install`:
     ```bash
-    pip install sleap[pypi]==1.4.1a2
+    pip install sleap[pypi]==1.4.1a3
Update Python version requirements for pip installation
The pip installation section still references Python 3.7 as a requirement, but according to the PR objectives, the project is being updated to Python 3.10. Please update the Python version requirements in the pip installation instructions.
@@ -5,12 +5,12 @@
 ````{tabs}
 ```{group-tab} Windows and Linux
 ```bash
-conda create -y -n sleap -c conda-forge -c nvidia -c sleap -c anaconda sleap=1.4.1a2
+conda create -y -n sleap -c conda-forge -c nvidia -c sleap -c anaconda sleap=1.4.1a3
💡 Codebase verification
Version and Python requirements need to be updated
The codebase shows inconsistencies that need to be addressed:
- Current version in sleap/version.py is "1.4.1a3"
- Environment files specify Python 3.10 requirements (>=3.10.0,<3.11.0)
- However, setup.py still shows an outdated Python requirement (python_requires=">=3.6")
Please update:
- Installation documentation to target SLEAP v1.5.0 as per PR objectives
- setup.py to require Python >=3.10 to match the environment files
- All version references in documentation to be consistent with v1.5.0
🔗 Analysis chain
Version number inconsistency with PR objectives
The PR objectives state that this update is for SLEAP v1.5.0, but these changes update the version to v1.4.1a3. Please verify the correct target version and update accordingly.
Also applies to: 13-13, 150-150, 160-160, 243-243
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check version numbers across the codebase
# Look for version strings in Python files and setup configurations
rg -g '!docs/' '1\.4\.1a3|1\.5\.0'
Length of output: 496
Script:
#!/bin/bash
# Check for any references to Python version requirements
rg -g '!docs/' 'python.*3\.[0-9]+' -A 2
# Check setup.py or similar for version requirements
fd setup.py --exec cat {}
Length of output: 4059
@@ -4,6 +4,7 @@
 import logging
 import numpy as np
 import tensorflow as tf
+import qtpy
🛠️ Refactor suggestion
Remove unused import qtpy
The qtpy module is imported but not used anywhere in the code.
Apply this diff to remove the unused import:
- import qtpy
🧰 Tools
🪛 Ruff (0.8.2)
7-7: qtpy imported but unused (F401)
Remove unused import: qtpy
-import cv2
+import lazy_loader
+
+cv2 = lazy_loader.load("cv2")
Ensure lazy_loader is included in dependencies
The use of lazy_loader.load("cv2") requires the lazy_loader package. Ensure that lazy_loader is added to the project's dependencies to prevent ModuleNotFoundError.
#!/bin/sh

# Reset to the old variables when deactivating the environment
export NO_ALBUMENTATIONS_UPDATE=$SLEAP_OLD_NO_ALBUMENTATIONS_UPDATE
Add missing XLA_FLAGS reset for consistency
The script is missing the XLA_FLAGS environment variable reset that is present in other platform-specific deactivation scripts. This inconsistency could affect TensorFlow behavior when switching environments.
Apply this diff to maintain consistency across platforms:
#!/bin/sh
# Reset to the old variables when deactivating the environment
+export XLA_FLAGS=$SLEAP_OLD_XLA_FLAGS
export NO_ALBUMENTATIONS_UPDATE=$SLEAP_OLD_NO_ALBUMENTATIONS_UPDATE
.conda/sleap_deactivate.sh (outdated)
#!/bin/sh

-# Reset to the old library path for when deactivating the environment
-export LD_LIBRARY_PATH=$SLEAP_OLD_LD_LIBRARY_PATH
+# Reset to the old variables when deactivating the environment
+export LD_LIBRARY_PATH=$SLEAP_OLD_LD_LIBRARY_PATH
+export XLA_FLAGS=$SLEAP_OLD_XLA_FLAGS
+export NO_ALBUMENTATIONS_UPDATE=$SLEAP_OLD_NO_ALBUMENTATIONS_UPDATE
💡 Codebase verification
Missing environment variable backup in macOS activation script
The deactivation script attempts to restore SLEAP_OLD_NO_ALBUMENTATIONS_UPDATE, but the macOS activation script (.conda_mac/sleap_activate.sh) doesn't back up this variable. Additionally, it's missing backups for XLA_FLAGS and LD_LIBRARY_PATH.
.conda_mac/sleap_activate.sh needs to be updated to include:
export SLEAP_OLD_LD_LIBRARY_PATH=$LD_LIBRARY_PATH
export SLEAP_OLD_XLA_FLAGS=$XLA_FLAGS
export SLEAP_OLD_NO_ALBUMENTATIONS_UPDATE=$NO_ALBUMENTATIONS_UPDATE
🔗 Analysis chain
Verify corresponding activation script implementations
Let's ensure that all OLD_ variables are properly set in the corresponding activation scripts.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check if activation scripts properly set the OLD_ variables
echo "Checking activation scripts for proper variable backup..."
rg -l "SLEAP_OLD_" .conda*/sleap_activate.*
echo "\nChecking specific variable assignments in activation scripts..."
rg "SLEAP_OLD_(XLA_FLAGS|NO_ALBUMENTATIONS_UPDATE|LD_LIBRARY_PATH)" .conda*/sleap_activate.*
Length of output: 857
Script:
#!/bin/bash
# Let's check if NO_ALBUMENTATIONS_UPDATE is properly set in the activation scripts
# and also verify the mac-specific activation script
echo "Checking NO_ALBUMENTATIONS_UPDATE in all activation scripts..."
rg "NO_ALBUMENTATIONS_UPDATE" .conda*/sleap_activate.*
echo "\nChecking content of mac activation script..."
cat .conda_mac/sleap_activate.sh
Length of output: 740
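The save/restore pattern the review asks the activation script to complete can be sketched as a pair (variable names follow the SLEAP scripts; the override value is a placeholder, not the value the real script sets):

```sh
#!/bin/sh
# --- activate: back up the current values before overriding them ---
export SLEAP_OLD_LD_LIBRARY_PATH="$LD_LIBRARY_PATH"
export SLEAP_OLD_XLA_FLAGS="$XLA_FLAGS"
export SLEAP_OLD_NO_ALBUMENTATIONS_UPDATE="$NO_ALBUMENTATIONS_UPDATE"
export NO_ALBUMENTATIONS_UPDATE=1   # placeholder override

# --- deactivate: restore every value that activate backed up ---
export LD_LIBRARY_PATH="$SLEAP_OLD_LD_LIBRARY_PATH"
export XLA_FLAGS="$SLEAP_OLD_XLA_FLAGS"
export NO_ALBUMENTATIONS_UPDATE="$SLEAP_OLD_NO_ALBUMENTATIONS_UPDATE"
```

The key invariant is symmetry: every variable the activate script overrides must first be saved, and every saved variable must be restored on deactivate.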
do
  mkdir -p "${PREFIX}/etc/conda/${CHANGE}.d"
  cp "${RECIPE_DIR}/${PKG_NAME}_${CHANGE}.sh" "${PREFIX}/etc/conda/${CHANGE}.d/${PKG_NAME}_${CHANGE}.sh"
done
💡 Codebase verification
Missing activation/deactivation scripts
The build script attempts to copy activation/deactivation scripts from ${RECIPE_DIR} to conda's environment directories, but these scripts are not present in the repository. Neither the recipe directory nor any activation/deactivation scripts could be found, which will cause the build to fail.
Missing files:
- ${RECIPE_DIR}/${PKG_NAME}_activate.sh
- ${RECIPE_DIR}/${PKG_NAME}_deactivate.sh
🔗 Analysis chain
Verify activation script paths
The script assumes the existence of activation scripts in ${RECIPE_DIR}. Let's verify this assumption.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check if activation scripts exist in recipe directory
fd -t f "${PKG_NAME}_(activate|deactivate).sh$"
Length of output: 38
Script:
#!/bin/bash
# Let's try to find any activation/deactivate scripts in the repository
fd -t f "activate.sh|deactivate.sh"
# Also check the recipe directory structure
ls -la recipe/ 2>/dev/null || echo "No recipe directory found"
Length of output: 115
for %%F in (activate deactivate) DO (
    if not exist %PREFIX%\etc\conda\%%F.d mkdir %PREFIX%\etc\conda\%%F.d
    copy %RECIPE_DIR%\%%F.bat %PREFIX%\etc\conda\%%F.d\%PKG_NAME%_%%F.bat
    :: Copy unix shell activation scripts, needed by Windows Bash users
    copy %RECIPE_DIR%\%%F.sh %PREFIX%\etc\conda\%%F.d\%PKG_NAME%_%%F.sh
)
🛠️ Refactor suggestion
Add error handling for script copying operations
The script should handle potential errors during file operations.
for %%F in (activate deactivate) DO (
if not exist %PREFIX%\etc\conda\%%F.d mkdir %PREFIX%\etc\conda\%%F.d
- copy %RECIPE_DIR%\%%F.bat %PREFIX%\etc\conda\%%F.d\%PKG_NAME%_%%F.bat
- :: Copy unix shell activation scripts, needed by Windows Bash users
- copy %RECIPE_DIR%\%%F.sh %PREFIX%\etc\conda\%%F.d\%PKG_NAME%_%%F.sh
+ if exist %RECIPE_DIR%\%%F.bat (
+ copy %RECIPE_DIR%\%%F.bat %PREFIX%\etc\conda\%%F.d\%PKG_NAME%_%%F.bat
+ if errorlevel 1 (
+ echo Error copying %%F.bat
+ exit /b 1
+ )
+ )
+ :: Copy unix shell activation scripts, needed by Windows Bash users
+ if exist %RECIPE_DIR%\%%F.sh (
+ copy %RECIPE_DIR%\%%F.sh %PREFIX%\etc\conda\%%F.d\%PKG_NAME%_%%F.sh
+ if errorlevel 1 (
+ echo Error copying %%F.sh
+ exit /b 1
+ )
+ )
)
- conda-forge::opencv
- conda-forge::pandas
- conda-forge::pillow
- conda-forge::psutil
- conda-forge::pykalman
- conda-forge::pyside2 >=5.12
- conda-forge::pyside6 >=6.5.0,<6.8.0 # 6.4: problematic QtCharts, 6.8.0: video display issues
💡 Codebase verification
The specified package versions are not compatible with Python 3.10:
- scipy 1.7.0 requires Python >=3.7,<3.10
- scikit-image 0.21.0 requires Python >=3.8
- scikit-learn 1.0.0 requires Python >=3.7
However, the meta.yaml specifies Python >=3.10.0,<3.11.0. You need to update to newer versions of these packages that support Python 3.10:
- scipy: Use version >=1.8.0
- scikit-image: Use version >=0.21.0
- scikit-learn: Use version >=1.0.2
🔗 Analysis chain
Verify dependency compatibility matrix
The updated dependencies have specific version requirements tied to Python 3.10:
- scipy >=1.7.0
- scikit-image >=0.21.0
- scikit-learn >=1.0.0
These constraints are correctly specified based on Python 3.10 compatibility requirements.
Also applies to: 53-55
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Verify package compatibility with Python 3.10
curl -s "https://pypi.org/pypi/scipy/json" | jq '.releases | keys[]' | grep "1.7.0"
curl -s "https://pypi.org/pypi/scikit-image/json" | jq '.releases | keys[]' | grep "0.21.0"
curl -s "https://pypi.org/pypi/scikit-learn/json" | jq '.releases | keys[]' | grep "1.0.0"
Length of output: 301
Script:
#!/bin/bash
# Check Python version compatibility for these specific package versions
curl -s "https://pypi.org/pypi/scipy/1.7.0/json" | jq -r '.info.requires_python'
curl -s "https://pypi.org/pypi/scikit-image/0.21.0/json" | jq -r '.info.requires_python'
curl -s "https://pypi.org/pypi/scikit-learn/1.0.0/json" | jq -r '.info.requires_python'
# Also check the meta.yaml for Python version specification
cat .conda_mac/meta.yaml | grep -A 5 "requirements:"
Length of output: 526
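The requires_python strings fetched above can be checked mechanically. A minimal sketch (it handles only the >= and < operators seen in these specifiers, unlike the full packaging.specifiers implementation):

```python
def satisfies(version: str, spec: str) -> bool:
    """Check a Python version against a requires_python string like ">=3.7,<3.10"."""
    def parse(v: str):
        return tuple(int(part) for part in v.split("."))

    for clause in spec.split(","):
        clause = clause.strip()
        if clause.startswith(">="):
            if parse(version) < parse(clause[2:]):
                return False
        elif clause.startswith("<"):
            if parse(version) >= parse(clause[1:]):
                return False
    return True

# scipy 1.7.0 declares ">=3.7,<3.10", so it is excluded on Python 3.10:
assert satisfies("3.9", ">=3.7,<3.10")
assert not satisfies("3.10", ">=3.7,<3.10")
```

Note that tuple comparison is essential here: a naive string comparison would wrongly treat "3.10" as less than "3.9".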
Actionable comments posted: 0
🧹 Nitpick comments (5)
tests/gui/test_app.py (1)
Line range hint
1-452
: Consider enhancing test coverage for PySide6-specific features
Given the migration from PySide2 to PySide6, consider:
- Adding test cases for PySide6-specific features and behaviors
- Verifying window management behavior differences between PySide2 and PySide6
- Adding assertions for PySide6-specific widget properties and states
Would you like me to help create additional test cases focusing on PySide6-specific features?
sleap/__init__.py (2)
11-15: Document the purpose of these imports.

While these imports appear unused, they likely serve as early dependency checks given the project's constraints (TensorFlow 2.7.0 compatibility, PySide 6 migration). Consider adding a docstring explaining their purpose.
Example documentation:
```diff
 # Control import order.
+# Early imports to verify critical dependencies:
+# - tensorflow: Must be 2.7.0 for Python 3.7-3.9 compatibility
+# - cv2: Required for video processing
+# - qtpy: PySide 6 interface
 import tensorflow as tf
 import cv2
 import qtpy
```
🪛 Ruff (0.8.2)
12-12: `tensorflow` imported but unused (F401)
13-13: `cv2` imported but unused (F401)
14-14: `qtpy` imported but unused (F401)
17-22: Consider documenting the public API.

While these imports correctly expose the package's public API, consider:
- Adding them to `__all__` to explicitly define the public API
- Adding docstrings to describe the exposed functionality
Example:
```python
"""SLEAP: Social LEAP Estimates Animal Poses.

Public API
----------
Video, load_video : Video loading and handling
Labels, load_file : Dataset management
Instance, Track : Pose tracking
...
"""

__all__ = [
    "Video",
    "load_video",
    "Labels",
    "load_file",
    # ... other exports
]
```
🪛 Ruff (0.8.2)
17-17: `sleap.version.__version__` imported but unused; consider removing, adding to `__all__`, or using a redundant alias (F401)
17-17: `sleap.version.versions` imported but unused; consider removing, adding to `__all__`, or using a redundant alias (F401)
19-19: `sleap.io.video.Video` imported but unused; consider removing, adding to `__all__`, or using a redundant alias (F401)
19-19: `sleap.io.video.load_video` imported but unused; consider removing, adding to `__all__`, or using a redundant alias (F401)
20-20: `sleap.instance.LabeledFrame` imported but unused; consider removing, adding to `__all__`, or using a redundant alias (F401)
20-20: `sleap.instance.Instance` imported but unused; consider removing, adding to `__all__`, or using a redundant alias (F401)
20-20: `sleap.instance.PredictedInstance` imported but unused; consider removing, adding to `__all__`, or using a redundant alias (F401)
20-20: `sleap.instance.Track` imported but unused; consider removing, adding to `__all__`, or using a redundant alias (F401)
21-21: `sleap.skeleton.Skeleton` imported but unused; consider removing, adding to `__all__`, or using a redundant alias (F401)
22-22: `sleap.io.dataset.Labels` imported but unused; consider removing, adding to `__all__`, or using a redundant alias (F401)
22-22: `sleap.io.dataset.load_file` imported but unused; consider removing, adding to `__all__`, or using a redundant alias (F401)
sleap/gui/widgets/video.py (1)
1656-1656: Consider enhancing hover feedback

The hover enter event handler is minimal. Consider adding:
- Matching hoverLeaveEvent handler
- Visual feedback (e.g., cursor change, highlight effect)
- Tooltip or status bar updates
```diff
-    def hoverEnterEvent(self, event):
-        """Custom event handler for mouse hover enter."""
-        return super().hoverEnterEvent(event)
+    def hoverEnterEvent(self, event):
+        """Custom event handler for mouse hover enter."""
+        self.setCursor(Qt.PointingHandCursor)
+        self.setOpacity(0.8)  # Highlight effect
+        return super().hoverEnterEvent(event)
+
+    def hoverLeaveEvent(self, event):
+        """Custom event handler for mouse hover leave."""
+        self.unsetCursor()
+        self.setOpacity(1.0)  # Restore normal opacity
+        return super().hoverLeaveEvent(event)
```

sleap/gui/app.py (1)
Line range hint 233-263: Consider documenting the cleanup architecture.

The cleanup implementation is part of a broader resource management strategy, where cleanup is centralized through MainWindow.closeEvent() rather than individual widget cleanup handlers.
Consider adding documentation that explains:
- The centralized cleanup architecture through MainWindow.closeEvent()
- Why this approach was chosen over individual widget cleanup
- The relationship between widget cleanup and the main window cleanup
This documentation would help maintainers understand the resource management strategy and prevent accidental changes that could lead to resource leaks.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- sleap/__init__.py (1 hunks)
- sleap/gui/app.py (2 hunks)
- sleap/gui/dialogs/frame_range.py (1 hunks)
- sleap/gui/widgets/imagedir.py (1 hunks)
- sleap/gui/widgets/video.py (5 hunks)
- sleap/io/video.py (3 hunks)
- sleap/nn/data/instance_cropping.py (3 hunks)
- tests/gui/test_app.py (1 hunks)
- tests/io/test_video.py (2 hunks)
✅ Files skipped from review due to trivial changes (1)
- sleap/gui/dialogs/frame_range.py
🚧 Files skipped from review as they are similar to previous changes (2)
- sleap/nn/data/instance_cropping.py
- sleap/io/video.py
🧰 Additional context used
🪛 Ruff (0.8.2)
sleap/__init__.py
12-12: `tensorflow` imported but unused (F401)
13-13: `cv2` imported but unused (F401)
14-14: `qtpy` imported but unused (F401)
17-17: `sleap.version.__version__` imported but unused; consider removing, adding to `__all__`, or using a redundant alias (F401)
17-17: `sleap.version.versions` imported but unused; consider removing, adding to `__all__`, or using a redundant alias (F401)
19-19: `sleap.io.video.Video` imported but unused; consider removing, adding to `__all__`, or using a redundant alias (F401)
19-19: `sleap.io.video.load_video` imported but unused; consider removing, adding to `__all__`, or using a redundant alias (F401)
20-20: `sleap.instance.LabeledFrame` imported but unused; consider removing, adding to `__all__`, or using a redundant alias (F401)
20-20: `sleap.instance.Instance` imported but unused; consider removing, adding to `__all__`, or using a redundant alias (F401)
20-20: `sleap.instance.PredictedInstance` imported but unused; consider removing, adding to `__all__`, or using a redundant alias (F401)
20-20: `sleap.instance.Track` imported but unused; consider removing, adding to `__all__`, or using a redundant alias (F401)
21-21: `sleap.skeleton.Skeleton` imported but unused; consider removing, adding to `__all__`, or using a redundant alias (F401)
22-22: `sleap.io.dataset.Labels` imported but unused; consider removing, adding to `__all__`, or using a redundant alias (F401)
22-22: `sleap.io.dataset.load_file` imported but unused; consider removing, adding to `__all__`, or using a redundant alias (F401)
🔇 Additional comments (10)
tests/io/test_video.py (1)
76-76: LGTM! Good update to use `np.all` instead of deprecated `np.alltrue`

The change from `np.alltrue` to `np.all` is appropriate, as `alltrue` is deprecated in newer NumPy versions. Both functions check whether all array elements are True, so this is a safe update that preserves the test's behavior while ensuring compatibility.
Also applies to: 111-113
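For reference, the replacement is a drop-in; a minimal check (NumPy deprecated `np.alltrue` in 1.25 and removed it in 2.0, while `np.all` works across versions):

```python
import numpy as np

frames = np.array([[1, 2, 3], [1, 2, 3]])

# np.alltrue(frames[0] == frames[1]) is the deprecated spelling;
# np.all is the drop-in replacement with identical semantics here.
assert np.all(frames[0] == frames[1])
assert not np.all(frames[0] == 0)
```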
tests/gui/test_app.py (1)
451-452: Good addition of cleanup call!
The cleanup call ensures proper resource management after the test completes. This is particularly important with the upgrade to PySide6.
Let's check if similar cleanup is needed in other test functions:
✅ Verification successful
Cleanup call is correctly placed, no changes needed in other test functions
Looking at the test file content, only `test_menu_actions` needs the cleanup call because:
- It's the only test that explicitly creates and shows a player window with `window.showNormal()`
- Other tests (`test_app_workflow` and `test_app_new_window`) don't directly interact with the player component
- `test_app_new_window` already has `app.closeAllWindows()`, which handles cleanup
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Find other test functions that might need player cleanup
# Look for test functions that use the player but don't call cleanup
# Search for test functions that use window.player or .player
rg -U "def test.*?\{.*?\.player" --type py
# Search for existing cleanup calls
rg "\.player\.cleanup" --type py
Length of output: 180
Script:
#!/bin/bash
# Let's look at the test functions in test_app.py to see their usage of player
ast-grep --pattern 'def test_$_($$$) {
$$$
$_.player
$$$
}'
Length of output: 67
Script:
#!/bin/bash
# Let's look at the content of test_app.py to analyze player usage in tests
cat tests/gui/test_app.py
Length of output: 15340
sleap/__init__.py (2)
3-6: LGTM! Environment variable setup is correctly placed.
The environment variable is set before any potential albumentations imports, which is the correct approach.
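The pattern being approved here is simply "set the variable before the first import that reads it"; a minimal stdlib sketch (`NO_ALBUMENTATIONS_UPDATE` is the variable from this PR, and the claim that albumentations reads it at import time to skip its version check is stated by the surrounding review, not verified here):

```python
import os

# Must run before `import albumentations`, which reads this variable
# at import time to decide whether to check for a newer version.
os.environ["NO_ALBUMENTATIONS_UPDATE"] = "1"

# ... downstream imports (e.g. albumentations) would follow here ...

assert os.environ["NO_ALBUMENTATIONS_UPDATE"] == "1"
```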
Let's verify this is consistently set across activation scripts:
✅ Verification successful
Environment variable setup is consistent and properly managed
The environment variable `NO_ALBUMENTATIONS_UPDATE` is correctly:
- Set in both activation scripts (`.sh` and `.bat`)
- Properly backed up and restored in deactivation scripts
- Set before any imports in the package's `__init__.py`
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check if the environment variable is consistently set in activation scripts
rg -l "NO_ALBUMENTATIONS_UPDATE" .conda/
Length of output: 146
Script:
#!/bin/bash
# Check the content of activation scripts to verify environment variable setup
echo "=== sleap_activate.sh ==="
cat .conda/sleap_activate.sh | grep -A 2 -B 2 "NO_ALBUMENTATIONS_UPDATE"
echo -e "\n=== sleap_activate.bat ==="
cat .conda/sleap_activate.bat | grep -A 2 -B 2 "NO_ALBUMENTATIONS_UPDATE"
echo -e "\n=== sleap_deactivate.sh ==="
cat .conda/sleap_deactivate.sh | grep -A 2 -B 2 "NO_ALBUMENTATIONS_UPDATE"
echo -e "\n=== sleap_deactivate.bat ==="
cat .conda/sleap_deactivate.bat | grep -A 2 -B 2 "NO_ALBUMENTATIONS_UPDATE"
Length of output: 1294
22-22: Verify the impact of import reordering.

The `Labels` and `load_file` imports have been moved after `Video` and `load_video`. Let's verify there are no circular dependencies.
✅ Verification successful
Let me analyze the imports to check for circular dependencies. I'll verify the content of video.py to complete the analysis.
Import reordering is safe - no circular dependencies detected
The verification shows that:
- `video.py` only imports from `sleap.util`
- `dataset.py` imports from `video.py` (`Video`, `ImgStoreVideo`, `HDF5Video`, `DummyVideo`)
- No circular dependency exists between these modules
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check for potential circular imports between dataset and video modules
rg -A 5 "from sleap.io.(video|dataset) import" sleap/io/
Length of output: 2598
Script:
#!/bin/bash
# Check imports in video.py
rg -A 5 "^from sleap" sleap/io/video.py
Length of output: 178
🧰 Tools
🪛 Ruff (0.8.2)
22-22: `sleap.io.dataset.Labels` imported but unused; consider removing, adding to `__all__`, or using a redundant alias (F401)
22-22: `sleap.io.dataset.load_file` imported but unused; consider removing, adding to `__all__`, or using a redundant alias (F401)
sleap/gui/widgets/video.py (4)
307-309: Clean implementation of frame update handler!

The new `on_new_frame` method properly encapsulates frame update logic and is correctly connected to the video image loader's result signal.
311-317: Robust cleanup implementation!

The cleanup method properly handles thread shutdown:
- Safely stops timers using `QMetaObject.invokeMethod`
- Processes pending events to ensure cleanup completes
- Correctly quits and waits for the thread
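Qt aside, the shutdown sequence praised here (stop the worker's periodic "tick" from outside, then quit and join the thread) can be illustrated with a stdlib-only analogue; all names below are illustrative stand-ins, not SLEAP's actual API:

```python
import threading

class LoaderAnalogue:
    """Toy stand-in for the video image loader's worker thread."""

    def __init__(self):
        self._stop = threading.Event()  # plays the role of the stopped QTimer
        self._thread = threading.Thread(target=self._run, daemon=True)
        self.ticks = 0
        self._thread.start()

    def _run(self):
        # Each iteration mimics one timer-driven doProcessing() call.
        while not self._stop.wait(timeout=0.001):
            self.ticks += 1

    def cleanup(self):
        self._stop.set()        # 1. stop the "timer" (cf. QMetaObject.invokeMethod)
        self._thread.join(1.0)  # 2. quit and wait for the thread (cf. quit()/wait())

loader = LoaderAnalogue()
loader.cleanup()
assert not loader._thread.is_alive()
```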
297-302: Verify cleanup implementation in MainWindow
The cleanup connection is commented out in favor of MainWindow.closeEvent(). While this is a valid approach, we should verify that the cleanup is properly implemented in MainWindow to prevent resource leaks.
Let's verify the cleanup implementation:
✅ Verification successful
Cleanup implementation is properly handled in MainWindow
The cleanup is correctly implemented through MainWindow's closeEvent in `sleap/gui/app.py`. The code shows:
- MainWindow.closeEvent explicitly calls `self.player.cleanup()`
- The cleanup method in the video widget properly handles the loader thread shutdown
- The shift from application-level to window-level cleanup is a valid implementation choice
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for proper cleanup implementation in MainWindow
# Test: Search for closeEvent implementation and cleanup calls
# Check for closeEvent implementation
ast-grep --pattern 'def closeEvent($_) {
$$$
}'
# Check for cleanup calls
rg "cleanup|close.*Event" -A 5
Length of output: 5369
120-134: Thread-safe timer management implementation looks good!

The changes properly handle timer creation and cleanup in the worker thread context. The `@QtCore.Slot()` decorators ensure thread-safe signal/slot connections.
Let's verify that timer signals are properly connected:
✅ Verification successful
Let me verify the timer implementation further by checking the `doProcessing` method and its thread context:
Let me check one final aspect - the cleanup of the worker thread and timer:
Timer implementation is thread-safe and properly managed
The implementation shows correct thread safety practices:
- Timer is created in the worker thread context
- Timer signals are properly connected using Qt slots
- Worker thread is correctly initialized with `moveToThread`
- Thread startup is connected to timer initialization via the `started` signal
- Proper cleanup is implemented using `QMetaObject.invokeMethod` for cross-thread communication
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for proper timer signal connections
# Test: Search for timer connections to ensure they're properly managed
rg "timer\.(start|stop|timeout)" -A 2 -B 2
Length of output: 1469
Script:
#!/bin/bash
# Check the doProcessing implementation and thread-related code
ast-grep --pattern 'def doProcessing($$$)'
# Also check for any thread-related code in the file
rg "moveToThread|QThread" -A 2 -B 2
Length of output: 620
Script:
#!/bin/bash
# Check for proper cleanup implementation
rg "def cleanup|def __del__|deleteLater" -A 5 -B 2 sleap/gui/widgets/video.py
Length of output: 376
sleap/gui/widgets/imagedir.py (1)
50-56: Review cleanup implementation and consider removing commented code.
The cleanup connection is correctly implemented, but there are two concerns:
- The commented code shows an alternative approach that's not being used
- The comment about MainWindow.closeEvent() suggests this might be redundant
Consider removing the commented code since it's not being used and could be confusing for future maintenance. If the code might be needed later, it should be documented in the codebase's documentation rather than as commented code.
```diff
-        # Call cleanup method when application exits to end worker thread.
-        # Note: This is commented out in favor of the MainWindow.closeEvent() path.
         self.destroyed.connect(self.cleanup)
-        # app = QApplication.instance()
-        # if app:
-        #     app.aboutToQuit.connect(self.cleanup)
```
Let's verify if the cleanup is actually being called through MainWindow.closeEvent():
✅ Verification successful
Keep the cleanup connection but remove commented code
The verification shows that:
- The cleanup is indeed called through MainWindow's closeEvent (in app.py: `self.player.cleanup()`)
- The current implementation in imagedir.py using `self.destroyed.connect(self.cleanup)` is correct and complementary to the MainWindow cleanup
- The commented code about `QApplication.aboutToQuit` is redundant and can be safely removed
```diff
-        # Call cleanup method when application exits to end worker thread.
-        # Note: This is commented out in favor of the MainWindow.closeEvent() path.
         self.destroyed.connect(self.cleanup)
-        # app = QApplication.instance()
-        # if app:
-        #     app.aboutToQuit.connect(self.cleanup)
```
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check if cleanup is called in MainWindow.closeEvent()
# Expected: Find reference to cleanup call in closeEvent
rg -A 2 "closeEvent.*\{" sleap/gui/app.py
Length of output: 43
Script:
#!/bin/bash
# Let's try a broader search for closeEvent and cleanup methods
rg -l "closeEvent|cleanup" sleap/gui/
# Then let's check the content of files that might contain the cleanup implementation
rg "def cleanup" sleap/gui/
Length of output: 222
Script:
#!/bin/bash
# Let's examine the content of these files to understand the cleanup and closeEvent relationship
rg -A 5 "closeEvent|cleanup" sleap/gui/app.py
rg -A 5 "closeEvent|cleanup" sleap/gui/widgets/video.py
rg -A 5 "closeEvent|cleanup" sleap/gui/widgets/imagedir.py
Length of output: 1958
sleap/gui/app.py (1)
233-234: Improve close event handling with proper cleanup.
The changes improve the close event handling by:
- Using a boolean flag to track acceptance
- Adding proper cleanup before accepting
- Maintaining existing save/discard logic
The implementation is clean and handles all cases properly:
- No unsaved changes: accept and clean up
- Unsaved changes:
  - Save: save, clean up, and accept
  - Discard: clean up and accept
  - Cancel: ignore the event
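This decision table can be sketched without Qt; `will_accept` matches the flag described in this review, while the event and player classes below are minimal illustrative stand-ins rather than SLEAP's real types:

```python
class FakeEvent:
    """Minimal stand-in for QCloseEvent."""
    def __init__(self):
        self.accepted = False
    def accept(self):
        self.accepted = True
    def ignore(self):
        self.accepted = False

class FakePlayer:
    """Stand-in for the video player widget."""
    def __init__(self):
        self.cleaned_up = False
    def cleanup(self):
        self.cleaned_up = True

def close_event(player, event, has_changes, user_choice=None):
    """Mirror of the decision table: save/discard accept, cancel ignores."""
    will_accept = False
    if not has_changes:
        will_accept = True
    elif user_choice in ("save", "discard"):
        # (a real handler would also save the project here on "save")
        will_accept = True
    if will_accept:
        player.cleanup()  # release loader threads before closing
        event.accept()
    else:
        event.ignore()

player, event = FakePlayer(), FakeEvent()
close_event(player, event, has_changes=True, user_choice="cancel")
assert not event.accepted and not player.cleaned_up

close_event(player, event, has_changes=True, user_choice="save")
assert event.accepted and player.cleaned_up
```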
Let's verify the cleanup implementation in the player class:
Also applies to: 237-237, 254-254, 259-263
✅ Verification successful
The previous search might have failed because:
- The class name might be different
- The cleanup method might be in a different class
- The file structure might be different than expected
Let's cast a wider net to find the relevant implementations.
The output shows that cleanup exists in video.py, but we need to:
- Confirm the player class structure
- Verify how cleanup is integrated in the app's close event
- Understand the relationship between these components
Cleanup implementation verified and properly integrated
The cleanup implementation is correctly structured:
- The `QtVideoPlayer` class in `sleap/gui/widgets/video.py` implements the `cleanup()` method
- The app's close event handler properly calls `player.cleanup()` before accepting the event
- The cleanup method handles running thread termination and event processing

The implementation follows a clean pattern where resources are properly released before the application closes, preventing potential resource leaks or threading issues. The boolean flag `will_accept` ensures the cleanup is performed only when the close event should be accepted.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check player cleanup implementation
# Expected: Find cleanup method definition
ast-grep --pattern 'class QtVideoPlayer {
$$$
cleanup($$$) {
$$$
}
$$$
}'
Length of output: 87
Script:
#!/bin/bash
# Let's try a broader search for the player class and cleanup method
# First, find relevant Python files that might contain the player implementation
fd -e py | xargs rg -l "class.*Player"
# Then search for cleanup method implementations
rg "def cleanup" -A 5
Length of output: 954
Script:
#!/bin/bash
# Let's examine the video player implementation and its relationship with cleanup
rg "class.*Player" sleap/gui/widgets/video.py -A 10
# Also check how the cleanup is used in the app
rg "cleanup" sleap/gui/app.py -A 5
Length of output: 721
Description

- `tensorflow-macos==2.9.2` is not compatible with Python 3.11 and 3.12.
- `VideoWriter`: tests will be made for the new `VideoWriter` using `imageio.v2` in a new PR.
- `LossViewer` to use `MplCanvas` with added functionality for plotting; tests will be made in a future PR. The Qt backend for matplotlib will be updated as well.

Types of changes
Does this address any currently open issues?
Outside contributors checklist
Thank you for contributing to SLEAP!
❤️
Summary by CodeRabbit
New Features
- `sleap-deps` for package management.

Bug Fixes

Documentation

Refactor

Version Updates
- `1.4.1a2` to `1.4.1a3`.