PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT
yolort is a runtime stack for YOLOv5 on specialized accelerators such as TensorRT, LibTorch, ONNX Runtime, TVM, and NCNN.
A Multi-threaded Implementation of AlphaZero (C++)
Lightweight tool to deploy PyTorch models to AWS Lambda
A curated list of awesome inference deployment framework of artificial intelligence (AI) models. OpenVINO, TensorRT, MediaPipe, TensorFlow Lite, TensorFlow Serving, ONNX Runtime, LibTorch, NCNN, TNN, MNN, TVM, MACE, Paddle Lite, MegEngine Lite, OpenPPL, Bolt, ExecuTorch.
Yet another SSD, with its runtime stack for LibTorch, ONNX, and specialized accelerators.
Sample Node.js app for image style transfer using libtorchjs
Just messing around with PyTorch 1.0's JIT compiler and its new C++ API, LibTorch.
Code for the paper: An Efficient Cervical Whole Slide Image Analysis Framework Based on Multi-scale Semantic and Location Deep Features.
This demo shows how to build a single-person pose estimation pipeline in C++ using LibTorch. The model (AlphaPose's SPPE model) is trained with PyTorch; check their GitHub for details on training it.
Example of loading a PyTorch model in C++ with LibTorch.
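For orientation, a minimal sketch of what such a repository typically does: load a TorchScript module exported from PyTorch and run a forward pass through the LibTorch C++ API. This is a generic example, not code from any of the listed projects; the file name "model.pt" and the input shape are placeholders.

// Minimal sketch: load a TorchScript model and run inference with LibTorch.
// "model.pt" and the 1x3x224x224 input shape are placeholder assumptions.
#include <torch/script.h>
#include <iostream>
#include <vector>

int main() {
    try {
        // Deserialize the module produced by torch.jit.trace or torch.jit.script.
        torch::jit::script::Module module = torch::jit::load("model.pt");
        module.eval();

        // Build a dummy input; real code would fill this with image data.
        std::vector<torch::jit::IValue> inputs;
        inputs.push_back(torch::ones({1, 3, 224, 224}));

        // Run the forward pass and read back the output tensor.
        at::Tensor output = module.forward(inputs).toTensor();
        std::cout << output.sizes() << std::endl;
    } catch (const c10::Error& e) {
        std::cerr << "Error loading or running the model: " << e.what() << std::endl;
        return -1;
    }
    return 0;
}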
Supporting materials for my ADC 2024 poster titled "Build your own AI Plugin with JUCE & LibTorch"
Co-DETR (Detection Transformer) compiled from PyTorch to NVIDIA TensorRT
Revised MonoDepth2 for LibTorch C++ inference.
Low light image enhancement on low power devices
UNet for Semantic Segmentation
A multi-threaded implementation of AlphaZero for Gomoku.