Adaptive Token Sampling for Efficient Vision Transformers (ECCV 2022 Oral Presentation)
Shell · Updated May 3, 2024
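For context on the technique the repository implements: adaptive token sampling prunes uninformative patch tokens between transformer blocks, using the classification token's attention weights as a significance score. The sketch below is a simplified top-k variant for illustration only, not the paper's exact ATS algorithm (which samples adaptively from the cumulative distribution of these scores rather than taking a fixed top-k); the function and parameter names are hypothetical.

```python
import torch

def prune_tokens_by_cls_attention(tokens, attn, keep=64):
    """Keep the `keep` patch tokens the CLS token attends to most.

    tokens: (B, N, D) -- CLS token at index 0, patch tokens after it.
    attn:   (B, H, N, N) -- attention weights from a transformer block.

    Hypothetical helper for illustration; ATS proper samples a variable
    number of tokens from the CDF of these scores instead of a fixed top-k.
    """
    # Significance score: the CLS token's attention row, averaged over
    # heads, skipping the CLS->CLS entry itself.
    scores = attn[:, :, 0, 1:].mean(dim=1)            # (B, N-1)
    idx = scores.topk(keep, dim=-1).indices           # (B, keep)
    # Gather the selected patch tokens and re-attach the CLS token.
    batch = torch.arange(tokens.size(0)).unsqueeze(-1)
    kept = tokens[batch, idx + 1]                     # +1 skips CLS
    return torch.cat([tokens[:, :1], kept], dim=1)    # (B, keep+1, D)
```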
A radically simple, reliable, and high-performance template for quickly getting set up to build multi-agent applications
Code for "Can We Scale Transformers to Predict Parameters of Diverse ImageNet Models?" [ICML 2023]
Deploy KoGPT with Triton Inference Server
PyTorch implementation of the paper "Audio Mamba: Bidirectional State Space Model for Audio Representation Learning"
RWKV Wiki website (archived; please visit the official wiki)
Sample code for Japanese NLP
Reference PyTorch code for Hugging Face Transformers
Master's Final Degree Project on Artificial Intelligence and Big Data
HydraNet is a state-of-the-art transformer architecture that combines Multi-Query Attention (MQA), Mixture of Experts (MoE), and continuous learning capabilities.
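For readers unfamiliar with one of the building blocks named above: multi-query attention keeps many query heads but shares a single key/value head across all of them, which shrinks the KV projections and KV cache. The sketch below is a minimal, self-contained illustration of that idea, assuming nothing about HydraNet's actual code; the class and parameter names are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiQueryAttention(nn.Module):
    """Minimal multi-query attention: many query heads, one shared K/V head."""

    def __init__(self, dim: int, n_heads: int):
        super().__init__()
        assert dim % n_heads == 0
        self.n_heads = n_heads
        self.head_dim = dim // n_heads
        self.q_proj = nn.Linear(dim, dim)                 # one projection per query head
        self.kv_proj = nn.Linear(dim, 2 * self.head_dim)  # a single shared K/V head
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, n, _ = x.shape
        q = self.q_proj(x).view(b, n, self.n_heads, self.head_dim).transpose(1, 2)
        k, v = self.kv_proj(x).chunk(2, dim=-1)           # each (b, n, head_dim)
        # Broadcast the single K/V head across all query heads (no copy).
        k = k.unsqueeze(1).expand(b, self.n_heads, n, self.head_dim)
        v = v.unsqueeze(1).expand(b, self.n_heads, n, self.head_dim)
        out = F.scaled_dot_product_attention(q, k, v)     # (b, n_heads, n, head_dim)
        return self.out_proj(out.transpose(1, 2).reshape(b, n, -1))
```

Compared with standard multi-head attention, the key/value projection here produces `head_dim` channels instead of `dim`, so the KV cache at inference time is `n_heads` times smaller.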
Scripts used to produce the results in the RiboTIE paper
A collection of Jupyter Notebooks exploring key topics in Artificial Intelligence, including recommender systems, explainable AI, reinforcement learning, and transformers.
Setup transformers development environment using Docker