Neural Magic
Neural Magic empowers developers to optimize and deploy LLMs at scale. Our model compression and acceleration tools enable top performance with vLLM.
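For deployment, a compressed checkpoint can be loaded directly into vLLM. Below is a minimal sketch using vLLM's offline inference API; the model ID is a placeholder for any quantized checkpoint, not a guaranteed published artifact.

```python
# Minimal sketch: serving a quantized model with vLLM's offline API.
# The model ID below is an assumed placeholder for a compressed checkpoint.
from vllm import LLM, SamplingParams

llm = LLM(model="neuralmagic/Meta-Llama-3.1-8B-Instruct-quantized.w4a16")  # assumed model ID
params = SamplingParams(temperature=0.0, max_tokens=64)

outputs = llm.generate(["Explain weight quantization in one sentence."], params)
print(outputs[0].outputs[0].text)
```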
Repositories
- compressed-tensors Public
A safetensors extension to efficiently store sparse, quantized tensors on disk (see the sketch after this list)
- gateway-api-inference-extension Public Forked from kubernetes-sigs/gateway-api-inference-extension
Gateway API Inference Extension
- speculators Public
- model-validation-configs Public
- lmms-eval Public Forked from EvolvingLMMs-Lab/lmms-eval
Accelerating the development of large multimodal models (LMMs) with the one-click evaluation module lmms-eval.
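As a rough illustration of the idea behind compressed-tensors, the sketch below stores a quantized weight as separate value, index, and scale tensors using the plain safetensors API it extends. The tensor names and layout are illustrative assumptions, not compressed-tensors' actual on-disk format.

```python
# Minimal sketch, assuming the standard safetensors API that compressed-tensors
# builds on; tensor names, shapes, and the value/index/scale split are illustrative.
import torch
from safetensors.torch import save_file, load_file

# A sparse, quantized weight stored as separate value, index, and scale tensors.
tensors = {
    "layer0.weight_values": torch.randint(-8, 8, (1024,), dtype=torch.int8),
    "layer0.weight_indices": torch.arange(1024, dtype=torch.int32),
    "layer0.weight_scale": torch.tensor([0.02]),
}

save_file(tensors, "layer0.safetensors")          # write the tensors to disk
reloaded = load_file("layer0.safetensors")        # read them back
print({name: t.shape for name, t in reloaded.items()})
```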