Lightweight C++ tensor library
Cnine is a simple C++/CUDA tensor library developed by Risi Kondor's group at the University of Chicago. Cnine is designed to make some of the power of modern GPU architectures accessible directly from C++ code, without relying on complex proprietary libraries.
Documentation for the Python/PyTorch API is at https://risi-kondor.github.io/cnine/.
Cnine is released under the custom noncommercial license included in the file LICENSE.TXT.
Please install the CUDA toolkit first!
Install with pip (includes PyTorch 2.0+):
pip install .
For custom PyTorch versions:
- Install the desired CPU build of PyTorch first:
pip install torch --index-url https://download.pytorch.org/whl/cpu
- Then manually install the build dependencies of cnine (listed under build-system/requires in pyproject.toml), except for torch, since we installed that first:
pip install scikit-build-core pybind11
- Then install cnine with build isolation disabled, so that the modules we just installed are used as dependencies during the build:
pip install --no-build-isolation .
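The build-system table referenced above looks roughly like this (a sketch, not a copy of the actual file — the backend name is scikit-build-core's standard `scikit_build_core.build`, and the real pyproject.toml may pin versions or list further requirements):

```toml
# Sketch of the [build-system] table in pyproject.toml (version pins omitted)
[build-system]
requires = ["scikit-build-core", "pybind11", "torch"]
build-backend = "scikit_build_core.build"
```

With `--no-build-isolation`, pip skips creating a throwaway environment from this `requires` list and builds against the packages already installed, which is why the torch you installed first is the one cnine links against.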
Cnine uses scikit-build-core. To build manually:
git clone https://github.com/risi-kondor/cnine.git
cd cnine
mkdir build && cd build
cmake .. -DCMAKE_PREFIX_PATH="/path/to/python/installation"
make -j4
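The CMAKE_PREFIX_PATH above should point at the Python installation whose packages you want cnine built against. One way to find that path (a sketch using only the Python standard library; in a virtual environment, use the environment's own `python3`) is:

```shell
# Print the prefix of the active Python installation; pass this to
# -DCMAKE_PREFIX_PATH so CMake finds the matching Python.
python3 -c "import sysconfig; print(sysconfig.get_config_var('prefix'))"
```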