Jobs that run on Circle CI start from scratch but may use a shared conan cache (managed by Circle CI).
Having a cache saves a couple of minutes otherwise spent downloading binary packages and building boost, protobuf, and libtorrent from source. It also reduces the load on conancenter, because the packages are fetched from Circle CI's cache instead.
Sharing a single conan cache across all jobs is not possible, because conan was not designed for this; it can lead to broken and non-reproducible builds. That is why on Circle CI a job reuses an existing cache only if the "cache key" matches, meaning that the conan profile, conanfile.py, and conan.cmake are identical between the current job run and the job run that produced the cache.
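This checksum-based key can be sketched in shell. The function below is a hypothetical illustration of the idea, not the actual Circle CI configuration: it hashes the files that define the conan environment, so a cached ~/.conan2 is reused only when none of them have changed.

```shell
#!/bin/sh
# Sketch: derive a cache key from the files that define the conan build
# environment (conan profile, conanfile.py, conan.cmake). If any of these
# files changes, the key changes and a fresh cache is built.
# File paths passed by the caller are assumptions based on the issue text.
cache_key() {
  # Concatenate all inputs and hash them; the hex digest becomes the key.
  cat "$@" | sha256sum | cut -d' ' -f1
}
```

A job would then restore a saved cache only when `cache_key` matches the key recorded alongside it, and save a new cache under the new key otherwise.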
On our self-hosted runners, all jobs unconditionally share the same conan cache in /home/qarunner/.conan2, which leads to spurious failures whenever conan-related files are updated. PRs are not isolated, so a cache update in one PR can break all the others.
We should implement cache isolation on our self-hosted runners and replicate cache management logic similar to Circle CI's.
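One possible shape for that isolation, sketched below under the assumption that the runner can point conan 2 at a per-key cache directory via the CONAN_HOME environment variable (conan 2 reads CONAN_HOME for its cache location). The base directory and helper name are hypothetical:

```shell
#!/bin/sh
# Sketch: give each job run its own conan cache under a shared base directory,
# selected by the same kind of cache key Circle CI uses. Jobs with identical
# conan inputs share a cache; any change in inputs yields a fresh, empty one.
setup_conan_home() {
  base_dir="$1"   # e.g. /home/qarunner/conan-caches (assumed layout)
  cache_key="$2"  # checksum of conan profile, conanfile.py, conan.cmake
  CONAN_HOME="${base_dir}/${cache_key}"
  export CONAN_HOME
  mkdir -p "$CONAN_HOME"
}
```

Stale per-key directories would still need periodic garbage collection, but a cache update in one PR could no longer corrupt the cache seen by other PRs.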
battlmonstr changed the title from "conan cache on self-hosted runners" to "conan cache on self-hosted runners shared too much" on Mar 21, 2025.