Releases: ipfs/kubo
v0.34.1
This patch release was brought to you by the Shipyard team.
- updates `go-libp2p` to v0.41.1
  - high-impact fix from go-libp2p#3221 improves hole punching success rate
- updates `quic-go` to v0.50.1
See the 0.34 Release Notes for the full list of changes since 0.33.x
v0.34.0
This release was brought to you by the Shipyard team.
- 🔦 Highlights
  - AutoTLS now enabled by default for nodes with 1 hour uptime
  - New WebUI features
  - RPC and CLI command changes
  - Bitswap improvements from Boxo
  - IPNS publishing TTL change
  - `IPFS_LOG_LEVEL` deprecated
  - Pebble datastore format update
  - Badger datastore update
  - Datastore Implementation Updates
  - One Multi-error Package
  - Fix hanging pinset operations during reprovides
- 📦️ Important dependency updates
- 📝 Changelog
- 👨👩👧👦 Contributors
🗣 Discuss
If you have comments, questions, or feedback on this release, please post here.
If you experienced any bugs with the release, please post an issue.
🔦 Highlights
AutoTLS now enabled by default for nodes with 1 hour uptime
Starting now, any publicly dialable Kubo node with a `/tcp` listener that remains online for at least one hour will receive a TLS certificate through the AutoTLS feature. This happens automatically, with no need for manual setup.
To bypass the 1-hour delay and enable AutoTLS immediately, users can explicitly opt-in by running the following commands:
$ ipfs config --json AutoTLS.Enabled true
$ ipfs config --json AutoTLS.RegistrationDelay 0
AutoTLS will remain disabled under the following conditions:
- The node already has a manually configured `/ws` (WebSocket) listener
- A private network is in use with a `swarm.key`
- TCP or WebSocket transports are disabled, or there is no `/tcp` listener
To troubleshoot, use `GOLOG_LOG_LEVEL="error,autotls=info"`.
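For example, to surface AutoTLS activity while keeping other subsystems quiet (a minimal sketch; adjust to however you start your node):
$ GOLOG_LOG_LEVEL="error,autotls=info" ipfs daemon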
For more details, check out the AutoTLS configuration documentation or dive deeper with the AutoTLS libp2p blog post.
New WebUI features
The WebUI, accessible at http://127.0.0.1:5001/webui/, now includes support for CAR file import and QR code sharing directly from the Files view. Additionally, the Peers screen has been updated with the latest `ipfs-geoip` dataset.
RPC and CLI command changes
- `ipfs config` is now validating JSON fields (#10679).
- Deprecated the `bitswap reprovide` command. Make sure to switch to the modern `routing reprovide`, shown in the example below. (#10677)
- The `stats reprovide` command now shows additional stats for `Routing.AcceleratedDHTClient`, indicating the last and next `reprovide` times. (#10677)
- `ipfs files cp` now performs a basic codec check and will error when the source is not a valid UnixFS (only `dag-pb` and `raw` codecs are allowed in MFS).
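For example, triggering a manual reprovide and checking its progress with the modern commands looks like this (a quick sketch; the deprecated `bitswap reprovide` alias should be replaced with these):
$ ipfs routing reprovide
$ ipfs stats reprovide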
Bitswap improvements from Boxo
This release includes performance and reliability improvements and fixes for minor resource leaks. One of the performance changes greatly improves the bitswap client's ability to operate under high load, which could previously result in an out-of-memory condition.
IPNS publishing TTL change
Many complaints about IPNS being slow are tied to the default `--ttl` in `ipfs name publish`, which was set to 1 hour. To address this, we've lowered the default IPNS Record TTL during publishing to 5 minutes, matching similar TTL defaults in DNS. This update is now part of `boxo/ipns` (Go, boxo#859) and `@helia/ipns` (JS, helia#749).
Tip
IPNS TTL recommendations when even faster update propagation is desired:
- As a Publisher: Lower the `--ttl` (e.g., `ipfs name publish --ttl=1m`) to further reduce caching delays; see the example after this list. If using DNSLink, ensure the DNS TXT record TTL matches the IPNS record TTL.
- As a Gateway Operator: Override publisher TTLs for faster updates using configurations like `Ipns.MaxCacheTTL` in Kubo or `RAINBOW_IPNS_MAX_CACHE_TTL` in Rainbow.
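As a rough illustration of both recommendations (a minimal sketch; `<cid>` is a placeholder, and the exact value format accepted by `Ipns.MaxCacheTTL` is described in the Kubo configuration docs):
$ ipfs name publish --ttl=1m /ipfs/<cid>
$ ipfs config Ipns.MaxCacheTTL 1m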
`IPFS_LOG_LEVEL` deprecated
The `IPFS_LOG_LEVEL` environment variable has been deprecated. Please use `GOLOG_LOG_LEVEL` instead for configuring logging levels.
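For example, where you previously ran `IPFS_LOG_LEVEL=debug ipfs daemon`, the equivalent is now (a minimal sketch; pick whatever level you need):
$ GOLOG_LOG_LEVEL=debug ipfs daemon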
Pebble datastore format update
If the Pebble database format is not explicitly set in the config, it is automatically upgraded to the latest format version supported by the release of Pebble used by Kubo. This ensures that the database format is sufficiently up-to-date to be compatible with a major version upgrade of Pebble. This is necessary before upgrading to Pebble v2.
Badger datastore update
An update was made to the badger v1 datastore that avoids use of mmap in 32-bit environments, which has been seen to cause issues on some platforms. Please be aware that this could lead to a performance regression for users of badger in a 32-bit environment. Badger users are advised to move to the flatfs or pebble datastore.
Datastore Implementation Updates
The go-ds-xxx datastore implementations have been updated to support the updated `go-datastore` v0.8.2 query API. This update also removes the datastore implementations' dependency on `goprocess`.
One Multi-error Package
Kubo previously depended on multiple multi-error packages, `github.com/hashicorp/go-multierror` and `go.uber.org/multierr`. These have nearly identical functionality, so there was no need to use both, and `go.uber.org/multierr` was selected as the package to depend on. Any future code needing multi-error functionality should use `go.uber.org/multierr` to avoid introducing unneeded dependencies.
Fix hanging pinset operations during reprovides
The reprovide process can be quite slow. With default settings, it starts by reading the CIDs that belong to the pinset. During this operation, other operations that need pinset access can be starved (see #10596).
We have now switched to buffering the pinset-related CIDs that are going to be reprovided in memory, so that pinset mutexes are released as soon as possible and pinset writes and subsequent read operations can proceed. The downside is that larger pinsets will need some extra memory, with an estimated ~1GiB of RAM used per 20 million items to be reprovided.
Use `Reprovider.Strategy` to balance announcement prioritization, speed, and memory utilization.
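For example, to announce only pinned content instead of every block (a minimal sketch; the full list of strategies is in the `Reprovider` section of the configuration docs):
$ ipfs config Reprovider.Strategy pinned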
📦️ Important dependency updates
- update `go-libp2p` to v0.41.0 (incl. v0.40.0)
- update `go-libp2p-kad-dht` to v0.30.2 (incl. v0.29.0, v0.29.1, v0.29.2, v0.30.0, v0.30.1)
- update `boxo` to v0.29.1 (incl. v0.28.0, v0.29.0)
- update `ipfs-webui` to v4.6.0 (incl. v4.5.0)
- update `p2p-forge/client` to v0.4.0
- update `go-datastore` to v0.8.2 (incl. v0.7.0, v0.8.0)
📝 Changelog
Full Changelog
- github.com/ipfs/kubo:
- chore: v0.34.0
- chore: v0.34.0-rc2
- docs: mention Reprovider.Strategy config
- docs: ipns ttl change
- feat: ipfs-webui v4.6 (#10756) (ipfs/kubo#10756)
- docs(readme): update min. requirements + cleanup (#10750) (ipfs/kubo#10750)
- Upgrade to Boxo v0.29.1 (#10755) ([#10755](ht...
v0.34.0-rc2
See the draft changelog: docs/changelogs/v0.34.md
And related issue: #10685
This release is brought to you by the Shipyard team.
v0.34.0-rc1
See the draft changelog: docs/changelogs/v0.34.md
And related issue: #10685
This release is brought to you by the Shipyard team.
v0.33.2

This is a tiny patch release with a single change:
- update `go-libp2p` to v0.38.3
See the 0.33 Release Notes for the full list of changes since 0.32.x
🗣 Discuss
If you have comments, questions, or feedback on this release, please post here.
If you experienced any bugs with the release, please post an issue.
📝 Changelog
Full Changelog
- github.com/ipfs/kubo:
- chore: v0.33.2
- github.com/libp2p/go-libp2p (v0.38.2 -> v0.38.3):
- Release v0.38.3 (#3184) (libp2p/go-libp2p#3184)
👨👩👧👦 Contributors
This release was brought to you by the Shipyard team.
| Contributor | Commits | Lines ± | Files Changed |
| --- | --- | --- | --- |
| sukun | 1 | +122/-23 | 7 |
| Marcin Rataj | 1 | +1/-1 | 1 |
v0.33.1
This is a patch release with an important `boxo/bitswap` fix that we believe should reach you without waiting for 0.34 :)
See 0.33.0 for the full list of changes since 0.32.1.
🔦 Highlights

Bitswap improvements from Boxo
This release includes `boxo/bitswap` performance and reliability improvements and fixes for minor resource leaks. One of the performance changes greatly improves the bitswap client's ability to operate under high load, which could previously result in an out-of-memory condition.
Improved IPNS interop
Improved compatibility with third-party IPNS publishers by restoring support for compact binary CIDs in the `Value` field of IPNS Records (IPNS Specs). As long as the signature is valid, Kubo will now resolve such records (likely created by non-Kubo nodes) and convert raw CIDs into valid `/ipfs/cid` content paths.
Note: This only adds support for resolving externally created records; Kubo's IPNS record creation remains unchanged. IPNS records with empty `Value` fields default to the zero-length `/ipfs/bafkqaaa` to maintain backward compatibility with code expecting a valid content path.
📦️ Important dependency updates
🗣 Discuss
This release was brought to you by the Shipyard team.
If you have comments, questions, or feedback on this release, please post here.
If you experienced any bugs with the release, please post an issue.
📝 Changelog
Full Changelog v0.33.1
- github.com/ipfs/kubo:
- chore: v0.33.1
- fix: boxo v0.27.4 (#10692) (ipfs/kubo#10692)
- docs: add webrtc-direct fixes to 0.33 release changelog (#10688) (ipfs/kubo#10688)
- fix: config help (#10686) (ipfs/kubo#10686)
- github.com/ipfs/boxo (v0.27.2 -> v0.27.4):
- Release v0.27.4 (ipfs/boxo#832)
- fix(ipns): reading records with raw []byte Value (#830) (ipfs/boxo#830)
- fix(bitswap): blockpresencemanager leak (#833) (ipfs/boxo#833)
- Always send cancels even if peer has no interest (#829) (ipfs/boxo#829)
- tidy changelog (ipfs/boxo#828)
- Update changelog (#827) (ipfs/boxo#827)
- fix(bitswap): filter interests from received messages (#822) (ipfs/boxo#822)
- Reduce unnecessary logging work (#826) (ipfs/boxo#826)
- fix: bitswap lock contention under high load (#817) (ipfs/boxo#817)
- fix: bitswap simplify cancel (#824) (ipfs/boxo#824)
- fix(bitswap): simplify SessionInterestManager (#821) (ipfs/boxo#821)
- feat: Better self-service commands for DHT providing (#815) (ipfs/boxo#815)
- bitswap/client: fewer wantlist iterations in sendCancels (#819) (ipfs/boxo#819)
- style: cleanup code by golangci-lint (#797) (ipfs/boxo#797)
- Move long messagequeue comment to doc.go (#814) (ipfs/boxo#814)
- Describe how bitswap message queue works (ipfs/boxo#813)
👨👩👧👦 Contributors
| Contributor | Commits | Lines ± | Files Changed |
| --- | --- | --- | --- |
| Dreamacro | 1 | +304/-376 | 119 |
| Andrew Gillis | 7 | +306/-200 | 20 |
| Guillaume Michel | 5 | +122/-98 | 14 |
| Marcin Rataj | 2 | +113/-7 | 4 |
| gammazero | 6 | +41/-11 | 6 |
| Sergey Gorbunov | 1 | +14/-2 | 2 |
| Daniel Norman | 1 | +9/-0 | 1 |
v0.33.0
This release was brought to you by the Shipyard team.
- 🗣 Discuss
- 🔦 Highlights
  - Shared TCP listeners
  - AutoTLS takes care of Secure WebSockets setup
  - Bitswap improvements from Boxo
  - Using default `libp2p_rcmgr` metrics
  - Flatfs does not `sync` on each write
  - `ipfs add --to-files` no longer works with `--wrap`
  - `ipfs --api` supports HTTPS RPC endpoints
  - New options for faster writes: `WriteThrough`, `BlockKeyCacheSize`, `BatchMaxNodes`, `BatchMaxSize`
  - MFS stability with large number of writes
  - New DoH resolvers for non-ICANN DNSLinks
  - Reliability improvements to the WebRTC Direct listener
- 📦️ Important dependency updates
- Escape Redirect URL for Directory
- 📝 Changelog
- 👨👩👧👦 Contributors
🗣 Discuss
If you have comments, questions, or feedback on this release, please post here.
If you experienced any bugs with the release, please post an issue.
🔦 Highlights
Shared TCP listeners
Kubo now supports sharing the same TCP port (`4001` by default) by both raw TCP and WebSockets libp2p transports.
This feature is not yet compatible with Private Networks and can be disabled by setting `LIBP2P_TCP_MUX=false` if it causes any issues.
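If the shared listener causes problems in your setup, it can be turned off for a single run like this (a minimal sketch; export the variable in your service environment to make it permanent):
$ LIBP2P_TCP_MUX=false ipfs daemon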
AutoTLS takes care of Secure WebSockets setup
It is no longer necessary to manually add `/tcp/../ws` listeners to `Addresses.Swarm` when `AutoTLS.Enabled` is set to `true`. Kubo will detect if a `/ws` listener is missing and add one on the same port as the pre-existing TCP one (e.g. `/tcp/4001`), removing the need for any extra configuration.
Tip
Give it a try:
$ ipfs config --json AutoTLS.Enabled true
And restart the node. If you are behind NAT, make sure your node is publicly dialable (UPnP or port forwarding), and wait a few minutes to pass all checks and for the changes to take effect.
See AutoTLS for more information.
Bitswap improvements from Boxo
This release includes some refactorings and improvements affecting Bitswap which should improve reliability. One of the changes affects block providing. Previously, the bitswap layer itself took care of announcing new blocks (added or received) with the configured provider (i.e. the DHT). This bypassed the "Reprovider", that is, the system that manages precisely "providing" the blocks stored by Kubo. The Reprovider knows how to take advantage of the AcceleratedDHTClient, is able to handle priorities, logs statistics, and can resume on daemon reboot where it left off. From now on, Bitswap will not be doing any providing on the side and all announcements are managed by the Reprovider. In some cases, when the reproviding queue is full with other elements, this may cause additional delays, but more likely this will result in improved block-providing behaviour overall.
Using default `libp2p_rcmgr` metrics
Bespoke rcmgr metrics were removed; Kubo now exposes only the default `libp2p_rcmgr` metrics from go-libp2p.
This makes it easier to compare Kubo with custom implementations based on go-libp2p.
If you depended on the removed ones, please file an issue to add them to the upstream go-libp2p.
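To see which resource manager metrics your node exposes now, you can filter the Prometheus endpoint on the RPC API port (a quick sketch, assuming the default API address of 127.0.0.1:5001):
$ curl -s http://127.0.0.1:5001/debug/metrics/prometheus | grep libp2p_rcmgr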
Flatfs does not `sync` on each write
New repositories initialized with `flatfs` in `Datastore.Spec` will have `sync` set to `false`.
The old default was overly conservative and caused performance issues in big repositories that did a lot of writes. There is usually no need to flush on every block write to disk before continuing. Setting this to `false` is safe, as Kubo will automatically flush writes to disk before and after performing critical operations like pinning. However, we still provide users with the ability to set this to `true` to be extra safe (at the cost of a slowdown when adding files in bulk).
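To check what an existing repository uses, you can print the datastore spec and inspect the `sync` field of the flatfs mount (a quick sketch; repositories keep whatever value they were initialized with):
$ ipfs config Datastore.Spec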
`ipfs add --to-files` no longer works with `--wrap`
Onboarding files and directories with `ipfs add --to-files` now requires non-empty names. Due to this, the `--to-files` and `--wrap` options are now mutually exclusive (#10612).
`ipfs --api` supports HTTPS RPC endpoints
The CLI and RPC client now support accessing the Kubo RPC over the `https://` protocol when a multiaddr ending with `/https` or `/tls/http` is passed to `ipfs --api`:
$ ipfs id --api /dns/kubo-rpc.example.net/tcp/5001/tls/http
# → https://kubo-rpc.example.net:5001
New options for faster writes: `WriteThrough`, `BlockKeyCacheSize`, `BatchMaxNodes`, `BatchMaxSize`

Now that Kubo supports `pebble` as an experimental datastore backend, it becomes very useful to expose some additional configuration options for how the blockservice/blockstore/datastore combo behaves.
Usually, LSM-tree-based datastores like Pebble or Badger have very fast write performance (blocks are streamed to disk) while incurring read-amplification penalties (blocks need to be looked up in the index to know where they are on disk), especially noticeable on spinning disks.
Prior to this version, `BlockService` and `Blockstore` implementations performed a `Has(cid)` lookup for every block that was going to be written, skipping the write altogether if the block was already present in the datastore. The performance impact of this `Has()` call can vary. The `Datastore` implementation itself might include block caching and things like bloom filters to speed up lookups and mitigate read penalties. Our `Blockstore` implementation also supports a bloom filter (controlled by `BloomFilterSize` and disabled by default), and a two-queue cache for keys and block sizes. If we assume that most of the blocks added to Kubo are new blocks, not already present in the datastore, or that the datastore itself includes mechanisms to optimize writes and avoid writing the same data twice, the calls to `Has()` at both the BlockService and Blockstore layers seem superfluous, to the point that they even harm write performance.
For these reasons, from now on, the default is to use a "write-through" mode for the BlockService and the Blockstore. We have added a new option, `Datastore.WriteThrough`, which defaults to `true`. Previous behaviour can be obtained by manually setting it to `false`.
We have also made the size of the two-queue blockstore cache configurable with another option, `Datastore.BlockKeyCacheSize`, which defaults to `65536` (64KiB). Additionally, this caching layer can be disabled altogether by setting it to `0`. In particular, this option controls the size of a blockstore caching layer that records whether the blockstore has certain blocks and their sizes (but does not cache the contents, so it stays relatively small in general).
Finally, we have added two new options to the `Import` section to control the maximum size of write batches: `BatchMaxNodes` and `BatchMaxSize`. These are set by default to `128` nodes and `20MiB`. Increasing them will batch more items together when importing data with `ipfs dag import`, which can speed things up. It is important to find a balance between available memory (used to hold the batch), disk latencies (when writing the batch), and processing power (when preparing the batch, as nodes are sorted and duplicates removed).
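As a rough illustration, these knobs can be adjusted with `ipfs config` (a minimal sketch using the documented defaults; check the configuration documentation for the exact value formats, and note that the byte value for `BatchMaxSize` is an assumption):
$ ipfs config --json Datastore.WriteThrough true
$ ipfs config --json Datastore.BlockKeyCacheSize 65536
$ ipfs config --json Import.BatchMaxNodes 128
$ ipfs config --json Import.BatchMaxSize 20971520   # 20MiB, assuming the value is given in bytes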
As a reminder, details of all the options are explained in the configuration documentation.
We recommend users trying Pebble as a datastore backend to disable both the blockstore bloom filter and the key caching layer, and to enable write-through, as a way to evaluate the raw performance of the underlying datastore, which includes its own bloom filter and caching layers (the default cache size is `8MiB` and can be configured in the options).
MFS stability with large number of writes
We have fixed a number of issues that were triggered by writing or copying many files onto an MFS folder: increased memory usage first, then CPU, disk usage, and eventually a deadlock on write operations. The details of the fixes can be read at #10630 and #10623. The result is that writing large amounts of files to an MFS folder should now be possible without major issues. It is possible, as before, to speed up the operations using the `ipfs fi...
v0.33.0-rc3
This is the third Release Candidate (RC3) with boxo and quic-go fixes.
See the draft changelog: docs/changelogs/v0.33.md
Related: release issue, discussion forum topic
This Kubo release is brought to you by the Shipyard team.
v0.33.0-rc2
Caution
We've identified a regression and are working on a fix; there may be an RC3 later this week.
This is the second Release Candidate (RC2) with multiple fixes since RC1.
See the related issue: #10580, discussion forum topic and the draft changelog: docs/changelogs/v0.33.md
This release was brought to you by the Shipyard team.
v0.33.0-rc1
This is a Release Candidate we managed to ship before the holiday break :-)
See the related issue: #10580 + discussion forum topic
And the draft changelog: docs/changelogs/v0.33.md

This release was brought to you by the Shipyard team.