Introduce structured, span-based observability through Logger interface #3766


Open
wants to merge 1 commit into main from trace_spans

Conversation

Contributor

@wvanlint wvanlint commented May 2, 2025

This change allows users to create hierarchical span objects through the Logger interface for specific computations, such as the handling of HTLCs. These span objects will be held in LDK across the corresponding lifetimes before being dropped, providing insight into durations and latencies.

This API is designed to be compatible with https://docs.rs/opentelemetry/latest/opentelemetry/trace/trait.Tracer.html, but can also be used more directly to derive time-based metrics or to log durations.

Hierarchical RAII spans are currently added for:

  • HTLC lifetimes, HTLC state transitions, inbound to outbound HTLC forwarding (see functional_tests.rs).
  • Ping/pong request-response pairs.
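A minimal sketch of what a span-enabled Logger along these lines could look like. All names here (`Span`, `BoxedSpan`, `start_span`, `TimingLogger`) are illustrative assumptions, not necessarily the PR's actual API; the point is that LDK holds the returned boxed object and the span ends when it is dropped (RAII), which is enough to derive durations without OpenTelemetry:

```rust
use std::sync::{Arc, Mutex};
use std::time::{Duration, Instant};

// Hypothetical span trait; an OpenTelemetry-backed impl could wrap an
// opentelemetry span here instead.
pub trait Span {}
pub type BoxedSpan = Box<dyn Span>;

pub trait Logger {
    // LDK would hold the returned object across the computation's
    // lifetime; dropping it ends the span.
    fn start_span(&self, name: &str) -> BoxedSpan;
}

// A minimal Logger that records each span's duration on drop.
#[derive(Default, Clone)]
pub struct TimingLogger {
    pub completed: Arc<Mutex<Vec<(String, Duration)>>>,
}

struct TimingSpan {
    name: String,
    start: Instant,
    sink: Arc<Mutex<Vec<(String, Duration)>>>,
}
impl Span for TimingSpan {}
impl Drop for TimingSpan {
    fn drop(&mut self) {
        // Record (name, elapsed) when LDK drops the span.
        self.sink.lock().unwrap().push((self.name.clone(), self.start.elapsed()));
    }
}

impl Logger for TimingLogger {
    fn start_span(&self, name: &str) -> BoxedSpan {
        Box::new(TimingSpan {
            name: name.to_string(),
            start: Instant::now(),
            sink: Arc::clone(&self.completed),
        })
    }
}
```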

@ldk-reviews-bot

ldk-reviews-bot commented May 2, 2025

👋 I see @TheBlueMatt was un-assigned.
If you'd like another reviewer assignment, please click here.

@wvanlint wvanlint force-pushed the trace_spans branch 2 times, most recently from d115f88 to 40a6022 on May 2, 2025 22:17

codecov bot commented May 2, 2025

Codecov Report

Attention: Patch coverage is 84.86395% with 89 lines in your changes missing coverage. Please review.

Project coverage is 89.38%. Comparing base (89f5217) to head (bd59e5e).

Files with missing lines Patch % Lines
lightning/src/ln/channel.rs 84.84% 32 Missing and 28 partials ⚠️
lightning/src/ln/channelmanager.rs 76.36% 13 Missing ⚠️
lightning/src/util/logger.rs 73.91% 6 Missing ⚠️
lightning/src/chain/channelmonitor.rs 0.00% 3 Missing ⚠️
lightning/src/ln/invoice_utils.rs 0.00% 3 Missing ⚠️
lightning-dns-resolver/src/lib.rs 0.00% 1 Missing ⚠️
lightning-net-tokio/src/lib.rs 0.00% 1 Missing ⚠️
lightning/src/ln/onion_payment.rs 83.33% 0 Missing and 1 partial ⚠️
lightning/src/util/test_utils.rs 96.00% 1 Missing ⚠️
Additional details and impacted files
@@           Coverage Diff            @@
##             main    #3766    +/-   ##
========================================
  Coverage   89.37%   89.38%            
========================================
  Files         157      157            
  Lines      124095   124443   +348     
  Branches   124095   124443   +348     
========================================
+ Hits       110915   111228   +313     
- Misses      10469    10495    +26     
- Partials     2711     2720     +9     

☔ View full report in Codecov by Sentry.

@wvanlint wvanlint force-pushed the trace_spans branch 2 times, most recently from 086fbe5 to f7c48f7 on May 3, 2025 01:04
@wvanlint wvanlint marked this pull request as ready for review May 5, 2025 06:58
Collaborator

@TheBlueMatt TheBlueMatt left a comment


Oops, okay so there was def a miscommunication when we spoke about switching to an RAII span concept 😅. At a high level I wasn't anticipating we'd give spans for total HTLC forwards - I was assuming we'd only do spans for basically function runtime in LDK - "how long did it take us to process events", "how long did it take us to process the pending forwards", etc. From there, we might give spans for things like monitor updates (including the parent idea, which is cool), so you could answer questions like "how much of the time spent processing pending forwards is spent in monitor updates".

By focusing on answering questions about specific function runtime the "parent" thing is super well-defined. But once we switch to non-functional spans, it becomes a bit murky - a monitor update might block several payment forwards, so should it have all of them as "parents"?

Of course using only functional blocks as spans makes it less able to answer the types of questions I think you specifically want, eg "how long is it taking us to forward most HTLCs", but as it stands it's hard to take that and answer "why is it taking us long to forward some HTLCs".

We could also split the difference and have functional spans for LDK things, but also non-functional spans for outside-LDK things - ie the above spans plus a span for how long each async monitor update took to persist as well as how long it took to get a response from a peer once we send a message that needs a response. Sadly the latter is a bit annoying to build, commitment_signed -> revoke_and_ack latency is pretty easy to measure, though once we get the RAA we're still waiting on another commitment_signed in many cases. There may be a way to do it specific to an HTLC kinda like you have here but just around messages themselves.

WDYT?

```rust
    cltv_expiry: u32,
    payment_hash: PaymentHash,
    state_wrapper: InboundHTLCStateWrapper,
    span: BoxedSpan,
```
Collaborator


Oh man it's a bit weird to start storing user objects in all our structs. I was kinda anticipating an RAII struct of our own that calls an end method on the Logger, though thinking about it now it's definitely obvious that letting the user define the struct is easier for them (I guess otherwise they'd have to keep a map of pending spans and drop the opentelemetry::Tracer by hand?).
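For concreteness, a sketch of that alternative: LDK defines the RAII guard itself and the Logger only gets start/end callbacks keyed by an id, so no user object lives in LDK's structs (the cost being exactly the reconciliation mentioned below, since the user must map ids to their own spans). All names here (`SpanId`, `span_start`, `span_end`, `SpanGuard`) are hypothetical:

```rust
// Hypothetical opaque id LDK would mint per span.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
pub struct SpanId(pub u64);

pub trait Logger {
    fn span_start(&self, id: SpanId, name: &str);
    fn span_end(&self, id: SpanId);
}

// LDK-owned RAII guard: calls span_end on the Logger when dropped.
// The user keeps their own SpanId -> opentelemetry-span map.
pub struct SpanGuard<'a, L: Logger> {
    id: SpanId,
    logger: &'a L,
}

impl<'a, L: Logger> SpanGuard<'a, L> {
    pub fn new(logger: &'a L, id: SpanId, name: &str) -> Self {
        logger.span_start(id, name);
        SpanGuard { id, logger }
    }
}

impl<'a, L: Logger> Drop for SpanGuard<'a, L> {
    fn drop(&mut self) {
        self.logger.span_end(self.id);
    }
}
```

(The borrowed `&'a L` is a simplification; inside LDK the guard would presumably hold whatever `Deref<Target = Logger>` the struct already holds.)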

Contributor Author


Yeah exactly, this was to avoid reconciliation on the user side.

@TheBlueMatt TheBlueMatt removed the request for review from valentinewallace May 5, 2025 14:33
@wvanlint
Contributor Author

wvanlint commented May 5, 2025

Haha no worries, I imagined it wasn't the same but wanted to write this up as a starting point.

I agree on the difference between function scope spans and higher-level state machine spans and how they answer different questions. Given the amount of logic that depends on messages, timers, and asynchronous persistence completions, I think we would need both complementing each other to have a complete understanding.

  • Function scope spans are clear and easy to optimize for, but don't cross certain boundaries and would not give a complete picture. User trait implementations are already easy to measure but it's difficult to decide when it's impactful to improve them.
  • Higher-level spans around the Lightning state machine would give an end-to-end view for Lightning payments and can identify rough areas of investigation.

If we're building higher-level state machine spans on top of messages outside of LDK as in https://discord.com/channels/915026692102316113/978829624635195422/1296877062015029258, would it lead to rebuilding/duplicating something similar to the Lightning state machine to get the full picture? It seems like channel.rs already has all the necessary information.

I'm also curious what we should optimize for. I'm assuming we would like to have maximal observability, while minimizing overhead. Is maintaining span objects in the Lightning state machine too invasive? Would there be a more minimal event API that can be added to channel.rs?

Having start(span: Span, parent: Option<&Span>) -> () and end(span: Span) -> () methods without the RAII concept could be less invasive, possibly with some helpers. We could also still verify in TestLogger whether all spans are ended correctly during tests.
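A sketch of that TestLogger check, using plain u64 ids in place of the Span type for brevity (names and signatures are hypothetical): track the set of open spans and assert at the end of a test that every started span was ended.

```rust
use std::collections::HashSet;
use std::sync::Mutex;

// Hypothetical test logger enforcing balanced start/end calls.
#[derive(Default)]
pub struct TestLogger {
    open: Mutex<HashSet<u64>>,
}

impl TestLogger {
    pub fn start(&self, span_id: u64) {
        // A span must not be started twice.
        assert!(self.open.lock().unwrap().insert(span_id), "span started twice");
    }
    pub fn end(&self, span_id: u64) {
        // A span must have been started before it is ended.
        assert!(self.open.lock().unwrap().remove(&span_id), "span ended without start");
    }
    // Called at the end of a test: every span must have been ended.
    pub fn assert_all_spans_ended(&self) {
        assert!(self.open.lock().unwrap().is_empty(), "spans left open");
    }
}
```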

@wvanlint wvanlint requested a review from TheBlueMatt May 13, 2025 18:09
@TheBlueMatt
Collaborator

Oops, was travelling last week.

Of course using only functional blocks as spans makes it less able to answer the types of questions I think you specifically want, eg "how long is it taking us to forward most HTLCs", but as it stands it's hard to take that and answer "why is it taking us long to forward some HTLCs".

I think this is somewhat the key point - we want to be able to answer questions like how long it's taking to send/receive/forward an HTLC but also importantly why.

That may imply we need a bit of both - functional spans for CPU-blocked blocks (like process_pending_htlc_forwards) as well as "state step" notifications for HTLCs - ie something more like the following on the HTLC side:

```rust
enum HTLC {
    Inbound(InboundHTLCStateDetails),
    Outbound(OutboundHTLCStateDetails + AddedToOutboundQueue)
}
fn htlc_state_transitioned(channel_id, htlc_id: u64, payment_hash, new_state: HTLC);
```

This still wouldn't quite answer the "why is it taking so long to forward HTLCs", but maybe if we joined it with blame on wait-time inside a channel:

```rust
fn channel_waiting_on_peer_response(channel_id);
fn channel_peer_responded(channel_id);
fn channel_monitor_persist_start(channel_id, monitor_update_id);
fn channel_monitor_persist_finish(channel_id, monitor_update_id);
fn waiting_on_async_signer(channel_id);
fn async_signer_complete(channel_id);
```

From there we could answer the above question by looking at how long individual state transitions took - from InboundHTLCStateDetails::AwaitingRemoteRevokeToAdd to InboundHTLCStateDetails::Committed we can look at how much time was spent waiting on a peer vs waiting on a monitor persist or an async signing op. Then the latency from there to a corresponding HTLC::Outbound::AddedToOutboundQueue is on us (visible via monitor and async signer latency), and then the time from there to OutboundHTLCStateDetails::Committed can be seen in the ratio between waiting-on-peer and monitor/async-signing.

It's pretty gnarly to actually answer the why, here, sadly, though - you have to track the HTLC:channel relation and then track time in the channel "states" during the time the HTLC is in different states...I suppose we could break out the HTLC state transitions into channel sub-states like

```rust
enum ChanState {
    WaitingOnPeerResponse,
    WaitingOnMonitorPersist,
    WaitingOnAsyncSigner,
    NotWaiting
}
fn htlc_state_transition(channel_id, htlc_id: u64, payment_hash, state: HTLC, channel_state: ChanState)
```

but that would require calling htlc_state_transition once for each HTLC in a channel every time the channel updates, which seems kinda ugly (I guess we could skip it if the HTLC is in the Committed state which would reduce it to only HTLCs that are changing) and it would still require tracking the HTLCs as they cross channel boundaries when forwarded.
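To illustrate the joining step on the consumer side, a sketch of how a user might attribute the time an HTLC spends between two transitions to whatever the channel was waiting on. `ChanState` mirrors the enum above; `BlameTracker`, the u64 ids, and the timestamp parameter are all hypothetical simplifications:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
pub enum ChanState {
    WaitingOnPeerResponse,
    WaitingOnMonitorPersist,
    WaitingOnAsyncSigner,
    NotWaiting,
}

#[derive(Default)]
pub struct BlameTracker {
    // (channel_id, htlc_id) -> (time of last transition, channel state then)
    last: HashMap<(u64, u64), (Instant, ChanState)>,
    // Total time attributed to each channel wait-state across all HTLCs.
    pub blame: HashMap<ChanState, Duration>,
}

impl BlameTracker {
    // Called from the htlc_state_transition callback: charge the elapsed
    // time since the previous transition to the channel state that was
    // active during that interval.
    pub fn htlc_state_transition(
        &mut self, channel_id: u64, htlc_id: u64, now: Instant, channel_state: ChanState,
    ) {
        if let Some((prev, prev_state)) =
            self.last.insert((channel_id, htlc_id), (now, channel_state))
        {
            *self.blame.entry(prev_state).or_default() += now.duration_since(prev);
        }
    }
}
```

Even this sketch only works per-channel; as noted above, it still doesn't follow an HTLC across the inbound/outbound channel boundary on a forward.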

Of course this is all somewhat orthogonal to whether to do things RAII - either we require the user do the mapping on their side or we store the RAII objects and forward them around. Personally I prefer to not store user objects in our state machine, but I admit I don't really have a strong argument for not doing so - there's not really much difference in terms of runtime switching, but it does seem maybe-useful for the user to get a &self on the span closure. It's also less clear to me how to map the state transitions onto a Span, I guess we'd just be calling htlc_state_transition on the span object.

@TheBlueMatt TheBlueMatt removed their request for review May 13, 2025 19:19
3 participants