Fix GRPC KeepAlive #680


Merged · 7 commits into main · Mar 26, 2025
Conversation

@mkysel (Collaborator) commented Mar 26, 2025

GRPC keep-alives are not sufficient in some load-balanced environments. We need to send an actual payload.

Fixes #669

Add Connection Keep-Alive Mechanisms and Update Docker Build Workflow

Added keep-alive mechanisms to gRPC connections and message subscriptions, and modified the Docker build workflow to always push images. Changes include a 5-second keep-alive ticker in SubscribeEnvelopes, gRPC keepalive parameters in connectToNode, and unconditional Docker image pushing in the build workflow.

📍Where to Start

Start with the SubscribeEnvelopes method in service.go, which contains the new keep-alive implementation.
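As orientation, the new loop plausibly has the shape sketched below. This is a minimal sketch, not the verbatim implementation: the envelope channel (envsChan) and the Envelopes response field are assumed names inferred from the summaries in this PR.

// Sketch of the keep-alive loop in SubscribeEnvelopes (assumed shape).
ticker := time.NewTicker(s.options.SendKeepAliveInterval)
defer ticker.Stop()

for {
	select {
	case <-stream.Context().Done():
		return nil
	case envs := <-envsChan: // assumed channel of new envelopes
		if err := stream.Send(&message_api.SubscribeEnvelopesResponse{Envelopes: envs}); err != nil {
			return status.Errorf(codes.Internal, "error sending envelopes: %v", err)
		}
		// Real traffic proves liveness, so defer the next keep-alive.
		ticker.Reset(s.options.SendKeepAliveInterval)
	case <-ticker.C:
		// An empty response is the application-level keep-alive payload;
		// unlike transport-level pings, it traverses load balancers as data.
		if err := stream.Send(&message_api.SubscribeEnvelopesResponse{}); err != nil {
			return status.Errorf(codes.Internal, "could not send keepalive: %v", err)
		}
	}
}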


Macroscope summarized 958ca65.

Summary by CodeRabbit

  • New Features

    • Enhanced the replication service with configurable periodic health checks to maintain robust connections.
    • Updated service initialization and test setups to include new configuration options.
  • Bug Fixes

    • Improved stream processing by filtering out empty data, ensuring more stable and efficient operations.


coderabbitai bot commented Mar 26, 2025

Walkthrough

The changes introduce a configurable replication keep-alive mechanism. A new field of type config.ReplicationOptions is added to the Service struct, carrying a SendKeepAliveInterval setting. This interval drives a ticker in the SubscribeEnvelopes method, which sends periodic keep-alive messages to the stream. Additionally, the changes update constructor signatures across API and server code, add a guard in the sync worker to ignore empty envelope inputs (sketched below), and adjust test configurations to reflect the new replication option.
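The sync-worker guard is presumably a small nil/empty check along these lines. In this sketch the response type and the Envelopes field are assumptions drawn from the file summaries below; the nil-safe protobuf getter is illustrative.

// isKeepAlive reports whether a received subscription response carries no
// envelope data, i.e. it is a server keep-alive rather than real traffic.
func isKeepAlive(resp *message_api.SubscribeEnvelopesResponse) bool {
	return resp == nil || len(resp.GetEnvelopes()) == 0
}

In listenToStream, responses matching this check would simply be skipped (continue) instead of being handed to the sync pipeline.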

Changes

  • pkg/api/message/service.go, pkg/server/server.go, pkg/server/server_test.go, pkg/testutils/api/api.go (replication options integration): added a new options config.ReplicationOptions field to the Service struct; updated the NewReplicationApiService constructor and its invocations to accept and pass replication options; incorporated a ticker in SubscribeEnvelopes that uses SendKeepAliveInterval to send periodic keep-alive messages.
  • pkg/sync/syncWorker.go (guard clause addition): inserted a conditional check in listenToStream to skip processing when envelope data is nil or empty.
  • pkg/config/options.go (configuration enhancement): extended the ReplicationOptions struct with a SendKeepAliveInterval time.Duration field (with CLI and environment-variable annotations) and reformatted the alignment of the existing Enable field.
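Given the file summary above and the 30-second default mentioned in the review further down, the extended struct plausibly looks like the following sketch. The tag syntax and names, including the XMTPD_API_SEND_KEEP_ALIVE_INTERVAL environment variable that a later verification script searches for, are assumptions.

// Sketch of ReplicationOptions in pkg/config/options.go (annotations assumed).
type ReplicationOptions struct {
	Enable                bool          `long:"enable" env:"XMTPD_REPLICATION_ENABLE" description:"enable the replication API"`
	SendKeepAliveInterval time.Duration `long:"send-keep-alive-interval" env:"XMTPD_API_SEND_KEEP_ALIVE_INTERVAL" description:"interval between stream keep-alive messages" default:"30s"`
}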

Sequence Diagram(s)

sequenceDiagram
    participant C as Client
    participant S as Service
    participant T as Ticker
    participant Str as Stream

    C->>S: SubscribeEnvelopes()
    S->>T: Create ticker using SendKeepAliveInterval
    loop Message Processing
        alt Envelope Received
            S->>S: Process envelope and reset ticker
            S->>Str: Forward envelope
        else Ticker Fires
            S->>Str: Send keep-alive message
            alt Failure in sending
                S->>S: Return error with INTERNAL status
            end
        end
    end
    S-->>C: End processing / close stream

Assessment against linked issues

  • Objective: Prevent sync drops due to RST_STREAM with error code INTERNAL_ERROR (#669)

Suggested reviewers

  • fbac
  • neekolas

coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
pkg/api/message/service.go (1)

116-122: Consider lowering the log level for keep-alive messages.

While the keep-alive mechanism is appropriate for maintaining the connection, logging at INFO level for every keep-alive message (every 5 seconds) could generate excessive logs in production.

-			log.Info("sending keep-alive")
+			log.Debug("sending keep-alive")
📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between a002a4e and 958ca65.

📒 Files selected for processing (3)
  • .github/workflows/build-xmtpd.yml (1 hunks)
  • pkg/api/message/service.go (1 hunks)
  • pkg/sync/syncWorker.go (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: Test (Node)
🔇 Additional comments (5)
.github/workflows/build-xmtpd.yml (1)

65-65: ❓ Verification inconclusive

The build will now push Docker images for all events, including pull requests.

The workflow has been modified to always push Docker images to the registry, regardless of the event type. This change simplifies the workflow logic but will result in images being published for all pull requests.

Please verify that this aligns with your team's container registry policies, as this could potentially increase storage usage with images from short-lived feature branches.


🏁 Script executed:

#!/bin/bash
# Check what tags are being generated for this image
git checkout $GITHUB_SHA
echo "Testing what tags would be generated for PR builds..."
PR_NUMBER=999
FAKE_REF="refs/pull/$PR_NUMBER/merge"
echo "For PR #$PR_NUMBER ($FAKE_REF):"
docker-metadata-action-entrypoint << EOF
{
  "images": ["ghcr.io/xmtp/xmtpd"],
  "tags": [
    {"type": "schedule"},
    {"type": "ref", "event": "branch"},
    {"type": "ref", "event": "tag"},
    {"type": "ref", "event": "pr"},
    {"type": "sha"}
  ],
  "github": {
    "event_name": "pull_request",
    "ref": "$FAKE_REF",
    "sha": "$GITHUB_SHA"
  }
}
EOF

Length of output: 417


Action Required: Confirm PR Image Push Behavior & Docker Metadata Command Availability

The workflow now pushes Docker images for every event—including pull requests—which could lead to increased storage usage from images generated by short-lived branches. During our test, invoking the metadata command resulted in a “command not found” error for docker-metadata-action-entrypoint. This suggests that either the command isn’t available in the current testing context or may require additional environment setup.

Please manually verify that:

  • Pushing images for pull requests aligns with our container registry policies.
  • The docker-metadata-action-entrypoint command is intended to be available in this runtime or configured correctly.
pkg/sync/syncWorker.go (2)

24-24: LGTM: Adding keepalive package for gRPC connection maintenance.

The new import will be used to configure keepalive parameters for the gRPC client connections.


302-306: Implemented gRPC keepalive to prevent connection drops.

Adding these keepalive parameters is a good approach to maintain long-lived gRPC connections, particularly in environments with intermediary proxies or load balancers that might terminate idle connections.

The chosen parameters look reasonable:

  • 20-second ping interval when idle
  • 5-second timeout for ping responses
  • Pings allowed even without active streams

This complements the stream-level keepalive mechanism added in the message service.
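For reference, wiring these parameters into a client connection typically looks like the sketch below. The address and credentials are placeholders, not the actual connectToNode code; note also that, per the Macroscope summary further down, this client-side configuration was later removed in 2f44dda.

package main

import (
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/keepalive"
)

// dialWithKeepalive sketches the client-side keepalive settings described
// above: ping after 20s idle, 5s ack timeout, pings without active streams.
func dialWithKeepalive(addr string) (*grpc.ClientConn, error) {
	return grpc.NewClient(
		addr,
		grpc.WithTransportCredentials(insecure.NewCredentials()), // placeholder credentials
		grpc.WithKeepaliveParams(keepalive.ClientParameters{
			Time:                20 * time.Second, // ping interval when idle
			Timeout:             5 * time.Second,  // wait for ping ack
			PermitWithoutStream: true,             // ping even with no active streams
		}),
	)
}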

pkg/api/message/service.go (2)

111-113: LGTM: Setting up keepalive ticker for subscription streams.

Creating a ticker to periodically send keep-alive messages is a good approach to maintain long-running streams.


117-120: LGTM: Keep-alive mechanism implementation.

Sending a nil message as a keepalive is an effective way to prevent connection drops due to inactivity, especially with proxies or load balancers that might terminate idle connections.

However, the error message doesn't match the operation - it mentions "could not send header" when actually sending a keepalive message.

-				return status.Errorf(codes.Internal, "could not send header: %v", err)
+				return status.Errorf(codes.Internal, "could not send keep-alive message: %v", err)

@mkysel changed the title from Mkysel/build custom to Fix GRPC KeepAlive on Mar 26, 2025

macroscopeapp bot commented Mar 26, 2025

Changes in bdc40d7

  • Modified Service.SubscribeEnvelopes method to send empty SubscribeEnvelopesResponse objects instead of nil for GRPC keepalive messages
  • Configured Docker build action to push images only for non-pull request events using conditional expression ${{ github.event_name != 'pull_request' }}

Macroscope summarized bdc40d7.

@mkysel marked this pull request as ready for review March 26, 2025 19:19
@mkysel requested a review from a team as a code owner March 26, 2025 19:19

macroscopeapp bot commented Mar 26, 2025

Changes in 2f44dda

  • Removed gRPC keepalive configuration from syncWorker.connectToNode method, eliminating 15-second ping intervals, 5-second timeouts, and permit-without-stream settings

Macroscope summarized 2f44dda.

@mkysel requested a review from neekolas March 26, 2025 19:20
coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
pkg/api/message/service.go (1)

115-147: Consider adding debug logging for keep-alive messages.

While the implementation is solid, adding debug logging when keep-alive messages are sent would improve observability, especially when troubleshooting connection issues in production.

		case <-ticker.C:
			err = stream.Send(&message_api.SubscribeEnvelopesResponse{})
			if err != nil {
				return status.Errorf(codes.Internal, "could not send keepalive: %v", err)
			}
+			log.Debug("sent keep-alive message")
📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 234e11f and 2f44dda.

📒 Files selected for processing (2)
  • pkg/api/message/service.go (5 hunks)
  • pkg/sync/syncWorker.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • pkg/sync/syncWorker.go
🧰 Additional context used
🧬 Code Definitions (1)
pkg/api/message/service.go (1)
pkg/config/options.go (1)
  • ReplicationOptions (50-53)
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: Test (Node)
🔇 Additional comments (7)
pkg/api/message/service.go (7)

7-7: Appropriate import addition.

Adding the import for the config package is necessary to use the ReplicationOptions type for the new keep-alive feature.


49-49: Good addition of configuration options field.

Adding the options field to the Service struct is appropriate for storing the replication configuration, including the new keep-alive interval setting.


60-60: Constructor properly updated with new parameter.

The constructor signature and struct initialization correctly include the new options parameter, maintaining consistency with the dependency injection pattern used throughout the codebase.

Also applies to: 82-82


115-119: Well-documented keep-alive implementation.

The comments clearly explain the rationale behind implementing application-level keep-alive messages, with a reference to the specific GitHub issue. The ticker initialization and proper resource cleanup with defer ticker.Stop() follow Go best practices.


123-127: Fixed keep-alive message per previous review.

The implementation now correctly sends an empty SubscribeEnvelopesResponse instead of a nil message, addressing the previous review comment. However, the error message could be more consistent with the suggested wording.

-				return status.Errorf(codes.Internal, "could not send keepalive: %v", err)
+				return status.Errorf(codes.Internal, "error sending keep-alive: %v", err)

129-129: Effective timer reset strategy.

Resetting the keep-alive ticker whenever a new envelope is received is an efficient approach that prevents unnecessary keep-alive messages during active communication periods.
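Concretely, the reset described here presumably amounts to a single call in the envelope-receiving branch of the select loop, as in this fragment (channel name assumed):

case envs := <-envsChan:
	// Forward the envelopes to the client first, then defer the next
	// keep-alive by a full interval: real traffic already proves liveness.
	ticker.Reset(s.options.SendKeepAliveInterval)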


115-147: ❓ Verification inconclusive

Verify the keep-alive interval configuration across environments.

The default value of 30 seconds for the keep-alive interval (from the config snippet) seems reasonable, but depending on your network infrastructure and load balancers, you might need to adjust this value. Consider testing with different intervals to find the optimal balance between connection stability and network overhead.


🏁 Script executed:

#!/bin/bash
# Check the current configuration settings for keep-alive across environments

echo "Looking for keep-alive interval configuration..."
rg -A 2 "XMTPD_API_SEND_KEEP_ALIVE_INTERVAL" --type yaml --type sh

Length of output: 173


Action Required: Verify Keep-Alive Interval Configuration

  • The keep-alive ticker is set using s.options.SendKeepAliveInterval—presumably defaulting to 30 seconds.
  • Our initial search for the configuration parameter (e.g., XMTPD_API_SEND_KEEP_ALIVE_INTERVAL) in YAML and shell files returned no results, suggesting that this value might be hardcoded or configured elsewhere.
  • Please manually verify how the keep-alive interval is set in your various deployment environments and consider testing with alternate intervals to optimize the balance between connection reliability and network overhead.

@neekolas (Contributor) left a comment


We can keep investigating whether there is an infrastructure-level solution to this, but it'll unblock us for now.

@mkysel merged commit 2bcc92c into main Mar 26, 2025
9 checks passed
@mkysel deleted the mkysel/build-custom branch March 26, 2025 20:10
mkysel added a commit that referenced this pull request Mar 26, 2025
GRPC keep-alives are not sufficient in some load-balanced environments. We need to send an actual payload.

Fixes #669
Successfully merging this pull request may close these issues:

  • Sync gets dropped regularly due to RST_STREAM with error code: INTERNAL_ERROR (#669)