[lldb-dap] Adding additional asserts to unit tests. #140107


Status: Open. Wants to merge 2 commits into base: main.

Conversation

ashgti
Contributor

@ashgti ashgti commented May 15, 2025

Adding an assert that the 'continue' request succeeds caused a number of tests to fail. This revealed tests that were not specifying whether the process should be stopped or running at key points, which likely contributes to these tests being flaky, since the debugger is not in the expected state.

Additionally, I spent a little time trying to improve the readability of dap_server.py and lldbdap_testcase.py.

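The pattern this PR enforces can be sketched as follows. This is a minimal, hypothetical stand-in (FakeDAP is not the actual dap_server.DebugCommunication class): after sending a 'continue' request, the test asserts the response succeeded instead of silently proceeding with the debugger in an unknown state.

```python
# Sketch of asserting that 'continue' succeeds. A real adapter rejects
# 'continue' when the process is not stopped, which is exactly the state
# mismatch that makes tests flaky.
class FakeDAP:
    def __init__(self, stopped=True):
        self.stopped = stopped

    def request_continue(self):
        if not self.stopped:
            return {"success": False, "message": "process is not stopped"}
        self.stopped = False
        return {"success": True, "body": {"allThreadsContinued": True}}

def continue_and_check(dap):
    response = dap.request_continue()
    # Fail loudly here rather than later, when the debugger state has drifted.
    assert response["success"], f"continue failed: {response.get('message')}"
    return response

dap = FakeDAP(stopped=True)
continue_and_check(dap)  # first continue succeeds: the process was stopped
```

Calling continue_and_check a second time without waiting for a stop would raise immediately, surfacing the missing stop/continue bookkeeping that previously went unnoticed.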
@llvmbot
Member

llvmbot commented May 15, 2025

@llvm/pr-subscribers-lldb

Author: John Harrison (ashgti)

Changes



Patch is 55.89 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/140107.diff

18 Files Affected:

  • (modified) lldb/packages/Python/lldbsuite/test/tools/lldb-dap/dap_server.py (+130-98)
  • (modified) lldb/packages/Python/lldbsuite/test/tools/lldb-dap/lldbdap_testcase.py (+57-170)
  • (modified) lldb/test/API/tools/lldb-dap/attach/TestDAP_attach.py (+9-7)
  • (modified) lldb/test/API/tools/lldb-dap/breakpoint-events/TestDAP_breakpointEvents.py (+2-5)
  • (modified) lldb/test/API/tools/lldb-dap/cancel/TestDAP_cancel.py (+1-3)
  • (modified) lldb/test/API/tools/lldb-dap/commands/TestDAP_commands.py (+3-2)
  • (modified) lldb/test/API/tools/lldb-dap/completions/TestDAP_completions.py (+5-5)
  • (modified) lldb/test/API/tools/lldb-dap/coreFile/TestDAP_coreFile.py (+5-5)
  • (modified) lldb/test/API/tools/lldb-dap/exception/TestDAP_exception.py (+1)
  • (modified) lldb/test/API/tools/lldb-dap/launch/TestDAP_launch.py (+11-10)
  • (modified) lldb/test/API/tools/lldb-dap/module/TestDAP_module.py (+2-2)
  • (modified) lldb/test/API/tools/lldb-dap/restart/TestDAP_restart.py (+21-22)
  • (modified) lldb/test/API/tools/lldb-dap/stop-hooks/TestDAP_stop_hooks.py (-5)
  • (modified) lldb/test/API/tools/lldb-dap/variables/TestDAP_variables.py (+1-1)
  • (modified) lldb/tools/lldb-dap/DAPError.cpp (+9)
  • (modified) lldb/tools/lldb-dap/DAPError.h (+10-1)
  • (modified) lldb/tools/lldb-dap/Handler/ContinueRequestHandler.cpp (+4-1)
  • (modified) lldb/tools/lldb-dap/Handler/RequestHandler.h (+40-33)
diff --git a/lldb/packages/Python/lldbsuite/test/tools/lldb-dap/dap_server.py b/lldb/packages/Python/lldbsuite/test/tools/lldb-dap/dap_server.py
index 73f7b0e91d57a..205b30ff4d2ec 100644
--- a/lldb/packages/Python/lldbsuite/test/tools/lldb-dap/dap_server.py
+++ b/lldb/packages/Python/lldbsuite/test/tools/lldb-dap/dap_server.py
@@ -12,6 +12,13 @@
 import sys
 import threading
 import time
+from typing import Any, Optional, Union, BinaryIO, TextIO
+
+## DAP type references
+Event = dict[str, Any]
+Request = dict[str, Any]
+Response = dict[str, Any]
+ProtocolMessage = Union[Event, Request, Response]
 
 
 def dump_memory(base_addr, data, num_per_line, outfile):
@@ -98,55 +105,40 @@ def dump_dap_log(log_file):
     print("========= END =========", file=sys.stderr)
 
 
-def read_packet_thread(vs_comm, log_file):
-    done = False
-    try:
-        while not done:
-            packet = read_packet(vs_comm.recv, trace_file=vs_comm.trace_file)
-            # `packet` will be `None` on EOF. We want to pass it down to
-            # handle_recv_packet anyway so the main thread can handle unexpected
-            # termination of lldb-dap and stop waiting for new packets.
-            done = not vs_comm.handle_recv_packet(packet)
-    finally:
-        # Wait for the process to fully exit before dumping the log file to
-        # ensure we have the entire log contents.
-        if vs_comm.process is not None:
-            try:
-                # Do not wait forever, some logs are better than none.
-                vs_comm.process.wait(timeout=20)
-            except subprocess.TimeoutExpired:
-                pass
-        dump_dap_log(log_file)
-
-
 class DebugCommunication(object):
-    def __init__(self, recv, send, init_commands, log_file=None):
-        self.trace_file = None
+    def __init__(
+        self,
+        recv: BinaryIO,
+        send: BinaryIO,
+        init_commands: list[str],
+        log_file: Optional[TextIO] = None,
+    ):
+        # For debugging test failures, try setting `trace_file = sys.stderr`.
+        self.trace_file: Optional[TextIO] = None
+        self.log_file = log_file
         self.send = send
         self.recv = recv
-        self.recv_packets = []
+        self.recv_packets: list[Optional[ProtocolMessage]] = []
         self.recv_condition = threading.Condition()
-        self.recv_thread = threading.Thread(
-            target=read_packet_thread, args=(self, log_file)
-        )
+        self.recv_thread = threading.Thread(target=self._read_packet_thread)
         self.process_event_body = None
-        self.exit_status = None
+        self.exit_status: Optional[int] = None
         self.initialize_body = None
-        self.thread_stop_reasons = {}
-        self.progress_events = []
+        self.progress_events: list[Event] = []
         self.reverse_requests = []
         self.sequence = 1
         self.threads = None
+        self.thread_stop_reasons = {}
         self.recv_thread.start()
         self.output_condition = threading.Condition()
-        self.output = {}
+        self.output: dict[str, list[str]] = {}
         self.configuration_done_sent = False
         self.frame_scopes = {}
         self.init_commands = init_commands
         self.disassembled_instructions = {}
 
     @classmethod
-    def encode_content(cls, s):
+    def encode_content(cls, s: str) -> bytes:
         return ("Content-Length: %u\r\n\r\n%s" % (len(s), s)).encode("utf-8")
 
     @classmethod
@@ -156,6 +148,18 @@ def validate_response(cls, command, response):
         if command["seq"] != response["request_seq"]:
             raise ValueError("seq mismatch in response")
 
+    def _read_packet_thread(self):
+        done = False
+        try:
+            while not done:
+                packet = read_packet(self.recv, trace_file=self.trace_file)
+                # `packet` will be `None` on EOF. We want to pass it down to
+                # handle_recv_packet anyway so the main thread can handle unexpected
+                # termination of lldb-dap and stop waiting for new packets.
+                done = not self._handle_recv_packet(packet)
+        finally:
+            dump_dap_log(self.log_file)
+
     def get_modules(self):
         module_list = self.request_modules()["body"]["modules"]
         modules = {}
@@ -190,13 +194,13 @@ def collect_output(self, category, timeout_secs, pattern, clear=True):
                     break
         return collected_output if collected_output else None
 
-    def enqueue_recv_packet(self, packet):
+    def _enqueue_recv_packet(self, packet: Optional[ProtocolMessage]):
         self.recv_condition.acquire()
         self.recv_packets.append(packet)
         self.recv_condition.notify()
         self.recv_condition.release()
 
-    def handle_recv_packet(self, packet):
+    def _handle_recv_packet(self, packet: Optional[ProtocolMessage]) -> bool:
         """Called by the read thread that is waiting for all incoming packets
         to store the incoming packet in "self.recv_packets" in a thread safe
         way. This function will then signal the "self.recv_condition" to
@@ -205,7 +209,7 @@ def handle_recv_packet(self, packet):
         """
         # If EOF, notify the read thread by enqueuing a None.
         if not packet:
-            self.enqueue_recv_packet(None)
+            self._enqueue_recv_packet(None)
             return False
 
         # Check the packet to see if is an event packet
@@ -235,6 +239,18 @@ def handle_recv_packet(self, packet):
                 # When a new process is attached or launched, remember the
                 # details that are available in the body of the event
                 self.process_event_body = body
+            elif event == "exited":
+                # Process exited, mark the status to indicate the process is not
+                # alive.
+                self.exit_status = body["exitCode"]
+            elif event == "continued":
+                # When the process continues, clear the known threads and
+                # thread_stop_reasons.
+                all_threads_continued = body.get("allThreadsContinued", True)
+                tid = body["threadId"]
+                if tid in self.thread_stop_reasons:
+                    del self.thread_stop_reasons[tid]
+                self._process_continued(all_threads_continued)
             elif event == "stopped":
                 # Each thread that stops with a reason will send a
                 # 'stopped' event. We need to remember the thread stop
@@ -252,10 +268,16 @@ def handle_recv_packet(self, packet):
         elif packet_type == "response":
             if packet["command"] == "disconnect":
                 keepGoing = False
-        self.enqueue_recv_packet(packet)
+        self._enqueue_recv_packet(packet)
         return keepGoing
 
-    def send_packet(self, command_dict, set_sequence=True):
+    def _process_continued(self, all_threads_continued: bool):
+        self.threads = None
+        self.frame_scopes = {}
+        if all_threads_continued:
+            self.thread_stop_reasons = {}
+
+    def send_packet(self, command_dict: Request, set_sequence=True):
         """Take the "command_dict" python dictionary and encode it as a JSON
         string and send the contents as a packet to the VSCode debug
         adapter"""
@@ -273,7 +295,12 @@ def send_packet(self, command_dict, set_sequence=True):
             self.send.write(self.encode_content(json_str))
             self.send.flush()
 
-    def recv_packet(self, filter_type=None, filter_event=None, timeout=None):
+    def recv_packet(
+        self,
+        filter_type: Optional[str] = None,
+        filter_event: Optional[Union[str, list[str]]] = None,
+        timeout: Optional[float] = None,
+    ) -> Optional[ProtocolMessage]:
         """Get a JSON packet from the VSCode debug adapter. This function
         assumes a thread that reads packets is running and will deliver
         any received packets by calling handle_recv_packet(...). This
@@ -309,8 +336,6 @@ def recv_packet(self, filter_type=None, filter_event=None, timeout=None):
             finally:
                 self.recv_condition.release()
 
-        return None
-
     def send_recv(self, command):
         """Send a command python dictionary as JSON and receive the JSON
         response. Validates that the response is the correct sequence and
@@ -360,47 +385,36 @@ def send_recv(self, command):
 
         return None
 
-    def wait_for_event(self, filter=None, timeout=None):
-        while True:
-            return self.recv_packet(
-                filter_type="event", filter_event=filter, timeout=timeout
-            )
-        return None
-
-    def wait_for_events(self, events, timeout=None):
-        """Wait for a list of events in `events` in any order.
-        Return the events not hit before the timeout expired"""
-        events = events[:]  # Make a copy to avoid modifying the input
-        while events:
-            event_dict = self.wait_for_event(filter=events, timeout=timeout)
-            if event_dict is None:
-                break
-            events.remove(event_dict["event"])
-        return events
+    def wait_for_event(
+        self, filter: Union[str, list[str]], timeout: Optional[float] = None
+    ) -> Optional[Event]:
+        """Wait for the first event that matches the filter."""
+        return self.recv_packet(
+            filter_type="event", filter_event=filter, timeout=timeout
+        )
 
-    def wait_for_stopped(self, timeout=None):
+    def wait_for_stopped(
+        self, timeout: Optional[float] = None
+    ) -> Optional[list[Event]]:
         stopped_events = []
         stopped_event = self.wait_for_event(
             filter=["stopped", "exited"], timeout=timeout
         )
-        exited = False
         while stopped_event:
             stopped_events.append(stopped_event)
             # If we exited, then we are done
             if stopped_event["event"] == "exited":
-                self.exit_status = stopped_event["body"]["exitCode"]
-                exited = True
                 break
             # Otherwise we stopped and there might be one or more 'stopped'
             # events for each thread that stopped with a reason, so keep
             # checking for more 'stopped' events and return all of them
-            stopped_event = self.wait_for_event(filter="stopped", timeout=0.25)
-        if exited:
-            self.threads = []
+            stopped_event = self.wait_for_event(
+                filter=["stopped", "exited"], timeout=0.25
+            )
         return stopped_events
 
-    def wait_for_breakpoint_events(self, timeout=None):
-        breakpoint_events = []
+    def wait_for_breakpoint_events(self, timeout: Optional[float] = None):
+        breakpoint_events: list[Event] = []
         while True:
             event = self.wait_for_event("breakpoint", timeout=timeout)
             if not event:
@@ -408,14 +422,14 @@ def wait_for_breakpoint_events(self, timeout=None):
             breakpoint_events.append(event)
         return breakpoint_events
 
-    def wait_for_exited(self):
-        event_dict = self.wait_for_event("exited")
+    def wait_for_exited(self, timeout: Optional[float] = None):
+        event_dict = self.wait_for_event("exited", timeout=timeout)
         if event_dict is None:
             raise ValueError("didn't get exited event")
         return event_dict
 
-    def wait_for_terminated(self):
-        event_dict = self.wait_for_event("terminated")
+    def wait_for_terminated(self, timeout: Optional[float] = None):
+        event_dict = self.wait_for_event("terminated", timeout)
         if event_dict is None:
             raise ValueError("didn't get terminated event")
         return event_dict
@@ -576,6 +590,7 @@ def replay_packets(self, replay_file_path):
 
     def request_attach(
         self,
+        /,
         program=None,
         pid=None,
         waitFor=None,
@@ -671,7 +686,7 @@ def _process_stopped(self):
         self.threads = None
         self.frame_scopes = {}
 
-    def request_continue(self, threadId=None):
+    def request_continue(self, threadId=None, singleThread=False):
         if self.exit_status is not None:
             raise ValueError("request_continue called after process exited")
         # If we have launched or attached, then the first continue is done by
@@ -681,13 +696,18 @@ def request_continue(self, threadId=None):
         args_dict = {}
         if threadId is None:
             threadId = self.get_thread_id()
-        args_dict["threadId"] = threadId
+        if threadId:
+            args_dict["threadId"] = threadId
+        if singleThread:
+            args_dict["singleThread"] = True
         command_dict = {
             "command": "continue",
             "type": "request",
             "arguments": args_dict,
         }
         response = self.send_recv(command_dict)
+        if response["success"]:
+            self._process_continued(response["body"]["allThreadsContinued"])
         # Caller must still call wait_for_stopped.
         return response
 
@@ -775,7 +795,7 @@ def request_exceptionInfo(self, threadId=None):
         }
         return self.send_recv(command_dict)
 
-    def request_initialize(self, sourceInitFile):
+    def request_initialize(self, sourceInitFile=False):
         command_dict = {
             "command": "initialize",
             "type": "request",
@@ -802,11 +822,12 @@ def request_initialize(self, sourceInitFile):
 
     def request_launch(
         self,
-        program,
-        args=None,
+        program: Optional[str],
+        /,
+        args: Optional[list[str]] = None,
         cwd=None,
         env=None,
-        stopOnEntry=False,
+        stopOnEntry=True,
         disableASLR=True,
         disableSTDIO=False,
         shellExpandArguments=False,
@@ -1190,7 +1211,8 @@ def request_testGetTargetBreakpoints(self):
 
     def terminate(self):
         self.send.close()
-        # self.recv.close()
+        if self.recv_thread.is_alive():
+            self.recv_thread.join()
 
     def request_setInstructionBreakpoints(self, memory_reference=[]):
         breakpoints = []
@@ -1211,11 +1233,11 @@ def request_setInstructionBreakpoints(self, memory_reference=[]):
 class DebugAdapterServer(DebugCommunication):
     def __init__(
         self,
-        executable=None,
-        connection=None,
-        init_commands=[],
-        log_file=None,
-        env=None,
+        executable: Optional[str] = None,
+        connection: Optional[str] = None,
+        init_commands: list[str] = [],
+        log_file: Optional[TextIO] = None,
+        env: Optional[dict[str, str]] = None,
     ):
         self.process = None
         self.connection = None
@@ -1247,7 +1269,14 @@ def __init__(
             )
 
     @classmethod
-    def launch(cls, /, executable, env=None, log_file=None, connection=None):
+    def launch(
+        cls,
+        /,
+        executable: str,
+        env: Optional[dict[str, str]] = None,
+        log_file: Optional[TextIO] = None,
+        connection: Optional[str] = None,
+    ) -> tuple[subprocess.Popen, Optional[str]]:
         adapter_env = os.environ.copy()
         if env is not None:
             adapter_env.update(env)
@@ -1289,26 +1318,29 @@ def launch(cls, /, executable, env=None, log_file=None, connection=None):
 
         return (process, connection)
 
-    def get_pid(self):
+    def get_pid(self) -> int:
         if self.process:
             return self.process.pid
         return -1
 
     def terminate(self):
-        super(DebugAdapterServer, self).terminate()
-        if self.process is not None:
-            process = self.process
-            self.process = None
-            try:
-                # When we close stdin it should signal the lldb-dap that no
-                # new messages will arrive and it should shutdown on its own.
-                process.stdin.close()
-                process.wait(timeout=20)
-            except subprocess.TimeoutExpired:
-                process.kill()
-                process.wait()
-            if process.returncode != 0:
-                raise DebugAdapterProcessError(process.returncode)
+        try:
+            if self.process is not None:
+                process = self.process
+                self.process = None
+                try:
+                    # When we close stdin it should signal the lldb-dap that no
+                    # new messages will arrive and it should shutdown on its
+                    # own.
+                    process.stdin.close()
+                    process.wait(timeout=20)
+                except subprocess.TimeoutExpired:
+                    process.kill()
+                    process.wait()
+                if process.returncode != 0:
+                    raise DebugAdapterProcessError(process.returncode)
+        finally:
+            super(DebugAdapterServer, self).terminate()
 
 
 class DebugAdapterError(Exception):
diff --git a/lldb/packages/Python/lldbsuite/test/tools/lldb-dap/lldbdap_testcase.py b/lldb/packages/Python/lldbsuite/test/tools/lldb-dap/lldbdap_testcase.py
index c5a7eb76a58c7..f125c11160395 100644
--- a/lldb/packages/Python/lldbsuite/test/tools/lldb-dap/lldbdap_testcase.py
+++ b/lldb/packages/Python/lldbsuite/test/tools/lldb-dap/lldbdap_testcase.py
@@ -78,13 +78,13 @@ def waitUntil(self, condition_callback):
             time.sleep(0.5)
         return False
 
-    def verify_breakpoint_hit(self, breakpoint_ids):
+    def verify_breakpoint_hit(self, breakpoint_ids, timeout=timeoutval):
         """Wait for the process we are debugging to stop, and verify we hit
         any breakpoint location in the "breakpoint_ids" array.
         "breakpoint_ids" should be a list of breakpoint ID strings
         (["1", "2"]). The return value from self.set_source_breakpoints()
         or self.set_function_breakpoints() can be passed to this function"""
-        stopped_events = self.dap_server.wait_for_stopped()
+        stopped_events = self.dap_server.wait_for_stopped(timeout)
         for stopped_event in stopped_events:
             if "body" in stopped_event:
                 body = stopped_event["body"]
@@ -110,16 +110,15 @@ def verify_breakpoint_hit(self, breakpoint_ids):
                     match_desc = "breakpoint %s." % (breakpoint_id)
                     if match_desc in description:
                         return
-        self.assertTrue(False, "breakpoint not hit")
+        self.assertTrue(False, f"breakpoint not hit, stopped_events={stopped_events}")
 
     def verify_stop_exception_info(self, expected_description, timeout=timeoutval):
         """Wait for the process we are debugging to stop, and verify the stop
         reason is 'exception' and that the description matches
         'expected_description'
         """
-        stopped_events = self.dap_server.wait_for_stopped(timeout=timeout)
+        stopped_events = self.dap_server.wait_for_stopped(timeout)
         for stopped_event in stopped_events:
-            print("stopped_event", stopped_event)
             if "body" in stopped_event:
                 body = stopped_event["body"]
                 if "reason" not in body:
@@ -263,46 +262,61 @@ def set_global(self, name, value, id=None):
         return self.dap_server.request_setVariable(2, name, str(value), id=id)
 
     def stepIn(
-        self, threadId=None, targetId=None, waitForStop=True, granularity="statement"
+        self,
+        threadId=None,
+        targetId=None,
+        waitForStop=True,
+        granularity="statement",
+        timeout=timeoutval,
     ):
         response = self.dap_server.request_stepIn(
             threadId=threadId, targetId=targetId, granularity=granularity
         )
         self.assertTrue(response["success"])
         if waitForStop:
-            return self.dap_server.wait_for_stopped()
+            return self.dap_server.wait_for_stopped(timeout)
         return None
 
-    def stepOver(self, threadId=None, waitForStop=True, granularity="statement"):
+    def stepOver(
+        self,
+        threadId=None,
+        waitForStop=True,
+        granularity="statement",
+        timeout=timeoutval,
+    ):
         self.dap_server.request_next(threadId=threadId, granularity=granularity)
         if waitForStop:
-            return self.dap_server.wait_for_stopped()
+            return self.dap_server.wait_fo...
[truncated]

Member

@JDevlieghere JDevlieghere left a comment


Overall looks good. Reviewing this, I think this could probably be broken down further into NFC-ish changes, but I'm not sure it's worth the overhead of doing that. No need to do it on my behalf at least.

commandEscapePrefix=commandEscapePrefix,
customFrameFormat=customFrameFormat,
customThreadFormat=customThreadFormat,
**kwargs,
Member


Nice :-)

runInTerminal=False,
disconnectAutomatically=True,
postRunCommands=None,
/,
Member


What's this? Could this use *args, **kwargs?

Contributor Author

@ashgti ashgti May 15, 2025


/ is a separator that marks the end of the positional-only args. I switched it to *, which means the remaining args are required to be passed as kwargs.

At a high level it's:

def name(positional_only_parameters, /, positional_or_keyword_parameters,
         *, keyword_only_parameters):

(see https://peps.python.org/pep-0570/#specification)

I was trying to DRY up the definitions by using kwargs to forward to the dap_server.py method.
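The separator semantics discussed above can be shown with a small illustrative signature (the function and its parameters here are hypothetical, not the actual request_launch signature):

```python
# PEP 570 / PEP 3102 parameter-kind separators:
# everything before `/` is positional-only, everything after `*` is
# keyword-only, and `cwd` in between may be passed either way.
def launch(program, /, cwd=None, *, stopOnEntry=False, disableASLR=True):
    return {"program": program, "cwd": cwd,
            "stopOnEntry": stopOnEntry, "disableASLR": disableASLR}

launch("/bin/ls", stopOnEntry=True)   # OK: program positional, rest kwargs
# launch(program="/bin/ls")           # TypeError: program is positional-only
# launch("/bin/ls", None, True)       # TypeError: stopOnEntry is keyword-only
```

With many optional arguments, keyword-only parameters make call sites self-documenting and prevent silent argument-order mistakes.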

@@ -67,7 +67,7 @@ def test_core_file_source_mapping_array(self):
self.create_debug_adapter()

source_map = [["/home/labath/test", current_dir]]
self.attach(exe_file, coreFile=core_file, sourceMap=source_map)
self.attach(program=exe_file, coreFile=core_file, sourceMap=source_map)
Member


If you use *args you don't need this anymore.

Contributor Author


For attach and launch, I wanted them to require kwargs because they have so many arguments and almost all of them are optional. For launch, I only left program as a positional arg for the simplicity of self.launch(program); for the rest, I used the syntax that requires kwargs.
