
polish code/spring clean #566


Open · wants to merge 9 commits into base: master
Conversation


@till-m till-m commented May 22, 2025

This PR contains various improvements:

  • Make random_state a parameter of .suggest instead of a property of the acquisition function (see the sketch below).
    • Previously, choosing a non-default acquisition function and wanting deterministic behaviour meant setting several random states manually. Now the optimizer's random state is forwarded to the acquisition function.
  • Rename n_l_bfgs_b to n_smart, since we now also have the DE (differential evolution) optimization.
  • Fix another bug related to acquisition maximization, which caused the seeding of the smart optimization to use suboptimal points.
    • This also caused a test to pass that should've failed; that test is fixed as well.
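
A minimal sketch of the new calling convention (assuming the v2-style BayesianOptimization/acquisition API and the acquisition_function constructor argument; exact signatures may differ):

```python
from bayes_opt import BayesianOptimization, acquisition


def black_box(x, y):
    return -(x**2) - (y - 1) ** 2 + 1


# The acquisition function is constructed without a random_state; the optimizer
# forwards its own RNG whenever it calls acq.suggest(..., random_state=...).
acq = acquisition.UpperConfidenceBound(kappa=2.5)

optimizer = BayesianOptimization(
    f=black_box,
    pbounds={"x": (-2, 2), "y": (-3, 3)},
    acquisition_function=acq,
    random_state=1,
)
optimizer.maximize(init_points=2, n_iter=3)  # deterministic for a fixed random_state
```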

Summary by CodeRabbit

  • Bug Fixes
    • Improved reproducibility and consistency by ensuring all randomness is controlled via explicit parameters rather than internal state.
    • Enhanced error handling in optimization routines with clearer failure reporting.
  • Tests
    • Updated tests to use a fixed random seed and pass random state explicitly to relevant methods, improving test determinism.
    • Modified constraint-related tests to align with updated constraint handling and tolerance adjustments.
  • Refactor
    • Simplified how random state is managed throughout the optimization process, requiring it to be passed directly to suggestion methods rather than at initialization.
    • Streamlined parameter names for optimization strategies for consistency.
    • Updated constraint management by removing constraint parameters from initializations and introducing explicit constraint setting methods.
  • Improvements
    • Deprecated and removed random state handling from acquisition function constructors, encouraging explicit passing during suggestion calls.
    • Relaxed precision tolerance in constraint-related tests for improved stability.


coderabbitai bot commented May 22, 2025

"""

Walkthrough

The changes refactor random state management in the acquisition functions of the Bayesian optimization codebase. Instead of storing a random state internally, acquisition functions now require the random state to be passed explicitly to each suggest() call and related methods. Constructors no longer accept or handle random state, and tests are updated for deterministic behavior by passing random state directly to method calls. Additionally, error handling in optimization routines is improved, test tolerances are relaxed, and constraint handling is refactored to use a setter method rather than constructor injection.

Changes

  • bayes_opt/acquisition.py: Refactored acquisition functions to remove internal random state storage. Constructors no longer accept or serialize random state; all randomness is now controlled by passing a random_state argument to suggest() and internal optimization methods. Deprecated constructor random state usage with warnings. Improved error handling in optimization routines, including raising errors on differential evolution failure.
  • bayes_opt/bayesian_optimization.py: Updated instantiation of default acquisition functions to exclude random state. Modified the suggest method to pass the optimizer's random state explicitly to acquisition function suggest() calls. Reorganized initialization of TargetSpace and the constraint model, passing the kernel transform to the constraint model.
  • bayes_opt/target_space.py: Removed the constraint parameter from the TargetSpace constructor. Added a set_constraint method to assign the constraint model and initialize related attributes post-construction (see the sketch after this list).
  • tests/test_acquisition.py: Updated tests to remove random state from constructors and pass it explicitly to relevant methods. Introduced a fixed random seed for reproducibility. Adjusted parameter names and removed a deprecated test. Constructors of mock and custom acquisition classes no longer accept random state.
  • tests/test_constraint.py: Relaxed precision tolerance in constraint approximation test assertions and removed a redundant assertion.
  • tests/test_target_space.py: Modified tests to instantiate constraints separately and set them on TargetSpace instances using the new set_constraint method instead of passing constraints in the constructor.
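
A small sketch of the new TargetSpace wiring described above (hypothetical usage based on the summary; the exact TargetSpace/ConstraintModel signatures may differ):

```python
from bayes_opt.constraint import ConstraintModel
from bayes_opt.target_space import TargetSpace

# The constructor no longer takes a constraint ...
space = TargetSpace(lambda p1, p2: p1 + p2, pbounds={"p1": (0, 1), "p2": (0, 1)})

# ... it is attached afterwards via the new setter.
constraint = ConstraintModel(lambda p1, p2: p1 - p2, -2, 2)
space.set_constraint(constraint)
```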

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant BayesianOptimizer
    participant AcquisitionFunction
    participant RandomState

    User->>BayesianOptimizer: suggest()
    BayesianOptimizer->>AcquisitionFunction: suggest(..., random_state=self._random_state)
    AcquisitionFunction->>RandomState: Use random_state for optimization
    AcquisitionFunction-->>BayesianOptimizer: Return suggested point
    BayesianOptimizer-->>User: Return suggestion

Poem

In the meadow of code where randomness grew,
The rabbits have tidied where old seeds once flew.
Now each suggestion, with state passed along,
Is crisp and deterministic, never gone wrong.
With tests now consistent, the fields are in line—
Hopping through order, our results all align!
🐇✨
"""

✨ Finishing Touches
  • 📝 Generate Docstrings

🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Explain this complex logic.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai explain this code block.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and explain its main purpose.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Support

Need help? Create a ticket on our support page for assistance with any issues or questions.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate docstrings to generate docstrings for this PR.
  • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
  • @coderabbitai resolve resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

Copy link

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (3)
bayes_opt/acquisition.py (3)

155-156: Passing an int seed here will silently recreate a fresh RNG every call

ensure_rng(random_state) converts an int into a new RandomState each time, meaning two consecutive suggest() calls with the same integer seed will always sample the exact same “random” candidates.
If that is unintended, cache the converted RandomState once per suggest() invocation:

-        random_state = ensure_rng(random_state)
+        random_state = ensure_rng(random_state)         # convert once
+        # Keep a reference so children & helpers re-use the same generator
+        rng = random_state
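
To illustrate the behaviour being flagged, here is a simplified stand-in for ensure_rng (the real helper lives in bayes_opt/util.py; this sketch only mirrors the int-to-RandomState conversion):

```python
import numpy as np


def ensure_rng(random_state=None):
    # Simplified illustration: an int is converted into a *fresh* RandomState.
    if random_state is None:
        return np.random.RandomState()
    if isinstance(random_state, int):
        return np.random.RandomState(random_state)
    return random_state


# Two calls with the same integer seed create two identically seeded generators,
# so the "random" candidates they draw are exactly the same.
a = ensure_rng(7).uniform(size=3)
b = ensure_rng(7).uniform(size=3)
assert np.allclose(a, b)
```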

230-233: Docstring now mentions differential-evolution but examples still talk about “warm-up points”

Lines 231-233 updated the wording, but the rest of the paragraph still references the old n_warmup constant. Consider updating the whole block for consistency and to avoid confusion.


276-280: Return-type annotation out of sync with actual return value

_random_sample_minimize returns three values (x_min, min_acq, x_seeds) but the type hint says tuple[NDArray | None, float]. Update the annotation to reflect the extra element.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2c78f7c and 4479243.

📒 Files selected for processing (3)
  • bayes_opt/acquisition.py (27 hunks)
  • bayes_opt/bayesian_optimization.py (2 hunks)
  • tests/test_acquisition.py (12 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (9)
  • GitHub Check: Python 3.13 - numpy >=2
  • GitHub Check: Python 3.11 - numpy >=2
  • GitHub Check: Python 3.12 - numpy >=1.25,<2
  • GitHub Check: Python 3.11 - numpy >=1.25,<2
  • GitHub Check: Python 3.10 - numpy >=2
  • GitHub Check: Python 3.12 - numpy >=2
  • GitHub Check: Python 3.10 - numpy >=1.25,<2
  • GitHub Check: Python 3.9 - numpy >=1.25,<2
  • GitHub Check: Python 3.9 - numpy >=2
🔇 Additional comments (32)
bayes_opt/bayesian_optimization.py (1)

252-254: Random state propagation looks good

BayesianOptimization.suggest() now forwards its internal RNG to the acquisition function, so determinism is preserved even after the refactor. No issues spotted here.

tests/test_acquisition.py (31)

45-46: New random_state fixture provides consistent test behavior

Adding a fixed RandomState fixture with seed 0 is a good practice that ensures tests are deterministic across different runs.
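
The fixture presumably looks roughly like this (a sketch, assuming pytest and NumPy's legacy RandomState):

```python
import numpy as np
import pytest


@pytest.fixture
def random_state():
    # Fixed seed so every test draws the same sequence.
    return np.random.RandomState(0)
```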


73-74: Constructor simplified to align with new random state management

The MockAcquisition constructor no longer accepts or stores a random_state parameter, which aligns with the PR's objective of making random_state a method parameter rather than a class property.


99-99: Parameter renamed from n_l_bfgs_b to n_smart

This rename better reflects the purpose of the parameter, especially now that Differential Evolution optimization has been introduced alongside L-BFGS-B.


102-106: Random state now passed explicitly to suggest method

The test has been updated to pass the random_state explicitly to the suggest method, consistent with the PR's objective of making random_state a method parameter rather than a class property.


109-114: New test validates acquisition function maximization

This addition helps verify the fix for the bug mentioned in the PR objectives - ensuring that the best random sample is correctly included in the seeds for smart optimization.


120-120: Parameter renamed from n_l_bfgs_b to n_smart

Consistent with other changes, the parameter name has been updated to better reflect its purpose in the optimization process.


125-125: UpperConfidenceBound constructor no longer accepts random_state

The constructor has been simplified to remove the random_state parameter, aligned with the PR's goal of centralizing random state management.


135-137: Random state now passed explicitly to suggest method

Acquisition functions now require random_state as a parameter to the suggest method, making the stochastic behavior more explicit and controlled.


142-142: UpperConfidenceBound constructor simplified

Constructor no longer accepts random_state, consistent with the new approach to random state management.


150-152: Random state passed explicitly to _smart_minimize method

The internal optimization method now receives random_state as a parameter, ensuring deterministic behavior during testing.


157-157: UpperConfidenceBound constructor simplified for constraint test

Constructor no longer accepts random_state, consistent with the new approach to random state management.


165-165: ProbabilityOfImprovement constructor simplified

The constructor no longer accepts random_state, aligned with the refactored random state management.


171-173: Random state now passed explicitly to suggest method

The suggest method now receives random_state as a parameter, ensuring deterministic behavior in tests.


177-181: Consistent pattern for passing random_state to methods

The constructor no longer accepts random_state, and it's now passed explicitly to the suggest method, maintaining the consistent pattern throughout the codebase.


186-192: ProbabilityOfImprovement with constraints follows new pattern

The constructor no longer accepts random_state, and the suggest method now requires it as a parameter, consistent with other acquisition functions.


196-199: Consistent pattern for passing random_state to methods

Random state is passed explicitly to suggest method calls, ensuring deterministic behavior in constraint-related tests.


203-220: ExpectedImprovement follows new random state pattern

All instances of ExpectedImprovement initialization and suggest method calls have been updated to follow the new pattern: no random_state in constructor, explicit random_state in method calls.


224-237: ExpectedImprovement with constraints follows new pattern

The constructor no longer accepts random_state, and the suggest method now requires it as a parameter, consistent with other acquisition functions.


242-244: ConstantLiar constructor simplified

The base acquisition and ConstantLiar constructors no longer accept random_state, aligned with the refactored random state management.


252-253: Random state passed explicitly to ConstantLiar suggest method

The suggest method now receives random_state as a parameter, ensuring deterministic behavior in tests.


266-266: Consistent pattern for passing random_state to methods

Random state is passed explicitly to the suggest method call, maintaining consistency throughout the test suite.


277-281: ConstantLiar with constraints follows new pattern

The constructor no longer accepts random_state, and the suggest method now requires it as a parameter, consistent with other acquisition functions.


285-285: Consistent pattern for passing random_state to methods

Random state is passed explicitly to suggest method calls, ensuring deterministic behavior in constraint-related tests.


293-293: GPHedge constructor simplified

The constructor no longer requires base acquisitions to have random_state parameters, aligned with the refactored random state management.


311-315: Base acquisitions for GPHedge follow new pattern

All base acquisition constructors have been simplified to remove the random_state parameter, consistent with the refactored approach.


335-339: Base acquisitions for softmax sampling follow new pattern

All base acquisition constructors have been simplified to remove the random_state parameter, consistent with the refactored approach.


355-355: Random state passed explicitly to _sample_idx_from_softmax_gains method

The internal method now receives random_state as a parameter, ensuring deterministic behavior during testing.


359-366: GPHedge integration test follows new pattern

Base acquisitions no longer receive random_state in constructors, and the suggest method now requires it as a parameter, consistent with the refactored approach.


370-371: Random state passed explicitly to suggest method in loop

The suggest method consistently receives random_state as a parameter, ensuring deterministic behavior across multiple iterations.


597-598: Custom acquisition constructors simplified

The constructor for custom acquisition functions has been simplified to no longer accept random_state, aligned with the refactored random state management.


618-619: Custom acquisition without set params follows new pattern

The constructor has been simplified to remove the random_state parameter, consistent with the refactored approach throughout the codebase.


codecov bot commented May 22, 2025

Codecov Report

Attention: Patch coverage is 89.65517% with 6 lines in your changes missing coverage. Please review.

Project coverage is 97.77%. Comparing base (2c78f7c) to head (6e59e96).

Files with missing lines:
  • bayes_opt/acquisition.py: patch coverage 86.95%, 6 lines missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master     #566      +/-   ##
==========================================
- Coverage   97.95%   97.77%   -0.19%     
==========================================
  Files          10       10              
  Lines        1175     1166       -9     
==========================================
- Hits         1151     1140      -11     
- Misses         24       26       +2     

☔ View full report in Codecov by Sentry.


till-m commented May 22, 2025

@CodeRabbit review


coderabbitai bot commented May 22, 2025

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🔭 Outside diff range comments (1)
bayes_opt/acquisition.py (1)

230-272: ⚠️ Potential issue

_acq_min can return None when both optimisers fail

If every smart optimisation run fails (min_acq_s == np.inf) and the random phase produced no candidate (min_acq_r == np.inf, e.g. because n_random == 0), the function falls through and returns x_min_r, which is None.
Down-stream callers expect a NumPy array and will crash.

-        if n_smart:
+        if n_smart:
             ...
-                return x_min_s
-        return x_min_r
+                return x_min_s
+        # Fallbacks
+        if x_min_r is None:
+            return x_min_s
+        if x_min_s is None:
+            return x_min_r
+        return x_min_r
🧹 Nitpick comments (4)
tests/test_target_space.py (2)

101-105: Lambda signature does not match TargetSpace parameter names

ConstraintModel expects the constraint function to accept named parameters identical to the optimization variables (p1, p2).
Using lambda x: x will raise a TypeError if constraint.eval(**kwargs) is ever invoked (e.g. when calling probe()).

-constraint = ConstraintModel(lambda x: x, -2, 2, transform=None)
+constraint = ConstraintModel(lambda p1, p2: p1 - p2, -2, 2, transform=None)

Even though the current test registers explicit constraint_values and therefore never calls eval, using a correctly-typed lambda future-proofs the test and documents intent.


199-206: Redundant transform=None argument

ConstraintModel’s transform parameter already defaults to None.
Unless the test explicitly verifies that the explicit None is propagated, the extra argument is superfluous and makes the call longer than necessary.

-constraint = ConstraintModel(lambda p1, p2: p1 - p2, -2, 2, transform=None)
+constraint = ConstraintModel(lambda p1, p2: p1 - p2, -2, 2)
bayes_opt/acquisition.py (2)

66-73: Deprecation warning is good – update docstring too

random_state is now deprecated at construction time but the class-level docstring still advertises it as an active parameter. Updating the docstring will avoid confusing library users.
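
A minimal sketch of the constructor-time deprecation pattern being discussed (hypothetical code, not the library's exact implementation):

```python
from warnings import warn


class AcquisitionBase:
    def __init__(self, random_state=None):
        if random_state is not None:
            # Still accepted for backwards compatibility, but no longer stored or used here.
            warn(
                "Passing random_state to the constructor is deprecated; "
                "pass it to suggest() instead.",
                DeprecationWarning,
                stacklevel=2,
            )
```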


1187-1203: Type hint mismatch in _sample_idx_from_softmax_gains

The function signature now specifies RandomState, but ensure_rng may supply a Generator once the earlier SciPy fix (see above) is applied.
Consider typing against np.random.RandomState | np.random.Generator or simply Any to avoid mypy/IDE warnings.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between acd4c54 and 6e59e96.

📒 Files selected for processing (5)
  • bayes_opt/acquisition.py (29 hunks)
  • bayes_opt/bayesian_optimization.py (2 hunks)
  • bayes_opt/target_space.py (1 hunks)
  • tests/test_acquisition.py (12 hunks)
  • tests/test_target_space.py (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • bayes_opt/bayesian_optimization.py
  • tests/test_acquisition.py
🧰 Additional context used
🧬 Code Graph Analysis (2)
tests/test_target_space.py (3)
bayes_opt/target_space.py (3)
  • constraint (203-210)
  • TargetSpace (34-713)
  • set_constraint (102-117)
bayes_opt/constraint.py (1)
  • ConstraintModel (23-263)
tests/test_bayesian_optimization.py (1)
  • target_func (19-21)
bayes_opt/acquisition.py (5)
tests/test_acquisition.py (9)
  • random_state (45-46)
  • target_space (55-56)
  • fun (144-148)
  • gp (50-51)
  • pbounds (30-31)
  • constraint (35-36)
  • base_acq (82-83)
  • base_acq (600-601)
  • base_acq (621-622)
bayes_opt/util.py (1)
  • ensure_rng (8-30)
bayes_opt/target_space.py (8)
  • max (623-651)
  • TargetSpace (34-713)
  • random_sample (564-602)
  • bounds (193-200)
  • continuous_dimensions (223-234)
  • res (653-684)
  • set_constraint (102-117)
  • constraint (203-210)
tests/test_bayesian_optimization.py (1)
  • random_sample (500-511)
bayes_opt/parameter.py (4)
  • random_sample (68-87)
  • random_sample (262-281)
  • random_sample (359-381)
  • bounds (59-61)
⏰ Context from checks skipped due to timeout of 90000ms (9)
  • GitHub Check: Python 3.12 - numpy >=2
  • GitHub Check: Python 3.10 - numpy >=1.25,<2
  • GitHub Check: Python 3.13 - numpy >=2
  • GitHub Check: Python 3.12 - numpy >=1.25,<2
  • GitHub Check: Python 3.11 - numpy >=1.25,<2
  • GitHub Check: Python 3.11 - numpy >=2
  • GitHub Check: Python 3.10 - numpy >=2
  • GitHub Check: Python 3.9 - numpy >=2
  • GitHub Check: Python 3.9 - numpy >=1.25,<2

Comment on lines +114 to 116
        if constraint.lb.size == 1:
            self._constraint_values = np.empty(shape=(0), dtype=float)
        else:

⚠️ Potential issue

constraint.lb may be a scalar float

float objects do not have a .size attribute.
Unless ConstraintModel converts scalars with np.atleast_1d, the line

if constraint.lb.size == 1:

will raise AttributeError.
Safer alternative:

lb = np.atleast_1d(constraint.lb)
if lb.size == 1:
    ...
🤖 Prompt for AI Agents
In bayes_opt/target_space.py around lines 114 to 116, the code assumes
constraint.lb has a .size attribute, but if constraint.lb is a scalar float,
this causes an AttributeError. To fix this, convert constraint.lb to a numpy
array using np.atleast_1d before checking its size, then use the size attribute
on the resulting array instead of directly on constraint.lb.

Comment on lines +100 to 118
         self._constraint: ConstraintModel | None = None
 
-        if constraint is not None:
-            # preallocated memory for constraint fulfillment
-            self._constraint_values: NDArray[Float]
-            if constraint.lb.size == 1:
-                self._constraint_values = np.empty(shape=(0), dtype=float)
-            else:
-                self._constraint_values = np.empty(shape=(0, self._constraint.lb.size), dtype=float)
-        else:
-            self._constraint = None
+    def set_constraint(self, constraint: ConstraintModel) -> None:
+        """Set the constraint model.
+
+        Parameters
+        ----------
+        constraint : ConstraintModel
+            The constraint model to be set.
+        """
+        self._constraint = constraint
+
+        # preallocated memory for constraint fulfillment
+        self._constraint_values: NDArray[Float]
+        if constraint.lb.size == 1:
+            self._constraint_values = np.empty(shape=(0), dtype=float)
+        else:
+            self._constraint_values = np.empty(shape=(0, self._constraint.lb.size), dtype=float)


🛠️ Refactor suggestion

Multiple calls to set_constraint overwrite data silently

set_constraint() re-initialises self._constraint_values every time it is called, discarding any previously registered constraint data without warning.
If a user mistakenly calls it twice (e.g. during a resume/restore cycle) the optimisation history becomes inconsistent.

Consider guarding against re-assignment or issuing a warning:

 def set_constraint(self, constraint: ConstraintModel) -> None:
-    self._constraint = constraint
+    if self._constraint is not None:
+        warn(
+            "Overwriting an existing constraint; previously stored "
+            "constraint values will be lost.",
+            stacklevel=2,
+        )
+    self._constraint = constraint

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In bayes_opt/target_space.py around lines 100 to 118, the set_constraint method
re-initializes self._constraint_values on every call, which silently discards
any previously stored constraint data. To fix this, add a guard at the start of
set_constraint to check if self._constraint is already set; if so, either raise
an exception or issue a warning to prevent silent overwriting and preserve
optimization history consistency.

Comment on lines +995 to +999
            None, pbounds=pbounds, allow_duplicate_points=target_space._allow_duplicate_points
        )
        if target_space._constraint is not None:
            target_space_copy.set_constraint(target_space.constraint)


⚠️ Potential issue

Shared ConstraintModel instance between target-space copies

_copy_target_space assigns the same ConstraintModel object to the cloned space.
Fitting the constraint inside the dummy GP will mutate the original model and leak state across optimisation loops.

-    target_space_copy.set_constraint(target_space.constraint)
+    target_space_copy.set_constraint(deepcopy(target_space.constraint))

This ensures isolation between the real and the dummy constraint models.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
             None, pbounds=pbounds, allow_duplicate_points=target_space._allow_duplicate_points
         )
         if target_space._constraint is not None:
-            target_space_copy.set_constraint(target_space.constraint)
+            target_space_copy.set_constraint(deepcopy(target_space.constraint))
🤖 Prompt for AI Agents
In bayes_opt/acquisition.py around lines 995 to 999, the code assigns the same
ConstraintModel instance from the original target_space to the copied
target_space_copy, causing shared mutable state and potential side effects. To
fix this, create a deep copy or a new instance of the ConstraintModel for
target_space_copy instead of assigning the original one directly, ensuring that
fitting or mutating the constraint in the dummy GP does not affect the original
model and maintains isolation between them.

Comment on lines +376 to +386
        xinit = space.random_sample(15 * len(space.bounds), random_state=random_state)
        if len(x_seeds) > 0:
            n_seeds = min(len(x_seeds), len(xinit))
            xinit[:n_seeds] = x_seeds[:n_seeds]

        de_parameters = {"func": acq, "bounds": space.bounds, "polish": False, "init": xinit}
        if version.parse(scipy_version) < version.parse("1.15.0"):
            de_parameters["seed"] = random_state
        else:
            de_parameters["rng"] = random_state


🛠️ Refactor suggestion

rng parameter mismatch for SciPy ≥ 1.15

For SciPy ≥ 1.15 DifferentialEvolutionSolver expects rng to be a numpy.random.Generator.
ensure_rng returns a RandomState, so the current code will raise TypeError: Unsupported rng type.

Convert the RNG where necessary:

if version.parse(scipy_version) < version.parse("1.15.0"):
    de_parameters["seed"] = random_state
else:
-    de_parameters["rng"] = random_state
+    de_parameters["rng"] = np.random.default_rng(random_state.randint(0, 2**32 - 1))
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
         xinit = space.random_sample(15 * len(space.bounds), random_state=random_state)
         if len(x_seeds) > 0:
             n_seeds = min(len(x_seeds), len(xinit))
             xinit[:n_seeds] = x_seeds[:n_seeds]
 
         de_parameters = {"func": acq, "bounds": space.bounds, "polish": False, "init": xinit}
         if version.parse(scipy_version) < version.parse("1.15.0"):
             de_parameters["seed"] = random_state
         else:
-            de_parameters["rng"] = random_state
+            de_parameters["rng"] = np.random.default_rng(
+                random_state.randint(0, 2**32 - 1)
+            )
🤖 Prompt for AI Agents
In bayes_opt/acquisition.py around lines 376 to 386, the code passes
random_state directly as the rng parameter to DifferentialEvolutionSolver for
SciPy versions >= 1.15, but rng must be a numpy.random.Generator, not a
RandomState. To fix this, convert random_state to a numpy.random.Generator
instance before passing it as the rng parameter when SciPy version is >= 1.15.


till-m commented May 26, 2025

@fmfn since you asked me to tag you, here you can have a look.

What's nice about this feature is that it does catch some things that are hard for humans to catch (e.g. this commit was actually a result of me testing the feature on a PR of my fork and getting this problem pointed out to me). OTOH, sometimes it misreports things, e.g. this comment is wrong, since the .lb property of the ConstraintModel is always an array and thus always permits .size calls.

In the end, it's a good tool to point out potential problems but one should probably not mindlessly "fix" them.
