Updated naives theorem #3
Review comment: The changes made in the pull request to the [...] In summary, while the revised code introduces some stylistic changes, it does not offer functional improvements and may reduce the overall readability and maintainability of the code.
First changed file: the alpha-beta pruning game script (@@ -1,66 +1,52 @@).

Original version:

# The minimax function is the heart of the AI. It recursively calculates the optimal move for the AI.
def minimax(total, turn, alpha, beta):
    # Base case: if total is 20, it's a draw, so return 0
    if total == 20:
        return 0
    # Base case: if total is more than 20, the last player to move loses
    elif total > 20:
        if turn:  # If it's the AI's turn, AI loses, so return -1
            return -1
        else:  # If it's the human's turn, human loses, so return 1
            return 1

    # If it's the AI's turn, we want to maximize the score
    if turn:
        max_eval = -float('inf')  # Initialize max_eval to negative infinity
        for i in range(1, 4):  # For each possible move (1, 2, or 3)
            # Recursively call minimax for the next state of the game
            eval = minimax(total + i, False, alpha, beta)
            max_eval = max(max_eval, eval)  # Update max_eval if necessary
            alpha = max(alpha, eval)  # Update alpha if necessary
            if beta <= alpha:  # If beta is less than or equal to alpha, break the loop (alpha-beta pruning)
                break
        return max_eval  # Return the maximum evaluation
    # If it's the human's turn, we want to minimize the score
    else:
        min_eval = float('inf')  # Initialize min_eval to positive infinity
        for i in range(1, 4):  # For each possible move (1, 2, or 3)
            # Recursively call minimax for the next state of the game
            eval = minimax(total + i, True, alpha, beta)
            min_eval = min(min_eval, eval)  # Update min_eval if necessary
            beta = min(beta, eval)  # Update beta if necessary
            if beta <= alpha:  # If beta is less than or equal to alpha, break the loop (alpha-beta pruning)
                break
        return min_eval  # Return the minimum evaluation

# The total score of the game is initially 0
total = 0

# Game loop
while True:
    # Get the human player's move from input and add it to the total
    human_move = int(input("Enter your move (1, 2, or 3): "))
    while human_move not in [1, 2, 3]:  # If the move is not valid, ask for input again
        print("Invalid move. Please enter 1, 2, or 3.")
        human_move = int(input("Enter your move (1, 2, or 3): "))
    total += human_move
    print(f"After your move, total is {total}")
    if total >= 20:  # If the total is 20 or more after the human's move, the human wins
        print("You win!")
        break

    # If the game is not over, it's the AI's turn
    ai_move = 1
    max_eval = -float('inf')
    for i in range(1, 4):  # For each possible move (1, 2, or 3)
        # Call minimax to get the evaluation of the move
        eval = minimax(total + i, False, -float('inf'), float('inf'))
        if eval > max_eval:  # If the evaluation is greater than max_eval, update max_eval and ai_move
            max_eval = eval
            ai_move = i
    total += ai_move  # Add the AI's move to the total
    print(f"AI adds {ai_move}. Total is {total}")
    if total >= 20:  # If the total is 20 or more after the AI's move, the AI wins
        print("AI wins!")
        break

Revised version:

def minimax(total, is_ai_turn, alpha, beta):
    # Base case: If total is 20 or more, the game is over
    if total >= 20:
        return -1 if is_ai_turn else 1 if total > 20 else 0  # -1 if AI loses, 1 if AI wins, 0 if draw

    # Set initial evaluation depending on whose turn it is
    best_eval = -float('inf') if is_ai_turn else float('inf')

    # Explore each possible move (1, 2, or 3)
    for i in range(1, 4):
        eval = minimax(total + i, not is_ai_turn, alpha, beta)  # Recursive call for the next move

        # Maximize if it's AI's turn, else minimize
        if is_ai_turn:
            best_eval = max(best_eval, eval)
            alpha = max(alpha, eval)
        else:
            best_eval = min(best_eval, eval)
            beta = min(beta, eval)

        # Alpha-beta pruning to cut off unnecessary branches
        if beta <= alpha:
            break
    return best_eval

total = 0  # Initialize total score

# Game loop
while total < 20:
    # Human player's move
    human_move = int(input("Enter your move (1, 2, or 3): "))
    while human_move not in [1, 2, 3]:  # Validate input
        human_move = int(input("Invalid move. Enter 1, 2, or 3: "))
    total += human_move
    print(f"Total after your move: {total}")

    # Check if human wins
    if total >= 20:
        print("You win!")
        break

    # AI's turn
    print("AI is making its move...")
    # Select the best move by calling minimax on each possible option
    best_move = max((minimax(total + i, False, -float('inf'), float('inf')), i) for i in range(1, 4))[1]
    total += best_move
    print(f"AI adds {best_move}. Total is {total}")

    # Check if AI wins
    if total >= 20:
        print("AI wins!")
        break
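For readers who want to see what the pruning step actually buys, here is a small standalone sketch (not part of the PR). It re-implements the same search as the revised code, but with an optional `use_pruning` flag and a call counter; the name `minimax_counted` and the `stats` dictionary are introduced here purely for illustration, and the terminal evaluation follows the convention used in the revised code above. With pruning enabled the root evaluation is unchanged, while the number of recursive calls should drop noticeably.

```python
from math import inf

def minimax_counted(total, is_ai_turn, alpha, beta, use_pruning, stats):
    """Minimax for the 'reach 20' game, with pruning that can be switched off."""
    stats["calls"] += 1
    # Terminal evaluation, same convention as the revised code above:
    # -1 if the AI is to move in a finished position, otherwise 1 (AI win) or 0 (draw).
    if total >= 20:
        return -1 if is_ai_turn else 1 if total > 20 else 0
    best = -inf if is_ai_turn else inf
    for move in (1, 2, 3):
        score = minimax_counted(total + move, not is_ai_turn, alpha, beta, use_pruning, stats)
        if is_ai_turn:
            best = max(best, score)
            alpha = max(alpha, score)
        else:
            best = min(best, score)
            beta = min(beta, score)
        if use_pruning and beta <= alpha:
            break  # cut-off: the remaining moves cannot change the decision at this node
    return best

if __name__ == "__main__":
    for use_pruning in (False, True):
        stats = {"calls": 0}
        value = minimax_counted(0, True, -inf, inf, use_pruning, stats)
        print(f"pruning={use_pruning}: root value = {value}, recursive calls = {stats['calls']}")
```

The comparison is run from the opening position (total 0, AI to move), but the same counter could be attached to any other starting total.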
Second changed file: the hill-climbing script (@@ -1,50 +1,35 @@).

Original version:

import numpy as np

def hill_climbing(func, start, step_size=0.01, max_iterations=1000):
    current_position = start
    current_value = func(current_position)

    for i in range(max_iterations):
        next_position_positive = current_position + step_size
        next_value_positive = func(next_position_positive)

        next_position_negative = current_position - step_size
        next_value_negative = func(next_position_negative)

        if next_value_positive > current_value and next_value_positive >= next_value_negative:
            current_position = next_position_positive
            current_value = next_value_positive
        elif next_value_negative > current_value and next_value_negative > next_value_positive:
            current_position = next_position_negative
            current_value = next_value_negative
        else:
            break

    return current_position, current_value

# Get the function from the user
while True:
    func_str = input("\nEnter a function of x: ")
    try:
        # Test the function with a dummy value
        x = 0
        eval(func_str)
        break
    except Exception as e:
        print(f"Invalid function. Please try again. Error: {e}")

# Convert the string into a function
func = lambda x: eval(func_str)

# Get the starting point from the user
while True:
    start_str = input("\nEnter the starting value to begin the search: ")
    try:
        start = float(start_str)
        break
    except ValueError:
        print("Invalid input. Please enter a number.")

maxima, max_value = hill_climbing(func, start)
print(f"The maxima is at x = {maxima}")
print(f"The maximum value obtained is {max_value}")

Revised version:

import numpy as np

def hill_climbing(func, start, step=0.01, max_iter=1000):
    x = start
    for _ in range(max_iter):
        fx = func(x)
        fx_positive = func(x + step)
        fx_negative = func(x - step)

        if fx_positive > fx and fx_positive >= fx_negative:
            x += step
        elif fx_negative > fx and fx_negative > fx_positive:
            x -= step
        else:
            break

    return x, func(x)

if __name__ == "__main__":
    while True:
        try:
            func_str = input("Enter a function of x (e.g., -(x-2)**2 + 4): ")
            func = eval(f"lambda x: {func_str}")  # Convert string to function
            func(0)  # Test the function with x = 0
            break
        except Exception as e:
            print(f"Invalid function. Please try again. Error: {e}")

    while True:
        try:
            start = float(input("Enter the starting value (e.g., 0): "))
            break
        except ValueError:
            print("Invalid input. Please enter a valid number.")

    max_x, max_val = hill_climbing(func, start)
    print(f"Maxima found at x = {max_x}")
    print(f"Maximum value = {max_val}")
Review comment: Useful changes, but add more comments to explain them.
Review comment: Not a useful change.
Review comment: This is done at RVCE; you could have added the course code and retained RVCE.
Review comment: The changes made to the AlphaBetaPruning/Explaination.md file are not useful, for the following reasons:

- Redundant code: the new version reintroduces the definition of the minimax function, which is already explained earlier in the document. This redundancy does not add value and clutters the explanation.
- Clarity and consistency: the original document is more concise and clear in explaining the game loop and the AI's move-selection process. The revised document introduces unnecessary complexity without improving understanding.
- Example dry run: the changes to the dry-run example do not provide additional insight or clarity over the original example, which was sufficient to demonstrate the AI's decision-making process.
- Documentation quality: the original explanation was well structured and easy to follow. The changes disrupt the flow and coherence of the document, making the overall logic and approach harder to follow.

Overall, the changes do not enhance the documentation and may reduce its readability and effectiveness.