
Updated naives theorem #3

Open · wants to merge 12 commits into base: main
46 changes: 35 additions & 11 deletions AlphaBetaPruning/Explaination.md
Owner commented:
The changes made to the AlphaBetaPruning/Explaination.md file are not useful for the following reasons:

  1. Redundant Code: The new implementation reintroduces the definition of the minimax function, which is already explained earlier in the document. This redundancy does not add value and clutters the explanation.

  2. Clarity and Consistency: The original document is more concise and clear in explaining the game loop and the AI's move selection process. The revised version adds unnecessary complexity without improving understanding.

  3. Example Dry Run: The changes to the dry run example do not provide additional insights or clarity over the original example. The original example was sufficient to demonstrate the AI's decision-making process.

  4. Documentation Quality: The original explanation was well-structured and easy to follow. The changes disrupt the flow and coherence of the document, making it harder to understand the overall logic and approach.

Overall, the changes do not enhance the documentation and may reduce its readability and effectiveness.

@@ -39,32 +39,56 @@ Here's how the code would execute these moves:

```diff
 # Initial total is 0
 total = 0
 
+# Define the minimax function
+def minimax(total, is_maximizing, alpha, beta):
+    # Base case: If total reaches or exceeds 20, return a high value for winning, low for losing
+    if total >= 20:
+        return 1 if is_maximizing else -1
+
+    if is_maximizing:
+        max_eval = -float('inf')
+        for i in range(1, 4):  # AI can add 1, 2, or 3
+            eval = minimax(total + i, False, alpha, beta)
+            max_eval = max(max_eval, eval)
+            alpha = max(alpha, eval)
+            if beta <= alpha:
+                break
+        return max_eval
+    else:
+        min_eval = float('inf')
+        for i in range(1, 4):  # Human can add 1, 2, or 3
+            eval = minimax(total + i, True, alpha, beta)
+            min_eval = min(min_eval, eval)
+            beta = min(beta, eval)
+            if beta <= alpha:
+                break
+        return min_eval
+
 # Game loop
 while True:
     # Human's turn
-    human_move = int(input("Enter your move (1, 2, or 3): "))  # Let's say the human enters 1
-    total += human_move  # total becomes 1
-    print(f"After your move, total is {total}")  # Prints "After your move, total is 1"
-    if total >= 20:  # The total is not 20 or more, so the game continues
+    human_move = int(input("Enter your move (1, 2, or 3): "))
+    total += human_move
+    print(f"After your move, total is {total}")
+    if total >= 20:
         print("You win!")
         break
 
     # AI's turn
     print("AI is making its move...")
     ai_move = 1
     max_eval = -float('inf')
-    for i in range(1, 4):  # For each possible move (1, 2, or 3)
-        eval = minimax(total + i, False, -float('inf'), float('inf'))  # Call minimax to get the evaluation of the move
-        if eval > max_eval:  # If the evaluation is greater than max_eval, update max_eval and ai_move
+    for i in range(1, 4):
+        eval = minimax(total + i, False, -float('inf'), float('inf'))
+        if eval > max_eval:
             max_eval = eval
             ai_move = i
-    total += ai_move  # Add the AI's move to the total. Let's say the AI adds 3, so total becomes 4
-    print(f"AI adds {ai_move}. Total is {total}")  # Prints "AI adds 3. Total is 4"
-    if total >= 20:  # The total is not 20 or more, so the game continues
+    total += ai_move
+    print(f"AI adds {ai_move}. Total is {total}")
+    if total >= 20:
         print("AI wins!")
         break
 
 # The game continues in this way until the total reaches or exceeds 20
```

In this example, the AI wins because the total reaches 20 after the AI's move. The AI uses the Minimax algorithm with Alpha-Beta pruning to decide its moves, always choosing the move that maximizes its score assuming that the human player is also playing optimally.
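To make the decision rule concrete, here is a small, self-contained sketch (an illustration, not the PR's code) of minimax with alpha-beta pruning for the Game of 20. It assumes the usual sign convention for this game: if the total is already 20 or more, the previous player reached 20 first, so the side *about to move* has lost. The helper `best_ai_move` is a hypothetical name introduced here for the selection step:

```python
def minimax(total, is_ai_turn, alpha, beta):
    # Terminal: the previous player pushed the total to 20 or beyond,
    # so whoever is about to move has lost. Score from the AI's view.
    if total >= 20:
        return -1 if is_ai_turn else 1

    best = -float('inf') if is_ai_turn else float('inf')
    for i in range(1, 4):  # moves: add 1, 2, or 3
        val = minimax(total + i, not is_ai_turn, alpha, beta)
        if is_ai_turn:
            best = max(best, val)
            alpha = max(alpha, val)
        else:
            best = min(best, val)
            beta = min(beta, val)
        if beta <= alpha:  # prune branches that cannot change the result
            break
    return best

def best_ai_move(total):
    # Pick the move whose resulting position scores highest for the AI.
    return max(range(1, 4),
               key=lambda i: minimax(total + i, False,
                                     -float('inf'), float('inf')))

print(best_ai_move(17))  # from 17 the AI can add 3 and reach 20 at once
```

From total 17 the sketch chooses 3, winning immediately, which matches the dry run's "always choose the move that maximizes the AI's score" description.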
96 changes: 41 additions & 55 deletions AlphaBetaPruning/GameOf20.py
Owner commented:

The changes made in the pull request to the AlphaBetaPruning/GameOf20.py file are not useful for the following reasons:

  1. Redundant Logic: The new implementation combines multiple conditional checks into a single line, which, while concise, reduces clarity and maintainability. The original implementation had clearer separation of conditions, making it easier to understand the game logic.

  2. Function Parameters: The parameter name change from turn to is_ai_turn does not add significant value and might confuse readers familiar with the original code. Consistency in naming conventions is important for readability.

  3. Alpha-Beta Pruning: The new code introduces alpha-beta pruning logic, but the changes do not enhance the performance or accuracy of the minimax function. The original implementation already had alpha-beta pruning effectively integrated.

  4. Code Clarity: The original code had well-defined comments that explained each step of the minimax algorithm. The revised code has fewer comments, which may make it harder for others to understand the algorithm without additional context.

  5. Game Loop: The changes in the game loop, such as the handling of human and AI moves, introduce unnecessary complexity without improving functionality. The original loop was straightforward and easy to follow.

  6. Evaluation and Move Selection: The new code's approach to evaluating and selecting moves does not offer a clear advantage over the original method. The original code's approach was simpler and achieved the same goal.

In summary, while the revised code introduces some stylistic changes, it does not offer functional improvements and may reduce the overall readability and maintainability of the code.
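The performance claim in point 3 is measurable rather than a matter of taste. The following sketch (an illustration written for this review, not code from either version of the PR) counts recursive calls for the Game of 20 with and without the alpha-beta cutoff, using the sign convention that the side to move at a total of 20 or more has lost:

```python
def count_calls(total, is_ai_turn, alpha, beta, prune=True):
    # Return (minimax value, number of recursive calls) for the Game of 20.
    calls = 1
    if total >= 20:
        return (-1 if is_ai_turn else 1), calls

    best = -float('inf') if is_ai_turn else float('inf')
    for i in range(1, 4):
        val, sub = count_calls(total + i, not is_ai_turn, alpha, beta, prune)
        calls += sub
        if is_ai_turn:
            best, alpha = max(best, val), max(alpha, val)
        else:
            best, beta = min(best, val), min(beta, val)
        if prune and beta <= alpha:  # the only difference between the two runs
            break
    return best, calls

v_pruned, n_pruned = count_calls(0, True, -float('inf'), float('inf'), prune=True)
v_full, n_full = count_calls(0, True, -float('inf'), float('inf'), prune=False)
print(n_pruned, n_full)  # pruning visits far fewer nodes, same value
```

Both runs return the same game value (pruning never changes the minimax result), so comparing the two call counts is a fair way to check whether one integration of alpha-beta prunes more effectively than the other.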

@@ -1,66 +1,52 @@

```diff
-# The minimax function is the heart of the AI. It recursively calculates the optimal move for the AI.
-def minimax(total, turn, alpha, beta):
-    # Base case: if total is 20, it's a draw, so return 0
-    if total == 20:
-        return 0
-    # Base case: if total is more than 20, the last player to move loses
-    elif total > 20:
-        if turn:  # If it's the AI's turn, AI loses, so return -1
-            return -1
-        else:  # If it's the human's turn, human loses, so return 1
-            return 1
-
-    # If it's the AI's turn, we want to maximize the score
-    if turn:
-        max_eval = -float('inf')  # Initialize max_eval to negative infinity
-        for i in range(1, 4):  # For each possible move (1, 2, or 3)
-            # Recursively call minimax for the next state of the game
-            eval = minimax(total + i, False, alpha, beta)
-            max_eval = max(max_eval, eval)  # Update max_eval if necessary
-            alpha = max(alpha, eval)  # Update alpha if necessary
-            if beta <= alpha:  # If beta is less than or equal to alpha, break the loop (alpha-beta pruning)
-                break
-        return max_eval  # Return the maximum evaluation
-    # If it's the human's turn, we want to minimize the score
-    else:
-        min_eval = float('inf')  # Initialize min_eval to positive infinity
-        for i in range(1, 4):  # For each possible move (1, 2, or 3)
-            # Recursively call minimax for the next state of the game
-            eval = minimax(total + i, True, alpha, beta)
-            min_eval = min(min_eval, eval)  # Update min_eval if necessary
-            beta = min(beta, eval)  # Update beta if necessary
-            if beta <= alpha:  # If beta is less than or equal to alpha, break the loop (alpha-beta pruning)
-                break
-        return min_eval  # Return the minimum evaluation
+def minimax(total, is_ai_turn, alpha, beta):
+    # Base case: If total is 20 or more, the game is over
+    if total >= 20:
+        return -1 if is_ai_turn else 1 if total > 20 else 0  # -1 if AI loses, 1 if AI wins, 0 if draw
+
+    # Set initial evaluation depending on whose turn it is
+    best_eval = -float('inf') if is_ai_turn else float('inf')
+
+    # Explore each possible move (1, 2, or 3)
+    for i in range(1, 4):
+        eval = minimax(total + i, not is_ai_turn, alpha, beta)  # Recursive call for the next move
+
+        # Maximize if it's AI's turn, else minimize
+        if is_ai_turn:
+            best_eval = max(best_eval, eval)
+            alpha = max(alpha, eval)
+        else:
+            best_eval = min(best_eval, eval)
+            beta = min(beta, eval)
+
+        # Alpha-beta pruning to cut off unnecessary branches
+        if beta <= alpha:
+            break
+    return best_eval
 
-# The total score of the game is initially 0
-total = 0
+total = 0  # Initialize total score
 
 # Game loop
-while True:
-    # Get the human player's move from input and add it to the total
+while total < 20:
+    # Human player's move
     human_move = int(input("Enter your move (1, 2, or 3): "))
-    while human_move not in [1, 2, 3]:  # If the move is not valid, ask for input again
-        print("Invalid move. Please enter 1, 2, or 3.")
-        human_move = int(input("Enter your move (1, 2, or 3): "))
+    while human_move not in [1, 2, 3]:  # Validate input
+        human_move = int(input("Invalid move. Enter 1, 2, or 3: "))
     total += human_move
-    print(f"After your move, total is {total}")
-    if total >= 20:  # If the total is 20 or more after the human's move, the human wins
+    print(f"Total after your move: {total}")
+
+    # Check if human wins
+    if total >= 20:
         print("You win!")
         break
 
-    # If the game is not over, it's the AI's turn
+    # AI's turn
     print("AI is making its move...")
-    ai_move = 1
-    max_eval = -float('inf')
-    for i in range(1, 4):  # For each possible move (1, 2, or 3)
-        # Call minimax to get the evaluation of the move
-        eval = minimax(total + i, False, -float('inf'), float('inf'))
-        if eval > max_eval:  # If the evaluation is greater than max_eval, update max_eval and ai_move
-            max_eval = eval
-            ai_move = i
-    total += ai_move  # Add the AI's move to the total
-    print(f"AI adds {ai_move}. Total is {total}")
-    if total >= 20:  # If the total is 20 or more after the AI's move, the AI wins
+    # Select the best move by calling minimax on each possible option
+    best_move = max((minimax(total + i, False, -float('inf'), float('inf')), i) for i in range(1, 4))[1]
+    total += best_move
+    print(f"AI adds {best_move}. Total is {total}")
+
+    # Check if AI wins
+    if total >= 20:
         print("AI wins!")
         break
```
73 changes: 29 additions & 44 deletions HillClimbSearch/HillClimbSearchEval.py
@@ -1,50 +1,35 @@

```diff
-import numpy as np
-
-def hill_climbing(func, start, step_size=0.01, max_iterations=1000):
-    current_position = start
-    current_value = func(current_position)
-
-    for i in range(max_iterations):
-        next_position_positive = current_position + step_size
-        next_value_positive = func(next_position_positive)
-
-        next_position_negative = current_position - step_size
-        next_value_negative = func(next_position_negative)
-
-        if next_value_positive > current_value and next_value_positive >= next_value_negative:
-            current_position = next_position_positive
-            current_value = next_value_positive
-        elif next_value_negative > current_value and next_value_negative > next_value_positive:
-            current_position = next_position_negative
-            current_value = next_value_negative
-        else:
-            break
-
-    return current_position, current_value
+def hill_climbing(func, start, step=0.01, max_iter=1000):
+    x = start
+    for _ in range(max_iter):
+        fx = func(x)
+        fx_positive = func(x + step)
+        fx_negative = func(x - step)
+
+        if fx_positive > fx and fx_positive >= fx_negative:
+            x += step
+        elif fx_negative > fx and fx_negative > fx_positive:
+            x -= step
+        else:
+            break
+    return x, func(x)
 
-# Get the function from the user
-while True:
-    func_str = input("\nEnter a function of x: ")
-    try:
-        # Test the function with a dummy value
-        x = 0
-        eval(func_str)
-        break
-    except Exception as e:
-        print(f"Invalid function. Please try again. Error: {e}")
-
-# Convert the string into a function
-func = lambda x: eval(func_str)
-
-# Get the starting point from the user
-while True:
-    start_str = input("\nEnter the starting value to begin the search: ")
-    try:
-        start = float(start_str)
-        break
-    except ValueError:
-        print("Invalid input. Please enter a number.")
-
-maxima, max_value = hill_climbing(func, start)
-print(f"The maxima is at x = {maxima}")
-print(f"The maximum value obtained is {max_value}")
+if __name__ == "__main__":
+    while True:
+        try:
+            func_str = input("Enter a function of x (e.g., -(x-2)**2 + 4): ")
+            func = eval(f"lambda x: {func_str}")  # Convert string to function
+            func(0)  # Test the function with x = 0
+            break
+        except Exception as e:
+            print(f"Invalid function. Please try again. Error: {e}")
+
+    while True:
+        try:
+            start = float(input("Enter the starting value (e.g., 0): "))
+            break
+        except ValueError:
+            print("Invalid input. Please enter a valid number.")
+
+    max_x, max_val = hill_climbing(func, start)
+    print(f"Maxima found at x = {max_x}")
+    print(f"Maximum value = {max_val}")
```
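The refactored climber can be sanity-checked non-interactively on a concave function with a known optimum. This usage sketch uses the example function suggested by the new prompt, -(x-2)**2 + 4, whose analytic maximum is 4 at x = 2:

```python
def hill_climbing(func, start, step=0.01, max_iter=1000):
    # Greedy local search: step in whichever direction improves the value.
    x = start
    for _ in range(max_iter):
        fx = func(x)
        fx_positive = func(x + step)
        fx_negative = func(x - step)

        if fx_positive > fx and fx_positive >= fx_negative:
            x += step
        elif fx_negative > fx and fx_negative > fx_positive:
            x -= step
        else:
            break  # neither neighbour improves: a local maximum
    return x, func(x)

# Starting at 0, the search walks up in 0.01 steps and stops near x = 2.
max_x, max_val = hill_climbing(lambda x: -(x - 2) ** 2 + 4, start=0.0)
print(max_x, max_val)
```

Because the step size is fixed, the result lands within one step of the true maximum rather than exactly on it; that tolerance is inherent to this kind of hill climbing.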
20 changes: 11 additions & 9 deletions Logistic_Regression/LogisticReg.py
Owner commented:

Useful changes but add more comments to explain

@@ -1,28 +1,34 @@

```diff
-import matplotlib.pyplot as plt
 import numpy as np
+import matplotlib.pyplot as plt
 from sklearn.datasets import load_iris
 from sklearn.model_selection import train_test_split
 from sklearn.preprocessing import StandardScaler
 
 def sigmoid(z):
     return 1.0 / (1.0 + np.exp(-z))
 
-def logistic_regression(X, y, num_iterations=200, learning_rate=0.001):
+def cost_function(h, y):
+    return (-y * np.log(h) - (1 - y) * np.log(1 - h)).mean()
+
+def gradient(X, h, y):
+    return np.dot(X.T, (h - y)) / y.shape[0]
+
+def logistic_regression(X, y, num_iterations=5000, learning_rate=0.1):
     weights = np.zeros(X.shape[1])
     for _ in range(num_iterations):
         z = np.dot(X, weights)
         h = sigmoid(z)
-        gradient_val = np.dot(X.T, (h - y)) / y.shape[0]
+        gradient_val = gradient(X, h, y)
         weights -= learning_rate * gradient_val
     return weights
 
 # Load Iris dataset
 iris = load_iris()
-X = iris.data[:, :2]  # Use only the first two features (sepal length and width)
+X = iris.data[:, :2]  # Use only the first two features
 y = (iris.target != 0) * 1  # Convert to binary classification
 
 # Split the dataset
-X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=9)
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
 
 # Standardize features
 sc = StandardScaler()
```

@@ -38,7 +44,6 @@ def logistic_regression(X, y, num_iterations=200, learning_rate=0.001):

```diff
 # Print accuracy
 print(f'Accuracy: {np.mean(y_pred == y_test):.4f}')
 
-
 # Plot decision boundary
 x_min, x_max = X_train_std[:, 0].min() - 1, X_train_std[:, 0].max() + 1
 y_min, y_max = X_train_std[:, 1].min() - 1, X_train_std[:, 1].max() + 1
```

@@ -53,7 +58,4 @@ def logistic_regression(X, y, num_iterations=200, learning_rate=0.001):

```diff
 plt.title('Logistic Regression Decision Boundaries')
 plt.xlabel('Sepal length')
 plt.ylabel('Sepal width')
-
 plt.savefig('plot.png')
-
-
```
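The refactored gradient-descent loop can be verified in isolation, without the Iris pipeline. This sketch (synthetic data invented for illustration, reusing the PR's `sigmoid`/`gradient`/`logistic_regression` helper names) trains on a tiny linearly separable set and checks that the learned weights classify it correctly:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient(X, h, y):
    # Average gradient of the log-loss with respect to the weights
    return np.dot(X.T, (h - y)) / y.shape[0]

def logistic_regression(X, y, num_iterations=5000, learning_rate=0.1):
    weights = np.zeros(X.shape[1])
    for _ in range(num_iterations):
        h = sigmoid(np.dot(X, weights))
        weights -= learning_rate * gradient(X, h, y)
    return weights

# Toy separable data: label is 1 exactly when the first feature is positive.
# The constant second column acts as a bias term.
X = np.array([[-2.0, 1.0], [-1.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
y = np.array([0, 0, 1, 1])

w = logistic_regression(X, y)
preds = (sigmoid(X @ w) >= 0.5).astype(int)
print(preds)
```

With 5000 iterations at learning rate 0.1 the weights separate this set perfectly, which is the kind of quick check that makes the change from 200 iterations at 0.001 easier to evaluate.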
@@ -1,5 +1,5 @@

```diff
 
-## Naive Bayes Classifier
+## Naive Bayes Classifier using Gaussian method
 
 The Naive Bayes classifier is a simple and effective classification algorithm that uses probabilities and Bayes' theorem to predict the class of an instance. The 'naive' part comes from the assumption that all features are independent of each other, which is not always the case in real-world data, but it simplifies the calculations and often works well in practice.
```
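The "Gaussian method" named in the revised heading means each feature is modelled, per class, as an independent normal distribution. A minimal sketch of that idea (written for illustration here, not taken from the repository's `NaiveBayes` class) on one-dimensional toy data:

```python
import numpy as np

class GaussianNB:
    # Minimal Gaussian Naive Bayes: per class, each feature is an
    # independent normal distribution estimated from the training data.
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mean = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.prior = np.array([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        # Work in log space: summing log-densities avoids underflow from
        # multiplying many small per-feature probabilities.
        log_lik = -0.5 * (np.log(2 * np.pi * self.var)[None, :, :]
                          + (X[:, None, :] - self.mean[None, :, :]) ** 2
                          / self.var[None, :, :]).sum(axis=2)
        return self.classes[np.argmax(np.log(self.prior)[None, :] + log_lik,
                                      axis=1)]

# Two well-separated clusters around 1.0 and 5.0
X = np.array([[1.0], [1.2], [0.8], [5.0], [5.2], [4.8]])
y = np.array([0, 0, 0, 1, 1, 1])
model = GaussianNB().fit(X, y)
print(model.predict(np.array([[1.1], [5.1]])))  # one point near each cluster
```

The small constant added to the variance is a common numerical guard against zero variance when a feature is constant within a class.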
Owner commented:

Not a useful change

@@ -7,6 +7,7 @@

```diff
 X, y = iris.data, iris.target
 class_names = iris.target_names
 
+
 class NaiveBayes:
     def fit(self, X, y):
         self._classes = np.unique(y)
```

@@ -54,4 +55,4 @@ def _pdf(self, class_idx, x):

```diff
 
 # Print classification report
 print("\nClassification Report:")
-print(classification_report(y_test, y_pred, target_names=class_names))
\ No newline at end of file
+print(classification_report(y_test, y_pred, target_names=class_names))
```
2 changes: 1 addition & 1 deletion ReadMe.md
Owner commented:

This is done at RVCE; you could have added the course code and retained RVCE.

@@ -27,7 +27,7 @@ pip install -r requirements.txt

```diff
 
 ## Usage
 
-1. Can be used for the AI_ML Lab for the course 21AI52 at RVCE
+1. Can be used for the AI_ML Lab for the course 21AI52 and IS353IA
 
 ## Contributing
```