This project is designed to create a scalable system for logging and calculating NBA player and team statistics. The system can run either on-premises or on AWS and includes two main microservices: `StatsService` and `AggregationService`. The project uses Go with Gin, PostgreSQL, Kafka, Redis, and Docker Compose.
- Log NBA player statistics
- Calculate aggregate statistics (season average per player and team)
- Highly available and scalable architecture
- Support for batches of tens to hundreds of concurrent requests
- Up-to-date data available immediately after writing
- Maintainable and supports frequent updates
The `StatsService` is responsible for logging player statistics into the PostgreSQL database and publishing these events to Kafka.
- The service exposes a POST endpoint `/log` to accept player statistics in JSON format (a minimal handler sketch follows this list).
- The received statistics are validated and then saved to the PostgreSQL database.
- After saving the statistics, a message is published to a Kafka topic to notify other services about the new data.
- The service exposes GET endpoints to retrieve player and team statistics, as well as their IDs.
- Additionally, endpoints are available to calculate and return the season average statistics for a player or a team.
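A minimal sketch of what the `/log` handler could look like, assuming Gin for routing, `database/sql` with the `lib/pq` driver, and `segmentio/kafka-go` for publishing. The struct fields, table name, topic name, and addresses are illustrative assumptions rather than the project's actual schema:

```go
// Hypothetical /log handler: field names, table name, topic name, and
// connection strings are assumptions for illustration only.
package main

import (
	"context"
	"database/sql"
	"encoding/json"
	"log"
	"net/http"

	"github.com/gin-gonic/gin"
	_ "github.com/lib/pq"
	"github.com/segmentio/kafka-go"
)

// PlayerStat is an assumed shape of the incoming JSON payload.
type PlayerStat struct {
	PlayerID string  `json:"player_id" binding:"required"`
	TeamID   string  `json:"team_id" binding:"required"`
	Points   int     `json:"points"`
	Rebounds int     `json:"rebounds"`
	Assists  int     `json:"assists"`
	Minutes  float64 `json:"minutes"`
}

func logHandler(db *sql.DB, writer *kafka.Writer) gin.HandlerFunc {
	return func(c *gin.Context) {
		var stat PlayerStat
		// Validate the JSON body against the struct tags.
		if err := c.ShouldBindJSON(&stat); err != nil {
			c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
			return
		}

		// Persist the raw statistics in PostgreSQL (table name assumed).
		_, err := db.Exec(
			`INSERT INTO player_stats (player_id, team_id, points, rebounds, assists, minutes)
			 VALUES ($1, $2, $3, $4, $5, $6)`,
			stat.PlayerID, stat.TeamID, stat.Points, stat.Rebounds, stat.Assists, stat.Minutes,
		)
		if err != nil {
			c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to save stats"})
			return
		}

		// Publish the event to Kafka so AggregationService can update aggregates.
		payload, _ := json.Marshal(stat)
		if err := writer.WriteMessages(context.Background(), kafka.Message{Value: payload}); err != nil {
			c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to publish event"})
			return
		}

		c.JSON(http.StatusCreated, gin.H{"status": "logged"})
	}
}

func main() {
	// Placeholder DSN and broker address; the real service reads these
	// from the environment variables listed in the Configuration section.
	db, err := sql.Open("postgres", "postgres://user:pass@localhost:5432/nba?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	writer := &kafka.Writer{
		Addr:  kafka.TCP("localhost:9092"), // placeholder broker
		Topic: "player-stats",              // assumed topic name
	}

	r := gin.Default()
	r.POST("/log", logHandler(db, writer))
	log.Fatal(r.Run(":8080"))
}
```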
The `AggregationService` is responsible for consuming events from Kafka, updating aggregate statistics, and caching these statistics in Redis for fast access.
- The service subscribes to the Kafka topic where `StatsService` publishes new statistics events.
- Upon receiving a new event, it updates the aggregate statistics for the respective player and team (see the consumer sketch after this list).
- The aggregate statistics are calculated using the data received from the Kafka events.
- These statistics are then cached in Redis for quick retrieval.
- The service can also recalculate all aggregate statistics from the database upon startup to ensure the cache is up-to-date.
- The service exposes GET endpoints to retrieve the cached aggregate statistics for players and teams.
- If the requested data is not in the cache, the service can recompute the aggregates from the database.
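A rough sketch of the consumer loop, assuming `segmentio/kafka-go` and `go-redis`. The topic name, consumer group, Redis key layout, event fields, and in-memory aggregation are assumptions for illustration; as noted above, the real service can also rebuild aggregates from PostgreSQL at startup:

```go
// Hypothetical consumer sketch: topic name, consumer group, Redis key
// layout, and event fields are assumptions for illustration only.
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"
	"os"

	"github.com/redis/go-redis/v9"
	"github.com/segmentio/kafka-go"
)

// StatEvent is an assumed shape of the events published by StatsService.
type StatEvent struct {
	PlayerID string `json:"player_id"`
	TeamID   string `json:"team_id"`
	Points   int    `json:"points"`
}

type runningAvg struct {
	total int
	games int
}

func main() {
	ctx := context.Background()

	rdb := redis.NewClient(&redis.Options{Addr: os.Getenv("REDIS_ADDR")})

	reader := kafka.NewReader(kafka.ReaderConfig{
		Brokers: []string{os.Getenv("KAFKA_BROKER")},
		Topic:   "player-stats",        // assumed topic name
		GroupID: "aggregation-service", // assumed consumer group
	})
	defer reader.Close()

	// In-memory running totals; a production service would also rebuild
	// these from PostgreSQL on startup, as described above.
	players := map[string]*runningAvg{}

	for {
		msg, err := reader.ReadMessage(ctx)
		if err != nil {
			log.Fatal(err)
		}

		var ev StatEvent
		if err := json.Unmarshal(msg.Value, &ev); err != nil {
			log.Printf("skipping malformed event: %v", err)
			continue
		}

		// Update the running season average for the player.
		agg, ok := players[ev.PlayerID]
		if !ok {
			agg = &runningAvg{}
			players[ev.PlayerID] = agg
		}
		agg.total += ev.Points
		agg.games++
		avg := float64(agg.total) / float64(agg.games)

		// Cache the aggregate in Redis for fast reads (key layout assumed).
		key := fmt.Sprintf("average:player:%s", ev.PlayerID)
		if err := rdb.Set(ctx, key, avg, 0).Err(); err != nil {
			log.Printf("failed to cache aggregate: %v", err)
		}
	}
}
```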
- A client sends a POST request to `StatsService` with player statistics (an example payload follows this list).
- `StatsService` validates and saves the statistics to PostgreSQL.
- `StatsService` publishes an event to Kafka about the new statistics.
- `AggregationService` consumes the event from Kafka.
- `AggregationService` updates the aggregate statistics for the player and team.
- The updated statistics are cached in Redis.
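To make the write path concrete, a `/log` request body might look like the following; the field names are assumptions here, since the actual schema is defined by `StatsService`:

```json
{
  "player_id": "curry-30",
  "team_id": "gsw",
  "points": 31,
  "rebounds": 5,
  "assists": 8,
  "minutes": 34.5
}
```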
- A client sends a GET request to `AggregationService` to retrieve player or team aggregate statistics.
- `AggregationService` checks Redis for cached data (see the read-path sketch after this list).
- If the data is not cached, `AggregationService` retrieves raw statistics from `StatsService`, calculates the aggregates, and caches the result.
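A sketch of this read path under the same assumptions as above; `recomputePlayerAverage` is a hypothetical stand-in for fetching raw statistics from `StatsService` and averaging them, and the Redis key layout and port are assumptions:

```go
// Hypothetical read-path sketch for GET /average/player/:player_id.
package main

import (
	"log"
	"net/http"
	"os"

	"github.com/gin-gonic/gin"
	"github.com/redis/go-redis/v9"
)

// recomputePlayerAverage stands in for fetching raw statistics from
// StatsService and averaging them; hypothetical helper, not the real code.
func recomputePlayerAverage(playerID string) (float64, error) {
	return 0, nil // placeholder
}

func main() {
	rdb := redis.NewClient(&redis.Options{Addr: os.Getenv("REDIS_ADDR")})

	r := gin.Default()
	r.GET("/average/player/:player_id", func(c *gin.Context) {
		playerID := c.Param("player_id")
		key := "average:player:" + playerID // assumed key layout

		// Fast path: serve the cached aggregate from Redis.
		if avg, err := rdb.Get(c, key).Float64(); err == nil {
			c.JSON(http.StatusOK, gin.H{"player_id": playerID, "season_average": avg})
			return
		}

		// Cache miss: recompute the aggregate and repopulate the cache.
		avg, err := recomputePlayerAverage(playerID)
		if err != nil {
			c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to compute average"})
			return
		}
		rdb.Set(c, key, avg, 0)
		c.JSON(http.StatusOK, gin.H{"player_id": playerID, "season_average": avg})
	})
	log.Fatal(r.Run(":8081")) // assumed port
}
```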
`StatsService` endpoints:

- `POST /log`: Log NBA player statistics
- `GET /player/:player_id/stats`: Get player stats
- `GET /team/:team_id/stats`: Get team stats
- `GET /player/ids`: Get all player IDs
- `GET /team/ids`: Get all team IDs
- `GET /average/player/:player_id`: Get player season average
- `GET /average/team/:team_id`: Get team season average

`AggregationService` endpoints:

- `GET /average/player/:player_id`: Get cached player season average
- `GET /average/team/:team_id`: Get cached team season average
Configuration is managed through environment variables. The following environment variables are required:
- `DB_USER`: Database user
- `DB_PASSWORD`: Database password
- `DB_NAME`: Database name
- `DB_HOST`: Database host
- `DB_PORT`: Database port
- `KAFKA_BROKER`: Kafka broker address
- `REDIS_ADDR`: Redis address
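For local development, a `.env` file along these lines can be used; the values below are placeholders and must match the service names and ports in your Docker Compose setup:

```
DB_USER=postgres
DB_PASSWORD=postgres
DB_NAME=nba_stats
DB_HOST=postgres
DB_PORT=5432
KAFKA_BROKER=kafka:9092
REDIS_ADDR=redis:6379
```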
- Create a `.env` file with the necessary environment variables.
- Run the following command to start the services:

```bash
docker-compose up --build
```