Enemax is a microservices-based platform designed to empower energy management systems with sustainability in mind. Our platform offers a range of services that optimize energy usage, monitor renewable energy sources, and facilitate efficient grid management. With Enemax, you can achieve greater control over energy consumption, improve demand response capabilities, manage energy storage efficiently, and ensure predictive maintenance for a sustainable energy ecosystem.
- Energy Monitoring: Enemax provides real-time monitoring of energy consumption at various levels, including individual devices, buildings, and entire energy systems. It offers comprehensive data visualization and analytics to help you understand energy usage patterns, identify areas for improvement, and make informed decisions.
- Demand Response: With Enemax, you can implement demand response strategies to manage energy consumption during peak demand periods. Our microservices enable dynamic load shedding, load shifting, and load balancing, helping you optimize energy usage, reduce costs, and minimize strain on the grid.
- Energy Storage Management: Enemax supports efficient management of energy storage systems, including batteries and other storage technologies. By integrating with renewable energy sources and grid infrastructure, Enemax enables seamless energy storage, distribution, and utilization, maximizing the benefits of sustainable energy.
- Predictive Maintenance: Enemax incorporates predictive maintenance capabilities to ensure the optimal performance and longevity of energy systems. By analyzing real-time and historical data, our platform identifies potential issues, predicts maintenance requirements, and proactively alerts you to mitigate downtime and costly repairs.
Enemax is built using a microservices architecture, providing flexibility, scalability, and modularity. The architecture consists of the following components:
- Service Registry: A central service registry manages the discovery and registration of microservices within the Enemax platform. It allows easy access to different services and enables seamless communication between them.
- API Gateway: The API gateway serves as a single entry point for clients to access Enemax services. It handles authentication, routing, load balancing, and other cross-cutting concerns. The API gateway also provides security measures such as rate limiting and access control (a request sketch through the gateway follows this list).
- Energy Monitoring Service: This service collects energy consumption data from various sources, such as smart meters, sensors, and IoT devices. It processes the data, performs analytics, and generates insights and visualizations to help users track and optimize energy usage.
- Demand Response Service: The demand response service enables users to implement demand response strategies based on predefined rules or intelligent algorithms. It communicates with energy-consuming devices, enabling load shedding or shifting to balance energy demand and supply.
- Energy Storage Management Service: This service manages the storage and distribution of energy from renewable sources and the grid. It optimizes the usage of energy storage systems, coordinates energy flows, and ensures efficient utilization of stored energy.
- Predictive Maintenance Service: The predictive maintenance service leverages machine learning algorithms and historical data to identify potential maintenance requirements. It generates alerts and recommendations to help users prevent equipment failures, optimize maintenance schedules, and minimize downtime.
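To illustrate how a client interacts with the platform, the sketch below sends a request through the API gateway to the Energy Monitoring Service. The gateway address, route prefix, and endpoint are assumptions made for illustration only; check your gateway's routing configuration for the actual paths.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MonitoringClientExample {

    public static void main(String[] args) throws Exception {
        // Hypothetical route: the gateway (assumed to listen on localhost:8080)
        // forwards /services/monitoring/** to the Energy Monitoring Service.
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:8080/services/monitoring/api/readings?deviceId=42"))
            .header("Authorization", "Bearer <access-token>") // authentication is handled at the gateway
            .GET()
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.statusCode());
        System.out.println(response.body()); // e.g. JSON with consumption data points
    }
}
```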
To get started with Enemax, follow these steps:
- Clone the Enemax repository from GitHub:
  git clone https://github.com/enemax/enemax.git
- Install the necessary dependencies for each microservice by following the instructions in their respective directories.
- Configure the service registry and API gateway according to your environment.
- Start the microservices individually or use a container orchestration tool, such as Kubernetes, to manage the deployment and scaling.
- Access the Enemax API gateway and start utilizing the available services for energy monitoring, demand response, energy storage management, and predictive maintenance.
For detailed installation and usage instructions, please refer to the documentation in the Enemax repository.
We welcome contributions from the community to enhance Enemax and make it even more powerful and robust. If you're interested in contributing, please follow the guidelines outlined in the CONTRIBUTING.md file in the Enemax repository.
Enemax is released under the MIT License. You are free to use, modify, and distribute Enemax for both commercial and non-commercial purposes.
If you encounter any issues, have questions, or need assistance with Enemax, please reach out to our support team at support@enemax.com. We are here to help!
We would like to express our gratitude to the open-source community for their valuable contributions and the continuous support they provide. Thank you for being part of the sustainable energy revolution with Enemax!
Before installing and running Enemax, ensure that your system meets the following requirements:
- Operating System: [Specify the supported operating systems]
- Memory: [Minimum required memory]
- Disk Space: [Minimum required disk space]
- Processor: [Minimum required processor]
Provide step-by-step instructions for installing Enemax. Include any prerequisites, dependencies, and configuration steps needed for a successful installation. You can include code snippets or commands to help users easily follow the installation process.
Explain how users can configure Enemax to suit their specific environment and requirements. Include information on how to set up connections to external systems, adjust settings, and customize the behavior of the microservices. Provide clear instructions and examples to guide users through the configuration process.
Provide detailed instructions on how to use Enemax and its various microservices. Explain the available APIs, endpoints, and parameters, along with examples to demonstrate how to interact with the services. Provide sample code snippets or API requests to help users quickly get started.
Include some practical examples that showcase how Enemax can be used in real-world scenarios. Demonstrate how to leverage different microservices to achieve specific energy management goals or solve common energy-related challenges. Provide code samples, configurations, or visual representations to enhance understanding.
Outline the future development plans and features that are planned for Enemax. This section helps users understand the project's direction and upcoming enhancements. If possible, provide an estimated timeline or versioning strategy for the planned updates.
Compile a list of frequently asked questions and their answers to address common queries from users. Cover topics such as installation issues, troubleshooting, and best practices. Update this section regularly based on user feedback and inquiries.
Maintain a changelog that documents the version history of Enemax. Include release notes, bug fixes, new features, and any other notable changes made to the platform. This allows users to stay informed about the updates and improvements in each release.
Explain the security measures implemented in Enemax to protect user data, ensure secure communication, and prevent unauthorized access. Highlight any encryption, authentication, or authorization mechanisms employed within the microservices. If applicable, provide guidelines for users to enhance the security of their Enemax deployment.
Provide detailed guidelines on how users can contribute to the development of Enemax. Explain the process for submitting bug reports, feature requests, and pull requests. Include information on code style, testing, and documentation standards to maintain consistency across contributions.
Reiterate the licensing information for Enemax and provide a link to the full license text.
Provide information on how users can seek support or get in touch with the Enemax team. Include an email address, community forum, or any other channels where users can ask questions, report issues, or provide feedback.
Express gratitude to individuals, organizations, or projects that have contributed to Enemax. Recognize the efforts of open-source libraries, frameworks, or tools that have been utilized in the development of the platform.
Node is required for generation and recommended for development. package.json is always generated for a better development experience with prettier, commit hooks, scripts, and so on.
In the project root, JHipster generates configuration files for tools like git, prettier, eslint, husky, and others that are well known; you can find references to them on the web.
`/src/*` structure follows the default Java structure.

- `.yo-rc.json` - Yeoman configuration file. JHipster configuration is stored in this file at the `generator-jhipster` key. You may find `generator-jhipster-*` entries for specific blueprint configuration.
- `.yo-resolve` (optional) - Yeoman conflict resolver. Allows using a specific action when conflicts are found, skipping prompts for files that match a pattern. Each line should match `[pattern] [action]`, where pattern is a Minimatch pattern and action is one of skip (default if omitted) or force. Lines starting with `#` are considered comments and are ignored (see the sample after this list).
- `.jhipster/*.json` - JHipster entity configuration files.
- `npmw` - wrapper to use the locally installed npm. JHipster installs Node and npm locally using the build tool by default. This wrapper makes sure npm is installed locally and uses it, avoiding the differences that different versions can cause. By using `./npmw` instead of the traditional `npm` you can configure a Node-less environment to develop or test your application.
- `/src/main/docker` - Docker configurations for the application and the services that the application depends on.
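As an example of the `[pattern] [action]` format described above, a small `.yo-resolve` file could look like the following (the paths shown are only illustrative):

```
# keep local changes to the README when regenerating
README.md skip
# always accept regenerated entity configuration
.jhipster/*.json force
```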
Before you can build this project, you must install and configure the following dependencies on your machine:
- Node.js: We use Node to run a development web server and build the project. Depending on your system, you can install Node either from source or as a pre-packaged bundle.
After installing Node, you should be able to run the following command to install development tools. You will only need to run this command when dependencies change in package.json.
npm install
We use npm scripts and Angular CLI with Webpack as our build system.
Run the following commands in two separate terminals to create a blissful development experience where your browser auto-refreshes when files change on your hard drive.
./mvnw
npm start
Npm is also used to manage CSS and JavaScript dependencies used in this application. You can upgrade dependencies by specifying a newer version in package.json. You can also run `npm update` and `npm install` to manage dependencies.
Add the `help` flag on any command to see how you can use it. For example, `npm help update`.
The `npm run` command will list all of the scripts available to run for this project.
JHipster ships with PWA (Progressive Web App) support, and it's turned off by default. One of the main components of a PWA is a service worker.
The service worker initialization code is disabled by default. To enable it, uncomment the following code in `src/main/webapp/app/app.module.ts`:
ServiceWorkerModule.register('ngsw-worker.js', { enabled: false }),
For example, to add the Leaflet library as a runtime dependency of your application, you would run the following command:
npm install --save --save-exact leaflet
To benefit from TypeScript type definitions from the DefinitelyTyped repository in development, you would run the following command:
npm install --save-dev --save-exact @types/leaflet
Then you would import the JS and CSS files specified in the library's installation instructions so that Webpack knows about them. Edit the `src/main/webapp/app/app.module.ts` file:
import 'leaflet/dist/leaflet.js';
Edit the `src/main/webapp/content/scss/vendor.scss` file:
@import 'leaflet/dist/leaflet.css';
Note: There are still a few other things remaining to do for Leaflet that we won't detail here.
For further instructions on how to develop with JHipster, have a look at Using JHipster in development.
Microservices don't contain every backend feature required to allow microfrontends to run alone. You must start a gateway, either a pre-built version or from source.
Start gateway from source:
cd gateway
npm run docker:db:up # start database if necessary
npm run docker:others:up # start service discovery and authentication service if necessary
npm run app:start # alias for ./(mvnw|gradlew)
The microfrontend's `build-watch` script is configured to watch and compile the microfrontend's sources and synchronize them with the gateway's frontend.
Start it using:
cd microfrontend
npm run docker:db:up # start database if necessary
npm run build-watch
It's possible to run the microfrontend's frontend standalone using:
cd microfrontend
npm run docker:db:up # start database if necessary
npm run watch # alias for `npm start` and `npm run backend:start` in parallel
You can also use Angular CLI to generate some custom client code.
For example, the following command:
ng generate component my-component
will generate a few files:
create src/main/webapp/app/my-component/my-component.component.html
create src/main/webapp/app/my-component/my-component.component.ts
update src/main/webapp/app/app.module.ts
JHipster Control Center can help you manage and control your application(s). You can start a local control center server (accessible on http://localhost:7419) with:
docker compose -f src/main/docker/jhipster-control-center.yml up
Congratulations! You've selected an excellent way to secure your JHipster application. If you're not sure what OAuth and OpenID Connect (OIDC) are, please see What the Heck is OAuth?
To log in to your app, you'll need to have Keycloak up and running. The JHipster Team has created a Docker container for you that has the default users and roles. Start Keycloak using the following command.
docker compose -f src/main/docker/keycloak.yml up
The security settings in `src/main/resources/config/application.yml` are configured for this image.
spring:
  ...
  security:
    oauth2:
      client:
        provider:
          oidc:
            issuer-uri: http://localhost:9080/realms/jhipster
        registration:
          oidc:
            client-id: web_app
            client-secret: web_app
            scope: openid,profile,email
Some of the Keycloak configuration is now done at build time and the rest before running the app; see the list of all build and configuration options in the Keycloak documentation.
Before moving to production, please make sure to follow this guide for better security and performance.
Also, you should never use `start-dev` or `KC_DB=dev-file` in production.
When using Kubernetes, importing should be done using init-containers (with a volume when using `db=dev-file`).
If you'd like to use Okta instead of Keycloak, it's pretty quick using the Okta CLI. After you've installed it, run:
okta register
Then, in your JHipster app's directory, run `okta apps create` and select JHipster. This will set up an Okta app for you, create `ROLE_ADMIN` and `ROLE_USER` groups, create a `.okta.env` file with your Okta settings, and configure a `groups` claim in your ID token.
Run `source .okta.env` and start your app with Maven or Gradle. You should be able to sign in with the credentials you registered with.
If you're on Windows, you should install WSL so the `source` command will work.
If you'd like to configure things manually through the Okta developer console, see the instructions below.
First, you'll need to create a free developer account at https://developer.okta.com/signup/. After doing so, you'll get your own Okta domain that has a name like https://dev-123456.okta.com.
Modify `src/main/resources/config/application.yml` to use your Okta settings.
spring:
  ...
  security:
    oauth2:
      client:
        provider:
          oidc:
            issuer-uri: https://{yourOktaDomain}/oauth2/default
        registration:
          oidc:
            client-id: {clientId}
            client-secret: {clientSecret}
Create an OIDC App in Okta to get a `{clientId}` and `{clientSecret}`. To do this, log in to your Okta Developer account and navigate to Applications > Add Application. Click Web and click the Next button. Give the app a name you'll remember, specify http://localhost:8080 as a Base URI, and http://localhost:8080/login/oauth2/code/oidc as a Login Redirect URI. Click Done, then Edit and add http://localhost:8080 as a Logout redirect URI. Copy and paste the client ID and secret into your `application.yml` file.
Create `ROLE_ADMIN` and `ROLE_USER` groups and add users to them. Modify the e2e tests to use this account when running integration tests. You'll need to change the credentials in `src/test/javascript/e2e/account/account.spec.ts` and `src/test/javascript/e2e/admin/administration.spec.ts`.
Navigate to API > Authorization Servers, click the Authorization Servers tab and edit the default one. Click the Claims tab and Add Claim. Name it "groups", and include it in the ID Token. Set the value type to "Groups" and set the filter to be a Regex of `.*`.
After making these changes, you should be good to go! If you have any issues, please post them to Stack Overflow. Make sure to tag your question with "jhipster" and "okta".
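Whichever provider you use, the roles/groups claim is ultimately translated into Spring Security authorities on the backend. JHipster generates that mapping for you, so the snippet below is only a minimal sketch of the idea; the class and bean names are placeholders, not the generated code.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.core.authority.SimpleGrantedAuthority;
import org.springframework.security.core.authority.mapping.GrantedAuthoritiesMapper;
import org.springframework.security.oauth2.core.oidc.user.OidcUserAuthority;

@Configuration
public class GroupsClaimMapperConfig {

    // Turns the "groups" claim of the ID token (e.g. ROLE_ADMIN, ROLE_USER)
    // into Spring Security authorities.
    @Bean
    public GrantedAuthoritiesMapper userAuthoritiesMapper() {
        return authorities -> {
            Set<GrantedAuthority> mapped = new HashSet<>();
            for (GrantedAuthority authority : authorities) {
                if (authority instanceof OidcUserAuthority) {
                    OidcUserAuthority oidcAuthority = (OidcUserAuthority) authority;
                    List<String> groups = oidcAuthority.getIdToken().getClaimAsStringList("groups");
                    if (groups != null) {
                        groups.forEach(group -> mapped.add(new SimpleGrantedAuthority(group)));
                    }
                }
            }
            return mapped;
        };
    }
}
```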
If you'd like to use Auth0 instead of Keycloak, follow the configuration steps below:
- Create a free developer account at https://auth0.com/signup. After successful sign-up, your account will be associated with a unique domain like `dev-xxx.us.auth0.com`.
- Create a new application of type `Regular Web Applications`. Switch to the `Settings` tab, and configure your application settings like:
  - Allowed Callback URLs: `http://localhost:8080/login/oauth2/code/oidc`
  - Allowed Logout URLs: `http://localhost:8080/`
- Navigate to User Management > Roles and create new roles named `ROLE_ADMIN` and `ROLE_USER`.
- Navigate to User Management > Users and create a new user account. Click on the Role tab to assign roles to the newly created user account.
- Navigate to Auth Pipeline > Rules and create a new Rule. Choose the `Empty rule` template. Provide a meaningful name like `JHipster claims`, replace the `Script` content with the following, and Save.
function (user, context, callback) {
user.preferred_username = user.email;
const roles = (context.authorization || {}).roles;
function prepareCustomClaimKey(claim) {
return `https://www.jhipster.tech/${claim}`;
}
const rolesClaim = prepareCustomClaimKey('roles');
if (context.idToken) {
context.idToken[rolesClaim] = roles;
}
if (context.accessToken) {
context.accessToken[rolesClaim] = roles;
}
callback(null, user, context);
}
- In your JHipster application, modify `src/main/resources/config/application.yml` to use your Auth0 application settings:
spring:
  ...
  security:
    oauth2:
      client:
        provider:
          oidc:
            # make sure to include the ending slash!
            issuer-uri: https://{your-auth0-domain}/
        registration:
          oidc:
            client-id: {clientId}
            client-secret: {clientSecret}
            scope: openid,profile,email
jhipster:
  ...
  security:
    oauth2:
      audience:
        - https://{your-auth0-domain}/api/v2/
OpenAPI-Generator is configured for this application. You can generate API code from the `src/main/resources/swagger/api.yml` definition file by running:
./mvnw generate-sources
Then implement the generated delegate classes with `@Service` classes.
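As a rough sketch of that step: with the delegate pattern, OpenAPI-Generator produces `*ApiDelegate` interfaces, and you provide Spring beans that implement them. The `MetersApiDelegate` interface and `MeterDTO` model below are hypothetical stand-ins for whatever your `api.yml` actually defines.

```java
import org.springframework.http.ResponseEntity;
import org.springframework.stereotype.Service;

// Assumes api.yml defines a "getMeter" operation, from which OpenAPI-Generator
// produces a MetersApiDelegate interface and a MeterDTO model class.
@Service
public class MetersApiDelegateImpl implements MetersApiDelegate {

    @Override
    public ResponseEntity<MeterDTO> getMeter(Long id) {
        // Replace with a real lookup, e.g. a call into your repository or service layer.
        MeterDTO dto = new MeterDTO();
        dto.setId(id);
        return ResponseEntity.ok(dto);
    }
}
```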
To edit the `api.yml` definition file, you can use a tool such as Swagger-Editor. Start a local instance of the swagger-editor using Docker by running: `docker compose -f src/main/docker/swagger-editor.yml up -d`. The editor will then be reachable at http://localhost:7742.
Refer to Doing API-First development for more details.
To build the final jar and optimize the Enemax application for production, run:
./mvnw -Pprod clean verify
This will concatenate and minify the client CSS and JavaScript files. It will also modify `index.html` so it references these new files.
To ensure everything worked, run:
java -jar target/*.jar
Then navigate to http://localhost:8081 in your browser.
Refer to Using JHipster in production for more details.
To package your application as a war in order to deploy it to an application server, run:
./mvnw -Pprod,war clean verify
To launch your application's tests, run:
./mvnw verify
Unit tests are run by Jest. They're located in src/test/javascript/ and can be run with:
npm test
Performance tests are run by Gatling and written in Scala. They're located in src/test/java/gatling/simulations.
You can execute all Gatling tests with
./mvnw gatling:test
For more information, refer to the Running tests page.
Sonar is used to analyse code quality. You can start a local Sonar server (accessible on http://localhost:9001) with:
docker compose -f src/main/docker/sonar.yml up -d
Note: we have turned off the forced authentication redirect for the UI in src/main/docker/sonar.yml to provide an out-of-the-box experience while trying out SonarQube; for real use cases, turn it back on.
You can run a Sonar analysis by using the sonar-scanner or the Maven plugin.
Then, run a Sonar analysis:
./mvnw -Pprod clean verify sonar:sonar -Dsonar.login=admin -Dsonar.password=admin
If you need to re-run the Sonar phase, please be sure to specify at least the `initialize` phase, since Sonar properties are loaded from the sonar-project.properties file.
./mvnw initialize sonar:sonar -Dsonar.login=admin -Dsonar.password=admin
Additionally, instead of passing `sonar.password` and `sonar.login` as CLI arguments, these parameters can be configured in sonar-project.properties, as shown below:
sonar.login=admin
sonar.password=admin
For more information, refer to the Code quality page.
You can use Docker to improve your JHipster development experience. A number of docker-compose configurations are available in the src/main/docker folder to launch required third-party services.
For example, to start an Oracle database in a Docker container, run:
docker compose -f src/main/docker/oracle.yml up -d
To stop it and remove the container, run:
docker compose -f src/main/docker/oracle.yml down
You can also fully dockerize your application and all the services that it depends on. To achieve this, first build a docker image of your app by running:
npm run java:docker
Or build an arm64 Docker image when using an arm64 processor OS, such as macOS with the M1 processor family, by running:
npm run java:docker:arm64
Then run:
docker compose -f src/main/docker/app.yml up -d
When running Docker Desktop on macOS Big Sur or later, consider enabling the experimental "Use the new Virtualization framework" option for better processing performance (disk access performance is worse).
For more information, refer to Using Docker and Docker-Compose. This page also contains information on the docker-compose sub-generator (`jhipster docker-compose`), which is able to generate Docker configurations for one or several JHipster applications.
To configure CI for your project, run the ci-cd sub-generator (`jhipster ci-cd`). This will let you generate configuration files for a number of Continuous Integration systems. Consult the Setting up Continuous Integration page for more information.