A time measurement script for Elasticsearch analyzers written in Python.
Elasticsearch analyzers convert text into tokens that are added to the inverted index for searching. An analyzer consists of three building blocks: character filters, a tokenizer, and token filters. The analyzer applies them in that order: it first transforms the input stream (character filters), then splits the stream into tokens (the tokenizer), and finally performs further operations on the tokens (token filters).
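To illustrate the three stages, here is a plain-Python sketch (not the script's or Elasticsearch's actual implementation; the `html_strip`-like regex and the lowercase/unique filters are just example choices):

```python
import re

def char_filter(text):
    # Character filter: operates on the raw input stream,
    # e.g. stripping HTML tags (similar to Elasticsearch's html_strip).
    return re.sub(r"<[^>]+>", "", text)

def tokenize(text):
    # Tokenizer: splits the filtered stream into tokens.
    return text.split()

def token_filters(tokens):
    # Token filters: operate on individual tokens,
    # e.g. lowercase each token, then drop duplicates.
    seen, out = set(), []
    for token in (t.lower() for t in tokens):
        if token not in seen:
            seen.add(token)
            out.append(token)
    return out

def analyze(text):
    # The analyzer applies the three stages in order.
    return token_filters(tokenize(char_filter(text)))

print(analyze("<b>Hello</b> hello World"))  # ['hello', 'world']
```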
This Python script sends search queries to the Elasticsearch search API and collects the time needed to execute them. Before each query is executed, the script clears the cache.
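The request pattern could look roughly like this (the endpoint paths are the Elasticsearch 5.x clear-cache and search REST endpoints; the helper names are illustrative, not taken from the script):

```python
import time

def cache_clear_url(host, port, index, protocol="http"):
    # Elasticsearch 5.x clear-cache endpoint for a single index.
    return f"{protocol}://{host}:{port}/{index}/_cache/clear"

def search_url(host, port, index, protocol="http"):
    # Elasticsearch search endpoint for a single index.
    return f"{protocol}://{host}:{port}/{index}/_search"

def timed_ms(func, *args):
    # Wall-clock duration of one call, in milliseconds.
    start = time.perf_counter()
    func(*args)
    return (time.perf_counter() - start) * 1000

print(cache_clear_url("localhost", 9200, "index_a"))
# http://localhost:9200/index_a/_cache/clear
```

In a real run, a `requests.post` against the clear-cache URL would precede each timed request against the search URL.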
There are a few things you need before you can start. Please make sure you have installed and properly configured the following software:
- Elasticsearch 5.4.3
- Python 3
- The Python Requests library for executing HTTP requests:

  ```
  pip3 install requests
  ```
- The Python Click library for the CLI:

  ```
  pip3 install click
  ```
Not required but helpful:
- Elasticdump to easily dump documents into a test Elasticsearch database
- Cerebro to create Elasticsearch index templates
```
git clone https://github.com/unidario/elasticsearch-analyzer-bench.git
cd elasticsearch-analyzer-bench
pip install -r requirements.txt
```
Create a `.txt` file with example queries in this directory.
Each query must be written on its own line inside the file.
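A queries file in this format could be parsed like this (a sketch; the function name is illustrative, not the script's actual code):

```python
import os
import tempfile

def load_queries(path):
    # One query per line; blank lines and surrounding whitespace are ignored.
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

# Demo: write a tiny queries file and read it back.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("title:elasticsearch\n\nbody:benchmark\n")
    demo_path = f.name
print(load_queries(demo_path))  # ['title:elasticsearch', 'body:benchmark']
os.unlink(demo_path)
```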
Start the script by providing the path to the queries file and at least one index:

```
python3 analyzer_bench.py $PATH_TO_QUERY_TXT_FILE -i $index -i $index2
```
See the table below for all available options.
Option | Description | Required | Default |
---|---|---|---|
`--help` | Show help message and exit | | |
`-i`, `--index` | One Elasticsearch index to test (can be used multiple times) | yes | |
`--protocol` | Hypertext transfer protocol (either `http` or `https`) | no | `http` |
`--url` | The URL of Elasticsearch | no | `localhost` |
`--port` | The port number of Elasticsearch | no | `9200` |
`-r`, `--runs` | The number of executions per query | no | `1` |
The script prints its output to the terminal.
The output consists of four elements:
- Indices that could not be tested because they do not exist in the database (only shown when applicable).
- Output of all analyzers for each index.
- Stats for the executed queries per index.
- Timing stats, document count, and size for each index.
Sample Output:
Indices for which no calculation can be done because they don't exist:
index_x, index_y
Analyzers:

| Index | Tokenizer | Token Filters | Char Filters |
|---|---|---|---|
| index_a | standard | standard, lowercase | |
| index_b | pattern | lowercase, unique | html_strip, mapping |
Query stats:

| Index | Queries | Repetitions | Total | Successful | Failed | Success rate |
|---|---|---|---|---|---|---|
| index_a | 6 | 70 | 420 | 420 | 0 | 100.0 % |
| index_b | 6 | 70 | 420 | 350 | 70 | 83.3333333333333 % |
Speed:

| Index | Docs | Size [GB] | Average speed [ms] |
|---|---|---|---|
| index_a | 193438 | 0.238148 | 257.3333333333333 |
| index_b | 386876 | 0.423296 | 312.42 |
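The derived columns in the sample above (total executions, success rate, average speed) could be computed along these lines (a sketch under assumed semantics, not the script's exact code):

```python
def query_stats(num_queries, repetitions, failed):
    # Total executions = queries x repetitions; rate as a percentage
    # of successful executions.
    total = num_queries * repetitions
    successful = total - failed
    return total, successful, successful / total * 100

def average_speed(timings_ms):
    # Mean execution time in milliseconds over the recorded runs.
    return sum(timings_ms) / len(timings_ms)

total, successful, rate = query_stats(6, 70, 70)
print(total, successful, round(rate, 2))  # 420 350 83.33
```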
The unit tests are located in the file unit_test.py.
Run them with:

```
python3 unit_test.py
```
- Dario Segger - Initial work
This project is licensed under the MIT License - see the LICENSE file for details.