Elasticsearch Analyzer Bench

A time measurement script for Elasticsearch analyzers written in Python.

Use cases

Elasticsearch analyzers convert text into tokens, which are added to the inverted index for searching. An analyzer consists of three blocks: character filters, a tokenizer, and token filters. They run in that order: the character filters first transform the input stream, the tokenizer then splits the stream into tokens, and finally the token filters perform further operations on those tokens.
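As a sketch, the three blocks map onto an Elasticsearch index-settings payload like the following. The analyzer name `bench_analyzer` is purely illustrative and not taken from this repository; the filter and tokenizer names are standard Elasticsearch built-ins.

```python
# Sketch of a custom Elasticsearch analyzer definition, illustrating the
# three blocks described above. "bench_analyzer" is a made-up name.
analyzer_settings = {
    "settings": {
        "analysis": {
            "analyzer": {
                "bench_analyzer": {
                    "type": "custom",
                    # 1. Character filters run first, on the raw input stream.
                    "char_filter": ["html_strip"],
                    # 2. The tokenizer then splits the stream into tokens.
                    "tokenizer": "standard",
                    # 3. Token filters finally transform the tokens.
                    "filter": ["lowercase", "unique"],
                }
            }
        }
    }
}
```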

This Python script sends search queries to the Elasticsearch search API and records the time each query takes. Before each query is executed, the script clears the cache so that caching does not skew the measurements.
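A minimal sketch of that measurement loop, using only the standard library (the script itself may use a client library instead). The endpoint paths `_cache/clear` and `_search` are standard Elasticsearch REST endpoints; the function names here are illustrative, not taken from the script.

```python
# Sketch: clear an index's cache, then time one search request in milliseconds.
import time
import urllib.parse
import urllib.request


def es_url(protocol: str, host: str, port: int, index: str, endpoint: str) -> str:
    """Build an Elasticsearch REST URL, e.g. http://localhost:9200/my_index/_search."""
    return f"{protocol}://{host}:{port}/{index}/{endpoint}"


def timed_search(host: str, index: str, query: str,
                 protocol: str = "http", port: int = 9200) -> float:
    """Clear the index cache, run one search, and return the elapsed time in ms."""
    clear = es_url(protocol, host, port, index, "_cache/clear")
    urllib.request.urlopen(urllib.request.Request(clear, method="POST")).close()

    search = es_url(protocol, host, port, index, "_search")
    search += "?" + urllib.parse.urlencode({"q": query})
    start = time.perf_counter()
    urllib.request.urlopen(search).close()
    return (time.perf_counter() - start) * 1000.0
```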

Prerequisites

There are a few things you need before you can start working. Please make sure you've installed and properly configured the following software:

  • Python 3 (including pip)
  • A running Elasticsearch instance

Not required but helpful:

  • Elasticdump to dump docs easily into a testing Elasticsearch database
  • Cerebro to create Elasticsearch index templates

Getting Started

git clone https://github.com/unidario/elasticsearch-analyzer-bench.git
cd elasticsearch-analyzer-bench
pip install -r requirements.txt

Create a txt file with example queries in this directory.
Each query must be written on its own line inside the txt file.
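For example, a hypothetical queries.txt (contents purely illustrative) could look like this, with one query string per line:

```
quick brown fox
title:benchmark
"analyzer performance"
```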

Start the script by providing the path to the queries txt file and at least one index.

python3 analyzer_bench.py $PATH_TO_QUERY_TXT_FILE -i $index -i $index2

See the configuration for all options.

Configuration

|   Option    |                        Description                          | Required |  Default  |
| --help      | Show help message and exit                                  |    no    |           |
| -i, --index | An Elasticsearch index to test (can be used multiple times) |   yes    |           |
| --protocol  | Transfer protocol (either http or https)                    |    no    |   http    |
| --url       | The URL of Elasticsearch                                    |    no    | localhost |
| --port      | The port number of Elasticsearch                            |    no    |   9200    |
| -r, --runs  | The number of executions per query                          |    no    |     1     |

Output

The script prints its output to the terminal.

The output consists of four elements:

  1. Indices that could not be tested because they don't exist in the database (only shown when applicable).
  2. The analyzers of each index.
  3. Stats for the executed queries per index.
  4. Stats for the timing, document count, and size of each index.
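The per-index statistics can be reproduced with simple arithmetic. The following is a sketch under the assumption that the totals work as shown in the sample output below; function names are illustrative, not taken from the script.

```python
# Sketch: derive the "Query stats" and "Speed" columns from raw counts/timings.
def query_stats(queries: int, repetitions: int, failed: int) -> dict:
    """Total runs and success rate, as in the 'Query stats' table."""
    total = queries * repetitions          # e.g. 6 queries x 70 repetitions = 420
    successful = total - failed
    return {
        "total": total,
        "successful": successful,
        "success_rate_pct": 100.0 * successful / total,
    }


def average_speed_ms(timings_ms: list) -> float:
    """Average query time over all measured runs, in milliseconds."""
    return sum(timings_ms) / len(timings_ms)
```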

Sample Output:

Indices for which no calculation can be done because they don't exist:
index_x, index_y

Analyzers:
|  Index  | Tokenizer |    Token Filters    |    Char Filters     |
| index_a | standard  | standard, lowercase |                     |
| index_b |  pattern  |  lowercase, unique  | html_strip, mapping |

Query stats:
|  Index  | Queries | Repetitions | Total | Successful | Failed |   Success rate    |
| index_a |    6    |     70      |  420  |    420     |   0    |      100.0 %      |
| index_b |    6    |     70      |  420  |    350     |   70   | 83.333333333333 % |

Speed:
|  Index  |  Docs  | Size [GB] | Average speed [ms] |
| index_a | 193438 | 0.238148  | 257.3333333333333  |
| index_b | 386876 | 0.423296  |       312.42       |

Unittests

The unit tests are located in the file unit_test.py.

Run the tests:

python3 unit_test.py

Authors

  • Dario Segger - Initial work

License

This project is licensed under the MIT License - see the LICENSE file for details.
