Name: puppet-elasticsearch
Owner: CERN Operations
Description: Elasticsearch Puppet module
Forked from: elastic/puppet-elasticsearch
Created: 2016-11-10 13:44:17.0
Updated: 2016-11-10 13:44:19.0
Pushed: 2017-05-22 07:10:25.0
Size: 1662
Language: Ruby
This module sets up Elasticsearch instances with additional resources for plugins, templates, and more.
This module is actively tested against Elasticsearch 2.x and 5.x.
When using repository management, additional module dependencies are required; see the module's metadata for the full list.
Declare the top-level elasticsearch class (managing repositories) and set up an instance:

```puppet
class { 'elasticsearch':
  java_install => true,
  manage_repo  => true,
  repo_version => '5.x',
}

elasticsearch::instance { 'es-01': }
```
Note: Elasticsearch 5.x requires a recent version of the JVM.
If you are on a recent version of your distribution of choice (such as Ubuntu 16.04 or CentOS 7), setting java_install => true
will work out-of-the-box.
If you are on an earlier distribution, you may need to take additional measures to install Java 1.8.
Most top-level parameters in the elasticsearch
class are set to reasonable defaults.
The following are some parameters that may be useful to override:
```puppet
class { 'elasticsearch':
  version => '1.4.2',
}
```
Note: This will only work when using the repository.
By default, the module will not restart Elasticsearch when the configuration file, package, or plugins change. This can be overridden globally with the following option:
```puppet
class { 'elasticsearch':
  restart_on_change => true,
}
```
Or controlled with the more granular options: `restart_config_change`, `restart_package_change`, and `restart_plugin_change`.
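For example (a sketch; the values shown are illustrative), to restart the service only when the configuration file changes, but not on package or plugin changes:

```puppet
class { 'elasticsearch':
  restart_config_change  => true,
  restart_package_change => false,
  restart_plugin_change  => false,
}
```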
To upgrade the package automatically when the repository provides a newer version:

```puppet
class { 'elasticsearch':
  autoupgrade => true,
}
```

To remove Elasticsearch from the system:

```puppet
class { 'elasticsearch':
  ensure => 'absent',
}
```

To install everything but disable the service(s):

```puppet
class { 'elasticsearch':
  status => 'disabled',
}
```
Some resources, such as elasticsearch::template
, require communicating with the Elasticsearch REST API.
By default, these API settings are set to:
```puppet
class { 'elasticsearch':
  api_protocol            => 'http',
  api_host                => 'localhost',
  api_port                => 9200,
  api_timeout             => 10,
  api_basic_auth_username => undef,
  api_basic_auth_password => undef,
  api_ca_file             => undef,
  api_ca_path             => undef,
  validate_tls            => true,
}
```
Each of these can be set at the top-level elasticsearch
class and inherited for each resource or overridden on a per-resource basis.
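To illustrate how these settings fit together (a sketch only, not the module's actual Ruby provider code; `build_api_url` is a hypothetical helper), a resource such as `elasticsearch::template` essentially composes the `api_*` settings into a REST endpoint URL and issues HTTP requests against it:

```python
# Sketch of how api_* settings combine into an Elasticsearch REST endpoint.
# build_api_url is a hypothetical helper, not part of the Puppet module.

def build_api_url(protocol='http', host='localhost', port=9200, path='/'):
    """Compose a REST endpoint URL from api_*-style settings."""
    return f"{protocol}://{host}:{port}{path}"

# A template resource named 'templatename' would be managed via:
url = build_api_url(path='/_template/templatename')
print(url)  # http://localhost:9200/_template/templatename
```

Overriding `api_host` or `api_protocol` on a single resource simply changes the endpoint that resource talks to, without affecting other resources.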
This module supports managing all of its defined types through top-level parameters to better support Hiera and Puppet Enterprise.
For example, to manage an instance and index template directly from the elasticsearch
class:
```puppet
class { 'elasticsearch':
  instances => {
    'es-01' => {
      'config' => {
        'network.host' => '0.0.0.0'
      }
    }
  },
  templates => {
    'logstash' => {
      'content' => {
        'template' => 'logstash-*',
        'settings' => {
          'number_of_replicas' => 0
        }
      }
    }
  }
}
```
This module works with the concept of instances. For the service to start, you must define at least one instance:

```puppet
elasticsearch::instance { 'es-01': }
```
This will set up its own data directory and set the node name to $hostname-$instance_name
Instance-specific options can be given:

```puppet
elasticsearch::instance { 'es-01':
  config        => { },  # Configuration hash
  init_defaults => { },  # Init defaults hash
  datadir       => [ ],  # Data directory
}
```
See Advanced features for more information.
This module can help manage a variety of plugins.
Note that `module_dir` is the directory the plugin installs itself into and must match the directory published by the plugin author; it is not a location of your choosing.
```puppet
elasticsearch::plugin { 'lmenezes/elasticsearch-kopf':
  instances => 'instance_name'
}
```
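For example (the plugin and directory name here are illustrative; check the plugin's own documentation for the directory it publishes), `module_dir` can be set explicitly when the installed directory differs from the resource title:

```puppet
elasticsearch::plugin { 'mobz/elasticsearch-head':
  instances  => 'instance_name',
  module_dir => 'head',
}
```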
```puppet
elasticsearch::plugin { 'jetty':
  url       => 'https://oss-es-plugins.s3.amazonaws.com/elasticsearch-jetty/elasticsearch-jetty-1.2.1.zip',
  instances => 'instance_name'
}
```
You can also use a proxy if required by setting the proxy_host
and proxy_port
options:
```puppet
elasticsearch::plugin { 'lmenezes/elasticsearch-kopf':
  instances  => 'instance_name',
  proxy_host => 'proxy.host.com',
  proxy_port => 3128
}
```
Proxies that require usernames and passwords are similarly supported with the proxy_username
and proxy_password
parameters.
Plugin name formats that are supported include:

- `elasticsearch/plugin/version` (for official Elasticsearch plugins downloaded from download.elastic.co)
- `groupId/artifactId/version` (for community plugins downloaded from Maven Central or OSS Sonatype)
- `username/repository` (for site plugins downloaded from GitHub master)

When you specify a certain plugin version, you can upgrade that plugin by specifying the new version.
```puppet
elasticsearch::plugin { 'elasticsearch/elasticsearch-cloud-aws/2.1.1': }
```
And to upgrade, you would simply change it to:

```puppet
elasticsearch::plugin { 'elasticsearch/elasticsearch-cloud-aws/2.4.1': }
```
Please note that this does not work when you specify 'latest' as a version number.
Commercial Elasticsearch plugins can be referred to by their simple name.
See Plugin installation for more details.
Installs scripts to be used by Elasticsearch. These scripts are shared across all defined instances on the same host.
```puppet
elasticsearch::script { 'myscript':
  ensure => 'present',
  source => 'puppet:///path/to/my/script.groovy'
}
```
Script directories can also be recursively managed for large collections of scripts:
```puppet
elasticsearch::script { 'myscripts_dir':
  ensure  => 'directory',
  source  => 'puppet:///path/to/myscripts_dir',
  recurse => 'remote',
}
```
By default templates use the top-level elasticsearch::api_*
settings to communicate with Elasticsearch.
The following is an example of how to override these settings:
```puppet
elasticsearch::template { 'templatename':
  api_protocol            => 'https',
  api_host                => $::ipaddress,
  api_port                => 9201,
  api_timeout             => 60,
  api_basic_auth_username => 'admin',
  api_basic_auth_password => 'adminpassword',
  api_ca_file             => '/etc/ssl/certs',
  api_ca_path             => '/etc/pki/certs',
  validate_tls            => false,
  source                  => 'puppet:///path/to/template.json',
}
```
This will install and/or replace the template in Elasticsearch:

```puppet
elasticsearch::template { 'templatename':
  source => 'puppet:///path/to/template.json',
}
```
The template may also be defined inline as a hash:

```puppet
elasticsearch::template { 'templatename':
  content => {
    'template' => '*',
    'settings' => {
      'number_of_replicas' => 0
    }
  }
}
```
Plain JSON strings are also supported:

```puppet
elasticsearch::template { 'templatename':
  content => '{"template":"*","settings":{"number_of_replicas":0}}'
}
```
To remove the template:

```puppet
elasticsearch::template { 'templatename':
  ensure => 'absent'
}
```
Pipelines behave similarly to templates in that their contents can be controlled over the Elasticsearch REST API with a custom Puppet resource.
API parameters follow the same rules as templates (those settings can either be
controlled at the top-level in the elasticsearch
class or set per-resource).
This will install and/or replace an ingest pipeline in Elasticsearch (pipeline contents are compared against the current configuration):

```puppet
elasticsearch::pipeline { 'addfoo':
  content => {
    'description' => 'Add the foo field',
    'processors'  => [{
      'set' => {
        'field' => 'foo',
        'value' => 'bar'
      }
    }]
  }
}
```
To remove the pipeline:

```puppet
elasticsearch::pipeline { 'addfoo':
  ensure => 'absent'
}
```
This module includes basic support for ensuring an index is present or absent with optional index settings. API access settings follow the pattern previously mentioned for templates.
At the time of this writing, only index settings are supported.
Note that some settings (such as number_of_shards
) can only be set at index
creation time.
```puppet
elasticsearch::index { 'foo':
  settings => {
    'index' => {
      'number_of_replicas' => 0
    }
  }
}
```
To remove the index:

```puppet
elasticsearch::index { 'foo':
  ensure => 'absent'
}
```
Install a variety of clients/bindings:

```puppet
# Python client library
elasticsearch::python { 'rawes': }

# Ruby client library
elasticsearch::ruby { 'elasticsearch': }
```
This module offers a way to ensure an instance has been started and is up and running before proceeding with a subsequent action. This is done via the `es_instance_conn_validator` resource.
```puppet
es_instance_conn_validator { 'myinstance':
  server => 'es.example.com',
  port   => '9200',
}
```
A common use case, for example:

```puppet
class { 'kibana4':
  require => Es_Instance_Conn_Validator['myinstance'],
}
```
There are two different ways of installing Elasticsearch:
This option allows you to use an existing repository for package installation.
The repo_version
corresponds with the major.minor
version of Elasticsearch for versions before 2.x.
```puppet
class { 'elasticsearch':
  manage_repo  => true,
  repo_version => '1.4',
}
```
For 2.x versions of Elasticsearch, use `repo_version => '2.x'`.
```puppet
class { 'elasticsearch':
  manage_repo  => true,
  repo_version => '2.x',
}
```
For users who may wish to install via a local repository (for example, through a mirror), the repo_baseurl
parameter is available:
```puppet
class { 'elasticsearch':
  manage_repo  => true,
  repo_baseurl => 'https://repo.local/yum'
}
```
When a repository is not available or preferred you can install the packages from a remote source:
```puppet
class { 'elasticsearch':
  package_url => 'https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.4.2.deb',
  proxy_url   => 'http://proxy.example.com:8080/',
}
```
Setting proxy_url
to a location will enable download using the provided proxy
server.
This parameter is also used by elasticsearch::plugin
.
Setting the port in the proxy_url
is mandatory.
proxy_url
defaults to undef
(proxy disabled).
Using a Puppet file source:

```puppet
class { 'elasticsearch':
  package_url => 'puppet:///path/to/elasticsearch-1.4.2.deb'
}
```

Using a local file path:

```puppet
class { 'elasticsearch':
  package_url => 'file:/path/to/elasticsearch-1.4.2.deb'
}
```
Most sites will manage Java separately; however, this module can attempt to install Java as well. This is done by using the puppetlabs-java module.
```puppet
class { 'elasticsearch':
  java_install => true
}
```
Specify a particular Java package/version to be installed:
```puppet
class { 'elasticsearch':
  java_install => true,
  java_package => 'packagename'
}
```
When configuring Elasticsearch's memory usage, you can either change the init defaults for Elasticsearch 1.x/2.x (see the preceding section) or modify it globally in 5.x using `jvm_options`:
```puppet
class { 'elasticsearch':
  jvm_options => [
    '-Xms4g',
    '-Xmx4g'
  ]
}
```
Currently only the basic SysV-style init and Systemd service providers are supported, but other systems could be implemented as necessary (pull requests welcome).
The defaults file (/etc/defaults/elasticsearch
or /etc/sysconfig/elasticsearch
) for the Elasticsearch service can be populated as necessary.
This can either be a static file resource or a simple key value-style hash object, the latter being particularly well-suited to pulling out of a data source such as Hiera.
```puppet
class { 'elasticsearch':
  init_defaults_file => 'puppet:///path/to/defaults'
}
```
```puppet
$config_hash = {
  'ES_HEAP_SIZE' => '30g',
}

class { 'elasticsearch':
  init_defaults => $config_hash
}
```
Note: the `init_defaults` hash can be passed to the main class and to each instance.
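For example (a sketch; the heap size value is illustrative), the same hash shape can be given to a single instance:

```puppet
elasticsearch::instance { 'es-01':
  init_defaults => {
    'ES_HEAP_SIZE' => '4g',
  }
}
```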
X-Pack and Shield file-based users, roles, and certificates can be managed by this module.
Note: If you are planning to use these features, it is highly recommended you read the following documentation to understand the caveats and extent of the resources available to you.
Although this module can handle several types of Shield/X-Pack resources, you are expected to manage the plugin installation and versions for your deployment. For example, the following manifest will install Elasticsearch with a single instance running X-Pack:
```puppet
class { 'elasticsearch':
  java_install    => true,
  manage_repo     => true,
  repo_version    => '5.x',
  security_plugin => 'x-pack',
}

elasticsearch::instance { 'es-01': }

elasticsearch::plugin { 'x-pack': instances => 'es-01' }
```
The following manifest will do the same, but with Shield:
```puppet
class { 'elasticsearch':
  java_install    => true,
  manage_repo     => true,
  repo_version    => '2.x',
  security_plugin => 'shield',
}

elasticsearch::instance { 'es-01': }

Elasticsearch::Plugin { instances => ['es-01'], }
elasticsearch::plugin { 'license': }
elasticsearch::plugin { 'shield': }
```
The following examples assume the preceding resources are part of your Puppet manifest.
Roles in the file realm (the esusers
realm in Shield) can be managed using the elasticsearch::role
type.
For example, to create a role called myrole
, you could use the following resource in X-Pack:
```puppet
elasticsearch::role { 'myrole':
  privileges => {
    'cluster' => [ 'monitor' ],
    'indices' => [{
      'names'      => [ '*' ],
      'privileges' => [ 'read' ],
    }]
  }
}
```
And in Shield:
```puppet
elasticsearch::role { 'myrole':
  privileges => {
    'cluster' => 'monitor',
    'indices' => {
      '*' => 'read'
    }
  }
}
```
This role would grant users access to cluster monitoring and read access to all indices.
See the Shield or X-Pack documentation for your version to determine what `privileges` to use and how to format them (the Puppet hash representation is simply translated into YAML).
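As an illustration (a sketch of the resulting YAML, not an exact dump of the file the provider writes), the X-Pack `myrole` hash above would be rendered into the roles file roughly as:

```yaml
myrole:
  cluster:
    - monitor
  indices:
    - names:
        - '*'
      privileges:
        - read
```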
Note: The Puppet provider for esusers
/users
has fine-grained control over the roles.yml
file and thus will leave the default roles Shield installs in-place.
If you would like to explicitly purge the default roles (leaving only roles managed by puppet), you can do so by including the following in your manifest:
```puppet
resources { 'elasticsearch_role':
  purge => true,
}
```
Associating mappings with a role for file-based management is done by passing an array of strings to the mappings
parameter of the elasticsearch::role
type.
For example, to define a role with mappings:
```puppet
elasticsearch::role { 'logstash':
  mappings   => [
    'cn=group,ou=devteam',
  ],
  privileges => {
    'cluster' => 'manage_index_templates',
    'indices' => [{
      'names'      => ['logstash-*'],
      'privileges' => [
        'write',
        'delete',
        'create_index',
      ],
    }],
  }
}
```
Note: Observe the brackets around `indices` in the preceding role definition, which denote an array of hashes per the format in Shield 2.3.x. Follow the documentation to determine the correct formatting for your version of Shield or X-Pack.
If you'd like to keep the mappings file purged of entries not under Puppet's control, you should use the following resources
declaration because mappings are a separate low-level type:
```puppet
resources { 'elasticsearch_role_mapping':
  purge => true,
}
```
Users can be managed using the elasticsearch::user
type.
For example, to create a user `myuser` with membership in `myrole`:
```puppet
elasticsearch::user { 'myuser':
  password => 'mypassword',
  roles    => ['myrole'],
}
```
The password
parameter will also accept password hashes generated from the esusers
/users
utility and ensure the password is kept in-sync with the Shield users
file for all Elasticsearch instances.
```puppet
elasticsearch::user { 'myuser':
  password => '$2a$10$IZMnq6DF4DtQ9c4sVovgDubCbdeH62XncmcyD1sZ4WClzFuAdqspy',
  roles    => ['myrole'],
}
```
Note: When using the esusers
/users
provider (the default for plaintext passwords), Puppet has no way to determine whether the given password is in-sync with the password hashed by Shield/X-Pack.
In order to work around this, the elasticsearch::user
resource has been designed to accept refresh events in order to update password values.
This is not ideal, but allows you to instruct the resource to change the password when needed.
For example, to update the aforementioned user's password, you could include the following in your manifest:
```puppet
notify { 'update password': } ~>
elasticsearch::user { 'myuser':
  password => 'mynewpassword',
  roles    => ['myrole'],
}
```
SSL/TLS can be enabled by providing an elasticsearch::instance
type with paths to the certificate and private key files, and a password for the keystore.
```puppet
elasticsearch::instance { 'es-01':
  ssl               => true,
  ca_certificate    => '/path/to/ca.pem',
  certificate       => '/path/to/cert.pem',
  private_key       => '/path/to/key.pem',
  keystore_password => 'keystorepassword',
}
```
Note: Setting up a proper CA and certificate infrastructure is outside the scope of this documentation; see the aforementioned Shield or X-Pack guide for more information regarding the generation of these certificate files.
The module will set up a keystore file for the node to use and set the relevant options in elasticsearch.yml
to enable TLS/SSL using the certificates and key provided.
Shield/X-Pack system keys can be passed to the module, where they will be placed into individual instance configuration directories.
This can be set at the elasticsearch
class and inherited across all instances:
```puppet
class { 'elasticsearch':
  system_key => 'puppet:///path/to/key',
}
```
Or set on a per-instance basis:
```puppet
elasticsearch::instance { 'es-01':
  system_key => '/local/path/to/key',
}
```
The module supports pinning the package version to avoid accidental upgrades that are not done by Puppet. To enable this feature:
```puppet
class { 'elasticsearch':
  package_pin => true,
  version     => '1.5.2',
}
```
In this example we pin the package version to 1.5.2.
There are several different ways of setting data directories for Elasticsearch.
In every case the required configuration options are placed in the elasticsearch.yml
file.
By default we use `/usr/share/elasticsearch/data/$instance_name`, which provides a data directory per instance.
```puppet
class { 'elasticsearch':
  datadir => '/var/lib/elasticsearch-data'
}
```
This creates the following for each instance: `/var/lib/elasticsearch-data/$instance_name`.
```puppet
class { 'elasticsearch':
  datadir => [ '/var/lib/es-data1', '/var/lib/es-data2' ]
}
```
This creates the following for each instance: `/var/lib/es-data1/$instance_name` and `/var/lib/es-data2/$instance_name`.
```puppet
class { 'elasticsearch': }

elasticsearch::instance { 'es-01':
  datadir => '/var/lib/es-data-es01'
}
```
This creates the following for this instance: `/var/lib/es-data-es01`.
```puppet
class { 'elasticsearch': }

elasticsearch::instance { 'es-01':
  datadir => ['/var/lib/es-data1-es01', '/var/lib/es-data2-es01']
}
```
This creates the following for this instance: `/var/lib/es-data1-es01` and `/var/lib/es-data2-es01`.
In some cases, you may want to share a top-level data directory among multiple instances.
```puppet
class { 'elasticsearch':
  datadir_instance_directories => false,
  config                       => {
    'node.max_local_storage_nodes' => 2
  }
}

elasticsearch::instance { 'es-01': }
elasticsearch::instance { 'es-02': }
```
This will result in the following directories being created by Elasticsearch at runtime: `/var/lib/elasticsearch/nodes/0` and `/var/lib/elasticsearch/nodes/1`.
See the Elasticsearch documentation for additional information regarding this configuration.
The config
option in both the main class and the instances can be configured to work together.
The options in the instance `config` hash will be merged with those from the main class, overriding any duplicates.
```puppet
class { 'elasticsearch':
  config => { 'cluster.name' => 'clustername' }
}

elasticsearch::instance { 'es-01':
  config => { 'node.name' => 'nodename' }
}

elasticsearch::instance { 'es-02':
  config => { 'node.name' => 'nodename2' }
}
```
This example merges the `cluster.name` setting with each instance's `node.name` option.
When duplicate options are provided, the option in the instance config overrides the ones from the main class.
```puppet
class { 'elasticsearch':
  config => { 'cluster.name' => 'clustername' }
}

elasticsearch::instance { 'es-01':
  config => { 'node.name' => 'nodename', 'cluster.name' => 'otherclustername' }
}

elasticsearch::instance { 'es-02':
  config => { 'node.name' => 'nodename2' }
}
```
This sets the cluster name to `otherclustername` for instance `es-01`, but keeps `clustername` for instance `es-02`.
The `config` hash can be written in two different ways. Instead of writing the full hash representation:
```puppet
class { 'elasticsearch':
  config => {
    'cluster' => {
      'name'    => 'ClusterName',
      'routing' => {
        'allocation' => {
          'awareness' => {
            'attributes' => 'rack'
          }
        }
      }
    }
  }
}
```
You may instead use dotted notation:

```puppet
class { 'elasticsearch':
  config => {
    'cluster' => {
      'name' => 'ClusterName',
      'routing.allocation.awareness.attributes' => 'rack'
    }
  }
}
```
This module is built upon and tested against the versions of Puppet listed in the metadata.json file (i.e. the listed compatible versions on the Puppet Forge).
The module has been tested on:
Other distros that have been reported to work:
Testing on other platforms has been light and cannot be guaranteed.
Please see the CONTRIBUTING.md file for instructions regarding development environments and testing.
Need help? Join us in #elasticsearch on Freenode IRC or on the discussion forum.