Name: puppet-google-container
Owner: Google Cloud Platform
Created: 2017-06-16 23:21:45
Updated: 2018-04-18 08:09:27
Pushed: 2018-04-18 08:09:25
Size: 88
Language: Ruby
This Puppet module manages Google Container Engine resources. You can manage these resources using the standard Puppet DSL and, under the hood, the module will ensure that the state you describe is reflected in the Google Cloud Platform resources.
To install this module on your Puppet Master (or Puppet Client/Agent), use the Puppet module installer:

```sh
puppet module install google-gcontainer
```

Optionally, you can install support for all Google Cloud Platform products at once by installing our "bundle" google-cloud module:

```sh
puppet module install google-cloud
```
All Google Cloud Platform modules use a unified authentication mechanism, provided by the google-gauth module. Don't worry: it is installed automatically when you install this module.
```puppet
gauth_credential { 'mycred':
  path     => $cred_path, # e.g. '/home/nelsonjr/my_account.json'
  provider => serviceaccount,
  scopes   => [
    'https://www.googleapis.com/auth/cloud-platform',
  ],
}
```
Please refer to the google-gauth
module for further requirements, i.e.
required gems.
gcontainer_cluster
tainer_cluster { "mycluster-${cluster_id}":
sure => present,
itial_node_count => 2,
ster_auth => {
username => 'cluster_admin',
password => 'my-secret-password',
de_config => {
machine_type => 'n1-standard-4', # we want a 4-core machine for our cluster
disk_size_gb => 500, # ... and a lot of disk space
ne => 'us-central1-a',
oject => 'google.com:graphite-playground',
edential => 'mycred',
gcontainer_node_pool
A node pool requires a cluster to exist. Please ensure its presence with:

```puppet
gcontainer_cluster { ..... }
```

```puppet
gcontainer_node_pool { 'web-servers':
  ensure             => present,
  initial_node_count => 4,
  cluster            => "mycluster-${cluster_id}",
  zone               => 'us-central1-a',
  project            => 'google.com:graphite-playground',
  credential         => 'mycred',
}
```
gcontainer_kube_config
The ~/.kube/config file is used by the Kubernetes client (kubectl):

```puppet
gcontainer_kube_config { '/home/nelsona/.kube/config':
  ensure     => present,
  context    => "gke-mycluster-${cluster_id}",
  cluster    => "mycluster-${cluster_id}",
  zone       => 'us-central1-a',
  project    => 'google.com:graphite-playground',
  credential => 'mycred',
}
```
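Once the file is written you can point kubectl at the generated context. A quick sanity check (the resolved context name below is hypothetical; substitute your own cluster ID):

```sh
# List the cluster's nodes using the context written by the resource above.
kubectl --kubeconfig /home/nelsona/.kube/config \
        --context gke-mycluster-1234 get nodes
```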
A file named ~/.puppetlabs/etc/puppet/kubernetes.conf is used by the garethr-kubernetes module:

```puppet
gcontainer_kube_config { '/home/nelsona/.puppetlabs/etc/puppet/kubernetes.conf':
  ensure     => present,
  cluster    => "mycluster-${cluster_id}",
  zone       => 'us-central1-a',
  project    => 'google.com:graphite-playground',
  credential => 'mycred',
}
```
gcontainer_cluster:
A Google Container Engine cluster.

gcontainer_node_pool:
NodePool contains the name and configuration for a cluster's node pool. Node pools are a set of nodes (i.e. VMs) with a common configuration and specification, under the control of the cluster master. They may have a set of Kubernetes labels applied to them, which may be used to reference them during pod scheduling. They may also be resized up or down to accommodate the workload.

gcontainer_kube_config:
Generates a compatible Kubernetes '.kube/config' file.

Some fields are output-only. That means you cannot set them, because they are provided by the Google Cloud Platform. They are still useful, however, to verify that the value the API is assigning (or has assigned in the past) is still the value you expect.
For example, in a DNS managed zone the name servers are assigned by the Google Cloud DNS service. Checking these values once the zone is created is useful to make sure your upstream and/or root DNS masters are in sync. Or you may decide to use the object ID, e.g. the VM unique ID, for billing purposes: if the VM gets deleted and recreated it will have a different ID, despite the name being the same. If that detail is important to you, you can verify that the ID of the object did not change by asserting it in the manifest.
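As a minimal sketch of that pattern, asserting an output-only field looks like setting any other property (the endpoint value below is hypothetical, standing in for one recorded from a previous run):

```puppet
# 'endpoint' is output-only: Puppet never tries to set it, but if the
# cluster's actual endpoint no longer matches this value, the run will
# surface the mismatch.
gcontainer_cluster { 'mycluster':
  ensure     => present,
  endpoint   => '104.197.10.20', # hypothetical, previously recorded value
  zone       => 'us-central1-a',
  project    => 'google.com:graphite-playground',
  credential => 'mycred',
}
```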
gcontainer_cluster
A Google Container Engine cluster.
tainer_cluster { "mycluster-${cluster_id}":
sure => present,
itial_node_count => 2,
ster_auth => {
username => 'cluster_admin',
password => 'my-secret-password',
de_config => {
machine_type => 'n1-standard-4', # we want a 4-core machine for our cluster
disk_size_gb => 500, # ... and a lot of disk space
ne => 'us-central1-a',
oject => 'google.com:graphite-playground',
edential => 'mycred',
```puppet
gcontainer_cluster { 'id-of-resource':
  addons_config           => {
    horizontal_pod_autoscaling => {
      disabled => boolean,
    },
    http_load_balancing        => {
      disabled => boolean,
    },
  },
  cluster_ipv4_cidr       => string,
  create_time             => time,
  current_master_version  => string,
  current_node_count      => integer,
  current_node_version    => string,
  description             => string,
  endpoint                => string,
  expire_time             => time,
  initial_cluster_version => string,
  initial_node_count      => integer,
  location                => [
    string,
    ...
  ],
  logging_service         => 'logging.googleapis.com' or 'none',
  master_auth             => {
    client_certificate     => string,
    client_key             => string,
    cluster_ca_certificate => string,
    password               => string,
    username               => string,
  },
  monitoring_service      => 'monitoring.googleapis.com' or 'none',
  name                    => string,
  network                 => string,
  node_config             => {
    disk_size_gb    => integer,
    image_type      => string,
    labels          => namevalues,
    local_ssd_count => integer,
    machine_type    => string,
    metadata        => namevalues,
    oauth_scopes    => [
      string,
      ...
    ],
    preemptible     => boolean,
    service_account => string,
    tags            => [
      string,
      ...
    ],
  },
  node_ipv4_cidr_size     => integer,
  services_ipv4_cidr      => string,
  subnetwork              => string,
  zone                    => string,
  project                 => string,
  credential              => reference to gauth_credential,
}
```
name
The name of this cluster. The name must be unique within this project and zone, and can be up to 40 characters. Must be lowercase letters, numbers, and hyphens only. Must start with a letter. Must end with a number or a letter.
description
An optional description of this cluster.
initial_node_count
Required. The number of nodes to create in this cluster. You must ensure that your Compute Engine resource quota is sufficient for this number of instances. You must also have available firewall and routes quota. For requests, this field should only be used in lieu of a “nodePool” object, since this configuration (along with the “nodeConfig”) will be used to create a “NodePool” object with an auto-generated name. Do not use this and a nodePool at the same time.
node_config
Parameters used in creating the cluster's nodes. For requests, this field should only be used in lieu of a “nodePool” object, since this configuration (along with the “initialNodeCount”) will be used to create a “NodePool” object with an auto-generated name. Do not use this and a nodePool at the same time. For responses, this field will be populated with the node configuration of the first node pool. If unspecified, the defaults are used.
machine_type: The name of a Google Compute Engine machine type (e.g. n1-standard-1). If unspecified, the default machine type is n1-standard-1.
disk_size_gb: Size of the disk attached to each node, specified in GB. The smallest allowed disk size is 10GB. If unspecified, the default disk size is 100GB.
oauth_scopes: The set of Google API scopes to be made available on all of the node VMs under the "default" service account. The following scopes are recommended, but not required, and by default are not included: https://www.googleapis.com/auth/compute is required for mounting persistent storage on your nodes. https://www.googleapis.com/auth/devstorage.read_only is required for communicating with gcr.io (the Google Container Registry). If unspecified, no scopes are added, unless Cloud Logging or Cloud Monitoring are enabled, in which case their required scopes will be added.
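For instance, a sketch granting the two recommended scopes from the list above (the cluster name here is illustrative):

```puppet
gcontainer_cluster { 'scoped-cluster':
  ensure             => present,
  initial_node_count => 2,
  node_config        => {
    oauth_scopes => [
      'https://www.googleapis.com/auth/compute',              # persistent storage
      'https://www.googleapis.com/auth/devstorage.read_only', # pull from gcr.io
    ],
  },
  zone               => 'us-central1-a',
  project            => 'google.com:graphite-playground',
  credential         => 'mycred',
}
```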
service_account: The Google Cloud Platform Service Account to be used by the node VMs. If no Service Account is specified, the "default" service account is used.
metadata: The metadata key/value pairs assigned to instances in the cluster. Keys must conform to the regexp [a-zA-Z0-9-_]+ and be less than 128 bytes in length. These are reflected as part of a URL in the metadata server. Additionally, to avoid ambiguity, keys must not conflict with any other metadata keys for the project or be one of the four reserved keys: "instance-template", "kube-env", "startup-script", and "user-data". Values are free-form strings, and only have meaning as interpreted by the image running in the instance. The only restriction placed on them is that each value's size must be less than or equal to 32 KB. The total size of all keys and values must be less than 512 KB. An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.
image_type: The image type to use for this node. Note that for a given image type, the latest version of it will be used.
labels: The map of Kubernetes labels (key/value pairs) to be applied to each node. These will be added in addition to any default label(s) that Kubernetes may apply to the node. In case of conflict in label keys, the applied set may differ depending on the Kubernetes version; it's best to assume the behavior is undefined and conflicts should be avoided. For more information, including usage and the valid values, see: http://kubernetes.io/v1.1/docs/user-guide/labels.html An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.
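In the Puppet DSL both labels and metadata are ordinary hashes. A small sketch (all keys and values are illustrative):

```puppet
gcontainer_cluster { 'labeled-cluster':
  ensure             => present,
  initial_node_count => 2,
  node_config        => {
    labels   => {
      'environment' => 'staging', # Kubernetes label on every node
      'team'        => 'web',
    },
    metadata => {
      'release-channel' => 'weekly', # free-form, non-reserved metadata key
    },
  },
  zone               => 'us-central1-a',
  project            => 'google.com:graphite-playground',
  credential         => 'mycred',
}
```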
local_ssd_count: The number of local SSD disks to be attached to the node. The limit for this value is dependent upon the maximum number of disks available on a machine per zone. See: https://cloud.google.com/compute/docs/disks/local-ssd#local_ssd_limits for more information.
tags: The list of instance tags applied to all nodes. Tags are used to identify valid sources or targets for network firewalls and are specified by the client during cluster or node pool creation. Each tag within the list must comply with RFC1035.
preemptible: Whether the nodes are created as preemptible VM instances. See: https://cloud.google.com/compute/docs/instances/preemptible for more information about preemptible VM instances.
master_auth
The authentication information for accessing the master endpoint.
username: The username to use for HTTP basic authentication to the master endpoint.
password: The password to use for HTTP basic authentication to the master endpoint. Because the master endpoint is open to the Internet, you should create a strong password.
cluster_ca_certificate: Output only. Base64-encoded public certificate that is the root of trust for the cluster.
client_certificate: Output only. Base64-encoded public certificate used by clients to authenticate to the cluster endpoint.
client_key: Output only. Base64-encoded private key used by clients to authenticate to the cluster endpoint.
logging_service
The logging service the cluster should use to write logs. Currently available options: logging.googleapis.com (the Google Cloud Logging service) and none (no logs will be exported from the cluster). If left as an empty string, logging.googleapis.com will be used.
monitoring_service
The monitoring service the cluster should use to write metrics. Currently available options: monitoring.googleapis.com (the Google Cloud Monitoring service) and none (no metrics will be exported from the cluster). If left as an empty string, monitoring.googleapis.com will be used.
network
The name of the Google Compute Engine network to which the cluster is connected. If left unspecified, the default network will be used. To ensure it exists and is operational, configure the network using the 'gcompute_network' resource.
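For instance, a sketch assuming the companion google-gcompute module is installed (the network name is illustrative):

```puppet
# Ensure the network exists before the cluster that uses it.
gcompute_network { 'my-cluster-network':
  ensure     => present,
  project    => 'google.com:graphite-playground',
  credential => 'mycred',
}
```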
cluster_ipv4_cidr
The IP address range of the container pods in this cluster, in CIDR notation (e.g. 10.96.0.0/14). Leave blank to have one automatically chosen or specify a /14 block in 10.0.0.0/8.
addons_config
Configurations for the various addons available to run in the cluster.
http_load_balancing: Configuration for the HTTP (L7) load balancing controller addon, which makes it easy to set up HTTP load balancers for services in a cluster.
http_load_balancing.disabled: Whether the HTTP Load Balancing controller is enabled in the cluster. When enabled, it runs a small pod in the cluster that manages the load balancers.
horizontal_pod_autoscaling: Configuration for the horizontal pod autoscaling feature, which increases or decreases the number of replica pods a replication controller has based on the resource usage of the existing pods.
horizontal_pod_autoscaling.disabled: Whether the Horizontal Pod Autoscaling feature is enabled in the cluster. When enabled, it ensures that a Heapster pod is running in the cluster, which is also used by the Cloud Monitoring service.
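A sketch of toggling both addons; note the double negative, since disabled => false means the addon is on (the cluster name is illustrative):

```puppet
gcontainer_cluster { 'addons-cluster':
  ensure             => present,
  initial_node_count => 2,
  addons_config      => {
    http_load_balancing        => {
      disabled => true,  # turn the HTTP (L7) load balancing addon off
    },
    horizontal_pod_autoscaling => {
      disabled => false, # keep horizontal pod autoscaling on
    },
  },
  zone               => 'us-central1-a',
  project            => 'google.com:graphite-playground',
  credential         => 'mycred',
}
```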
subnetwork
The name of the Google Compute Engine subnetwork to which the cluster is connected.
location
The list of Google Compute Engine locations in which the cluster's nodes should be located.
zone
Required. The zone where the cluster is deployed.
endpoint
: Output only.
The IP address of this cluster's master endpoint.
The endpoint can be accessed from the internet at
https://username:password@endpoint/
See the masterAuth property of this resource for username and password
information.
initial_cluster_version
: Output only.
The software version of the master endpoint and kubelets used in the
cluster when it was first created. The version can be upgraded over
time.
current_master_version
: Output only.
The current software version of the master endpoint.
current_node_version
: Output only.
The current version of the node software components. If they are
currently at multiple versions because they're in the process of being
upgraded, this reflects the minimum version of all nodes.
create_time
: Output only.
The time the cluster was created, in RFC3339 text format.
node_ipv4_cidr_size
: Output only.
The size of the address space on each node for hosting containers.
This is provisioned from within the container_ipv4_cidr range.
services_ipv4_cidr
: Output only.
The IP address range of the Kubernetes services in this cluster, in
CIDR notation (e.g. 1.2.3.4/29). Service addresses are typically put
in the last /16 from the container CIDR.
current_node_count
: Output only.
The number of nodes currently in the cluster.
expire_time
: Output only.
The time the cluster will be automatically deleted in RFC3339 text
format.
gcontainer_node_pool
NodePool contains the name and configuration for a cluster's node pool. Node pools are a set of nodes (i.e. VMs) with a common configuration and specification, under the control of the cluster master. They may have a set of Kubernetes labels applied to them, which may be used to reference them during pod scheduling. They may also be resized up or down to accommodate the workload.
A node pool requires a cluster to exist. Please ensure its presence with:

```puppet
gcontainer_cluster { ..... }
```

```puppet
gcontainer_node_pool { 'web-servers':
  ensure             => present,
  initial_node_count => 4,
  cluster            => "mycluster-${cluster_id}",
  zone               => 'us-central1-a',
  project            => 'google.com:graphite-playground',
  credential         => 'mycred',
}
```
```puppet
gcontainer_node_pool { 'id-of-resource':
  autoscaling        => {
    enabled        => boolean,
    max_node_count => integer,
    min_node_count => integer,
  },
  cluster            => reference to gcontainer_cluster,
  config             => {
    disk_size_gb    => integer,
    image_type      => string,
    labels          => namevalues,
    local_ssd_count => integer,
    machine_type    => string,
    metadata        => namevalues,
    oauth_scopes    => [
      string,
      ...
    ],
    preemptible     => boolean,
    service_account => string,
    tags            => [
      string,
      ...
    ],
  },
  initial_node_count => integer,
  management         => {
    auto_repair     => boolean,
    auto_upgrade    => boolean,
    upgrade_options => {
      auto_upgrade_start_time => time,
      description             => string,
    },
  },
  name               => string,
  version            => string,
  zone               => string,
  project            => string,
  credential         => reference to gauth_credential,
}
```
name
The name of the node pool.
config
The node configuration of the pool.
machine_type: The name of a Google Compute Engine machine type (e.g. n1-standard-1). If unspecified, the default machine type is n1-standard-1.
disk_size_gb: Size of the disk attached to each node, specified in GB. The smallest allowed disk size is 10GB. If unspecified, the default disk size is 100GB.
oauth_scopes: The set of Google API scopes to be made available on all of the node VMs under the "default" service account. The following scopes are recommended, but not required, and by default are not included: https://www.googleapis.com/auth/compute is required for mounting persistent storage on your nodes. https://www.googleapis.com/auth/devstorage.read_only is required for communicating with gcr.io (the Google Container Registry). If unspecified, no scopes are added, unless Cloud Logging or Cloud Monitoring are enabled, in which case their required scopes will be added.
service_account: The Google Cloud Platform Service Account to be used by the node VMs. If no Service Account is specified, the "default" service account is used.
metadata: The metadata key/value pairs assigned to instances in the cluster. Keys must conform to the regexp [a-zA-Z0-9-_]+ and be less than 128 bytes in length. These are reflected as part of a URL in the metadata server. Additionally, to avoid ambiguity, keys must not conflict with any other metadata keys for the project or be one of the four reserved keys: "instance-template", "kube-env", "startup-script", and "user-data". Values are free-form strings, and only have meaning as interpreted by the image running in the instance. The only restriction placed on them is that each value's size must be less than or equal to 32 KB. The total size of all keys and values must be less than 512 KB. An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.
image_type: The image type to use for this node. Note that for a given image type, the latest version of it will be used.
labels: The map of Kubernetes labels (key/value pairs) to be applied to each node. These will be added in addition to any default label(s) that Kubernetes may apply to the node. In case of conflict in label keys, the applied set may differ depending on the Kubernetes version; it's best to assume the behavior is undefined and conflicts should be avoided. For more information, including usage and the valid values, see: http://kubernetes.io/v1.1/docs/user-guide/labels.html An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.
local_ssd_count: The number of local SSD disks to be attached to the node. The limit for this value is dependent upon the maximum number of disks available on a machine per zone. See: https://cloud.google.com/compute/docs/disks/local-ssd#local_ssd_limits for more information.
tags: The list of instance tags applied to all nodes. Tags are used to identify valid sources or targets for network firewalls and are specified by the client during cluster or node pool creation. Each tag within the list must comply with RFC1035.
preemptible: Whether the nodes are created as preemptible VM instances. See: https://cloud.google.com/compute/docs/instances/preemptible for more information about preemptible VM instances.
initial_node_count
Required. The initial node count for the pool. You must ensure that your Compute Engine resource quota is sufficient for this number of instances. You must also have available firewall and routes quota.
autoscaling
Autoscaler configuration for this NodePool. Autoscaler is enabled only if a valid configuration is present.
enabled: Whether autoscaling is enabled for this node pool.
min_node_count: Minimum number of nodes in the NodePool. Must be >= 1 and <= max_node_count.
max_node_count: Maximum number of nodes in the NodePool. Must be >= min_node_count. There has to be enough quota to scale up the cluster.
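A sketch of an autoscaled pool (pool name and bounds are illustrative):

```puppet
gcontainer_node_pool { 'autoscaled-pool':
  ensure             => present,
  initial_node_count => 1,
  autoscaling        => {
    enabled        => true,
    min_node_count => 1,
    max_node_count => 5, # make sure quota allows scaling this far
  },
  cluster            => "mycluster-${cluster_id}",
  zone               => 'us-central1-a',
  project            => 'google.com:graphite-playground',
  credential         => 'mycred',
}
```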
management
Management configuration for this NodePool.
auto_upgrade: A flag that specifies whether node auto-upgrade is enabled for the node pool. If enabled, node auto-upgrade helps keep the nodes in your node pool up to date with the latest release version of Kubernetes.
auto_repair: A flag that specifies whether node auto-repair is enabled for the node pool. If enabled, the nodes in this node pool will be monitored and, if they fail health checks too many times, an automatic repair action will be triggered.
upgrade_options: Specifies the Auto Upgrade knobs for the node pool.
upgrade_options.auto_upgrade_start_time: Output only. This field is set when upgrades are about to commence, with the approximate start time for the upgrades, in RFC3339 text format.
upgrade_options.description: Output only. This field is set when upgrades are about to commence, with the description of the upgrade.
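And a sketch enabling both repair and upgrade automation (the pool name is illustrative):

```puppet
gcontainer_node_pool { 'managed-pool':
  ensure             => present,
  initial_node_count => 3,
  management         => {
    auto_repair  => true, # replace nodes that repeatedly fail health checks
    auto_upgrade => true, # track the latest Kubernetes release
  },
  cluster            => "mycluster-${cluster_id}",
  zone               => 'us-central1-a',
  project            => 'google.com:graphite-playground',
  credential         => 'mycred',
}
```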
cluster
Required. A reference to the Cluster resource.
zone
Required. The zone where the node pool is deployed.
version
: Output only.
The version of Kubernetes running on the nodes in this pool.

gcontainer_kube_config
Generates a compatible Kubernetes '.kube/config' file.
The ~/.kube/config file is used by the Kubernetes client (kubectl):

```puppet
gcontainer_kube_config { '/home/nelsona/.kube/config':
  ensure     => present,
  context    => "gke-mycluster-${cluster_id}",
  cluster    => "mycluster-${cluster_id}",
  zone       => 'us-central1-a',
  project    => 'google.com:graphite-playground',
  credential => 'mycred',
}
```
A file named ~/.puppetlabs/etc/puppet/kubernetes.conf is used by the garethr-kubernetes module:

```puppet
gcontainer_kube_config { '/home/nelsona/.puppetlabs/etc/puppet/kubernetes.conf':
  ensure     => present,
  cluster    => "mycluster-${cluster_id}",
  zone       => 'us-central1-a',
  project    => 'google.com:graphite-playground',
  credential => 'mycred',
}
```
```puppet
gcontainer_kube_config { 'id-of-resource':
  cluster    => reference to gcontainer_cluster,
  context    => string,
  name       => string,
  zone       => string,
  project    => string,
  credential => reference to gauth_credential,
}
```
name
Required. The config file to which the kubectl settings will be written.
cluster
Required. A reference to the Cluster resource.
zone
Required. The zone where the container is deployed.
context
Required. The name of the context. Defaults to cluster name.
tasks/resize.rb
Resizes a container cluster node pool.
This task takes its inputs as JSON from standard input.

name: The name of the node pool to resize.
cluster: The name of the cluster that hosts the node pool.
size: The new size of the node pool (in nodes).
zone: The zone that hosts the cluster.
project: The project name where the cluster is hosted.
credential: Path to a service account credentials file.
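As a usage sketch, assuming Puppet Bolt is available and the module is installed under the gcontainer name (all parameter values below are illustrative):

```sh
# Run the resize task locally; Bolt passes the parameters to the task
# as JSON on standard input.
bolt task run gcontainer::resize \
  name=web-servers cluster=mycluster-1234 size=6 \
  zone=us-central1-a project=google.com:graphite-playground \
  credential=/home/nelsona/my_account.json \
  --nodes localhost
```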
This module has been tested on:
Testing on other platforms has been minimal and cannot be guaranteed.
Some files in this package are automatically generated by Magic Modules.
We use a code compiler to produce this module in order to avoid repetitive tasks and improve code quality. This means all Google Cloud Platform Puppet modules use the same underlying authentication, logic, test generation, style checks, etc.
Learn how to change autogenerated files by reading the CONTRIBUTING.md file.
Contributions to this library are always welcome and highly encouraged.
See CONTRIBUTING.md for more information on how to get started.
This project contains tests for rspec, rspec-puppet and rubocop to verify functionality. For detailed information on using these tools, please see their respective documentation.
```sh
gem install bundler
bundle install
bundle exec rspec
bundle exec rubocop
```
In case you need to debug tests in this module you can set the following variables to increase verbose output:
| Variable | Side Effect |
|---|---|
| PUPPET_HTTP_VERBOSE=1 | Prints network access information by the Puppet provider. |
| PUPPET_HTTP_DEBUG=1 | Prints the payload of network calls being made. |
| GOOGLE_HTTP_VERBOSE=1 | Prints debug information related to the network calls being made. |
| GOOGLE_HTTP_DEBUG=1 | Prints the payload of network calls being made. |
During test runs (using rspec) you can also set:
| Variable | Side Effect |
|---|---|
| RSPEC_DEBUG=1 | Prints debug information related to the tests being run. |
| RSPEC_HTTP_VERBOSE=1 | Prints network expectations and access. |