
The ELK stack powered by Docker – Updated!

By deviantony

Hola,

In a previous post, I’ve introduced the ELK stack powered by Docker & Fig (see the ELK stack powered by Docker).

I’ve recently decided to update the project to replace Fig with Docker Compose and to replace all my custom images with the latest official images!

It is now based on the latest official Elasticsearch, Logstash and Kibana images available on Docker Hub.

01/11/2015: Project updated!

As the project is based on the latest Docker image versions, that means Elasticsearch 2.x, Logstash 2.x and Kibana 4.2.x! Feel free to discover the new features of these releases (have a look here: https://www.elastic.co/blog/release-we-have).

Note: For the nostalgic folks, you can still access the 1.x version (Elasticsearch 1.x, Logstash 1.x and Kibana 4.1.x) on the 1.x branch! Here it is: https://github.com/deviantony/docker-elk/tree/1.x

Usage

Pre-requisites

You’ll need Docker and Docker Compose.

The following installation procedures have been tested on Ubuntu 14.04.

Docker installation

Use the following command to install Docker:

$ curl -sSL https://get.docker.com/ubuntu/ | sudo sh

Docker Compose installation

Follow the procedure available at https://docs.docker.com/compose/install/ to install the latest version of Docker Compose.

Use the stack

First, you’ll need to checkout the git repository:

$ git clone https://github.com/deviantony/docker-elk.git

By default, the stack ships with a simple Logstash configuration that listens for TCP input on port 5000.
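
For reference, a minimal logstash.conf doing exactly that could look like the sketch below; the elasticsearch hostname is an assumption based on the Compose service name, not necessarily the exact file shipped in the repository:

input {
  tcp {
    port => 5000
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}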

Then start the stack using Compose:

$ cd docker-elk
$ docker-compose up

Compose will start a container for each service of the ELK stack and output their logs.
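
To give an idea of what Compose is driving, a docker-compose.yml for such a stack could look roughly like this (image tags, volume paths and port mappings are assumptions; refer to the repository for the real file):

elasticsearch:
  image: elasticsearch:latest
  ports:
    - "9200:9200"

logstash:
  image: logstash:latest
  command: logstash -f /etc/logstash/conf.d/logstash.conf
  volumes:
    - ./logstash/config:/etc/logstash/conf.d
  ports:
    - "5000:5000"
  links:
    - elasticsearch

kibana:
  image: kibana:latest
  ports:
    - "5601:5601"
  links:
    - elasticsearch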

If you’re still using the default input configuration for Logstash, you can inject some data into Elasticsearch from a file:

$ nc localhost 5000 < /some/log/file.log

Then you can check the results in Kibana by hitting the following URL in your browser: http://localhost:5601

Enjoy 🙂


Scaffold your Puppet modules!

By deviantony

Hej,

I’ve recently worked with a project scaffolding tool called Yeoman (see Yeoman homepage).

My first experiments were with an AngularJS generator used to bootstrap an AngularJS web application, and I must say the tool does its job well.

Right after that, I began to think about how I could use this bootstrapping tool in my company: we write a lot of Puppet modules (more than 80 at the time of writing).

To ease the pain of module creation and introduce standardisation right from the start, I’ve created a Puppet module generator.

You’ll need Node.js and npm in order to use it, and I recommend using Node Version Manager (nvm).
For example, here is how to install nvm and a recent version of Node.js:

$ curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.25.4/install.sh | bash
$ nvm install 0.12.7

That’s it, you’ve installed Node.js v0.12.7. Now let’s use it and install the Puppet module generator!

$ nvm use 0.12.7
$ npm install -g generator-puppet-module

The generator is now installed. All you need to do is to use it:

$ yo puppet-module myModuleName

Note: you’ll need to redo the nvm use step if you’ve left your previous shell session.

It will prompt you for an author name and a company name (these parameters are remembered for the next execution) and then set up everything required for a proper Puppet module in your current directory!

Have fun with it, and do not hesitate to open issues/pull requests on the project: https://github.com/deviantony/yeoman-puppet


Puppet and Foreman: infrastructure as Legos

By deviantony

Ahoj,

It’s been a while since I posted my last article. I began writing this one six months ago, when I wanted to prepare a simple introduction to Puppet and Foreman for all the curious people at my workplace.

We’ve been very busy with other subjects since then and I gave up on it for a while… but I’ve recently decided to finish it, because I hate things left undone!

The idea is to give an introduction to Puppet and Foreman (and especially Foreman and how to manage a set of servers with it) using a simple analogy: you can build your servers as if you were building models with Legos.

Everybody has played with Legos, right? Time to play again!

Infrastructure as legos

First, let’s define what we are trying to achieve here.

We need to manage our server infrastructure. In our context, we’ll compare a server (either physical or virtual) with a Lego model, such as this car below.

[Image: lego-blocks-node]

What we want is:

  • Produce as many of these models as we need
  • Customize these models' attributes (colours, number of wheels…)

To sum up, we want to provision and configure our servers.

Puppet

Let’s first recap what Puppet is. As stated on their website:

Puppet is a configuration management system that allows you to define the state of your IT infrastructure, then automatically enforces the correct state.

It is a tool that will help you automate your infrastructure management and sysadmin tasks.

In order to start with the Lego comparison, Puppet is the open-source Lego company, providing schematics to create your own block blueprints. It also provides you with a workshop where you can share and retrieve blueprints created by other people: the Puppet forge.

The Foreman

The Foreman is a tool that can manage your servers lifecycle from creation, configuration, to destruction. In this analogy, it will be used in two different ways:

  1. Our blueprints catalog, as the Puppet ENC (what is an ENC?)
  2. Our model factory, as our provisioning system

Puppet module

A Puppet class is like a single Lego block: it should do one thing and do it well! Blocks can come in different shapes, colours and sizes…

[Image: lego-block-class]

Foreman configuration group

A Foreman configuration group is a logical group of Puppet classes. It is not tied to an environment, so you can group any classes in it. I tend to see it as a logical group of blocks used to build my hostgroups and nodes.

[Image: lego-blocks-config-groups]

Puppet environment

A Puppet environment is a set of Lego blocks available for the creation of configuration groups, hostgroups and nodes! It is a list of classes in a defined state.

[Image: lego-blocks-env]

Foreman hostgroup

A Foreman hostgroup is a container for multiple Puppet modules and/or configuration groups.

A hostgroup is a model structure that will be used by your Foreman nodes as their base. Note that you can nest hostgroups to inherit from previously defined structures.

[Image: lego-blocks-hostgroup]

Foreman node

This is it, this is our server. At this point, we have the base structure and we can add other blocks and choose their colors (by specifying parameter values).

A Foreman node is a server. In the Lego way, it is the final structure you want to create using the available blocks inside a specific environment, a hostgroup as the base structure and any configuration groups.

[Image: lego_blocks_node]

Have fun!


The ELK stack powered by Docker

By deviantony

UPDATE: The stack is now powered by docker-compose and using the latest official images for Elasticsearch/Logstash/Kibana. See my new article https://deviantony.wordpress.com/2015/07/23/the-elk-stack-powered-by-docker-updated/ — 23/07/2015

Ahoy,

I’ve recently created a solution to set up an ELK stack using the Docker engine. It can be used to:

  • Quickly boot an ELK stack for demo purposes
  • Use it to test your Logstash configurations and see the imported data in Elasticsearch
  • Import a subset of data into Elasticsearch and prepare dashboards on Kibana 3
  • Use Kibana 4 !

UPDATE: The stack is now fully functional with docker-compose as a replacement for fig. See http://docs.docker.com/compose/install/ for docker-compose installation. — 27/02/2015

The solution is available on Github: https://github.com/deviantony/docker-elk

It is based on multiple Docker images available on Dockerhub:

  • elk-elasticsearch: Latest 1.5 stable version of Elasticsearch + Marvel with Oracle Java JDK 7
  • elk-logstash: Latest 1.4 stable version of Logstash with Oracle Java JDK 7
  • elk-kibana: Kibana 3.1.2 or Kibana 4

Prerequisites

You’ll need Docker and Fig.

The following installation procedures have been tested on Ubuntu 14.04.

Docker installation

Use the following command to install Docker:

$ curl -sSL https://get.docker.com/ubuntu/ | sudo sh

Fig installation

Fig is available via pip, a tool dedicated to managing Python packages.

First, you’ll need to install pip if it is not already present on your system:

$ sudo apt-get install python-pip

Then, install fig:

$ sudo pip install -U fig

Use the stack

First, you’ll need to checkout the git repository:

$ git clone https://github.com/deviantony/fig-elk.git

By default, the stack ships with a simple Logstash configuration that listens for TCP input on port 5000.

You can change the Logstash configuration by editing the file logstash-conf/logstash.conf (to test your filters, for example).
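
For example, a small filter you could drop in there to parse Apache access logs might look like this (the grok pattern is only an illustration, not part of the shipped configuration):

filter {
  grok {
    match => [ "message", "%{COMBINEDAPACHELOG}" ]
  }
}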

Then start the stack using fig:

$ cd fig-elk
$ fig up

Fig will start a container for each service of the ELK stack and output their logs.

If you’re still using the default input configuration for Logstash, you can inject some data into Elasticsearch from a file:

$ nc localhost 5000 < /some/log/file.log

Then you can check the results in Kibana 3, by hitting the following URL in your browser: http://localhost:8080

Or, if you’d like to use Kibana 4, hit the following URL: http://localhost:5601

Elasticsearch also ships with Marvel, so you have access to cluster monitoring at the following URL: http://localhost:9200/_plugin/marvel

Have fun with the ELK stack 🙂


VMWare tips

By deviantony

Here are a few tips I’ve learned with VMWare virtual machines hosting Ubuntu 12.04 / 14.04.

How to refresh a resized disk

If you want to resize a disk at runtime, go to your VM preferences and update your disk size. Now, how to refresh the state of the disk in the virtual machine without rebooting it?

For each disk in your VM, there is a related directory in the /sys/class/scsi_device directory. Use the following command to rescan your disk capacity:

$ echo 1 > /sys/class/scsi_device/DISK/device/rescan

How to detect a new disk

If you want to add another disk at runtime, go to your VM preferences and add a new disk. Now, how to refresh the system so it can see the new disk without rebooting the VM?

Basically, you just need to rescan the SCSI bus to which the storages are connected.

The following commands assume you have root access on the system.

Find your host bus number

$ grep mpt /sys/class/scsi_host/host?/proc_name

This will return a line like the following:

/sys/class/scsi_host/host0/proc_name:mptspi

Where host0 is the relevant bus number.

Rescan the SCSI bus

$ echo "- - -" > /sys/class/scsi_host/host0/scan

RabbitMQ and HAProxy: a timeout issue

By deviantony

If you’re trying to setup a highly available RabbitMQ cluster using HAProxy, you may encounter a disconnection issue from your clients.

This problem is due to HAProxy’s timeout client setting (clitimeout is deprecated), which defines the default client timeout. If a connection is considered idle for more than timeout client milliseconds, HAProxy drops it.

RabbitMQ clients use persistent connections to a broker, which never time out. See the problem here? If your RabbitMQ client is inactive for a period of time, HAProxy will automatically close the connection.

So how do we solve the problem? HAProxy has a clitcpka option which enables the sending of TCP keepalive packets on the client side.

Let’s use it !

But that alone doesn’t solve the problem; the disconnection issues are still there. Damn.

In a discussion about RabbitMQ and HAProxy on the RabbitMQ mailing list, Tim Watson pointed out that:

[…]the exact behaviour of tcp keep-alive is determined by the underlying OS/Kernel configuration[…]

On Ubuntu 14.04, in the tcp man page, you can see that the default value for the tcp_keepalive_time parameter is 2 hours. This parameter defines how long a connection needs to be idle before TCP begins sending out keep-alive packets.

You can also verify it by using the following command:

$ cat /proc/sys/net/ipv4/tcp_keepalive_time
7200
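
If you prefer to rely on clitcpka, a complementary approach (not the one taken in this post) is to lower the kernel keepalive delay so that probes are sent before HAProxy’s client timeout fires, for example:

$ echo "net.ipv4.tcp_keepalive_time = 60" | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p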

OK! Let’s raise the timeout client value in our HAProxy configuration for AMQP; 3 hours should be good. And that’s it! No more disconnection issues 🙂

Here is a sample HAProxy configuration:

global
        log 127.0.0.1   local1
        maxconn 4096
        #chroot /usr/share/haproxy
        user haproxy
        group haproxy
        daemon
        #debug
        #quiet

defaults
        log     global
        mode    tcp
        option  tcplog
        retries 3
        option redispatch
        maxconn 2000
        timeout connect 5000
        timeout client 50000
        timeout server 50000

listen  stats :1936
        mode http
        stats enable
        stats hide-version
        stats realm Haproxy\ Statistics
        stats uri /

listen aqmp_front :5672
        mode            tcp
        balance         roundrobin
        timeout client  3h
        timeout server  3h
        option          clitcpka
        server          aqmp-1 rabbitmq1.domain:5672  check inter 5s rise 2 fall 3
        server          aqmp-2 rabbitmq2.domain:5672  check inter 5s rise 2 fall 3

Enjoy your highly available RabbitMQ cluster !

I think there may be another solution to this problem using the heartbeat feature of RabbitMQ; see more about that here: https://www.rabbitmq.com/reliability.html


How to setup an Elasticsearch cluster with Logstash on Ubuntu 12.04

By deviantony

Hey there !

I’ve recently hit the limitations of a one-node Elasticsearch cluster in my ELK setup; see my previous blog post: Centralized logging with an ELK stack (Elasticsearch-Logback-Kibana) on Ubuntu

After more research, I’ve decided to upgrade the stack architecture, more precisely the Elasticsearch cluster and the Logstash integration with it.

I’ve been using the following software versions:

  • Elasticsearch 1.4.1
  • Logstash 1.4.2

Setup the Elasticsearch cluster

You’ll need to apply this procedure on each elasticsearch node.

Java

I’ve decided to install the Oracle JDK as a replacement for OpenJDK, using the following PPA:

$ sudo add-apt-repository ppa:webupd8team/java
$ sudo apt-get update && sudo apt-get install oracle-java7-installer

In case you’re missing the add-apt-repository command, make sure you have the package python-software-properties installed:

$ sudo apt-get install python-software-properties

Install via Elasticsearch repository

$ wget -O - http://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add -
$ echo "deb http://packages.elasticsearch.org/elasticsearch/1.4/debian stable main" | sudo tee -a /etc/apt/sources.list.d/elasticsearch.list
$ sudo apt-get update && sudo apt-get install elasticsearch

You can also decide to start the elasticsearch service on boot using the following command:

$ sudo update-rc.d elasticsearch defaults 95 10

Configuration

You’ll need to edit the elasticsearch configuration file in /etc/elasticsearch/elasticsearch.yml and update the following parameters:

  • cluster.name: my-cluster-name

I suggest replacing the default cluster name with one of your own, especially if another cluster might run on your network with multicast enabled.

  • index.number_of_replicas: 2

This will ensure a copy of your data on every node of your cluster. Set this property to N-1 where N is the number of nodes in your cluster.

  • gateway.recover_after_nodes: 2

This will ensure the recovery process will start after at least 2 nodes in the cluster have been started.

  • discovery.zen.minimum_master_nodes: 2

Should be set to something like N/2 + 1 where N is the number of nodes in your cluster. This is to avoid the “split-brain” scenario.

See this post for more information on this scenario: http://blog.trifork.com/2013/10/24/how-to-avoid-the-split-brain-problem-in-elasticsearch/

Disabling multicast

Multicast is not recommended in production; disabling it gives you more control over your cluster:

  • discovery.zen.ping.multicast.enabled: false
  • discovery.zen.ping.unicast.hosts: ["host-1", "host-2"]

Of course, you’ll need to specify the two other hosts for each node in your cluster (a consolidated example follows the list below):

  • host-1 will communicate with host-2 & host-3
  • host-2 will communicate with host-1 & host-3
  • host-3 will communicate with host-1 & host-2
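
Putting the settings of this section together, the relevant part of /etc/elasticsearch/elasticsearch.yml for host-1 in a three-node cluster would look like this (hostnames and the cluster name are placeholders):

cluster.name: my-cluster-name
index.number_of_replicas: 2
gateway.recover_after_nodes: 2
discovery.zen.minimum_master_nodes: 2
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["host-2", "host-3"]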

Cluster overview via Marvel

It’s free for development use! See Marvel’s homepage for more info.

Install it:

$ /usr/share/elasticsearch/bin/plugin -i elasticsearch/marvel/latest

Start (or restart) the elasticsearch service:

$ sudo service elasticsearch start

Now you can access the marvel UI via your browser on any of your elasticsearch nodes.
For example the first node: http://elasticsearch-host-a:9200/_plugin/marvel

Automatic index cleaning via Curator

This tool only needs to be installed on one node.

You can use the curator program to delete indexes. See more information in the github repository: https://github.com/elasticsearch/curator

You’ll need pip in order to install curator:

$ sudo apt-get install python-pip

Once it’s done, you can install curator:

$ sudo pip install elasticsearch-curator

Now, it’s easy to set up a cron job in /etc/cron.d/elasticsearch_curator to delete the indexes older than 30 days:

@midnight     root        curator delete --older-than 30 >> /var/log/curator.log 2>&1

Setup the Logstash node

Java

Logstash runs on Java, so you need to ensure you’ve got a JDK installed on your system. Use either OpenJDK or the Oracle JDK.

Install via repository

$ wget -O - http://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add -
$ echo "deb http://packages.elasticsearch.org/logstash/1.4/debian stable main" | sudo tee -a /etc/apt/sources.list.d/elasticsearch.list
$ sudo apt-get update && sudo apt-get install logstash

Generate an SSL certificate

Use the following command to generate a self-signed SSL certificate and private key in /etc/ssl:

$ openssl req -x509 -newkey rsa:2048 -keyout /etc/ssl/logstash.key -out /etc/ssl/logstash.pub -nodes -days 1095

Configuration

I’ll skip the configuration of inputs and filters and only show the output configuration used to communicate with the Elasticsearch cluster.

We’re no longer going to specify an Elasticsearch host; instead we will configure this Logstash instance to communicate with the cluster directly.

/etc/logstash/conf.d/10_output.conf

output {
       elasticsearch { }
}

Then we’ll edit the init script of logstash in /etc/init/logstash.conf and update the LS_JAVA_OPTS variable with:

LS_JAVA_OPTS="-Djava.io.tmpdir=${LS_HOME} -Des.config=/etc/logstash/elasticsearch.yml"

And create the file /etc/logstash/elasticsearch.yml with the following content:

cluster.name: my-cluster-name
node.name: logstash-indexer-01

If you’ve disabled multicast, then you’ll need to add the following line:

discovery.zen.ping.unicast.hosts: ["elasticsearch-node-1", "elasticsearch-node-2", "elasticsearch-node-3"]

And start logstash:

$ sudo service logstash start

The logstash node will automatically be added to your elasticsearch cluster.
You can verify that by checking the node count and version in the marvel UI.
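
If you prefer the command line, the cat nodes API returns the same information (assuming the default HTTP port on one of the Elasticsearch nodes):

$ curl 'http://elasticsearch-host-a:9200/_cat/nodes?v'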


High-availability with HAProxy and keepalived on Ubuntu 12.04

By deviantony

‘Lo there !

Here is a little post on how you can easily set up a highly available HAProxy service on Ubuntu 12.04! I’ve been using HAProxy more and more lately, adding more backends and connections to it. Then I thought: what if it goes down? How can I ensure high availability for that service?

Enter keepalived, which lets you set up a second HAProxy node to create an active/passive cluster. If the main HAProxy node goes down, the second one takes over.

In the following examples, I assume the following:

  • Master node address: 10.10.1.1
  • Slave node address: 10.10.1.2
  • Highly available HAProxy virtual address: 10.10.1.3

Install HAProxy

You’ll need to install it on both nodes:

$ sudo apt-get install haproxy

Now, edit the file /etc/default/haproxy and set the property ENABLED to 1.
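
If you prefer a one-liner (a quick sketch, assuming the default ENABLED=0 line is present in the file):

$ sudo sed -i 's/^ENABLED=0/ENABLED=1/' /etc/default/haproxy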

Start the service, and you’re done 🙂

$ sudo service haproxy start

Install keepalived

Prerequisite

You’ll need to update your sysctl configuration to allow binding to non-local addresses:

$ echo "net.ipv4.ip_nonlocal_bind = 1" | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p

Setup

Install the package:

$ sudo apt-get install keepalived

Create the configuration file /etc/keepalived/keepalived.conf for the master node:

/etc/keepalived/keepalived.conf

global_defs {
 # Keepalived process identifier
 lvs_id haproxy_KA
}

# Script used to check if HAProxy is running
vrrp_script check_haproxy {
 script "killall -0 haproxy"
 interval 2
 weight 2
}

# Virtual interface
vrrp_instance VIP_01 {
 state MASTER
 interface eth0
 virtual_router_id 7
 priority 101

 virtual_ipaddress {
 10.10.1.3
 }

 track_script {
 check_haproxy
 }
}

Do the same for the slave node, with a few changes (note that keepalived uses the BACKUP keyword for the passive state):

/etc/keepalived/keepalived.conf

global_defs {
 # Keepalived process identifier
 lvs_id haproxy_KA_passive
}

# Script used to check if HAProxy is running
vrrp_script check_haproxy {
 script "killall -0 haproxy"
 interval 2
 weight 2
}

# Virtual interface
vrrp_instance VIP_01 {
 state BACKUP
 interface eth0
 virtual_router_id 7
 priority 100

 virtual_ipaddress {
 10.10.1.3
 }

 track_script {
 check_haproxy
 }
}

WARNING: Be sure to assign a unique virtual_router_id for that keepalived configuration on the subnet 10.10.1.0.

Last step: start the keepalived service on the master node first and then on the slave.

$ sudo service keepalived start

You can check that the virtual IP address is created with the following command on the master node:

$ ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
 inet 10.10.1.1/25 brd 10.10.1.127 scope global eth0
 inet 10.10.1.3/32 scope global eth0

If you stop the HAProxy service on the master node or shut down the node, the virtual IP will be transferred to the passive node; you can use the command above to verify that the VIP has moved.

 


Puppet & Foreman: The scalable way

By deviantony

Oye there !

We’ve been struggling with a single Puppet master for around 250+ nodes in my company over the last few months, and I decided to evolve the current architecture into something more robust. Managing the nodes in site.pp didn’t feel natural to me, and the file was becoming unmaintainable.

So I googled around for alternatives and found something called an ENC, standing for External Node Classifier, and more specifically Foreman.

I’ve also decided to add high availability and load balancing to the architecture, and I’ve come up with the following:

[Image: Puppet & Foreman architecture]

I’ll describe in this post how to setup the same stack on Ubuntu Server 12.04 nodes using Foreman 1.5.1 and Puppet 3.6.2.

Requirements

Nodes

As you can see in the diagram above, this setup is composed of 10 nodes.

You’ll need to set up the following nodes (example IPs shown):

  • foreman.domain – 10.0.0.1
  • foreman-enc.domain – 10.0.0.2
  • foreman-reports.domain – 10.0.0.3
  • foreman-db.domain – 10.0.0.4
  • memcached.domain – 10.0.0.5
  • puppetmaster-1.domain – 10.0.0.6
  • puppetmaster-2.domain – 10.0.0.7
  • puppet-ca.domain – 10.0.0.8
  • puppet-lb.domain – 10.0.0.9
  • puppet-lb-passive.domain – 10.0.0.10

A virtual IP will be required for the highly available load balancer: 10.0.0.11.

Puppet

Ensure you have Puppet installed on each node:

$ wget https://apt.puppetlabs.com/puppetlabs-release-precise.deb
$ sudo dpkg -i puppetlabs-release-precise.deb
$ sudo apt-get update && sudo apt-get install puppet

SSL (optional)

In my company, we have our own SSL certificates for our internal services.

If you want to setup this kind of architecture with your certificates, ensure you have the following files on all the nodes:

  • /etc/ssl/domain/certs/ca.pem
  • /etc/ssl/domain/certs/domain.pem
  • /etc/ssl/domain/private_keys/domain.pem

Of course you can skip this step; in that case, you’ll also need to skip the SSL configuration steps in the next parts.

Setup the Foreman block

This part covers the setup of the Foreman block shown in the architecture diagram.

MySQL server

The installation of a MySQL server is beyond the scope of this post; just ensure you have a MySQL instance running.

Then create a database for foreman and the required users:

CREATE DATABASE foreman CHARACTER SET utf8;
CREATE USER 'foreman'@'foreman.domain';
GRANT ALL PRIVILEGES ON foreman.* TO 'foreman'@'foreman.domain' IDENTIFIED BY 'foreman_password';
CREATE USER 'foreman'@'foreman-enc.domain';
GRANT ALL PRIVILEGES ON foreman.* TO 'foreman'@'foreman-enc.domain' IDENTIFIED BY 'foreman_password';
CREATE USER 'foreman'@'foreman-reports.domain';
GRANT ALL PRIVILEGES ON foreman.* TO 'foreman'@'foreman-reports.domain' IDENTIFIED BY 'foreman_password';

Memcached server

It’s quite easy to setup a memcached node. The package is available in the Ubuntu repositories.

$ sudo apt-get install memcached

Once installed, update the following values in configuration file /etc/memcached.conf:

-m 128
-l 0.0.0.0

Then restart the service.

$ sudo service memcached restart

Foreman (main instance)

Setup the foreman-installer from the debian repository:

$ wget -q http://deb.theforeman.org/foreman.asc -O- | sudo apt-key add -
$ echo "deb http://deb.theforeman.org/ $(lsb_release -sc) 1.5" | sudo tee -a /etc/apt/sources.list.d/foreman.list
$ echo "deb http://deb.theforeman.org/ plugins 1.5" | sudo tee -a /etc/apt/sources.list.d/foreman.list
$ sudo apt-get update && sudo apt-get install foreman-installer

Setup the foreman instance via foreman-installer:

$ sudo foreman-installer \
--no-enable-puppet \
--no-enable-foreman-plugin-bootdisk \
--no-enable-foreman-proxy \
--foreman-db-adapter=mysql2 \
--foreman-db-database=foreman \
--foreman-db-host=foreman-db.domain \
--foreman-db-manage=true \
--foreman-db-username=foreman \
--foreman-db-password='foreman_password' \
--foreman-db-port=3306 \
--foreman-db-type=mysql \
--foreman-server-ssl-ca=/etc/ssl/domain/certs/ca.pem \
--foreman-server-ssl-chain=/etc/ssl/domain/certs/ca.pem \
--foreman-server-ssl-cert=/etc/ssl/domain/certs/domain.pem \
--foreman-server-ssl-key=/etc/ssl/domain/private_keys/domain.pem

Now that the service is installed, we need to setup the memcached plugin:

$ echo "gem 'foreman_memcache'" | sudo tee -a /usr/share/foreman/bundler.d/Gemfile.local.rb
$ sudo chown foreman:foreman /usr/share/foreman/bundler.d/Gemfile.local.rb
$ cd ~foreman
$ sudo -u foreman bundle update foreman_memcache

Then configure it by appending the following content in /etc/foreman/settings.yaml:

:memcache:
  :hosts:
    - memcached.domain:11211
  :options:
    :namespace: foreman
    :expires_in: 86400
    :compress: true

And restart the Apache service:

$ sudo service apache2 restart

You can now access your Foreman instance at http://foreman.domain and log in with the default credentials: admin/changeme.

Once logged in, you’ll need to retrieve some specific values from the settings and also set a few of them. Go to Administer > Settings > Auth and retrieve the following settings:

  • oauth_consumer_key
  • oauth_consumer_secret

These settings will be used in the following setup procedures; you’ll need to replace the values MY_CONSUMER_KEY and MY_CONSUMER_SECRET with the values you’ve retrieved.

We also need to setup the following settings:

  • require_ssl_puppetmasters=false
  • trusted_puppetmaster_hosts=[puppet.domain, puppet-lb.domain, puppet-lb-passive.domain]

SSL (optional):

Update the following settings in Administer > Settings > Provisioning:

  • ssl_ca_file = /etc/ssl/domain/certs/ca.pem
  • ssl_certificate = /etc/ssl/domain/certs/domain.pem
  • ssl_priv_key = /etc/ssl/domain/private_keys/domain.pem

Foreman ENC & Reports

Apply this configuration on each of these nodes.

Setup the foreman-installer from the debian repository:

$ wget -q http://deb.theforeman.org/foreman.asc -O- | sudo apt-key add -
$ echo "deb http://deb.theforeman.org/ $(lsb_release -sc) 1.5" | sudo tee -a /etc/apt/sources.list.d/foreman.list
$ echo "deb http://deb.theforeman.org/ plugins 1.5" | sudo tee -a /etc/apt/sources.list.d/foreman.list
$ sudo apt-get update && sudo apt-get install foreman-installer

Setup the foreman instance via foreman-installer:

$ sudo foreman-installer \
--no-enable-puppet \
--no-enable-foreman-plugin-bootdisk \
--no-enable-foreman-proxy \
--foreman-db-adapter=mysql2 \
--foreman-db-database=foreman \
--foreman-db-host=foreman-db.domain \
--foreman-db-manage=false \
--foreman-db-username=foreman \
--foreman-db-password='foreman_password' \
--foreman-db-port=3306 \
--foreman-db-type=mysql \
--foreman-server-ssl-ca=/etc/ssl/domain/certs/ca.pem \
--foreman-server-ssl-chain=/etc/ssl/domain/certs/ca.pem \
--foreman-server-ssl-cert=/etc/ssl/domain/certs/domain.pem \
--foreman-server-ssl-key=/etc/ssl/domain/private_keys/domain.pem \
--foreman-oauth-consumer-key=MY_CONSUMER_KEY \
--foreman-oauth-consumer-secret=MY_CONSUMER_SECRET

After the service is installed, we need to set up the memcached plugin as before:

$ echo "gem 'foreman_memcache'" | sudo tee -a /usr/share/foreman/bundler.d/Gemfile.local.rb
$ sudo chown foreman:foreman /usr/share/foreman/bundler.d/Gemfile.local.rb
$ cd ~foreman
$ sudo -u foreman bundle update foreman_memcache

Then configure it by appending the following content in /etc/foreman/settings.yaml:

:memcache:
 :hosts:
 - memcached.domain:11211
 :options:
 :namespace: foreman
 :expires_in: 86400
 :compress: true

And restart the Apache service:

$ sudo service apache2 restart

Setup the Puppet block

This part covers the setup of the Puppet block shown in the architecture diagram.

Puppet CA

Setup the foreman-installer from the debian repository:

$ wget -q http://deb.theforeman.org/foreman.asc -O- | sudo apt-key add -
$ echo "deb http://deb.theforeman.org/ $(lsb_release -sc) 1.5" | sudo tee -a /etc/apt/sources.list.d/foreman.list
$ echo "deb http://deb.theforeman.org/ plugins 1.5" | sudo tee -a /etc/apt/sources.list.d/foreman.list
$ sudo apt-get update && sudo apt-get install foreman-installer

Clear your Puppet SSL folder before the installation:

$ sudo rm -rf /var/lib/puppet/ssl/*

Setup the puppetmaster instance via foreman-installer:

$ sudo foreman-installer \
--no-enable-foreman-plugin-bootdisk \
--no-enable-foreman-plugin-setup \
--no-enable-foreman \
--foreman-proxy-foreman-base-url=https://foreman.domain \
--foreman-proxy-tftp=false \
--foreman-proxy-ssl-ca=/etc/ssl/domain/certs/ca.pem \
--foreman-proxy-ssl-cert=/etc/ssl/domain/certs/domain.pem \
--foreman-proxy-ssl-key=/etc/ssl/domain/private_keys/domain.pem \
--foreman-proxy-oauth-consumer-key=MY_CONSUMER_KEY \
--foreman-proxy-oauth-consumer-secret=MY_CONSUMER_SECRET

Once the service is installed, you’ll need to generate certificates for your puppet masters:

$ sudo puppet cert generate puppetmaster-1.domain --dns_alt_names=puppet,puppet.domain,puppetmaster-1.domain
$ sudo puppet cert generate puppetmaster-2.domain --dns_alt_names=puppet,puppet.domain,puppetmaster-2.domain

This will generate the following files:

  • /var/lib/puppet/ssl/certs/puppetmaster-1.domain.pem
  • /var/lib/puppet/ssl/certs/puppetmaster-2.domain.pem
  • /var/lib/puppet/ssl/private_keys/puppetmaster-1.domain.pem
  • /var/lib/puppet/ssl/private_keys/puppetmaster-2.domain.pem

Puppetmasters 1 & 2

Retrieve the certificates previously generated and put them in the same folder on the puppetmasters nodes:

  • /var/lib/puppet/ssl/certs/
  • /var/lib/puppet/ssl/private_keys/

Ensure the permissions are OK:

$ sudo chown -R puppet:puppet /var/lib/puppet/ssl

Setup the foreman-installer from the debian repository:

$ wget -q http://deb.theforeman.org/foreman.asc -O- | sudo apt-key add -
$ echo "deb http://deb.theforeman.org/ $(lsb_release -sc) 1.5" | sudo tee -a /etc/apt/sources.list.d/foreman.list
$ echo "deb http://deb.theforeman.org/ plugins 1.5" | sudo tee -a /etc/apt/sources.list.d/foreman.list
$ sudo apt-get update && sudo apt-get install foreman-installer

Setup the puppetmaster instances using the foreman-installer:

$ sudo foreman-installer \
--no-enable-foreman-plugin-bootdisk \
--no-enable-foreman-plugin-setup \
--no-enable-foreman \
--puppet-ca-server=puppet.domain \
--puppet-server-ca=false \
--puppet-server-foreman-ssl-ca=/etc/ssl/domain/certs/ca.pem \
--puppet-server-foreman-ssl-cert=/etc/ssl/domain/certs/domain.pem \
--puppet-server-foreman-ssl-key=/etc/ssl/domain/private_keys/domain.pem \
--puppet-server-foreman-url=https://foreman.domain \
--foreman-proxy-foreman-base-url=https://foreman.domain \
--foreman-proxy-tftp=false \
--foreman-proxy-puppetca=false \
--foreman-proxy-ssl-ca=/etc/ssl/domain/certs/ca.pem \
--foreman-proxy-ssl-cert=/etc/ssl/domain/certs/domain.pem \
--foreman-proxy-ssl-key=/etc/ssl/domain/private_keys/domain.pem \
--foreman-proxy-oauth-consumer-key=MY_CONSUMER_KEY \
--foreman-proxy-oauth-consumer-secret=MY_CONSUMER_SECRET

Now edit your Puppet configuration under /etc/puppet/puppet.conf and add the following lines under the appropriate sections:

[main]
ca_port = 8141
[master]
dns_alt_names = puppet,puppet.domain,puppetmaster-X.domain

Note: replace puppetmaster-X.domain with the FQDN of the instance you’re setting up (either puppetmaster-1.domain or puppetmaster-2.domain).

After that, we’ll need to tell the puppetmaster to use the foreman-enc node as the ENC. Edit the foreman URL in the file /etc/puppet/node.rb to point to foreman-enc.domain.

Next, we’ll need to tell the puppetmaster to use the foreman-reports node for the reports. Edit the foreman URL in the file /usr/lib/ruby/vendor_ruby/puppet/reports/foreman.rb to point to foreman-reports.domain.

Restart Apache and you’re done with the puppetmaster setup:

$ sudo service apache2 restart

HAProxy

You’ll need to setup haproxy and keepalived on the following nodes:

  • puppet-lb.domain
  • puppet-lb-passive.domain

Install haproxy and keepalived on both nodes:

$ sudo apt-get install haproxy keepalived

Configure keepalived

On the active node (puppet-lb.domain), create a file /etc/keepalived/keepalived.conf with the following content:

global_defs {
  lvs_id lb_internal_KA
}

vrrp_script check_haproxy {
  script "killall -0 haproxy"
  interval 2
  weight 2
}

vrrp_instance VIP_01 {
  state MASTER
  interface eth0
  virtual_router_id 57
  priority 101

  virtual_ipaddress {
    10.0.0.11
  }

  track_script {
    check_haproxy
  }
}

On the passive node (puppet-lb-passive.domain), the content of this file will be different:

global_defs {
  lvs_id lb_internal_KA_passive
}

vrrp_script check_haproxy {
  script "killall -0 haproxy"
  interval 2
  weight 2
}

vrrp_instance VIP_01 {
  state BACKUP
  interface eth0
  virtual_router_id 57
  priority 100

  virtual_ipaddress {
    10.0.0.11
  }

  track_script {
    check_haproxy
  }
}

NOTE: the virtual_router_id directive NEEDS to have a unique value; if another keepalived cluster exists on the same network/VLAN, using the same ID will cause a conflict.

Configure haproxy

Here is a sample configuration for haproxy; use it on both nodes (puppet-lb.domain & puppet-lb-passive.domain):

global
        log 127.0.0.1   local1
        maxconn 4096
        user haproxy
        group haproxy
        daemon

defaults
        log     global
        option  dontlognull
        retries 3
        option redispatch
        maxconn 2000
        contimeout      300000
        clitimeout      300000
        srvtimeout      300000

listen  stats *:1936
        mode http
        stats enable
        stats hide-version
        stats realm Haproxy\ Statistics
        stats uri /


listen puppet    *:8140
       mode      tcp
       option    tcplog
       option    ssl-hello-chk
       server    puppetmaster-1        puppetmaster-1.domain:8140     check inter 5000 fall 3 rise 2
       server    puppetmaster-2        puppetmaster-2.domain:8140     check inter 5000 fall 3 rise 2

listen  puppetca  *:8141
        mode      tcp
        option    tcplog
        option    ssl-hello-chk
        option    abortonclose
        server    puppetca-1            puppet-ca.domain:8140                check inter 5000 fall 3 rise 2

Now, restart your haproxy service:

$ sudo service haproxy restart

And that’s it! Your Foreman/Puppet model is ready!

The node configuration

You will need to add the following properties under the [agent] section in your Puppet node configuration file in /etc/puppet/puppet.conf:

[agent]
    report        = true
    masterport    = 8140
    server        = puppet.domain
    ca_port       = 8141

Suggestions and evolutions

Module synchronisation between the Puppet masters

I recommend the use of r10k to easily synchronize your modules between the Puppet masters.

More information here: https://github.com/puppetlabs/r10k
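
As a rough sketch (the control repository URL, the basedir and the file location are assumptions, not part of this setup), an r10k configuration in /etc/r10k.yaml could look like:

:cachedir: '/var/cache/r10k'
:sources:
  :puppet:
    remote: 'git@git.domain:puppet/control-repo.git'
    basedir: '/etc/puppet/environments'

A deployment on each Puppet master is then a couple of commands:

$ sudo gem install r10k
$ sudo r10k deploy environment -p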

The main SPOF: the Foreman ENC

Actually, the main SPOF in this architecture is the Foreman ENC box. I guess you could add an Apache service in front of the Foreman services to achieve an active/passive Foreman ENC. You can also temporarily point the ENC URL on the Puppet masters to the Foreman reports box while you diagnose the problem with the ENC.

High availability on the database

Of course, the other SPOF in this architecture is MySQL. There are many existing solutions for MySQL high availability; you could use another HAProxy service in front of your MySQL instances for an active/passive cluster and set up replication.

High availability on the Puppet CA

As for the Puppet CA, I do not consider it a SPOF: if the service goes down, the model keeps working. You just won’t be able to add new nodes.

You could have a highly available service by adding another Puppet CA to the load balancer (a passive CA, for example), but you would need a way to synchronize the certificates between your Puppet CA servers, maybe using rsync or an NFS mount point.
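
For example, a periodic rsync of the CA data from the active CA to a standby node (the standby hostname here is hypothetical) could be enough:

$ sudo rsync -az /var/lib/puppet/ssl/ca/ puppet-ca-passive.domain:/var/lib/puppet/ssl/ca/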

Special thanks and feedbacks

Feel free to ask any questions or comment on this blog, or you can catch me on Freenode IRC, @deviantony on channel #theforeman.

Finally, I’d like to thank the people who helped me set up this architecture: @elobato, @dominic and @gwmngilfen from the channel #theforeman. Do not hesitate to ask questions over there; the channel is pretty active and the Foreman team members are really responsive!


Install PhantomJS as a service on Ubuntu 12.04

By deviantony

Hello there,

I’ll show you how to install the headless webkit PhantomJS 1.9.7 on Ubuntu 12.04 and how to manage it as a system service.

Installation

The following lines will download the phantomjs archive, extract it and create symbolic links to the binary in /usr/local/share, /usr/local/bin and /usr/bin.

$ cd /usr/local/share
$ sudo wget https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-1.9.7-linux-x86_64.tar.bz2
$ sudo tar xjf phantomjs-1.9.7-linux-x86_64.tar.bz2
$ sudo ln -s /usr/local/share/phantomjs-1.9.7-linux-x86_64/bin/phantomjs /usr/local/share/phantomjs
$ sudo ln -s /usr/local/share/phantomjs-1.9.7-linux-x86_64/bin/phantomjs /usr/local/bin/phantomjs
$ sudo ln -s /usr/local/share/phantomjs-1.9.7-linux-x86_64/bin/phantomjs /usr/bin/phantomjs

You can retrieve the installation script on Gist: phantomjs_installer.

Service configuration

Once the binary is installed, we’ll create a script to manage the service in /etc/init.d and a configuration file in /etc/default.

You can also retrieve the init script on Gist: phantomjs init script.

/etc/init.d/phantomjs

#! /bin/sh
# Init. script for phantomjs, based on Ubuntu 12.04 skeleton.
# Author: Anthony Lapenna <lapenna.anthony@gmail.com>

PATH=/sbin:/usr/sbin:/bin:/usr/bin
DESC="Phantomjs service"
NAME=phantomjs
DAEMON=/usr/bin/$NAME
PIDFILE=/var/run/$NAME.pid
SCRIPTNAME=/etc/init.d/$NAME

# Exit if the package is not installed
[ -x "$DAEMON" ] || exit 0

# Load the VERBOSE setting and other rcS variables
. /lib/init/vars.sh

# Define LSB log_* functions.
# Depend on lsb-base (>= 3.2-14) to ensure that this file is present
# and status_of_proc is working.
. /lib/lsb/init-functions

# Overridable options section
WEBDRIVER_PORT=8190
DEBUG=false
LOG_FILE=/var/log/phantomjs.log

# Read configuration variable file if it is present
[ -r /etc/default/$NAME ] && . /etc/default/$NAME

DAEMON_ARGS="--webdriver=$WEBDRIVER_PORT --debug=$DEBUG"

#
# Function that starts the daemon/service
#
do_start()
{
 # Return
 # 0 if daemon has been started
 # 1 if daemon was already running
 # 2 if daemon could not be started
 start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON --test > /dev/null \
 || return 1
 start-stop-daemon --start --background --quiet --pidfile $PIDFILE --exec /bin/bash -- -c "$DAEMON $DAEMON_ARGS > $LOG_FILE 2>&1" \
 || return 2
 # Add code here, if necessary, that waits for the process to be ready
 # to handle requests from services started subsequently which depend
 # on this one. As a last resort, sleep for some time.
}

#
# Function that stops the daemon/service
#
do_stop()
{
 # Return
 # 0 if daemon has been stopped
 # 1 if daemon was already stopped
 # 2 if daemon could not be stopped
 # other if a failure occurred
 start-stop-daemon --stop --quiet --retry=TERM/30/KILL/5 --pidfile $PIDFILE --name $NAME
 RETVAL="$?"
 [ "$RETVAL" = 2 ] && return 2
 # Wait for children to finish too if this is a daemon that forks
 # and if the daemon is only ever run from this initscript.
 # If the above conditions are not satisfied then add some other code
 # that waits for the process to drop all resources that could be
 # needed by services started subsequently. A last resort is to
 # sleep for some time.
 start-stop-daemon --stop --quiet --oknodo --retry=0/30/KILL/5 --exec $DAEMON
 [ "$?" = 2 ] && return 2
 # Many daemons don't delete their pidfiles when they exit.
 rm -f $PIDFILE
 return "$RETVAL"
}

#
# Function that sends a SIGHUP to the daemon/service
#
do_reload() {
 #
 # If the daemon can reload its configuration without
 # restarting (for example, when it is sent a SIGHUP),
 # then implement that here.
 #
 start-stop-daemon --stop --signal 1 --quiet --pidfile $PIDFILE --name $NAME
 return 0
}

case "$1" in
 start)
 [ "$VERBOSE" != no ] && log_daemon_msg "Starting $DESC" "$NAME"
 do_start
 case "$?" in
 0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
 2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
 esac
 ;;
 stop)
 [ "$VERBOSE" != no ] && log_daemon_msg "Stopping $DESC" "$NAME"
 do_stop
 case "$?" in
 0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
 2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
 esac
 ;;
 status)
 status_of_proc "$DAEMON" "$NAME" && exit 0 || exit $?
 ;;
 #reload|force-reload)
 #
 # If do_reload() is not implemented then leave this commented out
 # and leave 'force-reload' as an alias for 'restart'.
 #
 #log_daemon_msg "Reloading $DESC" "$NAME"
 #do_reload
 #log_end_msg $?
 #;;
 restart|force-reload)
 #
 # If the "reload" option is implemented then remove the
 # 'force-reload' alias
 #
 log_daemon_msg "Restarting $DESC" "$NAME"
 do_stop
 case "$?" in
 0|1)
 do_start
 case "$?" in
 0) log_end_msg 0 ;;
 1) log_end_msg 1 ;; # Old process is still running
 *) log_end_msg 1 ;; # Failed to start
 esac
 ;;
 *)
 # Failed to stop
 log_end_msg 1
 ;;
 esac
 ;;
 *)
 echo "Usage: $SCRIPTNAME {start|stop|status|restart|force-reload}" >&2
 exit 3
 ;;
esac

:

This file needs to have execution permissions:

$ sudo chmod +x /etc/init.d/phantomjs

You can define overridable options in the following file:

/etc/default/phantomjs

WEBDRIVER_PORT=8190

If you want the service to start at boot time, type the following command:

$ sudo update-rc.d phantomjs defaults

And here you go, you can now manage phantomjs using the service start/stop/status commands. Note that the stop command will require a few seconds to shut down the process.
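
For a quick sanity check (assuming the default WebDriver port of 8190 defined above), start the service and query the WebDriver status endpoint:

$ sudo service phantomjs start
$ curl http://localhost:8190/status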

Enjoy!

