
Rebuilding a Package Using ABS


Rebuilding an Archlinux package using Arch Build System (ABS)

Rebuilding a system package on Archlinux as a regular user is really simple using ABS:

yaourt -S abs
cp /etc/abs.conf ~/.abs.conf
sed -i 's#.*ABSROOT.*#ABSROOT="$HOME/abs"#' ~/.abs.conf
mkdir ~/abs
abs extra/nvidia
cd ~/abs/extra/nvidia
makepkg -s
yaourt nvidia*.pkg.tar.xz
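
If the goal of rebuilding is to change the package itself, edit the PKGBUILD before building; a small sketch (same nvidia example, using makepkg -si as an alternative to installing the result with yaourt):

cd ~/abs/extra/nvidia
# edit PKGBUILD here if custom build options or patches are needed
makepkg -si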

XenServer Installation Using an HTTP Source.


Small notes on installing XenServer using a remote installation source. See the Citrix official documentation for more details.

Downloading XenServer

Log into the Citrix site first: login is required in order to see the Product Software entry in the Find Downloads tool.

Extracting ISO content

mkdir xenserver-6.0.0_iso
sudo mount -o loop XenServer-6.0.0-install-cd.iso xenserver-6.0.0_iso
sudo mkdir -p /srv/mirror/xen/xenserver/6.0.0/
sudo cp -r xenserver-6.0.0_iso/* /srv/mirror/xen/xenserver/6.0.0/

Making ISO available at a reachable http/ftp server

Simple alias example for apache:

Alias /xen /srv/mirror/xen
<Directory /srv/mirror/xen>
   Options Indexes FollowSymLinks
   AllowOverride None
   Order allow,deny
   Allow from all
</Directory>

Installation files will be available at http://server.domain.tld/xen/xenserver/6.0.0/.
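
A quick sanity check that the mirror actually serves the tree before rebooting anything (hostname as assumed above):

curl -sI http://server.domain.tld/xen/xenserver/6.0.0/ | head -n 1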

Installing XenServer

  • Boot installation ISO on server
  • Configure network
  • Configure the XenServer installer to retrieve the files from the http/ftp source

Regenerating Puppet Certificates.


Bleeding Heart…

Following the Heartbleed bug, and since all Debian stable (wheezy at the time of writing) machines are affected and the puppetmaster runs on Debian, it is a good idea to regenerate the Puppet certificates. Here is a quick how-to for Puppet running with Passenger on Debian wheezy.
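
Regenerating certificates only makes sense once the fixed OpenSSL packages from the Debian security repository are installed; a quick sketch (wheezy package names):

apt-get update
apt-get install --only-upgrade openssl libssl1.0.0
# check the installed versions, then restart services linking against libssl (apache2, etc.)
dpkg -l openssl libssl1.0.0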

Please refer to the official documentation.

On the puppet master

service apache2 stop
cp -r /var/lib/puppet/ssl ~/puppet-ssl-backup
rm -rf /var/lib/puppet/ssl/*
# Kill the master once the CA and certs have been generated using ctrl+c
puppet master --no-daemonize --verbose
service apache2 start

A new CA has now been created in /var/lib/puppet/ssl, a certificate for the master has been generated and signed, and all the existing agent certificates are unknown to the new CA.

puppet cert list --all

The puppetdb certificates should also be updated.

rm /etc/puppetdb/ssl/*
puppetdb ssl-setup
service puppetdb restart

Launch the agent on the master to check that everything is OK.

puppet agent -tv

On the puppet agents

Stop the agent if it is running and clean the SSL dir.

service puppet stop
rm -rf /var/lib/puppet/ssl/*

Launch the agent to generate a cert and wait for the cert to be signed.

puppet agent -tv --waitforcert 60
Sign the certificate request on the master
puppet cert list
puppet cert sign xxx.xxx.xxx

Dependencies With Hiera and Create_resources.


In a nodeless setup it is possible to manage dependencies between resources created by create_resources, but the syntax is quite strict and caused me some trouble. If the syntax is not correct, the traditional Could not find dependency error message is displayed.

site.pp
node default {
  hiera_include ('classes', [])

  $packages = hiera_hash('packages', {})
  create_resources('package', $packages)

  $services = hiera_hash('services', {})
  create_resources('service', $services)
}

The following won’t work:

common.yaml
services:
  mysql:
    ensure: 'running'
    require: Package['mysql-server']

Nor the following:

common.yaml
services:
  mysql:
    ensure: 'running'
    require: "Package['mysql-server']"

But the following two syntaxes will work:

common.yaml
---
classes:
  - 'puppet::agent'
packages:
  mysql-server:
    ensure: 'installed'
services:
  mysql:
    ensure: 'running'
    require: Package[mysql-server]
common.yaml
---
classes:
  - 'puppet::agent'
packages:
  mysql-server:
    ensure: 'installed'
services:
  mysql:
    ensure: 'running'
    require: 'Package[mysql-server]'
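
To check which syntax a given setup accepts without touching the system, the catalog can be compiled in noop mode; a minimal sketch, assuming the site.pp shown above lives in the default location:

puppet apply --noop /etc/puppet/manifests/site.pp
# a wrong require syntax fails with the Could not find dependency error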

See a full example running in Vagrant at https://github.com/gwarf/puppet-vagrant-playground

Resizing a VM Filesystem From a Xen Host


Resizing the disk and filesystem of a VM

I had to add more disk space to a virtual machine running under XenServer 6.0. In such a situation I usually proceed like this:

  • create a snapshot (if possible, sometimes disk space makes it impossible)
  • increase the virtual disk size using XenCenter (running from a Windows Virtual Machine…)
  • boot Sysrescuecd and resize the filesystem using gparted

But this time, due to some known and unknown particular things (VT extensions disabled, ISO library available on a remote NFS share with latency problems…), I wasn't able to boot sysrescuecd (nor any other recent live CD I tried). One of my last options was to attach the VM's virtual disk to the Xen host itself (another VM could have been used too); here are the steps:

  • Find xen host uuid
  • Find VM uuid
  • Find VM’s disk VDI uuid
  • Create a new VBD for the xen host to plug the VDI in
  • Stop the VM
  • Plug the VBD
  • Resize the partition and filesystem
  • Unplug and destroy the VBD
  • Restart the VM

Hands-on…

From the xenserver shell:

Finding the Xen host uuid
hostuuid=$(xe vm-list name-label='Control domain on host: xenserver-XXX' \
  | awk '/uuid/ {print $5}')
xe vm-list uuid=$hostuuid
Finding VM uuid
vmuuid=$(xe vm-list name-label=xxx.domain.tld | awk '/uuid/ {print $5}')
xe vm-list uuid=$vmuuid
Finding VDI uuid (check labels, userdevice number)
xe vm-disk-list uuid=$vmuuid
vdiuuid=xxxx-xxxx-xxx-xxx-xxxx
xe vbd-list vdi-uuid=$vdiuuid
Creating a VBD for the Xen host to plug the VDI in
vbduuid=$(xe vbd-create device=0 vm-uuid=$hostuuid vdi-uuid=$vdiuuid \
  bootable=false mode=RW type=Disk)
xe vbd-list vdi-uuid=$vdiuuid
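
The VM itself has to be halted before its disk is plugged into the host; for instance (assuming the guest can shut down cleanly):

xe vm-shutdown uuid=$vmuuid
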
Plugging the VBD into the Xen host
xe vbd-plug uuid=$vbduuid
xe vbd-list vdi-uuid=$vdiuuid

Now that the virtual disk of the VM is seen as a local device (/dev/xvda) on the Xen host, it can be accessed using the standard fdisk, parted and mount tools.

At first I tried to use parted's resize command but it didn't work, so I had to delete and recreate the partition manually, reusing the same start sector. (Be sure to save the layout of the partitions before messing with them… and using tmux is highly recommended for such tasks.)
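
One way to save that layout beforehand, assuming sfdisk is available on the host:

# dump the partition table so it can be restored with sfdisk if something goes wrong
sfdisk -d /dev/xvda > /root/xvda-sfdisk.backup
# keep a human-readable copy as well
fdisk -l /dev/xvda > /root/xvda-fdisk.txt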

Resizing the partition using parted
parted /dev/xvda
GNU Parted 1.8.1
Using /dev/xvda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) unit s
(parted) print

Model: Xen Virtual Block Device (xvd)
Disk /dev/xvda: 83886079s
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start  End        Size       Type     File system  Flags
 1      2048s  15988735s  15986688s  primary  ext3         boot

(parted) rm 1
(parted) mkpart
Partition type?  primary/extended? primary
File system type?  [ext2]? ext3
Start? 2048s
End? 83886078s
(parted) print

Model: Xen Virtual Block Device (xvd)
Disk /dev/xvda: 83886079s
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start  End        Size       Type     File system  Flags
 1      2048s  83886078s  83884031s  primary  ext3

(parted) toggle 1 boot
(parted) print

Model: Xen Virtual Block Device (xvd)
Disk /dev/xvda: 83886079s
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start  End        Size       Type     File system  Flags
 1      2048s  83886078s  83884031s  primary  ext3         boot

(parted) quit
Information: Don't forget to update /etc/fstab, if necessary.
Resizing the filesystem using resize2fs
resize2fs /dev/xvda1
resize2fs 1.39 (29-May-2006)
Resizing the filesystem on /dev/xvda1 to 10485503 (4k) blocks.
The filesystem on /dev/xvda1 is now 10485503 blocks long

fsck.ext3 -f /dev/xvda1
e2fsck 1.39 (29-May-2006)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/xvda1: 144207/2621440 files (3.3% non-contiguous), 1506575/10485503 blocks
Unplugging the VBD
xe vbd-unplug uuid=$vbduuid
xe vbd-destroy uuid=$vbduuid
xe vbd-list vdi-uuid=$vdiuuid

Done, the VM can now be restarted!
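
For instance, from the XenServer shell, reusing the VM uuid found earlier:

xe vm-start uuid=$vmuuid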

Starting to Play With Puppet Enterprise


Why Puppet Enterprise?

@work we are using Puppet to manage our servers; we are currently running the Puppet Open Source edition with great satisfaction.

Being curious, I always wanted to try the Puppet Enterprise edition, which is not free. Following my server migration (moving from a FreeBSD-based OVH dedicated server to a less expensive Kimsufi box) I had to set up a new way of managing my VMs. After some quick and opinionated reading I dropped the idea of using Chef, and after a bit of additional reading I skipped Ansible too, as I like some Puppet concepts such as exported/collected resources, which allow setting up a dynamic environment.

As Puppet Enterprise is free for managing up to 10 nodes (I don't plan to have more than 10 nodes for my personal use ;) ), it turns out this is the perfect occasion to give it a try, so let's jump in!

Installing Puppet Enterprise

Downloading Puppet Enterprise only requires registering with an email address; a mail with the download information for the different OSes will be sent.

wget https://s3.amazonaws.com/pe-builds/released/3.1.3/puppet-enterprise-3.1.3-debian-7-amd64.tar.gz
tar xf puppet-enterprise-3.1.3-debian-7-amd64.tar.gz
cd puppet-enterprise-3.1.3-debian-7-amd64/
Archive content
% tree
.
├── answers
│   ├── agent_no_cloud.answer.sample
│   ├── agent_with_cloud.answer.sample
│   ├── console_only.answer.sample
│   ├── full_suite.answer.sample
│   ├── full_suite_existing_postgres.sample
│   ├── full_suite_existing_remote_postgres.sample
│   └── master_only.answer.sample
├── db_import_export.rake
├── erb
│   ├── auth.conf.erb
│   ├── autosign.conf.erb
│   ├── cas_client_config.yml.erb
│   ├── config.ru.erb
│   ├── console_auth_config.yml.erb
│   ├── console_auth_db_config.yml.erb
│   ├── console_auth_log_config.yml
│   ├── databases.erb
│   ├── database.yml.erb
│   ├── event_inspector_config.yml.erb
│   ├── external_node.erb
│   ├── license_status_config.yml.erb
│   ├── puppet.conf.erb
│   ├── puppetdashboard.conf.erb
│   ├── puppetdb_master.pp.erb
│   ├── puppetdb.pp.erb
│   ├── puppetmaster.conf.erb
│   ├── read_console_auth_db_config.erb
│   ├── rewrite_rubycas_config.yml.erb
│   ├── rubycas_config_upgrade_comments.txt
│   ├── rubycas_config.yml.erb
│   ├── settings.yml.erb
│   └── site.pp.erb
├── gpg
│   └── GPG-KEY-puppetlabs
├── LICENSE.txt
├── modules
│   ├── cprice404-inifile-0.10.3.tar.gz
│   ├── install_modules.txt
│   ├── puppetlabs-apt-1.1.0.tar.gz
│   ├── puppetlabs-auth_conf-0.1.7.tar.gz
│   ├── puppetlabs-firewall-0.3.0.tar.gz
│   ├── puppetlabs-java_ks-1.1.0.tar.gz
│   ├── puppetlabs-pe_accounts-2.0.1.tar.gz
│   ├── puppetlabs-pe_common-0.1.0.tar.gz
│   ├── puppetlabs-pe_mcollective-0.1.14.tar.gz
│   ├── puppetlabs-pe_postgresql-0.0.5.tar.gz
│   ├── puppetlabs-pe_puppetdb-0.0.11.tar.gz
│   ├── puppetlabs-postgresql-2.5.0.tar.gz
│   ├── puppetlabs-puppetdb-1.5.1.tar.gz
│   ├── puppetlabs-puppet_enterprise-3.1.0.tar.gz
│   ├── puppetlabs-reboot-0.1.2.tar.gz
│   ├── puppetlabs-request_manager-0.0.10.tar.gz
│   ├── puppetlabs-stdlib-3.2.0.tar.gz
│   └── ripienaar-concat-0.2.0.tar.gz
├── noask
│   └── solaris-noask
├── packages
│   ├── debian-7-amd64
│   │   ├── Packages
│   │   ├── Packages.gz
│   │   ├── pe-activemq_5.8.0-1puppet5_all.deb
│   │   ├── pe-activerecord_2.3.17-1puppet2_all.deb
│   │   ├── pe-activesupport_2.3.17-1puppet2_all.deb
│   │   ├── pe-augeas_1.1.0-1puppet1_amd64.deb
│   │   ├── pe-bundler_1.3.5-1puppet2_all.deb
│   │   ├── pe-certificate-manager_0.4.7-1puppet1_all.deb
│   │   ├── pe-certificate-manager-test_0.4.7-1puppet1_all.deb
│   │   ├── pe-cloud-provisioner_1.1.4-puppet1_all.deb
│   │   ├── pe-cloud-provisioner-libs_0.3.2-1puppet1_amd64.deb
│   │   ├── pe-console_0.3.10-1puppet1_all.deb
│   │   ├── pe-console-auth_1.2.21-1puppet1_all.deb
│   │   ├── pe-console-test_0.3.10-1puppet1_all.deb
│   │   ├── pe-event-inspector_0.1.0-1puppet1_all.deb
│   │   ├── pe-facter_1.7.3.1-1puppet1_amd64.deb
│   │   ├── pe-hiera_1.2.2.1-1puppet1_all.deb
│   │   ├── pe-httpd_2.2.25-1puppet4_amd64.deb
│   │   ├── pe-httpd-bin_2.2.25-1puppet4_amd64.deb
│   │   ├── pe-httpd-common_2.2.25-1puppet4_amd64.deb
│   │   ├── pe-httpd-doc_2.2.25-1puppet4_all.deb
│   │   ├── pe-httpd-mpm-worker_2.2.25-1puppet4_amd64.deb
│   │   ├── pe-httpd-prefork-dev_2.2.25-1puppet4_amd64.deb
│   │   ├── pe-httpd-utils_2.2.25-1puppet4_amd64.deb
│   │   ├── pe-java_1.7.0.19-1puppet1_amd64.deb
│   │   ├── pe-libevent_2.0.13-1puppet3_amd64.deb
│   │   ├── pe-libevent-devel_2.0.13-1puppet3_amd64.deb
│   │   ├── pe-libyaml_0.1.4-1puppet3_amd64.deb
│   │   ├── pe-license_0.1.1-1puppet1_all.deb
│   │   ├── pe-license-status_0.1.5-1puppet1_all.deb
│   │   ├── pe-license-status-test_0.1.5.1-1puppet1_all.deb
│   │   ├── pe-live-management_1.2.16-1puppet1_all.deb
│   │   ├── pe-mcollective_2.2.4-1puppet2_all.deb
│   │   ├── pe-mcollective-client_2.2.4-1puppet2_all.deb
│   │   ├── pe-mcollective-common_2.2.4-1puppet2_all.deb
│   │   ├── pe-memcached_1.4.7-1puppet4_amd64.deb
│   │   ├── pe-memcached-devel_1.4.7-1puppet4_amd64.deb
│   │   ├── pe-passenger_4.0.18-1puppet3_amd64.deb
│   │   ├── pe-postgresql_9.2.4-2puppet5_amd64.deb
│   │   ├── pe-postgresql-contrib_9.2.4-2puppet5_amd64.deb
│   │   ├── pe-postgresql-devel_9.2.4-2puppet5_amd64.deb
│   │   ├── pe-postgresql-server_9.2.4-2puppet5_amd64.deb
│   │   ├── pe-puppet_3.3.3.2-1puppet1_all.deb
│   │   ├── pe-puppet-dashboard_2.0.14-1puppet1_amd64.deb
│   │   ├── pe-puppetdb_1.5.1.pe-1puppetlabs1_all.deb
│   │   ├── pe-puppetdb-terminus_1.5.1.pe-1puppetlabs1_all.deb
│   │   ├── pe-puppet-enterprise-release_3.1.3-1puppet1_all.deb
│   │   ├── pe-puppet-license-cli_0.1.6-1puppet1_all.deb
│   │   ├── pe-puppet-server_3.3.3.2-1puppet1_all.deb
│   │   ├── pe-ruby_1.9.3.448-1puppet5_amd64.deb
│   │   ├── pe-ruby-augeas_0.5.0-1puppet2_amd64.deb
│   │   ├── pe-rubycas-server_1.1.15-1puppet1_all.deb
│   │   ├── pe-rubygem-deep-merge_1.0.0-1puppet1_all.deb
│   │   ├── pe-rubygem-net-ssh_2.1.4-1puppet2_all.deb
│   │   ├── pe-rubygem-rack_1.4.5-1puppet2_all.deb
│   │   ├── pe-rubygem-sequel_3.47.0-1puppet1_all.deb
│   │   ├── pe-rubygem-stomp_1.2.9-1puppet1_all.deb
│   │   ├── pe-ruby-ldap_0.9.12-1puppet2_amd64.deb
│   │   ├── pe-ruby-mysql_2.8.2-1puppet2_amd64.deb
│   │   ├── pe-ruby-rgen_0.6.5-1puppet1_all.deb
│   │   ├── pe-ruby-shadow_2.2.0-1puppet3_amd64.deb
│   │   ├── pe-ruby-stomp_1.2.9-1puppet2_all.deb
│   │   ├── pe-tanukiwrapper_3.5.9-1puppet5_amd64.deb
│   │   ├── Release
│   │   └── Release.gpg
│   └── debian-7-amd64-package-versions.json
├── puppet-enterprise-installer
├── puppet-enterprise-uninstaller
├── README.markdown
├── support
├── supported_platforms
├── util
│   └── pe-man
├── utilities
└── VERSION

8 directories, 126 files

$ du -schx ../puppet-enterprise-3.1.3-debian-7-amd64
254M    ../puppet-enterprise-3.1.3-debian-7-amd64

Yep, that’s huge!

In order to install the agent on my workstation (running Archlinux) I downloaded the non-packaged *nix flavour, which is a hefty 3.6 GB!

I will see later if there is an alternate way.

Installation steps

According to the Puppet Enterprise documentation, the following deployment order must be respected:

  • Puppet Master
  • Database Support/PuppetDB
  • Console
  • Agents

The puppet master will be installed on the same node as the puppetdb and web console.

Other nodes will only be agents.

Creating the PostgreSQL databases

The node already runs a PostgreSQL database, so I won't use the one embedded in Puppet Enterprise.

PostgreSQL database creation
sudo su - postgres
postgres@misc:~$ createuser -P -D -R -S pe-puppetdb
Enter password for new role:
Enter it again:
postgres@misc:~$ createdb -O pe-puppetdb -E UTF8 pe-puppetdb
postgres@misc:~$ createuser -P -D -R -S console
Enter password for new role:
Enter it again:
postgres@misc:~$ createdb -O console -E UTF8 console
postgres@misc:~$ createuser -P -D -R -S console_auth
Enter password for new role:
Enter it again:
postgres@misc:~$ createdb -O console_auth -E UTF8 console_auth
postgres@misc:~$
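
Depending on the existing pg_hba.conf, these three roles may also need to be allowed to connect over local TCP; a sketch for the stock wheezy layout (PostgreSQL 9.1, adapt the path and addresses to your setup):

# append to the stock wheezy pg_hba.conf, then reload PostgreSQL
cat >> /etc/postgresql/9.1/main/pg_hba.conf <<'EOF'
host    pe-puppetdb     pe-puppetdb     127.0.0.1/32    md5
host    console         console         127.0.0.1/32    md5
host    console_auth    console_auth    127.0.0.1/32    md5
EOF
service postgresql reload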

Launching the installation of the puppet master

The installation script asks for configuration settings and is really straightforward; just be sure to answer y when asked whether to install the master and puppetdb.

The script can also be called with an answer file (see the --help output). An answer file is generated from the interactive choices.

./puppet-enterprise-installer

Once the installation is over, remember to open TCP ports 3000, 8140 and 61613.
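
An iptables sketch for that (adapt to whatever firewall already manages the host):

iptables -A INPUT -p tcp -m multiport --dports 3000,8140,61613 -j ACCEPT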

The web Console is now reachable at: https://host.tld.domain:3000

Below is the answer file generated by the installer; it can be provided to the installer like this:

using an answer file
./puppet-enterprise-installer -a answerfile
answer file for the puppetmaster
q_all_in_one_install=y
q_backup_and_purge_old_configuration=n
q_database_host=localhost
q_database_install=n
q_database_port=5432
q_install=y
q_pe_database=n
q_puppet_cloud_install=n
q_puppet_enterpriseconsole_auth_database_name=console_auth
q_puppet_enterpriseconsole_auth_database_password=XXXXXXXXXX
q_puppet_enterpriseconsole_auth_database_user=console_auth
q_puppet_enterpriseconsole_auth_password='XXXXXXXXXXXXXXXXX'
q_puppet_enterpriseconsole_auth_user_email=baptiste@bapt.name
q_puppet_enterpriseconsole_database_name=console
q_puppet_enterpriseconsole_database_password=XXXXXXXXXX
q_puppet_enterpriseconsole_database_user=console
q_puppet_enterpriseconsole_httpd_port=3000
q_puppet_enterpriseconsole_install=y
q_puppet_enterpriseconsole_master_hostname=puppet.tld.domain
q_puppet_enterpriseconsole_smtp_host=localhost
q_puppet_enterpriseconsole_smtp_password=
q_puppet_enterpriseconsole_smtp_port=25
q_puppet_enterpriseconsole_smtp_use_tls=n
q_puppet_enterpriseconsole_smtp_user_auth=n
q_puppet_enterpriseconsole_smtp_username=
q_puppet_symlinks_install=y
q_puppetagent_certname=puppet.bapt.name
q_puppetagent_install=y
q_puppetagent_server=puppet.bapt.name
q_puppetdb_database_name=pe-puppetdb
q_puppetdb_database_password=XXXXXXXXXX
q_puppetdb_database_user=pe-puppetdb
q_puppetdb_hostname=puppet.bapt.name
q_puppetdb_install=y
q_puppetdb_port=8081
q_puppetmaster_certname=puppet.tld.domain
q_puppetmaster_dnsaltnames=puppet,puppet.tld.domain
q_puppetmaster_enterpriseconsole_hostname=localhost
q_puppetmaster_enterpriseconsole_port=3000
q_puppetmaster_install=y
q_run_updtvpkg=n
q_vendor_packages_install=y

Launching the installation of a node

On agent nodes you also have to download and extract the archive.

wget https://s3.amazonaws.com/pe-builds/released/3.1.3/puppet-enterprise-3.1.3-debian-7-amd64.tar.gz
tar xf puppet-enterprise-3.1.3-debian-7-amd64.tar.gz
cd puppet-enterprise-3.1.3-debian-7-amd64/
./puppet-enterprise-installer

The installation is then launched in the same way as for the master, but this time all the defaults should be OK; I just entered the complete FQDN of the puppet server.

./puppet-enterprise-installer

Following is the answer file generated by the installer.

answer file
q_all_in_one_install=n
q_database_install=n
q_fail_on_unsuccessful_master_lookup=y
q_install=y
q_puppet_cloud_install=n
q_puppet_enterpriseconsole_install=n
q_puppet_symlinks_install=y
q_puppetagent_certname=host.domain.tld
q_puppetagent_install=y
q_puppetagent_server=puppet.domain.tld
q_puppetca_install=n
q_puppetdb_install=n
q_puppetmaster_install=n
q_run_updtvpkg=n
q_vendor_packages_install=y

Cleaning Puppet Enterprise installation

uninstalling
./puppet-enterprise-uninstaller -d -p -y

First steps

Puppetlabs provides a helpful quick start guide.
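
A first thing to check is that an agent can reach the new master; PE installs its own binaries under /opt/puppet (with symlinks in /usr/local/bin when q_puppet_symlinks_install=y), e.g.:

/opt/puppet/bin/puppet agent --test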

Checking SE and LFC Consistency


Dealing with inconsistency between the LFC and DPM

The biomed VO team has made accessible on GitHub some useful tools to enforce consistency between Storage Elements (SE) and the Logical File Catalog (LFC).

In the neugrid4you project we make a lot of datasets available to our user community, and some of them are very big (ADNI is something like 30 GiB). We have to copy them to several SEs on the grid and register them in our LFC, so we need a way to make sure there is no inconsistency between the two (i.e. to clean up all the inconsistencies ;)).
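
For reference, copying a file to a given SE and registering it in the LFC in one go is typically done with lcg-cr; a sketch using the same placeholder names as below:

lcg-cr --vo <vo_name> -d <dpm_server_name> \
  -l lfn:/grid/<vo_name>/data/ADNI/some_file.tar.bz2 \
  file:///path/to/some_file.tar.bz2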

Retrieving inconsistencies

The diff-se-dump-lfc.sh script is used to find inconsistencies between a dump of the DPM and a dump of the LFC.

Dumping the DPNS VO-specific folder
dump-se-files.py \
  --url srm://<dpm_server_name>/dpm/<dpm_domain_name>/home/<vo_name> \
  --output-file <dpm_server_name>-dpm-dump.txt
Dumping the LFC VO-specific folder
export LFC_HOST=<lfc_host>
LFCBrowseSE <dpm_server_name> --vo <vo_name> --sfn > <lfc_host>-<vo_name>-lfc-dump.txt
Searching for inconsistencies for files older than 1 month
diff-se-dump-lfc.sh --older-than 1 \
  --se <dpm_server_name> \
  --se-dump <dpm_server_name>-dpm-dump.txt \
  --lfc-dump <lfc_host>-<vo_name>-lfc-dump.txt

The script will report zombie files, ghost entries and files found both in the SE and the LFC.

Zombie files are files that exist on the SE but are no longer referenced in the LFC.

Ghost entries are entries in the LFC that have no corresponding files on the SE.

Cleaning ghost entries

There is no automatic way of cleaning ghost entries, and in order to retrieve the corresponding LFN it is necessary to access the LFC database directly.

SELECT Cns_file_metadata.fileid,guid,name
  FROM Cns_file_metadata
  INNER JOIN Cns_file_replica
  ON Cns_file_metadata.fileid=Cns_file_replica.fileid
  WHERE sfn='srm://<dpm_server_name>:8446/dpm/<dpm_domain_name>/home/<vo_name>/generated/2014-02-10/file-121aa7e8-a9ec-4401-84f1-24341a74433c';
Script for retrieving the guid of a sfn
#!/bin/sh

set -e

SFN="$1"
DB_NAME=cns_db
QUERY="SELECT guid FROM Cns_file_metadata INNER
JOIN Cns_file_replica
ON Cns_file_metadata.fileid=Cns_file_replica.fileid
WHERE sfn='$SFN'"

GUID=$(mysql --batch --silent "$DB_NAME" -e "$QUERY")

echo "guid:$GUID"

exit 0
for sfn in $(cat <dpm_server_name>.output_lfc_lost_files); do
  ./get-lfn-for-ghostfile.sh $sfn
done > guid-list.txt
for guid in $(cat guid-list.txt); do
  lcg-la $guid
done

Once the list of LFNs has been retrieved it is possible to remove the wrong entries.

% lcg-lr lfn:/grid/<vo_name>/data/ADNI/IMAGES/128_S_4607/ADNI2/nG+ADNI2+128_S_4607+20121109+0847+S174741+3T0+T2ST+ORIG+V01.tar.bz2
srm://<dpm_server_name>/dpm/<dpm_domain_name>/home/<vo_name>/generated/2013-09-20/file7fa6f030-f029-418c-a60c-5a8d04253a68
srm://<dpm2_server_name>/dpm/<dpm2_domain_name>/home/<vo_name>/generated/2013-09-20/file2e44af61-a0e0-4868-af30-d08d9e3a7a69

% lcg-del --force srm://<dpm_server_name>/dpm/<dpm_domain_name>/home/<vo_name>/generated/2013-09-20/file7fa6f030-f029-418c-a60c-5a8d04253a68

% lcg-lr lfn:/grid/<vo_name>/data/ADNI/IMAGES/128_S_4607/ADNI2/nG+ADNI2+128_S_4607+20121109+0847+S174741+3T0+T2ST+ORIG+V01.tar.bz2
srm://<dpm2_server_name>/dpm/<dpm2_domain_name>/home/<vo_name>/generated/2013-09-20/file2e44af61-a0e0-4868-af30-d08d9e3a7a69

% lcg-rep -d <dpm_server_name>
lfn:/grid/<vo_name>/data/ADNI/IMAGES/128_S_4607/ADNI2/nG+ADNI2+128_S_4607+20121109+0847+S174741+3T0+T2ST+ORIG+V01.tar.bz2

% lcg-lr lfn:/grid/<vo_name>/data/ADNI/IMAGES/128_S_4607/ADNI2/nG+ADNI2+128_S_4607+20121109+0847+S174741+3T0+T2ST+ORIG+V01.tar.bz2
srm://<dpm_server_name>/dpm/<dpm_domain_name>/home/<vo_name>/generated/2014-02-12/filecb922278-02c3-4642-b085-0f3695c9aaee
srm://<dpm2_server_name>/dpm/<dpm2_domain_name>/home/<vo_name>/generated/2013-09-20/file2e44af61-a0e0-4868-af30-d08d9e3a7a69

Mail Server Migration


Goal

Migrating a mail server from one machine (a physical FreeBSD server) to another (a Debian virtual machine) without losing mails.

Mail server

Retrieving mails from other email providers

  • fetchmail (cron)
crontab -e
#*/3 * * * * $HOME/bin/getmymailnow > /dev/null
~/bin/getmymailnow
#!/bin/sh
LOCKFILE="$0.lock"
PATH="/bin:/usr/bin"

if [ -f "$LOCKFILE" ]; then
  PID=$(cat "$LOCKFILE")
  if ! ps $PID > /dev/null 2>&1; then
    echo "Ignoring stalled lock file" >&2
  else
    echo "Script already running (PID=$PID)" >&2
    exit 1
  fi
fi

# run fetchmail in the background so that its PID can be recorded in the lock file
fetchmail -a -s -m "procmail -d %T" 2>&1 &

echo $! > "$LOCKFILE"
wait
rm "$LOCKFILE"

#/usr/bin/gotmail

exit 0

Tools used for local/virtual mail handling

  • Postfix
  • Dovecot
  • Spamassassin
  • roundcube
  • procmail
  • fetchmail
  • bind

Initial step

  • Create required users on new server
  • Configure postfix on new server as it was on old one

    • remove the mail domain from mydestination so that mail for it is relayed to the old server
    • set relayhost to the old server
  • If different domains should be relayed to different places:

/etc/postfix/main.cf
transport_maps = hash:/etc/postfix/transport_maps
/etc/postfix/transport_maps
domain.tld smtp:[mail.plop.tld]
postmap /etc/postfix/transport_maps
service postfix restart
  • Update MX in DNS conf to use new server

All mail should now go to the new server, which will relay it to the old server.
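
A quick way to check that the switch actually works (domain and address are placeholders):

# check which hosts currently receive mail for the domain
dig +short MX domain.tld
# send a test message and watch it being relayed in the new server's mail log
echo "relay test" | mail -s "relay test" someone@domain.tld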

  • Install dovecot on new server
  • Migrate and update conf (http://wiki2.dovecot.org/Upgrading/2.0)
  • Migrate/update required certificates
  • Make an initial copy of the mailstores to the new server using rsync
rsync -avz --stats ~plop/Maildir -e ssh plop@new.server.tld:
  • Validate that imap/dovecot is working as expected
openssl s_client -connect new.server.tld:993
a01 login plop the_PassWord
a02 SELECT INBOX
a03 logout
  • Copy spamassassin conf (global/local) to new server (and review it)
  • Copy procmail conf to new server (and review it)
  • Copy fetchmail conf (global/local) to new server (and review it)

  • Install postgresql on new server

  • Migrate the roundcube postgres database/user (a dump/restore sketch follows this list)
    • The database dump has to be updated as the postgres user name is different
sed -i 's/pgsql/postgres/g' roundcube.sql
  • Install roundcube on new server
  • Update roundcube conf
  • Validate that roundcube is working as expected

  • Wait at least a week to ensure that DNS will be up-to-date with new MX (and check that you have a small DNS TTL)

  • Update DNS entries for SPF
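
A dump/restore sketch for the roundcube database mentioned above, assuming the target database and role already exist on the new server (names are illustrative):

# on the old server (on FreeBSD the PostgreSQL superuser is usually pgsql)
pg_dump -U pgsql roundcube > roundcube.sql
# copy roundcube.sql to the new server, apply the sed shown above, then import it
psql -U postgres -d roundcube -f roundcube.sql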

Final step – including small downtime

  • Stop email fetching on old server
  • Stop postfix on old server
  • Stop dovecot on old server
  • Stop roundcube vhost on old server
  • Make an incremental copy of the mailstores using rsync, deleting emails that are no longer present
rsync -avz --delete-after --stats ~plop/Maildir -e ssh plop@new.server.tld:
  • Configure postfix on new server:
    • disable relaying to old server
    • fix mydestination
  • Switch IPs (or hostnames if not possible) to new server
  • Update required hostnames in new server
  • Update roundcube postgres database
  • Enable email fetching on new server
  • Configure postfix on old server to relay mails to new server
  • Clean old server
  • Clean DNS conf

No emails should be lost: if the server is unavailable, the sending servers will queue and resend the mails.