“Do something once. Do something twice? Automate it.” This quote is pretty much the mantra of any budding hacker in the midst of modern computing. To put it into practice, we usually start programming by writing small scripts, but it can go much further.
From preseed to Infrastructure-as-code
This is particularly the case for servers and hosting. Initially, you could automate the configuration of a Linux or Windows distribution by hand, or use PXE for deployment on “bare metal” machines. Then, about ten years ago, came solutions (useful but complicated) such as Chef, Puppet or Tinkerbell to tie it all together. But now the cloud is within everyone's reach.
With it come instances costing a few pennies an hour, not to mention hypervisors such as Proxmox VE that you can run on a local server, in your “home lab”. Anyone can easily launch a ready-to-use server in seconds from various cloud service providers (CSPs).
For users, the need for automation has changed, and solutions had to be devised to unify the configuration and deployment of complete infrastructures, sometimes “multi-cloud” ones.
We have now entered the era of APIs, where a server can be launched with an HTTPS request; of cloud-init, which handles post-installation setup; of Ansible and Terraform, which go a step further. Not to mention the management of services as containers, with deployments via Kubernetes (K8s).
After a few months of “playing” with it all, we are opening a series of articles in which we will discuss these different solutions, what they make possible and their limitations, and the way they allow those managing infrastructures to no longer rely on a particular service provider.
It is one way to improve stability when major outages occur and to limit dependence on technology, a topic we explored in our magazine #3, currently being crowdfunded.
What is cloud-init?
cloud-init was initially an Ubuntu-specific project. As its documentation history shows, the first versions date back to 2010, when it was presented as a package integrated into Ubuntu Enterprise Cloud (UEC) images and those offered by AWS in its EC2 service. At the time, it allowed configuring a few default settings such as the language, hostname, SSH keys, mount points, etc.
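Those early settings map to cloud-config directives that still exist today. As a hedged sketch (the hostname, locale, key and mount values below are illustrative, not from the original setup):

```yaml
#cloud-config
# Basic first-boot settings handled by cloud-init since its early days
hostname: uec-test
locale: fr_FR.UTF-8
ssh_authorized_keys:
  - ssh-ed25519 AAAA... user@example
# Mount entries follow the fstab-like list format: [device, mount point, fs type, options]
mounts:
  - [ /dev/sdb1, /data, ext4, "defaults,nofail" ]
```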
Today it is a true standard of the cloud industry. The package is integrated into Ubuntu Server and Cloud images as well as EC2 ones, but also into many public clouds and, above all, many other distributions: Alpine, Arch, Debian, Fedora, Gentoo, openSUSE, Red Hat and their derivatives, as well as all the major BSDs.
The tool, still open source (dual-licensed Apache 2.0 and GPLv3), is also more complete, as its documentation shows. The idea is still to create a configuration file (user-data) indicating the parameters to be applied at first boot (but not only). It can be complemented with instance metadata.
As examples in its documentation show, it is also designed to work alongside tools like Chef and Puppet. Feel free to explore its various possibilities.
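For instance, cloud-init's Chef module can bootstrap a node at first boot. A minimal sketch based on the cc_chef module's documented keys; the server URL, organisation and node names are placeholders of ours:

```yaml
#cloud-config
chef:
  # Install Chef from distribution packages (other options: gems, omnibus)
  install_type: packages
  server_url: https://chef.example.com/organizations/myorg
  node_name: web-01
  validation_name: myorg-validator
  # Recipes to apply on first run
  run_list:
    - recipe[nginx]
```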
How does this work?
There are many ways to pass user data. The most classic, which we will study in this first example, is the cloud-config file, written in YAML. It is therefore a text file containing parameters, with a relatively classic formatting.
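Note that for cloud-init to treat the file as a cloud-config document, its very first line must be the #cloud-config header. A minimal illustrative example (the values are ours):

```yaml
#cloud-config
hostname: test-server
timezone: Europe/Paris
package_update: true
```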
Let’s move on to a small textbook case with a fairly simple configuration file that we use for test machines and instances. It updates the system, then creates a user, installs various applications, downloads files, unpacks them, creates aliases, etc. Its contents are as follows:
#cloud-config
# Update the system
package_update: true
package_upgrade: true

# Create the user with sudo rights, providing their public SSH key
users:
  - name: davlgd
    groups: [ wheel, sudo ]
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    ssh_authorized_keys:
      - paste the contents of your public key here

# Install the necessary applications
packages:
  - p7zip-full # at minimum, 7z is needed for the extraction commands below

runcmd:
  # Move into the user's folder
  - cd /home/davlgd
  # Add an alias to easily update the system
  - echo "alias fup='sudo apt update && sudo apt full-upgrade -y && sudo apt autoremove -y'" >> .bashrc
  # Disable SSH login for the root account
  - sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/g' /etc/ssh/sshd_config
  - systemctl restart sshd.service
  # Download the working files
  - wget http://ftp.nluug.nl/pub/graphics/blender/demo/movies/ToS/tearsofsteel_4k.mov
  - wget https://download.blender.org/demo/test/BMW27_2.blend.zip
  - wget https://download.blender.org/demo/test/benchmark.zip
  # Extract the archives and clean up
  - 7z x BMW27_2.blend.zip && rm BMW27_2.blend.zip
  - 7z x benchmark.zip && rm benchmark.zip
  - mv bmw27/*.blend . && rm -r bmw27
  - mv benchmark/*.blend . && rm -r benchmark
  # The files having been created by root, change their owner
  - chown davlgd:davlgd *

final_message: "The system is configured. Time required: $UPTIME seconds"
If you like, you can also add an automatic reboot:
power_state:
  mode: reboot
  message: Rebooting the system

In the settings above, you can also set a timeout (in seconds) or a delay, which can be specified in the form "+5" for 5 minutes, for example. A machine shutdown can be requested with mode: poweroff.
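Putting those parameters together, the shutdown variant could look like this; the exact values are illustrative:

```yaml
power_state:
  mode: poweroff
  # Shut down 5 minutes after boot finishes
  delay: "+5"
  # Give cloud-init up to 120 seconds to complete first
  timeout: 120
  message: Shutting down in five minutes
```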
Creating a VM or instance with cloud-init
You can use this when creating a virtual machine (VM) with Multipass in the case of Ubuntu. The Freebox Delta also relies on a cloud-init field to configure its VMs installed from the cloud images of various distributions. You can modify it and adapt it to your needs.
Some hypervisors natively support cloud-init, such as Proxmox VE. It comes as a section in your VM's settings, where you can declare the content of the configuration. You just have to fill it in before the first boot for it to be taken into account at startup.
As mentioned above, many hosts support cloud-init, such as Gandi, OVHcloud and Scaleway in France or Infomaniak in Switzerland, which has just launched its OpenStack IaaS. You can thus declare the configuration in the management interface when creating a virtual machine. At Scaleway, go through the advanced options:
On the road to automation, the Scaleway CLI example …
But you can also create instances automatically, using the company's open source CLI (Apache 2.0 license), scw. It can be installed through various package managers, including Chocolatey (but not yet winget). It is also distributed as binaries that you just need to download and run.
You will need to identify yourself via your email and password, or an API key. You can also easily add an SSH key to your account during the initialisation phase, which is launched like this:

scw init
You can then create an instance using just one command:
scw instance server create type=DEV1-S image=ubuntu_focal zone=fr-par-1
The documentation specifies that the list of instances is obtained this way:
scw instance server list
To target a specific project, add its identifier:
scw instance server list project-id=identifiant
To get details on a command, just add --help:
scw instance server --help
… with some necessary adjustments
It is also possible to provide cloud-init configuration files. The client initially caused us problems, but we confirmed that it was a known bug, fixed in version 2.4.0. The same goes for the documentation, to which we suggested changes, since it is published under a CC BY-NC-SA 4.0 license.
All you have to do is launch instance creation like this:
scw instance server create type=DEV1-S image=ubuntu_focal zone=fr-par-1 cloud-init=@cloudconfig.yml
The old method was to create an instance that would not start by default:
scw instance server create type=DEV1-S image=ubuntu_focal zone=fr-par-1 stopped=true
This displays its parameters, from which you can retrieve its identifier (ID). Put your YAML configuration in a file (cloudconfig.yml in our case), then type the following command:
scw instance user-data set key=cloud-init server-id=identifiant content=@cloudconfig.yml
If you go back to the Scaleway interface, you will see in the instance's advanced settings that the configuration file has been taken into account. All that remains is to start the instance:
scw instance server start identifiant
You will find similar functionality in the CLIs offered by other CSPs.