[Ubuntu 19.04] [Docker] [Docker-compose] Stack trace when running docker build from golang-docker-credential-helpers.

I had been seeing some issues with the docker build command on my new T480 running Ubuntu 19.04. A stack trace appeared when running the command, but it didn't seem to impact the build process: the image was still properly built at the end.

It looked like a Go package might be the cause and, digging around, it seems I wasn't the only one with the issue.



Removing the package seems to be enough to fix the issue: sudo apt-get remove golang-docker-credential-helpers. I am not using any Docker credential helpers, so it's not a big loss for the moment.

The fix is coming in 0.6.2!





[Debian / Ubuntu] Scrolling input is being duplicated through minimized Electron applications.

If you have been using Slack/Chrome and VSCode for the past few weeks, you might have experienced the issue. Basically, when VSCode is minimized and you scroll in Chrome with the mouse wheel, the input is "stacked" and applied when VSCode is brought back to the forefront. When you click inside the application window, the queued input is immediately processed.

This causes the cursor to jump around and is a pretty big annoyance.

Ubuntu 19.10 fixes the issue, but I've since reinstalled Debian 9.9, which still faces it.

You can track the following GitHub issue for more details: https://github.com/Microsoft/vscode/issues/28795



Ansible & Cisco – Automating configuration management.

Configuring network equipment has always been somewhat of a tedious affair. Copying and pasting a configuration file through the console port doesn't scale (or you need a lot of interns!) and other solutions like Cisco Prime are slightly overkill if you only want to change a few lines of configuration.

This is where Ansible comes to the rescue. Originally built as a Linux configuration management tool, in the vein of Chef/Puppet/Salt, it's built around an agentless SSH push model. Using the SSH connection directly, it remotely executes commands you define in "Playbooks". This is why it's a great fit for network devices, as those are still Telnet/SSH based. APIs and NETCONF are becoming more and more common, but SSH is still present, especially on older network equipment.

Recently, Ansible has been augmented with a series of modules that allow a network operator to leverage Ansible to deploy configuration to remote equipment. For example, if you just realized that your template is mysteriously missing NTP, Syslog and SNMP configuration and there are about 40 pieces of equipment deployed (I'm not saying this totally happened to us), Ansible is here to help.

The playbook does the following:

  • Defines a role, cisco-ios-common, which holds all the default configuration used by our devices.
  • In our case, the SSH connection is initiated from the sysadmin computer running the playbook, but it can be adapted to run from a bastion host.
  • Skips fact gathering, since it isn't needed here.
  • Creates the variable holding the credentials needed to access the equipment.
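
The playbook file itself didn't survive in this copy of the post; here is a sketch of what a playbook matching the points above could look like, assuming the classic "provider"-based ios modules (the inventory group and variable names are my placeholders, not the original file):

```yaml
---
- hosts: cisco_switches      # placeholder inventory group
  connection: local          # SSH is initiated from the machine running the playbook
  gather_facts: no           # no need to gather facts
  vars:
    # Credentials referenced by the tasks through the "provider" statement.
    provider:
      host: "{{ inventory_hostname }}"
      username: "{{ ios_username }}"
      password: "{{ ios_password }}"
      authorize: yes
      auth_pass: "{{ ios_enable_password }}"
  roles:
    - cisco-ios-common
```

The secret values behind ios_username and friends would live in an encrypted vault file rather than in the playbook itself.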

The actual file with the tasks is fairly simple.

These are literally lines that are parsed from the "show running-config" output and compared against each task. If a line is missing, it's added or changed. In this case, we configure the RO SNMP community and the NTP server. To be extra careful, we also take a backup of the running-config before applying the changes. Using the "provider" statement, we reference the variables defined in the playbook file. All other private variables can be stored in an encrypted Ansible vault file.
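
The tasks file is gone from this copy of the post, but based on the description above it would be something along these lines, using the ios_config module (community and server values are placeholders):

```yaml
---
- name: Back up the running-config before changing anything
  ios_config:
    provider: "{{ provider }}"
    backup: yes

- name: Ensure the RO SNMP community is present
  ios_config:
    provider: "{{ provider }}"
    lines:
      - snmp-server community {{ snmp_ro_community }} RO

- name: Ensure the NTP server is present
  ios_config:
    provider: "{{ provider }}"
    lines:
      - ntp server {{ ntp_server }}
```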

With the appropriate inventory file I was able to quickly fix our deployment mistake without manually connecting to every single switch.


Vagrant and Ansible – Building a test cluster from scratch.

Sometimes, you don't have the luxury of a dedicated lab to test your shiny new tools. Over the past few weeks, I've needed to test multiple Ansible playbooks that I'm writing. I could spin up new VMs manually in VirtualBox, but it's a tedious process that doesn't scale when you are building a large environment. This is where Ansible and Vagrant can be useful.

Vagrant is basically an abstraction layer between you and your virtualization provider. It's designed so that you can interact with a wide variety of hypervisors from a single tool. You can use it as a quick way to tell VirtualBox to spin up any number of VMs, much faster than using the command line.

Ansible is an orchestration and configuration management tool. In the line of tools like Puppet, Chef and Salt, it's designed as a single source of truth for server configuration and resource management. In a perfect world, nothing is managed manually and no one should have to log into a server. Everything is defined and templated within Ansible and then applied to the servers.

Here is an example of a Vagrantfile used by Vagrant in order to know what it must do.
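
The original file didn't survive in this copy of the post; this is a reconstructed sketch matching the description below (the box name, hostnames and IPs are my assumptions):

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "debian/stretch64"        # Debian 9 base image (assumed box name)

  config.vm.provider "virtualbox" do |vb|   # CPU/memory resources per VM
    vb.cpus = 1
    vb.memory = 1024
  end

  # Worker nodes first, so they already exist when the master provisions.
  ["dhmtl2", "dhmtl3"].each_with_index do |name, i|
    config.vm.define name do |node|
      node.vm.hostname = name
      node.vm.network "private_network", ip: "192.168.56.#{11 + i}"
    end
  end

  # "dhmtl1" acts as the Ansible master: the ansible_local provisioner
  # installs Ansible on the VM itself, then runs the playbooks against
  # all the nodes.
  config.vm.define "dhmtl1" do |master|
    master.vm.hostname = "dhmtl1"
    master.vm.network "private_network", ip: "192.168.56.10"
    master.vm.provision "ansible_local" do |ansible|
      ansible.playbook = "site.yml"          # assumed playbook name
      ansible.limit = "all"
      ansible.inventory_path = "inventory"   # assumed static inventory file
    end
  end
end
```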


At its core, it's a simple list of instructions for Vagrant.

  1. You select a base image with the config.vm.box line; in our case, Debian 9.
  2. You set the CPU/memory resources per VM, as well as DNS/NAT options and the networking for the cluster of VMs.
  3. The interesting part is the "Master" configuration section, where I call Ansible to actually configure the cluster of machines. Usually, Ansible is run by the system administrator from either a bastion host or their own machine. In this case, the VM called "dhmtl1" acts as the Ansible master: it actively connects to the other hosts and uses the playbooks I provide.

It basically goes like this:

  1. From within the directory where the Vagrantfile is located, I call vagrant up --provision
  2. Reading the instructions from the Vagrantfile, Vagrant creates three machines and sets any specific options I have selected.
  3. Using the machine labeled "dhmtl1" as the Ansible master, it installs Ansible on itself and uses my playbooks to dynamically apply the configuration to itself and all the other nodes. Once the playbooks have finished running, I have a fully functioning cluster of machines with all of my roles deployed.

Since it's running a real Debian image, I can fix whatever doesn't work. I have since deployed all the playbooks to a real infrastructure, and I can report that they deployed on the first try. The playbooks were originally written for Debian 8 and had to be extensively modified to work on Debian 9; Vagrant allowed me to easily switch the image from Debian 8 to 9 and see what broke so that I could fix it.

One other bonus is that the testing platform is very portable. I can send the Vagrantfile and the playbooks to a coworker and they can immediately have their own environment up and running. You could even couple everything with a self-provisioning Ansible playbook that installs VirtualBox and Vagrant.

This is just a small taste of what Vagrant and VirtualBox can allow you to do.


[Ubuntu 16.04] [kubeadm] An easier way to spin up Kubernetes clusters

Over the past few days, I've been experimenting with Kubernetes (also known as k8s) as a way to scale application containers. Kubernetes is a cloud-ready platform developed by Google and is seen mostly in provider clouds like GCE and AWS. There are ways to deploy bare-metal clusters, but it's a mix and match of different tools without a specific one in the lead. For hosting on AWS, "kops" is a well-maintained and production-ready tool. For GCE, Google already offers the option to deploy k8s clusters.

For bare-metal clusters, a tool named "kubeadm" is currently being worked on by the community/Google to provide a quick and secure way to deploy a k8s cluster. It's currently in alpha and not quite ready for production, as it cannot deploy HA clusters just yet. But it still is an awesome way to spin up a quick test cluster to play around with.

I’ve recently built a few custom Ansible playbooks in order to leverage the automation platform to remotely build my clusters without ever logging into a server.


To use it, simply follow these steps!

  • Install Ansible
  • Install Make
  • Edit the k8s-inventory file to reflect your machines.
  • From the root folder, use the following syntax: make k8s-*TAB for auto completion*
  • I recommend deploying the "k8s-common" role first, as it ensures each machine is set up to support the Kubernetes components.
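
For reference, a minimal sketch of what the k8s-inventory file could look like; the group names and addresses here are my assumptions, not the actual file:

```ini
[k8s-master]
master1 ansible_host=192.168.1.10

[k8s-node]
node1 ansible_host=192.168.1.11
node2 ansible_host=192.168.1.12

[k8s:children]
k8s-master
k8s-node
```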

Old school throwback – How to register your nickname to an IRC server.

IRC is back!

Wait, not really. I just had to connect to a few IRC channels for research reasons. Depending on the IRC server, you might be required to register your nickname. Here is how to do it:
First, create the identity on your IRC client. Then, join the server hosting the channel and type:

/msg NickServ REGISTER password your@email.com
/msg NickServ IDENTIFY identity_name password

Then, you can use /join #channel_name to actually enter the chat.


Ubuntu 14.04 – Installing Graphite to visualize Icinga2 data with Grafana.


#Initial installation from the graphite bits and pieces | No configuration
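
The original commands are missing from this copy of the post; a plausible reconstruction for Ubuntu 14.04 using the distro packages (package names taken from the trusty repositories, double-check on your system):

```shell
# Carbon daemon and the Whisper storage backend, straight from the repos.
sudo apt-get update
sudo apt-get install -y graphite-carbon python-whisper

# Start the carbon-cache daemon (listens on TCP 2003 for plaintext metrics).
sudo service carbon-cache start
```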




#Installation of the graphite-web to use the API so that Grafana can query Graphite.
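
Again, the original commands are gone; a sketch of this step based on the Debian/Ubuntu packaging of graphite-web (the vhost file path is an assumption from that packaging):

```shell
# graphite-web exposes the render/metrics API that Grafana queries.
sudo apt-get install -y graphite-web apache2 libapache2-mod-wsgi

# Initialize the graphite-web database (SQLite by default).
sudo graphite-manage syncdb

# Serve graphite-web through Apache using the sample vhost shipped with
# the package, then reload Apache.
sudo cp /usr/share/graphite-web/apache2-graphite.conf /etc/apache2/sites-available/graphite.conf
sudo a2ensite graphite
sudo service apache2 reload
```

Once Apache is serving graphite-web, Grafana can be pointed at it as a Graphite data source.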











CentOS 6.5 – Counter-Strike: GO startup script

Linux Source / other game servers made easy: http://danielgibbs.co.uk/lgsm/

I whipped up a quick script that calls the LGSM script itself without having to move it around. Fairly simple, but useful nonetheless.
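
The script itself is missing from this copy of the post; here is a hypothetical wrapper in the same spirit. The user name and script path are assumptions, adjust them to your install:

```shell
#!/bin/sh
# Hypothetical wrapper around LGSM's csgoserver script; GAME_USER and
# GAME_SCRIPT are assumed values, not the original ones.
GAME_USER="csgoserver"
GAME_SCRIPT="/home/csgoserver/csgoserver"

usage() {
    echo "Usage: csgo {start|stop|restart|details|update}"
}

dispatch() {
    case "$1" in
        start|stop|restart|details|update)
            # Run the LGSM script as the unprivileged game user so the
            # server files keep the right ownership.
            su - "$GAME_USER" -c "$GAME_SCRIPT $1"
            ;;
        *)
            usage
            return 1
            ;;
    esac
}

# Only dispatch when an action was given on the command line.
[ $# -gt 0 ] && dispatch "$1" || :
```

Dropped somewhere in $PATH, this lets you run "csgo restart" from anywhere instead of cd-ing into the game server's home directory.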