Introduction

Hello, thank you for visiting my site.

Let me start by introducing myself. My name is Yannick M. Richard. Who I am is a different philosophical question entirely, but the one theme that has recurred throughout my life is my passion for engineering and technology.

From a young age, I was very much into toys like Lego and puzzle games. I also grew up in a family that valued doing things themselves and fixing things when they broke instead of buying new ones. My grandfather was a technician for Canadian National Rail for most of his life, and my father also worked in various technical roles for different Canadian government subcontractors. I would often find myself in my dad's workshop while he was soldering electronics, and I helped out and learnt what I could whenever anything of a technical nature was happening in the home, whether it was installing solar panels, wiring the house, or anything related to renovations. I was always curious to learn as much as possible about our physical reality, the way things are built, and how we interact with them.

I started working from a young age, and my very first purchase with money saved from a summer job was a mountain bike. The second was a set of computer parts to build my very first computer at the age of 16, paid for by working as a line cook in a local sports bar. By then, we had already had computers in my home for at least 7-8 years. I still vividly remember using floppy disks to play 2D DOS games like Commander Keen, and the introduction of dial-up.

Being born in 1989, I quite literally grew up with the internet. Not in the sense that kids today are born and the internet is already there, well established and dominated by social media, tech giants, and trendy garbage. When I was younger, it still felt a little nerdy, not yet understood or used by the vast majority of the population. That fact fuelled my curiosity towards it.

My first foray into the high-tech world started in high school, funnily enough, with Myspace and Geocities. Myspace allowed HTML customisation, so I thought it was cool to differentiate my profile from others in my school by tweaking code that most people probably wouldn't have understood at the time. Geocities, it turns out, I used to host keyloggers that I would fool people into downloading and running, to steal the contents of their videogame accounts in online games. Even at a young age, I could easily grasp the magnitude of the malicious potential of networks and inter-connected software.

Fast forward almost two decades: I have since studied Electromechanics as well as Engineering, worked on and off as a contractor and freelancer in various areas of Information Technology, and supported and/or worked for organisations of all sizes, from individual entrepreneurs and small businesses up to Government.

What I consider my "real world skills" education was the time I spent in trade school studying Electromechanics, which covers the more hands-on side of technology: machining, welding, electronics, hydraulics, pneumatics, electricity, and motors. I find this gives me deeper insight into the electronics and physical components used in modern technology, and I often use my understanding of electronics to grasp what is going on "behind the scenes" in the interactions between the various levels of the OSI model. It helps to understand what is going on with compute, RAM, disks, video cards, or hardware in general, when you can picture how the bits are moving through a logical system at the physical level.

As for what I'd consider my more "professional" or "formal" education: given my already advanced understanding of computers, when I decided to study Engineering I gravitated towards Systems and Processes Engineering, as I felt it was a great complementary engineering framework to wrap around my passion for technology and computers in general. It is a bit more generalised, focused on understanding systems of people, machines, processes, resources, and money as a whole, from the perspective of supply chains or service providers with internal systems.

I found it quite funny, when I started working more in the software and web service development industry, that a lot of the concepts, especially around project management, methodology, and automation, come straight out of analysing software delivery from a systems and processes point of view. CI/CD pipelines instead of factory floors. Data and information instead of physical resources and raw material. The drive for operational efficiency is still the same.

All that being said, what is it that I am working on these days and what would I like to share?

Great question. After working for a long time with distributed systems, I've found that the recurring pain points that catch teams off-guard are software updates, dependency updates and changes, known or newly discovered CVEs for hosts and software, and the challenges of managing local and cloud development or production environments when a business has teams using potentially different languages, development strategies, and deployment strategies.

As anyone who is familiar with Linux and cloud services can tell you, it is extremely easy to misconfigure something, or to configure it in an insecure manner, if you are not fully aware of the implications of your changes. This is why it is generally considered best practice to have an environment in which to test changes before applying them in production. There are various commercial options, especially cloud providers and "serverless" offerings, which essentially hide the underlying software and operations behind an API and a web console front-end. But at the end of the day, what you are running is still running on Linux, using modified versions of software that is generally publicly available.

You can use tools like Terraform to provision your cloud environment, but unless you go fully "serverless", and at that point lock yourself in with one cloud vendor or a few, there are still going to be services running on Linux somewhere. In my experience, managing many servers with different technology stacks, without some kind of centralisation for your configuration, can prove to be a challenge at scale. Sure, there are tools like Ansible and Chef, but even those apply external complexity to server management, and they still lack some features one would love to see in a convenient way of managing Linux servers.

Managing Linux by hand can be time consuming. In my earlier days with Linux, I spent far longer than I'd like to admit trying to fix things I had broken, only to give up and start over with a clean slate. That was before I discovered NixOS.

Why is NixOS significant? Well, let's start with the core. At its core are the Nix Package Manager and the Nix Expression Language.
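To give a quick taste of the language before going any further, here is about the smallest Nix expression I can think of; you can paste it into `nix repl` to see it evaluate (the greeting itself is obviously just an illustration):

```nix
# A function from a name to a greeting, applied immediately.
let
  greet = name: "Hello, ${name}!";
in
  greet "NixOS"  # evaluates to "Hello, NixOS!"
```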

If you are unfamiliar with Linux, a package manager is software that lets you install, remove, and update software packages and Linux components, and using packages found through your distribution's package manager generally means there is a high likelihood that the software will work on your version of Linux. If you are familiar with Linux, you know DNF/yum, apt-get, apk, pacman/yay, etc. Well, in NixOS, there is the Nix Package Manager. If you are curious and don't necessarily want to go for a whole OS install, the Nix package manager is also available to download and use on other distributions. This can be a great way to test it out first if you are already using Linux but don't want to jump into the Nix world entirely just yet.
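As a sketch of what that looks like: once the standalone Nix package manager is installed on, say, Ubuntu or Fedora, a single file is enough to pull in tools without touching the distribution's own package manager (git and jq here are just arbitrary example packages):

```nix
# shell.nix — try Nix on an existing distro without installing NixOS.
# Running `nix-shell` in this directory drops you into a shell where
# the listed tools are available, fetched into /nix/store.
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  packages = [ pkgs.git pkgs.jq ];
}
```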

Okay, so a fancy package manager. How is it any different?
Well, with most package managers, you install and manage software, versions, and configuration via the terminal, a script, or some connected external process, and it's all done very "interactively": you're modifying config files and installing, removing, or updating things on a live machine, potentially one that is serving requests to customers or running important software.

Even if you know what you are doing, mistakes happen, and configurations don't always work the way you expect them to. Or maybe everything is running fine, but people fear updating anything, and unknowingly that old service keeps running while potentially being full of vulnerabilities that nobody quite knows how to tackle.

With Nix, I guess the biggest difference in the way of looking at service and operating system management is that it can be done "declaratively": many aspects of the operating system and software configuration remain read-only until a change is made to the OS's configuration codebase and tested by the package manager's Nix expression evaluation. Based on your code, the evaluation will either cause the OS to "rebuild", creating a kind of versioned branch of your OS configuration, or fail.

The other neat aspect of this is that every time your OS's configuration is evaluated and the Nix package manager rebuilds the operating system, it also downloads all the dependencies for the services you have configured, and a versioned "generation" of your OS build stays in the Nix store (where all software packages live in the filesystem). This allows you to easily roll back, or boot into a previous or different historical version of your OS's configuration, in case anything doesn't work after a reboot or configuration switch.
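To make that concrete, here is a minimal sketch of a declarative system description; the nginx service and the packages are placeholders I picked for illustration, not a recommendation:

```nix
# /etc/nixos/configuration.nix — the whole machine described in one place.
{ config, pkgs, ... }:

{
  # Enable a service declaratively instead of editing its config by hand.
  services.nginx.enable = true;
  networking.firewall.allowedTCPPorts = [ 80 443 ];

  # System-wide packages; nothing gets installed "interactively".
  environment.systemPackages = with pkgs; [ git htop ];
}

# `sudo nixos-rebuild switch` evaluates this file and, on success,
# activates a new numbered generation; on failure, the running system
# is left untouched. `sudo nixos-rebuild switch --rollback` returns
# to the previous generation.
```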

Did I mention that you can store the configuration file in Git and CI/CD deploy your entire OS, not just the services running on top of it? Yeah, personally I think that's cool as hell, seeing as hardly any other Linux distribution lets you manage your entire operating system using a programming language that can also be used for automated testing and deployment.
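For the curious, a minimal sketch of what that can look like with flakes, the Git-friendly way of pinning an entire system; the hostname and the nixpkgs branch here are assumptions for illustration:

```nix
# flake.nix — a whole NixOS system, pinned and tracked in Git.
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";

  outputs = { self, nixpkgs }: {
    nixosConfigurations.myhost = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [ ./configuration.nix ];
    };
  };
}

# A CI pipeline can build the exact system closure with
# `nix build .#nixosConfigurations.myhost.config.system.build.toplevel`
# before anything is deployed to the machine itself.
```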

Anyway, I won't get too deep into the technical details of my fascination with NixOS and the possibilities it opens up for distributed service development, but I guess my point is that I'm leveraging it at the moment to centralise the management of my systems, including physical computers and cloud instances.