We are in the cloud
We are in the cloud, running on someone else’s computer.
*Sell not virtue to purchase wealth, nor Liberty to purchase power*
Since the whole COVID19 pandemic started a couple of months ago, working from home has become the new hip thing every company brags about on every social media platform known to humankind. The first step to being able to call yourself a proper COVID19 ready(tm) company is the ability to bother every employee with just a few mouse clicks. So here we are, with Microsoft Teams(tm) and a lot of other not very secure and massively bloated software elected as the center of office life. Coffee break? XYZ software chatroom. Kick-off meeting? XYZ software chatroom. And so on. Because of my special snowflake syndrome and my deep hatred for all things Microsoft, and especially Windows, I always end up making my life a bit harder. After having used Teams in a Windows 10 VM for a few weeks (after all, I paid for a license when I got my latest Thinkpad), I decided it was time to finally make it work on my main OS: Fedora 31, which conveniently comes with pipewire and xdg-desktop-portal both installed and configured out of the box. The catch was that I wanted to do this more or less without installing any third party non-free software. Since using the official closed source Electron crapware client was out of the question, the obvious choice was to make Microsoft Teams work in a regular web browser. The situation is the following: …
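Before pointing a browser at Teams it is worth checking that the screen sharing plumbing is actually in place. A quick sanity check on Fedora, assuming the usual package and systemd user unit names (verify against your release):

$ rpm -q pipewire xdg-desktop-portal xdg-desktop-portal-gtk
$ systemctl --user status pipewire.service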
- Introduction

A rather large company that produces parts for some of the most important automotive industry players was interested in a cloud-based system to monitor the overall efficiency of their production machines, analyze some key parameters and optimize the scheduling of production activities. The goal of the project was to connect eleven industrial manufacturing machines to the cloud, extract specific machine data and develop a web application for the visualization and management of such data, while guaranteeing information confidentiality and security. Furthermore, one fundamental requirement of the system was bidirectional integration with the customer's ERP system, in order to synchronize the production JOBs and manage their execution on the corresponding machines. To fulfill these requirements, the project team engineered and implemented a hybrid edge-cloud solution in which the software is packaged into various containers that are orchestrated and managed, at the edge level, by a Kubernetes cluster. This technology ensures optimal load balancing across the available resources as well as high availability in case of hardware or software failures. While IT enterprises no longer question the value of containerized applications, the use of this kind of technology within a manufacturing environment hasn't been completely explored yet. In the following paragraphs we will go into detail on how we engineered and built the system despite all the difficulties we had to overcome. …
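To make the edge layer concrete, here is a minimal sketch of what one of the containerized workloads could look like as a Kubernetes Deployment, shown with the same cat-the-config convention used elsewhere on this blog; the workload name and image are hypothetical, only the replicas/selector mechanics are standard Kubernetes:

cat edge-gateway-deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-gateway                # hypothetical workload name
spec:
  replicas: 2                       # a second replica keeps the service up if a node dies
  selector:
    matchLabels:
      app: edge-gateway
  template:
    metadata:
      labels:
        app: edge-gateway
    spec:
      containers:
      - name: gateway
        image: registry.example.com/edge-gateway:latest   # hypothetical image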
QCOW2 disk images can be easily grown using the libvirt command line utilities. Unfortunately it isn't possible to grow QCOW2 images in place or online. First of all, power off the virtual machine, then grow the image file and make a copy of it:

$ qemu-img resize image.qcow2 +200G
$ cp image.qcow2 image-new.qcow2

Identify the specific partition you intend to grow:

$ virt-filesystems -a image.qcow2 -l
Name       Type        VFS   Label  Size         Parent
/dev/sda1  filesystem  ext4  -      536870912    -
/dev/sda3  filesystem  xfs   -      45885612000  -

Expand the actual partition: …
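The truncated step is presumably done with virt-resize from libguestfs, which copies the source image into the destination while expanding the chosen partition (growing known filesystems such as xfs along the way). A sketch, assuming the xfs partition /dev/sda3 from the listing above is the target:

$ virt-resize --expand /dev/sda3 image.qcow2 image-new.qcow2
$ mv image-new.qcow2 image.qcow2   # keep the expanded copy once the guest boots fine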
Yesterday I was reading the Phoronix articles [0] and [1] on the performance impact of the STIBP mitigation; since I run a pretty old laptop equipped with a Sandy Bridge CPU, I figured I should do my own tests to see how bad things really are (or aren't).

CPU: Intel Core i3-2310M - 2 cores / 4 threads
Motherboard: Lenovo Thinkpad
RAM: 2x4 GB DDR3 @1333 MHz
HDD: Plextor M5pro
OS: Fedora 29 x86_64 with stock kernels

My benchmark of choice is compiling the Linux kernel (version 4.19.2): I download the kernel sources to a /dev/shm ramdisk and compile them using the defconfig configuration, checking how many seconds it takes to complete the task. …
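In practice the benchmark boils down to something like the following; the kernel.org URL and the -j value are my reconstruction, not the author's exact script:

$ cd /dev/shm
$ curl -LO https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.19.2.tar.xz
$ tar xf linux-4.19.2.tar.xz && cd linux-4.19.2
$ make defconfig
$ time make -j4    # -j4 to match the i3-2310M's 4 threads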
For some reason gpg --gen-key still defaults to SHA1 and RSA2048; due to the known weaknesses of SHA1 it is probably a better idea to use SHA256. First of all, we need to create a configuration file:

cat ~/.gnupg/gpg.conf
---
personal-digest-preferences SHA256
cert-digest-algo SHA256
default-preference-list SHA512 SHA384 SHA256 SHA224 AES256 AES192 AES CAST5 ZLIB BZIP2 ZIP Uncompressed

To generate a new key (remember to also specify RSA 4096), type:

gpg --gen-key
### or
gpg --full-generate-key

Other useful commands are: …
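The list is cut short above, so without guessing the author's exact picks, here are a few standard key-management commands worth knowing (all stock gpg flags, KEYID is a placeholder):

gpg --list-keys                # list public keys in the keyring
gpg --list-secret-keys         # list private keys
gpg --fingerprint KEYID        # print a key's fingerprint
gpg --export --armor KEYID     # export a public key as ASCII armor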
Finally we have some new hardware worth writing about and also spending money on. I have been using an AMD Ryzen 7 1700X based build for some time now and so far I am really liking it; the CPU is marvelous considering the price tag and felt like a worthwhile upgrade from the Xeon E3-1241v3 I was using before: it is basically twice the cores clocked at pretty much the same speed. Awesome. There are a couple of points worth spending some words on though. …
This does not really work, read this instead: https://uwot.eu/monitor-hard-disk-smart-status-in-python/

First of all install smartmontools, which has the same name on pretty much every distro:

$ emerge -a1 smartmontools

Proceed to edit its configuration file; at the bottom of the file there is a quick explanation of all the available parameters:

cat /etc/smartd.conf
---
DEVICESCAN -H -R 1 -R 5 -R 7 -R 10 -R 11 -R 196 -R 197 -R 199 -R 200 -m user@domain.tld -n standby,10,q

The -H parameter tells smartd to check the result of the overall-health self-assessment test, which is pretty much useless; -R specifies a single SMART attribute to track, and if its value changes a mail is sent to user@domain.tld. To send emails an MTA must be installed: on CentOS that is sendmail, while on Gentoo a full-fledged MTA is not strictly necessary, nullmailer will suffice. If it is not already installed: …
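The post is cut off here, but on Gentoo the remaining steps would presumably look something like this (the package atom and OpenRC service handling are my assumption):

$ emerge -a mail-mta/nullmailer
$ rc-update add smartd default   # start smartd at every boot
$ rc-service smartd start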