
#rhel


Well, my first conclusion after about two weeks of using Linux Mint: mixed.

If you've been using Fedora Linux for years, it's a noticeable step backwards. What runs smoothly under Fedora, where devices are recognized without any problems, requires a little extra help with Mint - and often more time...

Examples: I have a Brother DCP3515 multifunction laser printer. Under Fedora, it is recognized immediately via WLAN, and if you want to print something, it's done in seconds. With Mint, a connection was at first only possible via USB; only then could I set it up via WLAN. And it takes a few minutes for the printout to come out of the printer. Very annoying!

Or my uGreen NAS: under Fedora, it is there immediately after logging on to the PC when I click the shortcuts in the file explorer. Under Mint it feels like it takes forever before I can access the NAS at all. And that's quite apart from the fact that it didn't work out of the box: I had to install Samba first. I find annoying little things like these in every corner of Mint.

So I can't understand why Linux Mint is constantly cited as the ideal distribution for introducing Windows users to the Linux world. Just because of the Win10-like Cinnamon desktop? Unless it's about ideology, as it is for me, I would advise anyone to use Fedora instead. And even I'm beginning to doubt whether it's worth it...

Cool feature to help with troubleshooting your system!

"sosreport acts as a black box recorder for Linux — capturing everything from system logs and kernel messages to active configurations and command outputs — helping support engineers trace problems without needing direct access to the system."

Learn more: fedoramagazine.org/%F0%9F%94%A

Fedora Magazine · 🔧 Deep dive into sosreport: understanding the data pack layout in Fedora & RHEL - This article is a description of the structure and contents of the "sosreport" tarball.

The Best Boring #Benchmarks: #RockyLinux10 & #AlmaLinux10 Performance Against #RHEL10 Review
Testing on an AMD EPYC 9755 2P (EPYC Turin) server and using the same hardware across all tests, the performance of #RockyLinux 10 and #AlmaLinux 10 was right on par with #RedHat #EnterpriseLinux 10 itself. Hence the best kind of boring benchmarks: performance that is right on track for where it should be.
phoronix.com/review/almalinux-
#RHEL #Linux

www.phoronix.com · The Best Boring Benchmarks: Rocky Linux 10 & AlmaLinux 10 Performance Against RHEL 10 Review

#Debian is going down the snake-oil road to #AI.

They are going to bundle in AI with #RHEL. It's just a matter of time before this garbage finds its way downstream into every Debian-derived distro.

Self developing and self updating code? Seriously? FUCK NO. Not today, not ever. Burn that shit with fire.

Looks like I am done with Debian. Gotta find a new distro. A shame because I really like #BunsenLabs.

Hate to see this filth seeping into the #Linux distro world.

As #RHEL clones hit version 10, Rocky and Alma chart diverging paths
Take a quick look at the headline features – and the growing differences
Perhaps the biggest and most obvious technological difference in this version is that #AlmaLinux offers a separate version for x86-64-v2 hardware. #RHEL10 itself, and Rocky with it, now require x86-64-v3, meaning Intel "Haswell" or newer.
But you'd expect all the RHELatives to be similar. That remains their primary selling point.
theregister.com/2025/06/14/roc

The Register · As RHEL clones hit version 10, Rocky and Alma chart diverging paths - By Liam Proven

Thrilled to have a new, production-ready CI/CD pipeline live! It automatically builds and deploys my Jekyll static WIP site (hofstede.it) on every push to the main branch.

The architecture is a showcase of modern Linux tools:

🔹 Server running on Red Hat Enterprise Linux 10 (RHEL)
🔹 Forgejo for Git hosting & Actions.
🔹 A rootless Forgejo Runner, running in Podman, managed by a systemd Quadlet file.
🔹 Traefik reverse proxy running as a Podman container.
🔹 An Nginx web server for the site, also in a container for easy discovery by Traefik.

The Forgejo Runner and the Nginx Webserver run in different unprivileged user contexts.

The magic is the secure bridge between the rootless CI job and the web server. The pipeline creates a build artifact, and a systemd.path watcher on the host instantly triggers a deployment script.
It's fully decoupled, secure, and works like a charm.
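For anyone curious what that bridge can look like, here is a minimal sketch of a systemd.path watcher and its companion service. The unit names, artifact path, and deploy script below are placeholders, not the exact setup from this pipeline:

# site-deploy.path - fires whenever the CI job drops a new artifact
[Path]
PathChanged=/srv/ci-artifacts/site.tar.gz

[Install]
WantedBy=multi-user.target

# site-deploy.service - started automatically when the path unit above triggers
[Service]
Type=oneshot
ExecStart=/usr/local/bin/deploy-site.sh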

Wanted: a Computing Engineer at the start of your career, with a strong foundation in Linux and a desire to work on critical computing services for the research community. Join the CERN Linux Team to help operate, evolve and support Linux-based services used by thousands of scientists across the organisation and worldwide.

smartrecruiters.com/CERN/74400

CERN · Linux Engineer (IT-CD-CLI-2025-109-GRAE)

Your responsibilities
Are you a passionate Computing Engineer at the start of your career, with a strong foundation in Linux and a desire to work on critical computing services for the research community? Join the CERN Linux Team to help operate, evolve and support Linux-based services used by thousands of scientists across the organisation and worldwide.
The CERN IT Linux Service provides essential infrastructure for the scientific community, including public software mirrors, package build systems, and network-booting (PXE) infrastructure. We support a hybrid environment based on Red Hat Enterprise Linux (RHEL), AlmaLinux, and Debian, and we contribute upstream to open-source communities.
In this position, you will:
* Help operate and maintain key Linux services at CERN.
* Contribute to package building infrastructure for RPM- and DEB-based systems.
* Support PXE boot services and associated configuration tools.
* Collaborate with upstream open-source communities and partner HEP institutes to improve Linux tooling and reliability.
* Develop automation and tooling to streamline operations.
* Provide support and guidance to CERN's Linux user community.
* Work with modern DevOps tools, including GitLab CI and configuration management systems.

Your profile
Skills:
* Essential to have: experience using and managing Linux systems.
* Expected to have: basic experience in service operations or system administration.
* Useful to have: understanding of Linux package management (RPM/DEB).
* Nice to have: experience with GitLab CI/CD pipelines, Python development and configuration management systems (such as Puppet or Ansible).
* Spoken and written English, with a commitment to learn French.
Eligibility criteria:
* You are a national of a CERN Member or Associate Member State.
* By the application deadline, you have a maximum of two years of professional experience since graduation in Computer Science or a related field, and your highest educational qualification is either a Bachelor's or Master's degree.
* You have never had a CERN fellow or graduate contract before.
* Applicants without a university degree are not eligible.
* Applicants with a PhD are not eligible.

Additional information
Job closing date: 10.07.2025 at 23:59 CEST.
Contract duration: 24 months, with a possible extension up to 36 months maximum.
Working hours: 40 hours per week.
Target start date: 01-September-2025.
This position involves:
* Stand-by duty, when required by the needs of the Organisation.
* Work during nights, Sundays and official holidays, when required by the needs of the Organisation.
Job reference: IT-CD-CLI-2025-109-GRAE
Field of work: Software Engineering and IT

What we offer
* A monthly stipend ranging between 5196 and 5716 Swiss Francs (net of tax).
* Coverage by CERN's comprehensive health scheme (for yourself, your spouse and children), and membership of the CERN Pension Fund.
* Depending on your individual circumstances: installation grant; family, child and infant allowances; payment of travel expenses at the beginning and end of contract.
* 30 days of paid leave per year.
* On-the-job and formal training at CERN as well as in-house language courses for English and/or French.

About us
At CERN, the European Organization for Nuclear Research, physicists and engineers are probing the fundamental structure of the universe. Using the world's largest and most complex scientific instruments, they study the basic constituents of matter - fundamental particles that are made to collide together at close to the speed of light. The process gives physicists clues about how particles interact, and provides insights into the fundamental laws of nature. Find out more on http://home.cern. Diversity has been an integral part of CERN's mission since its foundation and is an established value of the Organization. Employing a diverse workforce is central to our success.

Our @stdevel was at the #RedHatSummit in Boston and brought back news that we discuss together.

#RHEL 10 has been released and brings, among other things, a matured Image Mode and an interesting filesystem change. Red Hat Satellite 6.17 supports Flatpaks for the first time and introduces a first on-prem Insights service. There is also news around Ansible and OpenShift. Exciting roadmaps and the new RHOKP round off the conference.

🎙️ user.space/e002-red-hat-summit

Manage your Linux systems like a container!

I’ve got to tell you, I have not been so excited about a technology… probably since Containers. At Summit this year Red Hat announced the General Availability of Image Mode for RHEL. So I got to spend a week in Boston, explaining, over and over again, why that’s important.

See, Image Mode is kind of a big deal. It takes container workflows and applies them to your data center servers using a technology called bootc. This concept isn't exactly new; this sort of technology has been applied to edge devices, phones, and other appliances for years. But what we have now is a general-purpose Linux that you can update using a bootable container image. That changes things.
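To make the update model concrete, here is a minimal sketch of day-2 operations on a host deployed this way, assuming a machine already provisioned from a bootc image (the image reference below is a placeholder):

# Show which bootable container image the host is currently tracking
sudo bootc status

# Point the host at a different image in your registry (placeholder reference)
sudo bootc switch registry.example.com/myorg/rhel-bootc-custom:latest

# Pull and stage the newest version of the tracked image; it takes effect on the next reboot
sudo bootc upgrade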

So think about a Linux system as you know it today. We’re calling that Package Mode now in order to avoid confusion. RHEL Package Mode is a Linux base, with a package manager, where you install and configure things, and then fight to keep those things from drifting pretty much from then until eternity. There’s a whole facet of the IT industry around mitigating that drift. Package and config management is a huge business! For good reason! Drift is what makes your routine 2AM maintenance into a panic attack when the database server doesn’t come back up.

So I talked a lot about Image Mode at Summit, but I have to admit, I hadn't touched it yet! Now that I'm back home and my time is a little less consumed by prep for the RHEL 10 release and Summit deadlines, I decided to take some time and get hands-on with this revolutionary thing.

Building a pipeline

So, I use Gitlab community edition as a repository for a few container builds I maintain. Some time back I managed to get the CI/CD pipelines working for those builds. They're nothing fancy, but they work: I commit a change to the repository, and a job kicks off to rebuild the container and push it into a registry. In some cases that's just the internal Gitlab registry, in others it's Docker Hub. I, of course, do it all with Podman. So when I decided to tackle Image Mode, I thought it would be best to just rip the band-aid right off, do it in Gitlab, and have the builds happen there. How hard could it be? I already had container builds running there!

So I made a repo, and copied my CI config from one of the container builds that just used podman and the local registry, and threw in a basic Containerfile that just sourced FROM the RHEL bootc base image, and then did a package install. Commit, sit back in my arrogance and wait for my image.

It failed. For reasons I still don't fully understand, the container build uses fuse-overlayfs, and that couldn't work inside my runner's podman-in-podman build container. I did some research, and luckily I have access to internal Red Hat knowledge, so I was able to bounce some ideas around and came up with a solution. Two things, actually: my runner needed some config changes. Here, I'll share them with you.

Here is my Runner config

[[runners]]
  name = "dind-container"
  url = "https://git.undrground.org"
  id = 3
  token = "NoTokenForYou"
  token_obtained_at = somedatestamp
  token_expires_at = someotherdatestamp
  executor = "docker"
  environment = ["FF_NETWORK_PER_BUILD=1"]
  [runners.cache]
    MaxUploadedArchiveSize = 0
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "docker:git"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
    network_mtu = 0

The things I had to add were, first, privileged = true, which gives the container the access it needs to do its fusefs work. And second, the environment variable FF_NETWORK_PER_BUILD=1, which I believe tweaks the podman networking in a way that fixed a DNS resolution problem I was having in my builds.

With that fixed, I was able to get builds working! I have two things to share that may help you if you are trying to do the same. First, another Red Hatter built a public example repo that will apparently "just work" if you use it as a base for your Image Mode CI/CD. It didn't work for me, but I suspect that was more about my Gitlab setup and less about the functionality of the example. You can find that example here. What I ended up doing was to modify my existing podman CI file. That looks like this:

---
image: registry.undrground.org/gangrif/podman-builder:latest

#services:
#    - docker:dind

before_script:
    - dnf -y install podman git subscription-manager buildah skopeo podman
    - subscription-manager register --org=${RHT_ORGID} --activationkey=${RHT_ACT_KEY}
    - subscription-manager repos --enable codeready-builder-for-rhel-9-x86_64-rpms --enable rhel-9-for-x86_64-baseos-rpms
    - export REVISION=$(git rev-parse --short HEAD)
    - podman login --username gitlab-ci-token --password $CI_JOB_TOKEN $CI_REGISTRY
    - podman login --username $RHLOGIN --password "$RHPASS" registry.redhat.io

after_script:
    - podman logout $CI_REGISTRY
    - subscription-manager unregister

stages:
    - build

containerize:
    stage: build
    script:
        - podman build --secret id=creds,src=/run/containers/0/auth.json
          --build-arg GIT_HASH=$CI_COMMIT_SHA
          -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA -t $CI_REGISTRY_IMAGE:latest
          .
        - podman push $CI_REGISTRY_IMAGE

Now, this example contains no verification or validation, so I suggest you maybe look into the proper example linked externally. That one has a lot of testing included. Mine will improve with time. 😉

Registry Authentication for your build

Now, there’s a few things to note here. First, Notice that I am not just logging into my own registry, but registry.redhat.io. You register using your Red Hat login for the Red Hat private registry, and that’s where the bootc base images come from. I also use subscription-manager to register the build container to Red Hat’s CDN. That’s because the RHEL Image Mode build is building RHEL, and must be done using an entitled host in order to receive any updates or packages during the container build. This was something I had gotten stuck on for some time, its a little tough to wrap your head around. Once you do though, it makes sense.

Authenticating your bootc system with your registry, automatically

I am also passing the podman authentication token file into a podman secret at build time. This is important later. If your bootc images are stored in a registry that is not public, you will need to authenticate to that registry in order to pull your updated images after deployment. The easiest way to bake in that authentication is to simply take the authentication from the build host, and place it into the built image. There is some trickery that happens in your Containerfile to make this work. You can read more about this here.

Containerfile

So, I told you we build Image Mode like a container. I meant it. We literally write a Containerfile and source it FROM one of the special bootc images published by Red Hat. There are a few things you'll want to think about when building a bootc Containerfile vs. a standard application container; things you wouldn't normally think about when building a normal container.

Content

First, RHEL is entitled software, and that doesn't change for RHEL Image Mode. This is pretty seamless if you are doing your build directly on an entitled RHEL system. But if you're in a UBI container like I am, you'll need to subscribe the UBI container, because the bootc build will depend on that entitlement to enable its own repositories. That is not true, however, for third-party public repositories; those just get enabled right inside of the Containerfile. This sounds confusing, but it boils down to this: RHEL repository? Entitled by the build host. Other repository? Add it via the Containerfile. I add EPEL in my example below.

Users

Something else I don't usually see done in a standard container is the addition of users. Remember, this is going to be a full RHEL host at the other end, so you might need to add users. In my case I am adding a local "breakglass" user, because I am leveraging IdM for my identities, but if something goes wrong during provisioning, I want a user I can log in with to troubleshoot. You can also come in later with other tools to add users: you can enable cloud-init and add them there, or if you are using the image builder tool I'll talk about in a bit, you can give it a config.toml file to add users at that point.
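For the image builder route, here is a hedged sketch of what such a config.toml user entry can look like, following the bootc-image-builder blueprint customization format as I understand it (the credentials below are placeholders):

[[customizations.user]]
name = "breakglass"
password = "changeme"                      # placeholder; use a hashed or vaulted value in practice
key = "ssh-ed25519 AAAAC3...placeholder"   # placeholder public key
groups = ["wheel"]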

Other Considerations

Other things that you’ll need to think about might be firewall rules, container registry authentication, and even the lack of an ENTRYPOINT or CMD. Because this system is expected to boot into a full OS, it is not going to run a single dedicated workload. Instead you’ll be enabling services like you would on a standard RHEL system, with systemctl.

My Containerfile

Now that we’re through all of that, let me show you what I ended up with as a Containerfile.

FROM registry.redhat.io/rhel9/rhel-bootc:latest

# Enable EPEL, install updates, and install some packages
RUN dnf install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm
RUN dnf -y update
RUN dnf -y install ipa-hcc-client rhc rhc-worker-playbook cloud-init && dnf clean all

# This sets up automatic registration with Red Hat Insights
COPY --chmod=0644 rhc-connect.service /usr/lib/systemd/system/rhc-connect.service
COPY .rhc_connect_credentials /etc/rhc/.rhc_connect_credentials
RUN systemctl enable rhc-connect && touch /etc/rhc/.run_rhc_connect_next_boot

# This is my backdoor user, in case of IdM join failure
RUN useradd breakglass
RUN usermod -p '$6$s0m3pAssw0rDHasH' breakglass
RUN groupmems -g wheel -a breakglass

# This picks up that podman pull secret, and adds it to the build image
COPY link-podman-credentials.conf /usr/lib/tmpfiles.d/link-podman-credentials.conf
RUN --mount=type=secret,id=creds,required=true cp /run/secrets/creds /usr/lib/container-auth.json && \
    chmod 0600 /usr/lib/container-auth.json && \
    ln -sr /usr/lib/container-auth.json /etc/ostree/auth.json

# This configures the bootc update timer to run at a time that I consider acceptable
RUN mkdir -p /etc/systemd/system/bootc-fetch-apply-updates.timer.d/
COPY weekly-timer.conf /etc/systemd/system/bootc-fetch-apply-updates.timer.d/weekly.conf

You can see from my comments what's going on in the various blocks of that Containerfile. My intention is to use this as a base RHEL system and then make more derivative images based on this one. For instance, if I wanted a web server, I would base a new Containerfile on this image and then add in a RUN dnf install httpd. It's important to note that you shouldn't be installing packages on these deployed systems after they are up and running; those installations should happen in the image. If you install a package on a running Image Mode system, that change will not be carried into the next image update unless you incorporate it into your bootable container image. This means you will need to plan ahead, but it also means that tracking package drift is a thing of the past!
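To sketch what such a derivative image could look like (the httpd choice here is just an illustration, not part of my actual setup):

FROM registry.undrground.org/gangrif/rhel9-imagemode:latest

# Layer the workload on top of the base image; enabling the unit at build
# time just creates the usual systemd symlinks inside the image
RUN dnf -y install httpd && dnf clean all
RUN systemctl enable httpd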

In my case, the above-mentioned CI automation and this Containerfile worked in my Gitlab instance with the above runner modifications. The build job will take some time; a bootc image is much larger than the lightweight container images you are used to if you've been building application containers.

But what about turning that into a VM?

So I am covering but ONE method of getting this image deployed to an actual system. You can use a myriad of different methods, including Kickstart, writing an ISO, or PXE boot, but what I am doing (because it suits my needs) is turning my image into a qcow2 file, which is a virtual disk image for use with libvirt. If you're familiar with Image Builder, the tool used to churn out tailored RHEL disk images, then this won't be a surprise. There's a container you can grab that just runs Image Builder: you give it a bootable container image, and it turns it into a qcow2! I've cooked up a script that pulls my bootable container right from my registry, writes it to a qcow2, then immediately passes that to virt-install and builds a VM out of it!

In my case, it also uses cloud-init to set its hostname, auto-registers and connects to Insights, and then uses a slick new tech-preview feature that auto-joins my lab's IdM domain through Insights! Here is my script:

#!/bin/bash
VMNAME=$1

podman login --username my-gitlab-username -p 'gitlab-token' registry.undrground.org
podman login --username my-redhat-login -p 'redhatpassword' registry.redhat.io
podman pull registry.undrground.org/gangrif/rhel9-imagemode:latest

sudo podman run \
    --rm \
    -it \
    --privileged \
    --pull=newer \
    --security-opt label=type:unconfined_t \
    -v $(pwd)/config.toml:/config.toml \
    -v $(pwd)/output:/output \
    -v /var/lib/containers/storage:/var/lib/containers/storage \
    registry.redhat.io/rhel9/bootc-image-builder:latest \
    --type qcow2 \
    registry.undrground.org/gangrif/rhel9-imagemode:latest

cat << EOF > $VMNAME.init
#cloud-config
fqdn: $VMNAME.idm.undrground.org
EOF

mv $(pwd)/output/qcow2/disk.qcow2 /var/lib/libvirt/images/$VMNAME-disk0.qcow2

virt-install \
    --name $VMNAME \
    --memory 4096 \
    --vcpus 2 \
    --os-variant rhel9-unknown \
    --import \
    --clock offset=localtime \
    --disk=/var/lib/libvirt/images/$VMNAME-disk0.qcow2 \
    -w bridge=bridge20-lab \
    --autoconsole none \
    --cloud-init user-data=$VMNAME.init

This, of course, can be improved, but as a proof of concept it works great! I've built a few test systems and so far it's working flawlessly! Now, when I want to update my systems, I update the Gitlab repository with the changes and let the CI run. Then once it completes, all I do is run this script to make a new VM! The running VMs should (I have not tested this yet) get the updated bootable container image from the registry on Saturday at 3 AM, and reboot if new changes are applied.
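For reference, the weekly-timer.conf drop-in copied in the Containerfile could look something like the sketch below; the exact schedule is my assumption based on the Saturday 3 AM window mentioned above:

# /etc/systemd/system/bootc-fetch-apply-updates.timer.d/weekly.conf
[Timer]
# An empty OnCalendar= clears the stock triggers, then we set a weekly window
OnCalendar=
OnCalendar=Sat *-*-* 03:00:00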

Wrapping it up

This is, I think, the thing we've been promised for years, ever since the advent of the cloud, when we were told to stop treating our servers like pets but never really given a clear definition of how. Image Mode makes that promise a reality. I'm certain I'll be sharing more as my Image Mode journey progresses. Thanks for reading!


Hey Linux tinkerers and wannabes, there's an RHEL 10 demo day in Helsinki in September; register and put it in your calendar! New things are coming again: easier administration with assistants, quantum-safe encryption, and the new Image Mode (what would that even be in Finnish, "kuvamuoto"?). Hands-on exercises in addition to the slides. Welcome!

events.redhat.com/profile/form

events.redhat.com · RHEL 10 Roadshow HELSINKI - Red Hat Event

Did you use Fedora 40 at one point? Perhaps you didn't know, but that's the version of Fedora that went on to become Red Hat Enterprise Linux 10!

And RHEL 10 is what you can keep counting on for 10 years.

Maybe that's too long to go without upgrading on your laptop, but it's great for businesses that need reliability and flexibility for their infrastructure.