@gangrif @almalinux I ran #RHEL 9 that way for several years before switching to #unraid. I’d love to go back to #ansible running everything. But there’s something tempting about a full blown hypervisor under the hood that appeals to me.
Red Hat’s new dev program, Red Hat Enterprise Linux for Business Developers, gives teams no-cost access to RHEL on 25 systems.
https://linuxiac.com/red-hat-offers-free-rhel-access-for-business-developers/
RHEL 10 makes a solid desktop/laptop distro. In a lot of ways RHEL 9 did too but #GNOME was too old to have dark mode. #RHEL 10 solved that. With #containers and #flatpak, I get all the new apps and utilities while keeping the rock solid stable base. It's really good. #RedHat
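For anyone curious how that looks in practice, getting current desktop apps onto a stable base is a couple of commands (Firefox is just an example app here):

    # Add the Flathub remote, then install an up-to-date app on the stable base
    flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo
    flatpak install flathub org.mozilla.firefox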
Well, my first conclusion after about two weeks of using Linux Mint: mixed.
If you've been using Fedora Linux for years, it's a noticeable step backwards. What runs smoothly under Fedora, where devices are recognized without any problems, requires a little extra help with Mint - and often more time...
Examples: I have a Brother DCP3515 multifunction laser printer. Under Fedora it is recognized immediately over WLAN, and if you want to print something it's done in seconds. With Mint, a connection was at first only possible via USB; only then could I set it up over WLAN. And it takes a few minutes for a printout to come out of the printer. Very annoying!
Or my uGreen NAS: under Fedora it is there immediately after logging on to the PC, as soon as I click the shortcuts in the file explorer. Under Mint it feels like it takes forever before I can access the NAS at all. Quite apart from the fact that it didn't work out of the box: I had to install Samba first. And I find such annoying little things in every corner of Mint.
So I can't understand why Linux Mint is constantly cited as the ideal distribution for Windows users as an introduction to the Linux world. Just because of the Win10-like Cinnamon desktop? I would advise anyone to just use Fedora, unless, like me, ideology is a factor. And even I'm beginning to doubt whether it's worth it...
Cool feature to help with troubleshooting your system!
"sosreport acts as a black box recorder for Linux — capturing everything from system logs and kernel messages to active configurations and command outputs — helping support engineers trace problems without needing direct access to the system."
The Best Boring #Benchmarks: #RockyLinux10 & #AlmaLinux10 Performance Against #RHEL10 Review
Testing on an AMD EPYC 9755 2P (EPYC Turin) server, using the same hardware across all tests, the performance of #RockyLinux 10 and #AlmaLinux 10 was right on par with #RedHat #EnterpriseLinux 10 itself. Hence the best kind of boring benchmarks: performance right on track for where it should be.
https://www.phoronix.com/review/almalinux-10-rocky-linux-10
#RHEL #Linux
#Debian is going down the snake-oil road to #AI.
They are going to bundle in AI with #RHEL. It's just a matter of time before this garbage finds its way downstream into every Debian-derived distro.
Self developing and self updating code? Seriously? FUCK NO. Not today, not ever. Burn that shit with fire.
Looks like I am done with Debian. Gotta find a new distro. A shame because I really like #BunsenLabs.
Hate to see this filth seeping into the #Linux distro world.
What I really miss on #rhel in an EPEL-free environment: htop + ncdu.
CRITICAL: CVE-2025-49794 in libxml2 hits RHEL 10. Remote, unauthenticated use-after-free via crafted XML can crash apps or cause undefined behavior. Monitor for patches, filter XML inputs, and restrict access! https://radar.offseq.com/threat/cve-2025-49794-expired-pointer-dereference-in-red--18de3c2a #OffSeq #Linux #RHEL #CVE2025 #Infosec
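On RHEL-family hosts, a quick way to check whether a fix has shipped for this CVE (assuming your repos carry updateinfo metadata):

    # List errata that reference this CVE, then apply the libxml2 fix
    dnf updateinfo list --cve CVE-2025-49794
    dnf update libxml2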
As #RHEL clones hit version 10, Rocky and Alma chart diverging paths
Take a quick look at the headline features – and the growing differences
Perhaps the biggest and most obvious technological difference in this version is that #AlmaLinux offers a separate version for x86-64-v2 hardware. #RHEL10 itself, and Rocky with it, now require x86-64-v3, meaning Intel "Haswell" or newer.
But you'd expect all the RHELatives to be similar. That remains their primary selling point.
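If you're unsure which microarchitecture level your own CPU supports, recent glibc can tell you; a quick check (requires glibc 2.33 or newer):

    # Levels marked "(supported, searched)" are ones your CPU can run
    /lib64/ld-linux-x86-64.so.2 --help | grep 'x86-64-v'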
https://www.theregister.com/2025/06/14/rocky_alma_and_rhel_10/
Bad pun / Red Hat & SUSE
Thrilled to have a new, production-ready CI/CD pipeline live! It automatically builds and deploys my Jekyll static WIP site (https://hofstede.it) on every push to the main branch.
The architecture is a showcase of modern Linux tools:
Server running on Red Hat Enterprise Linux 10 (RHEL)
Forgejo for Git hosting & Actions.
A rootless Forgejo Runner, running in Podman, managed by a systemd Quadlet file (sketched after this list).
Traefik reverse proxy running as a Podman container.
An Nginx web server for the site, also in a container for easy discovery by Traefik.
The Forgejo Runner and the Nginx Webserver run in different unprivileged user contexts.
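For the curious, here's roughly what a Quadlet file for such a runner could look like; this is a minimal sketch, with illustrative image tag, paths, and unit names rather than my actual file:

    # ~/.config/containers/systemd/forgejo-runner.container
    [Unit]
    Description=Rootless Forgejo Actions runner

    [Container]
    # Image tag and data path are placeholders
    Image=code.forgejo.org/forgejo/runner:6
    Volume=%h/forgejo-runner:/data:Z
    Exec=forgejo-runner daemon

    [Service]
    Restart=always

    [Install]
    WantedBy=default.target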
The magic is the secure bridge between the rootless CI job and the web server. The pipeline creates a build artifact, and a systemd.path watcher on the host instantly triggers a deployment script.
It's fully decoupled, secure, and works like a charm.
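The watcher itself is just a pair of tiny systemd units; a minimal sketch (unit names, paths, and the script are illustrative, not the actual files):

    # /etc/systemd/system/site-deploy.path
    [Path]
    # Trigger the matching .service whenever the CI artifact changes
    PathChanged=/srv/ci-artifacts/site.tar.gz

    [Install]
    WantedBy=multi-user.target

    # /etc/systemd/system/site-deploy.service
    [Service]
    Type=oneshot
    # Unpack the artifact into the web server's document root
    ExecStart=/usr/local/bin/deploy-site.sh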
Wanted: are you a Computing Engineer at the start of your career, with a strong foundation in Linux and a desire to work on critical computing services for the research community? Join the CERN Linux Team to help operate, evolve, and support Linux-based services used by thousands of scientists across the organisation and worldwide.
https://www.smartrecruiters.com/CERN/744000064680069-linux-engineer-it-cd-cli-2025-109-grae-
KernelCare service from TuxCare extended to @almalinux 9.2, with 9.6 to be added soon.
https://www.linux-magazine.com/Online/News/TuxCare-Announces-Support-for-AlmaLinux-9.2?utm_source=mlm
#EnterpriseLinux #AlmaLinux #OpenSource #RHEL #patching #support #KernelCare
Our @stdevel was at the #RedHatSummit in Boston and brought back news, which we discuss together.
#RHEL 10 has been released and brings, among other things, a matured Image Mode and an interesting filesystem change. Red Hat Satellite 6.17 supports Flatpaks for the first time and enables a first on-prem Insights service. There is also news around Ansible and OpenShift. Exciting roadmaps and the new RHOKP round out the conference.
Manage your Linux systems like a container!
I’ve got to tell you, I have not been so excited about a technology… probably since Containers. At Summit this year Red Hat announced the General Availability of Image Mode for RHEL. So I got to spend a week in Boston, explaining, over and over again, why that’s important.
See, Image Mode is kind of a big deal. It takes container workflows and applies them to your data center servers using a technology called bootc. This concept isn't new exactly; this sort of technology has been applied to edge devices, phones, and other appliances for years. But what we have now is a general-purpose Linux that you can update using a bootable container image. This changes things.
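To make that concrete, the day-to-day loop on a deployed Image Mode host boils down to a few bootc commands (the registry URL here is a placeholder):

    # Show which bootable container image the host is currently running
    bootc status

    # Point the host at a new (or different) image in your registry
    sudo bootc switch registry.example.com/myorg/rhel-bootc:latest

    # Pull and stage the latest image; it takes effect on the next reboot
    sudo bootc upgrade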
So think about a Linux system as you know it today. We're calling that Package Mode now, to avoid confusion. RHEL Package Mode is a Linux base with a package manager, where you install and configure things, and then fight to keep those things from drifting pretty much from then until eternity. There's a whole facet of the IT industry around mitigating that drift; package and config management is a huge business, for good reason! Drift is what turns your routine 2 AM maintenance into a panic attack when the database server doesn't come back up.
So I talked a lot about Image Mode at Summit, but I have to admit, I hadn't touched it yet! Now that I'm back home, and my time is a little less consumed by prep for the RHEL 10 release and Summit deadlines, I decided to take some time and get hands-on with this revolutionary thing.
Building a pipeline
So, I use Gitlab Community Edition as a repository for a few container builds I maintain. Some time back I managed to get CI/CD pipelines working for those builds. These were nothing fancy, but they work: I commit a change to the repository, and a job kicks off to rebuild the container and push it into a registry. In some cases that's just the internal Gitlab registry; in others it's Docker Hub. I, of course, do it all with Podman. So when I decided to tackle Image Mode, I thought it would be best to just rip that band-aid right off, do it in Gitlab, and have the builds happen there. How hard could it be? I already had container builds running there!
So I made a repo, copied my CI config from one of the container builds that just used Podman and the local registry, and threw in a basic Containerfile that just sourced FROM the RHEL bootc base image and then did a package install. Commit, sit back in my arrogance, and wait for my image.
It failed. For reasons I still don't fully understand, the container build uses fuse-overlayfs, and couldn't in my runner's Podman-in-Podman build container. I did some research, and luckily I have access to internal Red Hat knowledge, so I was able to bounce some ideas around and came up with a solution. Two things, actually: my runner needed some config changes. Here, I'll share them with you.
Here is my Runner config
    [[runners]]
      name = "dind-container"
      url = "https://git.undrground.org"
      id = 3
      token = "NoTokenForYou"
      token_obtained_at = somedatestamp
      token_expires_at = someotherdatestamp
      executor = "docker"
      environment = ["FF_NETWORK_PER_BUILD=1"]
      [runners.cache]
        MaxUploadedArchiveSize = 0
        [runners.cache.s3]
        [runners.cache.gcs]
        [runners.cache.azure]
      [runners.docker]
        tls_verify = false
        image = "docker:git"
        privileged = true
        disable_entrypoint_overwrite = false
        oom_kill_disable = false
        disable_cache = false
        volumes = ["/cache"]
        shm_size = 0
        network_mtu = 0
The things I had to add were, first, privileged = true, which gives the container the access it needs to do its fuse-overlayfs work, and second, the environment variable FF_NETWORK_PER_BUILD=1, which I believe tweaks the Podman networking in a way that fixed a DNS resolution problem I was having in my builds.
With that fixed, I was able to get builds working! I have two things to share that may help you if you are trying to do the same. First, another Red Hatter built a public example repo that will apparently "just work" if you use it as a base for your Image Mode CI/CD. It didn't work for me, but I suspect that was more about my Gitlab setup and less about the functionality of the example. You can find that example here. What I ended up doing was modifying my existing Podman CI file, which looks like this:
    ---
    image: registry.undrground.org/gangrif/podman-builder:latest

    #services:
    #  - docker:dind

    before_script:
      - dnf -y install podman git subscription-manager buildah skopeo
      - subscription-manager register --org=${RHT_ORGID} --activationkey=${RHT_ACT_KEY}
      - subscription-manager repos --enable codeready-builder-for-rhel-9-x86_64-rpms --enable rhel-9-for-x86_64-baseos-rpms
      - export REVISION=$(git rev-parse --short HEAD)
      - podman login --username gitlab-ci-token --password $CI_JOB_TOKEN $CI_REGISTRY
      - podman login --username $RHLOGIN --password "$RHPASS" registry.redhat.io

    after_script:
      - podman logout $CI_REGISTRY
      - subscription-manager unregister

    stages:
      - build

    containerize:
      stage: build
      script:
        - podman build --secret id=creds,src=/run/containers/0/auth.json --build-arg GIT_HASH=$CI_COMMIT_SHA -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA -t $CI_REGISTRY_IMAGE:latest .
        - podman push $CI_REGISTRY_IMAGE
Now, this example contains no verification or validation, so I suggest you look into the proper example linked above; that one has a lot of testing included. Mine will improve with time.
Registry Authentication for your build
Now, there’s a few things to note here. First, Notice that I am not just logging into my own registry, but registry.redhat.io. You register using your Red Hat login for the Red Hat private registry, and that’s where the bootc base images come from. I also use subscription-manager to register the build container to Red Hat’s CDN. That’s because the RHEL Image Mode build is building RHEL, and must be done using an entitled host in order to receive any updates or packages during the container build. This was something I had gotten stuck on for some time, its a little tough to wrap your head around. Once you do though, it makes sense.
Authenticating your bootc system with your registry, automatically
I am also passing the podman authentication token file into a podman secret at build time. This is important later. If your bootc images are stored in a registry that is not public, you will need to authenticate to that registry in order to pull your updated images after deployment. The easiest way to bake in that authentication is to simply take the authentication from the build host, and place it into the built image. There is some trickery that happens in your Containerfile to make this work. You can read more about this here.
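The linked article has the full details, but the gist is a tmpfiles.d entry that recreates the symlink in the mutable parts of the filesystem at boot; mine is presumably equivalent to something like this (a sketch modeled on the upstream bootc example, not necessarily my exact file):

    # link-podman-credentials.conf
    # L = create a symlink: point the runtime auth.json at the
    # credentials baked into the image at build time
    L /etc/ostree/auth.json - - - - /usr/lib/container-auth.json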
Containerfile
So, I told you we build Image Mode like a container. I meant it. We literally write a Containerfile and source it from these special bootc images published by Red Hat. There are a few things you'll want to think about when building a bootc Containerfile vs. a standard application container, things you wouldn't normally consider.
Content
First, RHEL is entitled software, and that doesn't change for RHEL Image Mode. This is pretty seamless if you are doing your build directly on an entitled RHEL system. But if you're in a UBI container like I am, you'll need to subscribe the UBI container, because the bootc build depends on that entitlement to enable its own repositories. That is not true, however, for third-party public repositories; those just get enabled right inside of the Containerfile. This sounds confusing, but it boils down to this: RHEL repository? Entitled by the build host. Other repository? Add it via the Containerfile. I add EPEL in my example below.
Users
Something else I don’t usually see done in a standard container is the addition of users. Remember this is going to be a full RHEL host at the other end, so you might need to add users. In my case I am adding a local “breakglass” user, because I am leveraging IdM for my identities. But if something goes wrong during the provisioning, i want a user I can login to the system with to troubleshoot. You can also come in later with other tools to add users. You can enable cloud-init and add them there, or if you are using the image builder tool I’ll talk about in a bit, you can give it a config.toml file to add users at that point.
Other Considerations
Other things that you’ll need to think about might be firewall rules, container registry authentication, and even the lack of an ENTRYPOINT or CMD. Because this system is expected to boot into a full OS, it is not going to run a single dedicated workload. Instead you’ll be enabling services like you would on a standard RHEL system, with systemctl.
My Containerfile
Now that we’re through all of that, let me show you what I ended up with as a Containerfile.
    FROM registry.redhat.io/rhel9/rhel-bootc:latest

    # Enable EPEL, install updates, and install some packages
    RUN dnf install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm
    RUN dnf -y update
    RUN dnf -y install ipa-hcc-client rhc rhc-worker-playbook cloud-init && dnf clean all

    # This sets up automatic registration with Red Hat Insights
    COPY --chmod=0644 rhc-connect.service /usr/lib/systemd/system/rhc-connect.service
    COPY .rhc_connect_credentials /etc/rhc/.rhc_connect_credentials
    RUN systemctl enable rhc-connect && touch /etc/rhc/.run_rhc_connect_next_boot

    # This is my backdoor user, in case of IdM join failure
    RUN useradd breakglass
    RUN usermod -p '$6$s0m3pAssw0rDHasH' breakglass
    RUN groupmems -g wheel -a breakglass

    # This picks up that podman pull secret, and adds it to the build image
    COPY link-podman-credentials.conf /usr/lib/tmpfiles.d/link-podman-credentials.conf
    RUN --mount=type=secret,id=creds,required=true cp /run/secrets/creds /usr/lib/container-auth.json && \
        chmod 0600 /usr/lib/container-auth.json && \
        ln -sr /usr/lib/container-auth.json /etc/ostree/auth.json

    # This configures the bootc update timer to run at a time that I consider acceptable
    RUN mkdir -p /etc/systemd/system/bootc-fetch-apply-updates.timer.d/
    COPY weekly-timer.conf /etc/systemd/system/bootc-fetch-apply-updates.timer.d/weekly.conf
You can see from my comments what's going on in the various blocks of that Containerfile. My intention is to use this as a base RHEL system, and then build more derivative images on top of it. For instance, if I wanted a web server, I would base a new Containerfile on this image and add a RUN dnf install httpd. It's important to note that you shouldn't be installing packages on these deployed systems after they are up and running; those installations should happen in the image. If you install a package on a running Image Mode system, that change will not be carried into the next image update unless you incorporate it into your bootable container image. This means you will need to plan ahead, but it also means that tracking package drift is a thing of the past!
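For example, a derivative web server image could be as small as this (an untested sketch):

    FROM registry.undrground.org/gangrif/rhel9-imagemode:latest

    # Add the web server on top of the base image and start it at boot
    RUN dnf -y install httpd && dnf clean all
    RUN systemctl enable httpd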
In my case, the CI automation described above and the base Containerfile worked in my Gitlab instance with the Runner modifications shown earlier. The build job will take some time; a bootc image is much larger than the lightweight images you are used to if you've been building application containers.
But what about turning that into a VM?
So I am covering but ONE method of getting this image deployed to an actual system. You can use a myriad of different methods, including Kickstart, writing an ISO, or PXE boot, but what I am doing (because it suits my needs) is turning my image into a qcow2 file, a virtual disk image for use with libvirt. If you're familiar with Image Builder, the tool used to churn out tailored RHEL disk images, then this won't be a surprise. There's a container you can grab that just runs Image Builder: you give it a bootable container image, and it turns it into a qcow2! I've cooked up a script that pulls my bootable container right from my registry, writes it to a qcow2, then immediately passes that to virt-install and builds a VM out of it!
In my case, it also uses cloud-init to set its hostname, auto-registers and connects to Insights, and then uses a slick new tech-preview feature that auto-joins my lab's IdM domain through Insights! Here is my script:
    #!/bin/bash
    VMNAME=$1

    # Log in to both registries (credentials here are placeholders)
    podman login --username my-gitlab-username -p 'gitlab-token' registry.undrground.org
    podman login --username my-redhat-login -p 'redhatpassword' registry.redhat.io

    podman pull registry.undrground.org/gangrif/rhel9-imagemode:latest

    # Convert the bootable container image into a qcow2 disk image
    sudo podman run \
        --rm \
        -it \
        --privileged \
        --pull=newer \
        --security-opt label=type:unconfined_t \
        -v $(pwd)/config.toml:/config.toml \
        -v $(pwd)/output:/output \
        -v /var/lib/containers/storage:/var/lib/containers/storage \
        registry.redhat.io/rhel9/bootc-image-builder:latest \
        --type qcow2 \
        registry.undrground.org/gangrif/rhel9-imagemode:latest

    # Minimal cloud-init user-data to set the VM's hostname
    cat << EOF > $VMNAME.init
    #cloud-config
    fqdn: $VMNAME.idm.undrground.org
    EOF

    mv $(pwd)/output/qcow2/disk.qcow2 /var/lib/libvirt/images/$VMNAME-disk0.qcow2

    # Import the disk image as a new VM
    virt-install \
        --name $VMNAME \
        --memory 4096 \
        --vcpus 2 \
        --os-variant rhel9-unknown \
        --import \
        --clock offset=localtime \
        --disk=/var/lib/libvirt/images/$VMNAME-disk0.qcow2 \
        -w bridge=bridge20-lab \
        --autoconsole none \
        --cloud-init user-data=$VMNAME.init
This, of course, can be improved, but as a proof of concept it works great! I've built a few test systems, and so far it's working flawlessly! Now, when I want to update my systems, I update the Gitlab repository with the changes and let the CI run. Once it completes, all I do is run this script to make a new VM! The running VMs should (I have not tested this yet) get the updated bootable container image from the registry on Saturday at 3 AM, and reboot if new changes are applied.
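That Saturday 3 AM window comes from the weekly-timer.conf drop-in baked into the Containerfile above; its contents are presumably along these lines (a sketch, not the verbatim file):

    # weekly.conf -- drop-in overriding bootc-fetch-apply-updates.timer
    [Timer]
    # Clear the shipped schedule, then run Saturdays at 03:00
    OnCalendar=
    OnCalendar=Sat *-*-* 03:00:00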
Wrapping it up
This is, I think, the thing we've been promised for years, ever since the advent of the cloud, when we were told to stop treating our servers like pets but never really given a clear definition of how. Image Mode makes that promise a reality. I'm certain I'll be sharing more as my Image Mode journey progresses. Thanks for reading!
Hey Linux tinkerers and wannabes: RHEL 10 demo day in Helsinki in September, register and put it in your calendar! New things are on the way again: easier administration with assistants, quantum-safe cryptography, and the new image mode (what would that even be in Finnish, "kuvamuoto"?). Hands-on keyboard exercises in addition to the slides. Welcome!
https://events.redhat.com/profile/form/index.cfm?PKformID=0x1457624abcd
Did you use Fedora 40 at one point? Perhaps you didn't know, but that's the version of Fedora that went on to become Red Hat Enterprise Linux 10!
And RHEL 10 is what you can keep counting on for 10 years.
Maybe that's too long to go without upgrading on your laptop, but it's great for businesses that need reliability and flexibility for their infrastructure.
With a focus on usability, @almalinux OS 10 has been released
https://www.admin-magazine.com/News/AlmaLinux-OS-10-Released?utm_source=mam
#EnterpriseLinux #AlmaLinux #RHEL #cryptography #OpenSSH #sudo #SecureBoot