Building An Enterprise on SUSE/openSUSE - EP 01 - MicroOS - FDE, Cockpit, and SSSD

This is article one in a series on designing an enterprise ecosystem on SUSE/openSUSE. The articles will only reference openSUSE, but the information is broadly applicable to the comparable SUSE Linux Enterprise systems.
Understanding the Ecosystem
Building an enterprise ecosystem requires stability, reliability, security, and easy management. Unfortunately, most vendors in the enterprise space only provide stability, reliability, and security, and cannot fathom designing a system that anyone but a doctorate-level engineer of their software can make run. Those of us unfortunate enough to have tried, or used, enterprise software know this all too well, and that is why you are reading this now.
I will not say that openSUSE is easy, but unlike the rest of the enterprise Linux ecosystem it is easily available, and with a few more articles like this one, understandable. The ecosystem has different operating systems customized for different responsibilities; the first and most important of these is MicroOS.
There are two flavors of this operating system: "Standard", referred to simply as MicroOS, and "Leap", referred to as "Leap Micro". For container hosts you should use MicroOS, and for immutable servers running services directly on the operating system you should use Leap Micro. MicroOS is a rolling-release system that supports more advanced features such as systemd-boot and full disk encryption, while Leap Micro is a versioned distribution that remains more stable for software that may need to be installed directly with dependencies.
What is it good for?
Apologies to anyone old enough to have sung this title and responded "absolutely nothing", but that is not the correct answer.
MicroOS is the ideal minimal container host. While other options such as Alpine Linux are smaller, MicroOS comes packed with the monitoring, security, and user access controls that we rely on in a business environment to not go completely insane. Additionally, the systems are easy to monitor and integrate with other tools such as UYUNI/SUSE Manager for simplifying updates and security. We will cover UYUNI in a future article.
0 - 10 | Installing the OS
We will be utilizing MicroOS with Full Disk Encryption as a container host. This is a vital setup for secure enterprise virtual machines, but be aware that it requires TPM 2.0 in the VM for automatic decryption, so keep your encryption keys safe and don't take this lightly if you are new.
You can get the ISO for this OS by going to get.opensuse.org/microos, then load it into your hypervisor of choice and set up a machine with 2 CPU cores, 4 GB of RAM, 20 GB of disk space, and TPM 2.0 and Secure Boot enabled. You can get away with less, but this is my recommended minimum for ensuring you can run something useful on the system.
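If you are using libvirt, a minimal sketch of creating such a VM with virt-install looks like the following. The VM name and ISO path are placeholders, and the firmware feature sub-options depend on your virt-install version, so check your hypervisor's documentation:
# Minimal sketch: UEFI with Secure Boot and an emulated TPM 2.0
# (assumes libvirt/virt-install; name and ISO path are placeholders)
virt-install \
  --name microos-host \
  --memory 4096 \
  --vcpus 2 \
  --disk size=20 \
  --cdrom /var/lib/libvirt/images/openSUSE-MicroOS-DVD-x86_64-Current.iso \
  --boot uefi,firmware.feature0.name=secure-boot,firmware.feature0.enabled=yes \
  --tpm model=tpm-crb,backend.type=emulator,backend.version=2.0
Now proceed with the following steps: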
- Follow the on screen instructions to boot to the installer and agree to the EULA.
- Set up the network settings for the machine; ensure that you set an FQDN as the hostname for joining your domain, and that you set up the gateway under "Routing" (this often confuses new users)
- As System Role select "MicroOS Container Host"
- Enter your NTP server name
- Set a root password. For those of you new to openSUSE, root always has a password set.
- On the "Install Settings" page select "Partitioning"
- Select "Guided Setup"
- Ensure only your install disk is checked and hit next
- Select "Enable Disk Encryption" and enter the password for your FDE
- Hit Accept to return to the "Install Settings" page, and from there select "Software"
- Select the pattern "Web based remote system management", and hit "Okay" to return to the "Install Settings" page. This will install Cockpit for web-based management.
- Select "Booting" and change "Boot Loader" from "Grub2 for EFI" to "Systemd Boot"
- Go to the "Kernel Parameters" tab and enter:
"console=tty0 console=ttyS0,115200n8", then select "Ok" to return to the "Install Settings" page. This will enable serial output for hypervisors such as SUSE Harvester.
- Scroll to the "Security" section and set the following:
- "Firewall" | "enabled"
- "SSH service" | "disabled"
- "SSH port" | "blocked" - Then select "Install" and wait for the system to install.
- When installation is complete, eject the install ISO and boot into the machine. Interact with the system via serial from here on out for the CLI, and enter the decryption key to boot the system.
- Log in as root and enter
systemctl enable --now cockpit.socket
this will start and enable the Cockpit web server (we'll come back to this after joining the domain).
- Now let's enroll our encryption passwords into our TPM module by running
sdbootutil enroll --method tpm2
then enter the disk password to enroll the device, and record the recovery PIN that appears on the screen.
- Now run
sdbootutil update-all-entries
to update the bootloader configuration.
- Now create the file /etc/sysconfig/fde-tools so the key is sealed only against PCR 7 (the Secure Boot state), preventing hypervisor-driven changes to PCR 0 from breaking automatic decryption. (A quick way to inspect the PCRs follows.)
echo 'FDE_SEAL_PCR_LIST="7"' >> /etc/sysconfig/fde-tools
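If you want to see what these PCRs actually contain, the tpm2-tools suite can dump them. This is an optional sanity check and assumes the tpm2.0-tools package (the openSUSE package name) is installed:
# Dump PCR 0 (firmware measurements) and PCR 7 (Secure Boot state)
# Requires tpm2-tools, e.g. transactional-update pkg install tpm2.0-tools
tpm2_pcrread sha256:0,7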
- Open the cockpit firewall port
firewall-cmd --add-service=cockpit --permanent
firewall-cmd --reload
- Install the additional management features for cockpit and reboot
transactional-update pkg install cockpit-podman cockpit-selinux cockpit-storaged cockpit-tukit cockpit-networkmanager podlet
- After reboot you will need to set the Cockpit SELinux policies. Please note that SELinux policies are prickly, and I will update these as I am able, but you may need to run
setenforce 0
to temporarily put SELinux into permissive mode for debugging if these policy names change:
setsebool -P unconfined_service_transition_to_unconfined_user 1
ausearch -c 'cockpit-session' --raw | audit2allow -M my-cockpitsession
semodule -X 300 -i my-cockpitsession.pp
- Now we need to join the server to our domain for identity management. We do this with SSSD configurations. If you are joining a Windows domain I recommend this guide from SUSE directly; it is for SUSE Micro but applicable in this context: article. However, this series will use FreeIPA for identity control, which I will now cover, since configuring IPA can be a real PIA to figure out.
Configuring SSSD For FreeIPA Integration
openSUSE MicroOS is immutable, so while you can try to make changes to the system, they will be undone on the next reboot. To do basic functions such as installing software and altering configuration files you will need the command transactional-update. There are many subcommands you can place after it, but outside of scripting you will mostly be using transactional-update shell, which opens a configuration shell whose changes persist into a new snapshot applied at the next reboot.
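As a quick illustration of the workflow (a minimal sketch; the package is just an example):
# Open a shell inside a new snapshot
transactional-update shell
# Inside the snapshot: install packages, edit config files, etc.
zypper in -y vim
exit
# The new snapshot only becomes active after a reboot
reboot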
To do the configurations below, enter the transactional-update shell and perform the following:
- Use the openSUSE package manager "Zypper" to install the following:
zypper in -y sssd \
sssd-tools \
oddjob-mkhomedir \
sssd-ad \
sssd-ipa \
sssd-krb5 \
sssd-krb5-common \
sssd-ldap \
bind-utils \
krb5-client
- Now make several directories for SSSD and FreeIPA to refer to:
mkdir -p /etc/sssd/conf.d/
mkdir -p /etc/sssd/pki
mkdir -p /var/lib/sss/db
mkdir -p /var/lib/sss/pipes
mkdir -p /var/lib/sss/gpo_cache
mkdir -p /var/log/sssd
mkdir -p /var/bin/sssd
mkdir -p /var/lib/ipa-client/pki
mkdir -p /var/lib/sss/secrets
mkdir -p /etc/ipa
- Now populate the following files with the specifics for your domain
- /etc/krb5.conf
- /etc/sssd/sssd.conf
- /etc/nsswitch.conf
- Certificate Locations
- /etc/ipa/ca.crt
- /var/lib/ipa-client/pki/kdc-ca-bundle.pem
- /var/lib/ipa-client/pki/ca-bundle.pem
- /usr/share/pki/trust/anchors/ca.pem
The following variables appear in the files below and must be filled in for your configuration (a small helper sketch follows this list):
- $TLD = The DNS domain of your FreeIPA realm, typically something like "net.domain.local"
- $REALM = The Kerberos realm of your domain, typically something like "NET.DOMAIN.LOCAL"
- $FQDN = The fully qualified domain name of this server, typically something like "aaa.net.domain.local"
- $SRVFQDN = The FreeIPA server's fully qualified domain name; in the Windows ecosystem this would be your primary Domain Controller.
- $DYDNS = The interface used for sending dynamic DNS updates to the server. This is useful for laptop configurations that might move networks; for most servers this will just be eth0
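If you would rather not hand-edit each file, a small helper like the following can substitute the placeholders once the files below are in place. This is a hypothetical convenience script, not part of any official tooling; adjust the values to your environment:
#!/bin/bash
# Hypothetical helper: fill in the $-placeholders in the config files below.
TLD="net.domain.local"            # your FreeIPA DNS domain
REALM="NET.DOMAIN.LOCAL"          # your Kerberos realm
SRVFQDN="ipa.net.domain.local"    # your FreeIPA server
FQDN="$(hostname -f)"             # this machine
DYDNS="eth0"                      # interface for dynamic DNS updates

for f in /etc/krb5.conf /etc/sssd/sssd.conf; do
  sed -i \
    -e "s/\$SRVFQDN/$SRVFQDN/g" \
    -e "s/\$REALM/$REALM/g" \
    -e "s/\$FQDN/$FQDN/g" \
    -e "s/\$TLD/$TLD/g" \
    -e "s/\$DYDNS/$DYDNS/g" \
    "$f"
done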
/etc/krb5.conf
#File modified by ipa-client-install
includedir /etc/krb5.conf.d/
[libdefaults]
default_realm = $REALM
dns_lookup_realm = true
rdns = false
dns_canonicalize_hostname = false
dns_lookup_kdc = true
ticket_lifetime = 24h
forwardable = true
udp_preference_limit = 0
default_ccache_name = KEYRING:persistent:%{uid}
[realms]
$REALM = {
pkinit_anchors = FILE:/var/lib/ipa-client/pki/kdc-ca-bundle.pem
pkinit_pool = FILE:/var/lib/ipa-client/pki/ca-bundle.pem
}
[domain_realm]
.$TLD = $REALM
$TLD = $REALM
$FQDN = $REALM
/etc/sssd/sssd.conf
[domain/$TLD]
id_provider = ipa
ipa_server = _srv_, $SRVFQDN
ipa_domain = $TLD
ipa_hostname = $FQDN
auth_provider = ipa
chpass_provider = ipa
access_provider = ipa
cache_credentials = True
ldap_tls_cacert = /etc/ipa/ca.crt
dyndns_update = True
dyndns_iface = $DYDNS
krb5_store_password_if_offline = True
fallback_homedir = /home/%u@%d
default_shell = /bin/bash
use_fully_qualified_names = True
realmd_tags = manages-system
[sssd]
config_file_version = 2
services = nss, pam, ssh, sudo
# SSSD will not start if you do not configure any domains.
# Add new domain configurations as [domain/<NAME>] sections, and
# then add the list of domains (in the order you want them to be
# queried) to the "domains" attribute below and uncomment it.
# domains = LDAP
domains = $TLD
[nss]
[pam]
# Example LDAP domain
# [domain/LDAP]
# id_provider = ldap
# auth_provider = ldap
# ldap_schema can be set to "rfc2307", which stores group member names in the
# "memberuid" attribute, or to "rfc2307bis", which stores group member DNs in
# the "member" attribute. If you do not know this value, ask your LDAP
# administrator.
# ldap_schema = rfc2307
# ldap_uri = ldap://ldap.mydomain.org
# ldap_search_base = dc=mydomain,dc=org
# Note that enabling enumeration will have a moderate performance impact.
# Consequently, the default value for enumeration is FALSE.
# Refer to the sssd.conf man page for full details.
# enumerate = false
# Allow offline logins by locally storing password hashes (default: false).
# cache_credentials = true
# An example Active Directory domain. Please note that this configuration
# works for AD 2003R2 and AD 2008, because they use pretty much RFC2307bis
# compliant attribute names. To support UNIX clients with AD 2003 or older,
# you must install Microsoft Services For UNIX and map LDAP attributes onto
# msSFU30* attribute names.
# [domain/AD]
# id_provider = ldap
# auth_provider = krb5
# chpass_provider = krb5
#
# ldap_uri = ldap://your.ad.example.com
# ldap_search_base = dc=example,dc=com
# ldap_schema = rfc2307bis
# ldap_sasl_mech = GSSAPI
# ldap_user_object_class = user
# ldap_group_object_class = group
# ldap_user_home_directory = unixHomeDirectory
# ldap_user_principal = userPrincipalName
# ldap_account_expire_policy = ad
# ldap_force_upper_case_realm = true
#
# krb5_server = your.ad.example.com
# krb5_realm = EXAMPLE.COM
[ssh]
[sudo]
/etc/nsswitch.conf
Standard configurations list the "sudoers:" line as "files sss"; that is incorrect for openSUSE-based distributions, where sudo requests the root password by default.
#
# /etc/nsswitch.conf
#
# An example Name Service Switch config file. This file should be
# sorted with the most-used services at the beginning.
#
# Valid databases are: aliases, ethers, group, gshadow, hosts,
# initgroups, netgroup, networks, passwd, protocols, publickey,
# rpc, services, and shadow.
#
# Valid service provider entries include (in alphabetical order):
#
# compat Use /etc files plus *_compat pseudo-db
# db Use the pre-processed /var/db files
# dns Use DNS (Domain Name Service)
# files Use the local files in /etc
# hesiod Use Hesiod (DNS) for user lookups
# nis Use NIS (NIS version 2), also called YP
# nisplus Use NIS+ (NIS version 3)
#
# See `info libc 'NSS Basics'` for more information.
#
# Commonly used alternative service providers (may need installation):
#
# ldap Use LDAP directory server
# myhostname Use systemd host names
# mymachines Use systemd machine names
# mdns*, mdns*_minimal Use Avahi mDNS/DNS-SD
# resolve Use systemd resolved resolver
# sss Use System Security Services Daemon (sssd)
# systemd Use systemd for dynamic user option
# winbind Use Samba winbind support
# wins Use Samba wins support
# wrapper Use wrapper module for testing
#
# Notes:
#
# 'sssd' performs its own 'files'-based caching, so it should generally
# come before 'files'.
#
# WARNING: Running nscd with a secondary caching service like sssd may
# lead to unexpected behaviour, especially with how long
# entries are cached.
#
# Installation instructions:
#
# To use 'db', install the appropriate package(s) (provide 'makedb' and
# libnss_db.so.*), and place the 'db' in front of 'files' for entries
# you want to be looked up first in the databases, like this:
#
# passwd: db files sss
# shadow: db files sss
# group: db files sss
passwd: compat sss
group: compat [SUCCESS=merge] systemd sss
shadow: compat sss
# Allow initgroups to default to the setting for group.
# initgroups: compat
hosts: files mdns_minimal [NOTFOUND=return] dns
networks: files dns
aliases: files usrfiles
ethers: files usrfiles
gshadow: files usrfiles sss
netgroup: files sss
protocols: files usrfiles
publickey: files
rpc: files usrfiles
services: files usrfiles
automount: files sss
bootparams: files
netmasks: files
sudoers: sss
Certificate Locations
Certificates need to be placed into four places for four different purposes:
/etc/ipa/ca.crt
This is where SSSD will look for the root certificate authority. Use curl to get it from the FreeIPA server:
curl -k -o /etc/ipa/ca.crt https://$SRVFQDN/ipa/config/ca.crt
/var/lib/ipa-client/pki/kdc-ca-bundle.pem
This is the Kerberos Key Distribution Center (KDC) CA bundle. Unless you have some special setup on your FreeIPA server, this will be the same as your root CA certificate:
curl -k -o /var/lib/ipa-client/pki/kdc-ca-bundle.pem https://$SRVFQDN/ipa/config/ca.crt
/var/lib/ipa-client/pki/ca-bundle.pem
From what I can tell this is a holdover from Red Hat distributions' PKINIT locations. It may be more relevant to Red Hat servers, but it does need to be present for SSSD to work correctly. I will update this if I figure out more of what it is for.
curl -k -o /var/lib/ipa-client/pki/ca-bundle.pem https://$SRVFQDN/ipa/config/ca.crt
/usr/share/pki/trust/anchors/ca.pem
This is the location for openSUSE to store its pki certificates. This will tell the OS to trust this certificate authority so it can interact with the domain.
curl -k -o /usr/share/pki/trust/anchors/ca.pem https://$SRVFQDN/ipa/config/ca.crt
After running this, update the certificate authorities list for the OS with
update-ca-certificates
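A quick sanity check (a minimal sketch) to confirm the CA file is valid and that the OS now trusts it:
# Inspect the downloaded CA certificate
openssl x509 -in /etc/ipa/ca.crt -noout -subject -issuer -enddate
# With the anchor installed, curl should now verify the server without -k
curl --silent --output /dev/null https://$SRVFQDN/ipa/config/ca.crt && echo "TLS trust OK"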
- Get a Kerberos keytab for the system to authenticate
Now we have to go to the FreeIPA server to get a Kerberos keytab file. For this tutorial we will presuppose that the machines cannot reach each other over SSH, as this typically covers machines with only a single local user joined to a domain. We will start by going to the CLI of the FreeIPA server and running
ipa-getkeytab -s $SRVFQDN -p host/$FQDN@$REALM -k /tmp/client.keytab
Now we will convert it to a copyable base64 encoded string with
base64 /tmp/client.keytab
Then paste it with vi/vim into /tmp/client_keytab.b64 on the machine being joined to the domain, and convert that back into the keytab file with
base64 -d /tmp/client_keytab.b64 > /etc/krb5.keytab
Now clean up the files on both machines
rm /tmp/client.keytab
rm /tmp/client_keytab.b64
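Before moving on, it is worth locking down the keytab and confirming it actually works; this sketch assumes the krb5-client tools installed earlier:
# The keytab is equivalent to the host's password: restrict it to root
chown root:root /etc/krb5.keytab
chmod 600 /etc/krb5.keytab
# List the principals stored in the keytab
klist -k /etc/krb5.keytab
# Request a ticket with the keytab to prove Kerberos works end to end
kinit -k host/$FQDN@$REALM
klist
kdestroy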
- Now let's complete some file permission validations. Run the following to avoid any file ownership conflicts with SSSD:
chown root:sssd /etc/sssd/sssd.conf
chmod 640 /etc/sssd/sssd.conf
- Set PAM options to use SSSD and create a home directory for new users. Consider leaving home directory creation on even for servers, as container hosts often need mount points and temporary places to store files when backing up before updates. It is not strictly necessary, but I recommend it until you are building large environments.
pam-config -a --sss
pam-config -a --mkhomedir
- Use "exit" to leave the transactional-update shell and then perform the following SELinux updates and enable the services
setsebool -P unconfined_service_transition_to_unconfined_user 1
setsebool -PV kerberos_enabled on
ausearch -c 'cockpit-session' --raw | audit2allow -M my-cockpitsession
semodule -X 300 -i my-cockpitsession.pp
systemctl enable --now sssd
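At this point you can verify the join; "admin" below is the FreeIPA default account, so substitute any domain user:
# Check that SSSD can reach the IPA domain
sssctl domain-status $TLD
# Resolve a domain user through NSS (note use_fully_qualified_names = True)
getent passwd admin@$TLD
id admin@$TLD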
- Now we need to bring the Cockpit web certificates into our domain. To do this, acquire a certificate from FreeIPA and place it in /etc/cockpit/ws-certs.d/; there are several ways to achieve this, so I will leave the method up to you. [I will cover a way to achieve this in article 2 - Installing and basic usage of FreeIPA]. Name the new certificate and key so they start with an integer higher than 0 so Cockpit will use them on restart. After they are saved, run the following for the certificate change to take effect:
systemctl restart cockpit.socket
Let's Finally Run Something
I promise everything above this gets easier over time and can even be automated with some simple scripting for your organization. But now let's run some actual code. Today we are going to build a PiHole DNS server running in rootless podman.
** NOTE **
If you are new to podman: it is a daemonless container runtime, which removes several of Docker's security vulnerabilities. You can still use Docker, but podman is the supported option for all of my enterprise articles. Additionally, podman uses pods like Kubernetes, making it easier to bind services and files together.
If you are new to containers, you probably need a different article and a bottle of strong liquor.
** EOF **
First, make a local user to invoke podman with. While it is possible to use domain users with UID and GID mapping, especially with FreeIPA, I don't recommend it at this time. Instead create a local user with a name such as "podman-runner" or "local-podman". Use
useradd -m podman-runner
to create the user (-m ensures the home directory exists), and then lock the user out of login with
passwd -l podman-runner
which prevents the user from being used for normal login.
Now let's create the dependency files and directories for the latest version of PiHole. If you haven't done this since v5 (pre 2024.07.0), follow carefully: the project did some major updates and everything has changed.
- Create the following directories. At this time do not use volumes for this, as I have had unexplained bugs with the file mounts.
- /home/podman-runner/customlist
- /home/podman-runner/etc-pihole
- /home/podman-runner/ssl-certs
- /home/podman-runner/dns-share
sudo -u podman-runner mkdir /home/podman-runner/{customlist,ssl-certs,dns-share,etc-pihole}
sudo -u podman-runner touch /home/podman-runner/customlist/custom.list
- Enable login lingering so the user's services can run without an active login session:
sudo loginctl enable-linger podman-runner
- Now acquire the CA certificate for PiHole to trust your domain, the PiHole website's SSL certificate, and the PiHole website's SSL key. Name them as follows inside the /home/podman-runner/ssl-certs directory:
- CA Certificate - tls_ca.crt
- PiHole SSL Certificate - tls.crt
- PiHole SSL Key - tls.key
PiHole expects a combined SSL key and certificate file; use cat to create it:
sudo -u podman-runner cat /home/podman-runner/ssl-certs/tls.key /home/podman-runner/ssl-certs/tls.crt | sudo -u podman-runner tee /home/podman-runner/ssl-certs/tls.pem
- Now create the pod as the podman-runner user by prefixing commands with sudo -u podman-runner. I recommend binding the volumes to the pod and not the container in this use case, as it simplifies container updates when you don't have to retype all of the volume and port mounts. Change to /tmp first so the sudo'd user has a working directory it can access:
cd /tmp
sudo -u podman-runner podman pod create \
-p 8080:443 \
-p 8443:443 \
-p 5353:53/tcp \
-p 5353:53/udp \
-v /home/podman-runner/customlist/custom.list:/etc/pihole/custom.list:Z \
-v /home/podman-runner/etc-pihole/:/etc/pihole:Z \
-v /home/podman-runner/dns-share/:/etc/dnsmasq.d:Z \
-v /home/podman-runner/ssl-certs/tls.pem:/etc/pihole/tls.pem:Z \
-v /home/podman-runner/ssl-certs/tls_ca.crt:/etc/pihole/tls_ca.crt:Z \
--name pihole
- Now run the container in the pod to start the service. I will leave the version as latest for this guide, but pin a version number when running in production. Fill in the following variables in the command before running (a quick health check follows this step):
- MACHINEIP - The DNS server's IP address
- TZ - Timezone of the server (e.g. America/Chicago)
- PROXYLOCATION - FQDN of the server
- IPv6 - This is set to false; if you need this guide you should probably leave it there
sudo -u podman-runner podman run -d \
--restart always \
--pod pihole \
-e FTLCONF_LOCAL_IPV4=$MACHINEIP \
-e FTLCONF_dns_upstreams=9.9.9.9 \
--dns=9.9.9.9 \
-e TZ=$TZ \
-e PROXY_LOCATION=$PROXYLOCATION \
-e VIRTUAL_HOST=$PROXYLOCATION \
-e IPv6=False \
-e WEB_PORT=443 \
--name pihole-app \
docker.io/pihole/pihole:latest
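Before adding firewall rules, a quick health check; dig comes from the bind-utils package we installed earlier:
# Confirm the pod and container are running
sudo -u podman-runner podman pod ps
sudo -u podman-runner podman logs pihole-app
# Query DNS on the unprivileged port directly
dig @127.0.0.1 -p 5353 opensuse.org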
- This is a rootless setup, so now we need to add firewall forwarding rules to allow our rootless container to receive traffic on privileged ports. We accomplish this with firewall-cmd port forwarding, which forwards incoming traffic from the privileged ports to the unprivileged ports we assigned above (a verification follows these rules).
sudo firewall-cmd --zone=public --add-forward-port=port=80:proto=tcp:toport=8080 --permanent
sudo firewall-cmd --zone=public --add-forward-port=port=443:proto=tcp:toport=8443 --permanent
sudo firewall-cmd --zone=public --add-forward-port=port=53:proto=tcp:toport=5353 --permanent
sudo firewall-cmd --zone=public --add-forward-port=port=53:proto=udp:toport=5353 --permanent
sudo firewall-cmd --reload
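You can confirm the forwards took effect with the following. Note that forward rules generally do not apply to loopback traffic, so test port 53 from another machine on the network:
# List the active forward rules
sudo firewall-cmd --zone=public --list-forward-ports
# From another host: DNS should now answer on the standard port
dig @$MACHINEIP opensuse.org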
- This setup disables any form of HTTP access, including forwarding. This isn't an issue on current Chrome browsers, which enable HTTPS by default, but you might run into issues on other browsers that still send an initial request over HTTP before forwarding to HTTPS. So go to https://$FQDN/admin/login to log in. You will need to set an initial password, which you can do with
sudo -u podman-runner podman exec -ti pihole-app pihole setpassword
Rebooting with this container
Rootless podman containers don't auto-restart on reboot, so we have to make a systemd user service for this to work correctly. Note this method is technically deprecated, but the Podman team has failed to provide anything useful to replace this critical feature, so it is strongly recommended to ignore deprecation warnings. To do this we will have to run the following:
- Create the file location
sudo -u podman-runner mkdir -p /home/podman-runner/.config/systemd/user
- Create the system service file:
sudo -u podman-runner podman generate systemd --name pihole-app --new | sudo -u podman-runner tee /home/podman-runner/.config/systemd/user/container-pihole-app.service
- Reload systemctl and enable the service
export XDG_RUNTIME_DIR=/run/user/$(id -u podman-runner)
sudo -u podman-runner XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR systemctl --user daemon-reexec
sudo -u podman-runner XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR systemctl --user daemon-reload
sudo -u podman-runner XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR systemctl --user enable --now container-pihole-app.service
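To confirm everything survives a restart (a minimal check):
# Verify the user service is active
sudo -u podman-runner XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR systemctl --user status container-pihole-app.service
# Then reboot, unlock the disk when prompted, and confirm the container returned
sudo -u podman-runner podman ps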
YAY! YOU DID IT!
Come Back For More
As you can tell, configuring enterprise-level setups is a complicated process, but that doesn't mean it has to be difficult. Continue following this series for more on the tools that businesses large and small can use to make life easier and safer. And for those who are new, or just struggling with the complexities of the world of enterprise Linux: DO NOT GIVE UP! It will take time and learning, but you've got this.
Until next time this is Oran Clay. EOF.