
run_with_podman code cleanup

bryan 5 years ago
parent
commit
2271e75a2a
1 changed file with 46 additions and 46 deletions
  1. 46 46
      blog.bryanroessler.com/_posts/2020-01-23-run-with-podman.md


@@ -16,12 +16,12 @@ It is important to remember that a development environment can be just as import
 
 In the past, this was performed on virtual machines (VMs) that contained a default installation of the distribution that you were targeting. Thanks to their snapshotting abilities it was fairly trivial to restore distributions to their default state for software testing. However, this method had its drawbacks:
 
-* The default state was never the *current* default state for long. VMs had to be continually upgraded via their package managers to stay up-to-date with the development environment. They also needed to be modified in some cases (e.g. to enable sshd and allow authentication-less sudo) so deploying newer image versions required manual intervention
-* Retroactive changes to existing VMs is difficult
-* VMs are difficult to automate, requiring third-party tools (e.g. kickstart files, Ansible, etc.) to manage them
-* Each VM gets its own IP address, which makes it difficult to automate ssh-based program building/script running
-* VMs are computationally heavy. Their footprint is an entire deduplication of the host OS and its virtualization stack, in both memory and disk space. Taking and restoring snapshots is slow.
-* There is a meaningful amount of performance loss between the hypervisor and disk i/o because it is handled using network protocols. For example, an Atom VM build command would normally look something like this:
+*   The default state was never the *current* default state for long. VMs had to be continually upgraded via their package managers to stay up-to-date with the development environment. They also needed to be modified in some cases (e.g. to enable sshd and allow authentication-less sudo), so deploying newer image versions required manual intervention
+*   Retroactive changes to existing VMs are difficult
+*   VMs are difficult to automate, requiring third-party tools (e.g. kickstart files, Ansible, etc.) to manage them
+*   Each VM gets its own IP address, which makes it difficult to automate ssh-based program building/script running
+*   VMs are computationally heavy. Their footprint is an entire duplication of the host OS and its virtualization stack, in both memory and disk space. Taking and restoring snapshots is slow.
+*   There is a meaningful amount of performance loss because disk I/O between the host and the hypervisor is handled using network protocols. For example, an Atom VM build command would normally look something like this:
 
 ```
 cat {FILE_ACTIVE} | ssh fedora-build-machine.lan "cat > /tmp/{FILE_ACTIVE_NAME} ; mkdir -p {FILE_ACTIVE_NAME_BASE}; cd {FILE_ACTIVE_NAME_BASE}; chmod 755 /tmp/{FILE_ACTIVE_NAME} ; /tmp/{FILE_ACTIVE_NAME}"
@@ -35,27 +35,27 @@ Containers alleviate all of the problems associated with using VMs to execute co
 
 They:
 
-* Use standardized images of your target distributions and make it possible to execute commands directly on them
-* Allow you to create your own custom base images using Dockerfiles, which are built on top of other rolling images that are automatically maintained
-* Support several different networking options, such as automatically using the host network or operating via its own whitelisted service
-* Perform great because the code is running on the same kernel as the OS
-* Can be created and destroyed nearly instantaneously which makes them much better for executing frequent build commands (I'm a big F5'er)
+*   Use standardized images of your target distributions and make it possible to execute commands directly on them
+*   Allow you to create your own custom base images using Dockerfiles, which are built on top of other rolling images that are automatically maintained
+*   Support several different networking options, such as automatically using the host network or operating via its own whitelisted service
+*   Perform well, because the code runs on the same kernel as the host OS
+*   Can be created and destroyed nearly instantaneously, which makes them much better for executing frequent build commands (I'm a big F5'er); a container equivalent of the VM build command above is sketched just below
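 For comparison with the ssh-based VM command earlier, the equivalent container build step is a single local `podman run`. This is only a minimal sketch (the `fedora:latest` image is an illustrative choice; `{FILE_ACTIVE}` and `{FILE_ACTIVE_PATH}` are Atom's substitution tokens used throughout this post, and SELinux labeling for the bind mount is addressed later on):

 ```
 podman run --rm -it \
     -v "{FILE_ACTIVE_PATH}:{FILE_ACTIVE_PATH}" \
     -w "{FILE_ACTIVE_PATH}" \
     fedora:latest \
     /bin/bash -c "chmod 755 {FILE_ACTIVE} && {FILE_ACTIVE}"
 ```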
 
 ### Podman and Toolbox
 
 Podman is a container manager by Red Hat that is available on Fedora and CentOS and is integral to Silverblue and CoreOS. Red Hat has also shipped some fun stuff built on top of Podman, such as [Toolbox](https://fedoramagazine.org/a-quick-introduction-to-toolbox-on-fedora/), which combines system overlays and containers to provide seamless build environments for past and current CentOS and Fedora releases (theoretically you should be able to provide your own custom image, although the documentation is currently scant). Toolbox will get you 90% of the way to automated builds (basic usage is sketched after the list below) as long as you:
 
-* only target Red Hat-based distributions
-* don't develop or test systemd scripts or need to utilize existing systemd services (**systemd does not work in Toolbox**)
-* are comfortable with having your entire $HOME exposed to your build environment
-* don't need to nest toolboxes
+*   only target Red Hat-based distributions
+*   don't develop or test systemd scripts or need to utilize existing systemd services (**systemd does not work in Toolbox**)
+*   are comfortable with having your entire $HOME exposed to your build environment
+*   don't need to nest toolboxes
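 For reference, basic Toolbox usage looks something like the following (a minimal sketch; flags and defaults vary between Toolbox versions, so treat the exact invocations as assumptions):

 ```
 toolbox create                    # create a toolbox container matching the host release
 toolbox enter                     # open an interactive shell inside it
 toolbox run cat /etc/os-release   # or run a single command non-interactively
 ```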
 
 Toolbox may make sense if you run separate instances of your IDE from *inside* the toolbox containers, but then you are just back to creating custom build environments within each container, only now separated from the host OS. Unfortunately, Toolbox does not support nesting containers, so testing your code on default images from within a toolbox is currently impossible. Additionally, if your scripts change environment variables, they may be difficult to test because the toolbox is mutable.
 
 
 ### Prerequisites
 
-1. You have a script or command to execute on build. Let's start with something easy like:
+1.  You have a script or command to execute on build. Let's start with something easy like:
 ```bash
 #!/usr/bin/env bash
 # ./hello-pwd-ls.sh
@@ -64,10 +64,10 @@ pwd
 ls -al
 exit $?
 ```
-2. You have [Atom](https://atom.io/) and the [build](https://atom.io/packages/build) package installed
-   * I won't pontificate on why I am using Atom and the build package as my example IDE. The podman commands I will highlight in this post will work equally as well using whichever IDE you choose to use in conjunction with its external build commands.
-3. You are somewhat familiar with .atom-build.yml (or can copypasta)
-3. You have podman installed
+2.  You have [Atom](https://atom.io/) and the [build](https://atom.io/packages/build) package installed
+    *   I won't pontificate on why I am using Atom and the build package as my example IDE. The podman commands I will highlight in this post will work equally well with whichever IDE you choose, in conjunction with its external build commands.
+3.  You are somewhat familiar with .atom-build.yml (or can copypasta)
+4.  You have podman installed
 
 
 ### Configuration
@@ -233,20 +233,20 @@ elif [[ $MODE == "1" ]]; then
             -it \
             --systemd="${SYSTEMD}" \
             --name "atom-${CLEAN}-persistent" \
-            -v "{FILE_ACTIVE_PATH}:{FILE_ACTIVE_PATH}" \
-            -v "{FILE_ACTIVE_PATH}/${MASK_DIR}" \
-            -w "{FILE_ACTIVE_PATH}" \
+            -v "${FILE_ACTIVE_PATH}:${FILE_ACTIVE_PATH}" \
+            -v "${FILE_ACTIVE_PATH}/${MASK_DIR}" \
+            -w "${FILE_ACTIVE_PATH}" \
             "${IMAGE}" \
-            /bin/bash -c "chmod 755 {FILE_ACTIVE} && {FILE_ACTIVE} ${OPTIONS}"
+            /bin/bash -c "chmod 755 ${FILE_ACTIVE} && ${FILE_ACTIVE} ${OPTIONS}"
         else
             podman run \
             -it \
             --systemd="${SYSTEMD}" \
             --name "atom-${CLEAN}-persistent" \
-            -v "{FILE_ACTIVE_PATH}:{FILE_ACTIVE_PATH}" \
-            -w "{FILE_ACTIVE_PATH}" \
+            -v "${FILE_ACTIVE_PATH}:${FILE_ACTIVE_PATH}" \
+            -w "${FILE_ACTIVE_PATH}" \
             "${IMAGE}" \
-            /bin/bash -c "chmod 755 {FILE_ACTIVE} && {FILE_ACTIVE} ${OPTIONS}"
+            /bin/bash -c "chmod 755 ${FILE_ACTIVE} && ${FILE_ACTIVE} ${OPTIONS}"
         fi
     fi
 # Recreate persistent container
@@ -261,20 +261,20 @@ elif [[ $MODE == "2" ]]; then
         -it \
         --systemd="${SYSTEMD}" \
         --name "atom-${CLEAN}-persistent" \
-        -v "{FILE_ACTIVE_PATH}:{FILE_ACTIVE_PATH}" \
-        -v "{FILE_ACTIVE_PATH}/${MASK_DIR}" \
-        -w "{FILE_ACTIVE_PATH}" \
+        -v "${FILE_ACTIVE_PATH}:${FILE_ACTIVE_PATH}" \
+        -v "${FILE_ACTIVE_PATH}/${MASK_DIR}" \
+        -w "${FILE_ACTIVE_PATH}" \
         "${IMAGE}" \
-        /bin/bash -c "chmod 755 {FILE_ACTIVE} && {FILE_ACTIVE} ${OPTIONS}"
+        /bin/bash -c "chmod 755 ${FILE_ACTIVE} && ${FILE_ACTIVE} ${OPTIONS}"
     else
         podman run \
         -it \
         --systemd="${SYSTEMD}" \
         --name "atom-${CLEAN}-persistent" \
-        -v "{FILE_ACTIVE_PATH}:{FILE_ACTIVE_PATH}" \
-        -w "{FILE_ACTIVE_PATH}" \
+        -v "${FILE_ACTIVE_PATH}:${FILE_ACTIVE_PATH}" \
+        -w "${FILE_ACTIVE_PATH}" \
         "${IMAGE}" \
-        /bin/bash -c "chmod 755 {FILE_ACTIVE} && {FILE_ACTIVE} ${OPTIONS}"
+        /bin/bash -c "chmod 755 ${FILE_ACTIVE} && ${FILE_ACTIVE} ${OPTIONS}"
     fi
 fi
 
@@ -282,13 +282,13 @@ fi
 
 There are several things to highlight in this script:
 
-1. The filename is first sanitized so that it can be used to generate a unique container name.
-2. Next, we edit SELinux permissions on our `pwd` to allow the container full access to our build directory. Editing SELinux permissions is always a balance between ease-of-use and security and I find setting the container_file_t flag is a nice balance. If your script doesn't do much file i/o it may be possible to run it by only altering permissions on `$FILE_ACTIVE`.
-3. According to the mode we either remove and recreate or create a new container
-4. We mount the `pwd` in the container
-5. If `OUTPUT=0, `we mask the output directory `-v "{FILE_ACTIVE_PATH}/${OUTPUT_DIR}"` by mounting an unnamed volume, so that output is only stored in the container and not on the host filesystem. You can repeat this as many times as necessary to exclude other subdirectories in your build directory.
-6. Enable `--systemd=always` if you plan on interacting with `systemctl` using your script. The default `on` state will only enable systemd when the command passed to the container is `/usr/sbin/init`. Since it is not possible to pass more than one command and we must pass our script, this should be set to `always`.
-7. Make sure to make the script executable in the container using `chmod 755`
+1.  The filename is first sanitized so that it can be used to generate a unique container name.
+2.  Next, we edit SELinux permissions on our `pwd` to allow the container full access to our build directory. Editing SELinux permissions is always a trade-off between ease of use and security, and I find that setting the `container_file_t` context strikes a nice balance (see the sketch after this list). If your script doesn't do much file I/O, it may be possible to run it by only altering permissions on `$FILE_ACTIVE`.
+3.  Depending on the mode, we either remove and recreate the persistent container or create a new one.
+4.  We mount the `pwd` in the container.
+5.  If `OUTPUT=0`, we mask the output directory by mounting an unnamed volume over it (`-v "${FILE_ACTIVE_PATH}/${MASK_DIR}"`), so that output is only stored in the container and not on the host filesystem. You can repeat this as many times as necessary to exclude other subdirectories in your build directory.
+6.  Enable `--systemd=always` if you plan on interacting with `systemctl` using your script. The default `on` state will only enable systemd when the command passed to the container is `/usr/sbin/init`. Since it is not possible to pass more than one command and we must pass our script, this should be set to `always`.
+7.  Make sure to make the script executable in the container using `chmod 755`
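 The SELinux adjustment in item 2 corresponds to something like the following (a sketch; podman's `:z`/`:Z` volume options are an alternative way to relabel the mount):

 ```bash
 # Recursively label the build directory so containers are allowed to read and write it
 chcon -R -t container_file_t "${FILE_ACTIVE_PATH}"
 ```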
 
 
 ##### `--file` and `--file-path`
@@ -299,16 +299,16 @@ This can be a script running a list of commands (e.g. build script) or a single
 
 ##### `--mode`
 
-0. Nonpersistent container (always recreate) (Default)
-1. Persistent container
-2. Recreate persistent container
+0.  Nonpersistent container (always recreate) (Default)
+1.  Persistent container
+2.  Recreate persistent container
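 As an illustration, the modes above might be selected like so (the `run_with_podman.sh` name and the exact flag syntax are assumptions based on the options documented here; adjust them to match your copy of the script):

 ```bash
 # Hypothetical invocation: recreate the persistent container (mode 2),
 # masking the 'output' subdirectory from the host
 ./run_with_podman.sh --file "{FILE_ACTIVE}" --file-path "{FILE_ACTIVE_PATH}" \
     --mode 2 --mask-dir output
 ```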
 
 ##### `--mask-dir`
 
 Optionally, one can mask output from the host system (so that it only resides in a container volume) using `--mask-dir`. As demonstrated in the [prerequisites](#prerequisites), it is important to have your program write its output to the `--mask-dir` specified in your `.atom-build.yml` (in this case 'output'). This gives you the ability to optionally mask the output directory with an unnamed volume so that no files are actually written to the host. This avoids two potential problems:
 
-* If the script is configured to overwrite existing output, it may threaten a live system (like a website or any other running process that depends on the script output)
-* If the script is configured to not overwrite existing output, the script may not run correctly
+*   If the script is configured to overwrite existing output, it may threaten a live system (like a website or any other running process that depends on the script output)
+*   If the script is configured to not overwrite existing output, the script may not run correctly
 
 Output masking gives you the power to sidestep both of these cases independently of the host system by writing output to the container only.
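 In podman terms, masking is just an anonymous volume mounted over the output path inside the bind-mounted project directory. A minimal sketch (paths and the trailing command are illustrative):

 ```
 # 'output' is covered by an anonymous volume, so anything written there
 # stays in the container rather than in the host's copy of 'output'
 podman run --rm -it \
     -v "{FILE_ACTIVE_PATH}:{FILE_ACTIVE_PATH}" \
     -v "{FILE_ACTIVE_PATH}/output" \
     -w "{FILE_ACTIVE_PATH}" \
     fedora:latest \
     /bin/bash -c "touch output/result.txt && ls output"
 ```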