VPS deal aggregate sites, such as [lowendbox](https://lowendbox.com), regularly provide offers on storage VPS services that rival shared web hosting or dedicated cloud backup providers. However, while the cost of cloud storage space has dropped dramatically, the CPU and memory configurations offered with most storage VPS plans are quite limited. Oftentimes a VPS provider will limit virtualized storage VPS instances to only 1 CPU core and <1GB of memory (512MB is also common), since the intended use case is simply to provide enough resources for effective backup and retrieval.

### Selecting an OS
For the purposes of minimizing RAM consumption on a low-end VPS, it is imperative to select an OS that is lightweight, flexible, well-supported, and capable of running your intended services. Unfortunately, while most compute VPS instances provide a seemingly endless selection of server OSes to choose from, choices are usually limited on storage VPS instances by comparison. Windows Server is not a great choice because of its large memory footprint compared to its Linux-based equivalents. **Ubuntu Server LTS, CentOS, or Debian are usually safe bets**, but make sure to check that any third-party software (_i.e._ software not in the repositories) that you intend to run is compatible with the libraries that ship with your Linux distribution of choice. The most common problem is trying to use newer software on older, stable Linux distributions. For example, [JRiver Media Center](https://yabb.jriver.com/interact/index.php/board,58.0.html) is built on Debian 8 Jessie but will not run on CentOS 7 because CentOS ships with older C++ libraries. I prefer Red Hat distributions (for reasons I will explain in a later post) but [my storage VPS host](https://www.time4vps.com) does not offer Fedora as an option (likely due to its short release cycle), which meant that my choices were limited to either Ubuntu 16.04 LTS or Debian 8. Expect a base Linux OS installation to use ~125MB of RAM.

### Selecting a web server
[Digital Ocean](https://www.digitalocean.com/community/tutorials/apache-vs-nginx-practical-considerations) offers an in-depth comparison of the two major web servers, Apache and Nginx. Long story short, **Nginx provides better performance and uses fewer resources than Apache** thanks to its asynchronous design. The Nginx process itself is single-threaded, which is perfect for a low-end VPS with only a single CPU core. Nginx can also be used as a [reverse proxy](https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/) (using the `proxy_pass` block) to act as a secure front-end for other web services.
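For example, a minimal `proxy_pass` front-end for a backend service listening locally might look like the following sketch (the domain and backend port are assumptions):

```nginx
# Forward requests for a subdomain to a local backend service
server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```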
### Selecting a website generator
In the past, web development for the layman necessitated using heavy content management systems like [WordPress](https://wordpress.com) to handle website design and editing. Unfortunately, **a single instance of WordPress can easily use up >256MB of RAM**, which would quickly gobble up the available resources on a low-end VPS. Additionally, WordPress creates dynamic sites that chew up CPU cycles on the server whenever the site is accessed. Luckily there has recently been a shift among (primarily smaller) sites to static site generators such as [**Jekyll**](https://jekyllrb.com/) (my favorite) or [Hugo](https://gohugo.io/) that compile sites beforehand and only serve static HTML/CSS versions of each page. This results in much lower overhead and resource usage, since post-compile nothing is actually running except your web server (of course your OS may cache the site in memory to speed up future accesses).

It's certainly possible to generate your sites locally and only transfer the output to your server's `/www` directory. However, since content creation in Jekyll is primarily some variation of Markdown, I prefer using **Git** to push to the server and using post-receive hooks to generate the site remotely. It only takes a few seconds to generate most sites, and as a result your source content is safely duplicated and version controlled on the storage VPS (that is the point of using a storage VPS in the first place, _right_?). As an alternative to git post-receive hooks (and for a slightly higher performance overhead), you can also use Jekyll's `--incremental` and `--watch` switches to easily and automatically regenerate selected site folders.

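As a sketch of this push-to-build pattern (all paths here are illustrative, and the `jekyll build` step is left commented out), a bare repository's `post-receive` hook can check the pushed source out into a working directory:

```shell
#!/usr/bin/env bash
# Minimal local demo of push-to-deploy: a bare repo whose post-receive
# hook checks the pushed tree out into a web root.
set -euo pipefail

BARE=/tmp/site.git WWW=/tmp/site-www SRC=/tmp/site-src
rm -rf "$BARE" "$WWW" "$SRC" && mkdir -p "$WWW"

git init --bare -q "$BARE"
cat > "$BARE/hooks/post-receive" <<'EOF'
#!/usr/bin/env bash
# Check out the newly pushed master branch into the web root
GIT_WORK_TREE=/tmp/site-www git checkout -f master
# jekyll build -s /tmp/site-www -d /var/www/html  # on a real server
EOF
chmod +x "$BARE/hooks/post-receive"

git init -q "$SRC"
cd "$SRC"
echo "# hello" > index.md
git add index.md
git -c user.name=demo -c user.email=demo@example.com commit -qm "initial"
git push -q "$BARE" HEAD:master   # triggers the hook on the "server"
ls /tmp/site-www                  # list the deployed files
```

On a real server you would push over ssh to the bare repository instead of a local path.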
### Additional Services
In addition to a lightweight web server, consider adding the following useful services:
* Private Dynamic DNS
  * Use [MiniDynDNS](https://github.com/arkanis/minidyndns) to listen for IP address updates from your other computers (running a dynamic DNS client).
* File syncing
  * Run an [rsync daemon](https://www.atlantic.net/cloud-hosting/how-to-setup-rsync-daemon-linux-server/) to cache large numbers of incrementally-updated files without opening and closing an ssh connection.
  * For a slightly higher performance penalty, consider using [syncthing](https://syncthing.net/) to keep directories automatically synced between one or more computers. This can be useful when transferring static sites from a build server to the production server. Make sure to enable the file watcher functionality and decrease the full-scan interval to keep CPU cycles to a minimum.
* Code repository
  * Following Microsoft's recent acquisition of GitHub, many free software advocates are looking to plan B solutions for storing their git repositories online. [GitLab](https://about.gitlab.com/) is a popular alternative, but self-hosting can also make a lot of sense for small or personal projects. While GitLab is available to run on your own site, it is quite resource heavy, making it a poor choice for low-end storage VPSes. Lightweight alternatives include [**Gogs**](https://gogs.io/) and [cgit](https://git.zx2c4.com/cgit/). cgit is the lightest of the bunch, but Gogs presents a more familiar "GitHub-like" interface that is more comfortable for collaborators or others trying to view or clone your code.
* Certbot
  * Keep your LetsEncrypt HTTPS certificates automatically up-to-date. If you are using Nginx as a reverse proxy with subdomains, the new LetsEncrypt wildcard certificates can be used to secure all of your subdomains in one go. Don't forget to also include the root of your site in the certificate!
* Media server
  * This can be as simple as an FTP server or as complicated as a dedicated media server program like the aforementioned JRiver Media Center.

There are three steps that need to occur for this to happen seamlessly after the git repos have been created (which is outside the scope of this post).
On the client:
1. Push website changes to the server
On the server:
By combining the subgit strategy with some structured naming conventions it is possible to push, build, and deploy multiple subdomains or sites using a single git push from the client!
Example:
~~~bash
#!/usr/bin/env bash
"/var/lib/git/gogs/gogs" hook --config='/var/lib/git/gogs/conf/app.ini' post-receive
~~~

The step I'm sure you've been waiting for.

A wild goose chase:
1. LetsEncrypt first asks your *yoursite*.com domain for the TXT record at _acme-challenge.*yoursite*.com to complete the challenge
2. The Namecheap DNS server responds with a CNAME record that points to ch30791e-33f4-1af1-7db3-1ae95ecdde28.acme.*yoursite*.com, so LetsEncrypt goes there instead
3. The authoritative DNS server for \*.acme.*yoursite*.com is ns1.acme.*yoursite*.com, which points at your server IP (running acme-dns)
4. LetsEncrypt can finally ask ns1.acme.*yoursite*.com for the TXT record at ch30791e-33f4-1af1-7db3-1ae95ecdde28.acme.*yoursite*.com, and acme-dns will answer that question
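The resulting DNS records, using example.com as a stand-in for your domain and a placeholder IP, would look roughly like this:

```text
; Illustrative records only; names, token, and IP are placeholders
_acme-challenge.example.com.  CNAME  ch30791e-33f4-1af1-7db3-1ae95ecdde28.acme.example.com.
acme.example.com.             NS     ns1.acme.example.com.
ns1.acme.example.com.         A      203.0.113.10
```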
### Additional Considerations
We will be using getty to handle autologin for our user.

1. Run `sudo systemctl edit getty@tty1`. This will open your default system editor to create an override service file for the systemd getty@tty1.service.

2. Enter the following into the drop-in override file you just opened/created and save it (replacing `username` with your actual username):

```text
[Service]
Type=simple
ExecStart=
ExecStart=-/sbin/agetty --autologin username --noclear %I $TERM
```
3. Reload, restart, and enable the service file to load on boot: `sudo systemctl daemon-reload && sudo systemctl restart getty@tty1 && sudo systemctl enable getty@tty1`
Your system will now autologin the user you specified when you reboot!
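To verify that the drop-in was picked up, you can print the merged unit definition:

```bash
# Shows getty@tty1.service plus any override.conf drop-ins
systemctl cat getty@tty1
```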
In order to run a graphical program or window manager, you will first need to start an X server. We can start one automatically using a shell profile file that is sourced during user login. The location of this file (e.g. `/etc/profile.d/`, `~/.profile`, `~/.bash_profile`, `~/.zprofile` (zsh), etc.) depends on your Linux distribution and shell settings. Here I am assuming that you are using the bash shell (you can confirm this via the `echo $SHELL` command), thus we will place the relevant commands in `~/.bash_profile`.
1. Add the following to the end of your `~/.bash_profile`:

```bash
# Start X11 automatically
if [[ -z "$DISPLAY" ]] && [[ $(tty) = /dev/tty1 ]]; then
  exec startx
fi
```

Now that the X server is set to run on login, it needs to be configured to start your window manager, in this case Openbox.
2. Add the following line to `~/.xinitrc` and save the file:

```bash
exec openbox-session
```
Here are the necessary steps to create and activate a systemd service file to start and stop JRiver Media Center.
1. Create the file `/etc/systemd/system/jriver.service` and add the following (replacing username with your username):

```text
[Unit]
Description=JRiver Media Center 25
After=graphical.target
```

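For reference, a complete service file of this shape might look like the following sketch; the binary path, username, and display number are assumptions, not JRiver's documented values:

```text
[Unit]
Description=JRiver Media Center 25
After=graphical.target

[Service]
Type=simple
User=username
Environment=DISPLAY=:0
ExecStart=/usr/bin/mediacenter25
Restart=on-failure

[Install]
WantedBy=graphical.target
```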
JRMC contains a powerful [Media Server](https://wiki.jriver.com/index.php/Media_Server) that enables clients to play and manage media in JRMC as if they were using a local copy of the library. It is certainly possible to use JRMC Media Server exclusively to manage and play your media from the server to your clients in the traditional server-client model.
JRMC Media Server pros:
1. Tag changes are synced seamlessly between all devices
2. Client devices are easy to add/remove
The benefits of the client-client model over the server-client model include:
* Automatic redundancy of the media library
* Each client has access to the media library even when offline
* Each client can maintain its own set of views, playlists, and smartlists
  * Useful if you want to give read-only access to a client and allow it to store and display its own set of ratings from a custom tag
* Low bandwidth and low latency for playback
* Cross-platform since file structure does not have to be identical

All that must be done to enable this functionality is to set up Auto-Import on each client to point at your shared (via Syncthing) media folder!
The real magic here is to store as much information as possible in the file tags so that they are synced via Syncthing between JRMC clients. This can include basic information like ratings, artwork, audio analysis data (R128 normalization) or more advanced information like user-defined fields that can be used to keep smartlists in sync (see [Advanced tagging](#advanced-tagging) below for more information).

#### Sending metadata

To propagate changes from a client to other clients, we will need to enable *Edit>Edit File Tags When File Info Changes* on any JRMC client that we want to have read-write access to the file metadata. If you leave this option unchecked on a client, then that client will maintain its own set of metadata in the JRMC database without propagating changes. If you want to edit the actual file tags without affecting other clients (e.g. you are moving files on the client to a handheld device), then go ahead and enable the option but set your Syncthing client to Receive Only so that it maintains its own local database state. I also recommend enabling automatic file tagging during file analysis upon Auto-Import (*Options>Library & Folders>Configure auto-import>Tasks>Write file tags when analyzing audio...*) so that analysis only needs to be performed once on the client that performs the initial file import.
Below I will describe two examples of expanding the functionality of the client-client model using file tags.

#### Tracking newly added media

Sometimes it is useful to keep track of which client has added a particular file to the Syncthing network. You can do this by creating a custom user-defined string field in JRMC (*Options>Library & Folders>Manage Library Fields*) named *Imported From* and check the box to *Save in file tags (when possible)*. Then configure each client to apply their specific client name to the field upon auto-import: In *Options>Library & Folders>Configure auto-import* select your auto-import directory that you are sharing with Syncthing, click *Edit...* and under *Apply these tags (optional)>Add>Custom* select the field you just created and enter the client name as the value. For instance I have named my clients *HTPC*, *Laptop*, *VPS*, and *Work*. In this manner you can track where your files were originally imported from.
## Note

**The scripts provided in this tutorial have been superseded by the simpler [podmanRun]({% post_url 2020-05-15-podmanrun-a-simple-podman-wrapper %}) wrapper.**
## Overview

In this tutorial we will be using Atom's [build package](https://atom.io/packages/build) (although you are free to use your own IDE) and a container management script to run files/commands on default system images using podman. We will go one step further by enabling systemd support in our build environment. We will also provide the option of masking the program's output from the host using unnamed volumes.
## Introduction

It is important to remember that a development environment can be just as important as the code itself. Over time, our development environments morph into unique beasts that are specific to each user. Therefore, it is imperative to test your programs in several default environments prior to distribution.
In the past, this was performed on virtual machines (VMs) that contained a default installation of the distribution that you were targeting. Thanks to their snapshotting abilities it was fairly trivial to restore distributions to their default state for software testing. However, this method had its drawbacks:
* The default state was never the *current* default state for long. VMs had to be continually upgraded via their package managers to stay up-to-date with the development environment. They also needed to be modified in some cases (e.g. to enable sshd and allow authentication-less sudo), so deploying newer image versions required manual intervention
* Retroactive changes to existing VMs are difficult
* VMs are difficult to automate, requiring third-party tools (e.g. kickstart files, Ansible, etc.) to manage them
* Each VM gets its own IP address, which makes it difficult to automate ssh-based program building/script running
* VMs are computationally heavy. Their footprint is an entire duplication of the host OS and its virtualization stack, in both memory and disk space. Taking and restoring snapshots is slow.
* There is a meaningful amount of performance loss because disk I/O between the host and the guest is handled using network protocols. For example, an Atom VM build command would normally look something like this:

```bash
cat {FILE_ACTIVE} | ssh fedora-build-machine.lan "cat > /tmp/{FILE_ACTIVE_NAME} ; mkdir -p {FILE_ACTIVE_NAME_BASE}; cd {FILE_ACTIVE_NAME_BASE}; chmod 755 /tmp/{FILE_ACTIVE_NAME} ; /tmp/{FILE_ACTIVE_NAME}"
```

Containers alleviate all of the problems associated with using VMs to execute code.

They:
* Use standardized images of your target distributions and make it possible to execute commands directly on them
* Allow you to create your own custom base images using Dockerfiles, which are built on top of other rolling images that are automatically maintained
* Support several different networking options, such as automatically using the host network or operating via its own whitelisted service
* Perform well because the code is running on the same kernel as the OS
* Can be created and destroyed nearly instantaneously, which makes them much better for executing frequent build commands (I'm a big F5'er)

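To illustrate the last point, a throwaway container can execute a build command directly on a pristine image and clean up after itself (the image name, script name, and mount flags here are illustrative):

```bash
# Run a script from the current directory on a default Fedora image,
# removing the container as soon as the command exits
podman run --rm -v "$PWD:/work:Z" -w /work fedora:latest ./build.sh
```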
### Podman and Toolbox
Podman is a container manager by Red Hat that is available on Fedora and CentOS and integral to Silverblue and CoreOS. Red Hat has also shipped some fun stuff built on top of Podman, such as [Toolbox](https://fedoramagazine.org/a-quick-introduction-to-toolbox-on-fedora/), which combines system overlays and containers to provide seamless build environments for past and current CentOS and Fedora releases (theoretically you should be able to provide your own custom image, although the documentation is currently scant). Toolbox will get you 90% of the way to automated builds as long as you:
* only target Red Hat-based distributions
* don't develop or test systemd scripts or need to utilize existing systemd services (**systemd does not work in Toolbox**)
* are comfortable with having your entire $HOME exposed to your build environment
* don't need to nest toolboxes

Toolbox may make sense if you run separate instances of your IDE from *inside* the toolbox containers, but then you are just back to creating custom build environments within each container, only now separated from the host OS. Unfortunately, Toolbox does not support nesting containers, so testing your code on default images from within a toolbox is currently impossible. Additionally, if your scripts change environment variables, they may be difficult to test because the toolbox is mutable.
### Prerequisites
1. You have a script or command to execute on build. Let's start with something easy like:
```bash
#!/usr/bin/env bash
# ./hello-pwd-ls.sh
pwd
ls -al
exit $?
```
2. You have [Atom](https://atom.io/) and the [build](https://atom.io/packages/build) package installed (the podman commands I will highlight in this post will work equally well with whichever IDE you choose, in conjunction with its external build commands)
3. You are somewhat familiar with `.atom-build.yml` (or can copypasta)
4. You have podman installed
### Configuration
#### run-with-podman.sh
I created the following script to handle container execution depending on a few arguments.

Download [run-with-podman.sh](https://git.bryanroessler.com/bryan/run-with-podman/src/master/run-with-podman.sh) and install to `$HOME/.local/bin`:

```bash
wget -q -O "${HOME}/.local/bin/run-with-podman" "https://git.bryanroessler.com/bryan/run-with-podman/src/master/run-with-podman.sh"
```
If you prefer to copy-paste:
```bash
#!/usr/bin/env bash
# ... (script body elided; see the download link above) ...
```

There are several things to highlight in this script:
1. The filename is first sanitized so that it can be used to generate a unique container name.
2. Next, we edit SELinux permissions on our `pwd` to allow the container full access to our build directory. Editing SELinux permissions is always a balance between ease-of-use and security, and I find setting the container_file_t flag is a nice balance. If your script doesn't do much file I/O, it may be possible to run it by only altering permissions on `$FILE_ACTIVE`.
3. Depending on the mode, we either remove and recreate the container or create a new one
4. We mount the `pwd` in the container
5. If `OUTPUT=0`, we mask the output directory `-v "{FILE_ACTIVE_PATH}/${OUTPUT_DIR}"` by mounting an unnamed volume, so that output is only stored in the container and not on the host filesystem. You can repeat this as many times as necessary to exclude other subdirectories in your build directory.
6. Enable `--systemd=always` if you plan on interacting with `systemctl` using your script. The default `on` state will only enable systemd when the command passed to the container is `/usr/sbin/init`. Since it is not possible to pass more than one command and we must pass our script, this should be set to `always`.
7. Make sure to make the script executable in the container using `chmod 755`

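Putting these together, a typical invocation might look something like the sketch below; the exact flag spellings are assumptions, so consult the script itself:

```bash
# Rebuild the container each time, mask the output directory, and
# enable systemd support inside the container
run-with-podman --file "{FILE_ACTIVE}" --file-path "{FILE_ACTIVE_PATH}" \
  --mode 0 --mask-dir output --systemd always --image fedora:latest
```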
##### `--file` and `--file-path`
This can be a script running a list of commands (e.g. a build script) or a single command.

##### `--mode`
0. Nonpersistent container (always recreate) (Default)
1. Persistent container
2. Recreate persistent container

##### `--mask-dir`
Optionally, one can mask output from the host system (so that it only resides in a container volume) using `--mask-dir`. As demonstrated in the [prerequisites](#prerequisites), it is important to have your program write to the `--mask-dir` directory specified in your `.atom-build.yml` (in this case 'output'). This provides you the ability to optionally mask the output directory with an unnamed volume so that no files are actually written to the host. This avoids two failure modes:

* If the script is configured to overwrite existing output, it may threaten a live system (like a website or any other running process that depends on the script output)
* If the script is configured to not overwrite existing output, the script may not run correctly

Output masking gives you the power to control these variables independently of one another by writing output to the container only.

##### `--image`

The container image to be used to execute the command.
#### .atom-build.yml
In your project directory (next to your script), create the following `.atom-build.yml` file in order to call our script using the appropriate arguments whenever a build is triggered.
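A minimal sketch of such a file, assuming `run-with-podman` is on your `$PATH` (the argument spellings here are illustrative):

```yaml
# .atom-build.yml: run the active file in a container on each build
name: run-with-podman
cmd: run-with-podman
args:
  - --file={FILE_ACTIVE}
  - --file-path={FILE_ACTIVE_PATH}
  - --mode=0
  - --image=fedora:latest
```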
@@ -16,11 +16,13 @@ Here's a simple script/function to keep *n* number of the latest files that matc
### Code
[prunefiles](https://git.bryanroessler.com/bryan/scripts/raw/master/prunefiles):
~~~bash
{% insert_git_code https://git.bryanroessler.com/bryan/scripts/raw/master/prunefiles %}
~~~
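
The core idea of the script can be sketched in a few lines of shell (a simplified illustration, not the actual `prunefiles` implementation; `prune_files` is a hypothetical name, and the `ls | xargs` pipeline is fragile with whitespace in filenames):

```shell
#!/usr/bin/env bash
# Simplified sketch: keep the $keep newest files among the given paths
# and delete the rest. ls -t sorts newest-first, tail skips the first
# $keep entries, and xargs -r removes whatever remains.
prune_files() {
    local keep="$1"; shift
    ls -t -- "$@" 2>/dev/null | tail -n +"$((keep + 1))" | xargs -r rm --
}
```

For example, `prune_files 2 Package-*.rpm` would keep the two most recently modified packages and remove the older ones.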
### Example
~~~bash
$ ls
Package-25-1.rpm
~~~

@@ -22,22 +22,25 @@ Anyone that wants to easily run programs in ephemeral or persistent containers.
Not much, by design.
1. Generates a unique container name based on the `--name` argument passed to `podman` within the `podmanRun` `--options` string. If no `--name` is specified in the `--options` string, podmanRun will generate a unique container name based on the concatenated options and commands passed by the user. Thus, if any options or commands are changed, a new container will be recreated regardless of whether `--mode=persistent` was set.
2. Checks whether a container with that name already exists.
3. If no matching container was found: the `--options` are passed directly to `podman run` and the commands are executed in the new container.
4. If a matching container was found:
    - `--mode=recreate` will remove the existing container and run the commands in a new container using `podman run` with the provided `--options`.
    - `--mode=persistent` will run the commands in the existing container using `podman exec` and `--options` will be ignored.
5. By default, the container is not removed afterwards (it will only be removed upon subsequent invocations of `podmanRun` using `--mode=recreate`) to allow the user to inspect the container. Containers can be automatically removed after execution by uncommenting the requisite line in `__main()`.

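The name generation in step 1 can be approximated like this (an illustration of the approach, not the actual `podmanRun` internals; `generate_container_name` is a hypothetical helper):

```shell
# Hash the concatenated options and commands so that identical invocations
# map to the same container name, and any change produces a new one.
generate_container_name() {
    printf '%s' "$*" | md5sum | cut -c1-8
}

name="podmanRun_$(generate_container_name "-v=$PWD:$PWD:z debian:testing" "./script.sh")"
```

Because the name is derived from the full option and command strings, changing either yields a different hash and therefore a fresh container, which is why `--mode=persistent` only reuses a container for identical invocations.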
### Usage
For the complete list of up-to-date options, run `podmanRun --help`.

```bash
podmanRun [-m MODE] [-o OPTIONS] [COMMANDS [ARGS]...] [--help] [--debug]
```
#### Options

```text
--mode, -m MODE
```

@@ -58,22 +61,26 @@ Podman options can be passed to `--options` as a single string to be split on wh
##### Examples
Run an ephemeral PHP webserver container using the current directory as webroot:

```shell
podmanRun -o "-p=8000:80 --name=php_script -v=$PWD:/var/www/html:z php:7.3-apache"
```
Run an ephemeral PHP webserver container using the current directory as webroot using IDE:

```shell
podmanRun -o "-p=8000:80 --name=php_{FILE_ACTIVE_NAME_BASE} -v={FILE_ACTIVE_PATH}:/var/www/html:z php:7.3-apache"
```
Run an ephemeral bash script:

```shell
podmanRun -o "--name=bash_script -v=$PWD:$PWD:z -w=$PWD debian:testing" ./script.sh
```
Run an ephemeral bash script using IDE:

```shell
podmanRun -o "--name=bash_{FILE_ACTIVE_NAME_BASE}" \
-o "-v={FILE_ACTIVE_PATH}:{FILE_ACTIVE_PATH}:z" \
-o "-w={FILE_ACTIVE_PATH}" \
@@ -81,8 +88,6 @@ podmanRun -o "--name=bash_{FILE_ACTIVE_NAME_BASE}" \
{FILE_ACTIVE} arg1 arg2
```
## Additional Info
Did you find `podmanRun` useful? [Buy me a coffee!](https://paypal.me/bryanroessler?locale.x=en_US)
@@ -16,11 +16,9 @@ Most existing solutions rely on legacy `ifconfig`, which has been deprecated in
Steps:
1. [Download](https://git.bryanroessler.com/bryan/scripts/raw/master/powershell/wsl2-firewall-rules.ps1) or copy-paste the following Powershell script to a local file:
{% highlight powershell %}
{% insert_git_code https://git.bryanroessler.com/bryan/scripts/raw/master/powershell/wsl2-firewall-rules.ps1 %}
{% endhighlight %}

2. Edit the port list to add any additional WSL2 ports you wish to expose
3. Create a new startup task in the Windows Task Scheduler:
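
One way to register such a task from an elevated prompt (an illustrative `schtasks` invocation; the task name and script path are placeholders to adjust for your setup):

```shell
# Run the firewall-rules script at every boot with elevated privileges
schtasks /Create /TN "WSL2FirewallRules" ^
  /TR "powershell.exe -ExecutionPolicy Bypass -File C:\path\to\wsl2-firewall-rules.ps1" ^
  /SC ONSTART /RU SYSTEM /RL HIGHEST
```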