Here at CoreLine we always strive to streamline and automate the repetitive processes we regularly go through.
For this specific project, we’re developing an Electron desktop app that’s built as an x86 portable exe (it’s used exclusively on Windows computers) and shipped as a self-extracting installer. App updates are delivered through an S3 bucket, so after a successful build and packaging step we need to upload the installer to the bucket.
We were already using Docker to automate the testing and build process (the app is also built as a web app, bundled with Webpack), so we decided to utilise the multi-stage build feature of Docker’s build process.
Here are the most important snippets (stages) of our Dockerfile:
FROM node:14 AS build-environment
RUN dpkg --add-architecture i386 && \
    apt-get update && \
    apt-get install -y --no-install-recommends wine=4.0-2 wine32=4.0-2 && \
    apt-get clean && rm -rf /var/lib/apt/lists/*
Here we’re setting up the build environment on a Node.js 14 base image. We enable the i386 package architecture (needed for the 32-bit wine32 package), install Wine, a compatibility layer for running Windows applications on Unix-like operating systems that’s required to produce Windows builds on Linux, and clean up the package manager caches to keep the layer slim. Moving on…
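If you want to sanity-check this stage on its own, you can build just up to it and ask Wine for its version (the image tag here is only an example):

docker build --target build-environment -t electron-build-env .
docker run --rm electron-build-env wine --version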
FROM node:14 AS pack-environment
RUN apt-get update && \
    apt-get install -y --no-install-recommends p7zip-full=16.02+dfsg-6 && \
    apt-get clean && rm -rf /var/lib/apt/lists/*
WORKDIR /extractor
COPY 7z-extractor .
This stage sets up our packaging environment. It installs p7zip, a command-line file archiver with high compression ratios. It then sets up a working directory and copies the 7z-extractor header and config files (used for the self-extracting installer).
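For context, the config file for a 7-Zip SFX module is a small text block of install directives that gets sandwiched between the SFX header and the archive. A minimal sketch of what such a file might contain (the title and program name here are made up, and the exact directives depend on the SFX module you use):

;!@Install@!UTF-8!
Title="Example App Installer"
RunProgram="ExampleApp.exe"
;!@InstallEnd@!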
FROM build-environment AS project
WORKDIR /project
COPY package.json .
COPY package-lock.json .
RUN npm install
COPY . .
We create a base project stage on top of build-environment. It sets up the project directory, copies the package files, installs the Node.js dependencies, and then copies in the rest of the project. Note how we copy package.json and package-lock.json before the rest of the app to make the most of Docker’s layer-based build caching: as long as the package files don’t change, the npm install layer is reused on subsequent builds.
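Since the final COPY . . pulls in the whole build context, it’s worth pairing this with a .dockerignore file so local artifacts don’t invalidate that layer (or bloat the context). A minimal sketch, with entries assumed from a typical Electron project:

node_modules
dist_electron
.git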
FROM project AS build
ARG MODE=production
RUN npm run electron:build -- --mode "$MODE" --win portable --ia32
This stage finally builds the Electron app using the electron:build npm script. By default it builds the app for production, but we can override the MODE build arg when invoking docker build.
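For example, a development build of just this stage could be produced with something like this (the tag name is illustrative):

docker build --target build --build-arg MODE=development -t desktop-win-build .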
FROM pack-environment AS pack
COPY package.json .
COPY --from=build /project/dist_electron/win-ia32-unpacked unpacked
ARG MODE=production
RUN VERSION=$(node -p "require('./package.json').version") && \
    cd unpacked && 7z a -m"0=LZMA" ../app.7z . && cd .. && mkdir build && \
    { [ "$MODE" = "development" ] && SUFFIX="-DEV" || SUFFIX="" ; } && \
    cat 7z-extractor.sfx "config-$MODE.txt" app.7z > "build/App-win-x86-$VERSION$SUFFIX.exe"
This stage creates the packaged version of the Electron app. It copies package.json (to read the app version) and the unpacked build output from the previous stage, compresses the app with (p)7z using the settings required by our self-extracting archive header, and concatenates the SFX header, the mode-specific config file and the archive into an installer that follows our naming convention.
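To make the naming concrete: with, say, version 1.4.0 in package.json, a development build would produce build/App-win-x86-1.4.0-DEV.exe, while a production build would produce build/App-win-x86-1.4.0.exe (the version number is just an example).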
FROM amazon/aws-cli:2.8.7 AS publish
WORKDIR /publish
COPY --from=pack /extractor/build .
ARG AWS_ENDPOINT
ARG AWS_DEFAULT_REGION
ARG AWS_BUCKET
ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY
RUN export AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID" && \
    export AWS_SECRET_ACCESS_KEY="$AWS_SECRET_ACCESS_KEY" && \
    export AWS_DEFAULT_REGION="$AWS_DEFAULT_REGION" && \
    aws s3 cp --endpoint-url "$AWS_ENDPOINT" --acl public-read --recursive . "s3://$AWS_BUCKET/"
Now this stage is the really weird one. We’re using the AWS CLI to publish the packaged installer to our S3 bucket. It sets up the working directory, copies the packaged app from the pack stage, and performs the AWS S3 copy operation using the provided AWS credentials and endpoint. Keep in mind that this RUN command, like all of the ones above, is executed only at build time, so we’re essentially using the build process itself to build, package and publish our application.
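One caveat: values passed via ARG end up recorded in the image metadata, where docker history can reveal them, which is part of why the image gets removed immediately after the build. If you want to keep the credentials out of the image entirely, BuildKit’s secret mounts are an alternative worth knowing about. A sketch of how the upload step could look with them (this is an alternative approach, not what we run):

RUN --mount=type=secret,id=aws,target=/root/.aws/credentials \
    aws s3 cp --endpoint-url "$AWS_ENDPOINT" --acl public-read --recursive . "s3://$AWS_BUCKET/"

The credentials file would then be supplied at build time with docker build --secret id=aws,src=$HOME/.aws/credentials, and it never becomes part of any layer.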
Finally, we use this Dockerfile by running (in our CI/CD):
docker build --platform linux/x86_64 -t desktop-win . && docker image rm desktop-win
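Note that the command above doesn’t show the build args the publish stage needs; in practice the AWS values have to be passed in as well, presumably injected from the CI’s secret store. A sketch of the full invocation (the variable names follow the ARG declarations above):

docker build --platform linux/x86_64 \
  --build-arg AWS_ENDPOINT="$AWS_ENDPOINT" \
  --build-arg AWS_DEFAULT_REGION="$AWS_DEFAULT_REGION" \
  --build-arg AWS_BUCKET="$AWS_BUCKET" \
  --build-arg AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID" \
  --build-arg AWS_SECRET_ACCESS_KEY="$AWS_SECRET_ACCESS_KEY" \
  -t desktop-win . && docker image rm desktop-win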
As you’ve probably guessed, the image itself isn’t very useful anymore once it’s been built. If we’re doing this locally, we might run a short-lived container and use it to copy the built installer out. That would look like this:
docker build -f Dockerfile-electron-win --target pack --platform linux/x86_64 -t mr-desktop-win --build-arg MODE=development . && \
docker run -it --rm --entrypoint cp -v "$PWD":/host mr-desktop-win -r build/. /host/dist_electron && \
docker image rm mr-desktop-win
Note that we’re also targeting the pack stage, because we want to skip the stage that would publish the installer to our bucket.
In conclusion, this might not be the best possible solution to the problem (it certainly isn’t), but it integrates very well with our existing environment and workflow, it uses the build process in a creative way and, most importantly, it takes advantage of Docker’s caching mechanism: if you run the command again after a successful build, even after making changes to the source code, each build will take much less time to complete.