How to optimize the security, size and build speed of Docker images (augmentedmind.de)
77 points by mshekow on Feb 20, 2022 | 15 comments


A few more things to consider:

* I've been playing with checkov recently as a way to track Dockerfile quality and best practices

* If you use GitHub, here are some additional considerations

* Use image digests for base images and configure Dependabot to keep them updated

* Look into implementing OpenSSF Scorecard and Allstar

* Supply chain security is hot right now. Look into cosign (signing) and syft (SBOM); there's a quick sketch of both, plus digest pinning, after this list

* Step Security has a GitHub action to harden the runner. Think of it as Little Snitch for runners
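
For the digest-pinning and cosign/syft points, a rough sketch (the image name, tag, key files, and digest are placeholders, and the commands assume cosign's key-pair flow and syft's SPDX output; adjust to your registry and tooling):

    # In the Dockerfile, pin the base image to a digest (placeholder digest);
    # Dependabot can then bump the pinned digest for you:
    #   FROM python:3.10-slim@sha256:<digest-from-your-registry>

    # Sign and verify the pushed image with cosign (key-pair flow)
    cosign generate-key-pair
    cosign sign --key cosign.key my-registry/app:1.0
    cosign verify --key cosign.pub my-registry/app:1.0

    # Generate an SBOM for the same image with syft
    syft my-registry/app:1.0 -o spdx-json > sbom.spdx.json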


Thanks for the cosign mention! Maintainer here. The link is github.com/sigstore/cosign for anyone reading along!


Thanks for mentioning the harden-runner GitHub Action! Sharing the link: https://github.com/step-security/harden-runner


One watch-out for me is containers that use musl libc, like Alpine. There's nothing inherently wrong with musl libc, but it gets a lot less real-world use, so your chances of hitting something odd are higher. Perhaps less so now that Alpine is more widely used, but I have specifically seen issues with Java.


I would disagree with "Use Docker Content Trust for Docker Hub".

Docker hasn't been signing official images for the last several years, so turning this on means you'll get the last correctly signed images, which happen to be years out of date.
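
For anyone who wants to see this for themselves: Content Trust is just a client-side environment variable, so something like the following (image name is arbitrary) is enough to observe the behavior described above (either a stale signed tag or a signature error):

    # With Content Trust enabled, docker will only pull tags with valid signatures
    export DOCKER_CONTENT_TRUST=1
    docker pull debian:latest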


Interesting - why aren't they signing them?


Not sure, there have been a few issues filed but no official reply.


To make OCI images start faster, use stargz. See the project here:

https://github.com/containerd/stargz-snapshotter

It's a lazy file system for images.
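
For anyone who wants to try it, a minimal sketch of converting and pushing an eStargz image with nerdctl (image names are placeholders; the flags are as I understand them from the stargz-snapshotter README, so double-check against the current docs):

    # Convert an existing image to the eStargz format and push it
    nerdctl image convert --estargz --oci example.com/app:1.0 example.com/app:1.0-esgz
    nerdctl push example.com/app:1.0-esgz

    # Nodes then need containerd configured with the stargz snapshotter
    # (a proxy_plugins entry in /etc/containerd/config.toml) to lazily pull it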


It looks like that makes pulls faster, which is not quite the same thing, although it's probably a good trade for many cases. Huh, that could even work out nicely for cases where you don't end up using everything in the image - AIUI, you could just... never pull those bits if you never used them, which would be nice. Of course, you would be distributing the pull slowness through the runtime of the container.... which, again, is probably a good trade to make, but something to bear in mind.

In any event, thanks for pointing that out; I'll have to play with it :)


No problem! Yeah, it makes things faster to start that have to be pulled first.

I mean there is almost no need to worry about making images smaller if you are using stargz, because it just won't pull anything that's not needed.


> 9. Use docker-slim to remove unnecessary files

Doesn't this, in practice, make the Docker image size situation worse? Docker caches images in layers and reuses e.g. base layers for all operations. Creating a custom single-layer image for each of your binaries negates all the benefits of the layered caching. You have to download the full image on each pull, rather than just the diffs.

Conversely, when I pull the Docker image for an updated version of my software, I typically only have to pull the last few small layers because the base image hasn't changed.
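
One way to see this trade-off, assuming a placeholder image name and the docker-slim flags as I remember them (verify locally):

    # Build a slimmed copy of an existing image
    docker-slim build --http-probe=false --tag my-app:1.0-slim my-app:1.0

    # Compare the layer structure: the original shares its base layers with
    # other local images, while the slimmed image is rebuilt as its own layers
    docker history my-app:1.0
    docker history my-app:1.0-slim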


> ... I typically only have to pull the last few small layers because the base image hasn't changed.

That probably depends on your circumstances! For example, you could use a particular OS image as your base, software with updates as a set of intermediate layers and your software and whatever else you need as the last ones. That way, leaving the layers as they are would indeed result in some pretty good efficiency, since only the changed layers would need to be pulled.

Whereas if you base your software on a particular runtime image, e.g. OpenJDK or one of its variants, then it's unlikely that you'll see such nice benefits, at least if you regularly update the version of the base image that you're using. Now, whether you should update everything that often in the absence of any serious security vulnerabilities is another question.
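
As a concrete sketch of that layering (a hypothetical Python app; the base image, package list, and paths are placeholders):

    # Base OS / runtime layer: changes rarely
    FROM python:3.10-slim
    WORKDIR /app

    # Intermediate layer: dependencies, invalidated only when requirements.txt changes
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt

    # Last layers: application code, changes on every release, so only these
    # small layers need to be re-pulled when the app is updated
    COPY . .
    CMD ["python", "main.py"]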


I agree. I would say that using docker-slim should be motivated more by security considerations than by trying to reduce the overall image size. If you want to uphold the highest level of security, you would very regularly (e.g. every couple of days) invalidate the very first (or second) layer, because you would be re-pulling the latest base image, and additionally run something like "apt-get update && apt-get upgrade".

So, in the end, using docker-slim does make image downloads (and container start-up time) _less_ efficient in those specific cases where you are releasing new images very often (e.g. daily, or even multiple times per day), assuming that the base image is released less often (e.g. weekly or monthly, as is e.g. the case for Python).
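
For the "very regularly invalidate the first layers" part, one way to do that is to rebuild without the layer cache and force a fresh pull of the base image (the image tag is just an example):

    # Ignore the local layer cache and re-pull the base image, so the
    # "apt-get update && apt-get upgrade" step actually picks up new packages
    docker build --pull --no-cache -t my-app:$(date +%Y%m%d) .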


One of the future capabilities will be auto-generating base images. It'll require several images to figure out the right base image. The easy version of it will be available in Slim SaaS (it'll have enough data for it). Happy to chat more about the details.


"update system packages" while this is better for security, it breaks immutability/reproducibility of the end image.




