Monday, 12 May 2025

From Fragile to Factory: Building Containers the Right Way

Containers promised us portability, consistency, and scalable deployments. But many development teams are still stuck using them like lightweight VMs — crafting images by hand, running docker exec, and treating the container itself as a mutable environment.

This isn’t just inefficient. It’s dangerous.

If you care about security, reliability, and scalability, it’s time to rethink your container lifecycle. Here's how to do it right — from build to runtime.

1. The Problem with Manual Container Workflows

You’ve probably seen (or written) something like this:

docker run -it base-image bash
# Install some tools, tweak configs
exit
docker commit ...
docker save -o myimage.tar

This “build” process might work once — but it’s:

  • Opaque: Nobody knows what’s inside that image.
  • Irreproducible: You can’t rebuild the exact same image.
  • Insecure: Unscanned, unsigned, unverifiable.
  • Unscalable: a CI/CD pipeline can't reproduce your interactive shell session.

Manual builds destroy the auditability and automation that containers are supposed to give you. Worse, they make it nearly impossible to maintain a clean supply chain.

2. Why Dockerfiles (and Multi-stage Builds) Matter

A Dockerfile is more than a convenience — it’s your source of truth.

Benefits of Dockerfile-driven builds:

  • 100% declarative: Every step is visible and version-controlled
  • Portable: Runs the same in CI, your laptop, or a Kubernetes cluster
  • Automatable: Can be rebuilt and scanned automatically
  • Securable: Works with image signing, SBOM generation, and policy enforcement

Want to go further? Use multi-stage builds to separate build tools from runtime dependencies — reducing image size and attack surface.

# Build stage
FROM golang:1.21 AS builder
WORKDIR /app
COPY . .
RUN go build -o myapp

# Runtime stage
FROM alpine
COPY --from=builder /app/myapp /usr/bin/myapp
CMD ["myapp"]

This pattern results in much smaller, leaner containers — which improves startup time, security posture, and efficiency. As highlighted in Container Size Matters, reducing image bloat isn’t just aesthetic — it has real operational and security benefits.

3. Automate Everything: CI/CD as Your Factory Line

Your container builds should run without human hands involved:

  • ✅ Triggered by source commits or tags
  • ✅ Build from Dockerfile
  • ✅ Scan for vulnerabilities
  • ✅ Generate SBOMs
  • ✅ Sign images (e.g., with Sigstore)
  • ✅ Push to a trusted registry

This builds a secure, traceable software supply chain — not just an image blob.
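
As a rough sketch, the heart of that factory line can be just a handful of CI commands. The tool choices below (Trivy for scanning, Syft for SBOMs, Cosign for signing), the registry name, and the CI variable are illustrative assumptions, not the only way to do it:

# Build the image from the Dockerfile, tagged with the commit (CI variable name will vary)
docker build -t registry.example.com/myapp:${CI_COMMIT_SHA} .

# Fail the pipeline on known critical vulnerabilities
trivy image --exit-code 1 --severity CRITICAL registry.example.com/myapp:${CI_COMMIT_SHA}

# Generate an SBOM and keep it as a build artifact
syft registry.example.com/myapp:${CI_COMMIT_SHA} -o spdx-json > sbom.spdx.json

# Push to the trusted registry, then sign what was pushed (Sigstore/Cosign)
docker push registry.example.com/myapp:${CI_COMMIT_SHA}
cosign sign registry.example.com/myapp:${CI_COMMIT_SHA}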

If you’re using docker save, docker load, or dragging tarballs around, you’re not ready for scale.

4. Exec Into Containers? That’s a Red Flag

“Just SSH in and fix it” doesn’t belong in container-native thinking.

Running docker exec to fix a container is:

  • A band-aid, not a solution
  • A breach of immutability
  • A sign your build process is broken

If you need to exec into a container to troubleshoot, fine — but never use that session as part of your build. Fix the Dockerfile or CI pipeline instead.

5. Stateless by Design: Where Does Your State Live?

Containers should be:

  • 🔁 Restartable
  • 📦 Replaceable
  • 🚀 Scalable

But if you're writing logs, cache, or app state inside /var/lib/myapp, you’re asking for trouble.

Best practice:

  • Store logs in external logging systems (e.g., Fluent Bit → ELK or Loki)
  • Store persistent data in mounted volumes
  • Use Kubernetes PersistentVolumeClaims or cloud-native block storage

This way, your containers can be destroyed and recreated at will, without loss of data or downtime.
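
As a minimal example with plain Docker, a named volume keeps the data outside the container's writable layer (the volume, path and image names here are placeholders):

# Create a named volume and mount it where the app writes its data
docker volume create myapp-data
docker run -d --name myapp -v myapp-data:/var/lib/myapp myapp:1.0

# The container can now be destroyed and recreated without losing that data
docker rm -f myapp
docker run -d --name myapp -v myapp-data:/var/lib/myapp myapp:1.0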

6. Putting It All Together

A modern container pipeline looks like this:

  1. ✅ Git push triggers automated build
  2. 🐳 Dockerfile builds image using CI
  3. 🔍 Image scanned and signed
  4. 📦 Stored in trusted registry
  5. 🚀 Deployed via declarative manifest (K8s, Helm, etc.)
  6. 💾 Volumes handle persistence, not the container

No manual tweaks. No snowflake images. Full lifecycle awareness.

Conclusion: Treat Containers Like Code, Not Pets

Containers are a critical part of your software supply chain. If you’re still crafting them by hand or baking in state, you’re setting yourself up for security risks, outages, and scaling pain.

Move to:

  • 📄 Declarative Dockerfiles
  • 🤖 Automated CI builds
  • 🔐 Secure, traceable image delivery
  • 🗃️ Clean separation of state and compute

This is how you go from “it works on my machine” to “we ship production-ready containers every commit.”

Friday, 16 February 2024

Container Size Matters

I have an issue

I have two containers and I need them to talk. Sounds easy, right?
But these containers only have "listening" ports, so they cannot be set up to talk to one another directly. The normal go-to here is netcat: it works well and lets you pipe data between the two apps.

The problem comes when you want to put it into a container. Netcat needs a full OS to work, so you start with a base container of some description: maybe Alpine, maybe Ubuntu, or my favourite, Rocky Linux. But that gives you a 100MB image at best, or worse, 500MB.

That's just not going to cut it.

The solution is easier than you think: native golang.

But why golang?

Golang has the ability to compile to a single, statically linked native executable.

Take a look at this Dockerfile:

FROM golang:1.20.0 as exporter
ENV GO111MODULE=on
WORKDIR /app
COPY . .
RUN go get ./
RUN CGO_ENABLED=0 GOOS=linux go build -o bin/dump1090-netcat ./

FROM scratch
COPY --from=exporter /app/bin/dump1090-netcat /usr/local/bin/
CMD ["/usr/local/bin/dump1090-netcat"]

First we start off with a full "fat" container to do the build in.
When the executable is built you need to make sure it is built with "CGO_ENABLED=0" so that it does not try to link against any libraries in the container. Without this the final stage won't work.

The last part is "FROM scratch". This tells Docker that there is no base image and no filesystem from here on out.
Then we copy in the binary from the exporter stage and set it as the command.

The resulting image is only as big as the executable but has all the functionality.
In this case I pass several environment variables to the container indicating the source IP and port and the destination IP and port; it connects to the source and sends the data to the destination.

No pipes required.
(code) (container)
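
As a usage sketch, running the container looks something like this. The environment variable names and image tag below are hypothetical; the real ones are in the linked code:

# Hypothetical variable and image names, shown only to illustrate the idea
docker run -d \
  -e SOURCE_HOST=dump1090 -e SOURCE_PORT=30003 \
  -e DEST_HOST=collector -e DEST_PORT=9000 \
  dump1090-netcat:latest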

Sunday, 12 September 2021

Starting a YouTube Channel

After a recent gig where I made some tutorial videos for the company, I have decided to start my own channel and continue supplying tutorials for others to follow.

On the channel I will be covering Kubernetes, reusing old hardware for modern applications, embedded electronics and IoT.

Occasionally I may even just give a talk on a given subject.


So come on over and take a look, give me feedback, subscribe and even ask for a topic to talk on :)

https://www.youtube.com/channel/UCPYI0WfQF_HkWhPROSvVvGw

Saturday, 13 February 2021

Need version control for IoT sensors and embedded Docker containers

For a while now I have been building sensor software that runs on both Raspberry Pi and embedded IoT platforms, but I have a problem with deploying new versions.

For the embedded sensors I just check for a new version on every run and install it, but this is an expensive operation as it takes a few seconds on each run...

On the Pi it's a fully manual task: either copying code or updating a container.

I want better :)


After looking at some of the code for Tasmota I think I found the start of how this will work.

Tasmota uses MQTT to send a topic to the device to tell it to update. The topic carries the version number, and if the device's current version is lower than the one in the topic it will update.

I can make the Pi watch this topic for updates too.


Once I have the MQTT topic format sorted (it will be similar to Tasmota's), I will work on the Pi to make it update a local binary package, then make a Docker controller to do the same for containers. Finally I will make a website that allows for individual control of packages (web, API and DB schema).
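
As a very rough sketch of what the Pi side could look like, assuming a broker address, a topic layout, and the new version carried in the message payload (none of which are final):

# Hypothetical topic and names; the real format is still to be decided
mosquitto_sub -h mqtt.local -t "sensors/mysensor/upgrade" | while read -r new_version; do
  current=$(cat /etc/mysensor/version)
  if [ "$new_version" != "$current" ]; then
    # Pull the matching container tag and restart with it
    docker pull example/mysensor:"$new_version"
    docker rm -f mysensor
    docker run -d --name mysensor example/mysensor:"$new_version"
    echo "$new_version" > /etc/mysensor/version
  fi
done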

Multi-arch Docker Container Builds

Recently I needed to create some containers to run on both x64 and ARM (Raspberry Pi).

I tried for a long time to build each on its respective platform and then do individual pushes to the repository (Docker Hub in this case).


But it all failed... why?


I found out that when you do a push for test/foo:6, it will just overwrite the manifest data on Docker Hub :(

So if I pushed the ARM version last I would no longer have access to the x64 container manifest, and therefore I would be unable to run that container...


The solution is BuildKit and the 'docker buildx' command.

Installation was simple enough: just follow the README at https://github.com/docker/buildx


There is however one little "gotcha"

Whilst following the doco you find the following:

docker run --privileged --rm tonistiigi/binfmt --install all

and, a few steps later, instructions on building. What's not there is anything telling you that you need to run this again after every reboot, as the binfmt registration in your kernel is reset on boot...
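
A quick way to check whether the emulators are currently registered (for example after a reboot) is to list the kernel's binfmt_misc entries:

# Shows one entry per registered architecture, e.g. qemu-aarch64, qemu-arm
ls /proc/sys/fs/binfmt_misc/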


Once this was found, it was just a matter of a simple little systemd unit that runs on boot, and all was good.

To fix it, first create this service file at /usr/lib/systemd/system/buildx.service:


[Unit]
Description=Docker Application Container Engine - buildx
Documentation=https://github.com/docker/buildx
After=docker.service containerd.service
Wants=docker.service
Requires=docker.socket containerd.service

[Service]
ExecStart=/usr/bin/docker run --privileged --rm tonistiigi/binfmt --install all
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
OOMScoreAdjust=-500

[Install]
WantedBy=multi-user.target

So what is this doing? Once activated by systemd, it runs once on boot after Docker has started and then exits. At that point it registers with the kernel every architecture emulator available to QEMU that is already installed on the system.

If you wish to install a new architecture for building, just register it by installing the appropriate qemu-system-[driver] package.
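
With the emulators registered, the actual multi-arch build and push is a single buildx invocation (the builder name below is arbitrary, and test/foo:6 is just the example tag from earlier):

# One-time: create and select a builder instance that uses the registered emulators
docker buildx create --name multiarch --use

# Build for both platforms and push a single multi-arch manifest in one step
docker buildx build --platform linux/amd64,linux/arm64 -t test/foo:6 --push .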