Sunday, 20 July 2025

From 290 CVEs to Zero: Rebuilding the Repository Server the Hard Way

The container image backing my repository server had quietly accumulated over 290 CVEs. Each of those is more than a statistic: it is a potential entry point on the attack surface.

Let’s be clear: just because this service ran inside Kubernetes doesn't mean those vulnerabilities were somehow magically mitigated. Kubernetes may abstract deployment and orchestration, but it does nothing to shrink the surface exposed by the containers themselves. A vulnerable container in Kubernetes is still a vulnerable system.

This image was built on Rocky Linux 9. While updates were technically available, actually applying them was more difficult than it should have been. Patching wasn't just a matter of running dnf update—dependency entanglements and version mismatches made the process fragile.

I attempted a move to Rocky Linux 10, hoping for a cleaner slate. Unfortunately, that path was blocked: the DEB repo tooling I rely on couldn’t be installed at all. The package dependencies for the deb-dev utilities were broken or missing entirely. At that point, the problem wasn’t patching—it was the platform itself.

That left one real option: rebuild the entire server as a pure Go application. No more relying on shell scripts or external tools for managing Debian or RPM repository metadata. Instead, everything needed—GPG signing, metadata generation, directory layout—was implemented natively in Go.

The Result

  • Container size dropped from 260MB to just 7MB
  • Current CVE count: zero
  • Dependencies are explicit and pinned
  • Future updates are under my control, not gated by an OS vendor

In practical terms, the entire attack surface is now reduced to a single statically linked Go binary. No base image, no package manager, no lingering system libraries to monitor or patch.
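
That is also what makes the 7MB image possible: compile without cgo and drop the binary into an empty (scratch) image. A minimal sketch of the idea, assuming a standard Go toolchain (repo-server and the exact build flags are illustrative, not the real project's):

# Build a fully static binary; CGO_ENABLED=0 removes any libc dependency
CGO_ENABLED=0 go build -trimpath -ldflags="-s -w" -o repo-server .
file repo-server   # should report "statically linked"

# The final image then needs no base at all:
#   FROM scratch
#   COPY repo-server /repo-server
#   ENTRYPOINT ["/repo-server"]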

This is one of those changes that doesn’t just feel cleaner—it is objectively safer and more maintainable.

Lesson reinforced: containers don’t remove the need for security hygiene. They just make it easier to ignore it—until it’s too late.

Source on GitLab

Wednesday, 14 May 2025

Pitfalls of the Latest Tag in Deployments and How SBOM Tools Can Help

The Problem with Using the latest Tag

Using the latest tag in your deployments might seem convenient, but it brings a host of problems that can undermine stability and traceability. Here’s why:

  • Lack of Version Control: The latest tag automatically pulls the most recent version of an image. This means you might unknowingly deploy a new version without properly testing it, leading to unexpected failures.
  • Reproducibility Issues: Since the latest tag can change over time, reproducing a bug or incident becomes challenging. You might end up debugging a version that is no longer the same as the one originally deployed.
  • Deployment Drift: Multiple environments (development, staging, production) can end up running different versions even if they all reference latest. This drift breaks the consistency needed for reliable deployments.
  • Lack of Visibility: When things go wrong, it’s hard to know which version is actually running, as latest does not directly indicate a specific build or commit.
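
The fix for all four problems is the same: deploy an immutable reference. A quick sketch (image name, tag, and digest are placeholders):

# A moving target: what "latest" resolves to changes over time
docker pull registry.example.com/myapp:latest

# Better: a specific version tag
docker pull registry.example.com/myapp:1.4.2

# Best: a content-addressed digest, which can never change underneath you
docker pull registry.example.com/myapp@sha256:<digest-from-your-registry>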

How SBOM Tools Like Grype Can Help

Software Bill of Materials (SBOM) tools, such as Grype, are invaluable for overcoming the challenges posed by the latest tag and for managing software throughout its lifecycle. These tools enhance visibility, security, and consistency from build to production.

1. Build Phase: Secure and Compliant Images

  • Automated Vulnerability Scanning: Grype can be integrated into CI/CD pipelines to automatically generate SBOMs and identify vulnerabilities before deployment.
  • Dependency Management: Track dependencies and versions directly from the build process, allowing you to catch outdated or vulnerable libraries early.
  • Compliance Checks: SBOM tools help verify that your builds meet internal and external security policies.
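
In a CI job this can be a couple of commands. A sketch assuming Syft generates the SBOM and Grype scans it (the image name and severity threshold are illustrative):

syft myapp:1.4.2 -o spdx-json > sbom.json   # generate an SBOM at build time
grype sbom:./sbom.json --fail-on high       # break the build on high or critical findings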

2. Deployment Phase: Verifying What You Ship

  • Image Verification: Grype helps confirm that the deployed image matches the one you built by checking hashes and versions.
  • Artifact Integrity: SBOMs can be signed and stored, providing verifiable evidence of what was deployed.
  • Version Locking: Using specific tags linked to SBOMs ensures consistency across environments.
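
With a signing tool such as Cosign, both the image and its SBOM can be signed and verified later. A sketch (assumes keyless signing is configured; verification flags depend on your setup):

cosign sign registry.example.com/myapp:1.4.2       # sign the image
cosign attest --type spdxjson --predicate sbom.json registry.example.com/myapp:1.4.2
cosign verify registry.example.com/myapp:1.4.2     # check the signature before deploying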

3. Production Phase: Ongoing Monitoring and Maintenance

  • Continuous Vulnerability Scans: Regularly scan running containers to detect new vulnerabilities in your deployed software.
  • Lifecycle Management: SBOMs enable you to track when components reach end-of-life or become deprecated.
  • Audit and Compliance: Maintain an accurate record of all software versions and components running in production, helping with regulatory compliance.
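
Because the SBOM is a standalone artifact, you can re-scan what is running in production against fresh vulnerability data without pulling the image again (the file name is a placeholder):

# Run on a schedule: yesterday's SBOM, today's CVE feed
grype sbom:./myapp-1.4.2.sbom.json --fail-on critical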

Best Practices to Avoid the latest Pitfall

  • Use Specific Tags: Tag images with a version number or a commit hash to maintain consistency and traceability.
  • Automated SBOM Generation: Integrate tools like Grype in your CI/CD pipeline to automatically generate and store SBOMs for every build.
  • Regular Scanning: Continuously monitor your deployed containers with SBOM tools to catch vulnerabilities as they arise.
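
Tying the tag to source control makes the whole chain traceable. For example (the registry path is a placeholder):

TAG=$(git rev-parse --short HEAD)   # the tag names the commit that built it
docker build -t registry.example.com/myapp:"$TAG" .
docker push registry.example.com/myapp:"$TAG"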

Conclusion: Gaining Control and Visibility

By avoiding the use of the latest tag and incorporating SBOM tools like Grype, you significantly improve the stability and security of your deployments. These tools not only mitigate the risks associated with version ambiguity but also enhance the entire software lifecycle—from build to production. With SBOMs, you gain control, maintain visibility, and ensure consistent, secure deployments.

Monday, 12 May 2025

From Fragile to Factory: Building Containers the Right Way

Containers promised us portability, consistency, and scalable deployments. But many development teams are still stuck using them like lightweight VMs — crafting images by hand, running docker exec, and treating the container itself as a mutable environment.

This isn’t just inefficient. It’s dangerous.

If you care about security, reliability, and scalability, it’s time to rethink your container lifecycle. Here's how to do it right — from build to runtime.

1. The Problem with Manual Container Workflows

You’ve probably seen (or written) something like this:

docker run -it base-image bash
# Install some tools, tweak configs by hand
exit
docker commit <container-id> myimage   # snapshot the mutated container
docker save -o myimage.tar myimage

This “build” process might work once — but it’s:

  • Opaque: Nobody knows what’s inside that image.
  • Irreproducible: You can’t rebuild the exact same image.
  • Insecure: Unscanned, unsigned, unverifiable.
  • Unscalable: CI/CD pipelines can't reproduce your interactive debugging session.

Manual builds destroy the auditability and automation that containers are supposed to give you. Worse, they make it nearly impossible to maintain a clean supply chain.

2. Why Dockerfiles (and Multi-stage Builds) Matter

A Dockerfile is more than a convenience — it’s your source of truth.

Benefits of Dockerfile-driven builds:

  • 100% declarative: Every step is visible and version-controlled
  • Portable: Runs the same in CI, on your laptop, or in a Kubernetes cluster
  • Automatable: Can be rebuilt and scanned automatically
  • Securable: Works with image signing, SBOM generation, and policy enforcement

Want to go further? Use multi-stage builds to separate build tools from runtime dependencies — reducing image size and attack surface.

# Build stage
FROM golang:1.21 AS builder
WORKDIR /app
COPY . .
# CGO_ENABLED=0 yields a statically linked binary that runs on musl-based Alpine
RUN CGO_ENABLED=0 go build -o myapp

# Runtime stage: pin the base image instead of an implicit latest
FROM alpine:3.19
COPY --from=builder /app/myapp /usr/bin/myapp
CMD ["myapp"]

This pattern results in much smaller, leaner containers — which improves startup time, security posture, and efficiency. As highlighted in Container Size Matters, reducing image bloat isn’t just aesthetic — it has real operational and security benefits.

3. Automate Everything: CI/CD as Your Factory Line

Your container builds should run without human hands involved:

  • ✅ Triggered by source commits or tags
  • ✅ Build from Dockerfile
  • ✅ Scan for vulnerabilities
  • ✅ Generate SBOMs
  • ✅ Sign images (e.g., with Sigstore)
  • ✅ Push to a trusted registry

This builds a secure, traceable software supply chain — not just an image blob.
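
Sketched as shell steps, such a pipeline might look like this (assumes Syft, Grype, and Cosign are available; the registry path is a placeholder and CI_COMMIT_SHORT_SHA is GitLab's predefined commit variable):

IMAGE=registry.example.com/myapp:${CI_COMMIT_SHORT_SHA}
docker build -t "$IMAGE" .               # build from the Dockerfile, no manual steps
syft "$IMAGE" -o spdx-json > sbom.json   # generate the SBOM
grype sbom:./sbom.json --fail-on high    # gate on vulnerabilities
docker push "$IMAGE"                     # publish to the trusted registry
cosign sign "$IMAGE"                     # sign what was pushed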

If you’re using docker save, docker load, or dragging tarballs around, you’re not ready for scale.

4. Exec Into Containers? That’s a Red Flag

“Just SSH in and fix it” doesn’t belong in container-native thinking.

Running docker exec to fix a container is:

  • A band-aid, not a solution
  • A breach of immutability
  • A sign your build process is broken

If you need to exec into a container to troubleshoot, fine — but never use that session as part of your build. Fix the Dockerfile or CI pipeline instead.
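
On Kubernetes, ephemeral debug containers give you exactly that kind of throwaway session without mutating anything. A sketch (pod and container names are placeholders):

# Attach a temporary toolbox container to a running pod; nothing persists
kubectl debug -it mypod --image=busybox:1.36 --target=app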

5. Stateless by Design: Where Does Your State Live?

Containers should be:

  • 🔁 Restartable
  • 📦 Replaceable
  • 🚀 Scalable

But if you're writing logs, cache, or app state inside /var/lib/myapp, you’re asking for trouble.

Best practice:

  • Store logs in external logging systems (e.g., Fluent Bit → ELK or Loki)
  • Store persistent data in mounted volumes
  • Use Kubernetes PersistentVolumeClaims or cloud-native block storage

This way, your containers can be destroyed and recreated at will, without loss of data or downtime.
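
With plain Docker, that separation is two commands. A sketch (volume name, mount path, and image are placeholders):

docker volume create myapp-data
# Anything written under /var/lib/myapp now outlives the container
docker run -d --name myapp -v myapp-data:/var/lib/myapp registry.example.com/myapp:1.4.2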

6. Putting It All Together

A modern container pipeline looks like this:

  1. ✅ Git push triggers automated build
  2. 🐳 Dockerfile builds image using CI
  3. 🔍 Image scanned and signed
  4. 📦 Stored in trusted registry
  5. 🚀 Deployed via declarative manifest (K8s, Helm, etc.)
  6. 💾 Volumes handle persistence, not the container

No manual tweaks. No snowflake images. Full lifecycle awareness.

Conclusion: Treat Containers Like Code, Not Pets

Containers are a critical part of your software supply chain. If you’re still crafting them by hand or baking in state, you’re setting yourself up for security risks, outages, and scaling pain.

Move to:

  • 📄 Declarative Dockerfiles
  • 🤖 Automated CI builds
  • 🔐 Secure, traceable image delivery
  • 🗃️ Clean separation of state and compute

This is how you go from “it works on my machine” to “we ship production-ready containers every commit.”