Wednesday, 14 May 2025

Pitfalls of the Latest Tag in Deployments and How SBOM Tools Can Help

The Problem with Using the latest Tag

Using the latest tag in your deployments might seem convenient, but it brings a host of problems that can undermine stability and traceability. Here’s why:

  • Lack of Version Control: The latest tag automatically pulls the most recent version of an image. This means you might unknowingly deploy a new version without properly testing it, leading to unexpected failures.
  • Reproducibility Issues: Since the latest tag can change over time, reproducing a bug or incident becomes challenging. You might end up debugging a version that is no longer the same as the one originally deployed.
  • Deployment Drift: Multiple environments (development, staging, production) can end up running different versions even if they all reference latest. This drift breaks the consistency needed for reliable deployments.
  • Lack of Visibility: When things go wrong, it’s hard to know which version is actually running, as latest does not directly indicate a specific build or commit.
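
To see this in practice, you can ask Docker which immutable digest latest currently resolves to on a given host (nginx is just an example image):

# Pull whatever "latest" points to right now, then reveal the pinned digest
docker pull nginx:latest
docker inspect --format '{{index .RepoDigests 0}}' nginx:latest
# Hosts that pulled at different times can report different digests for "latest"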

How SBOM Tools Like Grype Can Help

Software Bill of Materials (SBOM) tools, such as Grype, are invaluable for overcoming the challenges posed by the latest tag and for managing software throughout its lifecycle. These tools enhance visibility, security, and consistency from build to production.

1. Build Phase: Secure and Compliant Images

  • Automated Vulnerability Scanning: Grype can be integrated into CI/CD pipelines to identify vulnerabilities before deployment; its companion tool Syft generates the SBOMs that Grype scans.
  • Dependency Management: Track dependencies and versions directly from the build process, allowing you to catch outdated or vulnerable libraries early.
  • Compliance Checks: SBOM tools help verify that your builds meet internal and external security policies.
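
As a concrete sketch (the image name and severity threshold are illustrative), a CI job could generate an SBOM with Syft and gate the build with Grype:

# Generate an SBOM for the freshly built image
syft registry.example.com/myapp:1.4.2 -o spdx-json > sbom.spdx.json
# Scan the SBOM and fail the pipeline on high-severity findings
grype sbom:./sbom.spdx.json --fail-on high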

2. Deployment Phase: Verifying What You Ship

  • Image Verification: Grype helps confirm that the deployed image matches the one you built, by checking hashes and versions (see the sketch after this list).
  • Artifact Integrity: SBOMs can be signed and stored, providing verifiable evidence of what was deployed.
  • Version Locking: Using specific tags linked to SBOMs ensures consistency across environments.
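
A minimal sketch of digest pinning, assuming you capture the digest when the image is pushed (names and the <digest> placeholder are illustrative):

# At build time: record the immutable digest of the pushed image
docker inspect --format '{{index .RepoDigests 0}}' registry.example.com/myapp:1.4.2
# At deploy time: reference the digest, never a mutable tag
docker pull registry.example.com/myapp@sha256:<digest>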

3. Production Phase: Ongoing Monitoring and Maintenance

  • Continuous Vulnerability Scans: Regularly scan running containers to detect new vulnerabilities in your deployed software.
  • Lifecycle Management: SBOMs enable you to track when components reach end-of-life or become deprecated.
  • Audit and Compliance: Maintain an accurate record of all software versions and components running in production, helping with regulatory compliance.
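
Because new CVEs are published daily, re-scanning a previously stored SBOM catches fresh vulnerabilities in software that hasn't changed. A sketch, assuming the build-time SBOM was archived:

# Refresh the vulnerability database, then re-scan the stored SBOM
grype db update
grype sbom:./sbom.spdx.json -o table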

Best Practices to Avoid the latest Pitfall

  • Use Specific Tags: Tag images with a version number or a commit hash to maintain consistency and traceability.
  • Automated SBOM Generation: Integrate tools like Grype in your CI/CD pipeline to automatically generate and store SBOMs for every build.
  • Regular Scanning: Continuously monitor your deployed containers with SBOM tools to catch vulnerabilities as they arise.
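
Putting the first two practices together (registry and application names are illustrative):

# Tag with the application version plus the short commit hash
TAG="1.4.2-$(git rev-parse --short HEAD)"
docker build -t registry.example.com/myapp:$TAG .
syft registry.example.com/myapp:$TAG -o spdx-json > sbom-$TAG.spdx.json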

Conclusion: Gaining Control and Visibility

By avoiding the use of the latest tag and incorporating SBOM tools like Grype, you significantly improve the stability and security of your deployments. These tools not only mitigate the risks associated with version ambiguity but also enhance the entire software lifecycle—from build to production. With SBOMs, you gain control, maintain visibility, and ensure consistent, secure deployments.

Monday, 12 May 2025

From Fragile to Factory: Building Containers the Right Way

Containers promised us portability, consistency, and scalable deployments. But many development teams are still stuck using them like lightweight VMs — crafting images by hand, running docker exec, and treating the container itself as a mutable environment.

This isn’t just inefficient. It’s dangerous.

If you care about security, reliability, and scalability, it’s time to rethink your container lifecycle. Here's how to do it right — from build to runtime.

1. The Problem with Manual Container Workflows

You’ve probably seen (or written) something like this:

docker run -it base-image bash
# Install some tools, tweak configs
exit
docker commit ...
docker save -o myimage.tar myimage

This “build” process might work once — but it’s:

  • Opaque: Nobody knows what’s inside that image.
  • Irreproducible: You can’t rebuild the exact same image.
  • Insecure: Unscanned, unsigned, unverifiable.
  • Unscalable: CI/CD pipelines can't reproduce your interactive debugging session in a shell.

Manual builds destroy the auditability and automation that containers are supposed to give you. Worse, they make it nearly impossible to maintain a clean supply chain.

2. Why Dockerfiles (and Multi-stage Builds) Matter

A Dockerfile is more than a convenience — it’s your source of truth.

Benefits of Dockerfile-driven builds:

  • 100% declarative: Every step is visible and version-controlled
  • Portable: Runs the same in CI, your laptop, or a Kubernetes cluster
  • Automatable: Can be rebuilt and scanned automatically
  • Securable: Works with image signing, SBOM generation, and policy enforcement

Want to go further? Use multi-stage builds to separate build tools from runtime dependencies — reducing image size and attack surface.

# Build stage
FROM golang:1.21 AS builder
WORKDIR /app
COPY . .
# Disable cgo so the binary is statically linked and runs on musl-based Alpine
RUN CGO_ENABLED=0 go build -o myapp

# Runtime stage
FROM alpine:3.19
COPY --from=builder /app/myapp /usr/bin/myapp
CMD ["myapp"]

This pattern results in much smaller, leaner containers — which improves startup time, security posture, and efficiency. As highlighted in Container Size Matters, reducing image bloat isn’t just aesthetic — it has real operational and security benefits.

3. Automate Everything: CI/CD as Your Factory Line

Your container builds should run without human hands involved:

  • ✅ Triggered by source commits or tags
  • ✅ Build from Dockerfile
  • ✅ Scan for vulnerabilities
  • ✅ Generate SBOMs
  • ✅ Sign images (e.g., with Sigstore)
  • ✅ Push to a trusted registry

This builds a secure, traceable software supply chain — not just an image blob.
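
Expressed as the shell steps a CI job might run (names are illustrative; CI_COMMIT_SHORT_SHA is GitLab's short commit variable, and cosign is one Sigstore signing client):

IMAGE="registry.example.com/myapp:${CI_COMMIT_SHORT_SHA}"
docker build -t "$IMAGE" .                     # build from the Dockerfile
grype "$IMAGE" --fail-on high                  # gate on vulnerabilities
syft "$IMAGE" -o spdx-json > sbom.spdx.json    # generate the SBOM
docker push "$IMAGE"                           # push to the trusted registry
cosign sign --key cosign.key "$IMAGE"          # sign the image in the registry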

If you’re using docker save, docker load, or dragging tarballs around, you’re not ready for scale.

4. Exec Into Containers? That’s a Red Flag

“Just SSH in and fix it” doesn’t belong in container-native thinking.

Running docker exec to fix a container is:

  • A band-aid, not a solution
  • A breach of immutability
  • A sign your build process is broken

If you need to exec into a container to troubleshoot, fine — but never use that session as part of your build. Fix the Dockerfile or CI pipeline instead.
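
If you run on Kubernetes, ephemeral debug containers let you troubleshoot without mutating the workload (pod and container names are illustrative):

# Attach a throwaway debug container to a running pod; the app image stays untouched
kubectl debug -it mypod --image=busybox --target=myapp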

5. Stateless by Design: Where Does Your State Live?

Containers should be:

  • 🔁 Restartable
  • 📦 Replaceable
  • 🚀 Scalable

But if you're writing logs, cache, or app state inside /var/lib/myapp, you’re asking for trouble.

Best practice:

  • Store logs in external logging systems (e.g., Fluent Bit → ELK or Loki)
  • Store persistent data in mounted volumes
  • Use Kubernetes PersistentVolumeClaims or cloud-native block storage

This way, your containers can be destroyed and recreated at will, without loss of data or downtime.
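
A minimal sketch with a named Docker volume (names are illustrative); the same idea maps to a PersistentVolumeClaim in Kubernetes:

# Persistent data lives in the volume, not in the container filesystem
docker volume create myapp-data
docker run -d --name myapp -v myapp-data:/var/lib/myapp registry.example.com/myapp:1.4.2
# The container can now be destroyed and recreated without losing state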

6. Putting It All Together

A modern container pipeline looks like this:

  1. ✅ Git push triggers automated build
  2. 🐳 Dockerfile builds image using CI
  3. 🔍 Image scanned and signed
  4. 📦 Stored in trusted registry
  5. 🚀 Deployed via declarative manifest (K8s, Helm, etc.)
  6. 💾 Volumes handle persistence, not the container

No manual tweaks. No snowflake images. Full lifecycle awareness.

Conclusion: Treat Containers Like Code, Not Pets

Containers are a critical part of your software supply chain. If you’re still crafting them by hand or baking in state, you’re setting yourself up for security risks, outages, and scaling pain.

Move to:

  • 📄 Declarative Dockerfiles
  • 🤖 Automated CI builds
  • 🔐 Secure, traceable image delivery
  • 🗃️ Clean separation of state and compute

This is how you go from “it works on my machine” to “we ship production-ready containers every commit.”

Tuesday, 6 May 2025

One Container to Host Them All

What if you could run a single container to host your .deb, .rpm, Docker images, and Helm charts—without the overhead of setting up a full artifact manager like Nexus or Artifactory?

This idea started with a simple goal: reduce complexity in self-hosted infrastructure. I needed a unified repo that would be:

  • Lightweight: Run anywhere with Docker
  • Self-contained: No external dependencies
  • Multi-purpose: Support the most common packaging formats in one place

So I built exactly that.

A Unified Artifact Repository

The container I created serves as a multi-protocol artifact repository. It handles:

  • APT (Debian) repositories for .deb packages
  • YUM/DNF (Red Hat) repositories for .rpm packages
  • Helm chart repositories for Kubernetes deployments
  • Docker Registry v2 compatible endpoints for image hosting

All in a single Dockerfile.

No glue scripts. No multiple services. Just one clean container that can be dropped into any GitOps or CI/CD pipeline.
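
As a client-side sketch of what consuming such a server could look like (host, port, and URL paths here are assumptions, not the project's documented endpoints):

# Helm: add the chart repository
helm repo add myrepo http://repo.example.com:8080/helm
# APT: point a sources list at the Debian repository
echo "deb http://repo.example.com:8080/apt stable main" | sudo tee /etc/apt/sources.list.d/myrepo.list
# Docker: pull images through the registry endpoint
docker pull repo.example.com:8080/myteam/myimage:1.0.0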

GitLab-Compatible, But Not GitLab-Bound

While I tested and showcased this container using GitLab CI/CD (as seen in my recent posts), it's not tied to GitLab. It works anywhere Docker containers run—whether that’s a home lab, a cloud VM, or inside Kubernetes.

Why This Matters

This project was born out of real pain: managing too many different tools for different artifact types. If you've ever had to set up S3-backed Helm repos, host private Docker registries, or deal with GPG-signed .deb packages—you know the sprawl.

By consolidating these into a single endpoint, you:

  • Reduce maintenance surface area
  • Improve developer onboarding
  • Simplify CI/CD configurations
  • Own your own supply chain

How to Use It

For full setup details, configuration, and source code, visit the GitLab repo:
👉 gitlab.com/jlcox70/repository-server