Git

Random comments about Git

Terraform scales better without a centralized remote state

It’s 4:53 PM on a Friday. You’re pushing a one-line change to an IAM policy. A change so trivial, so utterly benign, that you barely give it a second thought. You run terraform apply, lean back in your chair, and dream of the weekend. Then, your terminal returns a greeting from the abyss: Error acquiring state lock.

Somewhere across the office, or perhaps across the country, a teammate has just started a plan on their own, seemingly innocuous change. You are now locked in a digital standoff. The weekend is officially on hold. Your shared Terraform state file, once a symbol of collaboration and a single source of truth, has become a temperamental roommate who insists on using the kitchen right when you need to make dinner. And they’re a very, very slow cook.

Our Terraform honeymoon phase

It wasn’t always like this. Most of us start our Terraform journey in a state of blissful simplicity. Remember those early days? A single, elegant main.tf file, a tidy remote backend in an S3 bucket, and a DynamoDB table to handle the locking. It was the infrastructure equivalent of a brand-new, minimalist apartment. Everything had its place. Deployments were clean, predictable, and frankly, a little bit boring.

Our setup looked something like this, a testament to a simpler time:

# in main.tf
terraform {
  backend "s3" {
    bucket         = "our-glorious-infra-state-prod"
    key            = "global/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock-prod"
    encrypt        = true
  }
}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
  # ... and so on
}

It worked beautifully. Until it didn’t. The problem with minimalist apartments is that they don’t stay that way. You add a person, then another. You buy more furniture. Soon, you’re tripping over things, and that one clean kitchen becomes a chaotic battlefield of conflicting needs.

The kitchen gets crowded

As our team and infrastructure grew, our once-pristine state file started to resemble a chaotic shared kitchen during rush hour. The initial design, meant for a single chef, was now buckling under the pressure of a full restaurant staff.

The state lock standoff

The first and most obvious symptom was the state lock. It’s less of a technical “race condition” and more of a passive-aggressive duel between two colleagues who both need the only good frying pan at the exact same time. The result? Burnt food, frayed nerves, and a CI/CD pipeline that spends most of its time waiting in line.

The mystery of the shared spice rack

With everyone working out of the same state file, we lost any sense of ownership. It became a communal spice rack where anyone could move, borrow, or spill things. You’d reach for the salt (a production security group) only to find someone had replaced it with sugar (a temporary rule for a dev environment). Every Terraform apply felt like a gamble. You weren’t just deploying your change; you were implicitly signing off on the current, often mysterious, state of the entire kitchen.

The pre-apply prayer

This led to a pervasive culture of fear. Before running an apply, engineers would perform a ritualistic dance of checks, double-checks, and frantic Slack messages: “Hey, is anyone else touching prod right now?” The Terraform plan output would scroll for pages, a cryptic epic poem of changes, 95% of which had nothing to do with you. You’d squint at the screen, whispering a little prayer to the DevOps gods that you wouldn’t accidentally tear down the customer database because of a subtle dependency you missed.

The domino effect of a single spilled drink

Worst of all was the tight coupling. Our infrastructure became a house of cards. A team modifying a network ACL for their new microservice could unintentionally sever connectivity for a legacy monolith nobody had touched in years. It was the architectural equivalent of trying to change a lightbulb and accidentally causing the entire building’s plumbing to back up.

An uncomfortable truth appears

For a while, we blamed Terraform. We complained about its limitations, its verbosity, and its sharp edges. But eventually, we had to face an uncomfortable truth: the tool wasn’t the problem. We were. Our devotion to the cult of the single centralized state—the idea that one file to rule them all was the pinnacle of infrastructure management—had turned our single source of truth into a single point of failure.

The great state breakup

The solution was as terrifying as it was liberating: we had to break up with our monolithic state. It was time to move out of the chaotic shared house and give every team their own well-equipped studio apartment.

Giving everyone their own kitchenette

First, we dismantled the monolith. We broke our single Terraform configuration into dozens of smaller, isolated stacks. Each stack managed a specific component or application, like a VPC, a Kubernetes cluster, or a single microservice’s infrastructure. Each had its own state file.

Our directory structure transformed from a single folder into a federation of independent projects:

infra/
├── networking/
│   ├── vpc.tf
│   └── backend.tf      # Manages its own state for the VPC
├── databases/
│   ├── rds-main.tf
│   └── backend.tf      # Manages its own state for the primary RDS
└── services/
    ├── billing-api/
    │   ├── ecs-service.tf
    │   └── backend.tf  # Manages state for just the billing API
    └── auth-service/
        ├── iam-roles.tf
        └── backend.tf  # Manages state for just the auth service
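
Each stack’s backend.tf then points at its own state key in the shared bucket. A minimal sketch for one of them, reusing the bucket and lock table from earlier (the exact key path is illustrative):

# in services/billing-api/backend.tf (key path is illustrative)
terraform {
  backend "s3" {
    bucket         = "our-glorious-infra-state-prod"
    key            = "services/billing-api/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock-prod"
    encrypt        = true
  }
}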

The state lock standoffs vanished overnight. Teams could work in parallel without tripping over each other. The blast radius of any change was now beautifully, reassuringly small.

Letting infrastructure live with its application

Next, we embraced GitOps patterns. Instead of a central infrastructure repository, we decided that infrastructure code should live with the application it supports. It just makes sense. The code for an API and the infrastructure it runs on are a tightly coupled couple; they should live in the same house. This meant code reviews for application features and infrastructure changes happened in the same pull request, by the same team.

Tasting the soup before serving it

Finally, we made surprises a thing of the past by validating plans before they ever reached the main branch. We set up simple CI workflows that would run a Terraform plan on every pull request. No more mystery meat deployments. The plan became a clear, concise contract of what was about to happen, reviewed and approved before merge.

A snippet from our GitHub Actions workflow looked like this:

name: 'Terraform Plan Validation'
on:
  pull_request:
    paths:
      - 'infra/**'
      - '.github/workflows/terraform-plan.yml'

jobs:
  plan:
    name: 'Terraform Plan'
    runs-on: ubuntu-latest
    steps:
    - name: Checkout
      uses: actions/checkout@v4

    - name: Setup Terraform
      uses: hashicorp/setup-terraform@v3
      with:
        terraform_version: 1.5.0

    - name: Terraform Init
      run: terraform init -backend=false

    - name: Terraform Plan
      run: terraform plan -no-color

Stories from the other side

This wasn’t just a theoretical exercise. A fintech firm we know split its monolithic repo into 47 micro-stacks. Their deployment speed shot up by 70%, not because they wrote code faster, but because they spent less time waiting and untangling conflicts. Another startup moved from a central Terraform setup to the AWS CDK (TypeScript), embedding infra in their app repos. They cut their time-to-deploy in half, freeing their SRE team from being gatekeepers and allowing them to become enablers.

Guardrails, not gates

Terraform is still a phenomenally powerful tool. But the way we use it has to evolve. A centralized remote state, when not designed for scale, becomes a source of fragility, not strength. Just because you can put all your eggs in one basket doesn’t mean you should, especially when everyone on the team needs to carry that basket around.

The most scalable thing you can do is let teams build independently. Give them ownership, clear boundaries, and the tools to validate their work. Build guardrails to keep them safe, not gates to slow them down. Your Friday evenings will thank you for it.

Git branching strategies: Merge or Rebase?

Picture this: You’re building a magnificent LEGO castle, not alone but with a team. Each of you is crafting different sections, a tower, a wall, maybe a dungeon for the mischievous minifigures. The grand question arises: How do you unite these masterpieces into one glorious fortress?

This is where Git, our trusty version control system, steps in, offering two distinct approaches: Merge and Rebase. Both achieve the same goal, bringing your team’s work together, but they do so with different philosophies and, consequently, different outcomes in your project’s history. So, which path should you choose? Let’s unravel this mystery together!

Merging: The Storyteller

Imagine git merge as a meticulous historian, carefully documenting every step of your castle-building journey. When you merge two branches, Git creates a special “merge commit,” a snapshot that says, “Here’s where we brought these two storylines together.” It’s like adding a chapter to a book that acknowledges the contributions of multiple authors.

# You are on the 'feature' branch 
git checkout main 
git merge feature 

# Result: A new merge commit is created on 'main'

What’s the beauty of this approach?

  • Preserves History: You get a complete, chronological record of every commit, every twist and turn in your development process. It’s like having a detailed blueprint of how your LEGO castle was built, brick by brick.
  • Transparency: Everyone on the team can easily see how the project evolved, who made what changes, and when. This is crucial for collaboration and debugging.
  • Safety Net: If something goes wrong, you can easily trace back the changes and revert to an earlier state. It’s like having a time machine to undo any construction mishaps.

But, there’s a catch (isn’t there always?):

  • Messy History: Your project’s history can become quite complex, especially with frequent merges. Imagine a book with too many footnotes; it can be a bit overwhelming to follow.
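
If you’re curious what that looks like in your own repository, the commit graph makes the extra merge commits easy to spot. A quick, read-only check:

# visualize the branch-and-merge structure of the history
git log --oneline --graph --all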

Rebasing: The Time Traveler

Now, git rebase takes a different approach. Think of it as a time traveler who neatly rewrites history. Instead of creating a merge commit, rebase takes your branch’s commits and replays them on top of the target branch, making it appear as if you’d been working directly on that branch all along.

# You are on the 'feature' branch 
git checkout feature 
git rebase main 

# Result: The 'feature' branch's commits are now on top of 'main'

Why would you want to rewrite history?

  • Clean History: You end up with a linear, streamlined project history, like a well-organized story with a clear narrative flow. It’s easier to read and understand the overall progression of the project.
  • Simplified View: It can be easier to visualize the project’s development as a single, continuous line, especially in projects with many contributors.

However, there’s a word of caution:

  • History Alteration: Rebasing rewrites the commit history. This can be problematic if you’re working on a shared branch, as it can lead to confusion and conflicts for other team members. Imagine someone changing the blueprints while you’re still building… chaos.
  • Potential for Errors: If not done carefully, rebasing can introduce subtle bugs that are hard to track down.

So, Merge or Rebase? The Golden Rule

Here’s the gist, the key takeaway, the rule of thumb you should tattoo on your programmer’s brain (metaphorically, of course):

  • Use merge for shared or public branches (like main or master). It preserves the true history and keeps everyone on the same page.
  • Use rebase for your local feature branches before merging them into a shared branch. This keeps your feature branch’s history clean and easy to understand, making the final merge smoother.

Think of it this way: you do your messy experiments and drafts in your private notebook (local branch with rebase), and then you neatly transcribe your final work into the official logbook (shared branch with merge).
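
In day-to-day terms, and sticking with the branch names from the earlier examples, that flow looks roughly like this:

# on your local 'feature' branch: replay your commits on top of the latest main
git fetch origin
git rebase origin/main

# the branch history was rewritten, so update its remote copy carefully
git push --force-with-lease origin feature

# finally, merge the tidy branch into the shared branch (often via a pull request)
git checkout main
git pull origin main
git merge feature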

Analogy Time!

Let’s say you and your friend are writing a song.

  • Merge: You each write verses separately. Then, you combine them, creating a new verse that says, “Here’s where Verse 1 and Verse 2 meet.” It’s clear that it was a collaborative effort, and you can still see the individual verses.
  • Rebase: You write your verse. Then, you take your friend’s verse and rewrite yours as if you had written it after theirs. The song flows seamlessly, but it’s not immediately obvious that two people wrote it.

The Bottom Line

Both merge and rebase are powerful tools. The best choice depends on your specific workflow and your team’s preferences. The most important thing is to understand how each method works and to use them consistently. But always remember the golden rule: merge for shared, rebase for local.

GitOps: The Conductor of Cloud Adoption

Let’s embark on a brief journey through the different “buckets” of technology that define our era.

The “Traditional” bucket harks back to days when deploying applications was a lengthy affair, often taking weeks or months. This was the era of WAR, ZIP, and EAR files, where changes were cumbersome and cautious.

Then comes the “New Wave,” synonymous with cloud-native approaches. Here, containers have revolutionized the scene, turning those weeks into mere minutes or seconds. It’s a realm where agility meets efficiency, unlocking rapid deployment and scaling.

Lastly, we reach “Serverless,” where the cloud truly flexes its muscles. In this space, containers are still key, but the real star is the suite of microservices. These tiny, focused units of functionality allow for an unprecedented focus on the application logic without the weight of infrastructure management.

Understanding these buckets is like mapping the terrain before a journey—it sets the stage for a deeper exploration into how modern software development and deployment are evolving.

GitOps: Streamlining Cloud Transition

As we chart a course through the shifting tides of technology, GitOps emerges as a guiding force. Imagine GitOps as a masterful conductor, orchestrating the principles of Git—such as version control, collaboration, compliance, and CI/CD (Continuous Integration and Continuous Delivery)—to create a symphony of infrastructure automation. This method harmonizes development and operational tasks, using familiar tools to manage and deploy in the cloud-native and serverless domains.

Cloud adoption, often seen as a complex migration, is simplified through GitOps. It presents a transparent, traceable, and efficient route, ensuring that the shift to cloud-native and serverless technologies is not just a leap, but a smooth transition. With GitOps, every iteration is a step forward, reliability becomes a standard, and security is enhanced. These are the cornerstones of a solid cloud adoption strategy, paving the way for a future where changes are swift, and innovation is constant.

Tech’s Transformative Trio: From Legacy to Vanguard

Whilst we chart our course through the shifting seas of technology, let’s adopt the idea that change is the only constant. Envision the technology landscape as a vast mosaic, continually shifting under the pressures of innovation and necessity. Within this expanse, three distinct “buckets” stand out, marking the epochs of our digital saga.

First, there’s the “Traditional” bucket—think of it as the grandparent of technology. Here, deploying software was akin to moving mountains, a process measured in weeks or months, where WAR, ZIP, and EAR files were the currency of the realm.

Enter the “New Wave,” the hip cloud-native generation where containers are the cool kids on the block, turning those grueling weeks into minutes or even seconds. This bucket is where flexibility meets speed, a playground for the agile and the brave.

Finally, we arrive at “Serverless,” the avant-garde, where the infrastructure becomes a magician’s vanishing act, leaving nothing but the pure essence of code—microservices that dance to the tune of demand, untethered by the physical confines of hardware.

This transformation from traditional to modern practices isn’t just a change in technology; it’s a revolution in mindset, a testament to the industry’s relentless pursuit of innovation. Welcome to the evolution of technology practices—a journey from the solid ground of the old to the cloud-kissed peaks of the new.

GitOps: Synchronizing the Pulse of Development and Operations

In the heart of our modern tech odyssey lies GitOps, a philosophy that blends the rigors of software development with the dynamism of operations. It’s a term that sparkles with the promise of enhanced deployment frequency and the rock-solid stability of a seasoned sea captain.

Think of GitOps as the matchmaker of Dev and Ops, uniting them under the banner of Git’s version control mastery. By doing so, it forges a union so seamless that the once-staggered deployments now step to a brisk, rhythmic cadence. This is the dance floor of the New Wave and Serverless scenes, where each deployment is a step, each rollback a twirl, all choreographed with precision and grace.

In this convergence, the benefits are as clear as a starlit sky. With GitOps, the deployments aren’t just frequent; they’re also more predictable, and the stability is something you can set your watch to. It’s a world where “Oops” turns into “Ops,” and errors become lessons learned, not catastrophes endured. Welcome to the era where development and operations don’t just meet—they waltz together.

Catching the Cloud: Why the Sky’s the Limit in Tech

Imagine a world where your tech needs can scale as effortlessly as turning the volume knob on your favorite song, where the resources you tap into for your business can expand and contract like an accordion playing a tune. This is the world of cloud technology.

The cloud offers agility; it’s like having an Olympic gymnast at your beck and call, ready to flip and twist at the slightest nudge of demand. Then there’s scalability, akin to a balloon that inflates as much as you need, only without the fear of popping. And let’s not forget cost-efficiency; it’s like shopping at a buffet where you only pay for the spoonfuls you eat, not the entire spread.

Adopting cloud technologies is not just a smart move; it’s an imperative stride into the future. It’s about making sure your tech can keep pace with your ambition, and that, my friends, is why the cloud is not just an option; it’s a necessity in our fast-moving digital world.

Constructing Clouds with GitOps: A Blueprint for Modern Infrastructure

In the digital construction zone of today’s tech, GitOps is the scaffold that supports the towering ambitions of cloud adoption. It’s a practice that takes the guesswork out of building and managing cloud-based services, a bit like using GPS to navigate through the labyrinth of modern infrastructure.

By using Git as a single source of truth for infrastructure as code (IaC), GitOps grants teams the power to manage complex cloud environments with the same ease as ordering a coffee through an app. Version control becomes the wand that orchestrates entire ecosystems, allowing for replication, troubleshooting, and scaling with a few clicks or commands.

Imagine deploying a network of virtual machines as simply as duplicating a file, or rolling back a faulty environment update with the same ease as undoing a typo in a document. GitOps not only builds the bridge to the cloud but turns it into a conveyor belt of continuous improvement and seamless transition. It’s about making cloud adoption not just achievable, but natural, almost instinctive. Welcome to the construction site of tomorrow’s cloud landscapes, where GitOps lays down the bricks with precision and flair.

Safeguarding the Cloudscape: Mastering Risk Management in a Cloud-Native Realm

Embarking on a cloud-native journey brings its own set of weather patterns, with risks and rewards as variable as the climate. In this vibrant ecosystem, risk management becomes a craft of its own, one that requires finesse and a keen eye for the ever-changing horizon.

GitOps emerges as a lighthouse in this environment, guiding ships safely to port. By integrating version control for infrastructure as code, GitOps ensures that each deployment is not just a launch into the unknown but a calculated step with a clear recovery path.

Consider this: in a cloud-native world, risks are like storms; they’re inevitable. GitOps, however, provides the barometer to anticipate them and the tools to weather them. It’s about creating consistent and recoverable states that turn potential disasters into mere moments of adjustment, ensuring that your cloud-native journey is both adventurous and secure.

Let’s set sail with a tangible example. Imagine a financial services company managing their customer data across several cloud services. They decide to update their data encryption across all services to bolster security. In a pre-GitOps world, this could be a treacherous voyage with manual updates, risking human error, and potential data breaches.

Enter GitOps. The company uses a Git repository to manage their infrastructure code, automating deployments through a CI/CD pipeline. The update is coded once, reviewed, and merged into the main branch. The CI/CD pipeline picks up the change, deploying it across all services systematically. When a flaw in the encryption method is detected, rather than panic, they simply roll back to the previous version of the code in Git, instantly reverting all services to the last secure state.
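
In Git terms, that rollback can be as small as reverting the offending commit; a minimal sketch, where the commit reference is a placeholder:

# revert the faulty encryption change on the main branch
git revert <commit-of-the-faulty-change>
git push origin main

# the CI/CD pipeline picks up the new commit and reconciles
# every service back to the previous, known-good configuration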

This isn’t just theory; it’s a practice that keeps the company’s digital fleet agile and secure, navigating the cloud seas with the assurance of GitOps as their compass.

Sailing Ahead: Mastering the Winds of Technological Change

As we draw the curtains on our exploration, let’s anchor our thoughts on embracing GitOps for a future-proof voyage into the realms of cloud-native and serverless technologies. Adopting GitOps is not just about upgrading tools; it’s about cultivating an organizational culture that learns, adapts, and trusts in the power of automation.

It’s akin to teaching an entire crew to sail in unison, navigating through the unknown with confidence and precision. By fostering this mindset, we prepare not just for the technology of today but for the innovations of tomorrow, making each organization a flagship of progress and resilience in the digital sea. Let’s set our sails high and embrace these winds of change with the assurance that GitOps provides, charting a course towards a horizon brimming with possibilities.

Uncommon Case: How to Wipe All Commits from a Repo and Start Fresh

There are times when you might find yourself needing to start over in a Git repository. Whether it’s because you’re working on a project that has gone in a completely different direction, or you’ve inherited a repo filled with a messy commit history, starting fresh can sometimes be the best option. In this article, we’ll walk through the steps to wipe your Git repository clean and start with a new “Initial Commit.”

Precautions

Before we dive in, it’s crucial to understand that this process will erase your commit history. Make sure to back up your repository or ensure that you won’t need the old commits in the future.
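
One straightforward way to keep that backup is a mirror clone into a separate directory; the repository URL and paths below are placeholders:

# keep a complete backup copy, including every branch, tag, and ref
git clone --mirror git@example.com:your-org/your-repo.git your-repo-backup.git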

Step 1: Create a New Orphan Branch

First, let’s create a new branch that will serve as our new starting point. We’ll use the --orphan switch to do this.

git checkout --orphan fresh-start

The --orphan switch creates a new branch, devoid of commit history, which allows us to start anew. When you switch to this new branch, you’ll notice that it doesn’t carry over the old commits, giving you a clean slate.

Step 2: Stage Your Files

Now, stage all the files you want to keep in your new branch. This step is similar to what you’d do when setting up a new project.

git add --all

Step 3: Make the Initial Commit

Commit the staged files to establish the new history.

git commit -m "Initial Commit"

Step 4: Delete the Old Main Branch

Now that we have our new starting point, it’s time to get rid of the old main branch. We’ll use the -D flag, which is a shorthand for --delete --force. This flag deletes the branch regardless of whether it has been merged, so use it cautiously.

git branch -D main

The -D flag forcefully deletes the 'main' branch, so make sure you are absolutely certain that you want to lose that history before running this command.

Step 5: Rename the New Branch to main

Rename your new branch to 'main' to make it the default branch. We’ll use the -m flag here, which stands for “move” or “rename.”

git branch -m main

The -m flag renames the current branch to 'main'. This is useful for making the new branch the default one, aligning it with the conventional naming scheme. Not too long ago, the main branch used to be called 'master'… but that’s a story for another time. 🙂

Step 6: Force Push to Remote

Finally, let’s update the remote repository with our new main branch. Be cautious, as this will overwrite the remote repository.

git push -f origin main
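
For reference, here is the whole sequence in one place, run from inside the repository and only after you have a backup:

git checkout --orphan fresh-start
git add --all
git commit -m "Initial Commit"
git branch -D main
git branch -m main
git push -f origin main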

Wrapping Up

And there you have it! You’ve successfully wiped your Git repository clean and started anew. This process can be useful in various scenarios, but it’s not something to be taken lightly. Always make sure to back up your repository and consult with your team before taking such a drastic step.