
There’s a dusty shelf in every network closet where good intentions go to die. Or worse, to gossip. You centralize DNS for simplicity. You enable logging for accountability. You peer VPCs for convenience. A few sprints later, your DNS logs have become that chatty neighbor who sees every car that comes and goes, remembers every visitor, and pieces together a startlingly accurate picture of your life.
They aren’t leaking passwords or secret keys. They’re leaking something just as valuable: the blueprints of your digital house.
This post walks through a common pattern that quietly spills sensitive clues through AWS Route 53 Resolver query logging. We’ll skip the dry jargon and focus on the story. You’ll leave with a clear understanding of the problem, a checklist to investigate your own setup, and a handful of small, boring changes that buy you a lot of peace.
The usual suspects: a disaster recipe in three easy steps
This problem rarely stems from one catastrophic mistake. It’s more like three perfectly reasonable decisions that meet for lunch and end up burning down the restaurant. Let’s meet the culprits.
1. The pragmatic architect
In a brilliant move of pure common sense, this hero centralizes DNS resolution into a single, shared network VPC. “One resolver to rule them all,” they think. It simplifies configuration, reduces operational overhead, and makes life easier for everyone. On paper, it’s a flawless idea.
2. The visibility aficionado
Driven by the noble quest for observability, this character enables Route 53 query logging on that shiny new central resolver. “What gets measured, gets managed,” they wisely quote. To be extra helpful, they associate this logging configuration with every single VPC that peers with the network VPC. After all, data is power. Another flawless idea.
3. The easy-going permissions manager
The logs have to land somewhere, usually a CloudWatch Log Group or an S3 bucket. Our third protagonist, needing to empower their SRE and Ops teams, grants them broad read access to this destination. “They need it to debug things,” is the rationale. “They’re the good guys.” A third, utterly flawless idea.
Separately, these are textbook examples of good cloud architecture. Together, they’ve just created the perfect surveillance machine: a centralized, all-seeing eye that diligently writes down every secret whisper and then leaves the diary on the coffee table for anyone to read.
So what is actually being spilled
The real damage comes from the metadata. DNS queries are the internal monologue of your applications, and your logs are capturing every single thought. A curious employee, a disgruntled contractor, or even an automated script can sift through these logs and learn things like:
- Service hostnames that tell a story: Names like billing-api.prod.internal or customer-data-primary-db.restricted.internal do more than just resolve to an IP. They reveal your service names, their environments, and even their importance.
- Secret project names: That new initiative you haven’t announced yet? If its services are making DNS queries like project-phoenix-auth-service.dev.internal, the secret’s already out.
- Architectural hints: Hostnames often contain roles like etl-worker-3.prod, admin-gateway.staging, or sre-jumpbox.ops.internal. These are the labels on your architectural diagrams, printed in plain text.
- Cross-environment chatter: The most dangerous leak of all. When a query from a dev VPC successfully resolves a hostname in the prod environment (e.g., prod-database.internal), you’ve just confirmed a path between them exists. That’s a security finding waiting to happen.
Individually, these are harmless breadcrumbs. But when you have millions of them, anyone can connect the dots and draw a complete, and frankly embarrassing, map of your entire infrastructure.
Put on your detective coat and investigate your own house
Feeling a little paranoid? Good. Let’s channel that energy into a quick investigation. You don’t need a magnifying glass, just your AWS command line.
Step 1: Find the secret diaries
First, we need to find out where these confessions are being stored. This command asks AWS to list all your Route 53 query logging configurations. It’s the equivalent of asking, “Where are all the diaries kept?”
aws route53resolver list-resolver-query-log-configs \
  --query 'ResolverQueryLogConfigs[].{Name:Name, Id:Id, DestinationArn:DestinationArn, VpcCount:AssociationCount}'
Take note of the DestinationArn for any configs with a high VpcCount. Those are your prime suspects. That ARN is the scene of the crime.
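If you want to see exactly which VPCs a suspicious config is feeding from, you can list its associations. A minimal sketch, assuming rqlc-example-id stands in for one of the Ids you just noted:

aws route53resolver list-resolver-query-log-config-associations \
  --filters Name=ResolverQueryLogConfigId,Values=rqlc-example-id \
  --query 'ResolverQueryLogConfigAssociations[].{Vpc:ResourceId, Status:Status}'

A long list here means a lot of VPCs are confessing into the same diary.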
Step 2: Check who has the keys
Now that you know where the logs are, the million-dollar question is: who can read them?
If the destination is a CloudWatch Log Group, the read access you care about lives in IAM: review the policies attached to your users, roles, and groups. Are there wildcard permissions like logs:Get* or logs:* attached to broad groups?
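If you’d like a quick, rough sweep for those wildcards, a loop over your customer-managed policies will do. This is a sketch, not an audit tool: it only looks at local managed policies (not inline or AWS-managed ones) and does a crude string match on the policy documents.

# Flag customer-managed policies that grant logs:Get* or logs:* outright
for arn in $(aws iam list-policies --scope Local --query 'Policies[].Arn' --output text); do
  version=$(aws iam get-policy --policy-arn "$arn" --query 'Policy.DefaultVersionId' --output text)
  if aws iam get-policy-version --policy-arn "$arn" --version-id "$version" \
      --query 'PolicyVersion.Document' --output json | grep -qE '"logs:(Get\*|\*)"'; then
    echo "Broad CloudWatch Logs access in: $arn"
  fi
done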
If it’s an S3 bucket, check the bucket policy. Does it look something like this?
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:root"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::central-network-dns-logs/*"
    }
  ]
}
By naming the account root as the principal, this policy hands read access to the entire account: any IAM user or role whose own permissions allow s3:GetObject can read all the logs. It’s the digital equivalent of leaving your front door wide open.
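Not sure what your own bucket’s policy says? Pull it and compare (swap central-network-dns-logs for your actual log bucket):

aws s3api get-bucket-policy --bucket central-network-dns-logs \
  --query Policy --output text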
Step 3: Listen for the juicy gossip
Finally, let’s peek inside the logs themselves. Using CloudWatch Logs Insights, you can run a query to find out if your non-production environments are gossiping about your production environment.
fields @timestamp, query_name, vpc_id
| filter query_name like /\.prod\.internal/
| filter vpc_id != "vpc-prod-environment-id"
| stats count(*) as queryCount by vpc_id
| sort queryCount desc
This query finds log entries whose query name mentions your production domain (.prod.internal) but that did not originate from your production VPC (swap vpc-prod-environment-id for your real VPC ID), then counts them per source VPC. Any results here are a flashing red light, indicating that your environments are not as isolated as you thought.
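If that query does light up, a natural follow-up is to see which production hostnames the chatty VPCs are actually asking about. Same assumptions as above: the field names (query_name, vpc_id) come from the Resolver query log format, and vpc-prod-environment-id is a placeholder.

fields query_name, vpc_id
| filter query_name like /\.prod\.internal/
| filter vpc_id != "vpc-prod-environment-id"
| stats count(*) as hits by query_name
| sort hits desc
| limit 20

The top of that list is, more or less, the table of contents of your production architecture.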
The fix is housekeeping, not heroics
The good news is that you don’t need to re-architect your entire network. The solution isn’t some heroic, complex project. It’s just boring, sensible housekeeping.
- Be granular with your logging: Don’t use a single, central log destination for every VPC. Create separate logging configurations for different environments (prod, staging, dev). Send production logs to a highly restricted location and development logs to a more accessible one (there’s a small CLI sketch of this after the list).
- Practice a little scrutiny: Just because a resolver is shared doesn’t mean its logs have to be. Associate your logging configurations only with the specific VPCs that absolutely need it.
- Embrace the principle of least privilege: Your IAM and S3 bucket policies should be strict. Access to production DNS logs should be an exception, not the rule, requiring a specific IAM role that is audited and temporary.
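For the first two points, the change is a couple of CLI calls rather than a project. A sketch, with every name and ID below being hypothetical: a prod-only logging config that writes to a locked-down bucket, associated with just the production VPC.

# Hypothetical: a per-environment query log config with a restricted destination
aws route53resolver create-resolver-query-log-config \
  --name prod-dns-query-logs \
  --destination-arn arn:aws:s3:::prod-dns-logs-restricted \
  --creator-request-id prod-dns-logs-setup-01

# Associate it only with the production VPC (the config Id comes from the create call above)
aws route53resolver associate-resolver-query-log-config \
  --resolver-query-log-config-id rqlc-prod-example-id \
  --resource-id vpc-0prod1234567890abc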
That’s it. No drama, no massive refactor. Just a few small tweaks to turn your chatty neighbor back into a silent, useful tool. Because at the end of the day, the best secret-keeper is the one who never heard the secret in the first place.