When the client changes DNS without telling you first
Mail dies, AutoSSL stops renewing, the homepage shows a registrar parking page. A short field guide to diagnosing and fixing a silent DNS handover.
The ticket arrives on a Friday afternoon. Three words: "website is down". No screenshot, no error, no context. You load the site in a private window. It loads fine. You ask a colleague on the other side of the building to load it. They get a registrar parking page that says "this domain is registered with another company". You both refresh. You see the real site. They see the parking page. Nothing has changed on the server. Nothing has changed on cPanel. The domain is up and down depending on who's looking at it, which is the DNS equivalent of a haunting.
This has happened to us twice with two different clients, summerbr once and craneqil shortly after, and both times the cause was the same: somebody on the client's side moved the domain to a new registrar, the new registrar served default nameservers, the default nameservers pointed nowhere useful, and propagation was halfway done when we got the ticket. Nobody told us. Nobody on their side thought it was relevant to tell us. Mail was already broken by the time we looked. AutoSSL was about to fail. The team that did the move had moved on to their next task.
This post is the short version of the playbook we run when the haunting starts.
First check: is DNS disagreeing with itself?
The fastest tell is to query two different public resolvers and compare answers:
```bash
dig +short summerbrooks.com @8.8.8.8
dig +short summerbrooks.com @1.1.1.1
```
If one returns your server IP (203.0.113.42) and the other returns something else entirely (or returns nothing), you are watching propagation in flight. That is almost never a server problem. It is almost always a registrar or nameserver problem.
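If you want more than two answers side by side, a small loop does it in one pass. This is a convenience sketch, not part of the original check; the resolver list is just the usual public trio:

```bash
# Ask several public resolvers for the same A record; any disagreement
# means propagation is still in flight or the zones genuinely differ.
DOMAIN=summerbrooks.com
for r in 8.8.8.8 1.1.1.1 9.9.9.9; do
  printf '%-10s %s\n' "$r" "$(dig +short "$DOMAIN" @"$r" | tr '\n' ' ')"
done
```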
Follow up with the authoritative chain:
```bash
dig NS summerbrooks.com +short
dig SOA summerbrooks.com +short
```
If the NS records aren't the nameservers you put there originally, the domain has been moved. Stop diagnosing the server. The server is fine. The problem is upstream.
What breaks the moment nameservers flip
Once the new nameservers take over, the new zone is whatever the new registrar's default looks like. That is usually:
- An `A` record pointing at the registrar's parking page or, worse, at nothing
- No `MX` records, so mail to that domain bounces immediately
- No `TXT` records, so SPF, DKIM, and DMARC all fail and any mail the domain does send lands in spam
- No `CAA` records, so any CA on earth is allowed to issue for the domain; meanwhile AutoSSL's next renewal at 03:00 fails validation (the domain no longer points at your server) and your inbox fills up with `AutoSSL Failed for User 'summerbr'`
- No `CNAME` records for `autodiscover`, `www`, `_acme-challenge`, `_dmarc`, or whatever else the application stack depends on
The web request that loads the parking page is the loudest symptom, but mail is the most expensive one. Internal users at the client start replying to days-old threads with "did you get my email?" and nobody can find the lost messages because they bounced silently at the sending server. We've watched four days disappear into that hole before someone correlated the timing with "oh, our new marketing lead moved the domain".
The triage flow we run, in order
This is the order. Slowest information last:
```bash
# Who controls the domain now?
whois summerbrooks.com | grep -iE 'registrar|name server'

# What does the world think the nameservers are?
dig NS summerbrooks.com @8.8.8.8 +short

# What does the authoritative nameserver actually serve?
NS=$(dig NS summerbrooks.com +short | head -n1)
dig @"$NS" summerbrooks.com A +short
dig @"$NS" summerbrooks.com MX +short
dig @"$NS" summerbrooks.com TXT +short
dig @"$NS" summerbrooks.com CAA +short

# Are the records propagating?
dig summerbrooks.com @1.1.1.1 +short
dig summerbrooks.com @9.9.9.9 +short
```
You're looking for three signals: who controls the domain (registrar), where the world is being sent (nameservers), and what the new zone contains (records). If the first answer surprises you, the rest will too.
A small companion habit: keep a spreadsheet, or more realistically a markdown table in your team wiki, of every client's expected registrar, nameservers, and the date you last verified them. The table is what turns "is this normal?" into a five-second check instead of a forty-five-minute investigation.
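Ours is nothing fancier than this (the values here are illustrative; the registrar column is whatever the client told you at onboarding):

| Domain           | Expected registrar | Expected nameservers                   | Last verified |
|------------------|--------------------|----------------------------------------|---------------|
| summerbrooks.com | Namecheap          | ns1.prior-host.net, ns2.prior-host.net | 2025-01-14    |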
The client conversation
The conversation matters as much as the technical work. We have a short script we use, and it works because it sounds like curiosity rather than blame:
"Hi, we're seeing some unusual DNS behaviour for your domain. Did anyone on your side change anything related to the domain, the registrar, or the website hosting in the last week or two?"
About 70% of the time the answer is "oh yes, our [marketing person | new developer | agency partner] moved everything to [GoDaddy | Namecheap | Cloudflare] because [it was cheaper | they recommended it]". Nobody is lying. They just didn't know that the domain registrar and the hosting provider are different layers and that touching one breaks the other.
A boundary worth holding: the domain belongs to the client. It is not your DNS to defend. Your job is to make the consequences of the change visible, propose two options for getting things working again, and let them choose. If they want the new registrar to host DNS, you migrate the records and charge for the time. If they want DNS back where it was, you restore it. Either is fine. Both are billable.
Fix path A: move DNS back to your nameservers
If the client wants you to keep managing DNS:
- Have them log into the new registrar and set the nameservers back to yours (the ones you originally provided at onboarding)
- Wait for propagation. `dig NS` from a couple of public resolvers tells you when the change has spread; full propagation can take anywhere from a few minutes to 48 hours depending on the TTL on the old `NS` records
- Verify every record on your nameservers (`A`, `MX`, `TXT`, `CAA`, any `CNAME` the stack relies on) against the backup you should already have. The phrase "you have the backup, right?" has the same energy here as it does with databases
- Once `dig @8.8.8.8` and `dig @1.1.1.1` agree, re-trigger AutoSSL on the cPanel account so it doesn't wait until 03:00 to recover (see the sketch after this list)
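That last step can be scripted. A rough sketch, assuming root on the cPanel box; the per-user AutoSSL check has shipped as `/usr/local/cpanel/bin/autossl_check` on recent cPanel versions, but verify the path on yours:

```bash
# Poll until two public resolvers agree (and neither is empty), then
# run the AutoSSL check for the affected account immediately.
DOMAIN=summerbrooks.com
CPUSER=summerbr
while true; do
  a=$(dig +short "$DOMAIN" @8.8.8.8)
  b=$(dig +short "$DOMAIN" @1.1.1.1)
  [ -n "$a" ] && [ "$a" = "$b" ] && break
  sleep 60
done
/usr/local/cpanel/bin/autossl_check --user="$CPUSER"
```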
Fix path B: replicate records on the new nameservers
If the client wants the new registrar's DNS panel:
- Export your current zone for the domain. On cPanel you can do this from WHM > DNS Functions > Edit DNS Zone, or by reading `/var/named/summerbrooks.com.db` directly
- Recreate each record at the new registrar: `A`, `AAAA` if you have IPv6, `MX` with priorities, `TXT` (SPF and DMARC and any verification strings), DKIM selectors, `CNAME` for things like `autodiscover` and `www`, and `CAA` records pointing at the CA you want to allow (for cPanel AutoSSL this is typically Sectigo; for Let's Encrypt it's `letsencrypt.org`)
- Verify each one with `dig @<new-ns>` before flipping (a loop that diffs old against new is sketched after this list)
- Charge for the time. This is real migration work, not a five-minute favour
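For the verification step, a diff loop like this catches any record you forgot to copy before you flip. It's a sketch; `ns1.newregistrar.example` stands in for whatever the new provider's nameserver actually is:

```bash
# Compare what your nameserver serves against the new one, type by type.
DOMAIN=summerbrooks.com
OLD_NS=ns1.prior-host.net            # your current authoritative NS
NEW_NS=ns1.newregistrar.example      # placeholder for the new provider's NS
for t in A AAAA MX TXT CAA; do
  echo "== $t =="
  diff <(dig @"$OLD_NS" "$DOMAIN" "$t" +short | sort) \
       <(dig @"$NEW_NS" "$DOMAIN" "$t" +short | sort) && echo "match"
done
# Repeat for the names the stack depends on: www, autodiscover, _dmarc, ...
```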
The CAA record is the one most people miss. Without it, any CA in the world can issue a certificate for the domain, which is exactly the scenario you want to prevent. The dig and DNS quick reference has the syntax for CAA and the other less-common record types.
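For convenience, here is what a minimal CAA pair looks like in zone-file syntax. The CA string is an assumption for illustration; use `sectigo.com` for cPanel AutoSSL's default CA or `letsencrypt.org` for Let's Encrypt, and swap in a contact address someone actually reads:

```
; Allow one CA to issue, and ask CAs to report violations.
summerbrooks.com.  86400  IN  CAA  0 issue "sectigo.com"
summerbrooks.com.  86400  IN  CAA  0 iodef "mailto:hostmaster@summerbrooks.com"
```

Once the records are live, `dig CAA summerbrooks.com +short` should echo both back.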
What we now do BEFORE this happens
Three cheap habits have caught the last few attempts before they hurt:
- An onboarding question that reads "Who controls your domain registrar account, and who else on your team has access to it?" It surfaces the "our agency partner manages it" answer up front, so you know who to copy on changes
- A quarterly DNS audit. One bash loop that runs `dig NS` on every client domain on the server and diffs the result against a stored expectation (a minimal version is sketched after this list). Anything that drifts becomes a ticket. It's not real-time, but it catches the four-day-silent failure pattern before week two
- `CAA` records on every domain we manage, pointing at the CA we use, so a rogue cert request from a new registrar's default panel fails politely instead of succeeding quietly
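A minimal version of that audit loop, assuming a flat file `expected_ns.txt` with one domain per line followed by its expected nameservers (sorted, space-separated):

```bash
#!/usr/bin/env bash
# Quarterly NS drift audit (sketch). Each expected_ns.txt line looks like:
#   summerbrooks.com ns1.prior-host.net ns2.prior-host.net
while read -r domain expected; do
  actual=$(dig NS "$domain" @8.8.8.8 +short | sed 's/\.$//' | sort | tr '\n' ' ')
  actual=${actual% }   # trim the trailing space tr leaves behind
  [ "$actual" = "$expected" ] || echo "DRIFT: $domain expected [$expected] got [$actual]"
done < expected_ns.txt
```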
For the AutoSSL side of this, the failure pattern is the same one we wrote up in AutoSSL fails on Microsoft 365 autodiscover subdomains: the fix; a DNS change that nobody mentions and a 03:00 email that nobody reads. The compromises post, Three WordPress compromises in one week: the common thread, is worth a glance too, because a stolen registrar login is a path some attackers take and the symptoms look identical to an innocent registrar move on the first dig.
How ServerGuard handles this
This maps to two use case areas: DNS monitoring and SSL renewal monitoring.
ServerGuard runs a daily DNS audit on every domain attached to a managed server. The audit pulls the cPanel zone, resolves the same names from the public side using a couple of independent resolvers, and compares the two. When the authoritative NS, MX, or A records disagree with what cPanel is serving, the platform opens an alert.
What it does not do: ServerGuard does not modify DNS records, and it does not modify registrar settings. DNS belongs to the client and the registrar. We only observe. The platform's "action" for a DNS drift is exactly one thing: alert the on-call engineer with the diff, the timestamp, and the suggested triage commands.
The honest limit: the audit runs once every 24 hours. If the registrar moved at 09:00 and mail started bouncing at 09:05, the platform will flag it by the next morning, not within the hour. Real-time DNS change detection (the "alert within five minutes" version) is upcoming. It is not in the current release. For sub-hour DNS-change alerting today, you want a dedicated tool like DNSChecker or RIPE Atlas in front of the ServerGuard use case.
The Friday afternoon ticket that says only "website is down" becomes, with the daily audit in place, a Friday morning alert that says "summerbr nameservers changed from ns1.prior-host.net to ns1.godaddy.com; MX records dropped; AutoSSL renewal at risk". That's the same incident, caught early enough to call the client before mail starts bouncing.
Related posts
- AutoSSL fails on Microsoft 365 autodiscover subdomains: the fix (13 min read)
- 86 CPU spikes in 24 hours: a multi-cause cascade postmortem (15 min read)
- When you have to suspend a WooCommerce client: anatomy of a forced suspension (6 min read)