Security · May 7, 2026 · 8 min read

Mobile Google Visitors Saw Casino Spam While Everyone Else Saw the Real Site

Normal visitors saw the real homepage. Mobile visitors from Google got a casino AMP page. Here is how I found the trigger.


Showrav Hasan

WordPress & Infrastructure Engineer

SEO · Malware · Cloaking · Troubleshooting · Web Security

TL;DR

Mobile visitors arriving from Google received a casino spam page, while normal desktop visitors, direct mobile visitors, Googlebot desktop, Googlebot Smartphone, and AdsBot mobile all received the real homepage. The trigger was not "mobile" alone and it was not "Googlebot" alone. It was the combination of a mobile browser user agent and a Google referrer. Once I tested those request paths separately, the bad response stood out immediately: the clean page was 770,377 bytes, the spam page was 26,819 bytes, and the spam HTML started with an AMP document in Turkish.

The Setup

A customer reported a strange SEO problem. They searched their brand from a phone, tapped the Google result, and landed on a casino page. Same domain. Wrong content. Very wrong content.

When I opened the site normally from desktop, it looked fine. The homepage loaded as expected. The title was correct. The canonical URL was correct. The structured data looked like the real business.

That made the first pass annoying. There was no obvious redirect, no visible injected banner, no weird JavaScript popup. Nothing screamed "compromised" in the normal browser view.

So was the site actually clean? Not quite.

The customer was not saying "the site is always spam." They were saying "Google sends mobile users to spam." That distinction mattered.

Step 1: Save The Clean Desktop Response

I started by saving the page exactly as a normal desktop visitor would receive it.

curl https://www.example.com/ > normal.html

The file was large, which matched the real homepage. It had the normal site builder payload, styles, scripts, metadata, and page content.

normal.html    770377 bytes

The title matched the legitimate business.

normal.html: <title>Legitimate Business Homepage</title>
normal.html: <link rel="canonical" href="https://www.example.com/">
normal.html: <meta property="og:url" content="https://www.example.com/">

So the default request path was clean.
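Those field checks can be scripted. A small sketch, assuming the captures use the file names from this post:

```shell
# Print the identity fields (title, canonical, og:url) of a saved
# response. Pass any of the .html captures from this post.
show_identity() {
  f=$1
  grep -m1 -oE '<title>[^<]*</title>' "$f"
  grep -m1 -oE '<link rel="canonical"[^>]*>' "$f"
  grep -m1 -oE '<meta property="og:url"[^>]*>' "$f"
}

# Usage: show_identity normal.html
```

If all three lines match the legitimate business across every capture, that request path is clean.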

Step 2: Check Googlebot Without Assuming Anything

My first instinct was Googlebot cloaking. A lot of hacked sites show clean content to normal users and different content to search crawlers. That is the common pattern.

I tested Googlebot desktop.

curl -A "Googlebot" https://www.example.com/ > googlebot.html

Same size.

googlebot.html    770377 bytes

Same title.

googlebot.html: <title>Legitimate Business Homepage</title>

Then I tested Googlebot Smartphone, because Google's mobile-first indexing means it crawls most sites with a smartphone user agent.

curl -A "Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Mobile Safari/537.36 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" https://www.example.com/ > googlebot-smartphone.html

Same result again.

googlebot-smartphone.html    770377 bytes

That made no sense for a minute. If this was classic crawler cloaking, Googlebot Smartphone should have triggered it. It did not.

So Googlebot was a red herring.

Step 3: Check Mobile Without Google

Next suspect: mobile detection. Maybe the site served a separate mobile page, and that mobile path was infected.

I tested a mobile browser user agent without a Google referrer.

curl -A "Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.0 Mobile/15E148 Safari/604.1" https://www.example.com/ > mobile-direct.html

Still clean.

mobile-direct.html    770377 bytes

Same title. Same canonical. Same homepage.

Mobile alone was not enough.

Step 4: Add The Missing Piece

The customer's exact path was not "open the site on mobile." It was "search on Google from mobile, then tap the result."

So I added a Google referrer to the mobile request.

curl -A "Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.0 Mobile/15E148 Safari/604.1" -e "https://www.google.com/" https://www.example.com/ > mobile-google.html

There it was.

mobile-google.html    26819 bytes

The title was not close.

mobile-google.html: <html amp lang="tr">
mobile-google.html: <title>Casino Bonus Sites 2026</title>

Same URL. Same domain. Totally different HTML.

The clean homepage was 770,377 bytes. The spam page was 26,819 bytes. The spam page started as an AMP document, loaded external casino assets, and used Turkish copy. It was not the site builder output at all. It was a separate response.

That matched the customer report exactly.

Step 5: Test The Red Herrings

I still wanted to avoid blaming the wrong layer. A lot of these cases look like malware, but cache and edge rules can create weird behavior too.

I tested AdsBot mobile.

curl -A "AdsBot-Google-Mobile" https://www.example.com/ > adsbot-mobile.html

Clean.

adsbot-mobile.html    770377 bytes

I tested a fresh desktop request.

curl "https://www.example.com/?nocache=1" > fresh-normal.html

Clean.

fresh-normal.html    770377 bytes

Then I repeated the mobile plus Google referrer request with a fresh query string.

curl -A "Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.0 Mobile/15E148 Safari/604.1" -e "https://www.google.com/" "https://www.example.com/?nocache=1" > fresh-mobile-google.html

Same spam.

fresh-mobile-google.html    26819 bytes

That ruled out a stale cache object for me. The behavior was conditional, repeatable, and tied to request headers.

Step 6: Look For The Conditional Branch

Once the trigger was clear, the cleanup path became much simpler.

I was looking for code or rules that checked two things:

  1. A mobile user agent.
  2. A Google referrer.

That can live in several places:

  1. A compromised origin file.
  2. A malicious include loaded before the real app.
  3. A rewrite rule.
  4. A server level redirect rule.
  5. An edge worker or proxy rule.
  6. A plugin or app snippet with access to request headers.

The exact location is not useful to publish because it changes by stack, and I am keeping the customer fully anonymized. The useful part is the search pattern.

I checked for logic that looked at HTTP_USER_AGENT, HTTP_REFERER, google, android, iphone, mobile, and amp. I also checked any place that could return a full HTML document before the real application booted.
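That search can start with two grep passes. A sketch, where /var/www/site is a placeholder for the real document root and the patterns mirror the list above:

```shell
# Placeholder document root; point this at the real one.
DOCROOT=${DOCROOT:-/var/www/site}

# Files that inspect the user agent or referrer before the real
# application boots are the prime suspects.
grep -rilE 'HTTP_USER_AGENT|HTTP_REFERER' "$DOCROOT" 2>/dev/null || true

# Rewrite and redirect rules keyed on the same signals often live
# in .htaccess or the vhost config.
grep -niE 'google|android|iphone|mobile|amp' "$DOCROOT/.htaccess" 2>/dev/null || true
```

Anything this surfaces that is not part of the legitimate application deserves a manual read, line by line.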

The bad branch matched the test perfectly. When the request looked like mobile traffic from Google, it returned the casino AMP payload. Every other path fell through to the clean homepage.

After removing that conditional branch and clearing the relevant caches, the same request returned the real site.

mobile-google.html    770377 bytes
fresh-mobile-google.html    770377 bytes

Same size as the clean homepage. The spam branch was gone.

Step 7: Confirm The Fix From The User Path

I do not trust a cleanup until I retest the exact path that failed.

So I checked the four important combinations again:

Desktop direct visitor          clean homepage
Mobile direct visitor           clean homepage
Googlebot Smartphone            clean homepage
Mobile visitor from Google      clean homepage

That is the minimum verification set for this kind of case. If you only test desktop, you miss it. If you only test Googlebot, you miss it. If you only test mobile direct traffic, you miss it.

The infected path was hiding between those assumptions.

Why This Was Easy To Miss

The site looked clean from the normal browser path. Even Googlebot Smartphone got the legitimate page. That is the nasty part.

Most people test one of these:

  1. Open the homepage in a browser.
  2. Fetch as Googlebot.
  3. Run a scanner.
  4. Clear cache and test again.

All four can pass while real mobile visitors from Google still get redirected or served spam.

The request needed both headers. Without the mobile user agent, clean. Without the Google referrer, clean. With both, spam.

That is why the customer's report was more accurate than the first technical checks. They described the real path. I just had to reproduce it literally.

If you are cleaning WordPress malware, I wrote a deeper filesystem and database workflow here: Cleaning a WordPress Site Infected with WSO Web Shell and Database Stored Malware. For general hardening after a cleanup, this older checklist still helps: 5 Essential Tips for WordPress Website Security.

The Quick Test I Use Now

When someone says Google results are sending users somewhere strange, I do not start with a normal browser anymore.

I test a small matrix:

normal desktop
mobile direct
Googlebot desktop
Googlebot Smartphone
mobile with Google referrer
mobile with Bing referrer

Then I compare file sizes, titles, canonicals, and the first few lines of HTML.
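That comparison can be one loop. A sketch, assuming the matrix above was saved to correspondingly named files (mobile-bing.html is a hypothetical name matching the last row):

```shell
# One line per capture: file name, byte size, and title.
for f in normal.html mobile-direct.html googlebot.html \
         googlebot-smartphone.html mobile-google.html \
         mobile-bing.html; do
  [ -f "$f" ] || continue
  size=$(wc -c < "$f" | tr -d '[:space:]')
  title=$(grep -m1 -oE '<title>[^<]*</title>' "$f")
  printf '%-26s %9s bytes  %s\n' "$f" "$size" "$title"
done
```

An outlier in the size column is usually the bad path; the title confirms it.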

The file size difference alone can be enough to spot the bad path. In this case:

clean response: 770377 bytes
spam response:   26819 bytes

That is not a cosmetic difference. That is a completely different document.

FAQ

Why did Googlebot Smartphone see the clean page?

Because the trigger was not Googlebot. The bad response required a normal mobile browser user agent plus a Google referrer. Googlebot Smartphone identified itself as Googlebot, so it received the clean crawler path.

Is this considered cloaking?

Yes, in the practical SEO sense. Different users received different content from the same URL based on request headers. In this case, the target was mobile search traffic, not necessarily Google's crawler.

Can cache cause this kind of issue?

Cache can make it confusing, but the repeatable trigger matters. If fresh query strings and multiple user agent tests still produce the same split, look for conditional code, rewrite rules, edge rules, or malware.

What should I compare first?

Compare file size, title, canonical URL, robots meta, language, and the opening HTML tag. A clean site builder page and a spam AMP page usually look different within the first ten lines.
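A minimal sketch of that first-ten-lines check, using the capture names from earlier in this post:

```shell
# Diff the opening lines of two saved responses. A clean builder
# page and a spam AMP page usually diverge immediately.
head_diff() {
  head -n 10 "$1" > /tmp/head_a.$$
  head -n 10 "$2" > /tmp/head_b.$$
  # diff exits nonzero when the files differ; that is the expected
  # outcome here, so ignore the exit code.
  diff /tmp/head_a.$$ /tmp/head_b.$$ || true
  rm -f /tmp/head_a.$$ /tmp/head_b.$$
}

# Usage: head_diff normal.html mobile-google.html
```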

Should I only test Googlebot?

No. Test the path the user actually takes. If the report says "from Google on my phone," test mobile user agent plus Google referrer. Googlebot alone can miss the issue.

What is the fastest takeaway?

When a site looks clean but Google search visitors report spam, reproduce the exact request path. Mobile plus referrer can reveal a hidden branch that normal browser checks never touch.


Written by Showrav Hasan

WordPress & Infrastructure Engineer with 3,500+ resolved incidents across Rocket.net, Hostinger, and NameSilo. I write about the troubleshooting workflows, server strategies, and engineering decisions behind real production support.

Need hands-on help with this?

I use these same strategies to resolve critical incidents for production WordPress sites.