Pixeltree


Headless SEO Audit

A headless SEO audit for Hydrogen and Next.js storefronts. We surface rendering, crawl, indexation, and structured data issues your monitoring is missing.

What you get

Deliverables, not deliverable-ish.

Scoped plan

Written scope with success criteria, not a vague retainer.

Senior execution

The person scoping the work is the person doing the work.

Measurable output

Deliverables you can point at. Dashboards, flows, code, docs.

Clean handoff

Documentation and training so the work lives inside your team.


What a headless SEO audit actually catches

Headless storefronts fail SEO in ways that theme-based storefronts never do. A theme-based storefront serves HTML to Googlebot from the server; whatever Googlebot sees is what the user sees. A headless storefront makes rendering strategy a first-class concern: server rendering, static generation, incremental static regeneration, client rendering, streaming. Each of these can serve a different HTML payload to Googlebot than to the user, and each has its own failure modes.

The most common failure is a PDP that renders the critical content after JavaScript hydration. The user sees it quickly enough because their browser hydrates. Googlebot, which is more conservative about JavaScript execution, sees an empty shell. The page indexes with no content, loses rankings, and the team spends three months diagnosing the wrong problem. We have seen this exact pattern at six different brands in the last two years. It is not a rare bug. It is the default failure mode of a poorly configured headless storefront.
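This failure mode is cheap to detect: take the HTML the server actually returns, before any JavaScript runs, and check that the content you expect Googlebot to index is present in it. A minimal sketch; the function name and the script-stripping heuristic are ours for illustration, not any library's:

```typescript
// Check whether the content we expect Googlebot to index is present in the
// server-delivered HTML, i.e. before hydration. A PDP that only mounts its
// description after a client-side fetch will fail this check.
function findMissingContent(serverHtml: string, expected: string[]): string[] {
  // Strip script bodies so hydration payloads do not mask a miss: copy that
  // exists only inside a <script> JSON blob is not indexable text.
  const visible = serverHtml.replace(/<script[\s\S]*?<\/script>/gi, " ");
  return expected.filter((snippet) => !visible.includes(snippet));
}
```

Running this against an empty shell whose product copy lives only in a hydration payload returns the copy as missing; running it against properly server-rendered HTML returns an empty list.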

The second common failure is revalidation strategy. Static generation with incremental revalidation is great for performance and terrible when the revalidation window is longer than the content update cycle. A PDP that revalidates every twenty-four hours will show stale prices, stale inventory, and stale copy to Googlebot for twenty-four hours at a time. If the cache is not invalidated on content publish, new blog posts will not show up in the sitemap or the feed for a full window. These are not edge cases. They are the configuration decisions that most headless storefronts get wrong at launch.
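In Next.js App Router terms, the fix has two halves: size the revalidation window to the content update cycle, and invalidate on publish rather than waiting the window out. A sketch, assuming a CMS publish webhook; the file paths, one-hour window, and secret check are illustrative:

```typescript
// app/products/[handle]/page.tsx (hypothetical route) — pick a window
// shorter than the content update cycle, not a blanket twenty-four hours.
export const revalidate = 3600; // seconds

// app/api/revalidate/route.ts — invalidate on publish so new content does
// not wait out the window before reaching the sitemap or feed.
import { revalidatePath } from "next/cache";

export async function POST(request: Request) {
  const { path, secret } = await request.json();
  if (secret !== process.env.REVALIDATE_SECRET) {
    return new Response("Unauthorized", { status: 401 });
  }
  revalidatePath(path); // e.g. "/blog" and "/sitemap.xml" on a new post
  return new Response("Revalidated", { status: 200 });
}
```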

Our approach

We run the audit as a three- to four-week engagement.

  • Step one, crawl and data collection. We run a full crawl of the site with a JavaScript-enabled crawler and a non-JavaScript crawler. We compare the two. We pull Google Search Console data, log file data if available, and structured data samples at scale.
  • Step two, rendering diagnostics. We inspect the rendering strategy page type by page type: server-rendered, static, revalidated, client-rendered. We validate that the content Googlebot sees matches the content the user sees for every template.
  • Step three, indexation and canonical review. We review the sitemap, the robots rules, the canonical tag behavior, the hreflang setup if applicable, and the indexation status reported by Search Console.
  • Step four, structured data validation. We validate structured data on product, collection, article, and organization schemas against Google's current requirements. We flag warnings, not just errors, because warnings often predict future deindexation.
  • Step five, Core Web Vitals and rendering performance. We run field and lab data on critical templates. We diagnose whether issues are rendering or asset related.
  • Step six, synthesis and recommendations. We produce a prioritized remediation plan with named issues, named owners, and rough effort estimates.
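The crawl comparison in steps one and two reduces to a text diff between the JavaScript-enabled and non-JavaScript payloads for the same URL. A simplified sketch, assuming the two HTML payloads have already been fetched; in practice they come from a headless-browser crawl and a plain HTTP crawl:

```typescript
// Extract visible words from an HTML payload. Crude tokenization, but enough
// to flag content that exists in one crawl and not the other.
function extractText(html: string): Set<string> {
  const stripped = html
    .replace(/<script[\s\S]*?<\/script>/gi, " ")
    .replace(/<style[\s\S]*?<\/style>/gi, " ")
    .replace(/<[^>]+>/g, " ");
  return new Set(stripped.split(/\s+/).filter(Boolean));
}

// Words present only after JavaScript execution. Anything returned here is
// content Googlebot may never index if its rendering pass is skipped.
function hydrationOnlyWords(withJs: string, withoutJs: string): string[] {
  const before = extractText(withoutJs);
  return Array.from(extractText(withJs)).filter((w) => !before.has(w));
}
```

A healthy server-rendered template returns an empty delta; a shell-indexing PDP returns its entire product copy.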

What you get

  • A rendering strategy map that shows how every template type is rendered and whether it matches the intended strategy.
  • A crawl comparison between JavaScript-enabled and non-JavaScript crawls with every delta flagged.
  • A Search Console indexation report with named issues and their likely root causes.
  • A canonical, sitemap, and robots review with every issue named.
  • A structured data validation report at scale across product, collection, article, and organization schemas.
  • A Core Web Vitals field and lab data pack on critical templates.
  • A prioritized remediation plan with named issues, owners, and rough effort estimates.
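The structured data report works the way step four describes: errors for missing required fields, warnings for missing recommended ones, because warnings often predict future deindexation. A toy version of that check, using a deliberately simplified subset of Google's Product fields rather than the full requirements:

```typescript
type Issue = { level: "error" | "warning"; field: string };

// Audit a Product JSON-LD blob: required fields missing are errors,
// recommended fields missing (aggregateRating, review, image) are warnings.
// The field lists are a simplified illustration, not Google's full spec.
function auditProductJsonLd(raw: string): Issue[] {
  const data = JSON.parse(raw);
  const issues: Issue[] = [];
  for (const field of ["name", "offers"]) {
    if (!(field in data)) issues.push({ level: "error", field });
  }
  for (const field of ["aggregateRating", "review", "image"]) {
    if (!(field in data)) issues.push({ level: "warning", field });
  }
  return issues;
}
```

Run at scale across sampled PDPs, the warning counts are usually the early signal: a storefront whose review component is client-rendered tends to show a clean error column and a wall of aggregateRating warnings.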

Timeline

The engagement runs three to four weeks.

  • Week one, crawl and data collection.
  • Weeks two and three, analysis across rendering, indexation, structured data, and performance.
  • Week four, synthesis and remediation plan.

Mini case anatomy

A composite. A twenty-four-million-dollar apparel brand on a Next.js App Router storefront launched nine months earlier. Organic traffic had been flat since launch despite a growing content program and strong brand signals. The internal team believed the site was fine because Lighthouse scores were green and there were no red flags in Search Console.

The audit surfaced four issues. First, the PDP used a client-side fetch for the product description and review content, which meant Googlebot was indexing a shell. The team had assumed the App Router's server components would handle this and had missed that the component in question was marked as a client component. Second, the collection page had a canonical tag that pointed to the page without query parameters, which meant the filter pages were canonicalizing to the base collection and losing their long-tail rankings. Third, the blog sitemap was revalidating every twenty-four hours while the canonical feed was revalidating every sixty seconds, so new posts appeared in the feed but not in the sitemap for up to a day. Fourth, the product structured data was missing the aggregateRating property because the review component was client-rendered and the structured data was generated at build time.
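The second issue is worth a sketch because the fix is a policy decision, not just a tag change: a canonical that strips every query parameter collapses all filter pages into the base collection, while the fix keeps an allowlist of filter parameters that define distinct, indexable pages and strips only tracking noise. The parameter names here are illustrative:

```typescript
// Filter parameters that define distinct, indexable pages (illustrative).
const INDEXABLE_PARAMS = new Set(["color", "size", "material"]);

// Build the canonical URL: keep allowlisted filter params in a stable sorted
// order (so ?size=m&color=navy and ?color=navy&size=m share one canonical),
// drop everything else, e.g. utm_* tracking parameters.
function canonicalFor(url: string): string {
  const u = new URL(url);
  const kept = Array.from(u.searchParams.entries())
    .filter(([key]) => INDEXABLE_PARAMS.has(key))
    .sort(([a], [b]) => a.localeCompare(b));
  u.search = new URLSearchParams(kept).toString();
  return u.toString();
}
```

With this policy, a navy-filter page canonicalizes to itself with the tracking parameter removed, rather than to the bare collection, so its long-tail rankings survive.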

The remediation plan fixed the PDP rendering with a six-hour engineering effort, fixed the canonical tags with a two-hour change, aligned the sitemap and feed revalidation with a four-hour change, and moved the structured data generation to the server with a day of work. Total remediation effort was under a week. Organic traffic was up sixteen percent within ten weeks and up thirty-one percent within six months. The audit paid for itself roughly thirty times over in the first year.

Related reading

The headless development hub covers the full portfolio. For the build side, see Hydrogen build and Next.js commerce. The audit is often paired with performance tuning. For post-migration audits, see headless migration. Operators comparing headless patterns often read headless Shopify versus Liquid and the Shopify speed optimization playbook. For platform decisions upstream of the audit, see platform strategy.

FAQ

Questions we hear most.

How is this different from a standard SEO audit?

Standard SEO audits assume server-rendered HTML and a relatively simple crawl. Headless storefronts introduce rendering strategy, partial hydration, revalidation, and edge caching. Most of the real issues on a headless site live in these layers and are invisible to a standard audit.

Do you implement the fixes?

The engagement scope is audit and recommendation. If the fixes fit inside a short engineering window, we can scope the implementation as a follow-on. Larger remediation usually runs as a separate performance tuning or headless remediation engagement.

What tools do you use?

A mix. Commercial crawlers for bulk analysis, Chrome DevTools and WebPageTest for rendering diagnostics, Google Search Console for indexation ground truth, and custom scripts for structured data and canonical validation at scale.

How long does it take?

Three to four weeks. One week of crawl and data collection, one to two weeks of analysis, one week of synthesis and recommendations.

Let's see if we're a fit.

15 minutes. We'll tell you whether this service fits where you are. If not, we'll name what does.

Book a 15-min call