# SimpleSiteWatch metric glossary
A plain-English guide to the data points SimpleSiteWatch collects, what each one means, why it matters, and what usually improves it.
## How to use this glossary
SimpleSiteWatch mixes different types of signals. Some show what happened in a controlled test, some reflect real user behaviour, and some show how Google sees the page. Reading them together gives a more balanced view than relying on one number in isolation.
Most SimpleSiteWatch datasets also store when the check ran and any source error message returned by the service. That helps separate a real website problem from a temporary API or connection issue.
## Core platform terms
These terms explain how SimpleSiteWatch organises monitored websites and how the main data sets fit together.
| Data point | What it is | How it helps your website | How to improve it |
|---|---|---|---|
| Watch point | A single page or URL that SimpleSiteWatch checks on a schedule. | Tracking important URLs separately makes it easier to spot page-level issues before they affect enquiries, search traffic or conversions more broadly. | Monitor the pages that matter most, such as the home page, service pages, key landing pages, forms and high-value content hubs. |
| Website or root domain | The parent website record that groups related watch points, analytics connections, search data and reporting. | This gives you one place to view shared website signals instead of managing every page in isolation. | Make sure each watch point is linked to the correct website record so trends, reports and integrations stay aligned. |
| Connected services | Third-party sources linked to SimpleSiteWatch, such as Google Analytics and Google Search Console. | Connections add real traffic, search and indexing context to the technical checks. | Keep access verified, connect the correct property for each website, and review permissions when staff or agencies change. |
| Lab data | Controlled test data collected by tools such as Lighthouse or SSL checks rather than by live visitors. | Lab checks are useful for diagnosing likely causes quickly and consistently. | Use lab data to identify problem templates, heavy scripts, render-blocking assets and other technical issues that can be fixed directly. |
| Field data | Aggregated data based on real-world visitor behaviour or live search visibility, such as CrUX, GA4 and Search Console. | Field data shows what people and search engines are actually experiencing, which can differ from a controlled test. | Treat field data as the reality check. Use it to confirm whether technical fixes are improving the live experience over time. |
## Reliability, uptime and website security
These checks help you understand whether the page is reachable, how quickly it responds, and whether the supporting domain and certificate are still healthy.
| Data point | What it is | How it helps your website | How to improve it |
|---|---|---|---|
| Uptime | The percentage of recent checks where the watch point responded successfully. | High uptime supports user trust, lead generation, campaign performance and search visibility. Repeated outages can disrupt all of them. | Investigate downtime causes, improve hosting resilience, review caching and security rules, and make sure the origin server and CDN can handle demand. |
| HTTP status code | The response code returned by the page, such as 200, 301, 404 or 500. | Status codes tell you whether the page loaded correctly, redirected, or failed. They are often the fastest clue when something breaks. | Fix broken routes, redirect retired pages cleanly, and investigate recurring 4xx or 5xx responses at server, CMS or application level. |
| Response time | How long the watch point took to respond during the uptime check. | Slow responses usually affect the real user experience before they become full outages. | Optimise hosting, database queries, caching, image handling and third-party scripts. Check for spikes after releases or traffic surges. |
| SSL expiry | The expiry date of the SSL certificate being served for the watch point. | An expired certificate can trigger browser warnings, break trust and stop visitors from completing important tasks. | Enable auto-renewal where possible, monitor renewal jobs, and check that certificate chains and DNS changes are being applied correctly. |
| SSL days left | The number of days remaining before the current SSL certificate expires. | This turns an expiry date into an operational lead time so teams can act before it becomes urgent. | Set a clear renewal process, keep ownership current with the certificate provider, and act early when the remaining days begin to drop. |
| SSL issuer and certificate subject | The issuing authority and the subject details recorded on the certificate being served for the page. | These fields help confirm that the expected certificate is being served and can expose misconfigurations after renewals or infrastructure changes. | Review certificate provisioning, hostname coverage and renewal workflows if the issuer or subject looks unexpected for the environment. |
| Domain expiry | The registration expiry date returned by RDAP for the root domain. | If the domain registration lapses, the website, email and connected services can all be disrupted. | Use a registrar account with stable ownership, renew well ahead of time, and avoid single-person dependency for critical domains. |
| Registrar | The domain registrar recorded in the RDAP lookup. | Knowing who controls the domain helps when troubleshooting expiry, nameserver or ownership issues. | Keep registrar access documented and up to date, especially during agency, staff or business transitions. |
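Most of the reliability numbers above are simple arithmetic over stored check records. A minimal sketch of the uptime and certificate calculations (the function names and record shape are illustrative, not SimpleSiteWatch's actual schema):

```python
from datetime import date

def uptime_percent(checks: list[bool]) -> float:
    """Share of recent checks that succeeded, as a percentage."""
    if not checks:
        return 0.0
    return 100.0 * sum(checks) / len(checks)

def ssl_days_left(expiry: date, today: date) -> int:
    """Days remaining before the served certificate expires."""
    return (expiry - today).days

# True = the watch point responded successfully on that check
recent = [True, True, True, False, True, True, True, True, True, True]
print(uptime_percent(recent))                             # 90.0
print(ssl_days_left(date(2025, 7, 1), date(2025, 6, 1)))  # 30
```

The point of "SSL days left" is exactly this subtraction: it turns a date buried in a certificate into a lead time a team can alert on.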
## Lighthouse and PageSpeed diagnostics
These are controlled diagnostic checks. They are good for finding likely causes and comparing technical improvements between runs.
| Data point | What it is | How it helps your website | How to improve it |
|---|---|---|---|
| Strategy | The device profile used by the Lighthouse run, usually mobile or desktop. | A page can behave very differently on mobile and desktop, so the chosen strategy affects the result and the recommended fixes. | Treat mobile as the stricter benchmark, then test desktop separately where layout and asset behaviour differ. |
| Performance score | A Lighthouse score that summarises loading speed and responsiveness signals from the test run. | It gives a quick technical snapshot, but it should be read alongside the underlying metrics rather than treated as the whole story. | Reduce JavaScript weight, improve caching, optimise images, remove render-blocking assets and improve server responsiveness. |
| Accessibility score | A Lighthouse score based on common accessibility checks such as colour contrast, labelling and semantic structure. | Accessibility issues affect usability for real people and often reveal broader content and interface quality problems. | Fix contrast, heading structure, form labels, alternative text, focus states and semantic HTML issues across templates. |
| Best Practices score | A Lighthouse score that reflects common web quality and safety checks, such as HTTPS use and browser-safe implementation patterns. | It helps identify avoidable technical debt that can undermine reliability, trust or maintainability. | Resolve flagged security and implementation issues, update outdated libraries and review third-party scripts that introduce risk. |
| SEO score | A Lighthouse score based on a limited set of technical SEO checks visible in the test. | It helps find basic blockers, but it is not a substitute for full SEO analysis or Search Console performance data. | Review titles, meta descriptions, indexability, canonical handling, crawl access, structured data and mobile-friendliness. |
| First Contentful Paint (FCP) | The time until the browser first renders any content from the page. | It reflects the first visible sign that the page is starting to load for the visitor. | Improve server response times, inline critical CSS where appropriate, reduce render-blocking assets and compress page resources. |
| Largest Contentful Paint (LCP) | The time until the largest visible content element is rendered, which usually approximates when the main content becomes visible. | This is one of the clearest indicators of perceived loading speed for users. | Optimise hero images, prioritise above-the-fold assets, improve caching, reduce server delay and limit heavy front-end dependencies. |
| Cumulative Layout Shift (CLS) | A measure of unexpected layout movement while the page is loading or updating. | Layout shifts make pages feel unstable and can cause accidental clicks or poor reading experiences. | Reserve space for images, embeds and banners, avoid injecting content above existing content, and make dimensions explicit in templates. |
| Total Blocking Time (TBT) | The total time during the Lighthouse run when long tasks (over 50 ms) blocked the browser main thread, delaying its ability to respond to input. | High blocking time usually points to heavy JavaScript or browser work that makes the page feel unresponsive. | Reduce JavaScript, split long tasks, defer non-critical scripts and review third-party tags that monopolise the main thread. |
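All four category scores share the same published banding: Lighthouse colours a 0-100 score green from 90 upwards, orange from 50 to 89, and red below 50. A small helper makes the mapping explicit:

```python
def score_band(score: int) -> str:
    """Map a 0-100 Lighthouse category score to its published colour band."""
    if score >= 90:
        return "good"               # green, 90-100
    if score >= 50:
        return "needs improvement"  # orange, 50-89
    return "poor"                   # red, 0-49

print(score_band(92), score_band(67), score_band(38))
# good needs improvement poor
```

This is why a jump from 88 to 91 looks dramatic in a dashboard while a jump from 60 to 75 does not: the band changed, not necessarily the underlying experience.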
## Real-user experience and CrUX
CrUX reflects aggregated real-user experience data. It is especially useful for understanding whether visitors are feeling the same pain that lab tests suggest.
| Data point | What it is | How it helps your website | How to improve it |
|---|---|---|---|
| CrUX record type | Whether the data is recorded at the specific URL level or at the broader origin level. | Origin data can hide a weak page within a stronger site-wide average, while URL data is more page-specific when enough data exists. | Use page-level data where available for high-value URLs, but compare it with origin-level trends to see whether the issue is isolated or widespread. |
| Form factor | The device class represented in the CrUX data, such as mobile or desktop. | Experience often differs sharply by device, especially on layouts with large media, heavy scripts or poor responsive handling. | Prioritise the device type used most by your audience and review responsive design, image sizes and interaction patterns for that device. |
| Interaction to Next Paint (INP) | A real-user responsiveness metric based on the slowest interactions visitors experience, showing how quickly the page visibly reacts to input. | A low INP helps pages feel responsive throughout the visit, not just at load time. | Reduce long JavaScript tasks, simplify interaction handlers, limit heavy client-side frameworks where possible and profile slow UI events. |
| Time to First Byte (TTFB) | The time until the first byte of the page response is received. | Slow TTFB often points to server, network or edge delivery delays that affect every visitor before the page can start rendering. | Improve server response paths, caching, CDN behaviour, database efficiency and network routing for critical pages. |
| Metric rating | The quality band assigned to a CrUX metric, such as good, needs improvement or poor. | Ratings make it easier to prioritise which real-user issues need action first. | Start with the metrics rated poor, then work through needs improvement signals that are affecting high-traffic or high-value pages. |
| Overall rating | The combined quality view across the tracked real-user experience metrics. | It provides a quick health signal, but it should still be unpacked into the individual metrics driving the result. | Fix the weakest experience metric first. The overall state usually improves when the worst bottleneck is removed. |
| FCP and LCP in CrUX | Real-user versions of the loading metrics, based on what visitors actually experience in the field. | These field versions help validate whether lab improvements are showing up for real users across the internet. | Keep improving page weight, server speed and asset prioritisation, then review whether the live metrics move in the same direction over time. |
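The good / needs improvement / poor bands come from fixed, published thresholds per metric. A sketch of the rating logic, using the thresholds documented for Core Web Vitals and the related loading metrics (values in milliseconds, except the unitless CLS):

```python
# Published thresholds: good <= first value, poor > second value.
THRESHOLDS = {
    "LCP":  (2500, 4000),   # milliseconds
    "INP":  (200, 500),     # milliseconds
    "CLS":  (0.1, 0.25),    # unitless layout-shift score
    "TTFB": (800, 1800),    # milliseconds
    "FCP":  (1800, 3000),   # milliseconds
}

def rate(metric: str, value: float) -> str:
    """Assign the CrUX-style quality band for a single metric value."""
    good, poor = THRESHOLDS[metric]
    if value <= good:
        return "good"
    if value <= poor:
        return "needs improvement"
    return "poor"

print(rate("LCP", 2100))  # good
print(rate("INP", 350))   # needs improvement
print(rate("CLS", 0.4))   # poor
```

In the real CrUX dataset each rating is derived from the 75th percentile of visitor experiences, so a "good" band means most visitors, not just the average visitor, are inside the threshold.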
## Google Search Console performance data
These metrics show how the website is appearing in Google Search and how searchers are responding to those appearances.
| Data point | What it is | How it helps your website | How to improve it |
|---|---|---|---|
| Clicks | The number of times a user clicked the website from Google Search results. | Clicks show whether search visibility is translating into actual traffic. | Target relevant topics, improve titles and descriptions, align content to search intent, and strengthen pages that already earn impressions but few clicks. |
| Impressions | The number of times pages from the website appeared in Google Search results. | Impressions show visibility even when people do not click, which helps reveal early gains or losses in search presence. | Build useful, indexable content around the right topics, improve internal linking and remove technical blockers that suppress visibility. |
| Click-through rate (CTR) | Clicks divided by impressions, expressed as a percentage. | CTR helps show whether search listings are compelling once they appear. | Tighten page titles, improve descriptions, match search intent more clearly, and use structured data where it is relevant and valid. |
| Average position | The average ranking position of the topmost result from the site. | Position helps explain whether low clicks are a visibility problem, a listing problem, or both. | Strengthen topic relevance, improve content depth, fix technical SEO blockers, earn stronger links and make sure pages are properly indexed and canonicalised. |
| Top queries | The search terms that are most often associated with the website appearing in results. | Queries show what Google currently understands the site to be relevant for. | Expand or refine content around the right search terms, and correct mismatches where irrelevant queries are taking visibility from better ones. |
| Top pages | The pages that are attracting the most search impressions or clicks. | This helps identify which content is carrying search performance and which pages may deserve more attention. | Protect the strongest pages, refresh ageing performers, and replicate effective structure or topic treatment across weaker pages. |
| Device breakdown | Search performance grouped by device type, such as mobile or desktop. | A device gap can reveal mobile UX issues, SERP differences or template problems that do not appear on desktop. | Investigate mobile layouts, page speed and metadata where mobile clicks or CTR lag behind desktop disproportionately. |
| Country breakdown | Search performance grouped by the searcher country available in Search Console. | This helps show where visibility is strongest or weakest geographically. | Review localisation, country targeting, market-specific content and location signals if one audience underperforms. |
| Hourly search trend | An hourly view of recent Search Console performance where available. | This can help spot sudden spikes, drops or timing effects that daily summaries hide. | Use hourly changes to investigate release issues, outages, campaign launches or sudden indexing disruptions more quickly. |
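CTR is the one derived metric in the table, and computing it yourself is often the quickest way to find listings worth rewriting: queries with plenty of impressions but a weak CTR. A small sketch (the query rows are made-up sample data, not a real Search Console export):

```python
def ctr(clicks: int, impressions: int) -> float:
    """Click-through rate as a percentage; 0 when there were no impressions."""
    return 100.0 * clicks / impressions if impressions else 0.0

rows = [
    {"query": "example service", "clicks": 40, "impressions": 1000},
    {"query": "example guide",   "clicks": 5,  "impressions": 900},
]
for row in rows:
    row["ctr"] = ctr(row["clicks"], row["impressions"])

# High-impression, low-CTR queries are the usual first candidates
# for title and description work.
print([r["query"] for r in rows if r["impressions"] > 500 and r["ctr"] < 2])
# ['example guide']
```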
## URL Inspection and indexing signals
These signals describe how Google sees a specific page from an indexing and search-eligibility point of view.
| Data point | What it is | How it helps your website | How to improve it |
|---|---|---|---|
| Index verdict | The high-level verdict about whether the inspected URL is indexed or eligible from Google’s point of view. | It quickly shows whether the page is broadly healthy, excluded or failing an indexing check. | Review the more specific indexing, fetch, canonical and robots signals underneath the verdict to find the real cause. |
| Coverage state | A more detailed description of whether Google could find and index the page. | Coverage helps explain why a page is included, excluded or delayed in indexing. | Resolve crawl blockers, duplicate-page issues, noindex instructions or canonical conflicts depending on the state shown. |
| Indexing state | Whether indexing is allowed, blocked or restricted by instructions such as noindex. | A page cannot perform in search if it is explicitly blocked from indexing. | Remove unintended noindex rules, review CMS defaults and check headers as well as page-level meta directives. |
| Robots.txt state | Whether the page is blocked to Google by robots.txt rules. | Blocked pages may not be crawled properly, which affects discovery, rendering and diagnosis. | Audit robots.txt changes carefully, especially after migrations, hosting changes or developer deployments. |
| Page fetch state | Whether Google could successfully retrieve the page from the server. | Google cannot index or evaluate a page properly if it cannot fetch it reliably. | Fix server errors, timeouts, firewall rules and edge configurations that interfere with Google fetching the page. |
| Last crawl time | The latest recorded time Google crawled the URL successfully. | This helps you understand whether changes may already be reflected in Google or whether the page has not been revisited recently. | Improve internal linking, sitemap accuracy and crawl accessibility so important pages are discovered and revisited more reliably. |
| Google canonical | The canonical URL Google selected for the page. | If Google chooses a different canonical than the one you expect, ranking signals may consolidate onto another URL. | Strengthen canonical consistency through redirects, internal links, canonical tags, sitemaps and cleaner duplicate handling. |
| User canonical | The canonical URL declared by the site itself. | This is the site’s own signal about which version should represent the content. | Make sure the declared canonical points to the preferred live page and matches redirects, internal links and sitemap entries. |
| Mobile usability verdict | The result of Google’s mobile usability analysis for the inspected page. | Mobile usability issues can affect both search experience and on-page conversion performance. | Fix viewport handling, cramped tap targets, text sizing and layout overflow issues on mobile templates. |
| Rich results verdict | The result of Google’s rich result analysis for the page. | A failing verdict can stop eligible structured data from appearing as enhanced results in search. | Fix structured data errors, required properties and template-level schema issues, then revalidate the affected page types. |
| Referring URLs | Known URLs that point to the inspected page. | Referring URLs help show how the page is being discovered or connected within the site and beyond. | Strengthen internal linking from relevant pages and make sure important destinations are not isolated. |
## Google Analytics data
These metrics help connect technical and search signals with actual website usage and audience behaviour.
| Data point | What it is | How it helps your website | How to improve it |
|---|---|---|---|
| Active users | The number of unique users who engaged with the site in the selected period. | Active users show meaningful usage rather than just raw presence. | Make pages easier to use, faster to navigate and more aligned to user intent so visits turn into active engagement. |
| Total users | The total number of unique users who visited the site in the selected period. | This is the broadest user count and helps track overall reach. | Increase discoverability through SEO, campaigns, referrals and content that answers real audience needs. |
| Sessions | The number of visits or active sessions started on the site. | Sessions help distinguish overall traffic volume from the number of individual people. | Improve entry-page quality, campaign relevance and site usefulness so users return and continue exploring. |
| Pageviews | The number of times pages were viewed during the selected period. | Pageviews help show whether users are moving through the site or dropping off early. | Strengthen internal linking, content pathways, calls to action and content relevance so the next useful step is obvious. |
| Top pages | The pages receiving the highest measured pageview activity in the Analytics snapshot. | Top-page activity shows which content is attracting attention once users are already on the site. | Strengthen underperforming pages using the structure, content depth and navigation cues seen on the strongest page-level performers. |
| Engagement rate | The percentage of sessions that count as engaged sessions. | This helps show whether visits had meaningful interaction rather than ending immediately. | Clarify content hierarchy, speed up the experience, improve calls to action and give visitors a clear next step quickly. |
| Mobile traffic share | The percentage of measured traffic or sessions coming from mobile devices. | Knowing the device mix helps you prioritise design, content and performance decisions appropriately. | If mobile traffic is strong, treat mobile UX, page weight, form usability and navigation clarity as top priorities. |
| Desktop traffic share | The percentage of measured traffic or sessions coming from desktop devices. | Desktop-heavy sites may have different content depth, workflow and conversion expectations than mobile-first sites. | Support deeper research and comparison journeys with strong page structure, navigation and content hierarchy on larger screens. |
| Channel group | A grouped traffic source view, such as organic search, direct, referral or paid channels. | Channel mix helps explain which acquisition paths are driving traffic and which ones are weakening. | Strengthen the channels that matter strategically and investigate sudden losses that may point to search, campaign or tracking issues. |
| Source / medium | A more detailed breakdown of where sessions came from and the type of acquisition involved. | This helps isolate changes in specific campaigns, referral sources or search traffic streams. | Review tagging consistency, campaign quality, referral partnerships and landing page performance by source. |
| Country | The geographic country breakdown in the Analytics snapshot. | Country data helps explain whether traffic patterns align with the intended market or campaign audience. | Localise content, review search targeting and adjust acquisition plans if the strongest countries do not match the intended audience. |
| Landing page | The page where a session began. | Landing pages shape the first impression and often explain why users continue, convert or leave. | Prioritise messaging clarity, speed, trust signals and obvious next actions on the pages that start the most sessions. |
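Engagement rate and the two device shares are all percentage shares of sessions, so one guarded helper covers them. A sketch with illustrative numbers (not real analytics data):

```python
def share(part: int, total: int) -> float:
    """Percentage share of a total, guarding against an empty total."""
    return 100.0 * part / total if total else 0.0

sessions = 1200
engaged_sessions = 780
mobile_sessions = 900

print(share(engaged_sessions, sessions))  # 65.0 -> engagement rate
print(share(mobile_sessions, sessions))   # 75.0 -> mobile traffic share
```

Reading the two together is the useful part: a high mobile share combined with a low engagement rate is a strong hint that the mobile experience, not the content, is the bottleneck.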
## Schema Watch and structured data
Schema Watch focuses on whether structured data is present, valid and stable on the page over time.
| Data point | What it is | How it helps your website | How to improve it |
|---|---|---|---|
| Schema present | Whether structured data markup was detected on the page. | No schema means search engines have less explicit machine-readable context about the page content. | Add relevant schema types only where they genuinely match the page content and can be kept accurate over time. |
| Validity | Whether the detected schema appears structurally valid and usable. | Invalid schema is unlikely to help and can create confusion or lost rich result opportunities. | Correct missing required fields, invalid nesting, wrong types and template-level implementation mistakes. |
| Format summary | A summary of the markup formats detected, such as JSON-LD, microdata or RDFa. | Format information helps diagnose how the markup is being implemented and where problems may sit in the stack. | Prefer clean, maintainable template implementations and reduce duplicate or conflicting schema formats where possible. |
| Detected schema types | The Schema.org item types found on the page, such as Organization, Article, FAQPage or Product. | This shows what the page is telling search engines about itself. | Use types that match the page purpose accurately and avoid adding types just because they are available. |
| Item count | The number of schema items detected in the latest check. | A sudden change in item count can reveal broken templates, removed modules or duplicated markup. | Investigate unexpected jumps or drops after releases and keep schema generation consistent across reusable templates. |
| Error count | The number of higher-severity schema issues detected in the latest check. | Errors are the strongest signal that the structured data is incomplete, invalid or unusable. | Fix errors first, especially those affecting required fields or invalid item structure. |
| Warning count | The number of lower-severity issues or missing recommended properties detected. | Warnings may not fully block markup use, but they often point to incomplete implementations. | Review recurring warnings and close the ones that strengthen the quality and clarity of the markup meaningfully. |
| Changed state | Whether the structured data fingerprint changed compared with a previous retained result. | Unexpected change is useful for catching template regressions or content publishing mistakes. | Check release notes, template edits and content workflows when schema changes unexpectedly across important pages. |
| Schema fetch status | Supporting metadata such as the page HTTP code, fetch error and collected HTML size recorded during the schema check. | These details help separate a broken schema implementation from a page that could not be fetched cleanly in the first place. | Fix access problems, redirects, timeouts or blocked responses before trying to diagnose the markup itself. |
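To make the schema fields concrete, here is a sketch of how detected types and item count could be derived from JSON-LD blocks that have already been pulled out of a page's `<script type="application/ld+json">` tags. This is an illustration of the idea, not SimpleSiteWatch's actual parser; it raises on malformed JSON so the caller can record that as an error rather than silently skipping it:

```python
import json

def schema_types(jsonld_blocks: list[str]) -> tuple[set[str], int]:
    """Collect @type values and count items across extracted JSON-LD blocks."""
    types: set[str] = set()
    items = 0
    for block in jsonld_blocks:
        data = json.loads(block)  # raises ValueError on invalid JSON
        # A block may hold a single object or a list of objects.
        for item in data if isinstance(data, list) else [data]:
            items += 1
            t = item.get("@type")
            if isinstance(t, list):   # @type itself can be a list
                types.update(t)
            elif t:
                types.add(t)
    return types, items

types, count = schema_types([
    '{"@context": "https://schema.org", "@type": "Organization", "name": "Acme"}',
    '[{"@type": "FAQPage"}, {"@type": "Article"}]',
])
print(sorted(types), count)  # ['Article', 'FAQPage', 'Organization'] 3
```

A fingerprint of the parsed structure, compared between runs, is what drives the "changed state" signal above.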
## AI Watch and AI reporting terms
These terms describe how SimpleSiteWatch records AI visibility checks and AI-generated reporting context.
| Data point | What it is | How it helps your website | How to improve it |
|---|---|---|---|
| AI Watch prompt | A configured topic or service phrase used to test how supported AI providers surface the website or domain. | Prompts let you monitor whether AI systems are mentioning, citing or overlooking the website for important topics. | Use prompts that reflect real services, products, questions and decision-making language used by the audience. |
| Query mode | The AI run mode used for the prompt, such as native recall or live search, depending on provider capability. | Different modes can produce different citation behaviour and different visibility outcomes. | Choose the mode that best matches how the provider is expected to surface current information for your use case. |
| Surfaced | Whether the domain or result appeared in the AI response for the prompt. | This is the clearest first signal of whether the website is visible at all in the monitored AI answer. | Strengthen topical relevance, content clarity, authority signals and supporting search visibility around the monitored topic. |
| Domain cited | Whether the AI response cited the monitored domain as a source. | Citation is stronger than a vague mention because it suggests the provider used the site directly as evidence. | Publish clearer source-worthy content, improve crawlable public pages and strengthen topic ownership on the site. |
| Target URL hit | Whether the specific target page linked to the prompt appeared in the AI response or citation set. | A domain mention is useful, but a hit on the intended page is usually a stronger sign of relevance and precision. | Make the target page the strongest page for that topic, with clearer intent match, structure and evidence. |
| Citation position | The position or order in which the monitored site or URL appeared among cited results where available. | Earlier citation positions usually indicate stronger prominence in the answer. | Focus on the same fundamentals that improve discoverability elsewhere: clearer topical depth, stronger trust signals and cleaner public content. |
| Cited URLs | The URLs referenced in the AI answer for that run. | The cited set shows who the provider appears to trust or draw from on the topic. | Compare the cited competitors or sources with your own content to identify gaps in coverage, clarity or authority. |
| Confidence note | A stored note that captures context about the certainty or quality of the AI result. | This helps explain when the answer was unclear, partial or otherwise less reliable. | Treat low-confidence AI outcomes as prompts for manual review rather than direct truth, then improve the public evidence available on the topic. |
| AI report summary | The stored AI-generated commentary summarising the reporting period for a domain. | It helps turn large sets of metrics into a readable narrative and a prioritised next-step view. | Use it as a starting point, then validate the recommendations against the underlying technical, analytics and search data. |
| Provider, model and run status | The AI provider, model name and stored completion status for the AI Watch or AI reporting run. | This context helps explain why results may differ between runs and whether a result completed normally or failed. | Review provider settings, quotas and prompt configuration if the chosen model is not producing stable or useful outputs. |
| Input tokens, output tokens and estimated AI cost | Operational metadata that records the size and estimated cost of the AI run rather than the quality of the website itself. | These values help manage AI usage and cost, but they are not website performance scores. | Keep prompts focused, use the right model and avoid unnecessary prompt sprawl if costs rise without improving reporting quality. |
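Estimated AI cost is operational arithmetic, not a quality score: token counts multiplied by the provider's per-token rates. A sketch with hypothetical rates, since real rates vary by provider and model:

```python
def estimated_cost(in_tokens: int, out_tokens: int,
                   in_rate: float, out_rate: float) -> float:
    """Estimated run cost in USD. Rates are per 1M tokens; output tokens
    are typically priced higher than input tokens."""
    return (in_tokens * in_rate + out_tokens * out_rate) / 1_000_000

# Hypothetical rates: $3 per 1M input tokens, $15 per 1M output tokens.
print(estimated_cost(12_000, 2_000, 3.0, 15.0))  # 0.066
```

Because the estimate scales linearly with prompt size, trimming prompt sprawl is usually the cheapest lever when costs rise without reporting quality improving.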