About Moat Analytics

Moat scans and analyzes billions of advertising impressions daily to provide analytics to publishers and advertisers, and to those working on their behalf.

Moat Analytics helps advertisers and publishers confirm that ads are shown to real people in brand-safe environments, and measures the attention paid to those ads.

Primary Users

Moat Analytics is primarily used by:

Pre-Requisite(s)

Oracle conducts periodic reviews, known as Business Partner Qualification ("BPQ") reviews, of companies purchasing Moat Analytics, Moat Reach, and Context products. The Oracle Advertising Privacy & Compliance team reviews these companies against Oracle and industry standards (e.g., Media Rating Council ("MRC") accreditation guidelines). Any new customer purchasing the aforementioned Moat or Context products must successfully complete the BPQ review before contract execution. Existing customers are audited randomly and re-evaluated as needed as part of the ongoing BPQ audit program.

Below are some of the many criteria Oracle evaluates while performing a BPQ review:

  1. Functional Crunchbase page.
  2. Functional LinkedIn page.
  3. Business listed on the Better Business Bureau.
  4. Alexa search results of the account's primary domain (sites linking to the domain are analyzed for fraudulent or suspicious traffic).
  5. Whether the account has been involved in any negative press, particularly related to ad trafficking fraud or spam and scams.
  6. Instances of plagiarized content.
  7. Pop-ups or instances detracting from an individual's experience with the domain.
  8. Number of ads found on the website (more than ten ads usually clutters the page and detracts from the user experience with the domain).
  9. Quality of the account's domain (e.g., Do the site links work? Do the pages load properly, and is scrolling through the various content fluid?)
  10. Social media presence.
Please note this list is non-exhaustive. Oracle does not provide legal advice or guidance and recommends that each customer consult its own counsel for guidance concerning its compliance with the requirements of the BPQ review and any contract terms.

Use Cases

You can use Moat Analytics to help understand the environments in which advertisements are being served and to analyze the effectiveness of the ads. We focus on a few key areas:
Moat can be used to monitor and measure advertising for placements in:
Moat Analytics helps:

How Moat Analytics Works

Moat uses a combination of pixels, JavaScript, and browser signals to count ad impressions; detect, measure and filter invalid (non-human) traffic; measure attention; and determine the context of the environment in which an ad is placed.

Moat detects, analyzes and filters impressions and handles data according to MRC guidelines. Moat is accredited by the MRC for Display and Video Ad Viewability Metrics and Sophisticated Invalid Traffic Detection and Filtration for desktop, mobile web, and mobile in-app impressions, and for video ad impressions within connected TV (CTV) and over-the-top (OTT) environments. More than 50 metrics have been included in the MRC accreditation.

Moat Analytics activates its measurement through display ad tags, VPAID tags for video, VAST tags for CTV, and direct integrations with many popular platforms — such as Facebook, Instagram, YouTube, Twitter, and LinkedIn.

Display Ad and Content Measurement

To measure display ads, our Integrations team provides a JavaScript tag to be attached to the creative and provided to the ad server or partner. The tag fires 1x1 pixels, each of which is used for a distinct measurement. Other tag options exist; for example, Moat can measure content page-level insights through the implementation of a Moat header tag.

Video and Audio Measurement

Moat measures video ad placements via several integrations, ranging from direct integrations with a player to a VPAID wrapper integrated with VAST and OMID, as described below.

VAST (Video Ad Serving Template) is a standardized XML response used for the delivery and measurement of video ads across the open web. This template contains ad asset(s), tracking pixels required by an ad server, and any other metadata to be passed to the player. The VAST spec is managed by the IAB and has been updated several times since its creation to allow for the ever-changing measurement needs of advertisers, platforms, and publishers.

VPAID (Video Player Ad Interface Definition) is a standardized API that enables JavaScript tags like Moat's to "listen" for video events (such as start, midpoint, and pause) from VPAID-compliant video players. These events allow Moat to know when an ad has started and ended, as well as the quartile events (each quarter of the video's length) in between.
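As a rough illustration of how quartile events relate to playback progress (a sketch, not Moat's implementation; the function and threshold names are hypothetical):

```javascript
// Hypothetical sketch: map a playback progress fraction (0..1) to the
// standard VAST quartile events that should have fired by that point.
const QUARTILES = [
  { event: "start",         at: 0.0  },
  { event: "firstQuartile", at: 0.25 },
  { event: "midpoint",      at: 0.5  },
  { event: "thirdQuartile", at: 0.75 },
  { event: "complete",      at: 1.0  },
];

function firedQuartileEvents(progress) {
  // Return every quartile event whose threshold the playhead has reached.
  return QUARTILES.filter(q => progress >= q.at).map(q => q.event);
}
```

For example, `firedQuartileEvents(0.6)` returns `["start", "firstQuartile", "midpoint"]`: a player 60% of the way through an ad has passed the start, first-quartile, and midpoint thresholds.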

VPAID, initially designed as a standard for interactive video ads, was adopted as the preferred method of providing accurate viewability measurement. However, with the rollout of VAST 4.1 in 2018, the IAB slated VPAID for deprecation in favor of OMID (Open Measurement Interface Definition). OM for web video was released in late 2020.

OMID (Open Measurement Interface Definition) is an API that enables websites and apps (via the Open Measurement SDK) to send standardized measurement signals and report events to JavaScript tags such as Moat's. Unlike VPAID, which is served as a wrapper for the video asset, the VAST 4.1 spec allows verification partners to deliver their JavaScript tags separately from a video or audio ad file, giving a player more control without losing third-party verification and measurement. The underlying OMID API uses an open-source license, whereas the OM SDKs for in-app and web video are licensed through the IAB Tech Lab.

Connected TV Measurement

To measure Connected Television (CTV) and over-the-top (OTT) ad placements, Moat utilizes a pixel-based technology that allows us to capture impression, video start, and completion data, and to detect and report on Invalid Traffic. We can also detect and report on creative and ad-placement identifiers on CTV and OTT devices.

Impression Counting

The metrics we report are based on all impression activity, subject to the filtration procedures described in the sections below. In most integrations, a Moat tag is served with every ad impression. The Moat tag is designed to be cached by the browser for up to one hour to minimize network usage.

Moat waits for the ad to render on the page before counting an impression. (This is different from impression-counting systems that count an impression as soon as a request for the ad is received by the server.) Any difference between the Moat count of impressions and an ad server count usually represents impressions for which a request for an ad was initiated but the page was abandoned before the ad could be rendered.

Data Logging

Moat uses the industry-standard method of firing 1x1 image pixels for measurement and logging of data. To ensure each request is sent and received successfully, we include cache-busters, among them timestamps and random numbers. A measurement is typically logged upon the loading of the Moat tag, upon the rendering of the ad asset, or upon the confirmation of an in-view impression, and periodically throughout the duration of the session to update time-based metrics, including video quartile and completion metrics.
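The cache-busting technique described above can be sketched as follows. This is an illustrative example, not Moat's actual pixel interface; the endpoint and parameter names (`ts`, `rnd`) are hypothetical:

```javascript
// Hypothetical sketch of building a cache-busted 1x1 pixel URL.
function pixelUrl(base, metrics) {
  const params = new URLSearchParams(metrics);
  // Cache-busters: a timestamp plus a random value make every request
  // unique, so intermediaries cannot serve a cached response and every
  // measurement reaches the logging server.
  params.set("ts", Date.now().toString());
  params.set("rnd", Math.random().toString(36).slice(2));
  return `${base}?${params.toString()}`;
}
```

Because of the random component, two otherwise identical pixel fires produce distinct URLs, so neither the browser cache nor a CDN will suppress the second request.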

Metrics tracked by our pixels include binary events (in-view impression, interaction, scroll, etc.); up-to-date values of time-based metrics (total in-view time, page dwell time, etc.); ad and session identifiers (campaign, line item, start time, etc.); and page and browser information (URL, referrer, user agent, etc.). Moat neither tracks any personally identifiable information (PII), nor employs cookies for measurement.

Moat processes and updates metrics in real time, providing the most recently available information through the Moat Dashboard.

Certain partner accounts have elected to enable throttling. In such cases, the Moat tag runs on every impression served but may send data for only a selected portion of impressions. We use the monthly volume of all campaigns under the account to determine the appropriate level of throttling so that reported metrics are not materially affected and confidence intervals remain tight.
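To see why high-volume accounts tolerate aggressive throttling, consider a standard sample-size calculation. This sketch is purely illustrative of the statistics (Moat's actual throttling logic is not published); it uses the normal-approximation sample size n = z²·p·(1−p)/e² at the worst-case rate p = 0.5:

```javascript
// Hypothetical sketch: smallest sampling rate that still yields a tight
// confidence interval for a measured rate (e.g., in-view rate).
function samplingRate(monthlyImpressions, marginOfError = 0.001, z = 1.96) {
  const p = 0.5; // worst-case variance for a proportion
  const needed = (z * z * p * (1 - p)) / (marginOfError * marginOfError);
  // Sample everything for small accounts; sample down for large ones.
  return Math.min(1, needed / monthlyImpressions);
}
```

With a billion monthly impressions and a ±0.1% margin at 95% confidence, fewer than one impression in a thousand needs to report data; a small account would not be throttled at all.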

Moat reports on the data we collect, integrated with data from partners such as ad servers, audience measurement platforms, data-management platforms, DSPs, SSPs, and our own DMP and Contextual Intelligence services, as well as on benchmarks derived from industry-wide data. Our data are provided in dashboards in aggregate, for specific campaigns, and in visual representations such as heatmaps.

Viewability Standards and Methodology

Moat follows the IAB/MRC guidelines for counting in-view impressions: a display ad is counted as in view when at least 50% of its pixels are within the viewable area of the browser for at least one continuous second, and a video ad when at least 50% of its pixels are in view for at least two continuous seconds.

To determine whether an ad meets the in-view time requirement, we check (or "poll") every 200ms, considering the ad in view if consecutive checks show it for the required time span. (While the MRC recommends polling at 100ms intervals for display ads, we have shown through empirical evidence that the 200ms methodology is equivalent in accuracy.)
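The polling logic above can be sketched as a pure function over a sequence of visibility samples. This is a simplified illustration under stated assumptions (one sample per poll, in-view time approximated by consecutive positive samples), not Moat's accredited implementation:

```javascript
// Hypothetical sketch of polling-based in-view timing. `samples` is a
// sequence of booleans, one per poll interval (e.g., every 200 ms);
// the impression counts as in-view once consecutive positive samples
// span the required duration.
function isInViewImpression(samples, pollMs = 200, requiredMs = 1000) {
  const needed = Math.ceil(requiredMs / pollMs); // e.g., 5 consecutive checks
  let run = 0;
  for (const visible of samples) {
    run = visible ? run + 1 : 0; // any non-visible check resets the clock
    if (run >= needed) return true;
  }
  return false;
}
```

Note how a single negative check resets the run: the in-view second must be continuous, so intermittent visibility does not accumulate.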

Moat uses JavaScript to determine the position of an ad. If the Moat tag is on the page or inside one or more same-domain ("friendly") iFrames, viewability is measured in all browsers, including mobile browsers. If the tag is inside one or more cross-domain (or "hostile") iFrames, viewability is measured in all browsers and devices using the IAB-standard SafeFrame (implemented by the page or a Moat tag outside the iFrame via an API). If SafeFrame is not available, viewability is measured in the browser using browser-resource techniques or browser APIs such as Intersection Observer. Because Moat Analytics runs from within the browser and does not interface with external applications, it does not take into account non-browser applications when determining the viewability of an ad.
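The measurement cascade in the preceding paragraph can be summarized as a selection function. The flag and strategy names here are illustrative placeholders, not Moat's internal API:

```javascript
// Hypothetical sketch of the fallback cascade for choosing a viewability
// measurement technique based on what the serving environment exposes.
function viewabilityStrategy(env) {
  if (!env.crossDomainIframe) return "geometric";          // on page or in friendly iframes
  if (env.safeFrame)          return "safeframe-api";      // IAB SafeFrame available
  if (env.intersectionObserver) return "intersection-observer"; // browser API fallback
  if (env.browserResourceTechniques) return "browser-resource";
  return "unmeasurable"; // no technique available in a hostile iframe
}
```

Used this way, an impression is only ever reported as unmeasurable when every technique in the cascade is unavailable.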

For mobile in-app measurement an SDK is typically used to determine the position of the ad container or WebView within the app; the Moat JavaScript tag determines the position of the ad within the WebView. Partners may need to update the integration of the SDK when they update their apps.

Invalid Traffic and Traffic Filtration Methodology

Moat identifies and provides alerts about dozens of forms of Invalid Traffic (IVT), as described below. Moat employs techniques based on identifiers, activity, and patterns, using data in log files to identify and filter (exclude) invalid activity, including but not limited to known and suspected non-human activity and suspected invalid human activity. However, because user identification and intent cannot always be known or discerned by a publisher or advertiser, or by their respective agents, it is unlikely that all invalid activity can be identified and excluded from reporting results.

IVT analytics can be enabled for web display inventory, for video, mobile, and native advertising, and for content, by using a Moat tag. Moat IVT data are available through the Moat Analytics API and the dashboard UI.

GIVT

For General Invalid Traffic (GIVT), we report impressions that were determined to be delivered to what is called a General IVT end point. Categories for GIVT include spiders, excessive activity, and data center traffic. Further definitions and explanations are given below. Any of these individual categorizations also count toward the overall IVT rate.

Data Center Traffic Rate. This is the percentage of measurable impressions that were determined to originate from a data center and are therefore considered in the GIVT category.

Moat has an extensive database of known non-human data-center IP addresses. We curate it through both our own first-party analysis and through partnerships. It is worth noting that traffic from such non-human sources is part of the fabric of the internet, as data centers are commonly used to host search engine crawlers and other automated services. These legitimate use cases are nonetheless not human and are therefore designated as invalid end points for ad impressions.

GIVT rates that fall below our quarterly benchmarks often represent impressions caused by legitimate non-human traffic. Clients can work with their partners to avoid paying for these impressions.

When Moat begins to detect data center traffic rates in excess of our quarterly benchmarks, it warrants a deeper look, as there is a higher likelihood of fraudulent activity. This can be revealed by looking at the sub-metrics of SIVT, as discussed below.

In general, Moat recommends not targeting high-IVT environments. For advertisers, the easiest way to do this is to employ Pre-Bid by Moat; for publishers, to use our Yield Intelligence offering.

Excessive Activity Rate. This is the percentage of impressions determined to have been delivered to users with overly high and therefore invalid levels of activity. Moat collects "hashed" or anonymized user data that track activity. If Moat deems a user's activity invalid, the user is blacklisted and impressions are categorized as "excessive activity."

Spider Rate. This is the percentage of impressions determined to have originated from known spiders according to the IAB/ABC International Spiders and Bots List.

SIVT

For Sophisticated Invalid Traffic (SIVT), we report impressions that were determined to be delivered to an SIVT end point. Categories for SIVT include automated browser, hidden ads, incongruous browser, invalid proxy, invalid source, and session hijacked. These all count toward the overall IVT rate.

Automated Browser Rate. This is the percentage of measurable impressions that were determined to originate from an automated browser. Automated browsers are detected via "information leaks" in browser automation software that reveal when a browser is being driven by software rather than by a human user's mouse, keyboard, or touch screen.

Hidden Ads Rate. This is the percentage of impressions for which an ad was hidden from the user's view for the duration of the impression.

Incongruous Browser Rate. This is the percentage of measurable impressions that were determined to originate from a browser with an incongruous feature set. An incongruous browser is a browser that is typically run through a botnet and has been modified both to avoid alerting the user of a PC that their computer is part of a botnet and to avoid detection by standard ad-verification techniques. Incongruous browsers, by nature, are adulterated and therefore are not genuine versions of Chrome, Firefox, or other commonly used browsers that they might purport to be. We detect them by scanning for subtle "incongruities" in the operating environment that would rule out their authenticity.

Invalid Proxy Rate. This is the percentage of measurable impressions that were determined to use a proxy, excluding corporate proxies. Moat uses three methods to identify invalid proxies: a list curated by Digital Envoy; checking whether the browser sends an X-Forwarded-For header; or using Flash, when available, to ascertain the real IP address of a user (a process described in this Wired article).

Invalid Source Rate. This is the percentage of unfiltered impressions served on a domain or app that Moat has flagged as invalid because the inventory source exhibits invalid characteristics.

Session Hijacked Rate. This is the percentage of impressions triggered when a user's session has been forcibly redirected to another site, tab, or app store. This indicates illegitimate activity from a legitimate device and a real user, generating low-quality impressions. Examples might be when, unknown to a user, multiple browser windows open, or when unseen pop-under ads are served to a browser. Per MRC guidelines, this is categorized as SIVT.

Additionally, several tools provided by Moat Analytics (which are not necessarily specific to Invalid Traffic detection) can be used to further identify SIVT. For example, by default we report domains based on where an ad is served, even within multiple hostile iFrames. This reporting can be used to identify domain spoofing, ad tag hijacking, and creative hijacking. SIVT is filtered from all metrics that are not explicitly unfiltered.

Moat works with several industry-leading partners, including Oracle's Contextual Intelligence (formerly Grapeshot), to extend its coverage of the SIVT spectrum and to identify and analyze environments not considered brand-safe.

There are further definitions of IVT (including GIVT and SIVT) and specifications of our methodologies in the Moat User Guide.

Further Notes and Safeguards

Our JavaScript communicates with our internal servers throughout the lifetime of an impression, enabling real-time decisions about IVT. We also use several back-end processes to analyze IVT.

Moat analyzes all traffic on an impression-level basis. To determine whether an impression is invalid, we do not use probabilities, confidence thresholds, reputation scores, or cookies. We detect pre-fetched impressions and do not count them as in-view until a page becomes visible.

We update all lists used for IVT detection daily. Signatures of major browsers are re-evaluated with every version's release. Checks are in place to ensure all systems and data used for IVT analytics are the latest available. We also perform quarterly audits of all our IVT systems to ensure that our monitoring and processes remain up-to-date.

Moat vets all business partners to make sure they are legitimate operators and share a mutual interest in detecting, filtering, and reducing Invalid Traffic. We do not conduct business with actors found to use or promote IVT. We periodically reevaluate partners for IVT, and if high levels are detected will work with them to reestablish appropriate levels.

Moat excludes all our internal office traffic from reporting by filtering out our office IP addresses. We also use a robots.txt file on our pixel servers to prevent legitimate crawlers from sending invalid data.
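A robots.txt file that keeps well-behaved crawlers away from measurement endpoints can be as simple as a blanket disallow; a minimal example (illustrative, not necessarily Moat's exact file):

```
User-agent: *
Disallow: /
```

Legitimate crawlers honor this directive and never request the pixels, so they cannot inject non-human events into the logs in the first place.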

Because we report on the number of ad impressions served and rendered, we do not filter impressions generated through auto-refreshes. We recommend publishers provide an auto-refresh identifier in order to enable auto-refresh-level reporting within a dashboard. Moat account managers can help clients identify pages or sites where metrics are below benchmarks due to auto-refreshes.

Impressions that occur when a page is not focused (e.g., in a background tab or minimized) are not counted as in-view unless the user switches that window and tab into an active state, per IAB guidelines.

Data might be excluded if a pixel fire sends corrupted data. However, Moat data are sent redundantly in each message, so corruption or loss of a single pixel does not adversely affect measurement.

Data Reporting

Real Time
Moat reports metrics in real time as data are gathered.

Categorization
For in-view impressions, we report the number of impressions analyzed; the number and percentage of impressions for which viewability was measurable; and the number and percentage of measurable impressions that were found to be In-View.

Unless stated otherwise, all metrics are presented net of both GIVT and SIVT (that is, with GIVT or SIVT removed).

The GIVT and SIVT rates, and the metrics associated with them, are presented as percentages of total unfiltered impressions. Some impressions may be flagged for multiple Invalid Traffic categories (for example, both "data center" and "spider") but are reported only once. If Moat flags an impression for both GIVT and SIVT, it is counted as GIVT.
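The deduplication rule above — count each impression once, with GIVT taking precedence over SIVT — can be sketched as follows. The category names are illustrative (drawn from the categories listed earlier), not an exhaustive or official taxonomy:

```javascript
// Hypothetical sketch of bucketing an impression's IVT flags per the
// reporting rule: one bucket per impression, GIVT wins over SIVT.
const GIVT = new Set(["data-center", "spider", "excessive-activity"]);

function ivtBucket(flags) {
  if (flags.length === 0) return "valid";
  // Any GIVT flag takes precedence, even if SIVT flags are also present.
  return flags.some(f => GIVT.has(f)) ? "GIVT" : "SIVT";
}
```

So an impression flagged as both "data center" and "hidden ads" contributes once, to the GIVT count, rather than inflating both rates.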

In addition to our accredited methods, we use further SIVT filtration techniques to enhance the accuracy of our reports. Moat is applying for accreditation of these techniques.

Data are categorized according to identifiers passed to the Moat tag when the impression is measured, such as advertiser, campaign, line item, creative, site, and placement.

Time Spans
We provide the data for date ranges chosen by the client.

Revisions
Situations requiring a reissuance or reprocessing of data are exceptionally rare. In such cases, Moat will notify the client prior to changing the data and will discuss the estimated impact.

We sunset raw event data after 550 days and keep the aggregate reporting for any revisions for the full raw-event-data retention period.

Quality Assurance and Checks
For new integrations, Moat will vet the incoming data stream for correct implementation and accuracy. Before activating an account, we check to confirm the data appear to be accurate and within reasonable ranges. We have automated checks in place to make sure tags function correctly.

After an account is activated and data made live on a client's dashboard, we run periodic checks of the data to verify ongoing accuracy.

Measurement Limitations

There are a number of instances in which we may report an impression as unmeasurable. We also have techniques to mitigate such circumstances, as described below.

Cross-domain (a.k.a. hostile) iFrames limit what our tag can measure. If a Moat tag loads in a hostile iFrame, our geometric approach — which measures pixel percentages and relative placements to report viewability — will not work unless further techniques are available as described below.
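The geometric approach mentioned above reduces to rectangle intersection: the fraction of the ad's pixels that fall inside the viewport. A minimal sketch (illustrative only; real measurement must also account for occlusion, focus, and continuous time in view):

```javascript
// Hypothetical sketch of the geometric approach: fraction of the ad's
// area inside the viewport, given rectangles {left, top, right, bottom}.
function inViewFraction(ad, viewport) {
  const w = Math.max(0, Math.min(ad.right, viewport.right) - Math.max(ad.left, viewport.left));
  const h = Math.max(0, Math.min(ad.bottom, viewport.bottom) - Math.max(ad.top, viewport.top));
  const adArea = (ad.right - ad.left) * (ad.bottom - ad.top);
  return adArea > 0 ? (w * h) / adArea : 0;
}
```

An ad half scrolled out of view yields 0.5; under the display standard, the impression qualifies as in view only while this fraction stays at or above 0.5.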

There are circumstances in which we can provide measurement despite hostile iFrames. If an additional Moat tag is present on the page outside the iFrame, the page implements the SafeFrame API, the browser has Flash available, or the browser supports the Intersection Observer API (or similar browser-specific APIs), the Moat tag can use alternative approaches to determine viewability with the same accuracy as the geometric approach. If none of these alternatives is available, the impression will be reported as unmeasurable.

Scroll rate is not measurable if the Moat tag loads inside a hostile iFrame.

If the user has enabled ad-blocking technology or has disabled JavaScript, Moat will not measure the impression.

If an ad is an image and the user has disabled image rendering, Moat will not measure the impression.

If an integration partner has not gone through Moat’s vetting process or IAB Tech Lab certification, traffic measured via the OM SDK will be considered unmeasurable for viewability and all viewability-related metrics will be recorded as zero.

Autoplay Video

We use the IAB Video Ad Impression Guidelines to estimate the percentage of video impressions that play as a result of auto-play. Moat recommends clients provide an auto-play identifier in order to provide auto-play-level reporting within their dashboards.

Incentivized Viewing

Moat reports on Incentivized Viewing Rate, which represents the percentage of unfiltered impressions resulting from referrals via domains offering incentivized viewing services, e.g., giving rewards to users who view an ad. This metric can be a useful tool for diagnosing traffic sources and for initiating discussions between publishers and advertisers, with data to support actionable steps to improve traffic quality.

Third Party Subprocesses

Moat utilizes the services of Akamai Technologies, Inc. to deliver the Moat JavaScript, VAST tags, and other measurement assets, and to serve measurement pixels. Akamai does not interact with the impression transaction in any other way.