Moat scans and analyzes billions of advertising impressions daily to provide analytics to publishers and advertisers, and to those working on their behalf.
Moat Analytics helps advertisers and publishers confirm that ads are shown to real people in brand-safe environments, and it measures the attention paid to those ads.

In this topic:
- Primary Users
- Use Cases
- How Moat Analytics Works
Primary Users
Moat Analytics is primarily used by:
- Advertisers (which may also be referred to as "brands") and their agencies.
- Publishers and platforms on which advertising messages are run.
- Technology partners that work with advertisers and publishers, such as demand-side platforms (DSPs) and supply-side platforms (SSPs).
Pre-Requisite(s)
Oracle conducts periodic reviews, known as Business Partner Qualification ("BPQ") reviews, of companies purchasing Moat Analytics, Moat Reach, and Context products. The Oracle Advertising Privacy & Compliance team reviews these companies against Oracle and industry standards (e.g., Media Ratings Council "MRC" accreditation guidelines). Any new customer purchasing the aforementioned Moat or Context products must successfully complete the BPQ review before contract execution. Existing customers are audited randomly and re-evaluated as needed as part of the ongoing BPQ audit program.
Below are some of the many criteria Oracle evaluates while performing a BPQ review:
- Functional Crunchbase page.
- Functional LinkedIn page.
- Business listed on the Better Business Bureau.
- Alexa search results of the account's primary domain (sites linking to the domain are analyzed for fraudulent or suspicious traffic).
- Whether the account has been involved in any negative press, particularly related to ad trafficking fraud or spam and scams.
- Instances of plagiarized content.
- Pop-ups or instances detracting from an individual's experience with the domain.
- Number of ads found on the website (more than ten ads typically clutters the page and detracts from the user experience).
- Quality of the account's domain (e.g., Do the site links work? Do the pages load properly, and is scrolling through the content fluid?)
- Social media presence.
Use Cases
You can use Moat Analytics to help understand the environments in which advertisements are being served and to analyze the effectiveness of the ads. We focus on a few key areas:
- Validity: Is the ad shown to a real person?
- Viewability: How much of the ad is in view, and for how long?
- Brand Safety: Is the ad shown in a contextual environment perceived as safe for the brand?
- Attention: How is the ad seen and interacted with, and by whom?
Moat can be used to monitor and measure advertising for placements in:
- Desktop web environments.
- Mobile web and app environments.
- Video environments, including full-episode players and connected TV (CTV).
Moat Analytics helps:
- Verify that ads are shown as intended — on screen, to a real person, on brand-suitable websites and desired device types.
- Measure attention, such as how many people see an ad, how much of the ad is seen, for what period of time, and ways in which one can interact with the ad.
- Analyze the effectiveness of ads, including comparisons with industry benchmarks.
- Detect and avoid placements found to involve fraudulent or Invalid Traffic (IVT), whether considered General IVT (GIVT) or Sophisticated IVT (SIVT), as defined by industry bodies such as the IAB and MRC.
- Ensure advertisements appear only in so-called "brand-safe" or "brand-suitable" environments, enabling advertisers to block bidding on advertising impressions and avoid serving ads into locations that may be fraudulent or inappropriate.
- Optimize advertising with viewable, fraud-free and effective placements.
Moat detects, analyzes and filters impressions and handles data according to MRC guidelines. Moat is accredited by the MRC for Display and Video Ad Viewability Metrics and Sophisticated Invalid Traffic Detection and Filtration for desktop, mobile web, and mobile in-app impressions, and for video ad impressions within connected TV (CTV) and over-the-top (OTT) environments. More than 50 metrics have been included in the MRC accreditation.
Moat Analytics activates its measurement through display ad tags, VPAID tags for video, VAST tags for CTV, and direct integrations with many popular platforms — such as Facebook, Instagram, YouTube, Twitter, and LinkedIn.
Display Ad and Content Measurement
Video and Audio Measurement
VPAID, initially designed as a standard for interactive video ads, was adopted as the preferred method for accurate viewability measurement. However, with the release of VAST 4.1 in 2018, the IAB slated VPAID for deprecation in favor of OMID (Open Measurement Interface Definition). OM for web video was released in late 2020.
Connected TV Measurement
To measure Connected Television (CTV) and over-the-top (OTT) ad placements, Moat utilizes a pixel-based technology that allows us to capture impression, video start, and completion data, and to detect and report on Invalid Traffic. We can also detect and report on creative and ad-placement identifiers on CTV and OTT devices.
Impression Counting
The metrics we report are based on all impression activity, subject to the filtration procedures described in the sections below. In most integrations, a Moat tag is served with every ad impression. The Moat tag is designed to be cached by the browser for up to one hour to minimize network usage.
Moat waits for the ad to render on the page before counting an impression. (This is different from impression-counting systems that count an impression as soon as a request for the ad is received by the server.) Any difference between the Moat count of impressions and an ad server count usually represents impressions for which a request for an ad was initiated but the page was abandoned before the ad could be rendered.
Moat uses the industry-standard method of firing 1x1 image pixels for measurement and logging of data. To ensure each request is sent and received successfully, we include cache-busters, among them timestamps and random numbers. A measurement is typically logged upon the loading of the Moat tag, upon the rendering of the ad asset, or upon the confirmation of an in-view impression, and periodically throughout the duration of the session to update time-based metrics, including video quartile and completion metrics.
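The cache-busting technique described above can be sketched as follows. This is a minimal illustration, not Moat's implementation; the function name, parameter names, and the example pixel URL are all hypothetical.

```python
import random
import time
from urllib.parse import urlencode

def build_pixel_url(base: str, metrics: dict) -> str:
    """Append measurement parameters plus two cache-busters (a millisecond
    timestamp and a random number) so each 1x1 pixel request is unique
    and cannot be served from an intermediate cache."""
    params = dict(metrics)
    params["ts"] = int(time.time() * 1000)   # timestamp cache-buster
    params["cb"] = random.randint(0, 10**9)  # random-number cache-buster
    return base + "?" + urlencode(params)

# Hypothetical endpoint and event name, for illustration only.
url = build_pixel_url("https://px.example.com/p.gif", {"event": "in_view"})
```

In a real tag the returned URL would be assigned to the `src` of a 1x1 image element; the unique query string forces the browser to issue a fresh network request every time.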
Metrics tracked by our pixels include binary events (in-view impression, interaction, scroll, etc.); up-to-date values of time-based metrics (total in-view time, page dwell time, etc.); ad and session identifiers (campaign, line item, start time, etc.); and page and browser information (URL, referrer, user agent, etc.). Moat neither tracks any personally identifiable information (PII), nor employs cookies for measurement.
Moat processes and updates metrics in real time, providing the most recently available information through the Moat Dashboard.
Certain partner accounts have elected to enable throttling. In such cases, the Moat tag runs on every impression served but may send data to us for only a selected portion of impressions. We use the monthly volume of all campaigns under the account to determine the appropriate level of throttling so that reported metrics are not materially affected and confidence intervals remain tight.
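A volume-based throttling scheme of this kind can be sketched as below. The tier boundaries and sampling rates are invented for illustration (the source does not publish Moat's actual rates), and hashing the impression ID is one common way to make the sample deterministic and uniform.

```python
import hashlib

def throttle_rate(monthly_impressions: int) -> float:
    """Illustrative tiers only: higher-volume accounts can be sampled
    more aggressively while keeping a tight confidence interval."""
    if monthly_impressions < 1_000_000:
        return 1.0   # low volume: measure every impression
    if monthly_impressions < 100_000_000:
        return 0.5
    return 0.1

def should_send(impression_id: str, rate: float) -> bool:
    """Map the impression ID to a uniform value in [0, 1) via a hash,
    so the decision is reproducible for a given impression."""
    h = int(hashlib.sha256(impression_id.encode()).hexdigest(), 16)
    return (h % 10_000) / 10_000 < rate
```

Because the hash is uniform, the fraction of impressions that send data converges to the chosen rate, so aggregate metrics scale back up with little added variance.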
Moat reports on the data we collect, integrated with data from partners such as ad servers, audience measurement platforms, data-management platforms, DSPs, SSPs, and our own DMP and Contextual Intelligence services, as well as on benchmarks derived from industry-wide data. Our data are provided on dashboards in aggregate, for specific campaigns, and in visual representations such as heatmaps.
Viewability Standards and Methodology
Moat follows the IAB guidelines for counting in-view impressions:
- For display advertisements, 50% or more of the pixels of an ad must be visible onscreen for at least one second, or 30% or more for ads of 242,500 pixels or larger, such as the 970x250 and 300x1050 formats. The browser window must be active/in-focus, meaning that it is not minimized and not in a background tab. If the browser and another application are side-by-side, the page in the browser is still considered in-focus. Moat does not use "strong user interaction" as a proxy for viewability measurement.
- Moat tracks the ad itself when checking for viewability, not the container of the ad, except in rare instances in which a rich media ad consists of multiple individual assets. In such cases, Moat will track the ad container instead.
- For video, at least 50% of the pixels of the player must be visible on-screen, the page must be focused, and the ad must be playing for at least two continuous seconds.
To determine whether an ad meets the in-view time requirement, we check (or "poll") every 200ms, considering the ad in view if consecutive checks show it for the required time span. (While the MRC recommends polling at 100ms intervals for display ads, we have shown through empirical evidence that the 200ms methodology is equivalent in accuracy.)
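The display rules and polling approach above can be combined into a small simulation. This is a sketch under stated assumptions, not Moat's code: it treats "one continuous second" as five consecutive 200ms polls at or above the threshold, and the function and parameter names are hypothetical.

```python
POLL_MS = 200
LARGE_AD_PIXELS = 242_500  # e.g. 970x250 and 300x1050 formats

def required_pct(ad_width: int, ad_height: int) -> float:
    """50% of pixels for standard display ads, 30% for large formats."""
    return 0.3 if ad_width * ad_height >= LARGE_AD_PIXELS else 0.5

def in_view_impression(samples, ad_width, ad_height, page_focused=True):
    """samples: fraction of the ad's pixels on screen at each 200ms poll.
    Returns True once the threshold is met for one continuous second
    (assumed here to be five consecutive polls)."""
    if not page_focused:          # background tab / minimized: never in-view
        return False
    threshold = required_pct(ad_width, ad_height)
    needed = 1000 // POLL_MS      # consecutive polls spanning one second
    streak = 0
    for pct in samples:
        streak = streak + 1 if pct >= threshold else 0
        if streak >= needed:
            return True
    return False
```

For example, a 300x250 ad that is 60% on screen for five consecutive polls qualifies, while a 970x250 large-format ad qualifies at only 40% visibility because its threshold drops to 30%.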
Invalid Traffic and Traffic Filtration Methodology
Moat identifies and provides alerts about dozens of forms of Invalid Traffic (IVT), as described below. Moat employs techniques based on identifiers, activity, and patterns, using data in log files to identify and filter (exclude) invalid activity, including but not limited to known and suspected non-human activity and suspected invalid human activity. However, because user identification and intent cannot always be known or discerned by a publisher or advertiser, or by their respective agents, it is unlikely that all invalid activity can be identified and excluded from reporting results.
IVT analytics can be enabled for web display inventory, for video, mobile, and native advertising, and for content, by using a Moat tag. Moat IVT data are available through the Moat Analytics API and the dashboard UI.
For General Invalid Traffic (GIVT), we report impressions that were determined to be delivered to what is called a General IVT end point. Categories for GIVT include spiders, excessive activity, and data center traffic. Further definitions and explanations are given below. Any of these individual categorizations also count toward the overall IVT rate.
Data Center Traffic Rate. This is the percentage of measurable impressions that were determined to originate from a data center and are therefore considered in the GIVT category.
Moat has an extensive database of known non-human data-center IP addresses. We curate it through both our own first-party analysis and through partnerships. It is worth noting that traffic from such non-human sources is part of the fabric of the internet, as data centers are commonly used to host search engine crawlers and other automated services. These legitimate use cases are not human and are therefore designated as invalid end points for ad impressions.
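An IP-against-known-ranges lookup of this kind can be sketched with the standard `ipaddress` module. The two networks below are documentation-reserved example ranges, standing in for the large, continuously curated database the text describes.

```python
import ipaddress

# Hypothetical stand-ins for a curated data-center IP database.
DATA_CENTER_NETS = [
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_data_center(ip: str) -> bool:
    """Flag an impression's source IP if it falls in any known
    data-center network range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in DATA_CENTER_NETS)
```

Impressions whose source IP matches would be counted toward the Data Center Traffic Rate described above.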
When GIVT falls below our quarterly benchmarks, the impressions in question are most likely caused by legitimate non-human traffic rather than fraud. Clients can work with their partners to avoid paying for these impressions.
When Moat begins to detect data center traffic rates in excess of our quarterly benchmarks, it warrants a deeper look, as there is a higher likelihood of fraudulent activity. This can be revealed by looking at the sub-metrics of SIVT, as discussed below.
In general, Moat recommends not targeting high-IVT environments. For advertisers, the easiest way to do this is to employ Pre-Bid by Moat; for publishers, to use our Yield Intelligence offering.
Excessive Activity Rate. This is the percentage of impressions determined to have been delivered to users with overly high and therefore invalid levels of activity. Moat collects "hashed" or anonymized user data that track activity. If Moat deems a user's activity invalid, the user is blacklisted and impressions are categorized as "excessive activity."
Spider Rate. This is the percentage of impressions determined to have originated from known spiders according to the IAB/ABC International Spiders and Bots List.
For Sophisticated Invalid Traffic (SIVT), we report impressions that were determined to be delivered to an SIVT end point. Categories for SIVT include automated browser, hidden ads, incongruous browser, invalid proxy, invalid source, and session hijacked. These all count toward the overall IVT rate.
Automated Browser Rate. This is the percentage of measurable impressions that were determined to originate from an automated browser. Automated browsers are detected via "information leaks" in browser automation software that reveal when a browser is being driven by software rather than by a human user's mouse, keyboard, or touch screen.
Hidden Ads Rate. This is the percentage of impressions for which an ad was hidden from the user's view for the duration of the impression.
Incongruous Browser Rate. This is the percentage of measurable impressions that were determined to originate from a browser with an incongruous feature set. An incongruous browser is a browser that is typically run through a botnet and has been modified both to avoid alerting the user of a PC that their computer is part of a botnet and to avoid detection by standard ad-verification techniques. Incongruous browsers, by nature, are adulterated and therefore are not genuine versions of Chrome, Firefox, or other commonly used browsers that they might purport to be. We detect them by scanning for subtle "incongruities" in the operating environment that would rule out their authenticity.
Invalid Proxy Rate. This is the percentage of measurable impressions that were determined to use a proxy, excluding corporate proxies. Moat uses three methods to identify invalid proxies: using a list curated by Digital Envoy; checking whether the browser is sending an X-Forwarded-For header; or using Flash, when available, to ascertain the real IP of a user (a process described in this Wired article).
Invalid Source Rate. This is the percentage of unfiltered impressions that were served on a domain or app identified by Moat as invalid, meaning the inventory source exhibits invalid characteristics and the site or app has been flagged.
Session Hijacked Rate. This is the percentage of impressions triggered when a user's session has been forcibly redirected to another site, tab, or app store. This indicates illegitimate activity from a legitimate device and a real user, generating low-quality impressions. Examples might be when, unknown to a user, multiple browser windows open, or when unseen pop-under ads are served to a browser. Per MRC guidelines, this is categorized as SIVT.
Additionally, several tools provided by Moat Analytics (which are not necessarily specific to Invalid Traffic detection) can be used to further identify SIVT. For example, by default we report domains based on where an ad is served, even within multiple hostile iFrames. This reporting can be used to identify domain spoofing, ad tag hijacking, and creative hijacking. SIVT is filtered from all metrics that aren't explicitly unfiltered.
Moat works with several industry-leading partners, including Oracle's Contextual Intelligence (formerly Grapeshot), to extend its coverage of the SIVT spectrum and to identify and analyze environments not considered brand-safe.
There are further definitions of IVT (including GIVT and SIVT) and specifications of our methodologies in the Moat User Guide.
Further Notes and Safeguards
Moat analyzes all traffic on an impression-level basis. To determine whether an impression is invalid, we do not use probabilities, confidence thresholds, reputation scores, or cookies. We detect pre-fetched impressions and do not count them as in-view until a page becomes visible.
We update all lists used for IVT detection daily. Signatures of major browsers are re-evaluated with every version's release. Checks are in place to ensure all systems and data used for IVT analytics are the latest available. We also perform quarterly audits of all our IVT systems to ensure that our monitoring and processes remain up-to-date.
Moat vets all business partners to make sure they are legitimate operators and share a mutual interest in detecting, filtering, and reducing Invalid Traffic. We do not conduct business with actors found to use or promote IVT. We periodically reevaluate partners for IVT, and if high levels are detected will work with them to reestablish appropriate levels.
Moat excludes all our internal office traffic from reporting by filtering out our office IP addresses. We also use a robots.txt file on our pixel servers to prevent legitimate crawlers from sending invalid data.
Because we report on the number of ad impressions served and rendered, we do not filter impressions generated through auto-refreshes. We recommend publishers provide an auto-refresh identifier in order to enable auto-refresh-level reporting within a dashboard. Moat account managers can help clients identify pages or sites where metrics are below benchmarks due to auto-refreshes.
Impressions that occur when a page is not focused (e.g., in a background tab or minimized) are not counted as in-view unless the user brings that window and tab into an active state, per IAB guidelines.
Data might be excluded if a pixel fire sends corrupted data. However, Moat data are sent redundantly in each message, so corruption or loss of a single pixel does not adversely affect measurement.
Moat reports metrics in real time as data are gathered.
For in-view impressions, we report the number of impressions analyzed; the percentage of impressions for which viewability was measurable; the number and percentage of measurable impressions; and the number and percentage that were found to be In-View.
Unless stated otherwise, all metrics are presented net of both GIVT and SIVT (that is, with GIVT or SIVT removed).
The GIVT and SIVT rates, and the metrics associated with them, are presented as percentages of total unfiltered impressions. Some impressions may be flagged for multiple Invalid Traffic categories (for example, both "data center" and "spider") but will only be reported once. If Moat flags an impression as both GIVT and SIVT, it is counted as GIVT.
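The deduplication and precedence rule above (one category per impression, GIVT winning over SIVT) can be expressed directly. The category names below follow the GIVT and SIVT lists given earlier in this topic; the function itself is an illustrative sketch.

```python
# Category names taken from the GIVT/SIVT sections above.
GIVT = {"spider", "excessive_activity", "data_center"}
SIVT = {"automated_browser", "hidden_ads", "incongruous_browser",
        "invalid_proxy", "invalid_source", "session_hijacked"}

def classify(flags: set) -> str:
    """Report each impression in exactly one bucket: GIVT takes
    precedence when both GIVT and SIVT flags are present."""
    if flags & GIVT:
        return "GIVT"
    if flags & SIVT:
        return "SIVT"
    return "valid"
```

So an impression flagged as both "data center" and "hidden ads" contributes once, to the GIVT count, never to both.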
In addition to our accredited methods, we use additional SIVT filtration techniques to enhance the accuracy of our reports; Moat is applying for accreditation of these techniques.
Data are categorized according to identifiers passed to the Moat tag when the impression is measured, such as advertiser, campaign, line item, creative, site, and placement.
We provide the data for date ranges chosen by the client.
Situations requiring a reissuance or reprocessing of data are exceptionally rare. In such cases, Moat will notify the client prior to changing the data and will discuss the estimated impact.
We sunset raw event data after 550 days, and we keep the aggregate reporting derived from those data for the full raw-event retention period.
Quality Assurance and Checks
For new integrations, Moat will vet the incoming data stream for correct implementation and accuracy. Before activating an account, we check to confirm the data appear to be accurate and within reasonable ranges. We have automated checks in place to make sure tags function correctly.
After an account is activated and data made live on a client's dashboard, we run periodic checks of the data to verify ongoing accuracy.
There are a number of instances in which we may report an impression as unmeasurable. We also have techniques to mitigate such circumstances, as described below.
Cross-domain (a.k.a. hostile) iFrames limit what our tag can measure. If a Moat tag loads in a hostile iFrame, our geometric approach — which measures pixel percentages and relative placements to report viewability — will not work unless further techniques are available as described below.
There are circumstances in which we can provide measurement despite hostile iFrames. If an additional Moat tag is present on the page outside the iFrame; the page implements the SafeFrame API; the browser has Flash available; or the browser supports the Intersection Observer API (or similar browser-specific APIs), the Moat tag can use alternative approaches to determine viewability with the same accuracy as the geometric approach. If none of these alternatives is available, the impression is reported as unmeasurable.
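The geometric approach mentioned above, measuring what fraction of the ad's pixels fall inside the viewport, reduces to a rectangle-intersection calculation. The sketch below is illustrative; rectangle representation and function name are assumptions.

```python
def visible_fraction(ad, viewport):
    """ad and viewport are (left, top, right, bottom) rectangles in page
    coordinates. Returns the fraction of the ad's area inside the viewport."""
    left = max(ad[0], viewport[0])
    top = max(ad[1], viewport[1])
    right = min(ad[2], viewport[2])
    bottom = min(ad[3], viewport[3])
    if right <= left or bottom <= top:
        return 0.0  # no overlap: the ad is entirely off-screen
    ad_area = (ad[2] - ad[0]) * (ad[3] - ad[1])
    return ((right - left) * (bottom - top)) / ad_area
```

For a 300x250 ad whose bottom half is scrolled below a 1024x768 viewport, this returns 0.5, exactly the boundary of the 50% display threshold described earlier.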
Scroll rate is not measurable if the Moat tag loads inside a hostile iFrame.
If an ad is an image and the user has disabled image rendering, Moat will not measure the impression.
If an integration partner has not gone through Moat’s vetting process or IAB Tech Lab certification, traffic measured via the OM SDK will be considered unmeasurable for viewability and all viewability-related metrics will be recorded as zero.
We use the IAB Video Ad Impression Guidelines to estimate the percentage of video impressions that play as a result of auto-play. Moat recommends clients provide an auto-play identifier in order to provide auto-play-level reporting within their dashboards.
Moat reports on Incentivized Viewing Rate, which represents the percentage of unfiltered impressions resulting from referrals via domains offering incentivized viewing services, e.g., giving rewards for users who view an ad. This metric can be a useful tool for diagnosing traffic sources and initiating discussions between publishers and advertisers with data to support actionable steps to improve traffic quality.
Third Party Subprocesses