The Buyer Intelligence Advantage
How does buyer intelligence enhance Playbooks?
What Is Buyer Intelligence?
Buyer Intelligence is a powerful, data-driven tool that enhances your Playbooks experience. Many services use collective data to forecast behavior; for example, map applications combine reports from many drivers to display live road conditions. Predicting buyer behavior is difficult because it depends on a variety of factors that are not immediately apparent from the information in an average CRM. Buyer Intelligence uses aggregated data to recommend the best strategy for engaging buyers.
Buyer Intelligence Consists of 3 Main Parts
- Minor contributions from many individuals
- A system that enables mass collaboration to solve complex problems
- Optimized benefits for all users
How Do We Use Buyer Intelligence?
We use Buyer Intelligence to show our customers which tasks are likely to provide the maximum return on their effort. We compile the results of activities that occur within our platform and apply machine learning to develop state-of-the-art statistical models. We continually enhance these models and use them to provide our customers with the most helpful information available.
For example, a user may make a phone call to a number that is disconnected. We store the phone number and the outcome to warn other users not to invest time into dialing that disconnected number.
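The disconnected-number example above can be sketched as a simple outcome store. This is an illustrative sketch, not XANT's actual implementation; the class, method names, and three-attempt threshold are assumptions for the example.

```python
from collections import defaultdict

class DialOutcomeStore:
    """Hypothetical store that aggregates dial outcomes per phone number
    so later callers can be warned about disconnected lines."""

    def __init__(self):
        self._outcomes = defaultdict(list)  # phone number -> outcome history

    def record(self, phone, outcome):
        """Store the result of a dial attempt (e.g. 'connected', 'disconnected')."""
        self._outcomes[phone].append(outcome)

    def is_likely_disconnected(self, phone):
        """Warn if the most recent attempts (up to three) all came back disconnected."""
        recent = self._outcomes[phone][-3:]
        return len(recent) > 0 and all(o == "disconnected" for o in recent)

store = DialOutcomeStore()
store.record("+1-555-0100", "disconnected")
store.record("+1-555-0100", "disconnected")
store.record("+1-555-0100", "disconnected")
print(store.is_likely_disconnected("+1-555-0100"))  # True
print(store.is_likely_disconnected("+1-555-0199"))  # False: no bad signal yet
```

Because the signal is keyed only to the phone number and outcome, it can inform other users without exposing who placed the original call.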
We use Buyer Intelligence to build solutions for every step of the customer revenue cycle (see chart below for customer revenue cycle details). Currently, we have several live solutions for the “Prospect to Lead”, “Lead to Opportunity”, and “Opportunity to Close” stages that are powered by Buyer Intelligence.
The customer revenue cycle can be represented by this five-step process:
Where Do We Get Our Data?
We acquire information from CRM data, product usage, and strategic partners. We organize data assets—entities comprised of data, usually files, databases, documents, or constituent services that process data—into six base datasets. Though these datasets are not Buyer Intelligence themselves, they provide the raw materials from which we derive Buyer Intelligence insights.
| Dataset | Contents | Benefit |
|---|---|---|
| CRM Replica | Core CRM objects as defined by Access User | Enables predictive scoring |
| Product Events | All product activities | Produces benchmarks |
| Voice | All calls, subject to user permission | Enables compliance and coaching |
| Gamification | Leaderboards | Shows which motivation is working |
| Contextual | Vendor, partner, and self-sourced data feeds | Chain of custody on 40 million contacts |
CRM Snapshots (Replicas)
Each time an activity occurs on our platform, we add or update the information in our data lake to match the current CRM state. For example, when an agent makes a phone call, our data lake services listen for changes on any of the CRM’s data objects. In this way, we can record dial results, status changes, contact attempts, and other important information.
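The listen-and-mirror behavior described above can be illustrated with a minimal change handler that upserts CRM object changes into a replica. This is a sketch under assumptions: the class name, object keys, and field names are invented for the example, not XANT's actual data-lake schema.

```python
class CrmReplica:
    """Illustrative replica keyed by CRM object type and record id,
    updated whenever a change event arrives from the CRM."""

    def __init__(self):
        self.tables = {}  # object type -> {record id: current field values}

    def on_change(self, obj_type, record_id, fields):
        """Upsert a changed CRM record so the replica matches current CRM state."""
        table = self.tables.setdefault(obj_type, {})
        table.setdefault(record_id, {}).update(fields)

replica = CrmReplica()
# An agent dials; the CRM Task object changes and the listener fires twice.
replica.on_change("Task", "00T1", {"Status": "Open", "CallDisposition": None})
replica.on_change("Task", "00T1", {"Status": "Completed", "CallDisposition": "No Answer"})
print(replica.tables["Task"]["00T1"]["Status"])  # Completed
```

The upsert semantics matter: later events overwrite only the fields they carry, so the replica converges on the CRM's current state rather than accumulating stale copies.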
Standard CRM Objects
| Object | Used By | Enables |
|---|---|---|
| Accounts | PB Sync, Models, Reports | Task operation, Prioritization, Untapped Value, Buyer Map |
| Contacts | PB Sync, Models, Reports | Task operation, Prioritization, Untapped Value, Buyer Map |
| Leads | PB Sync, Models, Reports | Task operation, Prioritization, Untapped Value, Buyer Map |
| Opportunities | PB Sync, Models, Reports | Task operation, Prioritization, Task Value |
| Tasks | PB Sync, Models, Reports | Task operation, Prioritization, Task Value, Buyer Map |
| Users | Vendor, partner, and self-sourced data feeds | Hierarchy |
| UserRoles | Vendor, partner, and self-sourced data feeds | Hierarchy |
We record platform usage events, such as telephony, messaging (including email and text), and engagement applications in our data lake. For example, we log calls that originate on our platform with their telephony metadata such as call duration, dispositions, and Session Initiation Protocol (SIP) codes. We are also able to collect information about Play cadences and messages sent via SMS and LinkedIn.
Buyer Intelligence is generated from real human interactions that take place on our platform. For instance, when a user places a phone call or sends an email, we record a success or failure, which is used to generate a signal that informs other users. Important metadata are compiled and anonymized so they cannot be traced back to any single user or company. These metadata contribute to insights that guide users.
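The anonymization step described above can be sketched as a salted one-way hash over identifiers, keeping only the fields useful in aggregate. This is a hedged illustration, not XANT's actual pipeline; the salt, field names, and 16-character truncation are assumptions.

```python
import hashlib

# Illustrative salt; a real system would manage and rotate secrets properly.
SALT = b"example-deployment-salt"

def pseudonymize(identifier: str) -> str:
    """One-way hash so an event cannot be traced back to a user or company."""
    return hashlib.sha256(SALT + identifier.encode()).hexdigest()[:16]

def anonymize_event(event: dict) -> dict:
    """Keep only aggregate-useful metadata; replace identifiers with hashes."""
    return {
        "actor": pseudonymize(event["user_id"]),
        "org": pseudonymize(event["company_id"]),
        "channel": event["channel"],   # e.g. "phone" or "email"
        "outcome": event["outcome"],   # the success/failure signal
    }

raw = {"user_id": "u-42", "company_id": "c-7", "channel": "phone", "outcome": "success"}
clean = anonymize_event(raw)
print("u-42" in str(clean))  # False: raw identifiers never leave the boundary
```

Only the hashed, de-identified record crosses into the shared dataset, so the aggregated signals can guide other users without exposing their source.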
We follow a specific set of principles to ensure that customer data creates positive externalities and is appropriately anonymized without giving any party an undue advantage.
Principles of data use include:
When we look at data contribution by quartile, only the top 25% of companies contribute more than 1% of the overall data; most companies contribute less than 1%. This ensures that all parties benefit far more than they contribute to Buyer Intelligence datasets.
Average % of Total Data Contribution by Customer
Customers may host call recordings on their own infrastructure or enable us to store them in our data lake. Your users directly control these settings from the platform. They can stream, download, or delete call recordings, and use stored recordings for compliance or coaching purposes.
In order to incentivize maximum performance, we provide gamification and leaderboards that initiate friendly competition amongst agents. The dataset produced from this feature maps sales operations to outcomes, such as conversations, wins, and conversions. With this data mapping, we can improve Buyer Intelligence models.
We have entered data partnerships and contracted vendor datasets to provide data enrichment services. For example, we use data feeds around venture funding, news events, and economic indicators to increase the accuracy of lead scoring algorithms. Additionally, we source firmographic and demographic data through subscription and web crawling services. Finally, we have secured several B2B contact datasets to provide chain of custody on new contact recommendations.
Where Do We Store Our Data?
Our data is stored in a data lake. The data lake acts as a repository containing both structured and unstructured data. This data lake contains a replication of customer CRM data, product event data (e.g., calls and emails), voice data (e.g., call recordings), gamification data (e.g., results and KPIs), and partner data.
This data lake is architected according to rigorous data engineering protocols designed for raw data storage, data staging, and data mart aggregation. The system leverages industry-best AWS and Azure services. Additionally, de-identified metadata is collected from product and CRM events then transformed into analytic insights that serve as Buyer Intelligence for the benefit of our customers.
How Is Our Data Processed?
We use a federated pod architecture to operate production systems and provide services for our customers. We process data over public and private networks globally and provide a variety of protections to ensure privacy and data security against compromise, intrusion, and loss. We comply with the Access User permissions supplied by our customers when pulling data, while adhering to data handling and data processing protocols.
- General Data Processing: provides services such as dialer operation, reporting, and predictive modeling/scoring. This data is solely for the private use and benefit of the customer providing the information.
- Anonymized Data Processing: provides anonymized services such as benchmarks, scores, and recommendations. This de-identified metadata is sourced from a subset of customer data elements for product usage.
| Use Case | Processing Mode | Example |
|---|---|---|
| Operation | General | Phone numbers passed to telephony backbone to enable in-product calling |
| Reports | General | Basic visibility into activity levels and outcomes to produce reports |
| Scores | General | Enables in-product task prioritization |
| Verification | Anonymized | Phone call contact rates or best time to send an email |
| Benchmarks | Anonymized | Activity trends of other Playbooks users |
| Recommendations | Anonymized | Buyer Map functionality |
| Research & Development | General and Anonymized | Build and deliver new features |
We have built a robust data architecture that has been reviewed and audited by leading third-party data engineering experts. Improvements have been implemented to provide even stronger data security and governance, data processing and queuing, and scalable machine learning systems.
Our approach to predictive modeling incorporates statistics and data mining to forecast outcomes. Each model is made up of several predictors, which are variables that are likely to influence future results. Once historic data has been collected for relevant predictors, a statistical model is formulated. The model may employ any of several algorithms such as regression, neural network, random forest, etc.
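The predictor-based approach above can be illustrated with a minimal logistic combination of predictors. The predictor names, weights, and bias below are invented for the example; a real model would learn them from historical data using regression, a neural network, a random forest, or similar.

```python
import math

# Hypothetical learned parameters (illustrative values only).
WEIGHTS = {"title_seniority": 1.2, "recent_activity": 0.8, "firm_fit": 0.5}
BIAS = -1.0

def predict_engagement(predictors: dict) -> float:
    """Combine predictor values into a probability-like score via a logistic."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in predictors.items())
    return 1.0 / (1.0 + math.exp(-z))

lead = {"title_seniority": 1.0, "recent_activity": 1.0, "firm_fit": 0.0}
score = predict_engagement(lead)
print(round(score, 2))  # 0.73
```

The point of the sketch is the structure, not the numbers: each predictor contributes a weighted amount of evidence, and the model's output is a score that can rank records against each other.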
The overall process for continuous model improvement follows five basic stages:
- Defining the analytic framework for modeling
- Preparing training datasets
- Feature engineering
- Model training
- Deployment to production
This process includes many substeps consistent with data science practices.
Using our base data assets, we produce predictive models that can be used to effectively prioritize the most important sales activities and forecast their outcomes. As additional data becomes available, predictive models are validated or revised. We train models offline and deploy them into an online environment for near real-time scoring, or utilize them in a batch production scoring system. We perform regular audits to ensure that scores produced offline are implemented with end-to-end testing and validation.
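The offline-versus-online audit idea can be sketched as a parity check: score the same records with both implementations and flag any divergence beyond a tolerance. The scoring functions, tolerance, and record fields below are illustrative assumptions, not the actual audit procedure.

```python
def offline_score(record):
    """Reference score computed in the offline training environment (invented formula)."""
    return 0.6 * record["fit"] + 0.4 * record["behavior"]

def online_score(record):
    """Deployed scorer; should match the offline reference within tolerance."""
    return 0.6 * record["fit"] + 0.4 * record["behavior"]

def audit_parity(records, tol=1e-6):
    """Return ids of records where offline and online scores diverge."""
    return [r["id"] for r in records
            if abs(offline_score(r) - online_score(r)) > tol]

records = [{"id": "L1", "fit": 0.9, "behavior": 0.5},
           {"id": "L2", "fit": 0.2, "behavior": 0.8}]
print(audit_parity(records))  # [] -> the two implementations agree
```

An empty mismatch list is the pass condition; any ids returned would indicate drift between the trained model and its production implementation.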
The following chart explains several models we produce and the goals we aim to accomplish by creating them.
We use an artificial intelligence (AI) scoring system to operationalize predictive scoring, which primarily consists of an execution engine that applies a set of rules about fit and behavior. These rules and algorithms are used to score records on multiple CRM objects including leads, contacts, accounts, and opportunities. As part of our predictive offering for Accelerate customers, the following models are supported to solve business problems across the sales funnel. These scores can be used to prioritize sales activities.
| Model | Goal | Question Answered |
|---|---|---|
| Lead Contact | More conversations with prospects, Appointments | Will this lead engage by phone? |
| Contact Contact | More conversations with decision makers | Will this contact engage by phone? |
| Lead Convert | More pipeline conversions | Will this lead convert to pipeline? |
| Contact Close (Win) | More closed won deals based on decision maker fit | Will this contact buy? |
| Account Close (Win) | More closed won deals based on account fit | Will this account buy? |
| Task Value: Appointment | Set appointments with the highest expected return on effort | What is the next best activity to maximize quality appointments? |
| Task Value: Conversion | Convert leads with the highest expected return on effort | What is the next best activity on my lead list to maximize pipeline? |
| Task Value: Contact Close | Close decision makers with the highest expected return on effort | What is the next best activity with my contacts to maximize revenue? |
| Task Value: Account Close | Close accounts with the highest expected return on effort | What is the next best activity across my accounts to maximize revenue? |
| Opportunity Close (Win) | Close opportunities most likely to close/win | Will this opportunity ever close/win? |
| Opportunity Period Forecast | Forecast opportunities most likely to close in period | Will this opportunity close/win in quarter? Or this/next month? |
| Opportunity Revenue Forecast | Forecast opportunity revenue for the current period | What is the forecast revenue for the current period? |
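The execution-engine idea behind these scores can be sketched as a small set of fit and behavior rules applied to a CRM record. The specific rules, field names, and point values below are fabricated for illustration; the real engine's rules are learned and proprietary.

```python
# Each rule: (category, predicate over a record, points awarded when true).
RULES = [
    ("fit",      lambda r: r.get("industry") == "SaaS",         20),
    ("fit",      lambda r: r.get("employees", 0) > 100,         15),
    ("behavior", lambda r: r.get("emails_opened", 0) >= 3,      25),
    ("behavior", lambda r: r.get("last_call_connected", False), 30),
]

def score_record(record: dict) -> int:
    """Sum the points of every rule the record satisfies."""
    return sum(points for _category, predicate, points in RULES if predicate(record))

lead = {"industry": "SaaS", "employees": 250,
        "emails_opened": 4, "last_call_connected": False}
print(score_record(lead))  # 60
```

Records scored this way can be sorted so that reps work the highest-scoring leads, contacts, accounts, or opportunities first.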
Intelligence datasets use AI to derive metadata from base datasets through usage of our platform. These metadata are aggregated, de-identified, and curated through anonymous data processing; their use in product cannot be traced to any single user or company. Our intelligence datasets offer our customers insights into contacting, qualifying, and selling, and include Verification, Profiles, Score History, Benchmarks, Insights, and Contact Strategy, as summarized below.
When a phone or email interaction occurs on our platform, the metadata from that interaction (e.g. call time, call duration, SIP messages) is propagated to an anonymized database in near real-time. The system contains more than a billion phone and email events that our telephony and email systems have facilitated. Contact details are scored based on the ratio of prior attempts and responses.
Specific kinds of metadata are captured by the verification system, such as the time when a conversation last occurred. If a conversation happened recently (e.g. within the last 90 days), then that phone number or email address can be considered verified. In cases where many conversations have recently occurred, then that phone or email may be considered highly verified. Bad states, such as bounces and disconnects, are tracked through the verification system as well.
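The recency logic above can be sketched directly: a contact detail is "verified" if a conversation occurred within the window, and "highly verified" if several did. The 90-day window and three-conversation threshold mirror the description; the function name and exact thresholds are otherwise illustrative.

```python
from datetime import date, timedelta

def verification_status(conversation_dates, today=None, window_days=90, high_count=3):
    """Classify a phone number or email address by conversation recency."""
    today = today or date.today()
    cutoff = today - timedelta(days=window_days)
    recent = [d for d in conversation_dates if d >= cutoff]
    if len(recent) >= high_count:
        return "highly verified"
    if recent:
        return "verified"
    return "unverified"

today = date(2024, 6, 1)
print(verification_status([date(2024, 5, 20)], today=today))               # verified
print(verification_status([date(2024, 5, 1), date(2024, 5, 10),
                           date(2024, 5, 20)], today=today))               # highly verified
print(verification_status([date(2023, 1, 1)], today=today))                # unverified
```

A bad-state signal (bounce or disconnect) would override this classification in practice; the sketch covers only the recency dimension.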
We act as a processor for a customer’s CRM data upon implementation. We ingest customer data and store it in our data lake via the Salesforce Access User policy that we provide to the customer. Once the data is onboarded, global profiles for persons and for the company itself are constructed or updated. These global profiles form a large index of unique individuals, which generates de-identified metadata as customers use our platform.
Specific kinds of profile intelligence include contact preferences (e.g. phone or email), approximate dates of last interaction, and various contact durations. Additionally, the profile dataset tracks overall activity levels, deal patterns, and title history. At the company level, profiles can be generated to determine firmographic networks. We have more than 200 million person profiles, which constitute over 75 million company profiles.
It is important to note that we do not take data from one CRM and put it in another; rather, insights are derived from customer usage. These insights are shown anonymously in product, ensuring that the principles of collective data use and anonymous data processing are upheld. Additionally, when products recommend, for example, a better phone number or email address, those data are licensed from a third party or otherwise procured following chain-of-custody logic. This means the contact information is owned by XANT.
The AI score data generated by predictive models is aggregated into a score history database that contains scores for all models on the XANT platform. These metadata can be used in a variety of ways including auditing, monitoring, and use in deriving collective insights. Additionally, we can combine score histories with global identities to create robust contact and buying propensities.
As usage and base data increase, various activities and sales variables can be benchmarked. Benchmarks help answer productivity and performance questions for customer engagement versus their industry peers. For instance, productivity benchmarks for calls, emails, talk time, and contact rates are captured. From this, opportunity stage duration and coverage ratios can be derived. These data can be used to generate an empirical “state of sales” and thus measure best practices and central tendencies of many different sales motions and personas.
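A productivity benchmark like those above reduces to a percentile-rank comparison against anonymized peer values. The peer contact rates below are fabricated for the example; only the computation is the point.

```python
def percentile_rank(value, peer_values):
    """Percentage of peer observations at or below `value`."""
    at_or_below = sum(1 for v in peer_values if v <= value)
    return 100.0 * at_or_below / len(peer_values)

# Invented peer contact rates drawn from an anonymized benchmark dataset.
peer_contact_rates = [0.08, 0.10, 0.12, 0.15, 0.18, 0.21, 0.25, 0.30]

print(percentile_rank(0.18, peer_contact_rates))  # 62.5
```

Because the peer values are aggregates with no identifying metadata, a customer can see where they stand in their industry without learning anything about any specific peer.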
How Do We Keep Data Safe?
Where regulation requires, we can process production data locally, otherwise known as localization. For example, a country-specific pod can be established to ensure production data does not leave a specified country or region. Most other processing is centralized for scoring, verification, and recommendation services. Central processing is compliant with the data handling protocols defined in this document. Additional costs may apply for local processing.
We also adhere to stringent data handling protocols described below.
- Data in Transit: Customer data originates from CRMs such as Salesforce, Microsoft Dynamics, SAP, and Infor. Data is pulled with standard API connectors as defined by CRM providers. Data is handled with strong authentication and industry standard encryption protocols (TLS 1.2) to ensure safe data transfers into the XANT private network.
- Data at Rest: Within XANT data stores, data is protected from multiple threats and mishandling. Protections include logical access controls and least privileged access principles. Regular security audits are performed as well as vulnerability scans on network and web hosting.
- Data in Use: To provide value added services to all of its customers, XANT collects metadata from use of its software applications. No identifiable data from one customer is ever accessible or shown in the product to other customers. Metadata is used to provide Buyer Intelligence in product as governed by contract.