How to Approach Mobile Data Management

Don’t let the rise in mobile device usage and “mobile first” approaches lead to yet another silo to overcome down the road

With customer behavior and technology changing so quickly, organizations can get used to simply adding a new tool or swapping one tool for the next, without considering what’s going on underneath. What’s underneath is data. That data represents your organization’s view of your customer. And when you switch from one tool to another, or adopt a new tool, it’s easy to overlook the role that data plays (or could play) in these transitions while focusing on all the cool, shiny new features you get.

Mobile is One Type of Data Amongst Many: The Need for a Comprehensive Data Strategy

The new “tool” producing data in this case is mobile. These days, data from web and transaction tools is old news, data from mobile tools is emerging, and data from IoT/connected devices is on the horizon. And that doesn’t even count tools producing product data, call center or customer service data, contextual data (like weather or anonymous web browsing data), or the many other types of data that organizations continue to collect and leverage. The one clear trend here is technology change, and that change is mainly additive.

Figure 1: Example of mobile data in a unified data supply chain

When new data types emerge, old types of data typically aren’t replaced; they’re supplemented. Couple this with elevated consumer expectations for consistent and relevant experiences, and it’s clear that the crux of the challenge is flexibility with data. An organization’s approach to data management, very much including mobile data, must therefore be built on flexibility in order to achieve a comprehensive foundation.

Critical Consideration: Separating Data and Execution

A core component of a flexible data management approach is the separation of data and execution. Don’t manage data in the same place where you manage sending email, serving website content, and so on. Why? Because data that lives inside a particular execution tool inherits that tool’s limits and becomes yet another silo to overcome when that tool is eventually replaced.

The Pillars of a Data Management Foundation Incorporating Mobile: Flexible and Comprehensive

Rapidly evolving technology and customer behavior mandate a data foundation that is flexible and comprehensive. But what does it mean, specifically, for a data foundation to be flexible and comprehensive?

Data Flexibility

Starts with Flexible Data Collection:

Flexibility with data collection directly leads to creating a comprehensive data foundation. The more data types that can be collected and managed together, the more that data can be used in creative ways to produce desirable results. Specifically, the following data collection capabilities lead to a flexible foundation:

Client-side and Server-side Collection – Many mobile technologies and platforms rely on server-side data practices, whereas legacy web data collection relies on client-side practices. A flexible data platform should be able to collect data via JavaScript tags (client-side) AND via SDKs and server-side APIs. A minimal sketch of the two paths follows.
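To make the distinction concrete, here is a minimal sketch of both collection paths in TypeScript. The endpoint URL, event shape and function names are hypothetical stand-ins for whatever a given data platform actually exposes.

    // Hypothetical collection endpoint and event shape, for illustration only.
    const COLLECT_URL = "https://collect.example.com/events";

    interface CollectedEvent {
      eventName: string;
      userId?: string;
      properties: Record<string, string | number | boolean>;
    }

    // Client-side: a JavaScript tag running in the browser sends the event directly.
    export function trackFromBrowser(event: CollectedEvent): void {
      navigator.sendBeacon(COLLECT_URL, JSON.stringify(event));
    }

    // Server-side: an app backend (or a service behind a mobile SDK) posts the
    // same event shape over HTTP, with no browser involved.
    export async function trackFromServer(event: CollectedEvent): Promise<void> {
      await fetch(COLLECT_URL, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(event),
      });
    }

Either path lands the same event shape in the same place, which is what keeps the data from splitting into per-channel silos.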

Omnichannel and Offline Data – There is also data that exists offline or that can only be ingested via a CSV upload. To guarantee all data types can be leveraged, data platforms should be able to ingest offline data sources, as in the brief sketch below.
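As a rough illustration, and assuming a hypothetical CSV layout with user_id, event_name and amount columns, offline ingestion can be as simple as mapping uploaded rows into the same event shape used for online data:

    // Minimal sketch: turn an uploaded CSV of offline transactions into events
    // that can sit alongside web and mobile data. Column names are hypothetical.
    interface OfflineEvent {
      userId: string;
      eventName: string;
      amount: number;
    }

    export function parseOfflineCsv(csv: string): OfflineEvent[] {
      const [headerLine, ...rows] = csv.trim().split("\n");
      const headers = headerLine.split(",").map((h) => h.trim());

      return rows.map((row) => {
        const cells = row.split(",").map((c) => c.trim());
        const record: Record<string, string> = {};
        headers.forEach((h, i) => {
          record[h] = cells[i];
        });
        return {
          userId: record["user_id"],
          eventName: record["event_name"],
          amount: Number(record["amount"]),
        };
      });
    }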

Data Management Capabilities Needed:

It’s not only about getting the data; it’s also about what you can do with that data. Ultimately, data handling capabilities will limit or empower an organization’s ability to leverage data towards strategic ends. The following capabilities enable organizations to leverage data to its fullest extent:

Data Transformation/Enrichment at Point of Collection – The ability to change and augment data up front (while collecting it) directly leads to the rapid use of data towards sophisticated strategies. Post-processing data is expensive and slow, so transformation at collection saves cost and speeds the use of data. Additionally, the ability to change, merge and augment data by renaming it, combining it or performing calculations on it enables that data to power sophisticated actions. Further examples of advanced data handling capabilities are cited below; a simple transformation sketch follows directly.
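As an illustration only, with field names invented for the example, a transform applied at collection time might rename a field, calculate a value and add context before the event moves on:

    // Illustrative transform applied at the point of collection, before the
    // event is stored or forwarded. Field names are hypothetical.
    interface RawEvent {
      evt: string;                      // shorthand event name from the source
      price: number;
      qty: number;
      props: Record<string, unknown>;
    }

    interface EnrichedEvent {
      eventName: string;                // renamed field
      orderValue: number;               // calculated field (price * qty)
      channel: string;                  // augmented field added at collection
      props: Record<string, unknown>;
    }

    export function enrichAtCollection(raw: RawEvent, channel: string): EnrichedEvent {
      return {
        eventName: raw.evt,
        orderValue: raw.price * raw.qty,
        channel,
        props: raw.props,
      };
    }

Because the enrichment happens while the event is collected, every downstream destination receives the cleaned-up version without a separate post-processing job.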

Data Streaming – Last, but not least, among data management capabilities is the ability to stream all data or any subset of that data to various execution systems in real time. Those execution systems could use the data to power customer engagement actions (emails, ads, personalization, etc.) or for further analysis or visualization of the data (analytics systems, data lakes, data warehouses, etc.). The more data destinations (just like data sources), the more valuable the data supply chain.
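Conceptually, streaming is a real-time fan-out: every incoming event is offered to each configured destination, and each destination receives only the subset it has subscribed to. The destination interface below is an assumption for the sketch, not any particular vendor’s API.

    // Sketch of streaming: fan an incoming event out, in real time, to every
    // destination whose filter matches.
    interface StreamEvent {
      eventName: string;
      userId: string;
      properties: Record<string, unknown>;
    }

    interface Destination {
      name: string;                                   // e.g. an email tool or a data warehouse
      accepts: (event: StreamEvent) => boolean;       // which subset of data it receives
      deliver: (event: StreamEvent) => Promise<void>; // how the data is sent
    }

    export async function streamEvent(
      event: StreamEvent,
      destinations: Destination[],
    ): Promise<void> {
      await Promise.all(
        destinations.filter((d) => d.accepts(event)).map((d) => d.deliver(event)),
      );
    }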

Visitor Stitching – Visitor stitching is core to robust identity resolution: it provides the technological basis for organizations to identify a person across multiple data sets and devices, and to take action on that single view of the customer. But there’s a broad spectrum of “identity resolution” functionality, from the simple combining of customer records into one record to black-box “intelligent” technologies that promise to do all the work for you while limiting your ability to customize. While “intelligent” technologies may sound appealing for the sake of simplicity, the actual level of intelligence and the lack of customizability must be taken into account. When executed robustly, identity resolution can be the key to providing personalized and consistent experiences to customers at scale. However, different business models commonly require different identity resolution strategies, so to maximize the value of data over time, an organization’s data platform should be flexible enough to let administrators bring their own identity resolution strategy to bear on their data. A simplified sketch of stitching follows.
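The sketch below shows the core idea in deliberately simplified form: device-level records that share any identifier are merged into a single profile. It skips the harder parts of real identity resolution (transitive links, conflict rules, probabilistic matching), which is exactly where the customizability discussed above matters.

    // Simplified visitor stitching: merge records that share any identifier
    // (email, device ID, etc.) into one customer view. Records that only become
    // linked transitively, via a later record, are not handled here.
    interface DeviceRecord {
      identifiers: Record<string, string>;   // e.g. { email: "...", deviceId: "..." }
      attributes: Record<string, unknown>;
    }

    export function stitch(records: DeviceRecord[]): DeviceRecord[] {
      const profiles: DeviceRecord[] = [];

      for (const record of records) {
        // Find an existing profile that shares at least one identifier value.
        const match = profiles.find((profile) =>
          Object.entries(record.identifiers).some(
            ([key, value]) => profile.identifiers[key] === value,
          ),
        );

        if (match) {
          Object.assign(match.identifiers, record.identifiers);
          Object.assign(match.attributes, record.attributes);
        } else {
          profiles.push({
            identifiers: { ...record.identifiers },
            attributes: { ...record.attributes },
          });
        }
      }

      return profiles;
    }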

Figure 2: Example of visitor stitching

Comprehensive Data

Relating Data Collection to Quality and Action: Comprehensive data collection is synonymous with having the flexibility to collect data as dictated by customer behavior and available technology. See details of data collection flexibility above.

With such diverse data flexibility, tools that provide visibility into incoming data quality become critically important, because the value of data is tied to the ability to use it. Data can only be used with maximum effectiveness when it’s in the right format and reconciled between systems. Building and maintaining a comprehensive data set requires that data meet certain specifications so it is immediately available for action. As such, data quality is critical to a truly comprehensive data foundation.

The capability to manage data specifications in support of data quality allows organizations to define incoming data and easily spot undefined, invalid or missing attributes. Monitoring the quality of data flowing into the system helps ensure a comprehensive dataset. A minimal validation sketch follows.
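A data specification check can be sketched as follows; the spec format and the notion of “required” attributes are assumptions for illustration, not any particular product’s schema.

    // Sketch of a data specification check: flag missing, invalid, or undefined
    // attributes on an incoming event before it enters the comprehensive dataset.
    type AttributeType = "string" | "number" | "boolean";

    interface DataSpec {
      required: Record<string, AttributeType>; // attributes every event must carry
      allowUndefinedAttributes: boolean;       // whether unspecified attributes are tolerated
    }

    export function validate(event: Record<string, unknown>, spec: DataSpec): string[] {
      const issues: string[] = [];

      for (const [name, type] of Object.entries(spec.required)) {
        if (!(name in event)) {
          issues.push(`missing attribute: ${name}`);
        } else if (typeof event[name] !== type) {
          issues.push(`invalid type for ${name}: expected ${type}`);
        }
      }

      if (!spec.allowUndefinedAttributes) {
        for (const name of Object.keys(event)) {
          if (!(name in spec.required)) {
            issues.push(`undefined attribute: ${name}`);
          }
        }
      }

      return issues;
    }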

 

Event-level and Audience-level Data: An organization’s data foundation ultimately should be a blend of event-level, granular building block data (such as clickstream data from a mobile app), and audience-level, processed data (such as an app session or user attribute).

Event-level building block data, and even audience-level data, can be combined or manipulated in different ways to provide sophisticated insights. For example, viewing certain screens in an app (event-level data) could define user affinities (audience-level data). The user affinities could then be used to inform message personalization or ad targeting.
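In code, that roll-up from event-level screen views to an audience-level affinity attribute might look like the following sketch, with screen names and the view threshold invented for illustration:

    // Derive a per-user "affinity" list (audience-level data) from raw screen
    // view events (event-level data). The threshold of 3 views is arbitrary.
    interface ScreenViewEvent {
      userId: string;
      screenName: string;   // e.g. "running-shoes", "trail-gear"
    }

    export function deriveAffinities(
      events: ScreenViewEvent[],
      minViews = 3,
    ): Map<string, string[]> {
      const viewCounts = new Map<string, Map<string, number>>();

      for (const { userId, screenName } of events) {
        const perUser = viewCounts.get(userId) ?? new Map<string, number>();
        perUser.set(screenName, (perUser.get(screenName) ?? 0) + 1);
        viewCounts.set(userId, perUser);
      }

      // A user has an affinity for any screen viewed at least minViews times.
      const affinities = new Map<string, string[]>();
      for (const [userId, perUser] of viewCounts) {
        affinities.set(
          userId,
          [...perUser.entries()]
            .filter(([, views]) => views >= minViews)
            .map(([screen]) => screen),
        );
      }
      return affinities;
    }

Those derived affinities could then be streamed to a personalization or ad platform through the same kinds of destinations described earlier.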

 

Integrations: A dataset is only as valuable as it is comprehensive and actionable. The number of integrations available to a data platform directly dictates how extensive the data set will be. By extension, the more integrations to which data can be delivered, the more valuable that comprehensive data set becomes. Not all integrations for a data platform go in both directions, so it’s crucial to evaluate integrations both for collecting data and for delivering data.

Key Challenge: Matching Data Processes with Data Capabilities

It’s not all about the data itself, though. Given the complexity of a comprehensive data supply chain, it’s essential to match technological data management capabilities with equally nuanced data management processes. The ability to, first, deliver the data powering experiences and, then, analyze the results largely depends on how that data is collected and transformed up front. However, it’s often the case that three or four different individuals or groups, ranging from developers and marketers to business intelligence analysts, have needs and responsibilities along some subset of that data supply chain.

Typically, developers define the upfront scope for building applications with an eye on creating elegant functionality. Marketers take up the baton from there, leveraging that functionality to achieve results with an eye on conversion and tracking. And business intelligence analysts close the loop by making sense of the results and tying insights to action, starting the cycle again.

Data governance technology that underlies this process and allows all teams to work from a standard dataset greatly enhances an organization’s flexibility and its ability to deliver customer experiences.