CRM Systems: Leading Indicator Measurement Frameworks

Leading Indicator Measurements

A leading indicator measurement is a predictor of future financial performance. Many companies look to CRM systems to provide the right leading indicator outputs so that the business can adapt to changing conditions sooner. While most of the measurement frameworks discussed can serve as leading indicator frameworks, the two main paradigms here are either deliberately designed to be such (balanced scorecards) or have no other real historical analysis use (knowledge management).

Figure 5.

Figure 5 depicts the relationship between time and payoff for measurement frameworks. Financial accounting systems measure activities that have happened in the past (e.g., last quarter’s financial performance). Balanced scorecards and CRM measurement systems tend to measure activities occurring now that lead to, through the causal links identified, future financial performance. Measuring knowledge management is more speculative because the process of generating knowledge will impact activities not yet conceived.

Balanced scorecards

Introduced by Robert S. Kaplan and David P. Norton in 1992, balanced scorecards are in widespread use among Fortune 1000 companies. At the time, the authors were seeking to find a way to report on leading indicators of a business’s health rather than lagging indicators, which they felt conventional financial accounting measures were (Kaplan & Norton 2001). Exclusive reliance on financial measures was causing organizations to do the wrong things. The measures included in the balanced scorecard are derived from the company’s vision and strategy.

The balanced scorecard is broken down into four sections, called perspectives:

The financial perspective: The strategy for growth, profitability and risk from the shareholder’s perspective.
The customer perspective: The strategy for creating value and differentiation from the perspective of the customer.
The internal business perspective: The strategic priorities for various business processes that create customer and shareholder satisfaction.
The learning and growth perspective: The priorities to create a climate that supports organizational change, innovation and growth.

Within each section, companies identify key measures and discover and map the causal linkages between measures and overall company performance. Typically, learning and growth objectives have a causal relationship with the internal perspective (the internal processes and programs). In turn, the internal perspective has a cause-effect relationship with the financial perspective (for example, an internal manufacturing process, when changed, produces cost savings) and can have a cause-effect relationship with the customer perspective. Overall, value flows upward from the learning and growth perspective to the financial perspective. Figure 6 depicts an example of a balanced scorecard for a retail company.
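The upward causal chain can be sketched as a small data structure. This is a minimal illustration, not part of Kaplan and Norton's method; the measure names and linkages below are invented:

```python
# Hypothetical sketch of scorecard measures with causal links flowing
# upward from learning & growth to financial. All names are invented.
measures = {
    "employee_training_hours": {"perspective": "learning_growth",
                                "drives": ["order_error_rate"]},
    "order_error_rate":        {"perspective": "internal",
                                "drives": ["customer_satisfaction"]},
    "customer_satisfaction":   {"perspective": "customer",
                                "drives": ["revenue_growth"]},
    "revenue_growth":          {"perspective": "financial",
                                "drives": []},
}

def causal_chain(measure, seen=None):
    """Walk the hypothesized cause-effect links from one measure upward."""
    seen = seen if seen is not None else []
    seen.append(measure)
    for downstream in measures[measure]["drives"]:
        causal_chain(downstream, seen)
    return seen

print(causal_chain("employee_training_hours"))
# ['employee_training_hours', 'order_error_rate',
#  'customer_satisfaction', 'revenue_growth']
```

Making these linkages explicit is what allows them to be tested statistically later; the sketch only records the hypothesized chain.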

Figure 6. Source: Kaplan & Norton (2001).

CRM systems can serve as the source for data within each of the perspectives. External customer-focused measures can be used to populate the customer perspective. Internal CRM efficiency measures could be used to populate the internal perspective. CRM knowledge management measures could be used to populate the learning and growth perspective.

Despite the wide adoption of the balanced scorecard, problems exist. First, it is not always possible, or it may take too long, to prove through statistical means the causal linkages between perspectives and measures. Second, the scorecard relies on performance measures from a variety of sources that must be reliable and timely. Poor data quality or misuse of the data diminishes the usefulness of the balanced scorecard (Maisel, 2001). This problem is not unknown to CRM either. Gartner reports that the number one reason CRM fails is that data is ignored or is of poor quality (Nelson & Kirkby, 2001).
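Given that poor data quality is cited as the leading cause of CRM failure, a scorecard's data feed can at least be screened before use. A minimal sketch, with field names and the completeness rule invented for illustration:

```python
# Hypothetical data-quality screen for CRM records feeding a scorecard.
# Required fields and the all-or-nothing rule are invented examples.
def quality_score(record, required=("customer_id", "segment", "revenue")):
    """Return the fraction of required fields that are present and non-empty."""
    filled = sum(1 for f in required if record.get(f) not in (None, ""))
    return filled / len(required)

records = [
    {"customer_id": "C1", "segment": "retail", "revenue": 1200},
    {"customer_id": "C2", "segment": "", "revenue": None},
]

# Only fully populated records feed the scorecard in this sketch.
usable = [r for r in records if quality_score(r) == 1.0]
print(len(usable))  # 1
```

A real deployment would track the rejection rate itself as a measure, since a rising rate signals the data-quality failure mode Gartner describes.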

Customer knowledge management

CRM systems can collect an enormous amount of data about customers. As pointed out earlier, the inability to use that data is proving to be a big stumbling block for CRM. Interestingly, very few companies actually measure their ability to create, manage and communicate customer knowledge. One of the reasons for lack of measurement is the fact that CRM data is widely dispersed across business functions. Each function has its own interests regarding customer information and its own ways of formatting and structuring the data (Davenport, 1998). This makes it difficult to pull the data together. Davenport distinguished between several types of customer knowledge:

  • Quantitative, data-driven knowledge found in transactional systems
  • Knowledge derived from interactions with people including: experiential observations, comments, lessons learned, qualitative facts, etc.
  • Tacit knowledge which is unstructured and difficult to express and must be converted to explicit knowledge

When it comes to customer knowledge, companies can (and a few do) measure three aspects of customer knowledge:

  1. The value customer knowledge has (intangible asset measurement)
  2. The process by which it is produced and consumed (knowledge management operations)
  3. The quality of the knowledge or data (data quality)

One study, conducted by APQC in collaboration with Corning, Dow Corning and Siemens AG (Lopez et al., 2001) documented examples of real-world measures used throughout the process of implementing knowledge management. The authors identified five stages of knowledge management:

Stage 1 – Enter and advocate
Stage 2 – Explore and experiment
Stage 3 – Discover and conduct pilots
Stage 4 – Expand and support
Stage 5 – Institutionalize

Measurement proved critical for stages three and four but was present in all but the first stage. Measures in this study are asset and operational measures. Stage 2 measures pertain to interest within knowledge management and fall into three categories: anecdotal data around war stories, success stories, etc.; quantitative data around the growth of the knowledge management initiative; and qualitative data extrapolated from the anecdotal data. At this stage, however, companies are still formulating their knowledge management strategies.

Some measures in this stage include:

  • The number of sponsors recruited as champions and project sponsors
  • The number of appearances in front of decision makers and the response received
  • The amount of corporate funding
  • The size of the gap between current state knowledge management measurement and desired state
  • Measures against a benchmark
  • Measures of cultural readiness

Stage 3 measures have more rigor and definition with the focus on proving business value.

Some measures in this stage include:

  • Hard and soft business value derived from each pilot
  • Time spent per hit (to distinguish between a quick review and rejection of data versus actual comprehension or use)
  • Hits per user
  • Frequency of site visits
  • Percentage of total hits that are from repeat visitors
  • Qualitative data concerning knowledge-sharing, knowledge value, teamwork, rewards, recognition and other organizational and cultural issues
  • Identification and measurement of communities of practice
  • Costs of capturing and creating knowledge
  • Costs of ongoing knowledge management project management
  • Project management effectiveness

Within stage 4, companies have adopted knowledge management within the organization and measures increase in robustness. Examples include:

  • Knowledge flow in and out of a community
  • Feedback (amount and quality) that flows in and out of a community
  • Surveys to determine how employees value knowledge management
  • Maturity measures to determine if the knowledge management process is ad hoc or optimized

Stage 5 is a continuation of stage 4, and measures are no longer used to prove value. Instead they are used to check progress and monitor the continued evolution of the culture.

Another approach to measuring knowledge involves measuring the flow of communications between people (Krebs, 1998). “An organization’s data is found in its computer systems, but a company’s intelligence is found in its biological and social systems,” he argues. Krebs’s approach involves using surveys and observation to uncover the formal and informal communication links between people and groups, revealing the social links within and across the boundaries of the organization. Link frequency is scored and visually depicted in a network diagram that clearly shows the nature of the linkages.
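The link-frequency scoring step can be illustrated with a simple weighted edge list. The names and communication counts here are invented, and this is only the tabulation behind the diagram, not Krebs's survey method itself:

```python
from collections import Counter

# Hypothetical observed communication events between people.
events = [("ana", "raj"), ("ana", "raj"), ("raj", "lee"),
          ("ana", "lee"), ("ana", "raj")]

# Score link frequency per pair, ignoring direction: (a, b) == (b, a).
links = Counter(tuple(sorted(pair)) for pair in events)

print(links.most_common(1))  # [(('ana', 'raj'), 3)] -- the strongest tie
```

The resulting weighted pairs are exactly what a network diagram renders: nodes for people, edge thickness for link frequency.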

Another way to measure knowledge management is to understand not only the production and communication of knowledge but also its consumption. Knowledge turnover (Kellen, 2001) is a term used to describe how knowledge moves between understanding and action in four distinct phases:

Perceive: Involves analyzing data, merging different types of data, building models, and authoring with new information.
Plan: Involves prioritizing, communicating and developing a plan of action based on the information perceived.
Act: Involves correctly executing the plan derived from the information perceived and changing the company’s behavior in the market.
Adjust: Involves measuring whether the planned execution had the desired effect and adjusting the execution, mid-stream if possible.


Knowledge is externally derived in this scheme in the perceive and adjust phases and is internally generated within the plan and act phases. Knowledge within this flow is communicated and retained (Figure 7). One “knowledge turnover” is the completion of one perceive->plan->act->adjust cycle. This measurement scheme quantifies the collection and use of knowledge without regard to its inherent value. However, as measurements of actions based on knowledge collect data, the indirect or direct value of the knowledge can be derived.
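Counting turnovers can be sketched directly from the phase definitions. The phase log below is an invented example of observed activity; the counting rule (only in-order, complete cycles count) is one plausible reading of the scheme:

```python
# The four phases of one knowledge turnover, in order.
CYCLE = ["perceive", "plan", "act", "adjust"]

def turnovers(phase_log):
    """Count completed perceive->plan->act->adjust cycles in a phase log."""
    count, expected = 0, 0
    for phase in phase_log:
        if phase == CYCLE[expected]:
            expected += 1
            if expected == len(CYCLE):  # full cycle completed
                count, expected = count + 1, 0
    return count

# Hypothetical log: one full cycle, then a second cycle still in progress.
log = ["perceive", "plan", "act", "adjust", "perceive", "plan"]
print(turnovers(log))  # 1
```

The count says nothing about the value of the knowledge turned over, which matches the point above: value must be derived later from measurements of the actions taken.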

Implementing CRM Measurement

If one includes the full breadth of what can be measured with CRM technology and approaches, CRM measurement is frighteningly difficult. Despite the successes described in various books, publications, vendor web sites and CRM industry portals, no company is systematically and consistently measuring customer-facing activities across the breadth and depth of the organization and customer base. In fact, evidence is mounting that the vast majority of CRM initiatives are failing to produce results. Many impediments, technical and human, lie ahead.

Nearly every measurement framework, at its core, relies on the principle of causality. Lower-level measures “roll up” into a higher-level measure based on some reasoned causal relationship. As CRM measurement frameworks become more complex, the causal linkages become more difficult and time-consuming to map, maintain and, more importantly, to prove. Clearly some balance has to be struck between simplicity and complexity, between identifying causes and taking immediate action.

If the field of CRM measurement is complex, it is because the sum total of interactions between customers and companies is complex. If one considers this field as a region in space, or better still, an opaque ocean, the problem becomes clear. In order to find fish, one needs more than one’s eyes. One needs some tools to find and catch fish. The same is true for finding a region of customer behavior that would be useful to understand and exploit: one needs tools designed to find that small area of useful information in the vast opaque sea (apparent entropy). When customer behavior is fluid due to a dynamic and changing market, existing tools designed to find significant patterns of customer behavior cannot be calibrated on old data or assumptions. The tools must evolve as the market evolves. A company’s ability to perceive the market must be as fluid as its ability to adapt to or shape the market. In complex, dynamic markets, it is quite conceivable that known causal linkages between layers within a company’s working theories of customer or market behavior can be invalid or, worse still, correct but irrelevant. When it comes to measuring something as dynamic as customers, most measurement frameworks need continual reassessment and recalibration.

At the other extreme are non-causal measurement schemes in which successful solutions proceed without establishing the causal linkages between related or rolled-up solutions. In some (most?) companies, this is the default approach to measuring successful initiatives. Lack of enterprise-wide coordination between various initiatives can lead to conflicting, redundant and suboptimal solutions. In this Darwinian model, however, successful CRM solutions are advanced, unsuccessful programs are weeded out and the company does receive some benefit. In fact, one could, in theory, design a measurement system that measures competing CRM programs on operational measures to help the company weed out what shouldn’t be done. Key concepts from successful programs can be shared and cross-pollinated across multiple teams. Proving causal linkages between human (customer or employee) behavior and business success can be dispensed with or downplayed. Instead, surviving programs and the key concepts behind them, however cross-pollinated they have become, represent the “causal linkages” explaining behavior or “predicting” performance. Anecdote rules. The key concepts, which inform new CRM programs, are more like memes, units of cultural information that successfully spread throughout the company. No one engineers a comprehensive behavioral model around customers, nor does anyone engineer how customer knowledge is created. Is this a valid measurement approach?

Perhaps. If speed of adaptation is important, companies may not have the time to identify the right measures and the right causal relationships, which may take months or years to develop, as it sometimes does for balanced scorecard methods (Smith, 2001). Are causal measurement models better than correlated or non-causal ones at finding useful patterns? Perhaps, but the real issue is whether the measurement system is finding the right knowledge in a timely way. While a non-causal CRM measurement system can detect conditions that provide opportunities quickly, determining the right business response will require some root cause analysis for diagnosing and fixing customer problems. Time becomes the pivotal variable.

All the things that can and should be measured across the enterprise regarding customers, be they value-creation, value-delivery or customer-insight activities, can be compared to that opaque sea. While the business can cast its net (its measurement system) to find fish (useful knowledge) where the fish usually swim, all sorts of things can cause the fish to swim in other hidden waters. Overly developed and non-adapting measurement systems are like the persistent fisherman casting his or her old nets in the same place, waiting for fish that may never return. In this regard, the sea of activity between a company and its customers, and within itself as it serves customers, is that sea of complexity. The theory of measurement advanced here is neutral on this question of causal versus non-causal customer knowledge. Investing in identifying causality is a decision that folds within the framework offered here and will be influenced by many factors. The CRM practitioner who complained that CRM stands for “can’t really measure” was most likely responding to the cost of identifying causality that made proving CRM investments more difficult.

How does a business go about consistently measuring that field of complexity in a way that will detect new and unseen patterns? Most companies assume that this can be engineered in a predictable way. Some argue that it can’t. At best, a business can create an adaptive internal environment that seems best suited for detecting and acting upon this field of dynamic complexity. Stacey (2000) argues that the mainstream thinking about knowledge management, which says knowledge is stored within the minds of individuals in tacit form and has value only when extracted as explicit knowledge, is wrong. For Stacey, knowledge assets lie in the “pattern of relationships between its members.” Knowledge is “the act of conversing,” and new knowledge is created when ways of talking, and therefore patterns of relationship, change. Customer knowledge comes about through interactions between people within the company.

Thomas et al. (2001) also agree that mainstream thinking about knowledge management is too simplistic. “Knowledge management is not just a matter of managing information. It is deeply social in nature and must be approached by taking human and social factors into account” (Thomas et al., 2001). The authors argue the most important aspect of a knowledge management system is that it becomes a knowledge community: a place where people can encounter and interact with others who discover, use and manipulate knowledge.

Maxfield & Lane (1997) provide a deeper discussion about the non-deterministic way that strategy can unfold into business success through people. In this paper, the authors describe how, in complex, dynamic market conditions, business strategy shifts from management attempting to control a process of interactions by the players (or agents) involved, to control being redistributed among the agents themselves in a more dynamic “bottom-up” approach. In this model, agents in the market pursue and form “generative relationships” with each other. These relationships are perceived as creating value for the agents involved. How agents perceive themselves, products and services in the market, and generative relationships is re-examined and reinterpreted as the agents themselves understand and describe the market space.

Another way of thinking about this knowledge management debate is to pose a question. For companies that deploy CRM systems, which contributed most to the benefits derived from the CRM system:

  • Establishing strong causal linkages within the measurement model deployed or in use?
  • The use of CRM technology for some efficiency or effectiveness gain?
  • The socialization of the measurement framework within the culture of the company?

In extremely fluid market conditions, it seems unlikely that businesses can identify, in time, key causal linkages in customer and employee behavior when all the agents involved are reinterpreting and redefining how they conceive of products, services, customers and relationships. When the nouns are fluid, do the verbs make sense?

In actual practice, businesses combine both approaches to measurement and strategy. In many cases, successful market strategies are executed locally, often without upper management’s knowledge and control. In time, and as market conditions stabilize, these distributed pockets of control can inform and shape overall strategy for a more traditional top-down approach through performance measurement and control systems. These measurements and systems must support top-down and bottom-up communication and feedback to support learning (Simons, 2000). Figure 8 depicts the relationship between the competing concerns of overall strategy posture (shape, adapt or do nothing), market volatility within the planning horizon and organizational approach.

Figure 8.

This debate between engineered-knowledge-in-the-artifacts versus emergent-knowledge-in-the-human-network is a key issue for CRM measurement. For CRM measurement frameworks to be successful, companies need to understand and refine their vision of how knowledge should be structured, communicated and socialized within the organization to influence results within required time frames.

Attributes of a CRM Measurement Framework

What we need now are some attributes that help us understand what constitutes the key dimensions of a measurement approach. Measurement frameworks can have three attributes or vectors that describe them:

  1. field breadth
  2. field depth
  3. field tractability

The term field here is defined as those customer-facing and customer-impacting activities to be measured, which can include processes within the company, among its suppliers and certainly with its customers. Each of these vectors competes with the others for management funding and attention. Field breadth refers to how much of the total set of activities needing to be measured is actually measured. Are all customer segments, product categories and business processes measured? Field depth refers to how granular the measurement approach is. Systemic? At the customer segment level? At the customer level? How far are sub-attributes broken down? How frequently is data measured? Field tractability refers to how explainable and provable the CRM measurement framework employed is.
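The three vectors can be captured in a small record. The 0.0-1.0 scales and the multiplicative cost proxy below are invented conventions for illustration, not part of the framework itself:

```python
from dataclasses import dataclass

@dataclass
class MeasurementScope:
    """The three competing vectors of a CRM measurement framework.
    The 0.0-1.0 scales are a hypothetical convention."""
    breadth: float       # share of customer-facing activities measured
    depth: float         # granularity, from systemic to per-customer
    tractability: float  # how far causal links are mapped and proven

    def cost_proxy(self):
        # Toy assumption: measurement cost grows with all three vectors.
        return self.breadth * self.depth * self.tractability

# A narrow but deep pilot that defers most causal proof.
pilot = MeasurementScope(breadth=0.3, depth=0.8, tractability=0.2)
print(round(pilot.cost_proxy(), 3))  # 0.048
```

Even this toy version makes the tradeoff concrete: widening any one vector multiplies cost, so restricting another is how the budget stays fixed.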

With these attributes in mind, here are the principles companies should consider for establishing the proper measurement framework:

  1. The measurement framework designed must cover the field breadth, depth and tractability in a cost-effective manner that meets the company’s strategic goals. Tradeoffs between these vectors will ensue to address the cost of measurement and applicability to meeting strategic goals.
  2. The measurement framework designed must consider the level of stability or complexity within the market or within the enterprise. The more complex and volatile the market, the more adaptive and timely the measurement framework needs to be.
  3. The measurement framework needs to be able to function with partial and incomplete measures. It is impossible for companies to measure everything at once. A starting point must be chosen; one can be determined by restricting any combination of field breadth, depth and tractability.
  4. For highly complex markets, the measurement framework itself will evolve, perhaps rapidly. The measurement framework needs to be either self-adapting or measured in some way (meta-measurement) so that it can be reconstituted as needed. This requires a different knowledge management approach and organizational model than most companies possess. Analogies from the complexity sciences provide some future directions for thinking about adaptable measurement systems.

Building a Composite Measurement Framework

If they haven’t done so already, most companies will need to build composite CRM measurement frameworks to get the optimal combination of measurement breadth, depth and tractability. Measurement frameworks are not a one-size-fits-all proposition. They need to be tailored for the company and its conditions. With the abundance of measurement approaches and lack of a comprehensive theory of customer behavior to guide them, companies will be designing frameworks themselves. Based on the issues discussed so far, here is an approach to consider.

  1. Consider the planning time horizon, competitive market stability or volatility, and other market or company factors.
  • Are current market conditions stable or chaotic with rapid unpredictable change?
  • What is the company’s current competitive posture? Is the company attempting to shape the market significantly, adapt as a fast follower to the market, or sitting it out for a while and doing nothing?
  • What is the balance of focus needed between measuring internal capabilities and measuring customer behavior?
  • How much of the measurement framework needs to measure past activity or predict future events?
  2. Consider the technology implications
  • What technical infrastructure changes are needed to support the measurement framework?
  • Can the data needed be collected and combined within this infrastructure?
  • What is the ongoing cost of measurement and data collection?
  • What are the sampling and refresh rates that will be needed to support the measurement framework?
  • What are the core analytic techniques and technologies to support the data analysis needed?
  • What are the technical needs to continually collect strategic and qualitative data as opposed to conventional CRM operational data?
  3. Consider the organizational implications
  • What skills sets are needed to support the measurement framework?
  • How do motivation and incentive approaches in the company need to be altered to encourage successful measurement?
  • Can the company’s decision-making abilities absorb and use the measurement framework?
  • Does the company have flexible communication and collaboration tools and policies that let people within the company interact with each other concerning measurement data?
  • Can the customer decision-making capabilities of the company be measured and monitored so that the health of decision-making capabilities can be assessed?
  • Can feedback from the decision-making process inform and alter the measurement framework?

With these considerations in mind, a CRM measurement framework deployment plan can be formulated. In most cases, deployment of new measurement approaches is evolutionary. With the inherent risks in disrupting a customer base and employees that serve the customers within a company, companies frequently choose to limit deployment along some axis. Typically companies try to control the field breadth in the following ways:

Product deployment: A measurement approach is rolled out for all customers for one specific product or service.
Segment deployment: A measurement approach is rolled out for one customer segment (or sub-segment) for all products or services.
Narrow deployment: A measurement approach is rolled out for one customer segment (or sub-segment) for one product or service.

Within each deployment model, companies can control scope further by restricting the remaining two vectors (depth and tractability):

  1. Controlling the field depth by limiting how detailed the measurement approach is
  2. Controlling the field tractability by limiting causal research, data collection and analysis.
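The deployment models above amount to filters on which customers the framework covers. A sketch with invented segment and product names:

```python
# Hypothetical customer records; deployment models restrict which ones
# the measurement framework covers.
customers = [
    {"id": 1, "segment": "smb", "product": "checking"},
    {"id": 2, "segment": "smb", "product": "loans"},
    {"id": 3, "segment": "enterprise", "product": "checking"},
]

def in_scope(c, segment=None, product=None):
    """Segment or product deployment: set one filter. Narrow: set both."""
    return ((segment is None or c["segment"] == segment) and
            (product is None or c["product"] == product))

# Narrow deployment: one segment, one product.
narrow = [c["id"] for c in customers if in_scope(c, "smb", "checking")]
print(narrow)  # [1]
```

Loosening one filter at a time is a natural way to stage the evolutionary rollout described above, widening breadth only as each scope proves itself.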

In practice, probably any sequence of deployment is possible. Since it is most unlikely that companies, especially large ones, can transform themselves completely, iterative implementations of new CRM strategies and measurement frameworks will be needed. In fact, for many companies, “adapt or perish” is the directive. Changing market conditions and customer behavior, and the proprietary, non-reproducible relationship companies and brands can have with their customers, practically insist on iteratively implemented, adaptable CRM measurement frameworks.

Conclusion: The Complexity of CRM Measurement

The trends sweeping us along into this era of CRM have their roots midway through the 20th century. Postmodernism is replacing modernism. One of the key conditions of postmodernism is the reversal, in importance, between production and consumption (Firat et al., 1995). Consumption, which makes up three-quarters of the U.S. economy, now has privileged status instead of production. Firat et al. (1995) point out that “consumption becomes the means through which individuals define their self-images,” and the marketing discipline is the primary institution reinforcing this trend.

Consumer behavior theories built on the consistency and orderliness of consumer behavior are being obviated, the authors argue. Global competition and new technologies ensure that as soon as customer behavior is on the “verge of stability and explainability, new products and services are introduced to destabilize the consumer behavior model so as to create competitive openings for challengers.” Traditional variables that have been used to predict or explain consumer behavior are now lacking, the authors say. It is not just that “consumers frequently change their self-concepts, characters and values,” but that they often subscribe to multiple value systems and lifestyles. This problem is not restricted to business-to-consumer companies. The business buyer within a company is also a consumer and is affected similarly. In addition, business-to-business companies need to understand consumer behavior as much as the retail company does.

With all this hand-wringing, is it that customers are becoming segments of one? Are all the recent trends of targeted marketing, micro-segmentation, 1:1 marketing, mass customization and CRM a response to this fractional, relativistic consumer mindset, or is the new consumer mindset a reflection of these recent trends? In the competitive business world, it doesn’t matter which is the cause of the other. Consumers and businesses are quickly changing and showing no signs of slowing. Our measurement frameworks need to catch up. The multiplicity of frameworks for measuring “all things customer,” from the strategic to the operational, poses a supreme challenge to the CRM practitioner. These new customer-facing capabilities will take time to build out. This is not surprising, since companies have had 150 years of industrialization and the modern project to perfect product-facing capabilities.

Change begins with knowing. Companies today need to implement more sophisticated ways of measuring this complex and diverse field. Technology will continue to drive these new measurement approaches. Can our human minds and our human cultures keep up?

by Vince Kellen
March, 2002
CIO, DePaul University
Faculty, School of CTI, DePaul University
Chicago, IL. U.S.A.
