Performance Measurement in Engineering Projects and Management

Subject: Tech & Engineering
Pages: 80
Words: 18342
Reading time: 68 min
Study level: Master

Introduction

Although it has long been recognised that performance measurement has an important role to play in the efficient and effective management of organisations, it remains a critical and much debated issue. Significant management time is being devoted to the questions – what and how should we measure – while substantial research effort, by academics from a wide variety of management disciplines, is being expended as we seek to enhance our understanding of the topic and related issues (Neely, 1999).

Survey data suggest that between 40 and 60 per cent of companies significantly changed their measurement systems between 1995 and 2000 (Frigo and Krumwiede, 1999). Most of these initiatives, however, appear to be static. Although many organisations have undertaken projects to design and implement better performance measures, little consideration appears to have been given to the way in which measures evolve following their implementation (Waggoner et al., 1999). It is important that performance measurement systems be dynamic, so that performance measures remain relevant and continue to reflect the issues of importance to the business (Lynch and Cross, 1991).

In order to ensure that this relevance is maintained, organisations need a process in place to ensure that measures and measurement systems are reviewed and modified as the organisation’s circumstances change (Dixon et al., 1990). Yet few organisations appear to have systematic processes in place for managing the evolution of their measurement systems, and few researchers appear to have explored the question of what shapes the evolution of an organisation’s measurement system.

The basic purpose of any measurement system is to provide feedback, relative to your goals, that increases your chances of achieving these goals efficiently and effectively. Measurement gains true value when used as the basis for timely decisions. The purpose of measuring is not to know how your business is performing but to enable it to perform better. The ultimate aim of implementing a performance measurement system is to improve the performance of your organization.

If you get your performance measurement right, the data you generate will tell you where you are, how you are doing, and where you are going. In this paper we seek to understand the performance measurement process and its significance in business today, especially in manufacturing and engineering project environments. We then examine in detail the measurement processes most commonly used in manufacturing and engineering project environments for measuring organisational performance.

Performance Management System

“Measurements are the key. If you cannot measure it, you cannot control it. If you cannot control it, you cannot manage it. If you cannot manage it, you cannot improve it.” (Harrington 1991) An organisation’s measurement system strongly affects the behaviour of people both inside and outside an organisation. If organizations are to survive and prosper in the information age competition they must use measurement systems derived from their strategies and capabilities. (Kaplan and Norton 1996)

Performance measurement is a management tool used to maximize project success. It may be defined as the ongoing monitoring and reporting of project accomplishments, particularly progress toward pre-established goals. A performance measurement system can be as simple as a set of basic performance targets combined with a simple tracking tool and reports. It can be as robust as a set of highly refined performance targets and indicators, such as an advanced electronic reporting system used organization-wide. Performance measurement also addresses staff assessment and improvement of organizational performance.
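The "basic performance targets combined with a simple tracking tool" described above can be sketched in a few lines. This is an illustrative sketch only, not a method from the source; the measure names, targets, and the higher-is-better assumption are all invented:

```python
from dataclasses import dataclass

@dataclass
class Measure:
    name: str
    target: float
    actual: float

    @property
    def attainment(self) -> float:
        # actual expressed as a fraction of target (assumes higher is better)
        return self.actual / self.target

def report(measures):
    # one (name, attainment rounded to 2 dp, on-track flag) tuple per measure
    return [(m.name, round(m.attainment, 2), m.attainment >= 1.0) for m in measures]

measures = [
    Measure("on-time delivery (%)", target=95.0, actual=97.0),
    Measure("first-pass yield (%)", target=98.0, actual=96.0),
]
summary = report(measures)
```

A real system would add direction-of-goodness per measure, time series, and reporting, but even this minimal form supports the "progress toward pre-established goals" idea in the text.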

The subject of performance measurement is attracting increasing interest in both academic and managerial circles. This, for the most part, is due to the broadening spectrum of performances required by the present-day competitive environment and the new production paradigm known as Lean Production or World Class Manufacturing (Dixon et al., 1990). In addition there is the need to support and verify performance improvement programmes such as Just-in-Time, Total Quality Management, Concurrent Engineering, etc. (Ghalayini and Noble, 1996).

These programmes are characterised by their ability to pursue several performances at the same time: for example the increase in the product quality together with the lowering of the production costs and the lead times, following the reduction in discards, waste, reworks, and controls. As a result the logic of “trade-off” between performances has been more or less abandoned, and thus there is a reconsideration of the current Performance Measurement Systems (PMSs), traditionally oriented solely towards the control of the production costs and productivity.

The revision and updating of PMSs concerns, on the one hand, the innovation of accounting systems, by means of Activity-Based Costing for product costing in particular (Johnson and Kaplan, 1987), and on the other, the extension of measurement to the so-called non-cost performances, which are not explicitly economic-financial in nature but are pressingly demanded by customers.

The environmental factors which urge the development of the “non-cost” side of the PMS are twofold: on the one hand, environmental turbulence (in terms of the frequency and unpredictability of changes); on the other, managerial complexity (due to the passage from strategies based on cost-leadership to strategies based on differentiation/customisation, a shift which increases competition between firms and requires more complex organisation).

Although the “non-cost” performances (physical measures pertaining to the characteristics of the product, the production technologies and the managerial techniques of the plant) seem to be typically operational in nature, they often have tactical and strategic relevance (Eccles, 1991; Wisner and Fawcett, 1991). Adapting to the “value strategies” (Rappaport, 1986), PMSs are evolving from a characterisation based on the measurement and control of costs to one based on measuring the creation of value, and thus on the non-cost performances.

This occurs by considering performances not from the point of view of trade-offs, in which some performances are privileged to the detriment of others, but by jointly pursuing performance results on different levels, and thus performance compatibility. The consideration of value, in addition to the traditional financial performances (measured by ROI, Discounted Cash Flow, etc.), determines a marked customer orientation, considering the long run in which to analyse customer satisfaction and loyalty.
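The traditional financial measures named above (ROI, Discounted Cash Flow) are simple arithmetic. As a hedged illustration with invented figures, not drawn from the source:

```python
def roi(net_profit, investment):
    # return on investment as a simple ratio of profit to capital invested
    return net_profit / investment

def discounted_cash_flow(cash_flows, rate):
    # present value of a stream of future cash flows;
    # cash_flows[0] is assumed to arrive one period from now
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# e.g. 100 per period for three periods, discounted at 10%
pv = discounted_cash_flow([100.0, 100.0, 100.0], 0.10)
```

These are the cost-side yardsticks; the text's point is that they must be complemented, not replaced, by non-cost measures.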

With regard to measures and organisation specifically, PMS innovations affect both micro- and macro-organisational aspects: the spread of job enrichment/enlargement and team work displaces attention from individual to group performance (Meyer, 1994); the adoption of management-by-process emphasises cross-functional performances as compared to single-function performances (De Toni and Tonchia, 1996). Moreover, performance evaluation matters not in relation to a predetermined standard but in relation to the continuous improvement to be achieved (Schmenner and Vollmann, 1994; Daniels and Burns, 1997). Finally, the aim must be to involve and motivate the assessed employees too (Flamholtz et al., 1985).

There are many types of measurement. In school, exams are graded to establish academic ability; in sports, times are clocked to the split second to verify athletic ability. Similarly, in teams and organizations there are various tools and measurements to determine how well they perform. Gamble, Strickland and Thompson (2007, p. 99) provide a comprehensive method for measuring the performance of organizations: how well a company performs depends on its strategic plan. Some of the measurements include basic financial ratios, such as the debt-to-equity ratio, and whether debt levels raise concerns about creditworthiness.
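The debt-to-equity check mentioned above reduces to a one-line ratio. A minimal sketch; the 2.0 ceiling is a rough, invented rule of thumb, since real lenders use industry-specific norms:

```python
def debt_to_equity(total_debt, shareholders_equity):
    # leverage: units of debt carried per unit of equity
    return total_debt / shareholders_equity

def creditworthiness_flag(ratio, ceiling=2.0):
    # flag ratios above an illustrative ceiling (assumption, not a standard)
    return ratio > ceiling

ratio = debt_to_equity(total_debt=150.0, shareholders_equity=100.0)
```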

Several performance measurement systems are in use today, and each has its own group of supporters. For example, the Balanced Scorecard (Kaplan and Norton, 1993, 1996, 2001), Performance Prism (Neely, 2002), and the Cambridge Performance Measurement Process (Neely, 1996) are designed for business-wide implementation; and the approaches of the TPM Process (Jones and Schilling, 2000), 7-step TPM Process (Zigon, 1999), and Total Measurement Development Method (TMDM) (Tarkenton Productivity Group, 2000) are specific to team-based structures. With continued research efforts and the test of time, the best-of-breed theories that help organizations structure and implement their performance measurement systems should emerge.

Background of PMS

The problem of how organisations should assess their performance has been challenging management commentators and practitioners for many years. Financial measures have long been used to evaluate performance of commercial organisations. By the early 1980s, however, there was a growing realisation that, given the increased complexity of organisations and the markets in which they compete, it was no longer appropriate to use financial measures as the sole criteria for assessing success.

Following their review of the evolution of management accounting systems, Johnson and Kaplan highlighted many of the deficiencies in the way in which management accounting information is used to manage businesses (Johnson, 1983; Kaplan, 1984; Johnson and Kaplan, 1987). They highlighted the failure of financial performance measures to reflect changes in the competitive circumstances and strategies of modern organisations. While profit remains the overriding goal, it is considered an insufficient performance measure, as measures should reflect what organisations have to manage in order to profit (Bruns, 1998). Cost-focused measurement systems provide a historical view, giving little indication of future performance and encouraging a short-term focus (Bruns, 1998).

The shortcomings of traditional measurement systems have triggered a performance measurement revolution (Neely, 1999). Attention in practitioner, consultancy and academic communities has turned to how organisations can replace their existing, traditionally cost based, measurement systems with ones that reflect their current objectives and environment. Many authors have focused attention on how organisations can design more appropriate measurement systems.

Based on literature, consultancy experience and action research, numerous processes have been developed that organizations can follow in order to design and implement performance measurement systems (Bourne et al., 1999).

Many frameworks, such as the balanced scorecard (Kaplan and Norton, 1992), the performance prism (Kennerley and Neely, 2000), the performance measurement matrix (Keegan et al., 1989), the results and determinants framework (Fitzgerald et al., 1991), and the SMART pyramid (Lynch and Cross, 1991) have been proposed that support these processes. The objective of such frameworks is to help organisations define a set of measures that reflects their objectives and assesses their performance appropriately. The frameworks are multidimensional, explicitly balancing financial and non-financial measures.

Furthermore, a wide range of criteria has also been developed, indicating the attributes of effective performance measures and measurement systems. These include the need for measures to relate directly to the organisation’s mission and objectives, to reflect the company’s external competitive environment, customer requirements and internal objectives (Globerson, 1985; Wisner and Fawcett, 1991; Maskell, 1989; Kaplan and Norton, 1993). Others make explicit the need for strategies, action and measures to be consistent (Lynch and Cross, 1991; Dixon et al., 1990).

The performance measurement revolution has prompted many organisations to implement new performance measurement systems, often at considerable expense. However, unlike the environment in which organizations operate, many measurement initiatives appear to be static. Senge (1992) argues that, in today’s complex business world, organisations must be able to learn how to cope with continuous change in order to be successful.

Eccles (1991) suggests that it will become increasingly necessary for all major businesses to evaluate and modify their performance measures in order to adapt to the rapidly changing and highly competitive business environment. However, there has been little evidence of the extent or effectiveness with which this takes place. Moreover, the literature suggests that ineffective management of the evolution of measurement systems is causing a new measurement “crisis”, with organisations implementing new measures to reflect new priorities but failing to discard measures reflecting old priorities resulting in uncorrelated and inconsistent measures (Meyer and Gupta, 1994).

Furthermore, it is suggested that organisations are drowning in the additional data that is now being collected and reported (Neely et al., 2000). As with measurement systems introduced at the turn of the century, there is a danger that failure to manage effectively the way in which measurement systems change over time will cause new measurement systems to lose their relevance, prompting a new crisis and necessitating a further measurement revolution.

This raises a crucial question. Why do performance measurement systems fail to change as organisations change, rendering them irrelevant? This is an important question to answer if history is not to be repeated and organizations are to avoid the expense of another extensive overhaul of their measurement systems.

Wisner and Fawcett (1991) acknowledge the need for performance measures to be reviewed and changed to ensure that measures remain relevant in the last step of their nine-step process, where they “re-evaluate the appropriateness of the established performance measurement systems in view of the current competitive environment”. Bititci et al. (2000) identify the need for performance measurement systems to be dynamic: to reflect changes in the internal and external environment; to review and prioritise objectives as the environment changes; to deploy changes in objectives and priorities; and to ensure that gains achieved through improvement programmes are maintained.

Dixon et al. (1990) and Bititci et al. (2000) propose audit tools that enable organisations to identify whether their existing measurement systems are appropriate given their environment and objectives.

Bititci et al. (2000) go on to posit that a dynamic performance measurement system should have: an external monitoring system, which continuously monitors developments and changes in the external environment; an internal monitoring system, which continuously monitors developments and changes in the internal environment and raises warning and action signals when certain performance limits and thresholds are reached; a review system, which uses the information provided by internal and external monitors and the objectives and priorities set by higher level systems, to decide internal objectives and priorities; and an internal deployment system to deploy the revised objectives and priorities to critical parts of the system.
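The monitor/review/deployment architecture described above can be sketched schematically. This is an illustrative reading of Bititci et al.'s components, not their implementation; the measure names and threshold values are invented:

```python
class InternalMonitor:
    # watches internal readings and raises action signals when a measure
    # crosses its performance limit or threshold
    def __init__(self, limits):
        self.limits = limits  # measure name -> upper threshold

    def check(self, readings):
        return [name for name, value in readings.items()
                if name in self.limits and value > self.limits[name]]

class ReviewSystem:
    # combines monitor signals with priorities set by higher-level systems
    # to decide revised internal objectives and priorities
    def revise_objectives(self, signals, priorities):
        # measures that raised signals move to the head of the priority list
        return signals + [p for p in priorities if p not in signals]

monitor = InternalMonitor({"defect rate": 0.05, "lead time (days)": 10})
signals = monitor.check({"defect rate": 0.08, "lead time (days)": 7})
objectives = ReviewSystem().revise_objectives(signals, ["lead time (days)", "cost"])
```

An external monitoring system and a deployment system would complete the loop; they are omitted here to keep the sketch short.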

Based on a review of the relevant literature, Waggoner et al. (1999) summarise the key forces driving and demanding change as: customers, information technology, the marketplace, legislation (public policy), new industries, nature of the work (e.g. outsourcing) and future uncertainty.

However, many authors also identify barriers to change that have received little attention in the performance measurement literature. Gabris (1986) identifies four categories of such barriers:

  • process burden, where processes such as performance measurement take employees away from their actual responsibilities;
  • internal capacity, where organizations lack the in-house capability to support an initiative;
  • credibility anxiety, where organizations suffer from an overload of management techniques; and
  • the “Georgia giant syndrome”, where management techniques work only under rigorous and closely supervised control conditions.

These factors can be considered to be the organisation’s readiness for change (Waggoner et al., 1999). Furthermore, Kotter (1996) argues that willingness or urgency to change throughout the organisation is necessary for such change to be effective.

Greiner (1996) categorises inhibiting factors as institutional, pragmatic, technical and financial. Numerous authors (such as Scott, 1995 and Pettigrew and Whipp, 1991) also highlight that the political nature of organizations requires further consideration, one of a number of factors demonstrating the impact that corporate culture can have on evolutionary change (Tichy, 1983).

The literature reviewed highlights the importance of managing the evolution of performance measurement systems to ensure that they continue to reflect the environment and objectives of the organisation. The literature suggests that the factors affecting evolutionary change within organisations, and hence the evolution of performance measures, are many and complex. However, these issues can be grouped into two main themes:

  • Drivers of change (those factors that cause change to be necessary); and
  • Barriers to change (those factors that must be overcome if change is to be effective).

Determinants of Performance Measurement System Design

Although many organizations have undertaken projects to design and implement better performance measures, few organizations appear to have systematic processes in place for managing the evolution of their measurement systems. Often organizations are drowning in the additional data that is now being collected and reported (Neely et al., 2000). Measures tend to lose their relevance and ability to discriminate between good and bad performance over time as performance objectives are achieved or as behaviour no longer reflects the performance objectives underpinning the measures (Meyer and Gupta, 1994).

Meyer and Gupta observe that failure to manage this change effectively causes the introduction of new measures that are weakly correlated with those currently in place, so that an organisation ends up with a diverse set of measures that do not measure the same thing. Numerous other authors espouse the need for reflection on measures to ensure that they are updated to reflect continuous change and the issues of importance to the business.

Kennerley and Neely (2002), for example, claim that organisations need to review and modify measures and measurement systems as the organisation’s circumstances change. Again, the danger is that failure to manage how measurement systems change over time will cause them to lose their relevance, prompting a new crisis and necessitating a further measurement revolution.

This raises a crucial question: can the concept of contingencies be applied? We believe that considering the drivers of change, i.e. those factors that make change necessary, may enhance an organisation’s readiness for change. The contingency approach to management accounting is based on the premise that there is no universally appropriate accounting system that applies equally to all organisations in all circumstances (Otley, 1980).

Rather, it is suggested that the particular features of an appropriate accounting system will depend on the specific circumstances in which an organisation finds itself. Consequently, the underlying premise of the contingency approach to performance measurement is that measures and measurement systems must reflect the context in which they are applied. Identifying the contingencies of corporate performance measurement suggests that the design of an organisation’s PMS should change whenever these conditions (contingent factors) change.

Emmanuel et al. (1995) summarise three main classes of contingent factors that have been identified as influencing the design of an accounting system. These are the environment (its degree of predictability, the degree of competition faced in the marketplace, the number of different product/markets encountered, and the degree of hostility exhibited), organisational structure (size, interdependence, decentralisation and resource availability), and technology (the nature of the production process and its degree of routineness). A consideration of corporate strategy has, quite surprisingly, not been prominent in control design studies, despite arguments that differences in corporate strategies should logically lead to differences in the design of planning and control systems (Dent, 1990).

In relation to contingencies in corporate performance measurement, Waggoner et al. (1999) summarised the following key forces driving change in performance measurement: customers, information technology, the marketplace, legislation (public policy), new industries, nature of the work, and future uncertainty. However, the focus here is more on the enabling power of these drivers to foster the evolution of performance measurement systems within organisations and not on the resulting structures of the measures.

To determine potential contingency factors of performance measurement we will therefore have to rely on the contingency theory of management accounting, while simultaneously considering Slovenian legislation and the specific characteristics of the Slovenian economy that can also influence the way managing directors monitor their company’s performance. One such characteristic is the traditional power of workers’ councils, which has implications for corporate decision-making.

The Role of Performance Measurement Systems in Improving Financial Performance

From the methodological perspective, the most difficult research question is whether contemporary performance measurement systems actually improve firm profitability. The main function of performance measurement in a strategic context, as claimed by Letza (1996), is to provide the means of control to achieve the objectives required and to fulfil the company’s mission/strategy statement. This view is supported by Neely et al. (1994), who view performance measurement as a key part of strategic control. Fawcett et al. (1994) and Neely et al. (1994) develop this argument by stating the need for performance measurement to exercise this control through:

  1. helping managers to identify good performance;
  2. setting targets; and
  3. demonstrating success or failure which is ultimately reflected in financial statements.

The very essence of PMSs is therefore to improve decision-making so that the company performs better financially. As a consequence, a PMS’s effectiveness can be viewed only from the perspective of its contribution to the company’s financial performance. The belief that financial results are the most important ultimate aspect of performance is firmly embedded both in the traditionalists’ view of the corporation (Friedman, 1962; Friedman, 1970; Friedman and Friedman, 1980) and in the alternative view of the business enterprise (Pava and Krausz, 1996).

In the traditionalists’ view, business managers have a responsibility to shareholders to maximise firm value, and no mandate to embark on socially responsible projects that do not enhance the income-generating ability of the firm. In the alternative view, on the other hand, environmental concerns, community relations, product quality, consumer relations, and employee relations are also considered important aspects of performance, alongside financial performance.

Nearly all empirical studies on social responsibility have concluded that firms perceived as having met social-responsibility criteria have either outperformed, or performed as well as, firms which are not (necessarily) socially responsible (Pava and Krausz, 1996). Yet quantitative empirical evidence with specific focus on how internal performance measurement systems with balanced structures of financial and non-financial measures improve corporate financial performance is still lacking. In the following subsections we will attempt to address this question as well.

The Benefits of Performance Measurement

Programs and organizations that institute meaningful measures of performance realize significant benefits, which increase as a system evolves and improves. Performance measurement can:

  • Strengthen decision-making at all levels – Timely and relevant reports on performance lay the groundwork for sound decision-making. In addition, performance measurement systems enable decision-makers to diagnose weak performance, identify and address root causes, and track improvement.
  • Enhance program outcomes – Performance measurement helps organizations and programs focus on achieving results. Effective performance measures are directly relevant to the organization or program mission and outcome goals.
  • Increase ability to meet accountability requirements – Actively measuring performance positions organizations and programs to respond effectively to requirements of the Performance Assessment Rating Tool (PART), the Government Performance and Results Act (GPRA), and other accountability initiatives.
  • Improve communication of outcomes to key audiences – Quantifying achievements and the impact of activities enables organizations and programs to demonstrate results to internal and external stakeholders.

Traditional PMS Methods

Traditional performance measures, developed from costing and accounting systems, have been criticised for encouraging short-termism (Banks and Wheelwright, 1979; Hayes and Garvin, 1982), lacking strategic focus (Skinner, 1974), encouraging local optimisation (Hall, 1983; Fry and Cox, 1989), encouraging minimisation of variance rather than continuous improvement (Johnson and Kaplan, 1987; Lynch and Cross, 1991), not being externally focused (Kaplan and Norton, 1992) and even for destroying the competitiveness of US manufacturing industry (Hayes and Abernathy, 1980).

At the time, many performance measurement systems in the UK and USA were heavily financially biased and it has been argued that systems which were specifically designed for external reporting were being inappropriately used to manage business enterprises (Hayes and Abernathy, 1980).

A number of authors have criticised traditional performance measurement. For instance, Kaplan and Norton developed the Balanced Scorecard (1996) because quality-oriented performance measures, such as those for business processes or customer orientation, were not an integral part of regular management reports, and because financial figures are the consequences of yesterday’s decisions rather than indicators of tomorrow’s performance.

Limitations of Financial Measurements

As the preceding discussion has clearly demonstrated, we require balanced performance information to fully assess an organization’s success. Despite this realization, recent estimates suggest that 60 percent of metrics used for decision-making, resource allocation, and performance management are still financial in nature. It seems that for all we’ve learned, we remain stuck in the quagmire of financial measurement.

Perhaps tradition is serving as a guide unwilling to yield to the present realities. You see, traditionally, the measurement of all organizations has been financial. Bookkeeping records used to facilitate financial transactions can be traced back thousands of years. At the turn of the twentieth century, financial measurement innovations were critical to the success of the early industrial giants like General Motors.

The financial measures created at that time were the perfect complement to the machinelike nature of the corporate entities and management philosophy of the day. Competition was ruled by scope and economies of scale, with financial measures providing the yardsticks of success.

Over the last hundred years, we’ve come a long way in how we measure financial success, and the work of financial professionals is to be commended. Innovations such as Activity-Based Costing (ABC) and Economic Value Added (EVA) have helped many organizations make more informed decisions. However, as we begin the twenty-first century, many are questioning our almost exclusive reliance on financial measures of performance. Here are some of the criticisms levied against the over-abundant use of financial measures:

  • Not consistent with today’s business realities: Tangible assets no longer serve as the primary driver of enterprise value. Today it’s employee knowledge (the assets that ride up and down the elevators), customer relationships, and cultures of innovation and change that create the bulk of value provided by any organization. In other words, intangible assets. If you buy a share of Microsoft’s stock, are you buying buildings and machines? No, you’re buying a promise of value to be delivered by innovative people striving to continually discover new pathways of computing. Traditional financial measures were designed to compare previous periods based on internal standards of performance. These metrics are of little assistance in providing early indications of customer, quality, or employee problems or opportunities.
  • Driving by rear view mirror: This is perhaps the classic criticism of financial metrics. You may be highly efficient in your operations one month, quarter, or even year. But does that signal ongoing financial efficiency? As you know, anything can, and does, happen. Financial results alone are not indicative of future performance.
  • Tendency to reinforce functional silos: Working in mission-based organizations, you know the importance of collaboration in achieving your goals. Whether it’s improving literacy, decreasing HIV rates, or increasing public safety, you depend on a number of teams working seamlessly together to accomplish your tasks. Financial statements don’t capture this cross-functional dependency. Typically, financial reports are compiled by functional area. They are then “rolled-up” in ever-higher levels of detail and ultimately reflected in an organizational financial report. This does little to help you in meeting your noble causes.
  • Sacrifice of long-term thinking: If you face a funding cut, what are the first things to go in your pursuit to right the ship? Many organizations reach for the easiest levers in times of crisis: employee training and development, or maybe even employees themselves! The short-term impact is positive, but what about the long-term? Ultimately, organizations that pursue this tactic may be sacrificing their most valuable sources of long-term advantage.
  • Financial measures are not relevant to many levels of the organization: Financial reports are by their very nature abstractions; abstraction in this context means moving to another level and leaving certain characteristics out. When we roll up financial statements throughout the organization, that is exactly what we are doing: compiling information at higher and higher levels until it is almost unrecognizable and useless in the decision-making process of most managers and employees. Employees at all levels of the organization need performance data they can act on. This information must be imbued with relevance for their day-to-day activities.
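Of the financial innovations praised earlier, EVA at least has a compact standard form: after-tax operating profit less a charge for the capital employed. A minimal sketch with invented figures:

```python
def economic_value_added(nopat, wacc, invested_capital):
    # EVA = NOPAT minus the capital charge (WACC x invested capital);
    # positive EVA means the firm earned more than its cost of capital
    return nopat - wacc * invested_capital

eva = economic_value_added(nopat=120.0, wacc=0.10, invested_capital=1000.0)
```

The criticisms above still apply: EVA is a lagging, aggregate figure, which is precisely why the text argues for complementing it with non-financial measures.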

In modern organisations, Data Warehouse Systems are used to support performance measurement. Building a data warehouse is still very much technology-driven and does not yet offer well-established strategies and techniques for the development process.

State-of-the-art performance measurement theories have not yet been incorporated into data warehouse development. Data Warehouse Systems therefore represent mainly the traditional way of performance measurement. Today, the main design focus of Data Warehouse Systems is on customer relationship management (e.g. customer satisfaction, customer retention, new customer acquisition, customer profitability, market and account share, etc.) and financial measures (e.g. turnover, cost, margin, etc.).

In Balanced Scorecard terms, the financial perspective and customer perspective are tackled, but the internal business process perspective and the learning and growth perspective are not addressed at all. As a further step towards a Corporate Performance Measurement System, the internal business process perspective ought to be integrated into the corporate data warehouse. Besides the advantage of reusing corporate data warehouse management facilities, this leverages analysis through conformed dimensions for business process performance measurement. A single source of information on the performance of the company also avoids inconsistent measures.

Basically, we see the Balanced Scorecard as a framework for the Corporate Performance Measurement System, but not as a foundation for business process performance measurement. The Balanced Scorecard looks at business processes only insofar as they have a great impact on customer satisfaction and on achieving an organisation’s financial objectives (Kueng et al.). It is focused on corporations or organisational units such as strategic business units, but lacks a detailed and holistic business process performance measurement approach.

In the next section we discuss the various performance measurement tools used by manufacturing companies. The Balanced Scorecard (Kaplan and Norton, 1993, 1996, 2001), the Performance Prism (Neely, 2002), and the Cambridge Performance Measurement Process (Neely, 1996) are designed for business-wide implementation; other approaches include the TPM Process (Jones and Schilling, 2000), the 7-step TPM Process (Zigon, 1999), and the Total Measurement Development Method (TMDM) (Tarkenton Productivity Group, 2000).

The Contingency Theory of Performance Measurement

The Contingency Theory

The foundations of contingency theory stem from the systems approach that established itself as a popular tool for studying organisations in the 1950s. The central feature of the open systems approach is that it seeks to study the activities of an organisation by reference to the context of the wider environment in which it is set (Emmanuel et al. 1990). Whereas nearly all previous work in organisational research had been universal in approach, seeking the single best organisational solution, much of the work conducted in the late 1950s and early 1960s noted that particular forms of organisation were best suited to particular environmental conditions.

Burns and Stalker (1961) had noted the appropriateness of mechanistic and organismic forms of organisation to stable and dynamic technological environments, respectively. Woodward’s (1965) study had found it necessary to recommend different principles of management depending upon the nature of the production process. Chandler (1962) had discovered a link between the corporate strategy selected by a firm and the organisational structure appropriate to its effective implementation.

All these results indicated that there was no single form of organisation that was best in all circumstances. In its present state, the contingency theory (of organisations) may best be described as a loosely organised set of propositions which in principle are committed to an open systems view of organisation, which are committed to some form of multivariate analysis of the relationship between key organisational variables as a basis for organisational analysis, and which endorse the view that there are no universally valid rules of organisation and management (Burrell/ Morgan 1979).

The contingency theory of management accounting

The reason for considering management accounting before turning to performance measurement is that the two fields are closely related. Accounting information is provided by the accounting information system, routinely generated, transmitted through formally defined channels, and quantitative in nature; most of it is financial. Performance measures, on the other hand, also encompass qualitative information and are provided by different information systems, not only accounting ones.

In terms of their perspective, performance measures cover a wider range of both financial and non-financial information, and thus need to be treated as a separate (non-accounting) concept. However, performance measurement (management) as a field of study is still evolving into a separately identifiable academic ‘sub-discipline’ (Beasley/ Thorpe 2002). Management accounting, on the other hand, already exists as such, and reference to contingency theory can be found in the accounting literature from the mid-1970s on.

The contingency approach to management accounting is based on the premise that there is no universally appropriate accounting system that applies equally to all organisations in all circumstances (Otley 1980). Rather, it is suggested that the particular features of an appropriate accounting system will depend upon the specific circumstances in which an organisation finds itself. Thus a contingency theory must identify specific aspects of an accounting system which are associated with certain defined circumstances and demonstrate an appropriate matching (Emmanuel et al. 1990).

Based on a review of the relevant literature, Emmanuel et al. (1990) summarise three main classes of contingent factors that have been identified as influencing the design of an accounting system: the environment, organisational structure and technology. Relevant features of an organisation’s environment affecting accounting system design that have been suggested include its degree of predictability, the degree of competition faced in the market place, the number of different product/markets encountered, and the degree of hostility exhibited. Structural features proposed include size, interdependence, decentralisation and resource availability.

Technological factors include the nature of the production process, its degree of routineness, how well means-end relationships are understood and the amount of task variety. Of these, environmental factors have most often been researched.

Additional research has focused on two other factors: strategy and culture. A consideration of corporate strategy has, rather surprisingly, not been prominent in studies of control design, despite arguments that differences in corporate strategies should logically lead to differences in the design of planning and control systems (see Dent 1990). The influence of organisational culture on control systems has been researched more often; Emmanuel et al. (1990) mention some of these studies (see also Ansari/ Bell 1991).

Performance measurement and the contingency theory

Performance measurement, although extensively studied in the last two decades, has been given relatively little consideration in terms of the factors that influence the design of performance measurement systems. A brief overview of the relevant existing literature will give some insight into the topic.

Wisner and Fawcett (1991) were among the first to acknowledge the need for performance measures to be reviewed and changed to ensure that measures remain relevant. They highlight the need to re-evaluate the appropriateness of the established performance measurement systems in view of the current competitive environment. Dixon et al. (1990) argue that organisations need a process in place to ensure that measures and measurement systems are reviewed and modified as the organisations’ circumstances change.

Meyer and Gupta (1994) observe that measures tend to lose their relevance and ability to discriminate between good and bad performance over time. They argue that failure to manage this change effectively causes the introduction of new measures that are weakly correlated with those currently in place so that an organisation will have a diverse set of measures that do not measure the same thing.

According to Lynch and Cross (1995), it is important that performance measurement systems be dynamic, so that performance measures remain relevant and continue to reflect the issues of importance to the business. Bititci et al. (2000) identify the need for performance measurement systems to be dynamic to reflect changes in the internal and external environment. Similarly, Bourne et al. (2000) suggest measurement systems should be reviewed and revised. They identify the need to review targets and performance against them; individual measures as circumstances change; and the set of measures to ensure that they reflect the strategic direction.

Balanced Scorecard

What is the Balanced Scorecard?

In 1990 Robert Kaplan and David Norton carried out a year-long research project with 12 organizations at the leading edge of performance measurement. They came to the conclusion that traditional performance measures, having a financial bias and being centred on issues of control, ignored the key issue of linking operational performance to strategic objectives and communicating these objectives and performance results to all levels of the organization (Corrigan 1995). Realizing that no single measure can provide a clear performance target or focus attention on all the critical areas of business, they proposed the concept of a Balanced Scorecard as a more sophisticated approach for overcoming these shortcomings.

Kaplan and Norton are of the opinion that the Balanced Scorecard has its greatest impact when deployed to drive organizational change. In a rapidly changing environment, innovative firms are increasingly using the Balanced Scorecard to identify and communicate key factors that drive future values (Kaplan and Norton 1996) giving better indicators of where the organization is going. If companies are to survive and prosper in a competitive environment, they must use measurement and management systems derived from their strategies and capabilities.

The Balanced Scorecard can be seen as a management system that bridges the gap between strategic objectives set at the senior level within an organization, and their operational execution. This is accomplished by translating vision and strategy into objectives and measures, providing a framework to communicate this vision and strategy to employees, and thereby channelling the energies, the abilities, and the specific knowledge of people throughout the organization towards achieving long-term goals.

By developing a set of measures that gives managers a fast and comprehensive view of the organization (Kaplan and Norton 1992), the Balanced Scorecard method strives to focus the whole organization on what must be done to create breakthrough performance. The Scorecard takes the company’s vision, translates each key statement into measurable steps and then presents information so that the critical success factors can be evaluated and compared.

By measuring organizational performance across four balanced perspectives, the Balanced Scorecard complements traditional financial indicators with measures for customers, internal processes, and innovation and improvement activities (Kaplan and Norton 1996), which in turn must all be linked to the organization’s strategic vision. The BSC analysis method comprises four perspectives, as shown in Figure 1.

Figure 1. BSC analysis.
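
One lightweight way to picture the four perspectives of Figure 1 is as a simple data structure. The objectives, measures, and targets below are illustrative assumptions, not examples taken from Kaplan and Norton:

```python
# Illustrative sketch: a Balanced Scorecard as a mapping from each of the
# four perspectives to objectives, measures, and targets (all values hypothetical).
scorecard = {
    "Financial": [
        {"objective": "Grow revenue", "measure": "Revenue growth %", "target": 10},
    ],
    "Customer": [
        {"objective": "Superior service", "measure": "Inquiries answered within 24h (%)", "target": 95},
    ],
    "Internal Processes": [
        {"objective": "Reduce rework", "measure": "Defect rate %", "target": 2},
    ],
    "Learning and Growth": [
        {"objective": "Build skills", "measure": "Training hours per employee", "target": 40},
    ],
}

# A scorecard is "balanced" only if every perspective carries at least one measure.
assert all(len(entries) > 0 for entries in scorecard.values())
```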

Implementation of the Balanced Scorecard Concept

The Balanced Scorecard is a concept that can be implemented in many ways. One prerequisite is that it must be adapted, or changed, to fit a specific organization. A good Scorecard reflects the strategic plan of the organization, provides a framework that helps shape work behaviour, allows each person to measure his/her individual performance and gives data to make changes immediately so that performance is enhanced.

Strategic Implementation of the Scorecard

On the strategic level the Balanced Scorecard translates an organization’s mission and strategy into a comprehensive set of performance measures that provides the framework for a strategic measurement and management system. A successful Scorecard program demands a high level of commitment and time. External consultants or knowledgeable internal practitioners can play a critical role in launching a successful program (Kaplan and Norton 1996).

Overcoming the Vision Barrier through the Translation of Strategy

The Balanced Scorecard is ideally created through a shared understanding and translation of the organization’s strategy into objectives, measures, targets, and initiatives in each of the four Scorecard perspectives. The translation of vision and strategy forces the executive team to specifically determine what is meant by often vague and nebulous terms contained in vision and strategy statements, for example: “superior service” or “targeted customers.”

Through the process of developing the Scorecard, an executive group might determine that “superior service” means responding to inquiries within 24 hours. Thereafter, all employees can focus their energies and day-to-day activities on the crystal-clear goal of response times, rather than wondering about, and debating the definition of, “superior service.” Using the Balanced Scorecard as a framework for translating the strategy, these organizations create a new language of measurement that serves to guide all employees’ actions toward the achievement of the stated direction.
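
The “superior service” example lends itself to a tiny sketch. The response times below are made up; the point is only that a translated goal is directly computable:

```python
# Sketch (hypothetical data): once "superior service" is translated into the
# concrete goal "respond to inquiries within 24 hours", it becomes measurable.
response_hours = [3, 12, 30, 8, 22, 45, 6]  # hours taken to answer each inquiry

within_24h = sum(1 for h in response_hours if h <= 24)
compliance = within_24h / len(response_hours)

print(f"{compliance:.0%} of inquiries met the 24-hour goal")  # prints "71% ..."
```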

Cascading the Scorecard to Overcome the People Barrier

To successfully implement any strategy, it must be understood and acted upon by every level of the firm. Cascading the Scorecard means driving it down into the organization and giving all employees the opportunity to demonstrate how their day-to-day activities contribute to the company’s strategy. All organizational levels distinguish their value-creating activities by developing Scorecards that link to the highest-level organizational objectives.

By cascading, you create a “line of sight” from the employee on the front line back to the director’s office. Some organizations have taken cascading all the way down to the individual level, with employees developing personal Balanced Scorecards that define the contribution they will make to their team in helping it achieve overall objectives. Rather than linking incentives and rewards to the achievement of short-term financial targets, managers now have the opportunity to tie their team, department, or agency rewards directly to the areas in which they exert influence. All employees can then focus on the performance drivers of future value and on which decisions and actions are necessary to achieve those outcomes.

Strategic Resource Allocation to Overcome the Resource Barrier

Developing your Balanced Scorecard provides an excellent opportunity to tie resource allocation and strategy together. When you create a Balanced Scorecard, you not only think in terms of objectives, measures, and targets for each of the four perspectives, but just as critically you must consider the initiatives or action plans you will put in place to meet your Scorecard targets. If you create long-term stretch targets for your measures, you can then consider the incremental steps along the path to their achievement.

The human and financial resources necessary to achieve Scorecard targets should form the basis for the development of the annual budgeting process. No longer will departments submit budget requests that simply take last year’s amount and add an arbitrary 5 percent. Instead, the necessary costs (and profits) associated with Balanced Scorecard targets are clearly articulated in their submission documents. This enhances executive learning about the strategy, as the group is now forced (unless they have unlimited means) to make tough choices and trade-offs regarding which initiatives to fund and which to defer.
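
The contrast between “last year plus 5 percent” and target-driven budgeting can be sketched numerically. All figures and initiative names below are hypothetical:

```python
# Contrast (hypothetical numbers): incremental budgeting vs. building the
# budget from the resources that Scorecard initiatives actually require.
last_year = 1_000_000
incremental_budget = last_year * 1.05        # "last year's amount plus 5 percent"

initiatives = {                              # costs tied to Scorecard targets
    "24-hour response service desk": 180_000,
    "Employee training programme":   120_000,
    "Process automation":            260_000,
}
baseline_operations = 500_000
scorecard_budget = baseline_operations + sum(initiatives.values())

print(incremental_budget)  # arbitrary uplift
print(scorecard_budget)    # every increment traceable to a Scorecard target
```

The totals may be close, but only the second budget can justify each line against a Scorecard target, which is what forces the trade-off discussion the text describes.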

Strategic Learning to Overcome the Management Barrier

In rapidly changing environments, we all need more than an analysis of actual versus budget variances to make strategic decisions. Unfortunately, many management teams spend their precious time together discussing variances and looking for ways to correct these “defects.” The Balanced Scorecard provides the necessary elements to move away from this paradigm to a new model in which Scorecard results become a starting point for reviewing, questioning, and learning about your strategy.

The Balanced Scorecard translates your vision and strategy into a coherent set of measures in four balanced perspectives. Immediately, you have more information to consider than merely financial data. The results of your Scorecard performance measures, when viewed as a coherent whole, represent the articulation of your strategy to that point and form the basis for questioning whether your results are leading you any closer to the achievement of that strategy.

A Review of Existing Feasibility Analysis

According to Clifton and Fyffe (1997), the feasibility analysis of an industrial project is divided into four stages: identification, pre-selection, analysis, and evaluation and decision. Each stage includes the work shown in Table 1.

Limitations of Existing Feasibility Analysis Techniques

Existing feasibility analysis techniques have the following limitations.

  1. Little consideration is given to the goals and strategies of the receptor firm
  2. Little consideration is given to the organizational structures of the receptor firm

Table 1: Feasibility Analysis of Project.

Phase: Identification
Analysis method: Identification and research of problems
  • Analysis of the present industry, field survey, initial analysis of the industry, and market survey

Phase: Pre-selection
Analysis method: Informal examination; pre-feasibility study
  • Brief examination of the market
  • Demand forecasting
  • Estimation of cost and investment
  • Estimation of profit
  • Brief analysis of risk

Phase: Analysis
Analysis method: Market analysis
  • Survey of the product market
  • Analysis of past and present market demand
  • Estimation of the product’s market share and future demand
Analysis method: Technical analysis
  • Examination of the technical aspects of the project
  • Examination of production factors: materials, labour, production schedule, location, etc.
  • Estimation of manufacturing cost and investment
Analysis method: Financial analysis
  • Preparation of pro forma financial statements and cash flows
  • Calculation of ROI, ROE, and the break-even point; price analysis

Phase: Evaluation & Decision
Analysis method: Analysis of profitability
  • Estimation of the social cost and profit of the project
  • Result presentation
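
The measures named in the financial-analysis row of Table 1 rest on standard textbook formulas; here is a minimal sketch with hypothetical input figures:

```python
# Standard definitions behind the financial-analysis phase of Table 1
# (all input figures are hypothetical).
net_income   = 150_000
total_assets = 1_000_000
equity       = 600_000

roi = net_income / total_assets   # Return on Investment = 0.15
roe = net_income / equity         # Return on Equity     = 0.25

# Break-even point: the sales volume at which revenue covers all costs.
fixed_costs       = 200_000
price_per_unit    = 50
variable_per_unit = 30
break_even_units  = fixed_costs / (price_per_unit - variable_per_unit)  # 10000.0
```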

Balance in the Balanced Scorecard

As the Balanced Scorecard is adopted in the organization, one may encounter some resistance to the term itself. Some may feel the Balanced Scorecard represents the latest management fad sweeping executive suites around the nation; hence the mere mention of such a buzzword would preclude employees from accepting the tool regardless of its efficacy. This may represent a legitimate concern, depending on the fate of previous change initiatives within your organization.

But while some may prefer other monikers for the tool, such as Performance Management System, Scoreboard, or any number of others, scholars believe it is important to use the term Balanced Scorecard consistently when describing this tool, because the concept of balance is central to this system, specifically relating to three areas:

  • Balance between financial and non-financial indicators of success. The Balanced Scorecard was originally conceived to overcome the deficiencies of a reliance on financial measures of performance by balancing them with the drivers of future performance. This remains a principal tenet of the system.
  • Balance between internal and external constituents of the organization. Financial stakeholders (funders, legislators, etc.) and customers represent the external constituents represented in the Balanced Scorecard, while employees and internal processes represent internal constituents. The Balanced Scorecard recognizes the importance of balancing the occasionally contradictory needs of all these groups in effectively implementing strategy.
  • Balance between lag and lead indicators of performance. Lag indicators generally represent past performance.

Typical examples might include customer satisfaction or revenue. While these measures are usually quite objective and accessible, they normally lack any predictive power. Lead indicators, in contrast, are the performance drivers that lead to the achievement of the lag indicators. They often include the measurement of processes and activities. Response time might represent a leading indicator for the lagging measure of customer satisfaction.

While these measures are normally thought to be predictive in nature, the correlations may prove subjective and the data difficult to gather. A Scorecard should include a mix of lead and lag indicators. Lag indicators without leading measures don’t communicate how you are going to achieve your targets. Conversely, leading indicators without lag measures may demonstrate short-term improvements but don’t identify whether these improvements have led to improved results for customers, ultimately allowing you to achieve your mission.

A New Approach Using the Balanced Scorecard

The procedure of the new approach using the BSC is illustrated in Figure 2. The model was designed to consider non-financial as well as financial factors, supplementing the existing feasibility analysis with an analysis based on BSC indices. Applying this model provides a broad set of standards with which feasibility analysis methods can be standardized, which is useful in the process of integrating scattered technology trade institutions.

Figure 2: The New Approach for Evaluation of a Technology Using BSC.

Performance Prism

Historical Background

The balanced scorecard, with its four perspectives, focuses on financials (shareholders), customers, internal processes, plus innovation and learning. In doing so it downplays the importance of other stakeholders, such as suppliers and employees. The business excellence model combines results, which are readily measurable, with enablers, some of which are not. Shareholder value frameworks incorporate the cost of capital into the equation, but ignore everything (and everyone) else.

Both activity based costing and cost of quality, on the other hand, focus on the identification and control of cost drivers (non-value-adding activities and failures/non-conformances respectively), which are themselves often embedded in the business processes. But this highly process focused view ignores any other perspectives on performance – such as the opinion of shareholders, customers and employees. Conversely, benchmarking tends to involve taking a largely external perspective, often comparing performance with that of competitors or other ‘best practitioners’ of business processes.

However, this kind of activity is frequently pursued as a one-off exercise towards generating ideas for – or gaining commitment to – short-term improvement initiatives, rather than the design of a formalized ongoing performance measurement system.

How can this be? How can multiple, seemingly conflicting, measurement frameworks and methodologies exist? In fact the answer is simple. They can exist because they all add value. They all provide unique perspectives on performance. They all furnish managers with a different set of lenses through which they can assess the performance of their organizations. In some circumstances, an explicit focus on shareholder value – at the expense of everything else – will be exactly the right thing for an organization to do. In other circumstances, or even in the same organization but at a different point in time, it would be suicide.

Then, perhaps, the balanced scorecard or the business excellence model (or some combination of them) might be the answer. The new CEO of a company, with too overt a current focus on short-term shareholder value, may find these frameworks a useful vehicle to help switch attention more towards the interests of customers, investments in process improvement and the development of innovative products and services.

The key is to recognize that, despite the claims of some of the proponents of these various frameworks and methodologies, there is no one ‘holy grail’ or best way to view business performance. And the reason for this is that business performance is itself a multi-faceted concept. Nevertheless, when we talk to academics, industrialists and non-profit organizations alike, there seems to be a ‘pent-up demand’ for a multi-faceted, yet highly adaptable, new framework – a framework which will address the needs for business performance measurement within the new competitive environment of the 21st Century. The challenge: How to satisfy that demand?

What is the Performance Prism?

Our solution to the problem is a three dimensional model that we call the Performance Prism. The Performance Prism has five facets – the top and bottom facets are Stakeholder Satisfaction and Stakeholder Contribution respectively. The three side facets are Strategies, Processes and Capabilities.

Why does our model look like this and have these constituent components? Let us explain. We believe that those organizations aspiring to be successful in the long term within today’s business environment have an exceptionally clear picture of who their key stakeholders are and what they want. They have defined what strategies they will pursue to ensure that value is delivered to these stakeholders.

They understand what processes the enterprise requires if these strategies are to be delivered and they have defined what capabilities they need to execute these processes. The most sophisticated of them have also thought carefully about what it is that the organization wants from its stakeholders – employee loyalty, customer profitability, long term investments, etc. In essence they have a clear business model and an explicit understanding of what constitutes and drives good performance.

Figure 3: Performance Prism Framework.

Start with Stakeholders Not Strategies

One of the great fallacies of performance measurement is that measures should be derived from strategy. Listen to any conference speaker on the subject. Read any management text written about it. Nine times out of ten the statement will be made – “derive your measures from your strategy”. This is such a conceptually appealing notion, that nobody stops to question it. Yet to derive measures from strategy is to misunderstand fundamentally the purpose of measurement and the role of strategy.

Performance measures are designed to help people track whether they are moving in the direction they want to. They help managers establish whether they are going to reach the destination they set out to reach. Strategy, however, is not about destination. Instead, it is about the route you choose to take – how to reach the desired destination.

Organisations adopt particular strategies because they believe those strategies will help them achieve a specific, desirable end goal. Amazon.com, the original internet book retailer, has not started to expand into CD sales, toys and home improvement products, just because they feel like expanding their product portfolio. They have deliberately decided to leverage their e-commerce and operational expertise – their core processes and capabilities – to extend the range of products they sell beyond books because they want to increase sales revenues and, in the longer term, enhance shareholder returns. Expanding into CD sales and other product lines is the strategy they hope will enable them to achieve these objectives.

At one level this is a semantic argument. Indeed, the original work on strategy, carried out in the 1970s by Andrews, Ansoff and Mintzberg, asserted that a strategy should explain both the goals of the organization and a plan of action to achieve these goals. Today, however, the vast majority of organizations have strategies that are dominated by lists of improvement activities and management initiatives – e.g. grow market share in Asia, extend the product range, seek new distribution channels.

While these are undoubtedly of value, they are not the end goal. These initiatives and activities are pursued in the belief that, when implemented, they will enable the organization to better deliver value to its multiple stakeholders – investors, customers and intermediaries, employees, suppliers, regulators and communities – all of whom will have varying importance to the organization in question. The first and fundamental perspective on performance then is the stakeholder perspective.

It is no accident that the balanced scorecard starts by asking, “What do the shareholders want?” Undoubtedly, as already mentioned, for many organizations the shareholders are the most important stakeholders. Throughout the 1980s and 1990s, however, there has been growing recognition of other stakeholder groups, most notably customers – hence the customer perspective on the balanced scorecard – and employees, who are often subsumed on the balanced scorecard under either the internal processes or the innovation and learning perspectives. For manufacturing and many service businesses, suppliers are also an essential stakeholder group to consider. Hence their inclusion in the revised version of the business excellence model, although interestingly not (so far) on the balanced scorecard.

As companies outsource ever increasing amounts of non-core activity, they become more and more dependent upon their suppliers. Today, Boeing manufactures only three components on a 777. Its reliance on suppliers for components and spares is immense and its exposure, should its suppliers fail to perform, cannot be overestimated.

Perhaps nowhere is this phenomenon more pronounced than in eCommerce transacted on the internet, where intermediaries – quasi customers or suppliers – are often highly involved in the sales and logistics activities required to deliver the product or service offered. A further emerging stakeholder aspect of the eCommerce revolution is that the use of organizations called ‘complementors’ is becoming common practice. Complementors are alliance partners that provide an enterprise with products and services that extend the value of that enterprise’s own customer offering. This often involves co-branding or building complementary products. Although not exclusive to dot.com industries, complementors are increasingly becoming a key component of internet companies’ armoury.

If complementors’ wants and needs are not catered for, they are likely to take their alliance elsewhere. In addition to these ‘conventional’ stakeholders, recent developments have resulted in two other groups gaining increasing power and prominence. The first is the regulatory and legal community. In the UK, Ian Byatt, the Water Industry regulator announced in November 1999 that the UK’s water companies would be expected to reduce their prices by 12% on average over the course of the next twelve months.

Some companies will be required to reduce their prices more than others, because of their failure to deliver in the preceding five years against specific customer service goals defined by the regulator and his team. The goals defined by the regulator do not necessarily relate to the individual water companies’ strategies. They are not necessarily the goals the water companies would have chosen for themselves, but given that the regulator’s ruling is expected to cost the Water Industry between £800 and £850 million in lost operating profits next year, it is easy to see why delivering the performance the regulator requires – i.e. ensuring regulator satisfaction – is key for certain companies.

Neither is regulatory compliance confined to recently privatized industries. There has been a significant trend in recent years for regulatory bodies, such as the European Commission and the U.S. Justice Department, to take a far more active interest in companies that abuse their competitive position. Punitive fines and individual jail sentences have been handed out to companies and their personnel involved in pricing cartels and other less obvious antitrust practices. Those ‘named and shamed’ for such practices include a litany of such bastions of international business as Coca-Cola, Microsoft, Hoechst, Roche, Volkswagen, British Airways, Unilever, plus many other ‘household names’ and less well-known corporations.

The final set of stakeholders is even more fascinating and, in many ways, even more difficult to satisfy because of the potential diversity of their wants and needs. Pressure groups, such as Greenpeace and Friends of the Earth, have become enormously influential through their awesome communications ability. For instance, in two celebrated cases, they first managed to prevent Shell from sinking the Brent Spar oil platform in the Atlantic Ocean and, more recently, have managed to remove genetically modified foods from the European menu, much to the Monsanto Company’s dismay.

Monsanto’s chairman subsequently admitted that the pressure groups had done a far better job of marketing than the company had done. And the source of their marketing and communications ability was the internet. The internet offers unprecedented power to anyone who has an interest in the performance of an organization. Take, for example, the “McLibel” case. In 1990, McDonalds took two unemployed protestors to court over allegations that they made in leaflets which they were handing out on the street. Despite the fact that the two protestors had no legal experience between them, they decided to defend themselves.

They kept McDonalds in court for 300 days, during which time supporters of the protestors set up the McLibel web pages, detailing McDonalds’ alleged misdemeanours. These pages received over 35,000 hits in one 24 hour period alone.

Building a Multi-Faceted Business Performance Model

So, as we have seen, the first perspective on performance is the stakeholder satisfaction perspective. What managers have to ascertain here is who the most influential stakeholders are and what they want and need. Once these questions have been addressed, it is possible to turn to the second perspective on performance – strategies. The key question underlying this perspective is: what strategies should the organization adopt to ensure that the wants and needs of its stakeholders are satisfied? In this context, the role of measurement is fourfold. First, measures are required so that managers can track whether or not the strategies they have chosen are actually being implemented.

Second, measures can be used to communicate these strategies within the organization. Third, measures can be applied to encourage and incentivize implementation of strategy. Fourth, once available, the measurement data can be analyzed and used to challenge whether the strategies are working as planned (and, if not, why not).

The old adages “you get what you measure” and “you get what you inspect, not what you expect”, contain an important message. People in organizations respond to measures. Horror stories abound of how individuals and teams appear to be performing well, yet are actually damaging the business.

When telesales staff are monitored on the length of time it takes them to deal with customer calls, it is not uncommon to find them cutting people off mid-call, just so the data suggest that they have dealt with the call within 60 seconds. Malevolently or not, employees will tend towards adopting ‘gaming tactics’ in order to achieve the target performance levels they have been set. Measures send people messages about what matters and how they should behave.

When the measures are consistent with the organization’s strategies, they encourage behaviors that are consistent with strategy. The right measures then not only offer a means of tracking whether strategy is being implemented, but also a means of communicating strategy and encouraging implementation.

Many of the existing measurement frameworks and methodologies appear to stop at this point. Once the strategies have been identified and the right measures established it is assumed that everything will be fine. Yet studies suggest that some 90% of managers fail to implement and deliver their organization’s strategies. Why? There are multiple reasons, but a key one is that strategies also contain inherent assumptions about the drivers of improved business performance.

Clearly, if the assumptions are false, then the expected benefits will not be achieved. Without the critical data to enable these assumptions to be challenged, strategy formulation (and revision) is largely predicated on ‘gut feel’ and management theory. Measurement data and its analysis will never replace executive intuition, but it can be used to greatly enhance the making of judgments and decisions. A key judgment is of course whether an organization’s strategy and business model remains valid.

A second key reason for strategic failure is that the organization’s processes are not aligned with its strategies – or, even where the processes are aligned, the capabilities required to operate them are not. Hence the next two perspectives on performance are the processes and capabilities perspectives. In turn, these require the following questions to be addressed – “What processes do we need to put in place to allow the strategies to be executed?” and “What capabilities do/shall we require to operate these processes – both now and in the future?”

Again, measurement plays a crucial role by allowing managers to track whether or not the right processes and capabilities are in place, to communicate which processes and capabilities matter, and to encourage people within the organization to maintain or proactively nurture these processes and capabilities as appropriate. This may involve gaining an understanding of which particular business processes and capabilities must be competitively distinctive (“winners”), and which merely need to be improved or maintained at industry standard levels (“qualifiers”).

Business Processes have received a good deal of attention in the 1990s with the advent of Business Process Re-engineering. Business Processes run horizontally across an enterprise’s functional organization until they reach the ultimate recipient of the product or service offered – the customer. Michael Hammer, the re-engineering guru, advocates measuring processes from the customer’s point of view – the customer wants it fast, right, cheap and easy (to do business with).

But is it really as simple as that? There are often many stages in a process. If the final output is slow, wrong, expensive and unfriendly, how will we know which component(s) of the process are letting it down? What needs to be improved? In the quest for data (and accountability), it is easy to end up measuring everything that moves, but learning little about what is important. That is one reason why processes need owners – to decide which measures are important, which metrics will apply, how frequently they will be measured, and by whom – so that judgments can be made upon analysis of the data and actions taken.

Processes cannot function on their own, however. Even the most brilliantly designed process needs people with certain skills, some policies and procedures about the way things are done, some physical infrastructure for it to happen and, more than likely, some technology to enable or enhance it. In fact, capabilities can be defined as the combination of an organization’s people, practices, technology and infrastructure that collectively represents that organization’s ability to create value for its stakeholders through a distinct part of its operations.

Very often that distinct part will be a business process, but it could also be a brand, a product/service or an organizational element. Measurement will need to focus on those critical component elements that make it distinctive and allow it to remain distinctive in the future. Competitive benchmarks will be needed in order to understand the size of the gap, for competitors will be seeking ways to create value for a very similar, if not identical, set of stakeholders.

The fifth, and final, perspective on performance is a subtle but critical twist on the first. For it is the “stakeholder contribution”, as opposed to “stakeholder satisfaction”, perspective. Take, for example, customers as stakeholders. In the early 1980s, organizations began to measure customer satisfaction by tracking the number of customer complaints they received. When research evidence started to show that only about 10% of unsatisfied customers complained, organizations moved to more sophisticated measures, such as customer satisfaction surveys. In the late 1980s and early 1990s, people began to question whether customer satisfaction was enough.

Research data gathered by Xerox showed that customers who were very satisfied were five times more likely to repeat their purchase in the next 18 months than those who were merely satisfied. This, and similar observations, resulted in the development of the concept known as customer loyalty. The aim of this concept was to track whether customers:

  • came back to buy more from the same organization, and
  • recommended the organization to others.
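These two loyalty questions lend themselves to simple rate measures. The following sketch is purely illustrative – the survey records and field names are hypothetical inventions, not data from the Xerox study:

```python
# Hypothetical loyalty-survey records: did the customer buy again within
# 18 months, and did they recommend the organization to others?
responses = [
    {"repeat": True,  "recommended": True},
    {"repeat": True,  "recommended": False},
    {"repeat": False, "recommended": False},
    {"repeat": True,  "recommended": True},
]

def rate(responses, question):
    """Percentage of respondents answering 'yes' to a loyalty question."""
    return 100.0 * sum(r[question] for r in responses) / len(responses)

print(f"Repeat purchase rate: {rate(responses, 'repeat'):.0f}%")       # 75%
print(f"Recommendation rate:  {rate(responses, 'recommended'):.0f}%")  # 50%
```

Tracking these two rates over time, rather than satisfaction scores alone, is what distinguishes loyalty measurement from satisfaction measurement.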

Even more recently, research data from a variety of industries have demonstrated that many customers are not profitable for organizations. Other data illustrate that increased levels of customer satisfaction can result in reduced levels of organizational profitability, because of the high costs of squeezing out the final few customer satisfaction percentage points. The reaction has been increasing interest in the notion of customer profitability.

Sometimes the customer profitability data produces surprises for the organization, indicating that a group of customers thought to be quite profitable are in fact loss-makers and that other customer groups are far more profitable than generally believed by the organization’s executives. Performance data allow assumptions to be challenged.
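Such an analysis can be sketched in a few lines. The example below is a hypothetical illustration – the customer names, revenues and cost-to-serve figures are invented – not a prescribed method:

```python
# Hypothetical customer records: revenue earned versus the full cost of
# serving each customer (sales visits, support calls, special handling).
customers = [
    {"name": "Alpha Ltd", "revenue": 120_000, "cost_to_serve": 95_000},
    {"name": "Beta plc",  "revenue": 45_000,  "cost_to_serve": 52_000},
    {"name": "Gamma Co",  "revenue": 30_000,  "cost_to_serve": 12_000},
]

def profitability(customer):
    """Net contribution of a customer after the cost of serving them."""
    return customer["revenue"] - customer["cost_to_serve"]

# Rank customers from most to least profitable. A large account (high
# revenue) can prove a loss-maker once its cost to serve is counted.
ranked = sorted(customers, key=profitability, reverse=True)
for c in ranked:
    print(f'{c["name"]}: {profitability(c):+,}')
```

Here the mid-sized account turns out to be the loss-maker, which is exactly the kind of surprise such analyses produce.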

The important point, and the subtle twist, is that customers do not necessarily want to be loyal or profitable. Customers want great products and services at a reasonable cost. They want satisfaction from the organizations they choose to use. It is the organizations themselves that want loyal and profitable customers. So it is with employee satisfaction or supplier performance too. For years, managers have struggled to measure supplier performance.

Do they deliver on time? Do they send the right quantity and quality of goods? Do they deliver them to the right place? But these are all dimensions of performance that the organization requires of its supplier. They encapsulate the supplier’s contribution to the organization. Supplier satisfaction is a completely different concept. If a manager wanted to assess supplier satisfaction then he or she would have to ask: Do we pay on time? Do we provide adequate notice when our requirements change? Do we offer suppliers forward schedule visibility? Do our pricing structures allow our suppliers sufficient cash flows for future investment and, therefore, ongoing productivity improvement? Could we be making better use of the vendor’s core capabilities?

The key message here is that all organizations require certain things of their stakeholders and all organizations are responsible for delivering certain things to all of their stakeholders. What drives shareholder satisfaction? – Dividends, share price growth, predictable results, etc. Unpleasant surprises erode investors’ confidence in the management team. What do organizations want of their shareholders? – Capital, reasonable risk-taking, long term commitment, etc. This fifth and final perspective on performance – the notion of stakeholder contribution – is a vital one, because it explains why there is so much confusion around the concept of stakeholders in the literature.

We would suggest that gaining a clear understanding of the ‘dynamic tension’ that exists between what stakeholders want and need from the organization, and what the organization wants and needs from its stakeholders, can be an extremely valuable learning exercise for the vast majority of corporations and, especially, their respective business units.

Figure 4: Stakeholder and organizational Needs.

Applying the Performance Prism to Measures Design

Five distinct, but logically interlinked, perspectives on performance have been identified together with five key questions for measurement design:

  • Stakeholder Satisfaction – who are the key stakeholders and what do they want and need?
  • Strategies – what strategies do we have to put in place to satisfy the wants and needs of these key stakeholders?
  • Processes – what critical processes do we require if we are to execute these strategies?
  • Capabilities – what capabilities do we need to operate and enhance these processes?
  • Stakeholder Contribution – what contributions do we require from our stakeholders if we are to maintain and develop these capabilities?

As we have seen, these five perspectives on performance can be represented in the form of a prism. A prism refracts light. It illustrates the hidden complexity of something as apparently simple as white light. So it is with the Performance Prism. It illustrates the complexity of performance measurement and management. Single dimensional, traditional frameworks pick up elements of this complexity.

While each of them offers a unique perspective on performance, it is essential to recognize that this is all that they offer – a single, uni-dimensional perspective on performance. Performance, however, is not uni-dimensional. To understand it in its entirety, it is essential to view it from the multiple and interlinked perspectives offered by the Performance Prism.

Figure 5: Delivering Stakeholder Value.

Performance Measurement Matrix

The development of Keegan et al.’s (1989) performance measurement matrix was based on the above concept by addressing the non-financial aspects of organizational performance (see Figure 6). Accordingly, it incorporates cost, non-cost, external and internal factors that influence organizational performance. However, the links between these categories are not explicitly described and this is identified as one of the main weaknesses of the matrix.

As with the balanced scorecard, the strength of the performance measurement matrix lies in the way it seeks to integrate different classes of business performance – financial and non-financial, internal and external. The matrix, however, is not as well packaged as the balanced scorecard and does not make explicit the links between the different dimensions of business performance, which is arguably one of the greatest strengths of Kaplan and Norton’s balanced scorecard.

Figure 6: Performance Measurement Matrix.

Results and Determinants Framework

Similar to Kaplan and Norton’s BSC, Fitzgerald et al. (1991) developed another PM framework by considering leading and lagging performance measures (see Figure 7). This framework specifically targets performance measurement in the service sector. It identifies six performance dimensions: two measure the results (lagging indicators) of competitive success (competitiveness and financial performance), while the other four measure the determinants (leading indicators) of competitive success (quality of service, flexibility, resource utilization and innovation).

Figure 7: Results and determinants framework.

European Foundation for Quality Management Model

Quality and change are similar concepts because they both imply movement and are not finite states. Organisations that adopt quality programs generally speak of a ‘quality journey’ to indicate that they will continue to adapt and improve in response to change, particularly to the changing requirements of customers.

‘Quality’ has many definitions and proponents of quality systems use different terminology such as ‘quality assurance’, ‘total quality control’ and ‘continuous improvement’. The preferred terminology in the context of this paper is ‘total quality management’ or TQM.

In their discussion of quality management systems, Magd and Curry[2] cite Ho’s definition of TQM: “TQM provides the overall concept that fosters continuous improvement in an organization. The TQM philosophy stresses a systematic, integrated, consistent, organization-wide perspective involving everyone and everything. It focuses primarily on total satisfaction for both the internal and external customers within a management environment that seeks continuous improvement of all systems and processes.”

The TQM philosophy is customer oriented and is as applicable to libraries as it is to other organizations. Excellent service, according to TQM theory, is regarded as the responsibility of all employees. That sense of responsibility can be achieved through the involvement of all members of the organization in planning, implementing, monitoring and improving.

The European Foundation for Quality Management model (EFQM) is another framework which was developed on the basis of determinants (enablers) and results indicators similar to the Fitzgerald et al. (1991) PM framework. The EFQM model works according to the principle that excellent results with respect to Performance, Customers, People and Society are achieved through Leadership driving Policy and Strategy, that is delivered through People, Partnerships and Resources and Processes (The European Foundation for Quality Management, 2000).

The model consists of five “Enablers” – criteria that the organisation can manipulate – and four “Results” – what an organisation will achieve. The enabler criteria are concerned with how the organisation undertakes key activities, while the results criteria are concerned with what results will be achieved. The model is widely used to carry out quality management and the self-assessment of organisations. However, the terms used in the EFQM model are open and can be interpreted in a number of ways (Neely et al., 2000), thus increasing the number of performance measures within each category. This leads to the problem of selecting and relying on the appropriate performance measure for the organisation.

Strategy and Success Maps

‘Success and strategy maps’ are developed by considering the transformation of organizational resources and the stocks of these resources. They show the causal relationships between the different perspectives and provide a good visual representation of the organizational objectives and their performance drivers. A strategy map is constructed by extending the four perspectives of Kaplan and Norton’s BSC. By translating the organization’s strategy into a “logical architecture of a strategy map, organizations create a common and understandable point of reference for all organizational units and employees” (Kaplan and Norton, 2001).

This shows how the employees’ jobs are linked to the overall objectives of the organization (Kaplan and Norton, 2000). Therefore a strategy map can be considered as a strong communication tool which can help an organization to achieve its strategy. Furthermore, strategy maps demonstrate how an organization can convert its resources (including the intangible resources such as employee knowledge) into tangible outcomes (Kaplan and Norton, 2000). However, Neely et al. (2003) argue that if the strategy map is limited to the four perspectives of the BSC then it has the drawback of not addressing all the stakeholder groups of an organization.

Success maps are developed by extending the five perspectives of the performance prism. Similar to the performance prism, the success map also takes a broader view of the stakeholders of an organization. In addition to the success map, Neely et al. (2002) propose to map the likely risks or failures of an organization. By doing so, the organization can identify the critical failure points which can harm the organization’s performance (Neely et al., 2003).

SMART Pyramid

The SMART pyramid, developed at Wang Laboratories (Lynch and Cross, 1991), addresses the need to include measures that are focussed both internally and externally. It follows the concept of cascading measures down from the organization to the department and on to the work-centre level, reflecting the corporate vision as well as internal and external business unit objectives. The four levels of the pyramid embody the corporate vision, the accountability of the business units, competitive dimensions for business operating systems, and specific operational criteria.

MUSiC Performance Measurement Method

The MUSiC Performance Measurement Method has been developed and refined to meet the demands of commercial application. It follows the basic principles of usability engineering and testing, which are well established in theory but too often absent in practice. It is supported by a range of tools which can be chosen and used according to specific development needs, budget and timescales. The method draws on Usability Context Analysis (described elsewhere).

Other supporting tools include the Performance Measurement Handbook (Rengger et al 1993), and DRUM software which facilitates management of evaluation data, task analysis, video protocol analysis and derivation of measures. The approach is backed by training and accreditation in the proper application of the method and the tools to support it. Where appropriate, it is also backed by consultancy on how to integrate method and tools into system development and evaluation, and by help in establishing a quality system for usability engineering.

The basic outputs, derived in all versions of the method, are measures of:

  • Effectiveness – how correctly and completely goals are achieved in context.
  • Efficiency – effectiveness related to the cost of performance (calculated as effectiveness per unit of time).

Optional outputs of the full, video supported method include further measures and diagnostic data:

  • Relative User Efficiency – an indicator of learnability (relating the efficiency of specific users to that of experts).
  • Productive Period – the proportion of time spent not having problems.
  • Snag, Search and Help times – time spent overcoming problems, searching unproductively through a system, and seeking help.

These problem-related measures are valuable sources of diagnostic data about specific areas where designs fail to support adequate performance. In use, the method provides pointers to causes of problems.
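The relationships between these measures can be expressed as simple arithmetic. The sketch below assumes the usual MUSiC composition of effectiveness from the quantity and quality of task output; all figures are invented for illustration:

```python
def effectiveness(quantity_pct, quality_pct):
    """Task effectiveness: how completely (quantity) and how correctly
    (quality) the task goals were achieved, as a percentage."""
    return quantity_pct * quality_pct / 100.0

def efficiency(effectiveness_pct, task_time_min):
    """User efficiency: effectiveness per unit of task time."""
    return effectiveness_pct / task_time_min

def relative_user_efficiency(user_eff, expert_eff):
    """Learnability indicator: a user's efficiency relative to an expert's."""
    return 100.0 * user_eff / expert_eff

def productive_period(task_time_min, unproductive_time_min):
    """Proportion of time spent NOT having problems (snag, search, help)."""
    return 100.0 * (task_time_min - unproductive_time_min) / task_time_min

# Example: a user achieves 90% of the task goals at 80% quality in 12
# minutes, 3 of which were lost to snags, unproductive search and help.
eff = effectiveness(90, 80)        # 72.0 %
user_eff = efficiency(eff, 12)     # 6.0 % per minute
expert_eff = efficiency(100, 8)    # an expert: full marks in 8 minutes
print(relative_user_efficiency(user_eff, expert_eff))  # 48.0
print(productive_period(12, 3))                        # 75.0
```

The worked figures show how the same observation session yields both a learnability indicator (the user operates at 48% of expert efficiency) and a diagnostic one (a quarter of the session was unproductive).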

The quantitative data enable comparison of user-based performance – at a prototype stage, or in acceptance testing – against performance with earlier versions of a system, with alternative designs, with competing products, or with other ways of achieving the same work goals. The diagnostic information helps in identifying just where improvements need to be made in further developing a prototype or future versions of a system. It also helps inform about the level and specific content of training required for particular classes of user.

The basic outputs (measures of effectiveness and efficiency) can be arrived at without the use of video by following the minimal version of the method, the Basic MUSiC Performance Measurement Method. This relies on accurate observation in real time, and employs manual timing of task performance, together with assessment of task outputs. It takes less effort than the full method, but gives reduced diagnostic data. The Basic method may be employed in quick, small-scale evaluations, or where the budget is very constrained. It has the disadvantage of failing to capture an accurate, reviewable record of the specific incidents which reveal where users encounter problems.

The full, video supported version of the method can be tailored to individual evaluation needs, for example by choosing not to analyse productive period – thus reducing analysis time – or by adding specific measures such as counts of errors. While ad hoc additional measures may meet the requirements of individual evaluations, they should be interpreted with circumspection unless their validity and reliability have been assessed.

Applying the Performance Measurement Method

Any evaluation should have clearly stated aims. Since the outputs of the MUSiC Performance Measurement Method span a range of optional measures and diagnostic data, it is essential in applying the method to start with a set of specific evaluation objectives, and, if possible, performance targets. Data collection for its own sake has little merit in commercial usability work, where tight timescales and budgetary constraints demand tightly focused evaluations giving results which are specifically applicable in informing development, (re)design or selection of systems and alternative work practices.

Major considerations in applying the method are timing and resources, both human and financial. If the aim is simply comparison of available systems, or acceptance testing of an implemented system, then the method can be applied to use of a fully developed system either in a laboratory setting which reflects the intended context of use, or in a pilot implementation. This can provide valuable information about relative performance with different systems, and about required levels of training and user support.

However, applying the method in this manner (when a design has been finalized) misses many of the benefits of shaping a system more closely to meet user needs, skills, capabilities and expectations. The principal use of the method is as an integral part of development processes based around prototyping and iterative improvement. This means that the usability testing program should run from early in development to final delivery.

The diagnostic data inform the development, and the performance measures indicate the degree of progress towards meeting usability targets. The initial steps in the approach – and particularly the context analysis upon which the method draws – are best carried out before a prototype is produced. The involvement of development staff and user communities in the testing process gives significant benefits in informing developers about the performance of different designs in users’ hands, and in building positive attitudes towards forthcoming systems in the minds of user communities.

One trade-off which exercises the minds of many usability professionals concerns timing and the degree of fidelity of prototypes. Evaluations early in development necessarily involve lower fidelity prototypes. Early evaluations offer many benefits. The earlier the evaluation, the easier it is to make changes in the design, yet less reliance can be put on the findings since the system tested will deviate more from a realistic context of use. This is not just a matter of the appearance of the prototype, but also the scope of its functionality, speed of response, the influences of other networked devices and access to on-line databases, etc., all of which can significantly affect usability.

In general, early prototypes can inform about broader design considerations, but the detailed factors which affect final usability can only be assessed with high fidelity prototypes or pilot implementations. However, if testing commences with a high fidelity prototype, this may fix design ideas in the minds of the development team which are subsequently more difficult to change. Ideally, the approach should embrace both low and high fidelity prototypes, with a progression from broad changes to fine tuning.

Figure 8: Steps and Tools used in the Performance Measurement Method.

Figure 8 summarizes the sequence of steps required to evaluate usability and derive measures by the Performance Measurement Method. The right-hand column of the figure shows the MUSiC tools providing guidance and support at each stage, and outlining the necessary procedures. Steps 4 to 7 may be repeated as development progresses. Tests typically will involve only a subset of the whole system, and repeat tests with modified system designs may focus on those specific areas where a high-priority need for improvement has been identified.

For evaluations to be run smoothly and efficiently, it is most convenient for studies to be carried out in a usability laboratory, which allows prototype systems to be studied without undue disruption of the client’s commercial work. Alternatively, data can be captured in the workplace if, for example, key factors in an information system or its environmental setting cannot adequately be replicated in a laboratory. It is important to note that the method is not based around capturing spontaneously occurring samples of data, either in the workplace or the laboratory. That is a methodological minefield beyond the scope of this discussion.

The MUSiC approach to performance measurement is based on the analysis of pre-selected representative scenarios. Wherever the data are recorded, it is essential first to identify appropriate evaluation tasks and user profiles which meet the specific objectives of the evaluation. These are agreed as a result of a context study by the usability team, together with other stakeholders, who also define assessment rules for task output, for determining correctness and completeness of task goal achievement.

A major practical consideration in planning an evaluation is the availability of users matching the required profile. Finding representative users is of great importance in any user-based evaluation, and may require careful pre-planning. It may, for example, be necessary to provide training in the use of a new or revised system, which matches the training users will have received on roll-out of the system. Typically, the usability team takes responsibility for user arrangements, and for preparing task instructions.

Other Approaches to Performance Measurement

Throughout the 1970s and 1980s the measures traditionally used by businesses were subject to highly vocal criticism from influential figures, such as Berliner and Brimson (1988); Hayes and Abernathy (1980); Johnson and Kaplan (1987); Kaplan (1983, 1984) and Skinner (1971). These criticisms resulted in several innovations. New methods of product costing, for example, Activity Based Costing and Through-put Accounting, were developed (Cooper and Kaplan, 1988; Galloway and Waldron, 1988a, 1988b, 1989a, 1989b).

Alternative means of valuing businesses and brands, such as Shareholder Value Analysis and Brand Valuation, were proposed (Ambler and Kokkinaki, 1998; Rappaport, 1998; Stewart, 1991). Research studies, which explored explicitly the information needs of managers and investors, were undertaken (Mavrinac and Siesfeld, 1997; McKinnon and Bruns, 1992).

Keystroke-Level Model

Many established methods for work measurement involve collecting detailed measures of times taken to perform elements of work: specific activities. These are typically gained by observing and timing the performance of work by large numbers (hundreds or even thousands) of staff. Larger organizations may have their own work measurement teams who carry out long-term studies, with the intention of informing management decisions about anticipated future requirements for staffing levels. The individual activities which are timed may be of only a few seconds duration. Measures relevant to office information systems often focus on clerical activities, and some methods (e.g. Clerical Work Measurement) draw on libraries of data about timings of common clerical tasks, collected from various organizations.

Such approaches may in some circumstances give very useful approximations of performance times, but they have immediately apparent disadvantages when applied to novel ways of performing work tasks. They are essentially “bottom-up”, yielding additive estimates of times for composite tasks simply from the sum of timings for their component activities. Typically they assume an idealized way of carrying out the task. An analogous approach in the field of human computer interaction is the Keystroke-Level Model (K-LM: Card et al., 1980).

This is a method for estimating task performance times for interaction between individuals and computer systems, which assumes error-free expert performance. The K-LM incorporates an explicit model of the user’s cognitive processing time, represented by adding a time allowance for thinking at key points which the analyst must identify in the interaction. Clerical work measurement methods typically do not make explicit allowance for cognitive processes, although in some cases they may add a time element for error recovery. Alternatively, both these factors may simply be reflected in the mean performance times for some activities.
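The additive logic of the K-LM can be sketched in a few lines. The operator times below are the standard values published by Card et al. (1980); the task sequence itself is a hypothetical example invented for illustration, not one from their study.

```python
# Keystroke-Level Model (K-LM) sketch, after Card, Moran & Newell (1980).
# Operator times are the published standard values (seconds); the task
# sequence below is hypothetical.

OPERATOR_TIMES = {
    "K": 0.2,   # keystroke (average skilled typist)
    "P": 1.1,   # point with the mouse to a target
    "H": 0.4,   # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation, placed by the analyst
}

def klm_estimate(operators):
    """Sum the time allowances for a sequence of K-LM operators."""
    return sum(OPERATOR_TIMES[op] for op in operators)

# Hypothetical task: point at a field, home to keyboard, think, type 8 keys.
task = ["P", "H", "M"] + ["K"] * 8
print(round(klm_estimate(task), 2))  # 4.45
```

The analyst's main judgment is where to place the M operators, which is exactly the explicit cognitive allowance that distinguishes the K-LM from clerical work measurement.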

A major disadvantage of traditional approaches to performance measurement is that analysts must rely on historical performance data to draw inferences about future work performance with as-yet-unimplemented systems. However, the future performance will be affected by different contextual factors and higher cognitive factors in the users, which such methods simply ignore. From a user’s viewpoint the quality of use of interactive systems and the productivity of computer-supported work are affected by specific design decisions at every level. Where new systems are being developed these traditional methods do little to help developers shape many key design decisions which will affect work efficiency and user satisfaction.

Time-Based Competition

Some authors and organizations have attempted to be even more prescriptive, by proposing very detailed and specific measurement frameworks. Azzone et al. (1991), for example, developed the framework shown in Table 2, which seeks to identify the measures most appropriate for organizations that have chosen to pursue a strategy of time-based competition. The Institute of Chartered Accountants of Scotland (ICAS) has also developed a detailed performance measurement framework, based on the different ways in which businesses use performance measures, for:

  • Business planning; and
  • Monitoring operations.

Table 2: Measures for time-based competition (Azzone et al., 1991).

R&D
  • Internal configuration: engineering time; delta average time between two subsequent innovations.
  • External configuration: number of changes in projects; development time for new products.

Operations
  • Internal configuration: through-put time; distance travelled; value-added time (as a percentage of total time); schedule attainment; manufacturing cost.
  • External configuration: adherence to due dates; incoming quality; outgoing quality.

Sales and Marketing
  • Internal configuration: order processing lead time; size of batches of information.
  • External configuration: complexity of procedures; cycle time; bid time.

Du Pont Pyramid of Financial Ratios

To develop their frameworks, ICAS (1993) prepared a master list of all the financial and non-financial performance measures that they uncovered during a substantial review of the literature and then mapped them on to two tree-diagrams. Similarities between the ICAS frameworks and the Du Pont Powder Company’s Pyramid of Financial Ratios, shown in Figure 9, can be observed. This is not surprising given that Du Pont is widely recognized as the founder of financial performance measurement: in 1903, three Du Pont cousins consolidated their small enterprises with many other small single-unit family firms.

They then completely reorganized the American explosives industry and installed an organizational structure that incorporated the “best practice” of the day. The highly rational managers at Du Pont continued to perfect these techniques, so that by 1910 that company was employing nearly all the basic methods that are currently used in managing big business (Chandler, 1977, p. 417).

Du Pont Pyramid of Financial Ratios.
Figure 9: Du Pont Pyramid of Financial Ratios.
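The apex of the Du Pont pyramid, return on investment, factors into a profit-margin ratio and an asset-turnover ratio, which is what allows the lower levels of the pyramid to explain movements at the top. A minimal sketch, using invented figures:

```python
# Du Pont decomposition: ROI factors into an income-statement ratio
# (profit margin) and a balance-sheet ratio (asset turnover).
# The figures below are invented for illustration.

def du_pont_roi(net_income, sales, total_assets):
    margin = net_income / sales      # profitability of each sales dollar
    turnover = sales / total_assets  # how hard the asset base is worked
    return margin * turnover         # algebraically net_income / total_assets

roi = du_pont_roi(net_income=120_000, sales=1_500_000, total_assets=800_000)
print(f"{roi:.1%}")  # 15.0%
```

Because sales cancels out, a change in ROI can always be traced to either the margin branch or the turnover branch of the pyramid.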

Inputs, Processes, Outputs, Outcomes

The performance measurement frameworks discussed so far have tended to be hierarchical in orientation. There are, however, several frameworks that encourage executives to pay attention to the horizontal flows of materials and information within the organization, i.e. the business processes, most notably those proposed by Brown (1996) and Lynch and Cross (1991). Brown’s framework, which is shown in Figure 10, is useful because it highlights the difference between input, process, output and outcome measures.

He uses the analogy of baking a cake to explain this more fully. Input measures would be concerned with volume of flour, quality of eggs, etc. Process measures would be concerned with oven temperature and length of baking time. Output measures would be concerned with the quality of the cake. Outcome measures would be concerned with the satisfaction of the cake eaters – i.e. was the cake enjoyable?

Inputs, processes, outputs, outcomes.
Figure 10: Inputs, processes, outputs, outcomes.
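Brown's four categories can be sketched as a simple classification of measures, here using the cake analogy from the text; the data structure and function names are illustrative only.

```python
# Brown's input/process/output/outcome distinction, illustrated with his
# cake analogy. The mapping is a toy data structure; the measure names
# come from the analogy in the text.

from collections import defaultdict

CAKE_MEASURES = {
    "volume of flour": "input",
    "quality of eggs": "input",
    "oven temperature": "process",
    "length of baking time": "process",
    "quality of the cake": "output",
    "satisfaction of the cake eaters": "outcome",
}

def by_category(measures):
    """Group measure names under Brown's four categories."""
    grouped = defaultdict(list)
    for measure, category in measures.items():
        grouped[category].append(measure)
    return dict(grouped)

print(by_category(CAKE_MEASURES)["process"])
# ['oven temperature', 'length of baking time']
```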

Lynch and Cross’s Performance Pyramid

While it is conceptually appealing and undoubtedly a useful way of explaining the difference between input, process, output and outcome measures, Brown’s framework falls at one extreme of a continuum stretching from hierarchical to process focused frameworks. Lynch and Cross’s Performance Pyramid, shown in Figure 11, falls in the middle of this continuum.

Performance Pyramid.
Figure 11: Performance Pyramid.

The strengths of this framework are that it ties together the hierarchical view of business performance measurement with the business process view. It also makes explicit the difference between measures that are of interest to external parties (customer satisfaction, quality and delivery) and measures that are primarily of interest within the business (productivity, cycle time and waste).

Total Productive Maintenance

What is Total Productive Maintenance?

Total Productive Maintenance (TPM) is a maintenance program concept. Philosophically, TPM resembles Total Quality Management (TQM) in several aspects, such as

  1. total commitment to the program by upper level management is required,
  2. employees must be empowered to initiate corrective action, and
  3. a long range outlook must be accepted as TPM may take a year or more to implement and is an on-going process.

Changes in employee mind-set toward their job responsibilities must take place as well.

TPM brings maintenance into focus as a necessary and vitally important part of the business. It is no longer regarded as a non-profit activity. Down time for maintenance is scheduled as a part of the manufacturing day and, in some cases, as an integral part of the manufacturing process. It is no longer simply squeezed in whenever there is a break in material flow. The goal is to hold emergency and unscheduled maintenance to a minimum.

Background of TPM

TPM evolved from TQM, which developed as a direct result of Dr. W. Edwards Deming’s influence on Japanese industry. Dr. Deming began his work in Japan shortly after World War II. As a statistician, he initially showed the Japanese how to use statistical analysis in manufacturing and how to use the resulting data to control quality during manufacturing. The initial statistical procedures and the resulting quality control concepts, fuelled by the Japanese work ethic, soon became a way of life for Japanese industry. This new manufacturing concept eventually became known as Total Quality Management, or TQM.

When the problems of plant maintenance were examined as a part of the TQM program, some of the general concepts did not seem to fit or work well in the maintenance environment. Preventative maintenance (PM) procedures had been in place for some time and PM was practiced in most plants. Using PM techniques, maintenance schedules designed to keep machines operational were developed.

However, this technique often resulted in machines being over-serviced in an attempt to improve production. The thinking was often “if a little oil is good, a lot should be better.” Manufacturers’ maintenance schedules had to be followed to the letter, with little thought given to the realistic requirements of the machine. There was little or no involvement of the machine operator in the maintenance program, and maintenance personnel had little training beyond what was contained in often inadequate maintenance manuals.

The need to go further than just scheduling maintenance in accordance with manufacturer’s recommendations as a method of improving productivity and product quality was quickly recognized by those companies who were committed to the TQM programs. To solve this problem and still adhere to the TQM concepts, modifications were made to the original TQM concepts. These modifications elevated maintenance to the status of being an integral part of the overall quality program.

The origin of the term “Total Productive Maintenance” is disputed. Some say that it was first coined by American manufacturers over forty years ago. Others attribute its origin to a maintenance program used in the late 1960s by Nippondenso, a Japanese manufacturer of automotive electrical parts. Seiichi Nakajima, an officer with the Institute of Plant Maintenance in Japan, is credited with defining the concepts of TPM and seeing it implemented in hundreds of plants in Japan.

Books and articles on TPM by Mr. Nakajima and other Japanese as well as American authors began appearing in the late 1980s. The first widely attended TPM conference held in the United States occurred in 1990. Today, several consulting companies routinely offer TPM conferences as well as provide consulting and coordination services for companies wishing to start a TPM program in their plants.

Implementation of TPM

To begin applying TPM concepts to plant maintenance activities, the entire work force must first be convinced that upper level management is committed to the program. The first step in this effort is to either hire or appoint a TPM coordinator. It is the responsibility of the coordinator to sell the TPM concepts to the work force through an educational program. Doing a thorough job of educating and convincing the work force that TPM is not just another “program of the month” will take time, perhaps a year or more.

Once the coordinator is convinced that the work force is sold on the TPM program and understands it and its implications, the first study and action teams are formed. These teams are usually made up of people who have a direct impact on the problem being addressed. Operators, maintenance personnel, shift supervisors, schedulers, and upper management might all be included on a team. Each person becomes a “stakeholder” in the process and is encouraged to do his or her best to contribute to the success of the team effort. Usually, the TPM coordinator heads the teams until others become familiar with the process and natural team leaders emerge.

The action teams are charged with the responsibility of pinpointing problem areas, detailing a course of corrective action, and initiating the corrective process. Recognizing problems and initiating solutions may not come easily for some team members. They will not have had experiences in other plants where they had opportunities to see how things could be done differently. In well run TPM programs, team members often visit cooperating plants to observe and compare TPM methods, techniques, and to observe work in progress. This comparative process is part of an overall measurement technique called “benchmarking” and is one of the greatest assets of the TPM program.

The teams are encouraged to start on small problems and keep meticulous records of their progress. Successful completion of the team’s initial work is always recognized by management. Publicity of the program and its results are one of the secrets of making the program a success. Once the teams are familiar with the TPM process and have experienced success with a small problem, problems of ever increasing importance and complexity are addressed.

As an example, in one manufacturing plant, one punch press was selected as a problem area. The machine was studied and evaluated in extreme detail by the team. Production over an extended period of time was used to establish a record of productive time versus non-productive time. Some team members visited a plant several states away which had a similar press but which was operating much more efficiently. This visit gave them ideas on how their situation could be improved.

A course of action to bring the machine into a “world class” manufacturing condition was soon designed and work was initiated. The work involved taking the machine out of service for cleaning, painting, adjustment, and replacement of worn parts, belts, hoses, etc. As a part of this process, training in operation and maintenance of the machine was reviewed. A daily check list of maintenance duties to be performed by the operator was developed. A factory representative was called in to assist in some phases of the process.

After success had been demonstrated on one machine and records began to show how much the process had improved production, another machine was selected, then another, until the entire production area had been brought into a “world class” condition and was producing at a significantly higher rate.

Note that in the example above, the operator was required to take an active part in the maintenance of the machine. This is one of the basic innovations of TPM. The attitude of “I just operate it!” is no longer acceptable. Routine daily maintenance checks, minor adjustments, lubrication, and minor part change out become the responsibility of the operator. Extensive overhauls and major breakdowns are handled by plant maintenance personnel with the operator assisting. Even if outside maintenance or factory experts have to be called in, the equipment operator must play a significant part in the repair process.

Training for TPM coordinators is available from several sources. Most of the major professional organizations associated with manufacturing as well as private consulting and educational groups have information available on TPM implementation. The Society of Manufacturing Engineers (SME) and Productivity Press are two examples. Both offer tapes, books, and other educational material that tell the story of TPM. Productivity Press conducts frequent seminars in most major cities around the United States. They also sponsor plant tours for benchmarking and training purposes.

Results of TPM

Ford, Eastman Kodak, Dana Corp., Allen-Bradley and Harley-Davidson are just a few of the companies that have implemented TPM successfully. All report an increase in productivity using TPM. Kodak reported that a $5 million investment resulted in a $16 million increase in profits that could be traced directly to the TPM program. One appliance manufacturer reported that the time required for die changes on a forming press went from several hours down to twenty minutes!

This is the same as having two or three additional million dollar machines available for use on a daily basis without having to buy or lease them. Texas Instruments reported increased production figures of up to 80% in some areas. Almost all the above named companies reported 50% or greater reduction in down time, reduced spare parts inventory, and increased on-time deliveries. The need for out-sourcing part or all of a product line was greatly reduced in many cases.
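Results such as these are conventionally tracked through Overall Equipment Effectiveness (OEE), the headline metric of TPM as defined by Nakajima, which multiplies availability, performance and quality into a single score. The text does not name OEE, and the shift figures below are invented, but the calculation shows how down time, speed losses and defects combine:

```python
# Overall Equipment Effectiveness (OEE), the standard TPM metric:
# OEE = availability x performance x quality.
# The shift figures below are invented for illustration.

def oee(planned_time, downtime, ideal_cycle_time, total_count, good_count):
    run_time = planned_time - downtime
    availability = run_time / planned_time                    # time losses
    performance = (ideal_cycle_time * total_count) / run_time # speed losses
    quality = good_count / total_count                        # defect losses
    return availability * performance * quality

# Hypothetical 480-minute shift: 60 min down, 0.8 min ideal cycle time,
# 460 parts made, 437 of them good.
score = oee(planned_time=480, downtime=60, ideal_cycle_time=0.8,
            total_count=460, good_count=437)
print(f"{score:.1%}")  # 72.8%
```

Because the three factors multiply, halving down time raises the score directly, which is why TPM programs treat scheduled maintenance as productive rather than lost time.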

Today, with competition in industry at an all-time high, TPM may be the only thing that stands between success and total failure for some companies. It has been proven to be a program that works. It can be adapted to work not only in industrial plants, but in construction, building maintenance, transportation, and a variety of other situations. Employees must be educated and convinced that TPM is not just another “program of the month” and that management is totally committed to the program and the extended time frame necessary for full implementation. If everyone involved in a TPM program does his or her part, an unusually high rate of return compared to resources invested may be expected.

The work of Bititci and Turner (2000) is based on various models and frameworks, but it was particularly influenced by the following developments:

  1. Results of the integrated performance measurement systems research;
  2. Active monitoring research;
  3. Research on quantification of the relationships between performance measures; and
  4. IT platforms on performance measurement.

Integrated performance measurement systems (IPMS)

The integrated performance measurement systems (IPMS) project researched the structure and relationships within performance measurement systems and developed a reference model and an audit method for IPMS. The structure of this reference model is based on the viable business structure (Bititci and Turner, 1998) which has emerged from the viable systems theory (Beer, 1985) and the CIM-OSA business process architecture (ESPRIT Consortium AMICE, 1991). Throughout the IPMS project the researchers conducted many audits with collaborating companies. Key findings of the IPMS research program that relate to the dynamics of performance measurement systems were:

  1. A performance measurement system should be a dynamic system.
  2. Most organizations have only a static performance measurement system.
  3. This, in turn, has a negative effect on the integrity of the performance measurement system as well as on the agility and responsiveness of the organization.

The main barriers to an organization’s ability to adopt a more dynamic approach to performance measurement systems can be summarized as follows:

  • Lack of a structured framework which allows organizations to differentiate between improvement and control measures, and to develop causal relationships between competitive and strategic objectives and the underlying processes and activities.
  • Absence of a flexible platform to allow organizations to effectively and efficiently manage the dynamics of their performance measurement systems.
  • Inability to quantify the relationships between measures within a system.

Active monitoring

This was also an EPSRC funded project under the ROPA scheme. The objective of the project was to establish the applicability of reliability engineering techniques to design active control systems for business processes. This research first studied literature and industrial practice with respect to active monitoring and developed an approach to the design of active monitoring systems. This approach was tested with three different types of business processes (operate, support and manage). The results of the research demonstrated that an active monitoring approach can be used to maintain the reliability of business processes and that it is the basis of the internal control system (Bititci et al., 1998a; Turner and Bititci, 1998).
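The source does not specify which reliability engineering techniques were applied, but one classic idea that carries over naturally is modelling a business process as a series system: the process succeeds only if every activity succeeds, so its reliability is the product of the activity reliabilities. A sketch with invented activities and figures:

```python
# Reliability-engineering sketch of a business process as a series system.
# The activity names and reliability figures are assumptions for
# illustration; the source does not specify the techniques used.

from math import prod

def series_reliability(activity_reliabilities):
    """Reliability of a series system: the product of component reliabilities."""
    return prod(activity_reliabilities)

# Hypothetical "operate" process with four activities.
process = {"take order": 0.99, "schedule": 0.97, "produce": 0.95, "deliver": 0.98}
r = series_reliability(process.values())
print(round(r, 3))  # 0.894
```

The point an active monitoring system exploits is that the weakest activity dominates the product, so monitoring effort is best directed there.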

Quantitative model for performance measurement systems

The quantitative models for performance measures project was born directly out of the IPMS project. The objective of this project was to investigate tools and techniques that can be used to model and quantify the relationships between performance measures. The project developed and validated an approach for modeling and quantifying the relative relationships between performance measures within a system using the analytical hierarchy process (Suwignjo et al., 1997; Bititci et al., 1998b).
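The analytic hierarchy process derives relative weights for a set of performance measures from a matrix of pairwise importance judgments. The sketch below uses the standard geometric-mean approximation to the principal eigenvector; the three measures and the judgments are invented for illustration and are not taken from Suwignjo et al. (1997):

```python
# Analytic hierarchy process (AHP) sketch: derive relative weights for
# performance measures from a pairwise comparison matrix, using the
# geometric-mean approximation to the principal eigenvector.

from math import prod

def ahp_weights(matrix):
    # Geometric mean of each row, then normalize so the weights sum to 1.
    gm = [prod(row) ** (1 / len(row)) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical Saaty-scale judgments for cost vs quality vs delivery:
# cost is judged 3x as important as quality and 5x as important as delivery.
comparisons = [
    [1,     3,   5],   # cost
    [1 / 3, 1,   2],   # quality
    [1 / 5, 1 / 2, 1], # delivery
]
weights = ahp_weights(comparisons)
print([round(w, 2) for w in weights])  # [0.65, 0.23, 0.12]
```

Quantified weights of this kind are what let a measurement system trace how a change in one measure propagates to overall performance.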

Emerging IT tools

Recent times have seen some newly emerging IT based management tools specifically targeted to performance measurement. Such tools include:

  • IPM;
  • Ithink analyst;
  • PerformancePlus; and
  • Pb Views.

A recent publication in Information Week (Coleman, 1998) critically reviewed these software packages and concluded that IPM provided the best all-round functionality. The main benefit of using an IT platform to manage the performance measurement system within an organization is that maintaining the information contained within the system becomes much simpler, a benefit commonly cited by suppliers of such systems.

Maturity Models for Performance Measurement Systems

In the Information Systems literature the term Maturity Model has been used by two schools: Richard L. Nolan from Harvard Business School and Watts S. Humphrey from Carnegie Mellon University. The Nolan model has been widely recognized and utilized by practitioners and researchers alike. Nolan’s initial model describes four distinct stages: Initiation, Expansion, Formalization, and Maturity (Gibson/Nolan, 1974), cf. Fig. 12.

The Nolan model is based on the companies’ spending for electronic data processing (EDP). In their original article from 1974, Gibson and Nolan describe the suggested model as follows: “The basis for this framework of stages is the recent discovery that the EDP budget for a number of companies, when plotted over time from initial investment to mature operation, forms an S-shaped curve. (…) The turnings of this curve correspond to the main events – often crisis – in the life of the EDP function that signal important shifts in the way the computer resource is used and managed. There are three such turnings, and, consequently, four stages.” (p. 77).

The Nolan maturity model is based on three underlying types of growth:

  1. a growth in computer applications – from simple payroll applications to complex management systems;
  2. a growth in the specialization of EDP personnel;
  3. a growth in formal management techniques and organization – from lax management practices to resource-oriented planning and control.
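Gibson and Nolan's S-shaped EDP budget curve can be sketched as a logistic function, with the stages read off from position along the curve. The parameter values and stage boundaries below are assumptions for illustration; the original article locates the stages at the curve's turnings rather than via a fitted equation:

```python
# Sketch of Gibson and Nolan's S-shaped EDP budget curve as a logistic
# function. Parameter values and stage boundaries are assumptions for
# illustration only.

from math import exp

def edp_budget(year, ceiling=10.0, midpoint=6.0, steepness=0.9):
    """Logistic budget curve: slow start, rapid expansion, leveling off."""
    return ceiling / (1 + exp(-steepness * (year - midpoint)))

def stage(year, midpoint=6.0):
    # Hypothetical mapping of curve position onto Nolan's four stages.
    if year < midpoint - 2:
        return "Initiation"
    if year < midpoint:
        return "Expansion"
    if year < midpoint + 2:
        return "Formalization"
    return "Maturity"

for y in (1, 5, 7, 10):
    print(y, stage(y), round(edp_budget(y), 2))
```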

In 1979, Nolan transformed the original four-stage model into a six-stage model by adding two new stages; the stages Integration and Data Administration were put in between Formalization and Maturity. For a more detailed and also critical analysis of the Nolan curve see Galliers/Sutherland (1999) and van der Riet et al. (1997).

Four stages of growth.
Figure 12: Four stages of growth (amended from Gibson/Nolan, 1974).

The second classical Maturity Model was developed at the end of the 1980s by Watts Humphrey and his team at the Software Engineering Institute (SEI) at Carnegie Mellon University. Initially, the SEI model was simply called CMM (Capability Maturity Model). Since then, SEI has introduced Maturity Models for different purposes, e.g. the People Capability Maturity Model, the Software Acquisition Capability Maturity Model, the Systems Engineering Capability Maturity Model and the Integrated Product Development Capability Maturity Model. The classical CMM is now called SW-CMM (Capability Maturity Model for Software).

The SW-CMM is a model for judging the maturity of the software processes of an organization and for identifying the key practices that are required to increase the maturity of the underlying processes (SEI, 2001). The SW-CMM has become a de facto standard for improving software processes. It is organized into five maturity levels: Initial, Repeatable, Defined, Managed, and Optimizing; cf. Fig. 13.

The five levels of software process maturity.
Figure 13: The five levels of software process maturity (Paulk et al, 1994, p. 16).

What are the differences between the Nolan model and the CMM? First, the Nolan model looks at a particular organizational unit (the EDP unit or IT function), whereas the CMM focuses on processes carried out within the IT function. Second, the Nolan model describes the changes of four dimensions (EDP budget, computer applications, EDP personnel, management techniques), while the CMM considers solely the quality of processes. However, the CMM addresses different so-called key practices – themes that must be taken into consideration when process maturity is to be raised from one stage to the next (Paulk et al., 1993).

Conclusion

To overcome the problems associated with the traditional performance measures, a number of PM frameworks have been developed. These frameworks integrate multiple performance measures which capture both financial and non-financial aspects of the organisation. Some of these PM frameworks blend the lagging indicators with the leading indicators (e.g. the BSC), or in other words measuring the results of the organisational performance and the drivers of the results (e.g. the EFQM model, Fitzgerald et al.’s framework).

The concept behind the combination of lagging (results) and leading (enablers) indicators is to identify any failures before they damage the end result of the organisation. For example, the leading indicators of the BSC would identify the issues which will have an impact on the financial measures (i.e. the lagging indicators of the BSC), and provide information before the organisation is affected by the issue. Furthermore, the lagging indicators monitor the past performance of the organisation, while the leading indicators assist in planning future activities.

The importance of providing a balanced overview of organisational performance can be identified in most of the performance measurement frameworks. The use of multi-dimensional performance measures which capture different perspectives of the organisation, such as shareholder value, customer satisfaction, the financial perspective, the capabilities of the employees and internal business processes, is evident in these performance measurement frameworks.

The need to link the strategy of the organisation with the performance measures is emphasised in most of the performance measurement frameworks. When the performance measures are aligned with the organisational strategy, the implementation of performance measurement supports the implementation of the strategy. In most of the PM frameworks, the measures are derived from the organisational strategy. However, Neely and Adams’ Performance Prism adopts a different view by deriving the organisational strategy from the requirements of the stakeholders.

The developments identified in general performance measurement are also evident in performance measurement within the manufacturing sector. The need to go beyond financial measures and consider customer and shareholder value, business processes, and organisational learning and growth is emphasised in performance measurement in the manufacturing sector. As a result, multiple and integrated performance measures that combine qualitative, quantitative, objective and subjective measures are identified as more effective for measuring the performance of manufacturing work.

From the literature review, this paper has identified the general and manufacturing specific performance measurement frameworks. It was established that each performance measurement framework has its own advantages and disadvantages. Therefore the selection of a performance measurement framework depends on the requirements of the particular organisation.

“To achieve sustainable business success in the demanding world marketplace, a company must… use relevant performance measures” (UK Government White Paper on Competitiveness, quoting the RSA Tomorrow’s Company Inquiry Report). “World-class manufacturers recognize the importance of metrics in helping to define goals and performance expectations for the organization. They adopt or develop appropriate metrics to interpret and describe quantitatively the criteria used to measure the effectiveness of the manufacturing system and its many interrelated components” (Foundation of Manufacturing Committee of the National Academy of Engineering, USA).

The above quotations explicitly show that the performance measurement frameworks discussed in this paper are relevant to the manufacturing sector and are widely implemented to measure performance. The discussions of the different performance measurement models thus show how they are applied in different manufacturing and engineering project environments.

References

Harrington, J.H. 1991 Business Process Improvement – The breakthrough strategy for total quality, productivity, and competitiveness, McGraw-Hill, New York.

Kueng, P., Wettstein, Th. and List, B. 2001, “A Holistic Process Performance Analysis through a Process Data Warehouse”, Proceedings of the Americas Conference on Information Systems (AMCIS), Boston, MA.

Neely, A.D. 1999, “The performance measurement revolution: why now and where next”, International Journal of Operations and Production Management, Vol. 19 No. 2, pp. 205-28.

Frigo, M.L. and Krumwiede, K.R. 1999, “Balanced scorecards: a rising trend in strategic performance measurement”, Journal of Strategic Performance Measurement, Vol. 3 No. 1, pp. 42-4.

Waggoner, D.B., Neely, A.D. and Kennerley, M.P. 1999, “The forces that shape organizational performance measurement systems: an interdisciplinary review”, International Journal of Production Economics, Vol. 60-61, pp. 53-60.

Lynch, R.L. and Cross, K.F. 1991, Measure Up – The Essential Guide to Measuring Business Performance, Mandarin, London.

Dixon, J.R., Nanni, A.J. and Vollmann, T.E. 1990, The New Performance Challenge – Measuring Operations for World-Class Competition, Dow Jones-Irwin, Homewood, IL.

Johnson, H.T. 1983, “The search for gain in markets and firms: a review of the historical emergence of management accounting systems”, Accounting, Organizations and Society, Vol. 2 No. 3, pp. 139-46.

Johnson, H.T. and Kaplan, R.S. 1987, Relevance Lost – The Rise and Fall of Management Accounting, Harvard Business School Press, Boston, MA.

Kaplan, R.S. 1984, “The evolution of management accounting”, The Accounting Review, Vol. 59 No. 3, pp. 390-418.

Kaplan, R.S. and Norton, D.P. 1992. “The balanced scorecard – measures that drive performance”, Harvard Business Review. pp. 71-9.

Kaplan, R.S. and Norton, D.P. 1993, “Putting the balanced scorecard to work”, Harvard Business Review, pp. 134-47.

Bruns, W. 1998, “Profit as a performance measure: powerful concept, insufficient measure”, Performance Measurement – Theory and Practice: The First International Conference on Performance Measurement, Cambridge.

Eccles, R.G. 1991, “The performance measurement manifesto”, Harvard Business Review, January-February, pp. 131-7.

Bourne, M., Neely, A., Mills, J. and Platts, K. 1999, “Performance measurement system implementation: an investigation of failures”, Proceedings of the 6th International Conference of The European Operations Management Association, Venice, pp. 749-56.

Kennerley, M.P. and Neely, A.D. 2000, “Performance measurement frameworks – a review”, Proceedings of the 2nd International Conference on Performance Measurement, Cambridge, pp. 291-8.

Keegan, D.P., Eiler, R.G. and Jones, C.R. 1989, “Are your performance measures obsolete?”, Management Accounting (US), Vol. 70 No. 12, pp. 45-50.

Fitzgerald, L., Johnston, R., Brignall, T.J., Silvestro, R. and Voss, C. 1991, Performance Measurement in Service Businesses, The Chartered Institute of Management Accountants, London.

Bititci, U.S., Turner, T. and Begemann, C. 2000, “Dynamics of performance measurement systems”, International Journal of Operations & Production Management, Vol. 20 No. 6, pp. 692-704.

Bourne, M., Neely, A., Mills, J. and Platts, K. 1999, “Performance measurement system implementation: an investigation of failures”, Proceedings of the 6th International Conference of The European Operations Management Association, Venice, pp. 749-56.

Bourne, M., Mills, J., Wilcox, M., Neely, A. and Platts, K. 2000, “Designing, implementing and updating performance measurement systems”, International Journal of Operations & Production Management, Vol. 20 No. 7, pp. 754-71.

Bruns, W. 1998, “Profit as a performance measure: powerful concept, insufficient measure”, Performance Measurement – Theory and Practice: The First International Conference on Performance Measurement, Cambridge.

Dixon, J.R., Nanni, A.J. and Vollmann, T.E. 1990, The New Performance Challenge – Measuring Operations for World-Class Competition, Dow Jones-Irwin, Homewood, IL.

Eccles, R.G. 1991, “The performance measurement manifesto”, Harvard Business Review, January-February, pp. 131-7.

Fitzgerald, L., Johnston, R., Brignall, T.J., Silvestro, R. and Voss, C. 1991, Performance Measurement in Service Businesses, The Chartered Institute of Management Accountants, London.

Frigo, M.L. and Krumwiede, K.R. 1999, “Balanced scorecards: a rising trend in strategic performance measurement”, Journal of Strategic Performance Measurement, Vol. 3 No. 1, pp. 42-4.

Gabris, G.T. 1986, “Recognizing management techniques dysfunctions: how management tools often create more problems than they solve”, in Halachmi, A. and Holzer, M. (Eds), Competent Government: Theory and Practice, Chatelaine Press, Burke, VA, pp. 3-19.

Ghalayini, A.M. and Noble, J.S. 1996, “The changing basis of performance measurement”, International Journal of Operations & Production Management, Vol. 16 No. 8, pp. 63-80.

Globerson, S. 1985, “Issues in developing a performance criteria system for an organisation”, International Journal of Production Research, Vol. 23 No. 4, pp. 639-46.

Greiner, J. 1996, “Positioning performance measurement for the twenty-first century”, in Halachmi, A. and Bouckaert, G. (Eds), Organizational Performance and Measurement in the Public Sector, Quorum Books, London, pp. 11-50.

Johnson, H.T. 1983, “The search for gain in markets and firms: a review of the historical emergence of management accounting systems”, Accounting, Organizations and Society, Vol. 2 No. 3, pp. 139-46.

Johnson, H.T. and Kaplan, R.S. 1987, Relevance Lost – The Rise and Fall of Management Accounting, Harvard Business School Press, Boston, MA.

Kaplan, R.S. 1984, “The evolution of management accounting”, The Accounting Review, Vol. 59 No. 3, pp. 390-418.

Kaplan, R.S. and Norton, D.P. 1992, “The balanced scorecard – measures that drive performance”, Harvard Business Review, pp. 71-9.

Kaplan, R.S. and Norton, D.P. 1993, “Putting the balanced scorecard to work”, Harvard Business Review, pp. 134-47.

Kaplan, R.S. and Norton, D.P. 1996, The Balanced Scorecard, Harvard Business School Press, Boston, MA.

Kaplan, R.S. and Norton, D.P. 2001a, “Transforming the balanced scorecard from performance measurement to strategic management: Part I”, Accounting Horizons, pp. 87-105.

Kaplan, R.S. and Norton, D.P. 2001b, “Transforming the balanced scorecard from performance measurement to strategic management: Part II”, Accounting Horizons, pp. 147-161.

Keegan, D.P., Eiler, R.G. and Jones, C.R. 1989, “Are your performance measures obsolete?”, Management Accounting (US), Vol. 70 No. 12, pp. 45-50.

Kennerley, M.P. and Neely, A.D. 2000, “Performance measurement frameworks – a review”, Proceedings of the 2nd International Conference on Performance Measurement, Cambridge, pp. 291-8.

Kotter, J.P. 1996, Leading Change, Harvard Business School Press, Boston, MA.

Lynch, R.L. and Cross, K.F. 1991, Measure Up – The Essential Guide to Measuring Business Performance, Mandarin, London.

Maskell, B. 1989, “Performance measures for world class manufacturing”, Management Accounting (UK), pp. 32-3.

Meyer, M.W. and Gupta, V. 1994, “The performance paradox”, in Staw, B.M. and Cummings, L.L. (Eds), Research in Organizational Behaviour, Vol. 16, JAI Press, Greenwich, CT, pp. 309-69.

Neely, A. 1998, Measuring Business Performance – Why, What and How, Economist Books, London.

Neely, A.D. 1999, “The performance measurement revolution: why now and where next”, International Journal of Operations and Production Management, Vol. 19 No. 2, pp. 205-28.

Neely, A.D., Kennerley, M.P. and Adams, C.A. 2000, The New Measurement Crisis: The Performance Prism as a Solution, Cranfield School of Management, Cranfield.

Neely, A.D., Mills, J.F., Gregory, M.J., Richards, A.H., Platts, K.W. and Bourne, M.C.S. 1996, Getting the Measure of Your Business, Findlay Publications, Horton Kirby.

Scott, W.R. 1995, Institutions and Organizations: Theory and Research, Sage Publications, London.

Senge, P.M. 1992, The Fifth Discipline: The Art and Practice of the Learning Organization, Century Business Press, London.

Tichy, N.M. 1983, Managing Strategic Change: Technical, Political, and Cultural Dynamics, John Wiley & Sons, New York, NY.

Waggoner, D.B., Neely, A.D. and Kennerley, M.P. 1999, “The forces that shape organizational performance measurement systems: an interdisciplinary review”, International Journal of Production Economics, Vol. 60-61, pp. 53-60.

Wisner, J.D. and Fawcett, S.E. 1991, “Linking firm strategy to operating decisions through performance measurement”, Production and Inventory Management Journal, Third Quarter, pp. 5-11.

Brooks, H. 1996, “National science policy and technology transfer”, Proceedings of a Conference on Technology Transfer and Innovation, NSF, Washington, DC.

Clifton, D.S. Jr and Fyffe, D.E. 1977, Project Feasibility Analysis: A Guide to Profitable New Ventures, John Wiley & Sons, New York, NY, pp. 2-10.

Kohler, B.M., Rubenstein, A.M. and Douds, C.F. 1973, “A behavioral study of international technology transfer between the United States and West Germany”, Research Policy, Vol. 3, pp. 23-34.

Robinson, C.J. and Ginder, A.P. 1995, Implementing TPM, Productivity Press, Portland, OR.

Steinbacher, H.R. and Steinbacher, N.L. 1995, TPM for America, Productivity Press, Portland, OR.

Takahashi, Y. and Osada, T. 1990, TPM, Asian Productivity Organization, Tokyo.