Implementing Results-Based Management: Lessons from the Literature

Introduction

Promoting Favourable Implementation Conditions

Developing Performance Measurement Systems

Using Performance Information

Bibliography

Annex A: Summary of Lessons Learned from 1996 Report

Introduction

The following is a concise synthesis of lessons learned from implementing results-based management (see footnote 1) in a variety of Canadian and international jurisdictions. It is a follow-up to a review prepared and published by the Office of the Auditor General in 1996, which is summarised in Annex A. Unlike the first review, which focussed specifically on implementation, this update also includes lessons learned on more operational issues such as the development of indicators, data collection, analysis, monitoring and reporting. It should be noted that the preparation of this report did not involve a critical analysis of specific lessons learned as published in the literature; rather, it focussed on common lessons learned and trends across jurisdictions.

Footnote 1: Results-based management, managing for results and performance management are used interchangeably throughout the report.

There is an abundance of literature on performance measurement and results-based management; however, most of it tends to be descriptive or normative. The documents used for this report were published between 1996 and 1999 and attempted to assess what has and has not worked in efforts to implement results-based management. The lessons identified as instructive are divided into three sections: promoting favourable implementation conditions, developing performance measurement systems, and using performance information. For more discussion of specific points raised under any of the lessons learned, the reader is advised to refer to the cited references in the bibliography, which provides a comprehensive list of reports and articles. A list of references from the first review is also included in Annex A.

Promoting Favourable Implementation Conditions

Implementing and maintaining a performance management and measurement system represents a major commitment on the part of any organisation. Several organisational and human factors have been identified in the literature that contribute to the creation of a favourable implementation environment. By paying careful attention to these factors, an organisation can generally improve its chances of establishing a successful performance management and measurement system.

Organisational Factors

1. A customised results-based management regime is critical

Though it may be tempting to simply adopt a results-based management system deemed successful in another jurisdiction or organisation, this practice has proved to be very ineffective. It is important that the system be developed according to the needs and situation of the users. No single system will be appropriate for every organisation. As Joyce (1997, p.53) notes: "...public agencies are not all alike. Different solutions exist for the performance measurement problem in different agencies." The evidence suggests that customised results-based management systems are critical for success. (Caiden 1998, p.45) Even individual components such as indicators and data collection systems should be developed with the specific users in mind. (Joyce 1997, p.53; Itell 1998, p.12)

Experience in OECD countries suggests that selecting an appropriate approach for implementing results-based management is also very important. "Basic approaches to implementing performance management (e.g. top-down versus bottom-up; comprehensive versus incremental; systematic versus ad hoc; de facto versus de jure) must be selected according to the needs and situations of each country." (OECD 1997, p.29)

2. Take your time and maintain momentum

Implementing results-based management is a long-term process. It takes time to plan, develop indicators, and align management systems before even collecting any performance data. (OECD 1997, p.29) For example, Poate (1997, p. 54) notes that "For aid donors dealing with a two year planning cycle and five-year implementation, results may take a decade to emerge", while Plantz, Greenway and Hendricks (1997, p. 24) suggest that "It could easily take an agency seven months or more of preparation before collecting any data, and it easily could take three to five years before the findings of a program's outcome measurement system actually reflect the program's effectiveness." As suggested by the experience of OECD countries and development agencies, organisations have to be patient and persistent. In this type of process, building consensus and maintaining momentum is crucial to success. (Poate 1997, p.56) This is particularly important in a highly politicised organisation where the political timetable may present a formidable obstacle to long-term implementation. (Newcomer and Wright 1996-97, p.32)

Though it may be tempting to rush implementation, organisations have found that this only decreases the likelihood that the measurement system will be useful. (Plantz, Greenway and Hendricks 1997, p.24) The idea is to take the time to develop and implement a results-based management system that will be worthwhile and is accepted throughout the organisation. For example, organisations have limited the use of indicators in certain cases until they had sufficient measurement experience. "Outcome oriented measures are given time to develop and any sanctions for not setting or reaching appropriate goals should come far down the road, after departments have experience identifying cause and effect." (Itell 1998, p.17)

3. Linking performance measures to the policy or strategic framework is key

The successful implementation of results-based management depends on the extent to which performance measures are linked to an existing policy or strategic framework. The experience of the Australian Public Service suggests that, at the government level, this policy and institutional framework should include formal Cabinet endorsement of evaluation requirements, strong political and managerial support, clear mandates given by Cabinet to central agencies for implementation, and devolution of management responsibility. (Poate 1997, p.56)

From an organisational perspective, it is necessary to "Connect the dots. If your performance management efforts are not connected to your business plan (which defines day-to-day operations in a government agency) and to the budget (which is where the money is), then you will be doomed to failure because your performance measurement approach will have no real meaning to the people running, or affected by, the program." (National Performance Review 1999) This requires the existence of a strategic plan, inclusive of organisational goals and objectives that reflect a long-term vision or mission. (National Performance Review 1997; Downey 1998, p.18) Ensuring that performance indicators and measurement practices are linked to strategic objectives or expected results is key to successful performance management. (OECD 1997, p.29; Poate 1997, p.56; Epstein and Olsen 1996, p.43; Newcomer and Downy 1997-98, p.38; Nakamura and Warburton 1998, p. 40) In this way, performance measurement is integrated within strategic planning (Epstein and Olsen 1996, p.42) and therefore "knowledge about strategy implementation is increased, and the strategy is more likely to be realised." (PricewaterhouseCoopers 1999, p.8)

A message throughout the literature is to "Focus on the big picture. Executives must not get bogged down in minutia, but instead create a simple, clear vision of the agency's core mission based on a realistic view of the current situation and of future trends. Managers also need to take a corporate-level view, and not make the mistake of aggregating functions or lines of business." (Downey 1998, p.18; see also PricewaterhouseCoopers 1999; Epstein and Olsen 1996)

The experience of world-class organisations in Canada and internationally "suggest some general guidelines for defining indicators and measures. It is useful to begin with the end in mind - to go back to the vision and broad objectives that defined the long-term impacts the policy, program or service was intended to produce and for whom. From there it is possible to identify short- and medium-term results that should contribute to achieving those impacts and then indicators that fairly reflect those results." (Gibson and Boisvert 1997, p.8).

4. Align management systems to support implementation

Successful implementation of results-based management requires management systems that support the systematic collection, recording, analysis and reporting of performance information. (Olsen 1997, p.29; Poate 1997, p.57; PricewaterhouseCoopers 1999, p.11) For some organisations, this may mean realigning existing systems in order to ensure that they collect the right information needed for decision-making.

In order to avoid costly duplication of effort, organisations should carefully examine existing data collection, monitoring, evaluation and research functions and the information they already collect. Existing systems may already be compiling data related to outcomes. (Plantz, Greenway and Hendricks 1997, p.24; Nakamura and Warburton 1998, p. 41) "Another advantage of making use of existing information is that the personnel who have been responsible for the previously existing data systems will not be as likely to view the new performance measurement system as a direct threat to their job security." (Nakamura and Warburton 1998, p. 41)

For organisations that are installing or changing computer systems, this is an opportunity to modernise their service management and introduce performance information. (Local and regional authorities in Europe 1997, p.20)

5. Providing adequate financial and human resources is critical

In order to successfully implement results-based management, organisations require adequate financial and human resources. There is a cost associated with implementation and organisations do not necessarily have the capacity to adopt a new system. (Thomas 1998, p.17; Caiden 1998, p.40) If organisations are expected to design and use performance measurement systems, those requesting the information need to recognise that these organisations need the resources to do so. (Newcomer and Downy 1997-98, p.38) Experience in the US suggests that "commitment of resources - including the time of a top-level manager devoted to design and implement feasible performance measurement systems" is a key indication of top leadership support. Such support "minimizes the risk." (Newcomer and Wright 1996-97, p.32)

6. Location of stewardship over the performance measurement process is important

There are so many players involved in implementation that there can be confusion over ownership in the development process. The location of ownership and control of the process is very important because this will inevitably have an effect on the type of performance measures that are developed. The literature points to different possible locations.

There is evidence suggesting that control over the process should not be located in the financial management or budget office. Doing so may lead to measures that serve the budgeting process well but are not necessarily useful for internal management. Some suggest that performance management be located at the program level and that this will assist in ensuring buy-in from line managers. (Newcomer and Downy 1997-98, p.39; Newcomer and Wright 1996-97, p.32; Wholey and Newcomer 1997, p.94) However, according to the experience in the Australian Public Service, central monitoring and evaluation units within organisations should take responsibility for the process. It is argued that these units can provide the necessary technical and analytical expertise needed for successful implementation. (Poate 1997, p.56)

At the government level, the experience of OECD countries indicates that a central agency such as the department of finance or planning, cabinet office, or chief executive office should lead the process. (Poate 1997, p.53) "[I]f performance monitoring is to be part of an overall initiative, in particular, a shift from a traditional direct type of control to a more flexible, strategic, and distant type of control, then the need for firm central leadership seems clear." (Zapico-Goñi and Mayne 1997, p.261)

7. Pilot projects can be a useful approach

Conducting pilot projects presents a good opportunity for organisations to test new management systems. Pilots represent an opportunity to identify and work out problems with some or all of a system's components. To be effective, pilots must attempt to emulate the scenario of full implementation. Therefore, they must last long enough to test most aspects of the new system, including data collection, and must involve a representative group of participants. (Plantz, Greenway and Hendricks 1997, p.24)

Human Factors

8. Developing a performance management culture is critical

Successful implementation of results-based management is dependent on the organisation's ability to create a management culture that is focussed on results. (USGAO 1997b, p.73; Epstein and Olsen 1996, p.43; PricewaterhouseCoopers 1999, p.11) It requires more than the adoption of new administrative and operational systems. An emphasis on outcomes requires first and foremost a performance-oriented management culture that will support and encourage the use of the new management approaches. (Poate 1997, p.57; Downey 1998, p.18) The public sector traditionally has had an administrative culture which emphasises the measurement of inputs, whereas a performance management culture is focussed on managing inputs and outputs to achieve outcomes.

A lesson from companies in Europe and the US is to ensure that the right values and behaviours are operative in the management culture. "Leaders are establishing a target set of desired values and behaviours, and designing measures to deliver these whilst avoiding inadvertent undesirable behaviours.... Leaders are realising that processes to produce desired behaviours must work their way through an increasingly complex organisational web if value is to be delivered." (PricewaterhouseCoopers 1999, p.11)

The New Zealand experience suggests that the informal factors in an organisational culture and environment are sometimes more important than the formal ones. "While new formal management arrangements and systems have been vital, the positive changes that have been brought about could not have occurred without the informal systems of peer-group pressure, a commitment to public service ethics and individual professionalism among key staff." (Poate 1997, p.57)

Experience in the US identifies the existing culture of the organisation as a possible obstacle to introducing the use of performance information for decision-making, a culture "which may be used to conducting its affairs, for example, according to established custom, by rules of thumb...or in reaction to external events as they occur. Using performance data to make decisions in such cases would require an organizational revolution." (Caiden 1998, p.40) An organisation's current climate can also negatively affect implementation. Results-based management may be viewed by staff as a cost-cutting exercise, especially in times of budgetary restraint and downsizing. This creates defensiveness and impedes implementation of a performance-oriented culture. (Thomas 1998, p.18)

Changing the culture is not an easy task and also takes time. It is a long-term process of change that must be consistent and requires continual refinement and improvement. (Thomas 1998, p.17; Poate 1997, p.56) Experience suggests that appropriate leadership and a sense of shared commitment to the reform process are critical to building a performance-oriented culture. (Mascarenhas 1996, p.17; USGAO 1997b, p.73)

9. A practical understanding of accountability is needed

Implementing results-based management is a significant public sector management reform which presents new challenges in defining accountability. "In this environment it will be necessary to rearticulate our vision of public accountability." (Shergold 1997, p.303) The traditional notion of accountability, top-down authority responsible to the people through elected policymakers and senior administrators, must be reshaped to reflect this new public sector management environment. (Kettl 1997, p.456) The traditional notion of only holding public servants to account for the correct application of government regulations and procedures seems incompatible with an empowered, results- and service-oriented public sector. (Mayne 1997, p.159) However, it is recognized that "...it is a significant challenge to effect a culture change that allows employees to realize that they are accountable for results - not just to their supervisor, but to the organisation, customer and stakeholder." (National Performance Review 1999)

Results-based management implies a shift in focus away from procedures and outputs management to outcome level results achievement. While current outputs-based performance management systems hold individuals responsible for output achievement, it does not logically follow that public servants should now be held accountable for achieving policy and programme outcomes. "Accountability means that government agencies have a responsibility to influence outcome results. This does not mean writing into a contract or a performance agreement that you are going to be 100% accountable for reducing an accident rate to a certain level by a given time. It is a matter of recognising that there's a responsibility to influence the outcome result that's being sought". (State Services Commission 1999) There remains, nonetheless, an obligation to demonstrate what outcome results have been accomplished. "The key is to make this demonstration the essence of the accountability regime. Accomplishment accountability is the credible demonstration of what one has achieved that is of significance and value." (Mayne 1997, p.159)

10. Senior level leadership and involvement is essential

There is strong evidence to suggest that senior level leadership is necessary for successful implementation. (Plantz, Greenway and Hendricks 1997, p.23; Wholey and Newcomer 1997, p.94) Without the support of senior management, there is no impetus for change. (Epstein and Olsen 1996, p.42) It is critical that they fully support and actively participate in both the creation and implementation of results-based management. (Downey 1998, p.18; Poate 1997, p.54) By actively participating in implementation, they are demonstrating their commitment to the reforms. (Alford and Baird 1997, p.56) "Clear, consistent, and visible involvement by senior executives and managers is a necessary part of successful performance measurement and management systems." (National Performance Review 1997) "Senior leadership must help an organisation overcome its resistance to change." (National Performance Review 1999)

There is also evidence to suggest that the leadership role be shared. Although the support of top political leadership is essential to ensure the success of the system (Newcomer and Downy 1997-98, p.39), it is important to cascade leadership throughout the organisation. (National Performance Review 1999) This will give the performance management process a depth and sustainability that ensures its survival through changes in political party leadership or turnover at senior management levels.

11. Full participation fosters support for implementation

In all cases, governments attribute successful implementation to full participation of staff at all levels. (Downey 1998, p.18) In addition to staff, it may also be beneficial to include other stakeholders in the process especially when identifying expected outcomes. Seeking different perspectives may reveal important issues that may not have occurred to staff. (Plantz, Greenway and Hendricks 1997, p.24; National Performance Review 1999; Local and regional authorities in Europe 1997, p.15; Epstein and Olsen 1996, p.42) Stakeholder involvement can help an agency identify results-oriented performance measures and set realistic target levels. (USGAO 1997b, p.13)

Another benefit of stakeholder involvement is that it helps increase their commitment and a sense of ownership, both of which provide needed support for the performance measurement system. Gibson and Boisvert (1997, p.18) point out that "[s]ustainability is not guaranteed in a political environment, but gaining the commitment and support of clients helps embed the desired changes." As well, "[e]ssentially, stakeholders are less likely to criticise or resist a management framework that they have developed themselves." (Meier 1998, p.i) This is why it is important that those who will actually be using the new performance measurement system develop their own measures. (Epstein and Olsen 1996, p.42) Simply put, "do not impose, involve". (Poate 1997, p.53)

12. Training and education are key ingredients for success

A major hurdle in implementing results-based management is the relative lack of experience and expertise. (Mascarenhas, 1996, p.22; Hatry 1997b, p.41) Successful implementation is dependent on managers and staff having the necessary knowledge, skills and abilities to develop and use the performance measurement system. (USGAO 1997b, p.76; Itell 1998, p.17; Newcomer and Downy 1997-98, p.39; Poate 1997, p.57) The lesson has therefore been to provide training for nearly all of those involved. Training will provide managers, staff and key stakeholders with the knowledge and skills they need to work with data, understand it and use it to improve effectiveness. (Gibson and Boisvert 1997, p.11) It has also been suggested that in order to ensure the institutionalisation of results-based management, political appointees in strategic areas such as budget offices should also receive training. (Newcomer and Wright 1996-97, p.32)

Training can also assist in changing the organisational culture. Once managers and staff understand how results-based management works, they start to appreciate its potential. (Epstein and Olsen 1996, p.42) "When new systems are being introduced, training is likely to be needed at two levels: familiarity with the basic concepts linked to the underlying principles of reform; and operational support to define objectives, construct performance indicators, use indicators for reporting and review, and evaluate. The former can be achieved through briefings and explanatory material. The latter requires a sustained effort from something like a methodology support group." (Poate 1997, p.54)

13. Use existing expertise to support implementation

Implementing results-based management can be very challenging for organisations, especially those that lack the in-house technical capacity. One important lesson that has been learned is to use expertise to support implementation. A technical expert can provide guidance on every aspect of development and use of the performance measurement system. "The first time around, guidance on collection and analysis methods from a technical expert will often save time, offer reassurance, and improve results." (Plantz, Greenway and Hendricks 1997, p.24)

Organisations may not have to look very far. The expertise may very well exist within the organisation itself. Employees with experience in any aspect of results-based management should be directly involved in implementation. (National Performance Review 1999) "Evaluators also possess the technical expertise needed to inform the design of performance measurement systems as well as the analysis of performance data." (Wholey and Newcomer 1997, p.98)

14. Communicate purpose of performance measurement system

It is essential to have a vision or plan which contains a clear definition of the purpose of results-based management and to communicate this throughout the organisation. (Epstein and Olsen 1996, p.41, p.42; PricewaterhouseCoopers 1999, p.7; Itell 1998, p.17) Lessons from companies in Europe and the US show that "[t]he starting point for any improvement programme is to realise that the current position is unsatisfactory and something better exists and is achievable." (PricewaterhouseCoopers 1999, p.7) Employees need to know why performance measurement is being undertaken, what their role is in the new system and how performance information fits into the decision-making process. (National Performance Review 1997; National Performance Review 1999) They need to know that the performance measurement system will provide essential information to improve management within the public sector and that it will help monitor progress made towards the achievement of expected results. "A lack of clear expectations about possible uses for performance data presents perhaps the most difficult challenge...[because] In many cases, the sorts of measures that might effectively guide internal decision-making may provide data that managers would not want made public for resource allocation decisions." (Newcomer and Downy 1997-98, p.38; Wholey and Newcomer 1997, p.95) The experience of state and local governments in the US has been that well-informed employees adjust more easily to the new performance management system and will perform better. (Epstein and Olsen 1996, p.42)

In addition to this internal communication, external stakeholders should also be informed and understand the purpose of results-based management because, ultimately, they will be interested in knowing how well an organisation has achieved its goals and objectives. (National Performance Review 1997)

Developing Performance Measurement Systems

The task of developing a performance measurement system can be quite daunting to the inexperienced organisation. Organisations from around the world that are the leaders in performance measurement have been learning by doing through trial and error for more than a decade. Out of this experience we have uncovered a number of common lessons learned that can guide the novice organisation in mastering the technical aspects of developing a performance measurement system.

15. Use a manageable number of indicators

One of the biggest risk factors threatening successful implementation of results-based management is over-complexity. An over-complex performance measurement system will lead to implementation problems and will simply frustrate stakeholders. The easier the system is to use and apply, the more likely stakeholders are to adopt and embrace the new approach. (Meier 1998, p.i)

One way to keep it simple is to limit the number of indicators. "Multiple cases suggested that indicators should be kept down in number; three indicators which are solid measures of outcome are better than 10 which don't measure anything relevant." (Epstein and Olsen 1996, p.43) "Departments are limited to five program outcome indicators but are free to adopt as many internal management indicators as necessary." (Itell 1998, p.13) "The performance measures for a specific individual or group should not exceed 5 to 7 measures." (Atkinson and McCrindell 1996, p.17) Too many measures are a sign of an organisation that has not taken the time to prioritise measures. (National Performance Review 1997) This is echoed in both the public and private sector, where it is argued that the quality of the indicators is far more important than the quantity. (Poate 1997, p.56) Too many measures may not only be ineffective but could be harmful. "There is significant evidence to suggest that over-complexity of the performance measurement and data collection system is the biggest factor threatening successful implementation of RBM." (Meier 1998, p.i)

However, it is important to adopt a balanced set of measures that provides adequate information on which to base decisions. (PricewaterhouseCoopers 1999, p.9) The measures must provide an adequate performance picture. (USGAO 1997b, p.71) At a macro level, many jurisdictions have resorted to using composite indicators. "Alberta, Oregon and other jurisdictions engaged in macro-level planning have resorted in part to multiple measures. By building composite indicators based on clusters of performance measures, for instance, they have established a framework for tracking progress on 'soft' socio-economic goals." (Gibson and Boisvert 1997, p.8)
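
To make the notion of a composite indicator concrete, the following is a minimal sketch, in Python, of one common construction: each measure in a cluster is normalised to a common 0-1 scale and the results are combined as a weighted average. The measure names, values, ranges and weights are hypothetical illustrations, not figures drawn from Alberta, Oregon or any other jurisdiction cited above.

```python
# A minimal sketch of a composite indicator built from a cluster of
# performance measures. All names, values, ranges and weights are
# hypothetical.

measures = {
    # name: (observed value, worst case, best case, weight)
    "high_school_completion_rate": (78.0, 60.0, 100.0, 0.5),
    "adult_literacy_rate": (85.0, 70.0, 100.0, 0.3),
    "post_secondary_enrolment_rate": (40.0, 20.0, 80.0, 0.2),
}

def normalise(value, worst, best):
    """Rescale a raw measure to 0-1, where 1 is the best case."""
    return (value - worst) / (best - worst)

def composite(cluster):
    """Weighted average of normalised measures (weights sum to 1)."""
    return sum(w * normalise(v, lo, hi) for v, lo, hi, w in cluster.values())

# A single number that tracks progress on a 'soft' socio-economic goal.
print(f"Composite education indicator: {composite(measures):.2f}")
```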

16. Clearly define key terms and concepts

Another lesson has been that defining key terms and concepts will assist in the implementation process. "Agencies' use of inconsistent definitions for their programs' measures could hamper decision-makers' use of data collected from those measures in planning, comparing performance, and reporting on performance achieved." (USGAO 1997b, p.61) A standard set of definitions will help minimise misunderstandings and will foster consistency throughout the organisation. (Nakamura and Warburton 1998, p. 37)

17. Using the logic chart can be very helpful

Using the logic chart has proved to be very helpful in the development and identification of expected results, indicators and risks. It facilitates the task of conceptualising a project or program in terms of inputs, outputs, and outcomes. It also helps verify the logical consequences of cause and effect linkages and hence, the level of attribution. (Plantz, Greenway and Hendricks 1997, p.24) This is a particularly useful tool for stakeholders who are not familiar with results-based management because it illustrates how RBM works. (Meier 1998, p.i) "A further advantage that proponents still claim for the logframe is that the identification of risks help to structure the uncontrollable factors separating outcomes from output." (Poate 1997, p.55)
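
As a minimal sketch of what a logic chart captures, the fragment below represents a hypothetical results chain, from inputs through outcomes, with indicators attached at each level; walking adjacent levels makes the assumed cause-and-effect linkages, and hence the attribution question, explicit. The program and all its details are invented for illustration and do not come from the cited sources.

```python
# A minimal sketch of a logic chart (results chain) as a data structure.
# The job-training program and its indicators are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Level:
    name: str                 # e.g. "inputs", "outputs", "outcomes"
    description: str
    indicators: list = field(default_factory=list)

chain = [
    Level("inputs", "funding and trainers are in place",
          ["budget spent ($)", "trainer-hours delivered"]),
    Level("outputs", "participants complete training",
          ["number of participants trained"]),
    Level("outcomes", "participants find stable employment",
          ["% of participants employed 12 months after training"]),
]

# Each adjacent pair is an assumed cause-and-effect linkage that the
# logic chart makes explicit (and that identified risks may break).
for earlier, later in zip(chain, chain[1:]):
    print(f"{earlier.name} -> {later.name}: assumes that if "
          f"{earlier.description}, then {later.description}")
```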

18. Align performance measures with accountability and decision-making authority

When performance measures are being developed, care should be taken to ensure that they are aligned with accountability and decision-making authority. Measures should relate directly to management and staff job descriptions and responsibilities. (Nakamura and Warburton 1998, p. 46) In this sense, individuals should only be held accountable for what they can influence. (PricewaterhouseCoopers 1999, p.8) "In constructing performance measures for workers or for organizational units, and interpreting and using performance measure results, it is important to know the formal job description of those involved. Performance measures focussing on aspects of a process over which those whose performance is being measured have only partial or no control may engender anxiety or feelings of unfair treatment." (Nakamura and Warburton 1998, p. 42)

Certain conditions must exist for individuals to accept accountability for results. Managers and staff must understand the system and how they can influence results. They must understand what they are responsible for and believe that the system measures what they contribute to the organisation. (National Performance Review 1997) If these conditions do not exist, the performance measurement system will be deemed unfair [with respect to accountability]. (Atkinson and McCrindell 1996, p.16) It is also important that the performance information be tailored to the individual users. (Poate 1997, p.55) A frequent problem faced by managers is that the information is too highly aggregated and is therefore of little use to them because it does not correspond to their level of decision-making. (Mascarenhas, 1996, p.21; Hatry 1997b, p.40) To overcome this problem, some suggest that programs/projects identify their own expected results, indicators and data collection methods. (Plantz, Greenway and Hendricks 1997, p.24)

If managers are to be held accountable for the performance of their programs, they need the flexibility and power to manage their programs for results. (Newcomer and Downy 1997-98, p.40) Some also suggest that accountability must be shared by managers and staff throughout the organisation. (National Performance Review 1999)

Finally, it is important to recognize that performance measurement is ultimately a means of providing accountability for a program, not just its manager. As the National Performance Review (1999) pointed out: "Accountability is a multidimensional concept and often a key enabler of success...[There must be an] establishment or assignment of accountability for performance/results and the effective stewardship of resources to produce those results. To truly work, accountability has to be shared by managers and employees; further, your organization as a whole must be accountable to the customer and stakeholder."

19. Credible performance information is essential

For performance information to be useful, it must be valid and reliable. The information will simply not be accepted or used if it is biased or inaccurate. (British Columbia. Office of the Auditor General 1997, p.9; Atkinson and McCrindell 1996, p.17) Even the perceived possibility that the information could be falsified can impair the usefulness of the system. (Nakamura and Warburton 1998, p. 47) To ensure that the information is credible, there needs to be some form of independent checking or auditing. (Epstein and Olsen 1996, p.44; Nakamura and Warburton 1998, p. 41) The means of verification must be communicated. It has been found that simply describing the chosen method within annual reports provides assurance to readers that the information is credible. (USGAO 1997b, p.72)

Independent checking or auditing not only influences those using performance information, it also has an effect on those collecting the data. It seems that the possibility of audit is enough to increase efforts to maintain accurate records. (Epstein and Olsen 1996, p.44)

20. Performance standards and targets are essential for measurement

In order for results-based management to function as intended, emphasis should be given to identifying targets and performance standards. (Poate 1997, p.57) It is difficult to judge whether results are improving if one has no reference point against which to compare. In this sense, targets are also critical for defining accountability. "Absent a specific and measurable standard of performance against which measured performance is compared, there is no basis for accountability." (Atkinson and McCrindell 1996, p.17)

Benchmarking against similar programs is another method used for setting targets. Evidence from the private sector suggests that benchmarking against competitors is a useful practice. (PricewaterhouseCoopers 1999, p.10) Poate (1997, p.57) notes that "[I]ndicators and targets should be set in the context of what is understood to be best practices through reference to the experience of other agencies undertaking the same or similar tasks." However, cases in the public sector point to the dangers of comparing outcomes of one program to another to determine which is better. Some suggest that the best comparison for a program is itself. (Plantz, Greenway and Hendricks 1997, p.26)

21. Use baseline data to set targets

There have been multiple cases illustrating that the use of baseline data from past performance can help agencies set realistic targets. (USGAO 1997b, p.61) Baseline data provides the trend information on which to base targets. For those who have no experience with results-based management and therefore have no baseline data available, some suggest that they start by collecting data before actually setting targets. (Plantz, Greenway and Hendricks 1997, p.26; Laurent 1999)
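
As a minimal sketch of this practice, the fragment below fits a simple linear trend to a few years of hypothetical baseline data, projects the next period, and sets the target modestly above the projection. The indicator, the data and the 5% stretch factor are illustrative assumptions, not taken from the cited sources.

```python
# A minimal sketch of target-setting from baseline trend data.
# The indicator, values and 5% stretch factor are hypothetical.

def linear_trend(values):
    """Least-squares slope and intercept for equally spaced observations."""
    n = len(values)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(values) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, values))
             / sum((x - x_mean) ** 2 for x in xs))
    return slope, y_mean - slope * x_mean

baseline = [62.0, 64.5, 66.0, 68.2]            # e.g. % of clients served on time
slope, intercept = linear_trend(baseline)
projected = slope * len(baseline) + intercept  # next period, if the trend holds
target = projected * 1.05                      # a modest stretch above trend

print(f"Projected next-period value: {projected:.1f}")
print(f"Proposed target: {target:.1f}")
```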

Using Performance Information

How an organisation uses the performance information generated from its performance measurement activities will influence its long-term success in implementing results-based management. A learning organisation uses performance information to identify its weaknesses and strengths as a basis for making adjustments to management systems and strategic planning processes. The following lessons learned reflect the experiences of leading organisations in using performance information for the purposes of learning and continuous improvement.

22. Demonstrable use of performance information is essential

Performance information must both be used and be seen by others to be used. If top-level managers use the information for decision-making purposes, others are more likely to follow their example. (Newcomer and Downy 1997-98, p.39) This will encourage staff to accept and participate in the new performance measurement system. (Epstein and Olsen 1996, p.43) If the performance information is not readily used, the credibility of the entire activity will be questioned. (Poate 1997, p.57) However, managers and staff also have to see that there are significant benefits to the organisation and its programs. Performance management must make a difference and account for something. The performance information collected has to be useful. (OECD 1997, p.29; Gibson and Boisvert 1997, p.18; British Columbia. Office of the Auditor General 1997, p.6) It has to illustrate that it is worth the cost incurred to collect the data. (Hatry 1997b, p.41; Itell 1998, p.17) The more data usage increases and produces real benefits, the more confidence individuals will have in the data. (Gibson and Boisvert 1997, p.11)

23. Evaluation and performance measurement are complementary

Results-based management does not replace the role of evaluation. In fact, evaluation and performance measurement are mutually supportive and complementary. For example, evaluations can be used to help measure outcomes. (Poate 1997, p.56) Performance information alone does not provide the complete performance picture. Evaluations complete the performance picture by providing the depth of analysis needed to explain why targets were not met or why they were exceeded. (Poate 1997, p.56) They also provide information managers need to improve operations. "Identifying and communicating the reasons that programs do not perform at expected levels is also clearly the province of program evaluation. Performance measurement alone will typically not provide the data that program managers need to understand why performance is below expectations, nor will it tell them how they may improve operations. The wide variety of program evaluation techniques that can be used to illuminate program operations complements performance measurement." (Wholey and Newcomer 1997, p.98) Impact evaluations can also "help agencies confidently attribute the achievement of intended results to one or more of its programs by providing information on the extent to which those programs contributed to the results achieved." (USGAO 1997a)

In cases where ongoing performance measurement is too costly or difficult, it may be more appropriate to conduct periodic evaluation studies. The reverse is also true. Where periodic evaluations are too expensive to conduct on a timely basis, a performance measurement system that tracks proxy performance indicators will provide the essential management information. (Zapico-Goñi and Mayne 1997, p.261) Evaluations can also be used to help develop the performance measurement system by generating performance measures. (Zapico-Goñi and Mayne 1997, p.262) "In designing and implementing a new performance measurement system, cost effectiveness, avoiding mistakes and acceptance and confidence in the new system are more likely to be achieved if this system makes appropriate use of information already being collected and builds on the expertise gained from the related and already established data collection, monitoring, evaluation and research functions within the public sector." (Nakamura and Warburton 1998, p. 41)

24. Incentives can be used to foster support

There is evidence to suggest that providing incentives, whether financial or non-financial, causes individuals to change their behaviour and helps communicate what is important to the organisation. (National Performance Review 1997; PricewaterhouseCoopers 1999, p.11) Rewarding successful employees is needed to complete the accountability framework. "Accountability is a two-way street. The organization must reward individuals who keep their end of the bargain." (National Performance Review 1999)

The most successful results-based management systems are non-punitive. They should focus on using the information to help improve programs and projects. (National Performance Review 1997) Other evidence states that the system can include penalties, but that these should be introduced slowly. (Gibson and Boisvert 1997, p.18) Introducing sanctions, especially those linked to the budget, can compromise staff and management buy-in and commitment. (National Performance Review 1999)

25. Performance reporting is needed for decision-making

Information regarding progress towards achieving objectives should be reported and communicated to all stakeholders. As stated earlier, results-based management should be implemented using a participatory approach. Stakeholders involved in the process will want to be kept informed of the progress. This vital flow of information should be maintained. (National Performance Review 1997, 1999)

Managers and staff need performance information relatively frequently in order to make adjustments to programs and to later assess the effectiveness of those adjustments. (Hatry 1997b, p.40) The most successful organisations have kept in constant communication with their staff at all levels. (National Performance Review 1999) To ensure optimal use of performance information, careful attention should be paid to the frequency with which it is reported. An overemphasis on frequent and detailed reporting without sufficient evidence of its value for public managers, the government, parliament, and the public will not meet the information needs of decision-makers. "Frequent reporting may facilitate good management of the public sector but there is no guarantee of improvement in performance." (Mascarenhas, 1996, p.21) The manner in which the information is presented will also affect its usefulness, with too much detail detracting from the utility of the information. (Poate 1997, p.57) "Information should be presented in a way that can be easily understood by legislators and the public and is sufficient to provide an understanding of government's performance. Excessive detail, vague or overly technical descriptions and jargon should be avoided as they might cause confusion and misinterpretation." (British Columbia. Office of the Auditor General 1997, p.7)

Other than its use for decision-making, reporting has other advantages. Reporting may actually motivate employees to become more outcome oriented because it makes them more aware of their contribution to the organisation. Their work is in essence validated. (Olsen 1997, p.32) Reporting can also be counter-productive, for example when teachers teach to the test in order to reach targets or when findings are intentionally misrepresented to avoid embarrassment. Poate (1997, p.55) reports that the fear of embarrassment through publication appears to be an effective sanction against managers.

26. Learn, review, and adjust performance measurement systems

Another lesson that has been learned is that even once results-based management is implemented, the work is not done. To ensure continued success, the performance measurement system must be monitored and improved continuously. This will translate into a responsive system that reflects the changing environment in which it operates. (Poate 1997, p.56; Plantz, Greenway and Hendricks 1997, p.24; Epstein and Olsen 1996, p.41) The system should not be static. It should reflect changes in the organisation and changes to programs. It should be flexible enough to allow for revised measures to be developed by managers who, over time, have become more familiar with measurement and have gained some experience in developing indicators. (Epstein and Olsen 1996, p.41, p.43) Even the most experienced individuals will revise their measures at least once, as measures need time to develop. This is really the essence of a learning organisation. Managers and staff learn best through trial and error and a hands-on approach. (Meier 1998, p.i; National Performance Review 1999)

Some also suggest that even before implementing new procedures, past and current systems should be reviewed to better understand why they may have failed to achieve their objectives. Learning from past mistakes and using shared experiences may help gain acceptance and consensus regarding the new initiative. (Poate 1997, p.53)

Bibliography

Abbreviations Used:

CCOLA
Canadian Council of Legislative Auditors
GAO /USGAO
General Accounting Office (United States)
GPRA
Government Performance and Results Act
OECD
Organisation for Economic Co-operation and Development
OESP
Office of Evaluation and Strategic Planning
OPPAGA
Office of Program Policy Analysis and Government Accountability

Alford, John and Baird, Jeanette (1997). "Performance Monitoring in the Australian Public Service: a Government-Wide Analysis", Public Money & Management, April-June, 49-58.

Atkinson, Anthony A. and McCrindell, James Q. (1996). Strategic Performance Measurement in Government: A Study for the Canadian Centre for Management Development, University of Waterloo, Waterloo, Ontario.

Axson, David A.J. (1999). "The Fastest Route to Right Answers: Refining Approaches for Better Decision-Making Through Performance Reporting", Strategy & Leadership, May/June, vol. 27, No 3, 6-10.

British Columbia. Office of the Auditor General (1997). Reporting Performance Information, in Proceedings of a Symposium on Public Sector Accountability Reporting and Auditing Issues, CCOLA Study Group.

Caiden, Naomi (1998). "Public Service Professionalism for Performance Measurement and Evaluation", Public Budgeting & Finance, Summer, Volume 18, Number 2, 35-52.

Downey, Mortimer L. (1998). "Making GPRA Work for Your Agency", The Public Manager, Fall, Volume 27, Number 3, 18.

Epstein, Jeff and Olsen, Raymond T. (1996). "Lessons Learned by State and Local Governments", The Public Manager, Fall, 41-44.

Florida Legislature, OPPAGA (1997). Performance-Based Program Budgeting in Context: History and Comparison, Tallahassee, Florida.

Gibson, Judith and Boisvert, Brenda (1997). Data-Driven Performance: Accountability, Business Planning and Performance Measurement in Government, The Conference Board of Canada, 207-97.

Hatry, Harry (1997a). "We Need a New Concept of Accountability", The Public Manager, volume 26, no 3, 37-38.

Hatry, Harry (1997b). "When the Rubber Meets the Road: Performance Measurement for State and Local Public Agencies", New Directions for Evaluation, Fall, No.75, 31-44.

Hope, Jeremy and Fraser, Robin (1998). "Measuring Performance in the New Organisational Model", Management Accounting, Vol. 76, No. 6, 22-23.

Itell, Jeffrey (1998). "Where Are They Now? - Performance Measurement Pioneers Offer Lessons from the Long, Hard Road", The New Public Innovator, May/June, 11-17.

Joyce, Philip G. (1997). "Using Performance Measures for Budgeting: A New Beat, or Is It the Same Old Tune?", New Directions for Evaluation, Fall, No.75, 45-61.

Kettl, D. F. (1997). "The Global Revolution in Public Management: Driving Themes, Missing Links", Policy Analysis and Management, Volume 16, Number 3: 446-462.

Laurent, Anne (1999). Extreme Measures.

Local and regional authorities in Europe (1997). The use of performance indicators in local public services, Council of Europe, Strasbourg, France.

Mayne, John (1997). "Accountability for Program Performance: A key to Effective Performance Monitoring and Reporting", In Monitoring Performance in the Public Sector, ed. John Mayne and Eduardo Zapico-Goñi, New Brunswick, NJ, Transaction Publishers.

Mascarenhas, R.C. (1996). "Searching for Efficiency in the Public Sector: Interim Evaluation of Performance Budgeting in New Zealand", Public Budgeting & Finance, Fall, 13-27.

Meier, Werner (1998). Lessons Learned from Implementing Results-Based Management (RBM) in CIDA, Performance Review Branch, Canadian International Development Agency.

Nakamura, Alice O. and Warburton, William P. (1998). "Performance Measurement in the Public Sector", Canadian Business Economics, Winter, Volume 6, Number 2, 37-48.

National Performance Review (1999). Balancing Measures: Best Practices in Performance Management.

National Performance Review (1997). Benchmarking Study Report.

Newcomer, Kathryn E. and Downy, Amy (1997-98). "Performance-Based Management: What Is It and How Do We Get There?", The Public Manager, Winter, Volume 26, Number 4, 37-40.

Newcomer, Kathryn E. and Wright, Roy E. (1996-97). "Toward Effective Use of Performance Measurement in the Federal Government", The Public Manager, Winter, 31-33.

OECD (1997). In Search of Results: Performance Management Practices, Paris, France.

Olsen, Raymond T. (1997). "Performance Management Strategies Showing Promise", The Public Manager, Spring, 29-33.

Plantz, Margaret C. , Greenway, Martha Taylor and Hendricks, Michael (1997). "Outcome Measurement: Showing Results in the Nonprofit Sector", New Directions for Evaluation, Fall, No.75, 15-30.

Poate, Derek (1997). Measuring & Managing Results: Lessons for Development Cooperation, OESP, New York.

PricewaterhouseCoopers (1999). Managing Corporate Performance Today and Tomorrow, UK, Author.

Shergold, P. (1997). "The colour purple: perceptions of accountability across the Tasman", Public Administration and Development, volume 17, p.293-306.

State Services Commission (1999). Occasional Paper No 7: Looping the Loop: Evaluating Outcomes and Other Risky Feats, Wellington, New Zealand.

Thomas, Paul (1998). "The Politics of Performance Measurement", Management, Volume 8, no 2, 17-19.

USGAO (1997a). Managing for Results: Analytical Challenges in Measuring Performance, GAO/HEHS/GGD-97-138, Washington, D.C.

USGAO (1997b). The Government Performance and Results Act: 1997 Governmentwide Implementation Will Be Uneven, GAO/GGD-97-109, Washington, D.C.

Wholey, Joseph S. and Newcomer, Kathryn E. (1997). "Clarifying Goals, Reporting Results", New Directions for Evaluation, Fall, No.75, 91-98.

Zapico-Goñi, Eduardo and Mayne, John (1997). "Performance Monitoring: Implications for the Future", In Monitoring Performance in the Public Sector, ed. John Mayne and Eduardo Zapico-Goñi, New Brunswick, NJ, Transaction Publishers.

Annex A

Summary of Lessons Learned from 1996 Report

The following is a summary of the lessons learned from the original document published by the OAG in 1996. The complete report is available from the Office of the Auditor General. For more discussion of specific points raised under any of the lessons learned, the reader is advised to refer to the cited references contained in the bibliography which provides a comprehensive list of reports and articles.

1. Implementation plans need to anticipate resistance and maintain momentum

Timing is an important factor to consider during implementation. There are risks associated with taking either too much or too little time to implement results-based management. A too-hurried implementation will only exacerbate fears, but there are also risks associated with a slow implementation process. (National Academy of Public Administration 1994, p. 3) It is therefore important to keep the momentum going at a balanced pace. (Broom 1995, p. 16)

2. Pilot projects can be a useful approach

Conducting pilot projects presents a good opportunity for organisations to test new systems. (Broom 1995, p. 15; Henry and Dickey 1993, p. 211; Likierman 1993, p. 18) Pilot projects have facilitated implementation by easing the organisation into reform. They represent an opportunity to work out problems with some or all of the components of the new system while it is being used on a small scale. (Jackson 1991, p. 20)

3. Developing a performance management culture is critical

A results-based culture must permeate all levels of the organisation and be consistent throughout each type of activity. (OECD 1994, p. 21) Key is the development of results-focused objectives within the organisation and fostering a culture of measurement and learning. (Western Australia. Office of the Auditor General 1994, p. 17; Broom 1995, p. 13-14) "While experience suggests that appropriate leadership providing appropriate signals and incentives can successfully change this culture, the reform processes require ongoing consistency in these signals." (OECD 1993, p. 47)

4. Senior level leadership and continued commitment is essential

There is strong evidence to suggest that senior level leadership is necessary for successful implementation. (Holtham, p. 5; OECD 1994, p. 47) It is critical that senior management, from top to middle managers, fully support and actively participate in the implementation of results-based management. (USGAO 1994, p. 14-15; Broom 1995, p. 16) Their level of commitment sets the tone for the entire organisation. (Treasury Board Secretariat 1996, p. 6; Jackson 1991, p. 19; Shaw 1987, p. 8)

5. Full participation will foster support for implementation

In all cases, governments attribute successful implementation to full participation of staff at all levels. (National Academy of Public Administration 1994, p. 5; USGAO 1995a, p. 12) By involving managers and staff in the process of developing the strategic plans, performance measures, etc., a sense of ownership and commitment is developed. (Meekings 1995, p. 8; Likierman 1993, p. 17) People are less likely to criticize something they have developed themselves and are more motivated to work together to accomplish common objectives. (USGAO 1994, p. 14; USGAO 1995a, p. 15)

6. Training and education are key ingredients for success

Among leaders of results-based management, training is listed as an essential ingredient for reform. (USGAO 1995a, p. 15; OECD 1994, p. 47; Treasury Board Secretariat 1996, p. 8) Training in the development and use of performance measures builds the skills needed for results-based management in organisations that have historically been preoccupied with inputs. (Western Australia. Office of the Auditor General 1994, p. 18) Training not only assists in the acquisition of skills, but also in changing the organisational culture. (Jackson 1991, p. 20)

7. Key terms and concepts should be clearly defined

Another lesson has been that providing definitions of key terms and concepts will assist in the implementation process. There is a general lack of understanding of terms and concepts in performance-based management. (National Academy of Public Administration 1994, p. 4) A standard set of key terms and concepts will help minimize misunderstandings and miscommunications and will foster consistency throughout the organisation. (National Academy of Public Administration 1994, p. 7)

8. Management systems need to be aligned to support implementation

Managers need to be provided with a supportive infrastructure which is conducive to collecting additional or at least a different type of data for the results-based management system. Resistance and lack of support among those responsible for measurement is likely if this support is not provided. (Western Australia. Office of the Auditor General 1994, p. 17; OECD 1994, p. 49) Planning, budgeting and reporting systems need to be realigned and integrated with existing systems so as not to become burdensome. (USGAO 1994, p. 18; Likierman 1993, p. 19; Nagel and Cutt 1995, p. 47; Shaw 1987, p. 8)

9. Incentives can be used to foster support

Managers will be less likely to commit to achieving results if they do not have the flexibility needed to manage their programs efficiently and effectively. (USGAO 1995a, p. 15) The idea of having fewer rules and controls is an attractive incentive in support of results-based management. (Groszyk 1995, p. 14) The use of other types of incentives needs to be carefully considered. Pay for performance has been used in some jurisdictions (OECD 1995, p. 38), but not always with positive results. (OECD 1993, p. 29) Incentives can be counterproductive when the perception is that good performance is rewarded and poor performance is penalized with budget cuts and staff reductions, rather than seen as an opportunity to learn. (Pollitt 1986, p. 168)

10. Use and hence credibility of the performance information is essential for effective implementation

The performance information produced must both be used and seen to be used. If performance information is not used, there is no value in measuring results, and the system loses its credibility. (Meekings 1995, p. 6 & 8) Using the information means more than simply requesting published reports on results. The information must be factored into the decision-making process, and hence must be credible information. (National Academy of Public Administration 1994, p. 7)

11. Moving from input/output measures to outcome measures is increasingly key to success

There is significant evidence to suggest that rather than the traditional focus on input and activity, the more successful performance measurement systems focus on program outcomes. (National Academy of Public Administration 1994, p. 7) Experience has shown that developing performance measurement systems that are outcome-linked is critical for using performance measures to improve programs. (USGAO 1995b, intro., p. 3)

12. Establishing a results measurement specialist is important

There is evidence to suggest that organisations cannot rely on their managers to implement a change of this type without knowledgeable assistance to avoid focusing on activities. A knowledgeable specialist would work with program managers and line staff to develop their missions, goals, objectives, and measures. This is a quality control measure that allows a single individual to assure consistency and continuity across program units in the construction of performance measures. (American Society for Public Administration 1996, p. 7)

Bibliography

Abbreviations Used:

OECD
Organisation for Economic Co-Operation and Development
USGAO
United States General Accounting Office

American Society for Public Administration (1996). Performance Measurement: A Work in Progress: Minnesota's Department of Labour and Industry, Washington, D.C.

Broom, Cheryle A. (1995). "Performance-Based Government Models: Building a Track Record", Public Budgeting & Finance, Volume 15, Number 4, 3-17.

Groszyk, Walter (1995). Using Performance Measures in Government: Implementation of the Government Performance and Results Act of 1993, prepared for OECD meeting, Paris.

Henry, Gary T. and Kent C. Dickey (1993). "Implementing Performance Monitoring: A Research and Development Approach", Public Administration Review, Vol. 53, No. 3, 203-212.

Holtham, Clive "Developing a System for Measuring Departmental Performance", Public Money & Management.

Jackson, Peter M. (1991). Measuring Performance in the Public Sector, Public Sector Management Conference, FEE, 11-22.

Keating, M.S. (1990). "Managing for Results in the Public Interest", Australian Journal of Public Administration, Vol. 49, No. 4, 387-398.

Likierman, Andrew (1993). "Performance Indicators: 20 Early Lessons from Managerial Use", Public Money & Management, Volume 13, Number 4, 15-22.

Management Advisory Board and Management Improvement Advisory Committee (1993). Performance Information and the Management Cycle, Canberra, Australia.

Meekings, Alan (1995). "Unlocking the Potential of Performance Measurement: A Practical Implementation Guide", Public Money & Management, 5-12.

Nagel, Kevin F. and James Cutt (1995). Strategic Management, Accountability and Performance Measurement in a Provincial Government Organisation: A Review of the Approach and Experience of the British Columbia Ministry of Transportation, Center For Public Sector Studies, University of Victoria.

National Academy of Public Administration (1994). Towards Useful Performance Measurement: Lessons Learned from Initial Pilot Performance Plans Prepared Under Government Performance and Results Act, Washington, D.C.

Western Australia. Office of the Auditor General (1994). Special Report: Public Sector Indicators 1993-94, Report no. 7.

OECD (1995). Performance Management in Finland, Public Management Committee, Paris, 34-42.

OECD (1994). Performance Management in Government: Performance Measurement and Results-Oriented Management, Public Management Occasional Papers, No.3, Paris.

OECD (1993). Towards a Results-Oriented Culture, Public Management Committee, 8th Session, Paris.

Pollitt, Christopher (1986) "Beyond the Managerial Model: the Case for Broadening Performance Assessment in Government and the Public Services", Financial Accountability & Management, 2(3), Oxford, U.K., 155-170.

Shaw, Nigel (1987). "Productivity Development in the United States", Management Services, Vol. 31 No. 10 & 11, 88-14.

Treasury Board Secretariat (1996). Towards a more Results-Driven Management Culture in the Federal Public Service, Ottawa.

USGAO (1995a). Government Reform: Goal-Setting and Performance, (GAO/AIMD/GGD-95-130R), Washington, D.C.

USGAO (1995b). Managing for Results: Critical Actions for Measuring Performance (GA/T-GGD/AIMD-95-187), Washington, D.C.

USGAO (1995c). Managing for Results: Experiences Abroad Suggest Insights for Federal Management Reforms (GAO/GGD-95-120), Washington, D.C.

USGAO (1995d). Managing for Results: Status of the Government Performance and Results Act (GAO/T-GGD-95-193), Washington, D.C.

USGAO (1994). Managing for Results: State Experiences Provide Insights for Federal Management Reforms (GAO/GGD-94-22), Washington, D.C.