PERFORMING FOR WHOM? AN EVALUATION OF THE WELSH HOUSING MANAGEMENT PERFORMANCE INDICATORS REGIME Robert S.G. Smith and Richard M. Walker
1 Introduction
Over recent years the World Bank and the United Nations Centre for Human Settlements have developed a range of global housing performance measures (World Bank, 1992a). These performance measures have been grouped into six performance indicator modules: housing affordability, housing finance, housing quality, housing production, housing subsidies, and regulatory audit. In total, these comprise 45 indicators (Priemus, 1992). Data on these indicators has been collected in 52 countries and analysed to produce comparative league tables of housing market performance (World Bank, 1992b). Performance measurement regimes have also been developed within individual countries over the last decade, with specific reference to publicly provided services (Carter et al., 1992). These regimes have been extended to include housing. Their aim is to clarify the performance of housing management in the public sector (for example, Department of the Environment, 1990; Department of Housing and Urban Development, 1992a; National Federation of Housing Associations, 1992; Welsh Office, 1991a).

The development of performance regimes for particular sectors of the housing economy highlights a weakness in the World Bank/United Nations approach. The underlying assumption behind the World Bank model is that markets can be enabled to provide solutions to 'housing problems'. However, in countries with complex housing markets, there is often significant intervention by the state in the form of non-market, or public-sector, housing (see Maclennan and Gibb, 1993, for a more detailed argument). Housing is also used as a tool of broader economic policies. For example, in England and Wales, the 1992 Autumn Public Expenditure Statement allocated millions of pounds for housing associations to purchase second-hand owner-occupied properties. This initiative was primarily aimed at aiding recovery in the private-sector housing market; it was not solely designed to expand the supply of socially rented housing.
These externalities have to be taken into account when attempting to analyse the performance of the housing policies of different countries, and particularly when comparing them. This problem is witnessed in the World Bank/United Nations research
Neth. J. of Housing and the Built Environment, Vol. 9 (1994) No. 1.
on housing indicators. The indicators are collected in 52 countries. On this basis, comparisons of 'housing sector performance' (World Bank, 1992a, p. 4) are undertaken. However, the data is not collected at a national level. Rather, it is derived from particular cities, usually capital cities (World Bank, 1992b). This leads to distorted information which does not reflect the operation of the housing markets in those countries. If we take the example of the UK, London is the city where data is collected. There are a number of problems here. First, how is London defined? Second, the housing market in London does not duplicate that of the rest of the UK. Traditionally, average house prices have been higher in London than in many other UK cities, whilst fluctuations in house prices have often been marked. In addition, pressures on the social housing sector in London exceed those of other areas. This may be seen, for example, in higher levels of homelessness in the capital.

These issues indicate the significance and complexity of performance measurement regimes. They also show the need to explore performance measurement within different sectors of the housing market. In this paper, we seek to contribute to the recent discussion of the role of performance indicators that has been taking place in this journal. We do so by examining what could be called an existing social rented housing module. Maclennan and Gibb (1993) argue for the inclusion of such a module in the future work undertaken on international housing indicators. The specific purpose of this paper is to set out the background to performance measurement in public-sector housing, where we have seen the development of performance indicators and expectations. Initially, we outline the background to these developments. Secondly, we critically examine the assessment of performance in respect of local authority housing management services, using information from the Report to Tenants etc.
(Wales) Determination Orders 1990 and 1991 (Welsh Office, 1991a; 1992) (referred to below as the determination). Thirdly, we highlight key indicators of performance which may be of value not only to customers but also to service providers and managers.

Following the introduction of the Report to Tenants etc. Determination (implementing the requirement of Section 167(1) of the Local Government and Housing Act 1989 for every local housing authority to provide a report on its housing management performance to each tenant), there has been further interest in measuring the quality of publicly provided services. The new financial regime for local housing authorities introduced in 1990, the Citizen's Charter, the Government's white paper 'Competing for Quality' (H.M. Treasury, 1991), and the consultation papers on competing for quality in housing management (Department of the Environment/Welsh Office, 1992; Welsh Office, 1993) have all increased the pressure on local authorities to focus their attention on performance measurement and the setting of performance standards and targets. In the past, local authorities have perhaps not been in the forefront of developing performance measures or understanding the requirements of their customers. Yet, faced by the challenge of legislation, they will be forced to develop a co-ordinated approach to performance measurement, target setting, and outcome monitoring.
2 Background to performance measurement

At its simplest level, performance measurement is a tool to assess the impact and success of policies and organisations in delivering all aspects of services. However, as Carter et al. (1992) note, it is a very broad concept that can have different meanings for different audiences in different contexts. Within local government, the finance department may be concerned with financial performance, whereas elected members and officers may be interested in the effectiveness of policy. Meanwhile, customers, whether they be tenants or the local electorate, are concerned about the quality of services.

In addition to competing audiences, the concept of performance itself has been seen as having three distinct yet related elements. These are economy (the purchase of resources at the lowest cost consistent with a specified quality and quantity); efficiency (delivering a specified volume and quality of service using the least resources capable of meeting the specification); and effectiveness (providing the right service to enable the local authority to implement its policies and objectives). Some have argued that a fourth 'E' should be added: equity; this would ensure a fair and equitable distribution of resources and outcomes for different groups (Mullins, 1991). Yet others have tried to extend the alphabet to include quality and customer voice (Pollitt, 1988). Added to the three 'E's, it has been argued (Accounts Commission, 1992; Audit Commission, 1988; Department of the Environment, 1989; Welsh Office, 1989) that performance review should incorporate two distinct activities: monitoring and analysis. Monitoring is the regular measurement of actual performance against set targets. Whereas monitoring examines what is done, analysis considers how it is done. Analysis asks whether the service can be provided more efficiently or effectively in a different way.
Alongside the issue of measurement, there has been considerable debate concerning performance indicators as a management tool. The National Federation of Housing Associations (1992, p. 4) defined indicators as "... facts which help us to assess whether we are achieving our targets and thereby objectives." However, as Elton (1987) has noted, there is a danger that whatever is easily measurable will be used as a performance indicator. In our view it is not necessary for all performance indicators to be quantitative and objective; some may be qualitative and more subjective. What is important is that they are useful measures, either of inputs (levels of resources used, or measures of economy), throughputs (levels of services provided, or measures of efficiency), or outputs (outcomes achieved, or measures of effectiveness). These definitions are not universally accepted; for example, the Audit Commission (1988) talks of inputs, outputs, and impacts.

After the demise of corporate planning in the late 1970s, little attention was given to performance management in local government until the late 1980s (Clapham and Satsangi, 1992). Interest was revived with, for example, the establishment of the Audit Commission. What is of particular note is that performance measurement regimes in public-sector housing developed later than in other public services. We now explore the aims and intended audience of the Report to Tenants etc. before going on to evaluate the regime.
3 The Report to Tenants' Determination in Wales
Public-sector housing in Wales is provided by the 37 district councils. These public housing authorities provide nearly one in five homes in Wales (Welsh Office, 1991b), representing 223,000 homes. However, there was an almost 20 per cent fall in the scale of council housing in Wales during the 1980s. This is due to the transfer of stock from the public sector to the private sector, primarily under the right-to-buy legislation. Though the housing legislation governing the 37 districts is enacted in England, the task of housing-policy formulation is devolved to the Welsh Office. Since its inception in 1965, the Welsh Office has increasingly developed and supported a series of new housing-policy initiatives quite separately from those pursued in England by the Department of the Environment. Furthermore, it has adapted policy to suit the Welsh context (Smith and Williams, 1991, p. 168).

The stated legislative aim of the determination is to allow local-authority tenants to judge whether they are getting value for money from their local housing authority. We can also identify two additional aims of the legislation. The first is to establish a dialogue between landlords and tenants about the housing service; the second is to increase the accountability of the housing service. Though the report is explicitly aimed at tenants, it has two further audiences: local authorities and the Secretary of State. The existence of a multiple audience for the report introduces an inherent conflict into the performance indicator regime. This discrepancy leads one to question whether the existing determination and subsequent report adequately serve all three groups. The same report should allow tenants to judge value for money by providing them with information. It should help the Secretary of State to understand the nature and performance of local housing authorities.
And it should help local-authority service managers and providers to understand more about their performance, while allowing them to compare themselves with other authorities.

The use of performance indicators in housing has a relatively short history. Nevertheless, their primary role has been to assist central government in understanding the nature and performance of local housing authorities (see Audit Commission, 1986a, b; Department of the Environment, 1989; Welsh Office, 1989). The use of performance indicators has since been extended to include tenants as well as local and central government. These indicators are collected and analysed by local authorities rather than by external agencies (for example, the Audit Commission). The wider range of users allows us to consider how the performance indicators regime has helped in determining the performance of local housing authorities. Has the regime been of much value to central government and to service providers and managers? The use of the performance information by central government is witnessed by the Welsh Office's production of unpublished league tables. These are drawn from the reports and show each authority's performance in relation to all other authorities. The tables attribute cost savings where appropriate. In the first two years of the reports, the information was kept confidential. There is, of course, the potential for this information to be made publicly available, in line with central government views of local accountability (for example, the Citizen's Charter). These developments have also been reflected in the housing association sector in England (Askew et al., 1993). In addition, they have been linked to incentives, as performance measurement
is done in the USA (Department of Housing and Urban Development, 1992a, b).

The process of deciding upon the content of the determination is consultative. Local authorities and a range of agencies and pressure groups can influence the content of the determination through consultation exercises. However, tenant input into this process has been limited. The Act defines the mechanisms requiring local authorities to provide a performance indicator report (referred to below as the report) to all their tenants, and to the Secretary of State, within six months of the end of the financial year. But it leaves the final responsibility for the content of the determination to the Secretary of State. The housing-policy community from which the determination arose thus raises some significant questions. The content of the determination is centrally decided, though local authorities and pressure groups are involved. Yet the information in the report is not necessarily that which tenants need to understand the performance of their landlord. Nor does it necessarily give local authorities an understanding of their own performance (see Housing Management Advisory Panel, 1991, for a series of alternative indicators). Furthermore, we shall argue here that the performance indicators requested in the determination do not, in fact, bring central or local government, providers and managers, or tenants any closer to understanding the overall effectiveness of local housing authorities. Rather, as we argue below, they point towards measures of economic efficiency. These views are developed through an evaluation of the performance indicator regime in Wales.
4 An evaluation of the Welsh performance indicator regime

The 1990/91 determination required authorities to provide information on five broad housing management areas: participation, repairs, rents, lettings, and management. Thus, authorities have to provide data on a minimum of 24 items. The 1991/92 determination added three further indicators: a mid-year stock figure, expenditure on programmed repairs, and expenditure on cyclical repairs during 1991/92. The determination for 1992/93 requires information on administration and estate management and sets out revised definitions of terms (this is discussed below). For the purpose of this paper, we will examine the 1990/91 and 1991/92 determinations and reports where shared definitions and information exist.

To appreciate the value of the 1990/91 determination in terms of explaining the effectiveness of local housing authorities, each indicator has been classified as an input (a measure of economy), a throughput (efficiency), or an output (effectiveness). Table 1 identifies the 24 indicators in the housing-management areas specified in the determination. It shows that the majority are input indicators. This overt reliance on input indicators (20 of the 24) allows us to understand the economic efficiency of local housing authorities only partially. It precludes an understanding of the overall effectiveness of the housing service. We have argued above that performance measurement needs to extend beyond the area of economic efficiency, which typified much of the early work (for example, see Audit Commission, 1986a).
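The 20:1:3 split can be tallied in a few lines. The per-area encoding below is our own reading of the classifications shown in Table 1, not data drawn from the determination itself:

```python
from collections import Counter

# Our encoding of the Table 1 classifications, area by area:
# participation (2 input, 1 throughput), repairs (10 input, 1 output),
# rents (3 input, 1 output), lettings (3 input, 1 output),
# management (2 input).
classification = (
    ["input"] * 2 + ["throughput"]      # participation
    + ["input"] * 10 + ["output"]       # repairs
    + ["input"] * 3 + ["output"]        # rents
    + ["input"] * 3 + ["output"]        # lettings
    + ["input"] * 2                     # management
)

tally = Counter(classification)
print(len(classification), tally)  # 24 indicators, 20 of them inputs
```

The tally makes the imbalance explicit: only four of the 24 indicators measure anything other than inputs.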
Table 1  The Report to Tenants performance indicators 1990/91 classified as input, throughput and output

Performance indicators (for financial year 1990/91)                 Performance measure

Participation
  No. of recognised tenants' groups and federations                 Input
  Funding for tenants' associations and federations in the
    financial year                                                  Input
  Description of how tenants' satisfaction with repairs is
    assessed                                                        Throughput

Repairs
  No. of responsive repairs completed in the financial year         Input
  Average response times for responsive repairs                     Output
  No. of accepted repair requests outstanding                       Input
  No. of dwellings with programmed repairs completed                Input
  Proposed expenditure on programmed repairs in succeeding
    financial year                                                  Input
  Proposed no. of dwellings involved in programmed repairs in
    succeeding years                                                Input
  No. of dwellings with cyclical repairs completed                  Input
  Proposed expenditure on cyclical repairs in succeeding
    financial year                                                  Input
  Proposed no. of dwellings involved in cyclical repairs in
    succeeding years                                                Input
  Total repair and maintenance expenditure in the financial year    Input
  Average repair and maintenance expenditure per dwelling in the
    financial year, for the stock at mid-year                       Input

Rents
  Average weekly rent levels at the end of the financial year       Input
  Current tenants' rent arrears at the end of the financial year    Input
  Current tenants' arrears at the end of the financial year as a
    percentage of the total rent collectable over that year         Output
  No. of households with four or more weeks' gross rent in
    arrears at the end of the financial year                        Input

Lettings
  No. of transfer requests outstanding at the end of the
    financial year                                                  Input
  No. of acceptances of transfer requests in the financial year     Input
  No. of voids at 31 March or 1 April, by type of void              Input
  Average turn-around time for voids, averaged for all properties
    let during the financial year                                   Output

Management
  No. of established posts at 31 March in relation to the local
    authority's role as housing manager                             Input
  Staff costs incurred in housing management in the financial
    year, divided by the number of dwellings in the stock at
    mid-year                                                        Input

Source: Welsh Office, 1991a.
The limited amount of information required in the determination, 24 pieces as against 75 plus in England, reflects not just the limitations of local government information systems in Wales. The need for a limitation has also been argued on the grounds that the intended audience is comprised of tenants. It has been suggested that tenants are interested only in limited amounts and specific pieces of information. However, other research suggests that this is an incorrect assumption (Hutt and Walker, 1992; Niner, 1992; Walker, 1992).

The performance area of lettings can be taken to illustrate the limitations of the determination. In this area only limited information is available on which to base an understanding of organisational effectiveness for customers and service managers. The lettings area includes transfers and voids information (see Table 1 for the detailed indicators). This is likely to be of direct concern to tenants; they can see their chances of getting a transfer and perceive why stock is vacant. However, readers of the reports are unable to understand the effectiveness of local housing authorities in letting homes, because the reliance on input information yields only a partial picture. No information is provided about transfer throughputs, i.e. transfer policy, or output ratios, the latter referring to the percentage of transfer applicants rehoused (unless readers conduct the calculation themselves). Information on voids provides only a 'snapshot' picture and scarce output information on void turn-around times. Information is lacking on the average turnover of stock and the types of dwellings becoming vacant. And no information is given on waiting-list administration, i.e. new lettings or the fairness of allocations. Therefore, it is impossible for tenants, or any other audience, to understand performance across a range of lettings issues, such as value for money, equity, or effectiveness.
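To illustrate, the transfer output ratio just mentioned — the percentage of transfer applicants rehoused — could in principle be computed by a reader from the two transfer indicators that are reported. The sketch below uses hypothetical figures, and the helper name is ours; the determination itself requests neither the ratio nor any such calculation:

```python
def transfer_output_ratio(acceptances: int, outstanding: int) -> float:
    """Percentage of transfer applicants rehoused, approximated from the
    two transfer indicators in the determination: acceptances during the
    financial year and requests still outstanding at the year end."""
    applicants = acceptances + outstanding
    return acceptances / applicants * 100

# Hypothetical authority: 150 transfers granted, 450 requests outstanding.
print(round(transfer_output_ratio(150, 450)))  # 25 (per cent)
```

Even this simple ratio is only an approximation, since the year-end snapshot of outstanding requests does not capture applications made and withdrawn during the year.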
The limited information on lettings subsequently makes interpretation of the transfer information difficult, not only for tenants but for others as well. The determination asks authorities to provide three pieces of output information and one piece of throughput information (Table 1). This is insufficient if we are to use it to understand effectiveness. The throughput data describing how tenant satisfaction with repairs is assessed is not reflected in corresponding output data; it does not show up, for example, in the number of pre- and post-inspections of repairs or in tenant responses to tear-off slips or questionnaires. Thus, it is impossible to gauge the outputs and achievements of these particular policies and practices. Issues arising from the throughput data also raise questions about the usefulness and validity of the output data. Firstly, the output indicator which asks for response times to all types of repairs does not ask for target times. Hence it is impossible to determine effectiveness in this crucial area. Secondly, information on average relet times can be specified by authorities in working or calendar days or weeks, which leads to confusion over actual relet times. Thirdly, and linked to this, because of the lack of targets or policy information for relets, one is unable to interpret the significance of changes in performance from one year to the next. For example, between 1990/91 and 1991/92 the average relet time moved from 3.00 weeks to 22.00 weeks in Blaenau Gwent and from 8.00 weeks to 21.00 weeks in Port Talbot, yet there is no requirement to explain these changes. Consequently, the determination leaves only one indicator that provides information about overall effectiveness. This sole indicator is the percentage of current tenants' rent arrears at the end of the financial year.
The value of the input indicators is further weakened by the reliance on absolute figures for 18 of the 24 indicators. For example, the determination requests the number of dwellings where cyclical and planned maintenance was completed in the current financial year and the anticipated number of dwellings for the succeeding year, including the proposed expenditure. To understand organisational effectiveness, data is needed on the percentage of homes where planned or cyclical maintenance has been undertaken (though the 1990/91 determination did not ask for stock figures) and on the percentage of expenditure devoted to planned maintenance as against responsive repairs. Thus, without analysing the indicators provided in the reports or searching for data from other sources, many aspects of the determination remain problematic. This is particularly the case for tenants, who would not necessarily have access to some of this additional data.

Above we have identified a number of specific problems with the indicator regime and the indicators. Below we highlight a problem pertaining to all aspects of the reports. Authorities derive the information in their reports from the specified performance indicators and an accompanying list of definitions contained in the determination. The definitions are limited both in their capacity to explain the requirements of the determination and in their extent. This leaves considerable scope for authorities to interpret their data, and it raises significant methodological problems, both for housing researchers and for policy and practice. If this data is to be used to understand and interpret the performance of local housing authorities, a common basis needs to be established. These problems are best illustrated by exploring the management performance area, which includes measures of the full-time equivalent (FTE) staff providing the housing service and the costs of providing that service.
The housing management definition used in the determination to derive the FTE posts suggests a number of functions. It asks for the FTE posts "established in relation to the local authority's role as housing manager" (Welsh Office, 1991a, p. 6). The suggested list is limited in its capacity to describe 'housing management'. It contains some notable exclusions: for example, it excludes all aspects of void control, housing management duties undertaken by wardens, disputes between tenants, general estate management, and the development of policy or research with particular reference to housing management. This creates a real problem. Given the ring-fencing of the Housing Revenue Account in 1990, as defined in the Local Government and Housing Act 1989, the local authority's role as housing manager should be clearly defined. Furthermore, as the determination asks for information about the authority as housing manager, one would expect all staff charged to the Housing Revenue Account to be included in this figure. This definitional paradox is compounded by two other factors. First, the 1992/93 determination removes all aspects of repairs and maintenance from the definition of housing management. Second, the definition of the housing management functions which are to be put out to tender in 1997/98 (Welsh Office, 1993), through Compulsory Competitive Tendering of local-authority housing-management services, differs significantly from that in the determination. The changing nature of the definitions in the determination, and the mismatches with other policy developments, further reduce the explanatory power of the determination and the subsequent reports.
Table 2  Variation in reporting the number of full-time equivalent housing management staff

                          Report to Tenants    Audit Commission
Local authorities         FTE staff            No. full-time    No. part-time
                                               staff            staff
Alyn and Deeside            74.00                 44                 4
Blaenau Gwent               93.00                 79                13
Ceredigion                  31.00                 40                 0
Delyn                       31.51                 73                22
Glyndwr                     14.50                 17                 4
Islwyn                      99.00                 82                 2
Lliw Valley                 40.50                 72                 0
Meirionnydd                 18.00                 28                 8
Monmouth                    13.71                 35                 6
Newport                     56.00                164                26
Ogwr                        85.00                 63                 1
Port Talbot                 55.00                 36                 4
Radnorshire                 10.00                  3                 0
Rhymney Valley              75.00                117                20
South Pembrokeshire          4.10                 12                 0
Swansea                    216.00                184                 0
Torfaen                    118.00                157                28
Vale of Glamorgan           33.00                 59                 3
Wrexham                    117.00                122                12
Ynys Môn                    24.00                 87                 2

Source: Report to Tenants, 1990/91; Audit Commission, 1990.
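One reading consistent with the understatement figures quoted in the text is that the variation expresses the Audit Commission full-time count as a percentage of the FTE figure reported by the authority; the overstatement figures appear instead to take the difference as a percentage of the Audit Commission count. A sketch of the first calculation, using two rows from Table 2 (the function name is ours):

```python
def understatement_pct(reported_fte: float, audit_fulltime: int) -> float:
    """Audit Commission full-time staff count expressed as a percentage
    of the FTE figure the authority gave in its Report to Tenants."""
    return audit_fulltime / reported_fte * 100

# Rows taken from Table 2.
print(round(understatement_pct(56.00, 164)))   # Newport: 293
print(round(understatement_pct(31.51, 73)))    # Delyn: 232
```

Note that this calculation uses the full-time head-count only; folding in the part-time column would require an FTE conversion factor that neither source supplies.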
If we explore the number of FTE staff that authorities provided in their 1990/91 reports and compare this to the Audit Commission's profiles for 1990/91 (Audit Commission, 1990), significant discrepancies emerge. Table 2 highlights sizeable variations in 20 of the 37 Welsh district housing authorities. The most significant understatements of housing management staff are in Ynys Môn, with a variation of nearly 362 per cent; Newport, 293 per cent; South Pembrokeshire, 293 per cent; Delyn, 232 per cent; and Lliw Valley, 177 per cent. Furthermore, this variation is derived by examining the number of full-time staff only. By way of comparison, a number of authorities overstated their number of FTE staff: Radnorshire, by 333 per cent; Alyn and Deeside, 68 per cent; Torfaen, 33 per cent; Islwyn, 21 per cent; and Blaenau Gwent, 17 per cent.

A problem similar to that of the FTEs is found with management costs per tenancy per annum. Table 3 shows that, where data is available, every authority underestimated its management costs in its report compared to Audit Commission (1990) data, which is itself drawn from local authorities. The most significant variations exist in the Vale of Glamorgan, with a 573 per cent underestimation, followed by Newport (484%), Ynys Môn (394%), and Rhymney Valley (259%). Also significant is the range of costs
reported (Table 3). In the tenants' determination, the lowest management cost reported is £44 per tenancy per annum, in Taff-Ely; the highest is £220, in Port Talbot. In comparison, the Audit Commission data reports the lowest cost at £145 in Neath and
Table 3  Variations in the reporting of housing management costs (£ per tenancy per annum)

Local authorities         Management costs identified    Audit Commission
                          in the Report to Tenants
Aberconwy                        129                           348
Arfon                            124                           193
Alyn and Deeside                 145                           206
Blaenau Gwent                     NA                           231
Brecknock                        151                            NA
Cardiff                          198                            NA
Carmarthen                        NA                           283
Ceredigion                       129                           287
Colwyn                           169                           255
Cynon Valley                     149                           202
Delyn                            190                           359
Dinefwr                           62                            NA
Dwyfor                           111                            NA
Glyndwr                           72                           205
Islwyn                           192                           289
Llanelli                         103                            NA
Lliw Valley                      153                           228
Meirionnydd                       NA                           250
Merthyr Tydfil                   176                            NA
Monmouth                          NA                           374
Montgomeryshire                   79                           223
Neath                            104                           145
Newport                           55                           266
Ogwr                              75                            NA
Port Talbot                      220                            NA
Preseli Pembrokeshire            133                            NA
Radnorshire                      157                           347
Rhondda                          103                           335
Rhuddlan                          81                           154
Rhymney Valley                   162                           420
South Pembrokeshire               79                           205
Swansea                          161                           311
Taff-Ely                          44                            NA
Torfaen                          141                           385
Vale of Glamorgan                 65                           373
Wrexham                           92                           167
Ynys Môn                           —                             —

Note: NA: Not available
Source: Report to Tenants, 1990/91; Audit Commission, 1990.
the highest at £420 in Rhymney Valley. These discrepancies can be primarily attributed to the definitions of housing management issued by the Welsh Office. These definitions have been adopted by local housing authorities in Wales and have produced significant variations between the reported figures and the actual figures. This is a particularly difficult area, and discussion of the costs of housing management has a long history (Bedwell, 1949; The Society of Housing Managers, 1957). Nonetheless, it raises questions about the value of the exercise to tenants, the Welsh Office, and local authorities themselves. If this information were relied upon to establish the performance (with particular reference to economy) of local housing authorities in Wales, one would have to seriously doubt any conclusions that were drawn.

This can be further illustrated by an examination of the data provided by Welsh authorities for the first two reports. Table 4 depicts the differing interpretations of the number of FTE staff providing the housing service, whilst Table 5 looks at the changes in management costs per dwelling per annum. Both tables highlight the extremes. Particularly noteworthy is that the two authorities reporting decreases in FTE staff (Blaenau Gwent and Cardiff) report no significant falls in costs, while those authorities which show an increase in FTE staff do not show a corresponding increase in costs. This again leads one to question the validity of the data provided. It suggests that these relatively significant shifts in staff and/or costs are attributable to changing interpretations of the definitions rather than to alterations in the level or cost of the services provided to tenants.

Table 4  Changes in the reporting of FTE between 1990/91 and 1991/92

Local authorities       No. FTE 90/91    No. FTE 91/92    Changes in FTE
Torfaen                    136.00            69.00            -67.00
Cardiff                    320.00           259.00            -61.00
Blaenau Gwent               93.00            59.00            -34.00
Rhymney Valley              75.00            45.00            -30.00
Carmarthen                  53.00            24.00            -29.00
Merthyr Tydfil              44.50            24.50            -20.00
South Pembrokeshire          4.10            12.00             +7.90
Delyn                       31.51           161.59            +12.49
Port Talbot                 55.00            71.00            +16.00
Taff-Ely                    11.45            45.50            +34.05

Source: Report to Tenants, 1990/91 and 1991/92.

Table 5  Changes in the reporting of management costs per dwelling between 1990/91 and 1991/92

Local authorities       Cost per dwelling    Cost per dwelling    Changes in cost
                        90/91 (£)            91/92 (£)            90/91-91/92 (£)
Merthyr Tydfil              176.00                52.99              -123.01
Carmarthen                  258.44               156.00              -102.44
Rhymney Valley              162.00                60.78              -101.22
Torfaen                     141.17                50.44               -90.73
Islwyn                      192.20               108.51               -83.69
Cynon Valley                149.00                90.00               -59.00
Llanelli                    103.00               130.00               +27.00
Dinefwr                      62.00               130.48               +68.48
Montgomeryshire              78.50               169.00               +90.50
Newport                      55.00               164.85              +109.85

Source: Report to Tenants, 1990/91 and 1991/92.

The performance measurement area of participation illustrates similar issues, but also points to a further problem: the non-completion of information. Participation provides two pieces of input data and one piece of throughput data. Information is requested on the number of recognised tenants' associations and their funding. On that basis, the determination is unable to explain landlord attitudes towards tenant participation. This is particularly the case if tenants' groups have emerged around issues rather than having been formed through an explicit policy of developing tenant participation. Research in Wales also demonstrates that tenant participation is not well developed and lags behind England (Tenant Participation Advisory Service (Wales), 1991). The information on funding provides a confused picture when examining commitment to participation across Wales. This relates to a weak definition, whereby authorities are only asked to report the level of funding by the authority. Hence, Table 6 for 1990/91 provides a mixed picture of funding for tenants' associations and does not allow conclusions to be drawn.
Table 6  The number of tenants' groups and their funding mechanisms

Local authority        No. of tenants' associations   Funding (£)
Arfon                               2                   50
Delyn                               4                   NA
Islwyn                              1                7,000 (1)
Port Talbot                         2                    0
Rhondda                             7                 4.50 (2)
Rhymney Valley                      5                 3.40 (2)
Swansea                            12                  700 (1)
Torfaen                             1                2,000 (1)
Vale of Glamorgan                   2                1,854 (3)
Wrexham                            14                  600 (4)

Source: Reports to Tenants, 1990/91 (see note 2).
Notes: (1) Funding per tenants' association, per annum. (2) Funding per dwelling, per annum, in each tenants' association area. (3) Total funding per annum. (4) Funding for the tenants' federation.
As of 1992/93, the determination requires authorities to provide information on grants/funding available to tenants' groups divided by stock at the mid-year. Again, this information is misleading. Tenants' groups do not necessarily exist in equal proportions across local-authority stock. Moreover, the funding provided by most landlords is specifically targeted at groups: it is mostly in the form of start-up grants and funding for ongoing running costs, though it may include the provision of free accommodation and the like. The problematic nature of the indicators and their definitions is further compounded by the non-provision of information by some authorities. Of the 37 authorities, only seven were able to provide all the indicators requested by the Welsh Office in 1990/91. The highest incidence of non-provision was in the area of participation, where 18 authorities failed to provide information for one or more of the three indicators; of these 18, four authorities omitted all information on participation. The area of repairs also caused problems for many authorities: 14 were unable to provide some of the information. This problem recurred in 1991/92, though it then related to different indicators and was less pronounced. The over-reliance on input data and the problematic nature of the data provided by local housing authorities undermine the credibility of the determination. As a mechanism for tenants, service providers and managers, central government, or anyone else, it is inadequate to elucidate local housing authorities' overall effectiveness, or for that matter their economy. Even the revisions incorporated into the 1992/93 determination do not overcome these fundamental problems. Indeed, the determination precludes questions about an individual authority's performance and prevents comparison of the performance of diverse organisations.
Before going on to look at alternative indicators, let us briefly examine the performance measurement regime for public housing authorities (PHAs) in the USA (Department of Housing and Urban Development, 1992a). This regime, the Public Housing Management Assessment Program (PHMAP), does not suffer from as many of the problems identified above. It is primarily concerned with the effectiveness of organisations in delivering services. The regime looks at performance in the areas of stock vacancies and relets; modernisation programmes; rents; energy consumption; repairs; operating reserves and expenses; and resident initiatives and development. A comprehensive manual on collating the indicators is provided (Department of Housing and Urban Development, 1992b). Interestingly, of the 25 indicators required under these headings, 50 per cent are taken from existing returns and thus do not have to be specifically collected, as they are in Wales. Furthermore, some of the indicators are qualitative, requiring the PHA to provide time-series data and information. Each performance indicator has a defined target and is graded on a scale of A to F (A being highest, meaning the PHA has attained the target). Some indicators are weighted; for example, in the area of development, the quality of the physical work is given a weight of three. Each PHA is ranked accordingly. Those deemed to be high performers (with a total weighted score of no less than 90 per cent) are provided with incentives (relief from providing financial returns and reduced monitoring) and are subsequently publicised as successful housing organisations. A poor-performing PHA,
by contrast, is penalised; the ultimate sanction is to be placed in receivership. What sets this regime apart from that of Wales is its specific focus on service managers and central government, in this case represented by the Department of Housing and Urban Development. However, as we have demonstrated, the performance measurement regime in Wales focuses upon three audiences: tenants, managers, and central government. To understand the effectiveness of local housing authorities, we need to go beyond the determination. Indeed, we must look to ways of providing performance indicators that achieve specified objectives and address the needs of the audience.
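The logic of a weighted grading scheme of the PHMAP kind can be sketched in a few lines of code. The following Python fragment is illustrative only: the grade-to-points mapping and the example weights are assumptions for the purposes of demonstration, not HUD's published values; only the A-to-F scale, the weighting of some indicators (e.g. quality of physical work weighted three), and the 90 per cent high-performer threshold are taken from the text.

```python
# Hypothetical sketch of a PHMAP-style weighted scoring scheme.
# The grade-to-points mapping below is an illustrative assumption.
GRADE_POINTS = {"A": 100, "B": 80, "C": 60, "D": 40, "E": 20, "F": 0}

def weighted_score(graded_indicators):
    """graded_indicators: list of (grade, weight) pairs for one PHA."""
    total_weight = sum(w for _, w in graded_indicators)
    points = sum(GRADE_POINTS[g] * w for g, w in graded_indicators)
    return points / total_weight

def classify(score, high_threshold=90.0):
    # PHMAP designates authorities with a total weighted score of at
    # least 90 per cent as high performers, attracting incentives.
    return "high performer" if score >= high_threshold else "standard/troubled"

# Example: quality of physical work weighted three, three other
# indicators weighted one (grades are hypothetical).
indicators = [("A", 3), ("A", 1), ("B", 1), ("A", 1)]
score = weighted_score(indicators)
print(round(score, 1), classify(score))
```

The point of the sketch is that weighting lets a regulator privilege the indicators it regards as most consequential, while the single aggregate score supports the league-table ranking and the incentive/sanction decisions described above.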
5 Future directions

Since the late 1980s, performance measurement has become an integral part of housing management. The influential reports produced by the Audit Commission (1986a, 1988), the Department of the Environment (1989), and the Welsh Office (1989) have been followed by new legislation. This legislation was designed to allow tenants to judge whether or not they are receiving value for money from their local authority, to increase accountability to the consumer, and to establish dialogue between landlords and tenants. It has also provided an opportunity for local authorities to reconsider performance measurement more generally in the management of the housing service. It should be clear from the foregoing that the provision of performance indicators to tenants has proved a major challenge to local authorities. At the same time, some of the indicators required under the determination may be of little interest or relevance to tenants (though they may be of value to managers), whilst some of those of greatest interest are not easy to measure. In order to set the context for monitoring performance, attention must be given to the overall aims, objectives, and priorities of individual authorities. Performance measures must be related to the policies of the authority. The targets set and performance measures used must be meaningful if they are to motivate staff to provide a higher quality of service. This is particularly important during a period of increasing financial and legislative constraints. It is also important to consider the council sector within the broader context of the household tenure balance and the condition of the housing stock in the district. At the same time, the socio-economic and demographic characteristics of council tenants should be compared with the profile of households in other tenures.
The 1991 Census of Population provides a useful basis for this review of the council sector, but due consideration would need to be given to how such information could be regularly updated. Let us consider some of the specific areas of performance. Tenant satisfaction with the council repairs service is a critical measure of performance. This can be assessed by looking at how quickly repairs are carried out (the proportion of particular categories of responsive repairs completed within target times, relative to the number of outstanding repairs); how well they are carried out (level of complaints/tenant
satisfaction); the average cost of repairs; and the proportion of appointments kept by contractors (though it should be recognised that households themselves do not always keep appointments). Indicators developed around these areas would simultaneously provide information of concern to tenants, who would then be able to understand the overall service they receive (though personal experience may differ). At the same time, managers and central government could understand how effective the repairs service is. The availability and allocation of council housing (or social rented housing more generally) is another key concern. The number of households on waiting lists (including existing tenants seeking a transfer) is not a measure of performance in itself. Nor is the number of homeless applications/acceptances or the number of people in temporary accommodation. Rather, these are indicators of the task facing individual authorities: they are indicative of the pressure for rehousing and of the criteria the authorities apply for consideration. Other, more qualitative indicators may be more valuable: for example, how long it takes to assess an applicant's entitlement to housing, or what information people are given about their likely waiting time for permanent rehousing. Equally, it is important to consider the council's enabling performance in relation to its successful nominations to housing associations. This throughput information is vital for prospective tenants (and transfer applicants) if they are to understand how effective the allocations process is. On this basis, they would be able to judge the fairness or equity of the system. These indicators can also be linked to target groups of applicants that authorities need to monitor for output purposes, e.g. the proportions of tenants housed who are from ethnic minorities, who are disabled, etc. The issue of vacant public-sector housing remains a vexed question.
It is clearly linked to the availability and allocation of council tenancies. The traditional measure of performance in this area has been the number and proportion of dwellings standing empty at one moment in time. The recent consultation paper from the Audit Commission (1992) also suggests the percentage of ready-to-let dwellings void for less/more than six weeks. However, both provide only a 'snapshot' measure of performance. That picture may be useful for progress-chasing of current voids, but it does not provide sufficient information to assess performance. As has been argued elsewhere, the key issue for landlords is to understand the factors underlying vacancy: the stock turnover rate and vacancy duration (Institute of Housing, 1989). The latter ought to be the focus of performance measurement, using frequency distributions of vacancy intervals across a range of void types. Local authorities should then seek to minimise the time for which council homes are 'unnecessarily' empty (without compromising other housing management policies). This would, in turn, minimise rent loss and maximise rehousing opportunities. On the issue of rents, performance indicators have concentrated on rent levels, arrears as a proportion of rent collectable, and the number of households in serious arrears. It is appropriate that arrears should be looked at in relation to rent levels (and perhaps the rate at which rents have increased). But it may also be appropriate to consider arrears in relation to incomes and levels of unemployment. However, some of the measures currently used (or suggested) are indicators of personal indebtedness rather than of effective housing management. Better measures might include the proportion of tenants assessed for housing benefit (and the time taken to process applications), the
number of tenants interviewed/advised over arrears, or the number of housing officers trained in debt counselling. These throughput indicators suggest the need to collect information about processes that have an impact on output indicators, such as the proportion of tenants paying off arrears and the number of notices of intention to seek possession that have been issued. As we have demonstrated, these linkages are typically not made. Yet they are essential for interested parties to be able to understand the overall performance of housing organisations. Finally, in this section let us consider tenant consultation, participation, and communication. Here requirements in Wales have been more highly developed than in England under the tenants' determination. This is an area of housing management where both quantitative and qualitative measures supply appropriate information on numbers of tenants' and residents' associations. In this area too, the level of funding/support is useful, but so would be other information such as the availability of a tenants' handbook or information pack, the regularity of a tenants' newsletter, evidence of support for tenant training, the existence of specific posts with a remit to encourage participation, and arrangements for formal consultation. Above all, authorities need to show a clear commitment to participation. These indicators suggest the need for central government to seek the views of customers to define performance measures which are relevant. They also suggest the need to re-examine the role of performance measurement regimes which are focused on the objectives of housing organisations and not solely created to be tools for central government. Indicators which are qualitative and which seek to examine processes and outcomes need to play a central role in performance measurement regimes which aim to understand the effectiveness of housing organisations. 
For these objectives to be achieved, definitions of indicators have to be established collectively. These should allow the information provided to be comparable across a range of organisations and over time.
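The frequency-distribution approach to vacancy duration advocated above can be made concrete with a short sketch. The following Python fragment is illustrative only: the duration bands and the sample relet intervals are assumptions for demonstration, not figures from the Institute of Housing (1989) guidance.

```python
from collections import Counter

# Illustrative sketch: summarising completed void (vacancy) spells as
# a frequency distribution, rather than the single-date 'snapshot'
# count of empty dwellings criticised in the text.
def vacancy_distribution(durations_weeks, bins=(2, 6, 13, 26)):
    """Count completed void spells falling into duration bands.

    bins gives the upper bound (in weeks) of each band; spells longer
    than the last bound fall into an open-ended top band.
    """
    def band(duration):
        for upper in bins:
            if duration <= upper:
                return f"<= {upper} wks"
        return f"> {bins[-1]} wks"
    return Counter(band(d) for d in durations_weeks)

# Hypothetical relet intervals (weeks) for one authority's voids.
voids = [1, 3, 5, 8, 4, 30, 12, 2, 7, 15]
print(dict(vacancy_distribution(voids)))
```

A distribution of this kind shows a landlord where the 'unnecessarily' long vacancies sit (here, the single spell over 26 weeks), which a snapshot count of voids on one date cannot reveal.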
6 Conclusions

The issues that we have raised here demonstrate that defining the objectives of performance measurement, the definitions of performance indicators, and the ways in which authorities actually collect that information are all problematic. The same problems apply equally to the World Bank performance indicators regime, where data collection issues are paramount. If we turn to the tenants' determination in Wales, the objectives of the legislation have been compromised by the range of actors interested in the indicators: customers, service providers and managers, and central government. The information required in the determination is insufficient to satisfy the needs of all of these audiences, and it is unable to explain the effectiveness of public housing authorities in Wales. We have demonstrated that the actual data produced in the reports cannot be relied upon. We cited the definitions issued by the Welsh Office within the determination as the cause of this unreliability, although the ways in which authorities collate and provide the data add to the problem.
The Local Government Act 1992 requires each authority in England and Wales to provide a list of performance indicators. This demand further confuses the message given to customers and service managers. These indicators are substantially different from those required in the tenants' determination in Wales (though not significantly different from those in England). This also extends to variations in the definitions. For example, serious arrears are defined as "4 or more weeks' gross rent" in Wales (Welsh Office, 1991a, p. 5) but as "an amount equivalent to 13 times the weekly rent they [tenants] are personally due to pay" in the Citizen's Charter Performance Indicators (Audit Commission, 1992, p. 27). If these discrepancies remain in the performance indicator regime in Wales, it will be very difficult for customers to decide which indicators actually provide them with information on economy, efficiency, and effectiveness. Yet this problem may also point to a solution. Is there a need for the information to be collected and verified by an independent organisation, such as the Audit Commission? This practice exists in the housing-association sector in the UK, where the responsible regulatory bodies (The Housing Corporation in England, Housing for Wales, and Scottish Homes) undertake these functions through 'performance audit'. The practice also exists in the USA, where the Department of Housing and Urban Development uses the data to assess the overall performance of public housing authorities, attaching incentives to high performance (Department of Housing and Urban Development, 1992a). Looking to the future, we have identified a number of factors which need to be considered in developing a performance indicator regime. We believe that these meet the objectives of performance measurement in housing management.
The key is measuring performance against the specified objectives of a local housing authority's policies and practices, in ways which are meaningful to consumers, service managers, and central government. In light of our conclusions, there is a real need for policy-makers, practitioners, and housing researchers to stand back and review performance measurement in housing management. We need to reconsider what we are trying to measure, whom we are measuring it for, and how we are using these measures. Without such reflection, we may end up collecting performance indicators merely for their own sake.
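The consequences of the definitional discrepancy over 'serious arrears' noted in the conclusions can be illustrated numerically. The following Python sketch is illustrative only: the thresholds (four weeks' gross rent in Wales; thirteen times the weekly rent personally due in the Citizen's Charter indicators) come from the sources cited above, but the rent and arrears figures are hypothetical.

```python
# The two 'serious arrears' thresholds cited in the text.
def serious_arrears_wales(arrears, gross_weekly_rent):
    # Welsh Office (1991a): 4 or more weeks' gross rent.
    return arrears >= 4 * gross_weekly_rent

def serious_arrears_charter(arrears, weekly_rent_personally_due):
    # Audit Commission (1992): 13 times the weekly rent the tenant
    # is personally due to pay (i.e. net of housing benefit).
    return arrears >= 13 * weekly_rent_personally_due

# A hypothetical tenant: £40 gross weekly rent, £30 personally due
# after housing benefit, and £200 of accumulated arrears.
arrears = 200.0
print(serious_arrears_wales(arrears, 40.0),     # 200 >= 160
      serious_arrears_charter(arrears, 30.0))   # 200 >= 390
```

The same tenant is counted as seriously in arrears under the Welsh determination but not under the Citizen's Charter indicator, which is precisely the kind of incomparability that would confuse customers trying to judge performance across regimes.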
Notes

1 These performance measures formed the basis of the international housing conference held in November 1992 at the OTB Research Institute for Policy Sciences and Technology (Delft), from which keynote papers have since been published in an earlier special issue of this journal (Netherlands Journal of Housing and the Built Environment, 1993).
2 The data presented in Tables 2 to 6 have been drawn from the individual reports produced by authorities in either 1990/91 or 1991/92. These are available from the individual authorities or from the Centre for Housing Management and Development, Department of City and Regional Planning, University of Wales College of Cardiff, PO Box 906, Cardiff, CF1 3YN, Wales.
References
Accounts Commission (1992), Managing Housing Performance, Edinburgh: Accounts Commission.
Askew, P., A. Evans, S. Littler, R. Smith, R. Walker, and P. Williams (1993), A Standard Classification of Housing Associations: A Preliminary Analysis, London: The Housing Corporation.
Audit Commission (1986a), Managing the Crisis in Council Housing, London: HMSO.
Audit Commission (1986b), Improving Council Housing Maintenance, London: HMSO.
Audit Commission (1988), Performance Review in Local Government: Action Guide, London: HMSO.
Audit Commission (1990), Profiles 1990-91, Welsh District Councils, London: HMSO.
Audit Commission (1992), Citizen's Charter Performance Indicators, London: HMSO.
Bedwell, E. (1949), "Housing management costs", The Society of Housing Managers, Quarterly Bulletin (2), no. 13, p. 4.
Carter, N., R. Klein, and P. Day (1992), How Organisations Measure Success: The Use of Performance Indicators in Government, London: Routledge.
Clapham, D., and M. Satsangi (1992), "Performance assessment and accountability in British housing management", Policy and Politics (20), no. 1, pp. 63-74.
Department of the Environment (1989), The Nature and Effectiveness of Housing Management in England, London: HMSO.
Department of the Environment (1990), The Report to Tenants Etc Determination 1990, London: Department of the Environment.
Department of the Environment/Welsh Office (1992), Competing for Quality in Housing. Competition in the Provision of Housing Management: A Consultation Paper, London: Department of the Environment/Welsh Office.
Department of Housing and Urban Development (HUD) (1992a), Public Housing Management Assessment Program; Interim Rule and Notice of Submission of Proposed Information Collection to OMB, Washington, DC: Department of Housing and Urban Development.
Department of Housing and Urban Development (HUD) (1992b), Public Housing Management Assessment Program (PHMAP) Handbook, Washington, DC: Department of Housing and Urban Development.
Elton, L.B. (1987), "Warning signs", Times Higher Education Supplement, 11 September, p. 12.
H.M. Treasury (1991), Competing for Quality: Buying Better Services, London: HMSO.
Housing Management Advisory Panel for Wales (1991), Are You Being Served? Measuring Performance in Housing Management, Advice for Landlords and Tenants, Cardiff: Welsh Office.
Hutt, J., and R.M. Walker (1992), Reporting to Tenants. A Guide for Landlords and Tenants in the Social Rented Sector, Cardiff: Welsh Office.
Institute of Housing (1989), Void Targets, Coventry: Institute of Housing.
Maclennan, D., and K. Gibb (1993), "Housing indicators and research for policy from the perspective of applied economics", Netherlands Journal of Housing and the Built Environment (8), no. 1, pp. 49-60.
Mullins, D. (1991), Housing Services for Homeless People, Coventry: Institute of Housing.
National Federation of Housing Associations (1992), Improving Your Use of Performance Indicators, London: NFHA.
Netherlands Journal of Housing and the Built Environment (1993), "Special issue: Housing indicators: a tool for housing policy and housing research?", Netherlands Journal of Housing and the Built Environment (8), no. 1.
Niner, P. (1992), "Making the best of performance reports", Inside Housing (9), no. 31, pp. 8-9.
Pollitt, C. (1988), "Bringing consumers into performance measurement: concepts, consequences and constraints", Policy and Politics (16), no. 2, pp. 77-87.
Priemus, H. (1992), "Housing indicators: an instrument in international housing policy", Netherlands Journal of Housing and the Built Environment (7), no. 3, pp. 217-238.
Smith, R.S.G., and P.R. Williams (1991), "The changing dimensions of housing policy in Wales", Contemporary Wales (4), pp. 167-188.
Tenant Participation Advisory Service (Wales) (1991), Making a Start. A Report on Tenant Participation in Wales, Cardiff: Tenant Participation Advisory Service (Wales).
The Society of Housing Managers (1957), Housing Management, Report of Conference, London: The Society of Housing Managers.
Walker, R.M. (1992), "Reporting to Tenants", Welsh Housing Quarterly (7), pp. 15-17.
Welsh Office (1989), The Relative Effectiveness of Different Forms of Housing Management in Wales, Cardiff: Welsh Office.
Welsh Office (1991a), The Reports to Tenants etc. (Wales) Determination 1990, Cardiff: Welsh Office.
Welsh Office (1991b), Welsh Housing Statistics, No. 11, 1991, Cardiff: Welsh Office.
Welsh Office (1992), The Reports to Tenants etc. (Wales) Determination 1992, Cardiff: Welsh Office.
Welsh Office (1993), The Extension of Competitive Tendering for Housing Management in Wales: A Consultation Paper, Cardiff: Welsh Office.
World Bank (1992a), The Housing Indicators Program Extensive Survey Part I: Introduction, Washington, DC: World Bank.
World Bank (1992b), The Housing Indicators Program Extensive Survey Preliminary Results, Washington, DC: World Bank.