THE PROFESSION

research quality assessment and the metrication of the social sciences

miriam e. david
ESRC Teaching and Learning Research Programme, Institute of Education, University of London, 20 Bedford Way, London WC1H 0AL, UK
E-mail: [email protected]
doi:10.1057/palgrave.eps.2210176
Abstract

The British system of quality assessment of research in universities, known as the Research Assessment Exercise (RAE), has recently been the subject of major public policy review and debate. The system of research quality or performance assessment has been running for over twenty years, although many of its facets have changed as has the increasingly marketised political economy. Nevertheless, the UK RAE has been the prototype for the growth and development of such systems internationally, although how different countries have conceived of such forms of review has varied greatly. The question of the relationship between research quality in higher education and the public funding of research lies at the heart of what has become a contentious and acrimonious debate in the UK. While these issues can be seen as fundamentally about social and economic matters, in fact the social sciences as an organised group of subjects or interests have not played a key role in the public arena. This article outlines the contours of the recent debates in the UK, by comparison and contrast with the ways in which such systems of performance and quality assessment have been debated inter alia in Australia, New Zealand, France and the Netherlands. In essence, the issues have centred upon questions of measurement of performance known as metrication, and bibliometrics versus social judgments about research quality.
Keywords
bibliometrics; impact of research; peer review; performance indicators; privatisation; quality assurance
european political science: 7 2008 (52–63) © 2008 European Consortium for Political Research. 1680-4333/08 $30 www.palgrave-journals.com/eps
INTRODUCTION
The British system of quality assessment of research in universities, known as the Research Assessment Exercise (RAE), has recently been the subject of major public policy review and debate within academia, and within and across the social sciences, contributing to wider education policy reforms. A system of research quality or performance assessment has been running for over twenty years in the UK, based on peer review, although many of its facets have changed as has the increasingly marketised political economy. The question of the relationship between research quality in higher education and the public funding of research lies at the heart of what has become a contentious and acrimonious debate in the UK. Indeed, Roberts (2007) entitled a recent public policy presentation ‘Dual support: duel for support’ to characterise just how contested are the proposed reforms of research assessment. Historically, there was a clear separation between peer review of research quality and the distribution of funds to universities through the dual support system of what is known as Quality Research (QR) and teaching funds. This separation of peer review and public funding has been eroded through a panoply of changes, including the expansion and globalisation of forms of higher education, and the diverse ways in which the relations between and within research and teaching and learning have been transformed (David, 2007). These have also been linked with developments in systems of quality assessment and assurance within public services including higher education (Morley, 2003). Transformations in the forms of public and private provisions within and across education also impact upon the changing systems of quality assessment and assurance (Ball, 2007).
Nevertheless, the UK RAE has been the prototype for the growth and development of such systems internationally, although how different countries (such as France, Ireland or the Netherlands as European examples, or countries within Australasia) have conceived of such forms of peer review and quality or performance assessment has varied greatly. The changed global political economy in the twenty-first century has had implications for the transformations and web-based developments in how research quality is assessed and problematically linked to research funding, drawing especially on new ways of measuring performance, now often called ‘metrics’ or ‘metrication’ and also including ‘bibliometrics’ to assess citations of publications. Urry and Walby (2006: 2) propose a new system of bibliometrics, as we shall see below, ‘that maintains something of a peer review system that has been a key feature of the RAE since its inception in the UK in 1986’. Hantrais (2006: 1) has argued that ‘metrics are being used increasingly in France and in other EU member states to provide quantitative indicators of research performance in terms of inputs and of the volume and impact of outputs at national, institutional, departmental or research unit level. In many disciplines, particularly the natural sciences, but also increasingly in social and human sciences, including economics and linguistics, bibliometric measures are also being introduced to assess the published outputs of individuals and impacts on the research community’. Middleton (2008, in press) has also shown recently how changes in the New Zealand system of research funding for higher education were based upon ‘an outputs-driven’ model drawing on both Britain and Australia. She cites the argument of New Zealand’s Tertiary Education Commission (TEC), charged with responsibility for the design, implementation and oversight of their so-called Performance-Based Research Fund (PBRF): ‘A mixed model combining both peer review and performance indicators [is] preferable to the prevailing alternatives, namely a model based solely on peer review, like the British Research Assessment Exercise (RAE), or a model based solely on performance indicators, such as the Australian Research Quantum’ (Boston, 2004: 1, cited in Middleton, 2008: 1, in press). She goes on to argue that the changes in research assessment have been linked to developments in personalised evidence portfolios (EPs) for all academic teachers, and that the requirement for all academics to ‘be researchers’ could impact upon other forms of professional development, including undermining independent professional practice. The new system developed in New Zealand in the early twenty-first century has thus made a much more explicit linkage between research quality assessment and consequent research funding than has been the case, until recently, in the various transformations of the initial British RAE, but the development of individualisation of performance through personal development portfolios (PDPs) is similar in the UK (Clegg and David, 2006). The Australian system of building QR, through performance indicators, has also been an innovative development in the twenty-first century, ‘designed to produce mechanisms that will allow specific ongoing assessments of the quality of research being carried out within and across fields in Australia’s universities, and it is intended to have some funding consequences for universities and departments within them’ (Yates, 2006: 1). While Roberts (2007) argues that the UK should adopt elements of the Australian developing approach to research evaluation, Yates (2006) has provided a trenchant
critique, from the perspective of an educational researcher within the social sciences, of the ways in which the Department of Education, Science and Training in Australia initiated a new process to develop a Research Quality Framework (RQF) for Australia. She argues that ‘initially, at least, the RQF discussion papers and associated submissions, seemed to make some advance on unexamined assumptions, by beginning to note and think through some of the different ways research in the humanities and social science might have ‘‘impact’’. Potentially here there seemed to be a broadening of the ways research could make a case about its significance, because a criterion had been made explicit and examined, not just enacted. But whether these cases will be taken up in practice returns us to the issues of whether education (and the social sciences) have either status or power in present developments’ (Yates, 2006: 22). It is this question of the balance of power in how the social sciences, as a set of subjects or disciplines, within universities, have been able to influence the British education reforms about a new framework for research quality based upon revisions to the RAE, and especially to new forms of peer review through for instance ‘bibliometrics’, to which I now turn.
A NEW FRAMEWORK FOR THE ASSESSMENT AND FUNDING OF RESEARCH

In the spring of 2007, the British Government made the decision, drawing on the processes of public consultation especially within and across the academic communities and learned societies throughout the previous year, to ‘develop a new framework for the assessment and funding of research, to come into effect after the 2008 Research Assessment Exercise’ (HEFCE, 2007). The Government also committed itself to continuing
the present form of the RAE that was already well underway, and based upon individual academic so-called portfolios, for the RAE 2008, despite having previously indicated a possible abandonment. The chief executive of the Higher Education Funding Council for England (HEFCE) set out the aims of the new framework, based on the Government’s reading of the evidence presented and its analysis of the replies to the questions that had arisen from the public consultation. Thus the aims now not only included issues around producing robust indicators of research excellence for all disciplines ‘to benchmark quality against international standards and to drive the council’s funding for research’ but also ‘to reduce significantly the administrative burdens on institutions in comparison to the RAE, to avoid creating any undesirable behavioural incentives and to promote equality and diversity’ (my emphasis). However, while stating that there would be a ‘single overarching framework for funding’, there will be ‘two distinct processes for assessing quality – one for
science-based subjects (science, technology, engineering and medicine (STEM)) and another for all other subjects’. The statement continued: ‘Research quality in the arts, humanities, social sciences, and mathematics and statistics will be assessed through a light-touch process, based upon peer review and informed by statistical indicators in common with the science-based disciplines.’ These latter are to include ‘bibliometric indicators of research quality and impact, external research income and postgraduate student activity’ and HEFCE is ‘commissioning expert advice on how bibliometric data can be used effectively to produce quality indicators’ (ibid: 2).
THE SOCIAL SCIENTIFIC EVIDENCE PRESENTED IN THE PUBLIC CONSULTATION

It could be argued that the proposals for the new framework recognise the clear and comprehensive arguments put forward in the public consultation, held the previous summer (2006), by the
social science community about the question of metrication and performance reviews, given that they constitute only a minor element in the light touch approach. This included both the evidence from the over-arching body for over thirty learned societies in the social sciences, namely the Academy of Social Sciences (covering and including inter alia education (BERA), geography (RGS), the political sciences (PSA), psychology (BPS), regional studies (RSA), social policy (SPA) and sociology (BSA)), and the Economic and Social Research Council (ESRC), as well as various bodies for the arts and humanities, including the British Academy. The Academy of Social Sciences (2006: 2) argued that ‘for many disciplines, a purely metrics based approach would be unacceptable, while a carefully selected combination of quantitative and qualitative methods to assess research quality would gain a good measure of support. The main difficulty lies, however, in identifying those quantitative measures which are relevant to the quality (rather than quantity) of research, are selected to avoid bias towards particular styles of research, and reflect the diversity of disciplines and research environments encompassed by the social sciences. This problem affects the social sciences more than either STEM subjects or arts and humanities, because much work in social sciences crosses disciplinary boundaries… There is a real concern that failure to address this issue within a metrics based assessment system could reduce the contribution of social science research to many important fields such as the medical and biosciences’. Similarly, the response from the ESRC, included as an appendix to the general reply from the Research Councils UK (RCUK), argued that the overall views of the social science community showed that ‘there was a considerable degree of consensus that i) some element of peer review should be retained in any new
system; ii) the metrics to be used should focus on outputs more than inputs so that, for example, research income should not be used as a single and reliable measure of quality, and iii) any new system should not be introduced too quickly’ (RCUK, 2006: Appendix A). However, it should be noted that the scientific community, covering the STEM subjects, through the submissions of the various representative learned societies such as the Royal Society, the Royal Academy of Engineering and the RCUK, did not fully support the introduction of a wholly metrics based approach. For example, the RCUK argued that ‘the bureaucracy throughout the system has now become counter productive to the furtherance of many of the important challenges facing the research base… The funding councils should reassure stakeholders that the new systems will not prove a barrier to interdisciplinary research across the STEM/non-STEM interface. While recognizing that valuable efficiency savings could accrue from moving to a wholly metrics-based system, and welcoming a lighter-touch approach… the models proposed to date would [not] meet the needs set out…’ (2006: 1–2). Sastry and Bekhradnia (2006), in a paper produced for the Higher Education Policy Institute (Hepi) entitled Using metrics to allocate research funds, also argued strongly against the proposed changes, such as ‘scrapping the RAE’ and using Research Council grants instead of QR funding to individual universities, seeing these as more expensive and as representing a ‘huge leap into the unknown’. They also suggested that individual universities would have an incentive to target funders outside the Research Council system, which in turn would shift the orientation of university research away from public interest activities and towards research that supports private interests. Indeed, they were in favour of ‘alternatives to funding-based
metrics’ such as the use of citations in a ‘basket of viable metrics’ (sic). Perhaps most importantly, Sastry and Bekhradnia argued that the costs of implementing separate systems of assessment for the arts and humanities and social sciences would be great, since the non-STEM subjects represent almost half of the submissions to the RAE. Nevertheless, this advice was not heeded, despite the incontrovertible statement that: ‘Moreover, it would be a peculiar judgment that the RAE’s processes had unacceptable consequences for STEM subjects which were nevertheless acceptable for Arts, Humanities and Social Sciences.’ They also noted that there would be extra costs of operating expert panels to validate results produced by metrics, and that further costs would ensue in validating the uses of other external funders. All of this would entail the Government having to accept a greater regulatory role in the process than had been anticipated. Finally, they pointed to two other anomalies in the proposed new system, namely the question of the nature of the competition, as between institutions, individuals or subjects, and the question of regional disparities insofar as there were differences between English, Scottish, Welsh and Irish funding allocations and mechanisms. These would have clear implications for educational policy in devolved administrations and for the equity of funding as between not only STEM and non-STEM subjects but also different universities in the diverse regions of the UK.
OTHER SOCIAL SCIENTIFIC ARGUMENTS ABOUT METRICATION IN THE PROPOSED NEW FRAMEWORK

There have been many arguments presented by social scientists among others against specific elements of the system of
metrication, and especially against the uses of input rather than output measures, such as either the system of using Research Council funding to replace Funding Council QR funds (as elaborated by Sastry and Bekhradnia, 2006) or even simply measuring research grant income as a component of research quality or performance (as noted above by the ESRC). On the other hand, however, there has been some sympathy expressed by social scientists for developing a wholly new system of bibliometrics to address the now clearly web-based system of global higher education and the universal uses of information technology and knowledge-based systems through the expansion of the internet. For example, the Academy of Social Sciences initiated an internet-based closed discussion list on options for post-2008 social sciences research assessment (Gilbert, 2006). While Sastry and Bekhradnia (2006) had argued that there was no citation database capable of producing data of sufficient quality to inform funding, several contributors to the Academy of Social Sciences’ discussion list presented convincing arguments to the contrary. For example, Urry and
Walby (2006: 1), who are professors of sociology, ‘suggest a way to retain the principle of the peer review system, adapted for the era of metrics, using internet technology’. They go on to argue that ‘this does not appear to discriminate against non-big science disciplines or institutions which are disproportionately full of non-big science disciplines. This system is one which recognizes and utilizes the web as a key contemporary medium through which communication about academic work takes place’. Like most social science critics, they argue against using research income measures (such as Research Council grants), since these misrepresent and distort academic work. Instead, they suggested an ingenious new system of assessing research outputs by focusing on institutions and disciplinary groups of individuals within such institutions. Their proposal is, however, based upon a private internet company’s approach to gathering information about citations within journal articles, which unfortunately retains the dilemmas of the inexorable spread of the worldwide web. Nevertheless, apart from having to accept the inevitable nature of what has been called ‘academic capitalism’ (Slaughter and Rhoades, 2004), their suggestion for adopting Google Scholar as the basis for developing a bibliometric database system seems imaginative. Not only do Urry and Walby propose locating this approach to citations in a live and constantly updated database system, within the context of what they refer to as the ‘democratic nature of the web’, but they also suggest that the evaluations should occur only once every decade (viz. 2009 and 2019), that this should be done within institutions, and that citations should be counted for groupings of staff by subject or area to arrive at a grade point average (GPA). They add that QR income would then be calculated as some multiple of the number of staff listed and the GPA, which would be normalised for that type of
discipline. In this way, there would be a fair distribution of research money, academic status and peer review. They conclude that ‘this system thus goes with the idea that it should be one’s ‘‘colleagues’’ (and students) rather than presumed prestige or money that ought to provide the basis of research funding and status’. Using Google Scholar, they assert, would be at no cost to the institutions or the system as a whole, but here they seem blind to the need for constant technological and software developments at a substantial (albeit perhaps hidden and implicit rather than explicit) cost to institutions, if not individual academics and researchers. Nevertheless, the principle of their proposal seems eminently reasonable, and has garnered support among other social scientists, especially through the web-based discussion initiated by Gilbert (2006). In that discussion, for instance, Gardner, a professor of educational assessment at Queen’s University Belfast, has expressed interest in pursuing ‘new sophisticated systems, and authoritative and balanced analyses of citation index usage’. Colman, a professor of psychology at Leicester, has also demolished arguments about using inputs such as research income and is supportive of arguments for developing appropriate databases for citations by using data from academics themselves, as well as intelligent interrogation of the database. While he is sympathetic to Urry and Walby’s suggestions in principle, he nonetheless argues that their preference for using Google Scholar is based upon a
misapprehension that it is better than other science/social science citation systems in that it counts citations to books, chapters, reports and grey publications as well as journal articles, and does not presume to second-guess which are the leading journals. He argues that the ISI citation reference search similarly does not presume what are the leading journals, but that there is a slightly different problem: ISI only counts citations made in journals in the ISI database, although those citations can be to books, chapters, reports, or anything else that anyone chooses to cite in journal articles. Thus, a problem common to both databases is that they draw their citation counts only from journal articles. However, given the accelerating growth of internet databases such as Google Scholar, it may become possible in the longer term to count citations made in books and other grey material. Other commentators are also sympathetic to the development of online databases in principle, but at the same time point out difficulties, such as the predominance of the English language, in an article about ‘caveats for the use of citation indicators in research and journal evaluations’ (Leydesdorff, 2006, cited in Gilbert, 2006). Yet others point to the complexities and diversities in defining research quality within specific social science subject areas, as a prelude to developing such systems. Here the Social Policy Association has produced a carefully documented report on defining quality in social policy research (Becker et al, 2006: 2), based upon a large web-based survey of 251 social science academics and six discussion groups plus telephone interviews: ‘It shows that within the social policy community it is possible to construct a ranking of quality criteria with some indicators receiving significantly more support than others.’ Hantrais (2006: 1) refers, in her review of the French system by comparison with the British, to a critical problem for
bibliometrics in the social sciences in so far as ‘they were not designed to assess book-length publications, particularly when they are single authored, the product of many years of individual research within a narrow area of specialism… even if they are recognized as seminal works, they may never attract a wide readership’. She concludes, ‘nor should bibliometrics be used without due regard for the perverse effects they produce on individual and group publishing strategies and the place of British social and human science research within the international community’. Similarly, Spaanen (2007), of the Royal Netherlands Academy of Arts & Sciences, argued in a talk given at an Academy of Social Sciences day conference that ‘there is mounting pressure from Social Sciences, Humanities and many other fields to develop another system [of research quality evaluation], different criteria and indicators that are more in line with their own research traditions’. He thus proposes a new system, as introduced in the Netherlands and known as the Standard Evaluation Protocol (SEP), which includes a self-evaluation report by each institute focusing upon quality (outputs and international position), relevance (to policy, industry and society), research management and accountability, plus external site visits every six years. The productivity or performance measures cover outputs in journals and citations. In addition, there is a ‘meta evaluation committee which covers different subjects or disciplines and provides reports in the more problematic context of the evaluation of societal quality, and which still focuses upon peer review of scientific quality to the exclusion of other factors’. Roberts (2007) is also keen to develop a UK system of research evaluation building upon the experiences of the Netherlands, Australia and Hong Kong, but emphasising two separate systems, one with international benchmark reviews
for the top groups and ‘an alternative research assessment for less research intensive universities’.
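The funding arithmetic implied by Urry and Walby's proposal — QR income as some multiple of the number of staff submitted and a citation-derived grade point average (GPA), normalised by discipline — can be sketched in a few lines. This is a hypothetical illustration, not their actual method: the grade bands, the discipline weights and the per-point rate are all invented for the example.

```python
# Hypothetical sketch of a GPA-based QR allocation (not Urry and Walby's
# actual algorithm). Grade bands, weights and rates are invented.

DISCIPLINE_NORM = {"sociology": 1.0, "physics": 2.4}  # assumed cost weights

def group_gpa(citations_per_output, grade_bands=(0, 5, 15, 40)):
    """Map each output's citation count to a grade (1-4) and average them."""
    def grade(c):
        g = 1
        for band in grade_bands[1:]:
            if c >= band:
                g += 1
        return g
    grades = [grade(c) for c in citations_per_output]
    return sum(grades) / len(grades)

def qr_income(staff_count, gpa, discipline, rate_per_point=10_000):
    """QR = staff x GPA x per-point rate, weighted by a discipline norm."""
    return staff_count * gpa * rate_per_point * DISCIPLINE_NORM[discipline]

# A department of 20 staff submitting five outputs with these citation counts:
gpa = group_gpa([2, 12, 48, 7, 0])                          # -> 2.0
income = qr_income(staff_count=20, gpa=gpa, discipline="sociology")
```

The normalisation step is what carries Urry and Walby's claim that the scheme need not discriminate against non-big-science disciplines: citation volumes are compared only within a discipline's own weighting, not across fields.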
THE PRIVATISATION OF RESEARCH ASSESSMENT AND PERFORMANCE INDICATORS

The global transformation in the political economy of higher education has led to an increase in private involvement in the processes of public sector education, as Ball (2007: 191) has also noted in relation to school systems, concluding that ‘the questions raised… concern the kind of future we want for education and what role privatisation and the private sector might have in that future, and crucially how justice and ethical behaviour can be balanced against necessary pragmatism within a modern and democratic system… we need to move beyond the tyrannies of improvement, efficiency and standards, to recover a language of and for education articulated in terms of ethics, moral obligations and values’. Slaughter and Rhoades (2004) have also developed a sophisticated analysis of the ways in which markets and the state, in the USA, have expanded inexorably, such that ‘the rise of the ‘‘new’’ global knowledge or information society calls for a fresh account of the relations between higher education institutions and society. Our analysis of these relations has led us to develop a theory of academic capitalism which explains the process of college and university integration into the new economy’ (2004: 1). The development of the new bibliometric systems has indeed, as already noted, usually been through private companies which are part of the development of the new relations between higher education, where they have originated, and the wider socio-economic system. However, these new organisations or institutions of knowledge transfer that
are indeed part of the new social relations, and are built upon new forms of social science, are not necessarily particularly sensitive to the social sciences and/or the nuances of forms of citation. Bibliometrics have mainly been developed through the privatisation of knowledge transfer and specifically through knowledge-based companies. These companies have been developed through the outsourcing of the work of public policy units and higher education institutions. For instance, a key player in the British higher education economy is now Evidence Ltd, ‘a knowledge-based company in Leeds that specializes in the analysis and interpretation of research performance’ (Adams, 2006: 17). This company’s clients include several Government departments, such as the former Office of Science and Innovation (OSI) in the newly organised Department of Innovation, Universities and Skills, HEFCE, the ESRC and other Research Councils, and it has been engaged to develop and improve the bibliometrics that HEFCE is currently considering using. The founder-director, Jonathan Adams, has also undertaken international work funded by OSI to profile citation impact, which is published in ‘Scientometrics’ (Adams et al, 2007). The origin of bibliometrics, according to Adams (2007), lies with Eugene Garfield, who started an international company (the Institute for Scientific Information, ISI) based in the USA, which was then bought out by Thomson Scientific, dedicated to
developing bibliometrics that would respond to the need to create measures, such as ‘highly cited’, to demonstrate ‘high impact’ (ibid: 18). Adams further argues that this approach draws on ‘the sociology of research’ by showing how research is constructed. However, according to a report in the THES (20 September 2007: 2), with the intriguing title ‘Metrics marred by doubt at first cite’, the metrication system which uses Thomson Scientific forms of citation is not working, and is producing spurious correlations from inappropriate journals in the STEM subjects.
CONCLUSION

Whether these new forms of research quality assessment through metrication, and specifically online bibliometrics, can promote equality, as the Government claims is one of its aims, remains open to question; it may be that the intention is rather to respond to diversity (of subjects, of discipline areas, of institutions or of individuals) than to promote equality of individuals or institutions. Moreover, the commitment to the use of private companies to develop appropriate measures of research quality for public use seems to fly in the face of equality questions, although these companies may well have been developed through the relatively new forms of business using private equity. Nevertheless, it is clear that new forms of peer review, through some method of online ‘metrication’ of citations, or bibliometrics, are here to stay. Ensuring that they achieve a fair and equitable overview of the books, journals and other outputs in the social sciences, and that they cover not individuals but groups such as the social sciences, or sub-groups within them, is especially important in an increasingly global knowledge economy. The most important issue to which
most social science commentators draw attention is the inequity of evaluation procedures that rely only on inputs, such as research council grants to individuals, or indeed the argument to change from funding council monies to the use of research council funding measures only (described by some as a ‘third way’ solution). In an increasingly competitive market economy this cannot be seen as a fairer measure, as it is clear that higher education is now highly stratified, and some institutions would never be able to join the competition for research resources. The Sutton Trust (2007) recently reported on opportunities for university participation, but its research methodology was limited to looking at elite universities, defined as the top thirteen from the league tables. The league table mentality helps to foster a notion of increasingly rigid and stratified systems of higher education, in terms of research versus teaching. Evidence from the Teaching and Learning Research Programme (David, 2007) shows that opportunities to participate in universities and higher education have widened over the last twenty years or so but that, at the same time, English higher education is a stratified, differentiated and distributed system. As well as traditional universities, it includes the Open University, higher education colleges, and higher education courses offered in further education establishments. Not all of these participate in research and have the same research
european political science: 7 2008
61
culture as most traditional universities, nor do they have the same opportunities to apply for research council grants. Emphasising this approach would increase rather than erode current inequities in the UK system of higher education and build upon an inequitable system of diversities of institutions, individuals and
subject areas, particularly for the social sciences and humanities as non-STEM subjects. A commitment to the evaluation of research quality must begin on a society-wide basis, valuing equally the subjects across universities and higher education, whatever their diversity.
References
Academy of Social Sciences (2006) 'Reform of higher education research assessment and funding: response on behalf of the Academy of Social Sciences to the DfES document' (September), http://www.acss.org.uk.
Adams, J. (2006) 'Consistency confirms strength of UK research', Research Fortnight, 11 October.
Adams, J. (2007) 'View from the top. Nobody expects the Spanish inquisition', Research Fortnight, 25 April.
Adams, J., Gurney, K. and Marshall, S. (2007) 'Profiling citation impact: a new methodology', Scientometrics 27(2): 325–344 (jointly published by Akadémiai Kiadó, Budapest and Springer, Dordrecht).
Ball, S.J. (2007) Education plc: Understanding Private Sector Participation in Public Sector Education, London: Routledge.
Becker, S., Bryman, A. and Sempik, J. (2006) Defining 'Quality' in Social Policy Research: Views, Perceptions and a Framework for Discussion, Lavenham, Suffolk: Social Policy Association and Joint Universities Council Social Policy Committee.
Clegg, S. and David, M.E. (2006) 'Passion, pedagogies and the project of the personal in higher education', Twenty-first Century Society 1(2): 149–167.
David, M.E. (2007) 'Equity and diversity: towards a sociology of higher education for the twenty-first century?', British Journal of Sociology of Education 28(5): 675–690.
Gilbert, N. (2006) Discussion list on options for post-2008 social sciences research assessment, [email protected], http://www.esrcsocietytoday.ac.uk.
Hantrais, L. (2006) 'Bibliometrics in the social sciences and humanities: key points from a study of research assessment in France for the Comité national d'évaluation de la recherche' (October), unpublished paper and personal communication.
HEFCE (2007) 'Future framework for research assessment and funding', Circular letter number 06/2007, Bristol: HEFCE, 6 March.
Leydesdorff, L. (2006) The Knowledge-Based Economy: Modeled, Measured, Simulated, USA: Universal Publishers, www.universal-publishers.com and www.leydesdorff.net.
Middleton, S. (2008, in press) 'Research assessment as a pedagogical device: Bernstein, professional identity and education in New Zealand', British Journal of Sociology of Education 29(2), March.
Morley, L. (2003) Quality and Power in Higher Education, Buckingham: Open University Press.
Research Councils UK (RCUK) (2006) Submission from Research Councils UK to the DfES consultation on reform of higher education research assessment and funding.
Roberts, Sir G. (2007) 'Dual support: duel for support', PowerPoint presentation to the Society for Research into Higher Education, March.
Sastry, T. and Bekhradnia, B. (2006) Using Metrics to Allocate Research Funds: A Short Evaluation of Alternatives to the Research Assessment Exercise, Oxford: Higher Education Policy Institute (HEPI), April, www.hepi.ac.uk.
Slaughter, S. and Rhoades, G. (2004) Academic Capitalism and the New Economy: Markets, State, and Higher Education, Baltimore, MD: Johns Hopkins University Press.
Spaanen, J. (2007) 'Judging research on its merit? The Standard Evaluation Protocol and the evaluation of societal quality', PowerPoint presentation from the Royal Netherlands Academy of Arts and Sciences to the Royal Irish Academy, Dublin, Ireland, 3 May.
The Sutton Trust (2007) University Admissions by Individual Schools, London: The Sutton Trust (September).
Urry, J. and Walby, S. (2006) 'Better than money: new peer review metrics for the next RAE', unpublished paper and personal communication.
Yates, L. (2006) 'Is impact a measure of quality? Producing quality research and producing quality indicators of research in Australia', in J. Blackmore, J. Wright and V. Harwood (eds.) Australian Educational Review No. 6: Special Issue on 'Counterpoints on the Quality and Impact of Educational Research', pp. 119–132.
About the Author
Miriam E. David, AcSS, FRSA, is Professor of Education and Associate Director (Higher Education) of the ESRC's Teaching and Learning Research Programme at the Institute of Education, University of London. Her research is on social diversity and inequalities in education. She chairs the Council of the Academy of Social Sciences (AcSS) and is a member of the ESRC's Research Grants Board and of the Council of the Society for Research into Higher Education (SRHE). She is co-editor (with Peter Glasner) of Twenty-first Century Society, the journal of the Academy of Social Sciences, and an executive editor of the British Journal of Sociology of Education. Her publications include (with Diane Reay and Stephen Ball) Degrees of Choice: Social Class, Race and Gender in Higher Education (2005, Stoke-on-Trent: Trentham Books).