Mandatory Ranking of R&D – a growing debate

What if the unthinkable happened and the U.S. government imposed a mandatory and public ranking of research universities and individual faculty according to their “research excellence”? Just to be clear, I’m not advocating that such a ranking be done. However, no matter how strongly one might disagree with the idea of mandatory public rankings based on data that is currently largely private, I’ll bet that in the next few years we’ll start to see policy makers cautiously exploring this idea.

Next year, those of us engaged in the university R&D ecosystem will get a rare opportunity to watch this kind of speculation play out in practice. The U.K. university system is about to launch a major new government-mandated assessment process, the “Research Excellence Framework,” or REF. When I was in the U.K. recently, the upcoming REF dominated discussion at dinner tables and coffee breaks.

In a radical new twist on university assessment, nearly two-thirds of a university’s REF score will be based on the research output of individual faculty. University faculty deemed by their departments to be the most likely to rank highly will submit their best four papers to a government-appointed panel. The panel will assess and then publicly rank each faculty member according to a star system, one star being the lowest and four stars the highest.

First, four-star Army generals… now four-star university professors?

I’m guessing that the British government didn’t intend an academic version of a military-style hierarchy, yet a star-based ranking system is reminiscent of the U.S. Army’s tradition of anointing four-star generals. Professors in the U.K. who fare well on their assessments will be assigned a four-star ranking. Of course money has to enter this picture at some point: the more four-star faculty a university employs, the more government funding the university will receive. Four-star academics will be worth their weight in … pounds. Literally.

Here’s the catch: REF faculty assessments are not quantitative. Instead, a professor’s merit rests on evaluations by panels of experts. To evaluate university submissions, the government agency managing the REF process will oversee panels of nominated, government-appointed judges.

A university’s total REF score will be based on reported activity in three major arenas: 1) individual faculty research output, 2) a university’s total social and economic impact, and 3) a university’s environment and facilities. In more detail, here are the three categories each university in the U.K. will be assessed on, with a rough sketch after the list of how the weights combine:

1) faculty research output: 65% of total REF score. Output equals the traditional scholarly stuff of publications, book chapters, conference activity, etc. This is the portion of the REF where individual faculty will receive a star ranking from the REF oversight committee.

2) university impact: 20% of total score. Impact is a university-level measure. Essentially, impact covers the non-scholarly activities that benefit the world off-campus, and it is gauged by submitted case studies. (This is how university technology commercialization offices have been pulled into the REF process.) Impact measures can include launching university startups, influencing government policy, or developing industry products and services.

3) university “environment”: 15% of total REF score. This is mostly traditional educational data, e.g. the number of doctoral degrees a university grants, what percentage of those degrees went to women, how much research funding a university earns, what sort of facilities it has, and so on.
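Just to make the weighting concrete: the REF itself reports graded quality profiles rather than a single number, but if you imagine each of the three components scored on a common 0-100 scale, the arithmetic works out like the rough Python sketch below. The sub-scores and the single-number framing are my own simplification, not part of the official REF.

```python
# Illustrative arithmetic only: the REF reports graded quality profiles,
# not a single number, so the 0-100 sub-scores here are assumptions.
REF_WEIGHTS = {"output": 0.65, "impact": 0.20, "environment": 0.15}

def total_ref_score(output: float, impact: float, environment: float) -> float:
    """Combine assumed 0-100 sub-scores using the REF's published weights."""
    scores = {"output": output, "impact": impact, "environment": environment}
    return sum(REF_WEIGHTS[name] * value for name, value in scores.items())

# A university that is strong on research output but weaker on impact:
print(total_ref_score(output=80, impact=55, environment=70))
# 0.65*80 + 0.20*55 + 0.15*70 = 73.5
```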

I applaud the underlying goal of the British REF: to improve the quality of research and teaching at U.K. universities. Yet a key shortcoming of the REF assessment process is its subjectivity: two of the three portions of the assessment (faculty research output and university impact) are qualitative.

A subjective judgment process may undermine exactly what the REF was intended to accomplish. Peer-reviewed measures of “excellence” are ripe to become heavily politicized, turning the assessment into an empty exercise in who’s who within a particular academic fiefdom. If that happens, the U.K.’s investment in the REF will crumble into yet another non-productive counting activity that reinforces the entrenchment of already dominant fiefdoms: not a strong strategy for improving the relevance, innovation capacity and impact of the U.K.’s university research infrastructure.

A REF in the U.S.: data, good data mining tools and a user-friendly interface

So let’s imagine that the U.S. government, motivated by a harsh economic climate and public concern over bloated, irrelevant and costly universities, demands that universities and individual faculty prove that federal research funding is a worthwhile investment. If the government were to implement a nationwide assessment, the heart of the process should be simple data transparency. Both university-wide and individual faculty rankings should be based on quantitative data from external sources, not on the subjective judgments of government-appointed panels.

The university system in the U.S. is vast, decentralized and diverse, and that’s part of its strength. That’s why a top-down process to evaluate an arena as creative and fluid as research and technology development won’t work. In fact, creating and then managing all the moving parts of a centrally orchestrated REF assessment will cost the U.K. government a lot of money that could be better spent elsewhere, and university administrations will pay as well, in staff time.

Everyone likes to talk about transparency. But what, exactly, do I mean when I say transparency should be the heart of any faculty assessment process?

To have transparency, first you need data. U.S. universities already have the data they would need for an American take on the REF. However, just dumping that data into yet another impenetrable government-funded databank won’t help. Instead, the data should be fed into a smart, quantitative, publicly accessible tool.


A good example of what a nationwide university assessment tool could look like is Microsoft Academic Search. MS Academic Search lacks the content coverage of Google Scholar, but its user interface, pattern mapping and comparison capabilities are light-years ahead. Take a look at MS Academic Search to see the potential insight a good analytical tool could bring to the world of university research and innovation strategy. For example, in Academic Search, you can:

1. Compare research productivity of individual faculty at universities around the world. See how individual faculty fare when ranked according to their publications, citations, and h-indexes.

2. See the intellectual links between researchers who are citing, co-authoring and collaborating with one another.

3. At the university level, see how entire universities compare, and what their organizational-level h-indexes are.

Academic Search is getting it right. Imagine its power if even more infographic and data mining capabilities were added to it.
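Since h-indexes keep coming up, here’s a minimal sketch of the calculation a tool like this runs behind the scenes. A researcher’s h-index is the largest h such that h of her papers have at least h citations each; the citation counts below are made up.

```python
def h_index(citation_counts: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(ranked, start=1):
        if citations >= rank:
            h = rank
        else:
            break
    return h

# Made-up citation counts for one researcher's papers.
print(h_index([25, 8, 5, 3, 3, 1]))
# -> 3: only three papers have 4 or more citations, so h stops at 3
```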

Just for fun, I’m going to propose a set of measures that universities and individual faculty should be assessed on. All of these data points are currently readily available. They just haven’t been bundled up and placed into the right database, one that feeds a user-friendly web portal.

University-level metrics

In a nutshell, university administrations should be evaluated on their ability as stewards: how much research and applied innovation they manage to extract from federal research dollars. Data that U.S. universities submit should be normalized by annual research funding received, to correct for differences in resources (a small sketch of this normalization follows the metrics below).

Metric 1. University-wide scholarly impact: the university-wide h-index, i.e. the average h-index of all full-time university researchers.

Metric 2. Total, combined faculty research output: the total number of scholarly papers per institution, as logged in ISI, normalized by the university’s annual research funding received.

Metric 3. University ability to turn research into public benefit: the number of university inventions in external use per federal research dollar. “External use” means under some form of external contractual arrangement, paid or not, and should include open source and Creative Commons-style licenses.

Metric 4. A university’s industry impact: the amount of industry funding received for on-campus collaborative research, per federal research dollar.

Metric 5. University technology commercialization impact, as measured by the following technology transfer health indexes: 1) commercial health index: the distribution of patent licensing revenue across the entire patent portfolio; 2) startup jobs health index: FTEs distributed across all startups founded on a licensed university patent; and 3) speed-to-licensing index: the distribution of weeks between invention disclosure and executed license.
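Here’s the small sketch of the normalization idea behind Metrics 2 through 4 that I promised above. The field names and numbers are hypothetical; in practice the counts would come from sources like ISI and the funding figures from universities’ annual research reports.

```python
# Hypothetical field names and made-up numbers; real values would come from
# citation indexes (paper counts) and annual research funding reports.
university = {
    "name": "Example State University",
    "papers": 4200,                     # scholarly papers logged in a citation index
    "inventions_in_external_use": 38,   # disclosed inventions under an external arrangement
    "industry_funding": 12_000_000,     # industry dollars for on-campus collaborative research
    "federal_research_funding": 150_000_000,
}

def per_million_federal_dollars(value: float, federal_funding: float) -> float:
    """Normalize a raw count or dollar amount by federal research funding, per $1M."""
    return value / (federal_funding / 1_000_000)

for metric in ("papers", "inventions_in_external_use", "industry_funding"):
    normalized = per_million_federal_dollars(
        university[metric], university["federal_research_funding"])
    print(f"{metric}: {normalized:,.2f} per $1M of federal research funding")
```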

Individual faculty-level metrics

Public ranking of university faculty will make or break careers. Therefore, the process needs to be as free of politics as possible. That’s why data is better: it speaks more fairly. True, even external, quantitative performance data is created in a political ecosystem of journal editors and grant-reviewing committees. But a system of committees nominated specifically to assign star rankings would be even worse.

University faculty are evaluated all the time by their departments. Nearly every working university professor knows her h-index, number of times cited, number of publications, and the journal impact factor of her accepted articles. Here’s the data that should be collected for individual faculty assessments, followed by a quick sketch of what a combined record might look like:

Faculty metric 1. Scholarly productivity and impact: the individual h-index of all published scholarly work. (This data exists on Google Scholar and on MS Academic Search already.)

Faculty metric 2. Innovation impact: how many of a faculty member’s inventions or books are in external use (commercial or not). This metric would count the number of formally disclosed inventions under some form of external contractual arrangement, paid or not, along with published popular books and software. It should include inventions that were patented, plus work released under open source and Creative Commons-style licenses.

Faculty metric 3. A faculty member’s ability to add value to industry: how much industry funding the faculty member has received in the past year for collaborative research.
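Pulling these three metrics together, the record collected for each faculty member could be as simple as the sketch below. The field names and sample numbers are hypothetical; the h-index would come from Google Scholar or MS Academic Search, and the rest from university records.

```python
from dataclasses import dataclass

@dataclass
class FacultyAssessment:
    """One faculty member's record under the three proposed metrics (hypothetical fields)."""
    name: str
    h_index: int                        # Faculty metric 1: scholarly productivity and impact
    works_in_external_use: int          # Faculty metric 2: inventions, books, software in external use
    industry_funding_last_year: float   # Faculty metric 3: industry dollars for collaborative research

# A made-up example record.
record = FacultyAssessment("Dr. Example", h_index=24,
                           works_in_external_use=3,
                           industry_funding_last_year=250_000.0)
print(record)
```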

Bidding over star faculty

Maybe having a new class of elite, four-star university professors would taint the system. It could certainly take much of the fun out of being an academic, a profession in which tenure and the freedom to pursue one’s own research agenda are core perks of the job. Public rankings also introduce the risk of bullying and finger-pointing toward faculty who land in the bottom half of the rankings.

For those fortunate faculty who end up at the top of the rankings, however, life would be sweet. Rightly or wrongly, four-star faculty would enjoy money and prestige. Four-star professors would fend off ever-richer job offers from competing universities vying to attract highly rated faculty. This reminds me of the article I wrote for NCURA magazine about faculty tenure, in which I speculated that if tenure were to go away and faculty were to become a mobile workforce, the top ten percent or so of faculty would be bid for, sort of like star baseball players.

In the U.K., I suspect that an unintended outcome of ranking faculty publicly will be the creation of a tiered system in which top faculty are hotly pursued and command higher wages. The battle for four-star faculty in the U.K. has already begun. One highly productive professor I was scheduled to meet with emailed me the day before to tell me he was no longer with that university: he and his students had been hired away by another university, and his entire lab was moving immediately. Another professor I spoke to (whose distinguished career would likely make him a four-star faculty member) was brought back from retirement by his former employer to boost the university’s REF score.

The first thing graduate students would do, while choosing where to apply, would be to shop for their future advisor by her national ranking. Research money from governments and sponsoring companies would rain down on four-star faculty. Companies would browse the faculty ranking tool to decide whom to approach for research collaborations.

A downside of transparent faculty rankings might be to further tip the balance toward rich universities that can afford to purchase an all-star faculty team. Teaching universities would be left out in the cold if this were to happen. Some corrective, balancing provision would need to be set forth to help poorer universities purchase a few all-star faculty of their own. Frequently, universities with smaller budgets are the same ones that offer cheaper tuition and, therefore, a critical social path upward for lower-income students.

Conclusion

It remains to be seen whether the U.K. is helping or harming its world-class research university system by implementing mandatory assessments. Reactions to the REF among the people I spoke to while I was in England were passionately divided. What everyone agreed on, however, is that faculty tenure and unpublished university performance measures are increasingly hard to defend in an era when unemployment and private-sector layoffs are all too common.

Like it or not, in both the U.K. and the U.S., the taxpaying public foots much of the bill for our university system, first in taxes, then in tuition. If the people paying the bills demand that universities step up and demonstrate the research output of their faculty, it will become increasingly difficult for university administrators to defend closed books in the name of “academic tradition.”

In the U.K., the REF assessment process will be managed by two government bodies that oversee most of the U.K. university system: Research Councils UK (RCUK) and the Higher Education Funding Council for England (HEFCE). I wonder whether, a few years from now, U.S. federal funding agencies such as NSF, NIH, DARPA, DOD and DOE will be mandated to fund and oversee the collection of (already existing) university data to be published in a user-friendly public database.

image credit: rypple.com



Melba Kurman writes and speaks about innovative tech transfer from university research labs to the commercial marketplace. Melba is the president of Triple Helix Innovation, a consulting firm dedicated to improving innovation partnerships between companies and universities.
