Saturday, May 24, 2008

(More) Open Metrics: Emerging Impact Measures

Numbers Game Hots Up

Citation metrics have become key numbers for journals, institutions and even individuals, and a host of different models are emerging

/ Tracey Caldwell / Information World Review / February 4 2008 /

The beguiling simplicity of the “impact factor” has made it a figure of supreme importance in research.

Impact Factor

Journal impact factors, or IFs, measure how often science and social science journals are cited by academics. This count of how often a journal's articles are cited by researchers in the field has become shorthand for the value of the journal, and funding bodies and employers use citation metrics to assess the productivity of institutions, departments and individuals.
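The arithmetic behind the standard two-year impact factor is straightforward: citations received in a given year to a journal's articles from the previous two years, divided by the number of citable items the journal published in those two years. A minimal sketch in Python, with invented figures:

def impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    """Two-year impact factor: citations this year to the previous two
    years' articles, divided by the citable items published in those years."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical journal: 1,200 citations in 2007 to its 2005-2006 articles,
# of which there were 400 citable items.
print(impact_factor(1200, 400))  # 3.0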

Thomson Scientific dominates the citation metrics landscape with its Web of Science-based citation index. Recently, however, it has faced increasing competition from the likes of Scopus and Google Scholar. The existence of realistic alternatives to Thomson Scientific’s index – and which give different results from it – has thrown the debate on citation metrics wide open.


Web of Science
[http://en.wikipedia.org/wiki/Web_of_science]

Scopus

Google Scholar

[snip]

Citation metrics are also used to produce the H Index (a measure of an individual’s productivity and citation impact) and the G Index (a variant that gives more weight to an author’s most highly cited papers); both calculations are sketched below.

[snip]

H-index
[http://en.wikipedia.org/wiki/H-index]

G Index
[http://en.wikipedia.org/wiki/G-index]
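Both indices can be computed directly from a list of a researcher's per-paper citation counts. A minimal sketch of the two calculations (the citation counts in the example are invented):

def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def g_index(citations):
    """Largest g such that the top g papers together have at least g*g citations."""
    counts = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, cites in enumerate(counts, start=1):
        total += cites
        if total >= rank * rank:
            g = rank
    return g

papers = [10, 8, 5, 4, 3]  # invented citation counts
print(h_index(papers))  # 4: four papers each cited at least 4 times
print(g_index(papers))  # 5: the top 5 papers have 30 >= 25 citations in total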

The Web of Science (WoS) is well established, with huge coverage. But critics say that WoS is expensive – Google Scholar, by comparison, is free – and its coverage incomplete. They also say that because citations take years to accumulate, WoS cannot identify what is hot right now.

Lagging, Not Leading

But Kuan-Teh Jeang, editor in chief of open access (OA) journal Retrovirology, says that metrics designed to measure “previous” modes of publication are an assessment of publishing impact on a largely “Western and developed audience” and are lagging rather than leading indicators.


Retrovirology

"Things that seem to be invisible now might prove to be highly impactful,” says Jeang. [snip]

Matthew Cockerill, publisher at OA publishing house BioMed Central, believes the timeliness of the newer indices is an asset. “Google Scholar is wide-embracing and up to date. Scopus adds every new biomedical journal on an annual basis. And Google Scholar adds on an automated basis ...”

BioMed Central
[http://en.wikipedia.org/wiki/BioMed_Central]

[snip]

Thomson Scientific believes that maintaining indexing quality and consistency of citation data is key. “Our focus is on making sure our metrics reflect the scholarly process well,” says Jim Pringle, vice president of product development at Thomson Scientific. He points out that the company supplements its journal citation reports by publishing a hotlist of papers that are emerging as highly cited.

[Jim] Pringle [of Thomson Scientific] says [the company] is watching with interest experiments with other citation metrics, from journal ranking body Eigenfactor to download metrics. “With a download or a page view versus a citation in peer-reviewed literature, you are dealing with a different point on the value scale,” he says.

Eigenfactor

"Eigenfactor: Measuring the Value and Prestige of Scholarly Journals" / Carl Bergstrom / College & Research Libraries News (May 2007) Vol. 68, No. 5
[http://www.ala.org/ala/acrl/acrlpubs/crlnews/backissues2007/may07/eigenfactor.cfm]
[snip]
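Eigenfactor scores journals with a PageRank-style calculation on the journal-to-journal citation network: a journal counts as influential when it is cited by other influential journals. Below is a minimal power-iteration sketch on an invented three-journal citation matrix; the real algorithm also normalises for article counts and excludes journal self-citations.

def eigenfactor_sketch(M, alpha=0.85, iterations=100):
    """PageRank-style influence scores. M[i][j] is the fraction of
    journal j's outgoing citations that point to journal i."""
    n = len(M)
    v = [1.0 / n] * n  # start from a uniform influence vector
    for _ in range(iterations):
        v = [alpha * sum(M[i][j] * v[j] for j in range(n)) + (1 - alpha) / n
             for i in range(n)]
    return v

# Invented network: each column sums to 1, splitting a journal's
# citations among the other two journals.
M = [[0.0, 0.5, 0.3],
     [0.6, 0.0, 0.7],
     [0.4, 0.5, 0.0]]
print(eigenfactor_sketch(M))  # higher score = more "influential" journal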

When developing the H Index, Jorge Hirsch of the University of California, San Diego, reasoned that because citation counts were used for research evaluation in faculty recruiting and promotion, as well as in grant allocations, articles that received large numbers of citations should count as significant in such evaluations, even when they were not published in high-impact journals.

Hirsch developed the H Index as a metric that could illustrate research achievement.

Thomson Scientific ... puts health warnings on its metrics. The company says it does not rely on the impact factor alone in assessing the usefulness of a journal, and neither should anyone else.

“It is important that people use the metrics well and use them for the right purpose,” says Pringle.

[snip]

[Cockerill notes that] “[t]he reality is that people tend to give emphasis to a number, and this can be circular: the decision about where to submit a piece of research is based on the IF of the journal.

“Evaluation authorities say they don’t attach importance to impact factors, but the perception is that IFs are all-important. But people forget about the partial nature of IFs.”

Selection-Neutral

[snip]

One of the issues with citation metrics is that they do not thoroughly reflect the range of scientific advancement. Research with a more practical application might be cited less in other research. There is scientific value in clinical trials and individual datasets, yet results from them are rarely cited.

There have been moves to include sources beyond journal papers, but there is still a way to go. Scopus publishes conference proceedings and 33 million abstracts, and Thomson Scientific also publishes conference proceedings as “a way to uncover research ideas as they are presented for the first time - often before publication in the journal literature”.


Looking beyond the journal, [Niels] Weertman [Scopus product manager] says: “We want to include other sources, and researchers need to have access to that content. In some disciplines, such as science and maths, it has been shown that 50-60% of research results are in conference proceedings, while in arts and humanities most research is in a book, not in papers or conference proceedings.”

Citation measures have come under fire, but whatever their flaws, they are undeniably relevant.

Research has shown there is a positive relationship between average citations per paper and peer review measures of research performance.

[snip]

The increasing complexity of the metrics landscape should have at least one beneficial effect: making people think twice before bandying about misleading indicators. More importantly, it will hasten the development of better, more open metrics based on more criteria, with the ultimate effect of improving the rate of scientific advancement.


[snip]

Measure the Metrics

Cognitive scientist and open access (OA) evangelist Stevan Harnad also welcomes the introduction of metrics to supplement and eventually substitute for panel review, but he believes HEFCE (the Higher Education Funding Council for England) must test and validate many potential metrics against the panel reviews in 2008.

He says: “The candidate metrics must go beyond just ISI journal impact factors, or even article/author citation counts. Non-ISI citation data (such as Google Scholar), download data, co-citations, and many other candidate metrics should be tested and validated against the RAE 2008 panel reviews, discipline by discipline.

“Open access looms large in both the generation and evaluation of metrics. RAE/HEFCE still has not made the link. Once OA self-archiving is mandated UK-wide and worldwide there will be an unprecedentedly rich and diverse set of OA metrics to test and validate.”
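One concrete way to run the validation Harnad describes is to correlate each candidate metric with the panel scores, discipline by discipline. A minimal sketch using Spearman rank correlation; the departmental scores and metric values below are invented placeholders:

from scipy.stats import spearmanr

# Hypothetical per-department records for a single discipline.
panel_scores = [4.0, 3.5, 3.0, 2.5, 2.0, 1.5]
candidate_metrics = {
    "citations_per_paper": [12.1, 9.8, 7.5, 8.0, 4.2, 3.1],
    "downloads_per_paper": [450, 510, 300, 280, 150, 120],
}

for name, values in candidate_metrics.items():
    rho, p = spearmanr(values, panel_scores)
    print(f"{name}: Spearman rho = {rho:.2f} (p = {p:.3f})")

A metric that correlates strongly with the panel outcome in a given discipline is a plausible substitute there; one that does not should be dropped or reweighted for that discipline.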


[more]

[http://www.iwr.co.uk/information-world-review/features/2208954/numbers-game-hots-3774199]
