Citation analysis
Citation analysis is the examination of the frequency, patterns, and graphs of citations in documents. It uses the directed graph of citations — links from one document to another document — to reveal properties of the documents. A typical aim would be to identify the most important documents in a collection. A classic example is that of the citations between academic articles and books.[1][2] For another example, judges of law support their judgements by referring back to judgements made in earlier cases (see citation analysis in a legal context). A further example is provided by patents, which cite prior art: earlier patents relevant to the current claim. The digitization of patent data and increasing computing power have led to a community of practice that uses these citation data to measure innovation attributes, trace knowledge flows, and map innovation networks.[3]
Documents can be associated with many other features in addition to citations, such as authors, publishers, and journals, as well as their actual texts. The general analysis of collections of documents is known as bibliometrics, and citation analysis is a key part of that field. For example, bibliographic coupling (two documents sharing references) and co-citation (two documents being cited together) are association measures based on citation analysis. The citations in a collection of documents can also be represented in forms such as a citation graph, as pointed out by Derek J. de Solla Price in his 1965 article "Networks of Scientific Papers".[4] This means that citation analysis draws on aspects of social network analysis and network science.
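The two association measures above can be computed directly from a citation graph. The following is a minimal sketch on a toy graph; the paper labels (A–E) and the graph itself are hypothetical, chosen only to illustrate the definitions:

```python
# Toy directed citation graph: each paper maps to the set of papers it cites.
# Paper labels A-E are hypothetical, for illustration only.
citations = {
    "A": {"C", "D"},
    "B": {"C", "D", "E"},
    "C": {"E"},
    "D": {"E"},
    "E": set(),
}

def bibliographic_coupling(p, q, graph):
    """Strength of coupling: how many references p and q share."""
    return len(graph[p] & graph[q])

def co_citation(p, q, graph):
    """Co-citation count: how many papers cite both p and q."""
    return sum(1 for refs in graph.values() if p in refs and q in refs)

print(bibliographic_coupling("A", "B", citations))  # A and B both cite C and D -> 2
print(co_citation("C", "D", citations))             # C and D are cited together by A and B -> 2
```

Note the asymmetry of the two measures: bibliographic coupling looks at the outgoing edges of the two papers (fixed once they are published), while co-citation looks at incoming edges and can grow over time as new papers cite the pair.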
An early example of automated citation indexing was CiteSeer, which was used for citations between academic papers, while Web of Science is an example of a modern system which includes more than just academic books and articles, reflecting a wider range of information sources. Today, automated citation indexing[5] has changed the nature of citation analysis research, allowing millions of citations to be analyzed for large-scale patterns and knowledge discovery. Citation analysis tools can be used to compute various impact measures for scholars based on data from citation indices.[6][7][note 1] These have various applications, from the identification of expert referees to review papers and grant proposals, to providing transparent data in support of academic merit review, tenure, and promotion decisions. This competition for limited resources may lead to ethically questionable behavior to increase citations.[8][9]
Much criticism has been directed at the practice of naively using citation analysis to compare the impact of different scholarly articles without taking into account other factors that may affect citation patterns.[10] Among these criticisms, a recurrent one focuses on "field-dependent factors": citation practices vary from one area of science to another, and even between fields of research within a discipline.[11]
History
In a 1965 paper, Derek J. de Solla Price described the inherent linking characteristic of the Science Citation Index (SCI) as "Networks of Scientific Papers".[4] The links between citing and cited papers became dynamic when the SCI began to be published online. The Social Sciences Citation Index became one of the first databases to be mounted on the Dialog system[22] in 1972. With the advent of the CD-ROM edition, linking became even easier and enabled the use of bibliographic coupling for finding related records. In 1973, Henry Small published his classic work on co-citation analysis, which became a self-organizing classification system that led to document clustering experiments and eventually an "Atlas of Science", later called "Research Reviews".
The topological and graphical nature of the worldwide citation network, an inherent property of the scientific literature, was described by Ralph Garner (Drexel University) in 1965.[23]
The use of citation counts to rank journals was a technique used in the early part of the twentieth century, but the systematic ongoing measurement of these counts for scientific journals was initiated by Eugene Garfield at the Institute for Scientific Information, who also pioneered the use of these counts to rank authors and papers. In a landmark 1965 paper, he and Irving Sher showed the correlation between citation frequency and eminence, demonstrating that Nobel Prize winners published five times the average number of papers while their work was cited 30 to 50 times the average. Garfield reported this phenomenon in a long series of essays on the Nobel and other prizes. The usual summary measure is known as the impact factor: the number of citations to a journal for the previous two years, divided by the number of articles published in those years. It is widely used, for both appropriate and inappropriate purposes; in particular, the use of this measure alone for ranking authors and papers is quite controversial.
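The two-year impact factor defined above is simple arithmetic, which a small worked example makes concrete. The journal and all numbers below are hypothetical, used only to illustrate the calculation:

```python
# Hypothetical journal data, for illustration only.
citations_in_2023_to_2021_2022 = 600  # citations in 2023 to items published in 2021-22
articles_2021 = 150                   # citable articles published in 2021
articles_2022 = 150                   # citable articles published in 2022

# Two-year impact factor: citations in year Y to items from years Y-1 and Y-2,
# divided by the number of citable articles published in those two years.
impact_factor = citations_in_2023_to_2021_2022 / (articles_2021 + articles_2022)
print(impact_factor)  # 600 / 300 -> 2.0
```

The same citation total spread over more published articles yields a lower impact factor, which is one reason the measure says little about any individual paper in the journal.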
In an early study in 1964 of the use of Citation Analysis in writing the history of DNA, Garfield and Sher demonstrated the potential for generating historiographs, topological maps of the most important steps in the history of scientific topics. This work was later automated by E. Garfield, A. I. Pudovkin of the Institute of Marine Biology, Russian Academy of Sciences and V. S. Istomin of Center for Teaching, Learning, and Technology, Washington State University and led to the creation of the HistCite[24] software around 2002.
Automatic citation indexing was introduced in 1998 by Lee Giles, Steve Lawrence and Kurt Bollacker[25] and enabled automatic algorithmic extraction and grouping of citations for any digital academic and scientific document. Where previous citation extraction was a manual process, citation measures could now scale up and be computed for any scholarly and scientific field and document venue, not just those selected by organizations such as ISI. This led to the creation of new systems for public and automated citation indexing, the first being CiteSeer (now CiteSeerX), soon followed by Cora, which focused primarily on the fields of computer science and information science. These were later followed by large-scale academic domain citation systems such as Google Scholar and Microsoft Academic. Such autonomous citation indexing is not yet perfect in citation extraction or citation clustering, with an error rate estimated by some at 10%, though a careful statistical sampling has yet to be done. This has resulted in authors such as Ann Arbor, Milton Keynes, and Walton Hall, all place names misparsed as author names, being credited with extensive academic output.[26] SCI claims to create automatic citation indexing through purely programmatic methods. Even the older records have a similar magnitude of error.