Faculty Scholarly Productivity Index
The Faculty Scholarly Productivity Index (FSPI), a product of Academic Analytics, is a metric designed to create benchmark standards for the measurement of academic and scholarly quality within and among United States research universities.
The index is based on a set of statistical algorithms developed by Lawrence B. Martin and Anthony Olejniczak. It measures the annual amount and impact of faculty scholarly work in several areas, including:
- Publications (how many books and peer-reviewed journal articles have been published and what proportion of the faculty is involved in publication activity?)
- Citations of journal publications (who is referring to those journal articles in subsequent work?)
- Federal research funding (what and how many projects have been deemed of sufficient value to merit federal dollars, and at what level of funding?)
- Awards and honors (a key indicator of innovative thinking and scholarly excellence that has influenced the discipline over time)
The FSPI analysis creates, by academic field of study, a statistical score and a ranking based on the cumulative scoring of a program's faculty using these quantitative measures compared against national standards within the particular discipline. Individual program scores can then be combined to demonstrate the quality of the scholarly work of the entire university. This information is gathered for over 230,000 faculty members representing 118 academic disciplines in roughly 7,300 Ph.D. programs throughout more than 350 universities in the United States.
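Academic Analytics does not publish its exact scoring formula. As a rough illustration of the kind of calculation the paragraph above describes, the sketch below z-scores each per-capita measure against a discipline-wide distribution and combines the results with weights; the weights, field names, and sample figures are invented for the example, not the FSPI's actual method.

```python
"""A minimal sketch of a discipline-normalized composite score.

The weights, measure names, and numbers below are illustrative
assumptions; the real FSPI formula is proprietary.
"""
from dataclasses import dataclass
from statistics import mean, stdev


@dataclass
class ProgramStats:
    name: str
    articles_per_faculty: float
    citations_per_faculty: float
    grant_dollars_per_faculty: float
    awards_per_faculty: float


# Hypothetical weights for combining the per-capita measures.
WEIGHTS = {
    "articles_per_faculty": 0.30,
    "citations_per_faculty": 0.30,
    "grant_dollars_per_faculty": 0.25,
    "awards_per_faculty": 0.15,
}


def composite_scores(programs: list[ProgramStats]) -> dict[str, float]:
    """Z-score each measure against the discipline's distribution,
    then sum the weighted z-scores into one composite per program."""
    scores = {p.name: 0.0 for p in programs}
    for field, weight in WEIGHTS.items():
        values = [getattr(p, field) for p in programs]
        mu, sigma = mean(values), stdev(values)
        for p in programs:
            z = (getattr(p, field) - mu) / sigma if sigma else 0.0
            scores[p.name] += weight * z
    return scores


if __name__ == "__main__":
    # Fabricated figures for three programs in one discipline.
    discipline = [
        ProgramStats("Program A", 2.1, 35.0, 120_000.0, 0.10),
        ProgramStats("Program B", 1.4, 22.0, 80_000.0, 0.05),
        ProgramStats("Program C", 3.0, 50.0, 200_000.0, 0.20),
    ]
    ranked = sorted(composite_scores(discipline).items(),
                    key=lambda kv: kv[1], reverse=True)
    for name, score in ranked:
        print(f"{name}: {score:+.2f}")
```

Normalizing within the discipline, rather than across the whole university, is what lets programs in low-citation fields such as the humanities be compared on the same scale as high-citation sciences.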
Rankings approach
Unlike other annual college and university rankings, e.g., the U.S. News & World Report annual survey, the FSPI focuses on research institutions as defined by the Carnegie Classification of Institutions of Higher Education. It draws on the approach used by the United States National Research Council (NRC), which publishes a ranking of U.S.-based graduate programs approximately every ten years, but focuses on providing a more frequently gathered set of benchmark measurements that do not include the qualitative and subjective reputation assessments favored by the NRC and other ranking systems.
History
The system for evaluating university programs that forms the basis of the FSPI was developed by Lawrence Martin and Anthony Olejniczak of Stony Brook University. Martin had been studying, speaking, and writing about faculty scholarly productivity since 1995. Over that period, a series of discipline-specific, per-capita regression models was created and tested to evaluate how accurately they could predict the academic reputation of the faculty of doctoral programs.
These prototype materials employed data from the National Research Council's 1995 publication Continuity and Change (and the subsequent CD-ROM publication of data), describing and evaluating American Ph.D. programs by field. Martin and Olejniczak found that the reputation of a program (as measured by faculty scholarly reputation in a survey conducted by the National Research Council) could be predicted well using a discipline-specific regression equation derived from quantitative, per-capita data available for each program (the number of journal articles, citations, federally funded grants, and honorific awards). Reputation could be predicted with high statistical significance, but important deviations from the regression line were also apparent; that is to say, some schools were outperforming their reputation, while others were underperforming. The prototype materials based on this method, and the data from the 1995 NRC study, were presented at numerous academic conferences from 1996 to 2004 and formed the basis on which the FSPI was developed.
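The published accounts do not specify the model form or coefficients. The sketch below is a minimal illustration of the general approach, assuming ordinary least squares on fabricated per-capita data; the program labels, numbers, and residual interpretation are assumptions made for the example, not values from the NRC study.

```python
"""A sketch of predicting survey reputation from per-capita
productivity measures, with residuals flagging programs whose
reputation lags or exceeds what the measures predict.
All data below are fabricated for illustration."""
import numpy as np

# Columns: articles, citations, grants ($100k), awards,
# all per faculty member. Rows: doctoral programs in one discipline.
X = np.array([
    [2.1, 35.0, 0.6, 0.10],
    [1.4, 22.0, 0.4, 0.05],
    [3.0, 50.0, 0.9, 0.20],
    [0.8, 10.0, 0.2, 0.02],
    [2.5, 40.0, 0.7, 0.12],
    [1.9, 30.0, 0.5, 0.08],
    [2.8, 45.0, 0.8, 0.15],
])
# NRC-style survey reputation scores for the same programs.
reputation = np.array([3.8, 3.1, 4.3, 2.2, 3.6, 3.4, 4.0])
programs = [f"Program {c}" for c in "ABCDEFG"]

# Fit a discipline-specific linear model: reputation ~ measures.
design = np.column_stack([np.ones(len(X)), X])  # add intercept
coef, *_ = np.linalg.lstsq(design, reputation, rcond=None)

predicted = design @ coef
residuals = reputation - predicted  # deviation from the regression line
for name, r in zip(programs, residuals):
    # Negative residual: reputation is lower than productivity
    # predicts, i.e. the program out-performs its reputation.
    verdict = ("out-performs its reputation" if r < 0
               else "under-performs its reputation")
    print(f"{name}: residual {r:+.3f} -> {verdict}")
```

Under this reading, the deviations from the regression line are the point of interest: they single out programs whose measured productivity and survey reputation disagree.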
Like many academic productivity metrics, the FSPI has been criticized for failing to differentiate among, and apply appropriate measures to, the very distinct academic fields represented in most colleges and universities. A number of specific objections have been raised about how the FSPI measures scholarly productivity, among them:
- inadequate or inconsistent weighting of the quality of the journals in which publications appear;
- failure to account for the differing labor involved in producing different types of publications: works based on secondary sources are not distinguished from those based on deep original research, so departments whose faculty write much but research little are rated more highly;
- failure to differentiate between the scholarly concentrations of departments: faculty engaged in obscure, non-mainstream research are cited less often than those working in fashionable, mainstream areas of research and scholarship;
- citation indexes, on which scholarly productivity indexes rely heavily, do not measure citations in books;
- citation indexes are more appropriate for hard science disciplines than for the humanities;
- non-conventional publications, which are increasing in number (e.g., websites and online publications, audio and media productions), are ignored;
- use of such indexes promotes "researching and publishing to the index" in order to preserve and enlarge university, government, and private grant support, indirectly favoring conservative, safe, mainstream research and publications.
Despite these objections, the product is used today by numerous universities.