Faculty Scholarly Productivity Index

The Faculty Scholarly Productivity Index (FSPI), a product of Academic Analytics, is a metric designed to create benchmark standards for the measurement of academic and scholarly quality within and among United States research universities. The index is based on a set of statistical algorithms developed by Lawrence B. Martin and Anthony Olejniczak. It measures the annual amount and impact of faculty scholarly work in several areas, including:

* Publications (how many books and peer-reviewed journal articles have been published, and what proportion of the faculty is involved in publication activity?)
* Citations of journal publications (who is referring to those journal articles in subsequent work?)
* Federal research funding (which and how many projects have been deemed of sufficient value to merit federal dollars, and at what level of funding?)
* Awards and honors (a key indicator of innovative thinking and/or scholarly excellence that has had an impact on the discipline over a period)

The FSPI analysis creates, by academic field of study, a statistical score and a ranking based on the cumulative scoring of a program's faculty, using these quantitative measures compared against national standards within the particular discipline. Individual program scores can then be combined to demonstrate the quality of the scholarly work of the entire university. This information is gathered for over 230,000 faculty members representing 118 academic disciplines in roughly 7,300 Ph.D. programs at more than 350 universities in the United States.
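The exact FSPI formula is proprietary and is not published; a minimal sketch of one plausible scoring scheme of this kind, comparing a program's per-capita measures against a discipline's national distribution, might look as follows. All numbers, weights, and names here are illustrative assumptions, not Academic Analytics' actual algorithm.

```python
# Hypothetical sketch of a per-capita, discipline-normalized scoring scheme.
# The real FSPI algorithm is proprietary; weights and benchmark data below
# are invented for illustration only.

from statistics import mean, stdev

# Per-capita national benchmarks for one discipline (hypothetical values,
# one entry per program in the national sample).
NATIONAL = {
    "articles":  [1.2, 2.5, 0.8, 3.1, 1.9, 2.2, 1.4],
    "citations": [10.0, 25.0, 6.0, 40.0, 18.0, 22.0, 12.0],
    "grants":    [0.3, 0.9, 0.1, 1.2, 0.6, 0.8, 0.4],
    "awards":    [0.05, 0.2, 0.0, 0.3, 0.1, 0.15, 0.08],
}

# Assumed relative weights for the four measure families.
WEIGHTS = {"articles": 0.3, "citations": 0.3, "grants": 0.3, "awards": 0.1}

def program_score(per_capita: dict) -> float:
    """Weighted sum of z-scores of a program's per-capita measures
    against the discipline's national distribution."""
    score = 0.0
    for measure, weight in WEIGHTS.items():
        values = NATIONAL[measure]
        z = (per_capita[measure] - mean(values)) / stdev(values)
        score += weight * z
    return score

# A program above the national per-capita averages scores positively;
# one below them scores negatively.
strong = program_score({"articles": 3.0, "citations": 35.0, "grants": 1.0, "awards": 0.25})
weak = program_score({"articles": 0.9, "citations": 8.0, "grants": 0.2, "awards": 0.02})
print(strong > 0 > weak)  # → True
```

Normalizing within a discipline, as sketched here, is what allows a history program and a chemistry program to be ranked on a common scale despite very different publication and funding norms.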


Rankings approach

Unlike other annual college and university rankings, ''e.g.'', the ''U.S. News & World Report'' annual survey, the FSPI focuses on research institutions as defined by the Carnegie Classification of Institutions of Higher Education. It draws on the approach used by the United States National Research Council (NRC), which publishes a ranking of U.S.-based graduate programs approximately every ten years, but aims to provide a more frequently gathered set of benchmark measurements that do not include the qualitative and subjective reputation assessments favored by the NRC and other ranking systems.


History

The system for evaluating university programs that forms the basis of the FSPI was developed by Lawrence Martin and Anthony Olejniczak of Stony Brook University. Martin had been studying, speaking, and writing about faculty scholarly productivity since 1995. During that period, a series of discipline-specific, per-capita regression models was created and tested to evaluate their accuracy and the feasibility of predicting the academic reputation of the faculty of doctoral programs. These prototype materials employed data from the National Research Council's 1995 publication ''Continuity and Change'' (and the subsequent CD-ROM publication of its data), which described and evaluated American Ph.D. programs by field.

Martin and Olejniczak found that the reputation of a program (as measured by faculty scholarly reputation in a survey conducted by the National Research Council) could be predicted well using a discipline-specific regression equation derived from quantitative, per-capita data available for each program: the number of journal articles, citations, federally funded grants, and honorific awards. Reputation could be predicted with high statistical significance, but important deviations from the regression line were also apparent; that is, some schools were outperforming their reputation, while others were under-performing. The prototype materials based on this method, and on the data from the 1995 NRC study, were presented at numerous academic conferences from 1996 to 2004 and formed the basis on which the FSP Index was developed.

Like many academic productivity algorithms, the FSPI has major flaws. It fails to adequately differentiate among, and apply appropriate measures to, the very distinct academic fields represented in most colleges and universities. A number of specific objections have also been raised about how the FSPI measures scholarly productivity, among them:

* Inadequate or inconsistent weighting of the quality of the journals in which publications appear.
* Failure to differentiate the labor involved in producing different types of publications (work based on secondary sources and work based on tedious, deep research are not distinguished), so departments with many faculty members who write much but research little are rated more highly.
* Failure to differentiate between the scholarly concentrations of departments: faculty involved in obscure, non-mainstream research are cited less than those working in fashionable, mainstream areas of research and scholarship.
* Citation indexes, extensively used in scholarly productivity indexes, do not measure citations in books.
* Citation indexes are more appropriate for hard-science disciplines and less appropriate for humanities disciplines.
* Non-conventional publications, which are increasing in number (e.g., websites and online publications, audio and media productions), are ignored.
* Use of such indexes promotes "researching and publishing to the index" in order to preserve and enlarge university, government, and private grant support, and indirectly encourages conservative, safe, mainstream research and publications.

In spite of these objections, the product is used today by numerous universities (Academic Analytics Client List).
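The regression-residual idea described above can be sketched in miniature. The data and single-predictor model below are hypothetical stand-ins; the actual Martin-Olejniczak models were discipline-specific and combined several per-capita measures (articles, citations, grants, and awards), not one.

```python
# Minimal illustration of predicting program reputation from a per-capita
# productivity measure and reading the residuals. All data are invented.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Per-capita journal articles (x) vs. surveyed reputation score (y).
programs = {
    "A": (1.0, 2.1), "B": (2.0, 3.0), "C": (3.0, 3.8),
    "D": (2.5, 4.4),  # reputation higher than its output predicts
    "E": (2.5, 2.4),  # reputation lower than its output predicts
}
xs = [x for x, _ in programs.values()]
ys = [y for _, y in programs.values()]
a, b = fit_line(xs, ys)

# Residual > 0: reputation exceeds what productivity predicts
# ("outperforming its numbers"); residual < 0: reputation lags productivity.
residuals = {name: y - (a + b * x) for name, (x, y) in programs.items()}
print(residuals["D"] > 0 > residuals["E"])  # → True
```

The FSPI inverts this observation: since reputation tracks the quantitative measures so closely, the measures themselves can stand in for reputation surveys, and be gathered every year instead of every decade.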


References


External links


"Top 50"
* “A New Standard for Measuring Doctoral Programs,” Piper Fogg, ''The Chronicle of Higher Education'', January 12, 2007.

* "How Productive Are your Programs?", Scott Jaschik, ''Inside Higher Education'', January 25, 2006. (http://www.insidehighered.com/news/2006/01/25/analytics) * “Towards a Better Way to Rate Research Doctoral Programs: Executive Summary,” Joan Lorden and Lawrence Martin, position paper from NASULG's Council on Research Policy and Graduate Education,


Academic Analytics website
* "Are Public Universities Losing Ground?", ''Inside Higher Education'', March 14, 2007. (http://www.insidehighered.com/news/2007/03/14/analytics) {{University ranking systems University and college rankings in the United States