Other Titles

Leadership in Nursing Education [Session]

Abstract

Session presented on Tuesday, November 10, 2015: When considering evidence-driven approaches to quality improvement in schools or departments of nursing, seldom is a true quality improvement circle observed. Too often, faculty "discuss" results of standardized test scores in ways that could be considered anecdotal, episodic, time-limited, and perhaps idiosyncratic. That is to say that, unfortunately, a one-time, end-of-term discussion of a single set of standardized test scores is put forth instead of results from a series of exams viewed longitudinally. Discussion of results for selected content areas for a student cohort, then, is usually eclipsed by the limitations of time (and know-how), obviating the opportunity for true quality improvement strategizing. If faculty are given regular, formal opportunities to discuss group results of high-stakes, proctored, content-mastery exams (such as those reported by Mountain Measurement, NCSBN, and/or ATI), the chance to collaborate both within one course and across a set of courses might emerge if the Deming Plan-Do-Check-Act (PDCA) model is used. Structuring the groups with members representing like content domains (e.g., all medical-surgical nursing instructors) may provide a good beginning for planning a single course for subsequent offerings, but it falls short of true quality improvement if the groups are not restructured, in subsequent formal opportunities, to capture creative strategizing from a group of dissimilar content masters. That is, when teachers of courses across a curriculum are assembled, the opportunity to identify weak versus strong content areas traversing a spectrum of learning opportunities might arise. In this fashion, faculty might learn to understand, in depth, what is involved in the measurement of student outcomes, especially as course and program objectives are leveled and considered.
This presentation allows the attendee to witness how a plan for faculty development was designed, one in which faculty are given regular, formal opportunities to view and critically analyze reports of standardized test scores. With an emphasis on assigning similar, then dissimilar, faculty content masters to groups, the authors will walk participants through a series of report interpretations, identifying the uses, then misuses, of the evidence. Prototypes of data users will be presented, starting with the occasional faculty user of data: those who obtain, then record, a test score for partial course credit in one time-limited course. The presentation then progresses to a description of the power user of data, who compiles a set of scores for multiple student cohorts over time, indicative of alleged teaching prowess alongside possible mitigating trends, again limited to one course over time. Finally, a plan from the program perspective, to align faculty from diverse content areas for more discriminating analysis of report data, will be discussed, in which emphasis is placed on the commonalities of categories of student performance (e.g., patient safety, comfort care, use of the nursing process). It is in this last crucial component, involving similar, then dissimilar, faculty areas of expertise, that true performance improvement might be realized, especially if faculty can truly engage in the continuous, not static, nature of this process (across, not just within, courses). Since the introduction of Deming quality improvement principles to health care environments in the early 1990s, significant resistance has occurred through protestations that the principles behind Japan's automotive industry improvements (where the model began) could never apply to the human condition.
In recent years, however, as hospitals and other health care environments have demonstrated fewer adverse outcomes with use of the PDCA model for provider and administrator decision-making, use of the model has become more frequent. It is seldom seen in higher education, with few references to PDCA action circles for faculty identified in the literature. It remains to be seen whether the use of action circles, with changing faculty membership over time, can be associated with improved student learning outcomes such as improved test scores, particularly in nursing, where high-stakes content-mastery testing is common and necessary.

Description

43rd Biennial Convention 2015 Theme: Serve Locally, Transform Regionally, Lead Globally.

Author Details

Mary Anne Schultz, RN; Geri Chesebrough, RN, CNE

Sigma Membership

Gamma Tau at-Large

Type

Presentation

Format Type

Text-based Document

Study Design/Type

N/A

Research Approach

N/A

Keywords

Deming Model, PDCA action circles, standardized test score reports

Conference Name

43rd Biennial Convention

Conference Host

Sigma Theta Tau International

Conference Location

Las Vegas, Nevada, USA

Conference Year

2015

Rights Holder

All rights reserved by the author(s) and/or publisher(s) listed in this item record unless relinquished in whole or part by a rights notation or a Creative Commons License present in this item record.

All permission requests should be directed accordingly and not to the Sigma Repository.

All submitting authors or publishers have affirmed that when using material in their work where they do not own copyright, they have obtained permission of the copyright holder prior to submission and the rights holder has been acknowledged as necessary.

Acquisition

Proxy-submission


Title

The Use of Standardized Test Score Reports to Inform Instruction Using the Deming Model
