Other Titles
Teaching and Learning Strategies for Educating Nurses [Session]
Abstract
Session presented on Monday, November 9, 2015:
Purpose: This presentation will discuss the development, validation, and implementation of the Fairness of Items Tool (FIT) for use by nurse educators as a component of the test development process to improve the quality of multiple-choice examinations. The FIT provides clear and concise guidelines for nursing faculty to use in developing unbiased test items.
Background: Multiple-choice examinations are a common assessment method used in nursing programs, and conclusions based on these assessments have high-stakes consequences. Faculty members therefore have an obligation to ensure that tests are valid and reliable assessments of student learning. For a test to be fair, valid, and reliable, it must contain well-written items. Constructing and revising test items is difficult and time-consuming, and nursing faculty members often lack adequate preparation and sufficient time for examination construction and analysis. Published guidelines are available to assist faculty in creating examination items; however, assessments and textbook item banks frequently violate these guidelines, resulting in the administration of assessments that contain flawed test items. Developing clear and concise guidelines for nursing faculty to use in developing unbiased test items is one strategy that may improve the quality of nursing assessments, thereby improving the quality of the decisions made based on these assessments.
Methodology: The development and validation of the FIT followed a three-phase process grounded in two theoretical frameworks adapted for this research study: the Revised Framework for Quality Assessment and the Conceptual Model for Test Development. In the first phase, the tool was developed by the principal investigator through an extensive review of published higher education and nursing literature related to item-writing rules, examination bias, and cultural bias. Phases two and three used systematic methods to establish the validity and reliability of the FIT. In phase two, content validity and face validity were established through review by a panel of item-writing experts. In phase three, multiple measures were used to establish reliability and construct validity through testing of the FIT by nursing faculty (N = 488), who used it to evaluate sample multiple-choice questions (MCQs). The sample for this research study was drawn from a list of 5,786 names and email addresses systematically sampled from American Association of Colleges of Nursing (AACN) member school websites. Inclusion criteria were active teaching in a nursing program and use of faculty-generated multiple-choice (MC) examinations for student assessment. Faculty-generated MC examinations include those developed by faculty through writing new test items, using test bank items, revising test items from any source, or any combination of these activities. Overall, the demographic characteristics of the sample were fairly representative of the general nursing faculty population, consisting primarily of educated white females over age 45. The sample was more likely than the general nursing faculty population to have doctoral preparation, full-time and tenured or tenure-track status, certification in academic nursing education, and higher academic rank. Males were slightly overrepresented in the sample, while African Americans were underrepresented. The sample represented all regions of the United States, more than 162 nursing programs, and diverse clinical specialties.
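The abstract describes the sampling frame only as "systematically sampled" from AACN member school websites; the interval and starting point are not reported. As a rough illustration of conventional systematic sampling only (the function name, interval rule, and numbers below are assumptions, not the study's protocol), a sample of n entries can be drawn from a frame by selecting every k-th entry after a random start:

    import random

    def systematic_sample(frame, n):
        """Draw a systematic sample of size n from the list 'frame':
        choose every k-th entry (k = len(frame) // n), starting from a
        random offset within the first interval."""
        k = max(len(frame) // n, 1)
        start = random.randrange(k)
        return frame[start::k][:n]

    # Hypothetical usage with illustrative numbers (not the study's figures):
    frame = [f"entry_{i}" for i in range(6000)]
    sample = systematic_sample(frame, 500)  # every 12th entry after a random start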
Results: The results of this research study support the hypothesis that the FIT is a valid and reliable tool for identifying bias in MCQs as a component of a systematic process for test development. The known-groups comparison supported the validity of the FIT as a measure of item bias. Tests for independence demonstrated that FIT scores are not affected by demographic variables. Analysis of agreements provided strong support for equivalence, and the Kuder-Richardson 20 (KR-20) supported the stability of the FIT. Cronbach's alpha coefficients demonstrated adequate reliability for a newly developed tool. The study also demonstrated that participants made similar decisions when using the FIT to evaluate MCQs.
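For readers unfamiliar with the two internal-consistency statistics named above, their standard textbook forms are given below; these are general definitions, not formulas or values taken from the study:

    \mathrm{KR\text{-}20} = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} p_i q_i}{\sigma_X^{2}}\right),
    \qquad
    \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_i^{2}}{\sigma_X^{2}}\right)

where k is the number of items, p_i and q_i are the proportions of examinees answering item i correctly and incorrectly, sigma_i^2 is the variance of item i, and sigma_X^2 is the variance of total scores. KR-20 is the special case of Cronbach's alpha for dichotomously scored items.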
Conclusion/Recommendations: Nurse educators can use the FIT as a component of the test development process to improve the quality of multiple-choice examinations. The FIT provides clear and concise guidelines for writing MCQs and revising textbook test bank items, and it offers a means to facilitate systematic research to validate guidelines and testing procedures and to improve the quality of MC test items. Improving the quality of examinations has the potential to improve student success and better prepare graduates for licensure and certification examinations, indirectly increasing the quality, quantity, and diversity of nurses joining the workforce. Note: This research study used Research Electronic Data Capture (REDCap), a secure, web-based survey tool and database, supported by the Center for Clinical and Translational Science and Training grant UL1-RR026314.
Sigma Membership
Beta Iota
Type
Presentation
Format Type
Text-based Document
Study Design/Type
N/A
Research Approach
N/A
Keywords
Multiple-Choice Test Items, Item-Writing Guidelines, Test Item Development
Recommended Citation
Hicks, Nikole Anderson, "Are your multiple-choice tests "FIT"? Using the Fairness of Items Tool (FIT) as a component of the test development process" (2016). Convention. 92.
https://www.sigmarepository.org/convention/2015/presentations_2015/92
Conference Name
43rd Biennial Convention
Conference Host
Sigma Theta Tau International
Conference Location
Las Vegas, Nevada, USA
Conference Year
2015
Rights Holder
All rights reserved by the author(s) and/or publisher(s) listed in this item record unless relinquished in whole or part by a rights notation or a Creative Commons License present in this item record.
All permission requests should be directed accordingly and not to the Sigma Repository.
All submitting authors or publishers have affirmed that when using material in their work where they do not own copyright, they have obtained permission of the copyright holder prior to submission and the rights holder has been acknowledged as necessary.
Review Type
Abstract Review Only: Reviewed by Event Host
Acquisition
Proxy-submission
Are your multiple-choice tests "FIT"? Using the Fairness of Items Tool (FIT) as a component of the test development process
Description
43rd Biennial Convention 2015 Theme: Serve Locally, Transform Regionally, Lead Globally.