
Assessing the reliability of simulation evaluation instruments used in nursing education: A test of concept study

ProQuest Dissertations and Theses, 2011
Dissertation
Author: Katie Anne Adamson
Abstract:
Human patient simulation (HPS) provides experiential learning opportunities for student nurses and may be used as a supplement or alternative to traditional clinical education. The body of evidence supporting HPS as a teaching strategy is growing. However, challenges associated with measuring student learning and performance in HPS activities continue to be a barrier to building the evidence base supporting or contesting the efficacy of HPS in nursing education. This proof of concept study included the development and utilization of a database of leveled, video-archived HPS scenarios for assessing the reliability (inter-rater, inter-instrument, and intra-rater or test-retest) and internal consistency of data produced using the Lasater Clinical Judgment Rubric© (LCJR), the Seattle University Evaluation Tool©, and the Creighton Simulation Evaluation Instrument™ (C-SEI). Twenty-nine nurse educators completed the six-week study procedures. Descriptive statistics and ANOVA comparisons of means supported the validity of the leveled, video-archived scenarios. Inter-rater reliability, reported as ICC(2,1) with 95% confidence intervals, was .889 (.402, .984) for the LCJR, .858 (.286, .979) for the Seattle University Evaluation Tool, and .952 (.697, .993) for the C-SEI. Intra-rater reliability, reported as ICC(3,1) with 95% confidence intervals, was .908 (.125, .994) for the LCJR, .907 (.120, .994) for the Seattle University Evaluation Tool, and .883 (-.001, .992) for the C-SEI. Internal consistency (Cronbach's alpha) was α = .974, .965, and .979 for the LCJR, the Seattle University Evaluation Tool, and the C-SEI, respectively. These results provided valuable information for educators and researchers seeking to measure student learning outcomes from HPS activities. Further, the success of this study provided evidence for the feasibility of a novel method for rapid instrument assessment that is being used for ongoing national research.
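To make the reported reliability coefficients concrete, the sketch below shows one way ICC(2,1), ICC(3,1), and Cronbach's alpha can be computed from score matrices using Python and numpy. This is not the author's analysis code; it follows the standard Shrout and Fleiss (1979) two-way ICC formulas and the usual Cronbach's alpha formula, and the small score matrices are hypothetical illustrations, not study data.

```python
import numpy as np

def icc_single_rater(scores: np.ndarray):
    """ICC(2,1) and ICC(3,1) for an n-targets x k-raters score matrix
    (Shrout & Fleiss, 1979, two-way models, single measures)."""
    n, k = scores.shape
    grand = scores.mean()
    ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()   # targets (scenarios)
    ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()   # raters
    ss_total = ((scores - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    icc21 = (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
    icc31 = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
    return icc21, icc31

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an n-observations x k-items score matrix."""
    n, k = items.shape
    item_vars = items.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical rater data: 3 video-archived scenarios scored by 4 raters.
ratings = np.array([
    [30.0, 28.0, 32.0, 29.0],
    [38.0, 40.0, 37.0, 39.0],
    [20.0, 22.0, 19.0, 21.0],
])
icc21, icc31 = icc_single_rater(ratings)
print(f"ICC(2,1) = {icc21:.3f}  ICC(3,1) = {icc31:.3f}")

# Hypothetical item-level data: 5 completed rubrics x 4 instrument items,
# the shape on which Cronbach's alpha is usually computed.
items = np.array([
    [3, 3, 2, 3],
    [4, 4, 4, 3],
    [2, 2, 1, 2],
    [4, 3, 4, 4],
    [1, 2, 1, 1],
], dtype=float)
print(f"Cronbach's alpha = {cronbach_alpha(items):.3f}")
```

The confidence intervals reported in the abstract would require the F-distribution-based interval formulas for ICCs, which are omitted here for brevity.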

TABLE OF CONTENTS

ACKNOWLEDGEMENTS
ABSTRACT
LIST OF TABLES
LIST OF FIGURES
DEDICATION

CHAPTER

1. INTRODUCTION
     Introduction
     Statement of the problem
     Statement of the purpose
     Specific aim and novel methods
     Background and significance
     Conclusion

2. REVIEW OF THE LITERATURE
     Introduction
     Conceptual framework
     Challenges to defining and evaluating experiential learning: Performance evaluation
     Improving clinical evaluation in nursing education
     Simulation evaluation instruments currently available
     Simulation evaluation instruments used in this study
     Measurement, validity and reliability
     Conclusion

3. RESEARCH DESIGN AND METHODS
     Introduction
     Study design
     Database of video-archived simulations
     Sample
     Data collection
     Data analyses
     Instrument, video and training descriptions
     Potential limitations
     Human subjects review

4. RESULTS
     Introduction
     Sample
     Sample size
     Findings
     Conclusion

5. DISCUSSION
     Introduction
     Summary of the study
     Discussion of the results
     Limitations related to the sample and methods
     Implications of findings
     Recommendations for future research
     Conclusion

REFERENCES

APPENDICES

A. LASATER CLINICAL JUDGMENT RUBRIC
B. SEATTLE UNIVERSITY EVALUATION TOOL©
C. CREIGHTON SIMULATION EVALUATION INSTRUMENT™ (C-SEI)
D. LETTER OF SUPPORT FOR USE OF THE LASATER CLINICAL JUDGMENT RUBRIC
E. LETTER OF SUPPORT FOR USE OF THE SEATTLE UNIVERSITY EVALUATION TOOL©
F. LETTER OF SUPPORT FOR USE OF THE CREIGHTON SIMULATION EVALUATION INSTRUMENT™ (C-SEI)
G. INVITATION TO PARTICIPATE IN STUDY
H. FOLLOW-UP RECRUITMENT E-MAIL
I. INTRODUCTION LETTER TO PARTICIPANTS
J. PARTICIPANT CONSENT FORM
K. PARTICIPANT QUESTIONNAIRE ABOUT STUDY METHODS
L. THANK YOU LETTER TO PARTICIPANTS
M. STUDY TIMELINE
N. INVITATION TO SCORE SCENARIO: CIRCLE (COUGSTREAM)
O. INVITATION TO SCORE SCENARIO: SQUARE (COUGSTREAM)
P. INVITATION TO SCORE SCENARIO: TRIANGLE (COUGSTREAM)
Q. INVITATION TO SCORE SCENARIO: CIRCLE (MEDIASITE)
R. INVITATION TO SCORE SCENARIO: SQUARE (MEDIASITE)
S. INVITATION TO SCORE SCENARIO: TRIANGLE (MEDIASITE)
T. PARTICIPANT MEETING WEBINAR/PHONE SCRIPT
U. PARTICIPANT LOG
V. INSTITUTIONAL REVIEW BOARD CERTIFICATE OF EXEMPTION
W. INSTITUTIONAL REVIEW BOARD APPROVAL OF AMENDMENT
X. SAMPLE SIZE CALCULATION
Y. CONFIDENCE INTERVAL CALCULATION

LIST OF TABLES

1. SCHEMATIC OF THE STUDY DESIGN
2. RESPONSE RATES
3. DESCRIPTIVE STATISTICS USING ALL RESPONSES
4. DESCRIPTIVE STATISTICS USING ONLY DATA FROM 29 PARTICIPANTS WHO COMPLETED THE STUDY
5. INTER-RATER RELIABILITY ICC(2,1)
6. BIVARIATE CORRELATIONS BETWEEN SCORES ASSIGNED TO EACH OF THE SCENARIOS USING EACH INSTRUMENT
7. INTRA-RATER RELIABILITY USING ICC(3,1), PEARSON (r) AND SPEARMAN (ρ)
8. INTERNAL CONSISTENCY (CRONBACH'S α) OF ITEMS ON EACH INSTRUMENT

LIST OF FIGURES

1. MILLER'S (1990) FRAMEWORK FOR CLINICAL ASSESSMENT
2. CLINICAL JUDGMENT MODEL (TANNER, 2006)
3. GRAPHIC REPRESENTATION OF VALIDITY AND RELIABILITY
4. SIMPLE ERROR BAR GRAPH FOR LASATER CLINICAL JUDGMENT RUBRIC
5. SIMPLE ERROR BAR GRAPH FOR SEATTLE UNIVERSITY EVALUATION TOOL©
6. SIMPLE ERROR BAR GRAPH FOR CREIGHTON SIMULATION EVALUATION INSTRUMENT™ (C-SEI)
7. MULTIPLE LINE GRAPH FOR LASATER CLINICAL JUDGMENT RUBRIC
8. MULTIPLE LINE GRAPH FOR SEATTLE UNIVERSITY EVALUATION TOOL©
9. MULTIPLE LINE GRAPH FOR CREIGHTON SIMULATION EVALUATION INSTRUMENT™ (C-SEI)
10. SIMPLE MATRIX SCATTER PLOT OF CORRELATIONS BETWEEN SCORES ON EACH OF THE INSTRUMENTS
11. SIMPLE SCATTER PLOT FOR LASATER CLINICAL JUDGMENT RUBRIC
12. SIMPLE SCATTER PLOT FOR SEATTLE UNIVERSITY EVALUATION TOOL©
13. SIMPLE SCATTER PLOT FOR CREIGHTON SIMULATION EVALUATION INSTRUMENT™ (C-SEI)
14. BAR GRAPH DISPLAYING MEAN SCORES USING ALL INSTRUMENTS FOR EACH OF THE SCENARIOS

Dedication

This dissertation is dedicated in honor of
Steve and Laurie Adamson, Noah and Jessie Kaarbo, and Joseph Adamson,
and in memory of Richard and Gloria DeMay and Lorene Adamson.


CHAPTER 1

INTRODUCTION

Introduction

Nursing has a tremendous impact on the health of our nation and world. Current challenges, including a rapidly changing healthcare environment, increasing patient acuities, nursing and nursing faculty shortages, expanding and diversified nursing school enrollments, limited clinical sites, protests from industry that new nursing graduates are ill-prepared for the nursing workforce, and now the economic downturn, are examples of why nurse educators must optimize the education of new nurses. In the face of these challenges, the nursing process itself provides an excellent model for how nurse educators can and have addressed the need for innovation in preparing nurses for the workforce: a) assess the situation, b) diagnose the problem(s), c) plan the intervention(s), d) implement the intervention(s), and e) evaluate the results. One 'intervention' that is currently being employed to address the 'situation' and challenges that nursing education is facing is human patient simulation (HPS). Human patient simulation provides opportunities for student nurses to practice technical and non-technical skills, including communication and teamwork, in high-acuity, low-occurrence clinical care situations. Further, HPS may be used as a supplement or alternative to traditional clinical education.

Statement of the problem

Nurse educators are increasingly implementing the use of HPS. However, researchers have failed to adequately evaluate learning outcomes from HPS and have, therefore, fallen short of completing the nursing process: assess, diagnose, plan, implement and evaluate. One barrier standing in the way of accurately evaluating learning outcomes from HPS is a lack of evaluation instruments that allow nurse educators to make valid and reliable assessments of student performance in HPS. In the absence of valid and reliable data about student performance in HPS, robust evaluation research and comparisons with other teaching modalities cannot take place. Developing and testing such instruments requires a rare skill set: fluency in psychometric analyses and expertise in HPS. To enhance research related to the effectiveness of HPS as a teaching strategy, investigators must establish sound ways to measure how HPS contributes to the overarching goal of nursing education: to prepare nurses for practice. This study sought to improve future nursing scholarship and to contribute to the evidence base related to the effectiveness of HPS as a teaching strategy in nursing education by assessing the psychometric properties of three evaluation instruments designed to measure student performance in HPS.

Statement of the purpose

The purposes of this study were to a) assess the psychometric properties of three recently developed evaluation instruments designed to measure student performance in HPS and b) test a method for rapid instrument assessment. Further, this study sought to provide information that may facilitate instrument assessment in the future.

Specific Aim and novel methods

The specific aim of this study was to assess the reliability and internal consistency of data produced using three instruments from the literature that were designed to measure student performance in HPS. In order to achieve this aim, this proof of concept study involved the development and testing of simulated patient care scenarios and the use of e-networking applications. Functional aspects of this e-network included an Angel™ website, webinar and telecommunication technologies (Skype™, Elluminate™, telephone and cellular phone), e-mail, and video-archived (Mediasite™ and Windows™ Media Center) simulated patient care scenarios. In addition to facilitating the aim of this study, these resources may be (and currently are being) accessed by nurse educators and researchers for future psychometric testing of instruments for evaluating student performance in HPS.

Background and Significance

Nurse educators recognize that traditional clinical placements often do not provide students with adequate opportunities to apply theoretical knowledge and develop nursing skills (Del Bueno, 2005; Feingold, Calaluce & Kallen, 2004). To address this issue, technology, including high-fidelity HPS, is increasingly used in nursing education. HPS provides students with realistic, supplemental patient-care experiences. This novel and growing application of technology could change the face of clinical nursing education. However, novelty for the sake of novelty is not an acceptable rationale for changing teaching practices. Bland, Topping and Wood (2010) emphasized the concern that the proliferation of HPS in nursing education may be riding a wave powered by an affinity for technology rather than establishing a firm foundation of philosophically grounded pedagogy. In order to improve nursing education and to improve patient care for the future, studies about the use of HPS and other innovative teaching strategies in nursing education must focus on learning outcomes (Decker, Sportsman, Puetz, & Billings, 2008).

Human patient simulation has been described as a 'disruptive innovation' (Armstrong, 2009). It disrupts the way educators have always provided clinical education. In the past, few people were interested in the efficacy of different methods of clinical education because there really was only one way for students to apply theory and practice clinical skills: with real patients. However, HPS provides an alternative and thus begs the question about the efficacy of this alternative. Without evidence about the efficacy of HPS for improving learning outcomes, HPS may prove to be just another 'fad', and an expensive 'fad' at that. Human patient simulation requires costly facilities and equipment and, in the face of severe budgetary constraints, nurse scientists are compelled to investigate the efficacy of HPS in order to support or refute the use of this expensive teaching tool.

Simulation has been used in nursing education for over fifty years; the first computerized manikin, "Sim-One", was developed in 1969 (Abrahamson & Hoffman, 1974). Over the past ten years, with advances in technology and increased accessibility, the popularity of high-fidelity HPS has grown tremendously (Jeffries, 2008; Harder, 2009). In light of this growth, there is now an increased need to assess the efficacy of HPS as a teaching strategy: to measure learning outcomes from HPS. The body of literature documenting the benefits of HPS in nursing education is expanding (Melnyk, 2008; Starkweather & Kardong-Edgren, 2008). However, the recent proliferation of literature related to the effectiveness of HPS in nursing education has largely reflected the use of author-designed, psychometrically un- or under-tested instruments for evaluating student performance in HPS (Kardong-Edgren, Adamson, & Fitzgerald, 2010). There is a gap in the literature of rigorous descriptive and experimental studies that evaluate learning outcomes from HPS in meaningful ways, largely due to the lack of instruments available for making such measurements.

One of the original instruments used to measure learning outcomes from HPS, the Student Satisfaction and Self-Confidence in Learning Survey (National League for Nursing, 2005), was developed as part of the first large-scale cooperative HPS project between Laerdal™ and the National League for Nursing (NLN). The instrument has since been used extensively in education practice and in the literature (Fountain & Alfred, 2009; Smith & Roehrs, 2009). Further examples of how researchers have evaluated learning outcomes from HPS include assessing the acquisition of individual clinical skills, such as medication administration (Bearnson & Wiker, 2005), and improvement in safe patient handling (Beyea & Kobokovich, 2004). Learning outcomes have also been described by unstructured student evaluations (Arundell & Cioffi, 2005), author-designed evaluation instruments (Arundell & Cioffi, 2005; Bearnson & Wiker, 2005; Oermann, 2009), measures of student perceptions of learning (Schoening, Sittner & Todd, 2006) and measures of student satisfaction ratings of simulation experiences (Block, Lottenberg, Flint, Jakobsen, & Liebnitzky, 2002; Schoening et al., 2006). A weakness of the instruments used in these studies, and consequently of the data they produce, has been their inability to measure the complexity of learning outcomes that are relevant to nursing practice including, but not limited to, students' knowledge, skills and values. In a recent review of the literature, Lapkin, Levett-Jones, Bellchambers and Fernandez (2010, p. e221) cited "a lack of tested simulation evaluation instruments for accurately measuring clinical reasoning skills."

The use of HPS provides unprecedented opportunities for students to demonstrate many of the complex facets of learning. Nurse educators are remiss to overlook these chances to measure learning by evaluating students' performance in HPS. It is an assumption of the present research that, in order to extrapolate data that reflect how HPS contributes to learning, psychometrically sound instruments for evaluating student performance in HPS are needed.

Conclusion

Armstrong (2009) accurately described clinical simulation as a 'disruptive innovation.' The emergence of HPS as an alternative to traditional clinical education has forced nurse educators to consider the value of not only the innovation, but also the traditional practice (clinical education) that the innovation builds upon. Nurse educators and researchers need to establish an evidence base related to the effectiveness of HPS as a teaching strategy in order to make informed decisions for improving curricula, pedagogy and evaluation (Oermann, 2009). Psychometrically sound evaluation instruments for measuring student performance in HPS are an essential prerequisite for such research. Therefore, this study contributed to the future of nursing research, education and practice by facilitating rapid instrument development, which will help nurse educators establish best practices for teaching current and future generations of nurses.


CHAPTER 2

LITERATURE REVIEW

Introduction

This is a critical period in the history of nursing education science, and the opportunity to examine and optimize how undergraduate nurses are trained is ripe (Shultz, 2009). In order to seize this opportunity, educators must be equipped to make evidence-based decisions about curricula, teaching and evaluation strategies (Oermann, 2009). Evaluation instruments that produce valid and reliable data about student performance and learning are important components of this equipping cache (Diekelmann & Ironside, 2002). In the area of HPS, educators must make data-driven decisions about the effectiveness of HPS as a teaching strategy. Therefore, there is an acute need for psychometrically sound instruments for measuring student performance in HPS activities. It is an assumption of the current study that measuring student performance in simulation activities is a prerequisite for evaluating learning outcomes from HPS. This study is based on a conceptual framework including evaluation and evidence-based practice, learning theory, performance measurement, and the assumption that validity and reliability are essential qualities of data produced using effective evaluation instruments (Oermann & Gaberson, 2006).

Conceptual framework

The following review describes the literature relevant for explaining the conceptual framework of the present study. Further, this review looks at evaluation and evidence-based practice, learning theory, and performance measurement, including validity and reliability of measures, as they pertain to advancing nursing education science.

Evidence-based practice and evaluation. Evidence-based practice (EBP) is internationally recognized as the pinnacle of excellence in nursing practice (Evidence-based practice: Creating a culture of inquiry, 2009). Florence Nightingale, indeed, was a proponent of questioning the status quo, and she engaged in systematic inquiry to develop innovative approaches to practice (Dossey, 2000). Without Nightingale's and others' contributions, nursing science would have stagnated in an era of poor hygiene and dismal care. Today, in an age of rapidly evolving technology, the development and implementation of EBP in nursing education is equally important. The significance of evaluation in EBP is two-fold: first, evidence-based evaluation methods, including psychometrically sound evaluation instruments, are needed to conduct rigorous research about current and future educational practices; and second, the data produced from these studies will allow nursing education scholars to make data-driven decisions to improve teaching and learning in the future. In short, evidence-based evaluation strategies are a critical component of achieving EBP in nursing education (Diekelmann & Ironside, 2002; Oermann, 2009).

Unfortunately, evaluation of learning outcomes has not kept pace with the rapid evolution of nursing education from hospital-based apprenticeships to technology-intensive, complex and dynamic educational programs. In the original model of nursing education, learners (prospective nurses) were immersed in the patient-care setting for the entirety of their training (residential nursing programs). In the dominant model employed in nursing education for the past 30 years, learners went to brick-and-mortar colleges or universities and made brief forays into the patient-care environment (commonly known as 'clinical rotations'). During both of these periods there was not necessarily a need to measure and compare learning outcomes from traditional teaching strategies because there really was not anything to compare them to (Gaba, 2004). However, today and in the future, learners do and will have a multitude of options for how they access nursing education. They may attend classes and interact with their colleagues and instructors in a variety of settings. These diverse settings include physical, time- and place-bound classes in addition to synchronous and asynchronous virtual learning environments. Further, simulated patients may be brought to the classroom, and students may practice nursing in a completely simulated healthcare environment. Therefore, evaluation instruments need to be developed and tested so that researchers may generate valid and reliable data about the efficacy of innovative teaching strategies. These data may be used for comparing the effectiveness of one teaching strategy (HPS) with the effectiveness of other teaching strategies.

The data establishing HPS as an evidence-based pedagogy in nursing education are still lacking (Jeffries, 2005). While the body of literature documenting the benefits of HPS in nursing education is growing (Kardong-Edgren, Starkweather & Ward, 2008), there has not been a well-accepted conceptualization of how these benefits are specific to HPS or how HPS contributes to improved learning outcomes that are relevant to nursing practice. In order to improve the use of HPS in nursing education and to improve patient care for the future, studies about the effectiveness of HPS in nursing education must reflect how it contributes to students' learning (Decker et al., 2008). The following sections address Experiential Learning (Kolb, 1984) and its application to HPS and performance measurement in nursing education.

Experiential learning. I hear and I forget. I see and I remember. I do and I understand. (Confucius, c. 479 B.C.)


Various learning theories have been applied to the use of HPS as a teaching strategy in nursing education. Among the most popular is Kolb's (1984) Theory of Experiential Learning (Waldner & Olson, 2007; Howard, 2007). Dewey (1938) rocked the world of educational psychology with his groundbreaking work, Experience and Education, and the progressive notion that "there is an intimate and necessary relation between the processes of actual experience and education" (p. 20). Kolb (1984) analyzed early educational psychology theories including Dewey's Model of Experiential Learning (1938), Lewin's (1951, in Kolb, 1984) Experiential Learning Model, and Piaget's (1970, in Kolb, 1984) Model of Learning and Cognitive Development. Kolb defined experiential learning as "a process whereby knowledge is created through the transformation of experience" (Kolb, 1984, p. 38). This definition of learning emphasizes four essential ideas: a) learning is viewed as a process of adaptation rather than an endpoint of content or outcomes; b) knowledge is a fluid, rather than an inert, established entity; c) learning is both subjective and objective; and d) "to understand learning, we must understand the nature of knowledge, and vice versa" (Kolb, 1984, p. 38).

Human patient simulation is an excellent application of Kolb's Experiential Learning Theory. By definition, simulation attempts to recreate patient-care experiences where students may actively engage in skill performance, problem solving, decision making and reflection (Howard, 2007). Each of these elements is essential to the learning process. Unfortunately, many of the components that make experiential teaching and learning attractive also make it difficult to evaluate. This may be attributed to the fact that experiential learning is realistic, and therefore complex. In contrast to learning activities such as absorbing information from a lecture, where learning outcomes may be easily assessed with a multiple choice exam, or practicing an IV insertion on a static manikin arm, where learning outcomes may be evaluated using a simple, procedural checklist, experiential learning activities, including HPS, encompass learning that is much more difficult to evaluate.

Miller (1990) described this complexity using a framework for clinical assessment where knowledge, competence, performance and action are on a continuum from the foundation to the apex of a pyramid (Figure 1). While traditional evaluation methods have focused exclusively on measuring what a student knows, the broad use of experiential teaching and learning strategies such as HPS implores investigators to measure learning at the higher levels. In order to better understand learning and assess the effectiveness of experiential teaching and learning activities such as HPS, various attempts have been made to define and describe the complex aspects of learning.

Figure 1: Miller's (1990) framework for clinical assessment.

Challenges to defining and evaluating experiential learning: Performance evaluation

A well-accepted approach for describing and understanding the multi-dimensional facets of learning is to break learning down into three domains: cognitive learning, affective learning and psychomotor learning. This taxonomy of learning is helpful for understanding the complexity of experiential learning, though it cannot and does not fully elucidate the complexities of performance that may be observed and evaluated in HPS.

The challenge to measure clinical (or simulated clinical) performance is not new in undergraduate nursing education. Nursing is a practice-based discipline, and there are certain challenges inherent in any evaluation of practice performance. One way to enhance current research related to the effectiveness of HPS in nursing education is to measure how simulation contributes to the goal of nursing education, which is to prepare nurses for practice. The knowledge, values and abilities that are essential to nursing practice encompass the affective, cognitive and psychomotor learning domains (Davies, 1976, 1981; Jeffries & Norton, 2005; Oermann & Gaberson, 2006). There is not, however, universal agreement about where learning related to non-technical skills, interpersonal communication, situation awareness, clinical judgment, leadership, stress and fatigue management and higher-level thinking falls into this simplistic taxonomy. Evaluating students' performance in these areas has proven especially problematic (Mitchell & Flin, 2008).

Wood (1982) described four challenges related to clinical evaluation including: a) bias and subjectivity influence human observation (the basis of most clinical evaluation); b) the clinical environment is organic and students' performance opportunities are influenced by the
