Chapter 6
The digital pen as a novel device to facilitate the feedback process
Elisabeth A. van Hell, Jan B. M. Kuks, Martha J. Dekker, Jan C. C. Borleffs, Janke Cohen-Schotanus
Submitted for publication
ABSTRACT

Background: Students’ clinical skills performance is usually assessed using checklists. Checklist data should be immediately available to students to offer them instant, detailed feedback and be stored in a database for quality assurance purposes.

Aim: To describe our experiences with the digital pen – a device that transmits an examiner’s handwritten notes to a database and an electronic file that is immediately available to students. Additionally, we analysed examiner satisfaction and explored the utility of generated checklist data for quality assurance purposes.

Methods: The digital pen was implemented in three stages, processing 1284 checklists. On the basis of staff interviews and generated data, we described our experiences with the digital pen and the improvements implemented. Descriptive statistical analysis of examiner satisfaction (nine items) and the checklist data was performed.

Results: The first implementation stage revealed that 88% (476/544) of the checklists were correctly processed and, after improvements, the second and third stages showed 100% correctly processed checklists (128 and 612, respectively). The examiners were satisfied with the digital pen, and the descriptives for item, examiner and case level were useful for quality assurance.

Conclusion: The digital pen is a practicable device for sending completed checklists to student mailboxes and for providing faculty with data for quality assurance purposes.
INTRODUCTION

Clinical performance assessment has been widely applied in medical education, with most medical schools using some sort of checklist to assess student competence. Due to administrative and logistic difficulties, however, there is a risk of students not receiving copies of their completed checklists, and a separate risk that the possible applications of checklist data for quality assurance purposes remain unused. This article introduces the digital pen as a device for overcoming such administrative and logistic difficulties and for providing students and faculty with suitable feedback.

The purpose of assessment in a clinical setting should be twofold – first, to judge students’ clinical competences on the basis of performance and second, to provide students with feedback on their performance from which they can learn.1,2 Assessment should be a learning exercise and ‘the providing of information is a key to achieve that’.3 Clinical performance is usually assessed using checklists on which the examiners provide a performance-related evaluation and written feedback to the students. Since detailed ratings and written feedback comments both increase student achievement, all performance results – the completed checklists – should be fed back to the students. However, because distributing the checklists is time-consuming, there is a risk that students will not receive their checklist data. Consequently, the possible educational impact of assessment on student learning may be reduced.

Checklist data – detailed ratings and written feedback – can also be used to provide faculty with feedback for quality assurance purposes. Clinical performance assessments generally include a limited number of cases. Consequently, adequate sampling across cases and medical content is of great importance. To ensure valid assessments, the checklist content and the ratings for specific items should be evaluated on a regular basis and adjusted if necessary.3-5 In addition, if checklist data is analysed on a regular basis, examiners can be provided with feedback on their rating and feedback skills and be trained accordingly.
Although all the information required for quality assurance purposes is available on paper, digitizing the checklist data is very labour intensive. Consequently, faculty may not be using the available data fully to improve the content of cases, adjust checklists and provide examiners with feedback on their rating skills.

In an attempt to overcome these administrative and logistic difficulties, the use of personal digital assistants (PDAs) was recently introduced.6,7 Compared with paper and pencil checklists, the use of PDAs has resulted in an enormous reduction in the time required to produce assessment results. In addition, feedback to students is generated automatically. How digital checklist data can also provide faculty with useful information for quality assurance purposes remains to be discussed. Some important drawbacks can also be identified in the use of PDAs. Their small screen size requires a tree-structured checklist design. Consequently, checklists need considerable revision, a global view of the checklist is limited, and all examiners have to be trained thoroughly in the proper use of PDAs and the revised checklist design.

New tools are desirable to overcome administrative and logistic difficulties and make full use of checklist data. We expected the digital pen to be a useful device for providing students with detailed feedback on their performance and faculty with data useful for quality assurance purposes. We aimed to describe our experiences with the digital pen, a device that transmits an examiner’s handwriting to a database and an electronic file that is immediately available to students. Additionally, we analysed examiner satisfaction and explored the utility of the generated checklist data for quality assurance purposes.
METHODS

Context

The six-year medical curriculum at the University of Groningen in the Netherlands comprises three pre-clinical Bachelor’s years followed by three Master’s years. In the first Master’s year, student performance is regularly assessed using an objective structured clinical examination (OSCE). During the first Master’s year of the curriculum, approximately 450 students are tested on 15 OSCE cases. This means that 6750 checklists must be administered annually. At each OSCE station, one examiner rates the student’s performance on a checklist developed specifically for the case in question. The examiner indicates for each checklist item whether it was performed satisfactorily, unsatisfactorily or not performed and then provides a final score (pass/fail). Thus far, checklist data have not been structurally analysed for quality assurance purposes and the students have received their final scores rather than a copy of their completed checklists. Even though the students’ personal checklists are available on request, our experience shows that very few students take the trouble to examine their checklists.

Digital pen

In 2000, Anoto developed the digital pen (Anoto Group AB, headquartered in Lund, Sweden). This technology was developed further, for application in performance assessments, by the IT Support Team of the University Medical Center Groningen and the University of Groningen, in cooperation with DigiXS (headquartered in Groningen, the Netherlands). At the beginning of our study the technology we used consisted of three parts – a digital pen, a checklist printed on generic pattern paper and a mobile phone to transfer the data. Generic pattern paper is paper on which an almost invisible pattern of very small dots is printed. The digital pen looks and feels like a normal ballpoint and contains the usual ink cartridge. However, the pen also contains a small camera, a force sensor, an image microprocessor, memory, a battery and a Bluetooth transceiver.
While writing, the camera automatically takes approximately 75 digital snapshots per second of the pen’s exact position on the pattern paper. The data is then stored in the pen’s memory as a series of coordinates. Once the examiner has completed a checklist, the ‘send box’ is ticked on the checklist and the pen uses Bluetooth to transfer the data through a mobile phone to the server. Using the newly developed software, it is possible to generate a database with detailed ratings and to export images of the students’ individual checklists (PDF format) to their mailboxes. Of course, the paper checklists also remain available.

Procedure

The digital pen was implemented in three stages. First, the original pen technology was tested during a full assessment week (five days). By interviewing project staff and analysing the data generated, we detected several shortcomings and, consequently, the digital pen technology was adapted. The improved version of the digital pen technology was first implemented for a short period – a single assessment day (Stage 2) – and then for a full assessment week (Stage 3). During this final stage, examiners were asked to respond to a nine-item questionnaire (5-point Likert scale, 1 = strongly disagree, 5 = strongly agree) concerning user satisfaction with the digital pen and the examiners’ need for feedback on their rating skills. We calculated means and standard deviations for the questionnaire items. Furthermore, we analysed the checklist data generated and explored how these data could be used to improve the content of OSCE cases, adjust checklists and provide examiners with feedback on their rating skills.

Before the start of each OSCE, examiners were instructed in the use of the digital pens. Since the checklist formats had not been changed but were only printed on pattern paper, a very short introduction and instruction (at most 15 minutes) was expected to be adequate. The examiners were instructed both in how to use the digital pens and pattern paper and in how to send the completed checklist to the server. Afterwards, the OSCEs took place and the examiners used the digital pens independently.
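To make the data flow from pen strokes to checklist ratings more concrete, the sketch below shows, in Python, one way such a mapping could work in principle. It is purely illustrative: the stroke format, the box coordinates and the names Box, LAYOUT and strokes_to_ratings are our own assumptions and do not describe the proprietary Anoto/DigiXS software.

```python
# Illustrative sketch: map digitised pen strokes to checklist ratings.
# Assumption (not the actual Anoto/DigiXS implementation): strokes arrive as
# lists of (x, y) coordinates on the pattern paper, and the checklist layout
# is known as bounding boxes for each item's answer fields.

from dataclasses import dataclass

@dataclass
class Box:
    x0: float
    y0: float
    x1: float
    y1: float

    def contains(self, x: float, y: float) -> bool:
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

# Hypothetical layout: per checklist item, one box per possible rating.
LAYOUT = {
    ("item_01", "satisfactorily"):   Box(100, 40, 110, 48),
    ("item_01", "unsatisfactorily"): Box(115, 40, 125, 48),
    ("item_01", "not performed"):    Box(130, 40, 140, 48),
    # ... the remaining items would follow the same pattern
}

def strokes_to_ratings(strokes):
    """Assign each stroke (a list of (x, y) points) to the answer box that
    contains most of its points; return a {item: rating} dictionary."""
    ratings = {}
    for stroke in strokes:
        best, best_hits = None, 0
        for (item, rating), box in LAYOUT.items():
            hits = sum(box.contains(x, y) for x, y in stroke)
            if hits > best_hits:
                best, best_hits = (item, rating), hits
        if best is not None:
            ratings[best[0]] = best[1]
    return ratings

# Example: a single tick drawn inside item 1's 'satisfactorily' box.
print(strokes_to_ratings([[(103, 42), (106, 45), (108, 46)]]))
# -> {'item_01': 'satisfactorily'}
```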
RESULTS

During the first implementation stage (January 2007), 544 checklists were processed, of which 476 (88%) were converted correctly into spreadsheet format and PDF files were sent to the students’ own mailboxes. On the basis of interviews with project staff and an analysis of the generated spreadsheet data, two main shortcomings were detected in the digital pen technology. First, in order to link a particular student with his/her checklist data (ratings and written feedback), the examiner had to scan a barcode identifying specific examination characteristics (student name and number, examiner name, case, date, time and location). Due to scanning difficulties, however, the examiner’s ratings and written feedback were occasionally not linked to a specific examination. Second, difficulties were encountered in ticking the ‘send box’ and transmitting the checklist information to the server.

We implemented two improvements to resolve these problems. First, we replaced the generic pattern paper with unique pattern paper, whose dot pattern uniquely identifies each printed checklist to the digital pen. By printing our checklists on unique pattern paper, the examiner’s ratings and supplementary feedback were linked automatically and combined with the corresponding examination characteristics printed at the top of each checklist. Consequently, the barcode to identify specific examination characteristics became redundant. Second, in order to transmit the checklist data from the digital pen to the server we used a USB port instead of the mobile phone. During regular breaks in the examination schedule, the examiner handed in the paper checklists and at the same time inserted the digital pen into a pen holder, which transmitted the checklist information to the server in a couple of seconds. The digital pen was then immediately available for further use.

During the second and third implementation stages (September 2008 and October 2008) the use of the improved device resulted in 100 percent of the checklist data being correctly processed (128 and 612 checklists, respectively). Eighteen examiners were involved in the third stage and 15 (83%) of them completed the questionnaire concerning user satisfaction with the digital pen.
The mean scores showed that they were satisfied with the ease of use (mean = 4.80, SD = 0.56) and the instruction in the use of the digital pen (mean = 4.47, SD = 0.47) (Table 1). The examiners had moderate confidence in the functioning of the digital pen (mean = 3.93, SD = 1.16) and felt that using the digital pen hardly influenced student assessment (mean = 1.40, SD = 0.74 and mean = 1.93, SD = 1.39). Furthermore, the examiners felt confident about their judgements of students’ performance (mean = 4.20, SD = 0.56) and they wanted to know how their ratings compared to those of other examiners (mean = 4.13, SD = 0.99). When asked whether feedback would influence their rating methods, the examiners expressed a neutral opinion (mean = 3.13, SD = 0.99).

Table 1 Examiner satisfaction with the digital pen and their need for feedback: mean scores and standard deviations (1 = strongly disagree, 5 = strongly agree)
Item | Mean (SD)
1 The digital pen is easy to use | 4.80 (0.56)
2 The instruction in the use of the digital pen was sufficient | 4.47 (0.74)
3 I have confidence in the functioning of the digital pen | 3.93 (1.16)
4 I make different judgements of student performance through using the pen | 1.40 (0.74)
5 The use of the digital pen hampers the assessment of students | 1.93 (1.39)
6 Through the use of the digital pen I feel under Faculty supervision | 1.73 (0.96)
7 I feel confident about my judgement of student performance | 4.20 (0.56)
8 I would like to know how my ratings compare to those of other examiners | 4.13 (0.99)
9 Personal feedback where my ratings are compared to those of other examiners would influence my rating methods | 3.13 (0.99)
SD = standard deviation
Figure 1 presents, as an example, a small part of the (anonymized) database – checklist data generated from the ‘examination pregnant woman’ OSCE case. This example query provides useful quality assurance information. Ratings were frequently not provided for a number of checklist items (Items 12 and 13) and, in addition, the same items were rated as ‘unsatisfactorily’ or ‘not performed’ several times. On two of the checklists (Students 4 and 15), which were completed by different examiners, a considerable number of ratings were missing (7 and 4, respectively).
Five students were rated ‘unsatisfactorily’ on one checklist item – three received a ‘pass’ for their final scores (Students 8, 9 and 21), whereas the other two (Students 4 and 11) received a ‘fail’ for the case in question.

Figure 1 Part of a generated spreadsheet (checklist data for the ‘examination pregnant woman’ OSCE case) – 23 checklists consisting of 13 items assessed by five examiners (A–E)
Legend: 1 = satisfactorily/pass; 2 = unsatisfactorily/fail; 3 = not performed; empty cell = no score indicated
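As an illustration of the kind of query shown in Figure 1, the following sketch computes comparable item-level and checklist-level descriptives with pandas. It assumes a hypothetical export format (one row per checklist, one column per item); the column names and the tiny example values are invented for the example and are not the actual data.

```python
# Illustrative quality assurance query on exported checklist data.
# Assumed format: one row per checklist, items coded 1 = satisfactorily,
# 2 = unsatisfactorily, 3 = not performed, NaN = no score indicated.

import numpy as np
import pandas as pd

df = pd.DataFrame({
    "student":  [1, 2, 3, 4],
    "examiner": ["A", "A", "B", "C"],
    "final":    ["pass", "pass", "fail", "pass"],
    "item_11":  [1, 1, 2, 1],
    "item_12":  [np.nan, 3, 2, np.nan],
    "item_13":  [np.nan, np.nan, 1, 2],
})
items = [c for c in df.columns if c.startswith("item_")]

# Items that were frequently left unrated or rated 2/3 (cf. Items 12 and 13).
print(df[items].isna().sum())
print(df[items].isin([2, 3]).sum())

# Checklists with many missing ratings (cf. the two checklists in Figure 1).
print(df.loc[df[items].isna().sum(axis=1) >= 2, ["student", "examiner"]])

# Final scores of students rated 'unsatisfactorily' on exactly one item.
print(df.loc[(df[items] == 2).sum(axis=1) == 1, ["student", "final"]])
```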
During the third implementation stage we analysed all the checklist data generated and found that only 42 (7%) of all checklists had a ‘fail’ as their final score. The cases ‘examination pregnant woman’ and ‘examination sensitivity, reflexes and motor system’ had the highest failure rates (9.5%). There were ten cases for which all students received a ‘pass’. In addition, differences between the examiners were found – some examiners gave all students a ‘pass’, whereas one examiner gave 13 percent of the students a ‘fail’.
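A similar sketch, again under an assumed export format and with invented example rows, shows how the case-level and examiner-level failure rates reported above could be computed.

```python
# Illustrative case- and examiner-level descriptives, assuming each processed
# checklist is available as a row with a 'case', an 'examiner' and a 'final'
# score. Column names and example rows are assumptions for illustration only.

import pandas as pd

df = pd.DataFrame({
    "case":     ["pregnant woman", "pregnant woman", "reflexes", "reflexes", "heart"],
    "examiner": ["A", "B", "B", "C", "C"],
    "final":    ["pass", "fail", "pass", "fail", "pass"],
})
df["failed"] = df["final"].eq("fail")

# Failure rate per case (cf. the 9.5% failure rates of the two hardest cases).
print(df.groupby("case")["failed"].mean().sort_values(ascending=False))

# Failure rate and number of checklists per examiner (cf. the examiner who
# failed 13% of students versus examiners who passed every student).
print(df.groupby("examiner")["failed"].agg(["mean", "count"]))
```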
DISCUSSION AND CONCLUSIONS

The results of our study indicate that the combination of the digital pen and checklists printed on unique pattern paper is a practicable device for transmitting written checklist data. By means of this device the written information was stored in a database and the completed checklists were sent to the students’ mailboxes.

Providing students with feedback on their clinical skills performance is necessary to enhance student learning (assessment as a learning tool). Recently, an extensive review and a model for feedback by Hattie and Timperley showed that informative feedback which focuses on specific goals is a key tool for enhancing student achievement.8 Since performance assessment checklists contain similarly informative and goal-oriented data, they are very suitable for providing students with proper feedback. We therefore consider the automatic provision of instant, detailed feedback to students using the digital pen technology a considerable improvement for clinical performance assessments. In addition to solving administrative and logistic difficulties, the digital pen technology also offers various possibilities for improving feedback to students. Future research should examine students’ experiences with the automatic email provision of their personal checklist data, how students use their checklist data and what kind of feedback is appropriate for stimulating students to improve their clinical performance.
Furthermore, we should investigate the examiners’ ratings (quantitative) and written feedback (qualitative) and analyse their relationship with student performance. This information can be used to determine what kind of feedback influences student learning and, subsequently, to train the examiners in order to improve their rating and feedback skills.

Previous studies have described the possibilities of utilizing assessment data to inform test developers and monitor the quality of training programmes and assessment.3,9 By means of the digital pen technology, we were able to transmit handwritten checklist data to a database. The descriptive analyses performed in this study have already provided useful information for improving the content of OSCE cases, adjusting checklists and providing examiners with feedback on their rating skills. The checklist items for which a high proportion of students were rated ‘unsatisfactorily’, and the items that were frequently not rated by the examiners, could indicate unreasonably difficult or unrealistic demands on students, unclear rating criteria or insufficient training opportunities for students. We also found differences in the final scores of students who were rated ‘unsatisfactorily’ on one item. These data can be used to determine which checklist items are most important for passing an examination. Additionally, the examination requirements and rating criteria can be discussed with the examiners and, if necessary, clarified or adjusted. Furthermore, these data can be employed to improve training opportunities for future students where necessary. The differences between cases and examiners can be used to further analyse case content and inter-rater reliability.

At present, the digital checklist data are being used to achieve further improvements in clinical performance assessment, and the University Medical Center Groningen is developing new software to determine whether completed checklists meet predetermined rules. Examples of such rules are that the checklist does not contain too many uncompleted items, that a final score is provided and that the checklist does not contain items rated as both ‘satisfactorily’ and ‘unsatisfactorily’.
The software divides these checklists into two groups – ‘ready to send’ and ‘fall short’. The ‘ready to send’ checklists meet all predetermined rules and, consequently, are ready to be sent to the students’ mailboxes. Checklists that do not meet the predetermined rules are placed in the ‘fall short’ category and need to be completed before they are sent to the students’ mailboxes. A sketch of this kind of rule check, under assumed thresholds, is given at the end of this section.

The advantages of this software are numerous. First, the information obtained has stimulated the checklist developers to reconsider the criteria that have to be met, which has resulted in improvements to checklist content and design and in uniformity of criteria. Second, the checklists the students receive in their mailboxes always meet the specified criteria and therefore guarantee a minimum level of feedback quality. Third, examiners who are accountable for their ratings are known to provide more accurate performance ratings.10 Because examiners are aware of the requirement to justify their ratings, they are more motivated to prepare themselves properly for making a rating.

Another improvement that will be developed during this project is providing examiners with continuous feedback. Although providing feedback to examiners is expected to improve their rating and feedback skills,11 continuous feedback on their performance is usually lacking. The checklist data can readily be used to provide examiners with personal feedback. For example, data can be presented to the examiners showing their own ratings compared to those of other examiners. Accordingly, the examiners will become aware of personal rating biases, such as structurally high or low ratings or low variation in ratings. Since all checklist data will be available digitally through the use of the digital pen, these data can be used for future quality assurance processes and to structurally improve skills performance assessments.
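The following sketch illustrates how such a rule check might look. The threshold, field names and rating labels are assumptions made for the example; the actual rules belong to the software under development at the University Medical Center Groningen.

```python
# A minimal sketch of the kind of rule check described above, under assumed
# thresholds and field names (not taken from the actual software).

from typing import Optional

MAX_UNCOMPLETED_ITEMS = 2   # assumed threshold, not specified in the text

def classify_checklist(ratings: dict[str, set[str]],
                       final_score: Optional[str]) -> str:
    """Return 'ready to send' if the checklist meets all predetermined rules,
    otherwise 'fall short'. `ratings` maps an item id to the set of marks
    found on the form (empty set = item left uncompleted)."""
    uncompleted = sum(1 for marks in ratings.values() if not marks)
    if uncompleted > MAX_UNCOMPLETED_ITEMS:
        return "fall short"          # rule 1: too many uncompleted items
    if final_score not in ("pass", "fail"):
        return "fall short"          # rule 2: no final score provided
    for marks in ratings.values():
        if {"satisfactorily", "unsatisfactorily"} <= marks:
            return "fall short"      # rule 3: item rated both ways
    return "ready to send"

# Example: one uncompleted item and a final score present -> 'ready to send'.
example = {"item_01": {"satisfactorily"},
           "item_02": set(),
           "item_03": {"unsatisfactorily"}}
print(classify_checklist(example, "pass"))
```

In practice, such thresholds would be set by the checklist developers and adjusted whenever the criteria are reconsidered.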
REFERENCES

1 Wass V, van der Vleuten C, Shatzer J, Jones R. Assessment of clinical competence. Lancet 2001;357:945–949.
2 Epstein RM. Assessment in medical education. N Engl J Med 2007;356:387–396.
3 Van der Vleuten CPM. The assessment of professional competence: developments, research and practical implications. Adv Health Sci Educ 1996;1:41–67.
4 Newble D, Dawson B. Guidelines for assessing clinical competence. Teach Learn Med 1994;6:213–220.
5 Petrusa ER. Clinical performance assessments. In: Norman GR, van der Vleuten CPM, Newble DI, eds. International handbook of research in medical education. Dordrecht: Kluwer Academic Publishers; 2002:673–709.
6 Schmidts MB. OSCE logistics: handheld computers replace checklists and provide automated feedback. Med Educ 2000;34:957–958.
7 Treadwell I. The usability of personal digital assistants (PDAs) for assessment of practical performance. Med Educ 2006;40:855–861.
8 Hattie J, Timperley H. The power of feedback. Rev Educ Res 2007;77:81–112.
9 Boulet JR, McKinley DW, Whelan GP, Hambleton RK. Quality assurance methods for performance-based assessments. Adv Health Sci Educ 2003;8:27–47.
10 Mero NP, Motowidlo SJ, Anna AL. Effects of accountability on rating behavior and rater accuracy. J Appl Soc Psychol 2003;33:2493–2514.
11 Steinert Y, Mann K, Centeno A, Dolmans D, Spencer J, Gelula M, Prideaux D. A systematic review of faculty development initiatives designed to improve teaching effectiveness in medical education: BEME Guide No. 8. Med Teach 2006;28:497–526.