Undergraduate Case Analysis Rubric

This rubric can be used to guide undergraduate case analysis for the course "Genomics, Ethics, and Society."


This material is based upon work supported by the National Science Foundation under Award No. 2055332. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

15.7 Evaluation: Presentation and Analysis of Case Study

Learning Outcomes

By the end of this section, you will be able to:

  • Revise writing to follow the genre conventions of case studies.
  • Evaluate the effectiveness and quality of a case study report.

Case studies follow a structure of background and context, methods, findings, and analysis. Body paragraphs should have main points and concrete details. In addition, case studies are written in formal language with precise wording and with a specific purpose and audience (generally other professionals in the field) in mind. Case studies also adhere to the conventions of the discipline's formatting guide (APA Documentation and Format in this study). Compare your case study with the following rubric as a final check.


Want to cite, share, or modify this book? This book uses the Creative Commons Attribution License and you must attribute OpenStax.

Access for free at https://openstax.org/books/writing-guide/pages/1-unit-introduction
  • Authors: Michelle Bachelor Robinson, Maria Jerskey, featuring Toby Fulwiler
  • Publisher/website: OpenStax
  • Book title: Writing Guide with Handbook
  • Publication date: Dec 21, 2021
  • Location: Houston, Texas
  • Book URL: https://openstax.org/books/writing-guide/pages/1-unit-introduction
  • Section URL: https://openstax.org/books/writing-guide/pages/15-7-evaluation-presentation-and-analysis-of-case-study

© Dec 19, 2023 OpenStax. Textbook content produced by OpenStax is licensed under a Creative Commons Attribution License . The OpenStax name, OpenStax logo, OpenStax book covers, OpenStax CNX name, and OpenStax CNX logo are not subject to the Creative Commons license and may not be reproduced without the prior and express written consent of Rice University.

Case Study - Rubric


CTLM Instructional Resources

This KB document is part of a larger collection of documents on equity and inclusion.

Rubric example: a case study


Case Studies

Case studies (also called "case histories") are descriptions of real situations that provide a context for engineers and others to explore decision-making in the face of socio-technical issues, such as environmental, political, and ethical issues. Case studies typically involve complex issues where there is no single correct answer; a student analyzing a case study may be asked to select the "best" answer given the situation. 1 A case study is not a demonstration of a valid or "best" decision or solution. On the contrary, unsuccessful or incomplete attempts at a solution are often included in the written account. 2

The process of analyzing a case study encourages several learning tasks:

  • Exploring the nature of a problem and the circumstances that affect a decision or solution
  • Learning about others' viewpoints and how they may be taken into account
  • Learning about one's own viewpoint
  • Defining one's own priorities
  • Making one's own decisions to solve a problem
  • Predicting outcomes and consequences 1

Student Learning Outcomes in Ethics

Most engineering case studies available pertain to engineering ethics. After a two-year study of education in ethics sponsored by the Hastings Center, an interdisciplinary group agreed on five main outcomes for student learning in ethics:

  • Sensitivity to ethical issues, sometimes called "developing a moral imagination," or the awareness of the needs of others and that there is an ethical point of view;
  • Recognition of ethical issues, or the ability to see the ethical implications of specific situations and choices;
  • Ability to analyze and critically evaluate ethical dilemmas, including an understanding of competing values, and the ability to scrutinize options for resolution;
  • Ethical responsibility, or the ability to make a decision and take action;
  • Tolerance for ambiguity, or the recognition that there may be no single ideal solution to ethically problematic situations. 2

These outcomes would make an excellent list of attributes for designing a rubric for a case analysis.

Ideas for Case Study Assignments

To assign a case analysis, an instructor needs:

  • skill in analyzing a case (and the ability to model that process for students)
  • skill in managing classroom discussion of a case
  • a case study
  • a specific assignment that will guide students' case analyses, and
  • a rubric for scoring students' case analyses.

Below are ideas for each of these five aspects of teaching with case studies. Another viewpoint is to consider how not to teach a case study.

1. Skill in analyzing a case

For many engineering instructors, analyzing cases is unfamiliar. One way to build this skill is to use the generic guidelines for case analysis assignments (#4 below) to carefully review some completed case analyses. A few completed case analyses are available:

Five example analyses of an engineering case study

Case study part 1 [Unger, S. The BART case: ethics and the employed engineer. IEEE CSIT Newsletter, September 1973, Issue 4, p. 6.]

Case study part 2 [Friedlander, G. The case of the three engineers vs. BART. IEEE Spectrum, October 1974, pp. 69-76.]

Case study part 3 [Friedlander, G. Bigger Bugs in BART? IEEE Spectrum, March 1973, pp. 32, 35, 37.]

Case study with an example analysis

2. Skill in managing classroom discussion of a case

Managing classroom discussion of a case study requires planning.

Suggestions for using engineering cases in the classroom

Guidelines for leading classroom discussion of case studies

3. Case studies

Case studies should be complex enough and realistic enough to be challenging, yet be manageable within the time frame. It is time-consuming to create case studies, but there are a large number of engineering case studies online.

Online Case Libraries

Case Studies in Technology, Ethics, Environment, and Public Policy

Teaching Engineering Ethics: A Case Study Approach

The Online Ethics Center for Engineering and Science

Ethics Cases

The Engineering Case Library

Cases and Teaching Tips

4. A specific assignment that will guide students' case analyses

There are several types of case study assignment:

Nine approaches to using case studies for teaching

Written Case Analysis

Case Discussions

Case analyses typically include answering questions such as:

What kinds of problems are inherent in the situation?

Describe the socio-technical situation sufficiently to enable listeners (or readers) to understand the situation faced by the central character in the case.

Identify and characterize the issue or conflict central to the situation. Identify the parties involved in the situation. Describe the origins, structure, and trajectory of the conflict.

Evaluate the strengths and weaknesses of the arguments made by each party.

How would these problems affect the outcomes of the situation?

Describe the possible actions that could have been taken by the central character in the case.

Describe, for each possible action, what the potential outcomes might be for each party involved.

Describe what action was actually taken and the outcomes for each party involved.

How would you solve these problems? Why?

Describe the action you would take if you were the central character in the case. Explain why.

What should the central character in the situation do? Why?

Describe the action you think that the central character in the case should take. Explain why.

What can be learned from this case?

Delineate the lessons about ethical (or other) issues in engineering that are illuminated by this case.

This list is adapted from two online case analysis assignments by McGinn. 3, 4

5. A rubric for scoring students' case analyses

Case studies help students explore decision-making in the face of complex socio-technical issues. Thus, for an engineering ethics case study, the outcomes that can be assessed by scoring case analyses are a) sensitivity to ethical issues, b) recognition of ethical issues, c) the ability to analyze and critically evaluate ethical dilemmas, d) the ability to make an ethical decision and take action, and e) tolerance for ambiguity. Scoring rubrics for ethics case analyses should address these outcomes, not basic knowledge of the ethical standards of the profession. Professional standards are best assessed by a traditional graded exam in which students must demonstrate, for example, which practices are ethically acceptable and which violate ethical standards in a hypothetical scenario. 5

Making Scoring/Grading Useful for Assessment

General principles for making scoring/grading useful for assessment ( rubrics )

Example rubrics

For a written analysis of a case study in engineering

For a written analysis of a case study in general #1

For a written analysis of a case study in general #2

For a written and oral analysis of a case study by a group

For an oral analysis of a case study by a group

For a written analysis of a case study on ethics

For a self-assessment of learning from a case study

Rubric Best Practices, Examples, and Templates

A rubric is a scoring tool that identifies the different criteria relevant to an assignment, assessment, or learning outcome and states the possible levels of achievement in a specific, clear, and objective way. Use rubrics to assess project-based student work including essays, group projects, creative endeavors, and oral presentations.

Rubrics can help instructors communicate expectations to students and assess student work fairly, consistently and efficiently. Rubrics can provide students with informative feedback on their strengths and weaknesses so that they can reflect on their performance and work on areas that need improvement.

How to Get Started

Best Practices, Moodle How-To Guides

  • Workshop Recording (Fall 2022)
  • Workshop Registration

Step 1: Analyze the assignment

The first step in the rubric creation process is to analyze the assignment or assessment for which you are creating a rubric. To do this, consider the following questions:

  • What is the purpose of the assignment and your feedback? What do you want students to demonstrate through the completion of this assignment (i.e. what are the learning objectives measured by it)? Is it a summative assessment, or will students use the feedback to create an improved product?
  • Does the assignment break down into different or smaller tasks? Are these tasks equally important to the overall assignment?
  • What would an “excellent” assignment look like? An “acceptable” assignment? One that still needs major work?
  • How detailed do you want the feedback you give students to be? Do you want/need to give them a grade?

Step 2: Decide what kind of rubric you will use

Types of rubrics: holistic, analytic/descriptive, single-point

Holistic Rubric. A holistic rubric includes all the criteria (such as clarity, organization, mechanics, etc.) to be considered together and included in a single evaluation. With a holistic rubric, the rater or grader assigns a single score based on an overall judgment of the student’s work, using descriptions of each performance level to assign the score.

Advantages of holistic rubrics:

  • Can place an emphasis on what learners can demonstrate rather than what they cannot
  • Save grader time by minimizing the number of evaluations to be made for each student
  • Can be used consistently across raters, provided they have all been trained

Disadvantages of holistic rubrics:

  • Provide less specific feedback than analytic/descriptive rubrics
  • Can be difficult to choose a score when a student’s work is at varying levels across the criteria
  • Any weighting of criteria cannot be indicated in the rubric

Analytic/Descriptive Rubric . An analytic or descriptive rubric often takes the form of a table with the criteria listed in the left column and with levels of performance listed across the top row. Each cell contains a description of what the specified criterion looks like at a given level of performance. Each of the criteria is scored individually.

Advantages of analytic rubrics:

  • Provide detailed feedback on areas of strength or weakness
  • Each criterion can be weighted to reflect its relative importance

Disadvantages of analytic rubrics:

  • More time-consuming to create and use than a holistic rubric
  • May not be used consistently across raters unless the cells are well defined
  • May result in giving less personalized feedback

Single-Point Rubric. A single-point rubric breaks down the components of an assignment into different criteria, but instead of describing different levels of performance, only the "proficient" level is described. Feedback space is provided for instructors to give individualized comments to help students improve and/or show where they excelled beyond the proficiency descriptors.

Advantages of single-point rubrics:

  • Easier to create than an analytic/descriptive rubric
  • Perhaps more likely that students will read the descriptors
  • Areas of concern and excellence are open-ended
  • May remove a focus on the grade/points
  • May increase student creativity in project-based assignments

Disadvantage of single-point rubrics: Requires more work for instructors writing feedback
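The distinction between these rubric types can be made concrete in code. The sketch below, with hypothetical criteria, weights, and descriptors (not taken from any rubric in this article), represents an analytic rubric as a weighted set of criteria and combines per-criterion ratings into an overall score. This per-criterion weighting is exactly what a holistic or single-point rubric cannot express.

```python
# A minimal sketch of an analytic rubric as a data structure.
# Criteria, weights, and descriptors are hypothetical -- adapt to your assignment.

LEVELS = ["Beginning", "Developing", "Proficient", "Exemplary"]  # scored 1-4

rubric = {
    "Organization": {
        "weight": 0.30,
        "descriptors": {
            "Exemplary": "Sections follow case-study conventions; transitions are seamless.",
            "Proficient": "Clear structure with minor lapses in flow.",
            "Developing": "Structure present but uneven.",
            "Beginning": "No discernible structure.",
        },
    },
    "Analysis": {"weight": 0.50, "descriptors": {}},   # descriptors elided
    "Mechanics": {"weight": 0.20, "descriptors": {}},  # descriptors elided
}

def weighted_score(ratings):
    """Combine per-criterion level ratings (1-4) into a 0-100 score."""
    max_level = len(LEVELS)
    total = sum(rubric[c]["weight"] * r for c, r in ratings.items())
    return round(100 * total / max_level, 1)

# A paper rated Exemplary on two criteria and Proficient on Analysis:
ratings = {"Organization": 4, "Analysis": 3, "Mechanics": 4}
print(weighted_score(ratings))  # 0.3*4 + 0.5*3 + 0.2*4 = 3.5 -> 87.5
```

For a holistic rubric, the structure collapses to a single list of level descriptors; for a single-point rubric, only the "Proficient" descriptor for each criterion would be kept, with free-text feedback replacing the level scale.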

Step 3 (Optional): Look for templates and examples.

You might Google, "Rubric for persuasive essay at the college level" and see if there are any publicly available examples to start from. Ask your colleagues if they have used a rubric for a similar assignment. Some examples are also available at the end of this article. These rubrics can be a great starting point for you, but consider steps 4, 5, and 6 below to ensure that the rubric matches your assignment description, learning objectives, and expectations.

Step 4: Define the assignment criteria

Make a list of the knowledge and skills you are measuring with the assignment/assessment. Refer to your stated learning objectives, the assignment instructions, past examples of student work, etc. for help.

Helpful strategies for defining grading criteria:

  • Collaborate with co-instructors, teaching assistants, and other colleagues
  • Brainstorm and discuss with students
  • Revise the criteria as needed
  • Consider whether some criteria are more important than others, and how you will weight them

For each candidate criterion, ask:

  • Can it be observed and measured?
  • Is it important and essential?
  • Is it distinct from other criteria?
  • Is it phrased in precise, unambiguous language?

Step 5: Design the rating scale

Most ratings scales include between 3 and 5 levels. Consider the following questions when designing your rating scale:

  • Given what students are able to demonstrate in this assignment/assessment, what are the possible levels of achievement?
  • How many levels would you like to include? (More levels mean more detailed descriptions.)
  • Will you use numbers and/or descriptive labels for each level of performance? (for example 5, 4, 3, 2, 1 and/or Exceeds expectations, Accomplished, Proficient, Developing, Beginning, etc.)
  • Don’t use too many columns, and recognize that some criteria can have more columns than others. The rubric needs to be comprehensible and organized. Pick the right number of columns so that the criteria flow logically and naturally across levels.

Step 6: Write descriptions for each level of the rating scale

Artificial intelligence tools like ChatGPT can be useful for creating a rubric. You will want to engineer the prompt you provide to the AI assistant to ensure you get what you want. For example, you might include the assignment description, the criteria you feel are important, and the number of performance levels you want in your prompt. Use the results as a starting point, and adjust the descriptions as needed.

Building a rubric from scratch

For a single-point rubric, describe what would be considered "proficient," i.e., B-level work. You might also include suggestions, outside of the actual rubric, about how students might surpass proficient-level work.

For analytic and holistic rubrics, create statements of expected performance at each level of the rubric.

  • Consider what descriptor is appropriate for each criterion, e.g., presence vs. absence, complete vs. incomplete, many vs. none, major vs. minor, consistent vs. inconsistent, always vs. never. If an indicator is described in one level, it will need to be described in each level.
  • You might start with the top/exemplary level. What does it look like when a student has achieved excellence for each/every criterion? Then, look at the “bottom” level. What does it look like when a student has not achieved the learning goals in any way? Then, complete the in-between levels.
  • For an analytic rubric , do this for each particular criterion of the rubric so that every cell in the table is filled. These descriptions help students understand your expectations and their performance in regard to those expectations.

Well-written descriptions:

  • Describe observable and measurable behavior
  • Use parallel language across the scale
  • Indicate the degree to which the standards are met

Step 7: Create your rubric

Create your rubric in a table or spreadsheet in Word, Google Docs, Sheets, etc., and then transfer it by typing it into Moodle. You can also use online tools to create the rubric, but you will still have to type the criteria, indicators, levels, etc., into Moodle. Rubric creators: Rubistar, iRubric

Step 8: Pilot-test your rubric

Prior to implementing your rubric in a live course, obtain feedback from:

  • Teaching assistants

Try out your new rubric on a sample of student work. After you pilot-test your rubric, analyze the results to consider its effectiveness, and revise accordingly.

General best practices for rubric design:

  • Limit the rubric to a single page for reading and grading ease.
  • Use parallel language. Use similar language and syntax/wording from column to column. Make sure that the rubric can be easily read from left to right or vice versa.
  • Use student-friendly language. Make sure the language is learning-level appropriate. If you use academic language or concepts, you will need to teach those concepts.
  • Share and discuss the rubric with your students. Students should understand that the rubric is there to help them learn, reflect, and self-assess. If students use a rubric, they will understand the expectations and their relevance to learning.
  • Consider the scalability and reusability of rubrics. Create rubric templates that you can alter as needed for multiple assignments.
  • Maximize the descriptiveness of your language. Avoid words like "good" and "excellent." For example, instead of saying "uses excellent sources," describe what makes a source excellent so that students will know. You might also reduce the reliance on quantity, such as a number of allowable misspelled words, and focus instead on how distracting any spelling errors are.

Example of an analytic rubric for a final paper

Example of a holistic rubric for a final paper

Single-point rubric and more examples:

  • Single Point Rubric Template ( variation )
  • Analytic Rubric Template (make a copy to edit)
  • A Rubric for Rubrics
  • Bank of Online Discussion Rubrics in different formats
  • Mathematical Presentations Descriptive Rubric
  • Math Proof Assessment Rubric
  • Kansas State Sample Rubrics
  • Design Single Point Rubric

Technology Tools: Rubrics in Moodle

  • Moodle Docs: Rubrics
  • Moodle Docs: Grading Guide (use for single-point rubrics)

Tools with rubrics (other than Moodle)

  • Google Assignments
  • Turnitin Assignments: Rubric or Grading Form

Other resources

  • DePaul University (n.d.). Rubrics .
  • Gonzalez, J. (2014). Know your terms: Holistic, Analytic, and Single-Point Rubrics . Cult of Pedagogy.
  • Goodrich, H. (1996). Understanding rubrics. Teaching for Authentic Student Performance, 54(4), 14-17.
  • Miller, A. (2012). Tame the beast: tips for designing and using rubrics.
  • Ragupathi, K., Lee, A. (2020). Beyond Fairness and Consistency in Grading: The Role of Rubrics in Higher Education. In: Sanger, C., Gleason, N. (eds) Diversity and Inclusion in Global Higher Education. Palgrave Macmillan, Singapore.

What Is a Case Study Rubric and What Criteria Does It Usually Include?

A rubric is a scoring tool used to evaluate student performance on assignments such as case studies, term papers, presentations, essays, and more. The usual format for a rubric is a grid or table in which the criteria are listed in columns or rows. Each criterion carries points and is clearly explained in the case study rubric.

Below, you can find a table with the most common grading criteria for a case study:

This case study grading rubric is a good example of what to expect, but the criteria can vary, so make sure you know what your university expects from your paper.

How to Write a Case Study?

To begin with, it is important to correctly define the strategic issue, or issues, if you want to produce a high-quality case study. When evaluating your ability to analyze a certain situation, an educator will pay attention to whether the paper demonstrates knowledge of the present situation. It is also critical to clearly identify the main problem without trying to summarize all the information introduced in the original case.

Another aspect to consider is that you must provide your audience with well-justified arguments. As the case study rubric shows, the educator will evaluate whether your paper is logically organized. You should also present clearly defined key points and arguments supported by relevant information from a variety of sources. In general, bear in mind that an educator will pay attention to:

  • the arguments' relevance;
  • their reasoning;
  • how logically they are presented.

According to the case study grading rubric, it is critical to use the most current and relevant data to justify your point of view. In most cases, this type of assignment requires using academic sources only, meaning it is unacceptable to cite unreliable sources such as web encyclopedias.

Understanding the structure of a case study is vital for organizing your research effectively. When selecting a topic, consider the top case study topics to engage your readers. Follow our guidelines on how to write a case study to ensure your paper is well structured and coherent. Before you start, familiarize yourself with what a case study is to grasp the concept fully.

Another critical aspect is to provide possible solutions and recommendations. Introduce your ideas about different solutions so that there is a clear connection between them and the problems you identified earlier.

When grading this type of assignment, your educator will follow the general guidelines of academic writing style, which are usually mentioned in the case study grading rubric. Consequently, your narration must be professional and coherent. It is also important to make sure that your paper is objective and uses proper grammar, spelling, and punctuation.

Let Us Follow Grading Criteria for Your Case Study

As you can see from the provided case study rubric, writing this type of assignment is not easy. There are many aspects students have to consider if they want good case study marks, which is why many of them feel extremely stressed, and that stress can itself lead to poor grades.

Writing Metier is always there to help you with your assignments. Our professionals know how universities grade case study papers and what it takes to meet all the requirements. We will provide you with an outstanding copy, and you will get great marks with no stress!


Laura Orta is an avid author on Writing Metier's blog. Before embarking on her writing career, she practiced media law at a local media outlet. Aside from writing, she works as a private tutor to help students with their academic needs. Laura and her husband share their home near the ocean in northern Portugal with two extraordinary boys and a lifetime collection of books.



Int J Med Educ

Developing, evaluating and validating a scoring rubric for written case reports

Peggy R. Cyr

1 Department of Family Medicine, Maine Medical Center, Portland, Maine, USA

Kahsi A. Smith

2 Center for Outcomes Research and Evaluation, Maine Medical Center, Portland, Maine, USA

India L. Broyles

3 Department of Medical Education, University of New England, College of Osteopathic Medicine, Biddeford, Maine, USA

Christina T. Holt

The purpose of this study was to evaluate Family Medicine Clerkship students' writing skills using an anchored scoring rubric. In this study, we report on the assessment of a current scoring rubric (SR) used to grade written case description papers (CDP) for medical students, describe the development of a revised SR with examination of scoring consistency among faculty raters, and report on feedback from students regarding SR revisions and written CDP.

Five faculty members scored a total of eighty-three written CDP using both the Original SR (OSR) and the Revised SR1 (RSR1) during the 2009-2010 academic year.

Overall, increased faculty inter-rater reliability was obtained using the RSR1. Additionally, a subset analysis revealed that the five faculty using the Revised SR2 (RSR2) had a high measure of inter-rater reliability on their scoring of this subset of papers, as measured by intra-class correlation (ICC = 0.93, p < 0.001).
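The intra-class correlation reported above is computed from a papers-by-raters score matrix. As a rough illustration, the stdlib-only Python sketch below implements ICC(3,1) (two-way model, consistency of a fixed set of raters) on made-up scores; it is not the authors' analysis code, and a real study should use a validated statistics package:

```python
# A stdlib-only sketch of ICC(3,1): two-way ANOVA decomposition, consistency
# of a fixed set of raters. Scores below are invented for illustration.

def icc_consistency(scores):
    """scores: list of subjects (papers), each a list of one rating per rater."""
    n = len(scores)        # number of subjects (papers)
    k = len(scores[0])     # number of raters (faculty)
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(row[j] for row in scores) / n for j in range(k)]

    # Two-way ANOVA sums of squares: subjects, raters, residual
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Five hypothetical papers, two raters in close agreement:
papers = [[30, 31], [25, 26], [34, 33], [20, 22], [28, 28]]
print(round(icc_consistency(papers), 2))
```

When raters agree closely, as in the hypothetical data above, the ICC approaches 1; values near 0.93, as reported here, indicate strong inter-rater reliability.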

Conclusions

Findings from this research have implications for medical education, by highlighting the importance of the assessment and development of reliable evaluation tools for medical student writing projects.

Introduction

Writing skills are essential to the practicing physician; therefore, assessment of writing projects by medical students can provide an opportunity to hone critical professional skills. The ability to write clearly and efficiently is critical to performing many essential skills required of physicians such as diagnostic reasoning, management of cases, and overall communication with colleagues and patients. Medical schools may be working from an assumption that writing skills are obtained prior to entering medical school. These skills are rarely taught in a formal capacity during medical school, but are well assessed by an evaluation of a written practical assignment. 1 Research has shown that when writing skills are taught within medical schools, students demonstrate improved knowledge and overall performance. 2 The purpose of this study was to evaluate Family Medicine (FM) Clerkship students’ writing skills using an anchored scoring rubric. Various arenas for assessment of FM Clerkship Students’ performance have been delineated, such as clinical skills, communication and writing skills. The Alliance for Clinical Education’s Guide for Clerkship Directors lists sixteen different methods used for clinical education evaluation (ACE Handbook). 3 , 4 One particular evaluation method that is used widely across disciplines is the written case description paper. This assignment requires students to select a topic relevant to family medicine and write a detailed case description of approximately five pages. 5 Evaluation of writing samples requires subjective assessment of a complex performance; therefore, formalized scoring formats are often used (“Scoring Rubric”, SR). A rubric outlines a set of criteria and standards linked to specific learning objectives and may assign a numeric value to coincide with each criteria category. 
The likelihood that independent evaluators will consistently assign a similar numeric score to the same piece of written work is increased by using anchored descriptors. Scoring rubrics provide the student with feedback outlining the extent to which criteria have been reached, indicating specific areas in which to improve their performance. 6 Others have developed and studied the use of four-point Likert-scale assessment for written case reports. 7 The use of a scoring rubric allows for a standardized evaluation of performance to enhance consistency when grading subjective assignments and provides essential written feedback to students.

As review of previous literature shows, there is not a common practice for consistent evaluation of medical student writing. Therefore, we aimed to develop a tool so that students would have consistent evaluation even when the faculty members who evaluate student performance are located in different hospitals in several states. Under these circumstances, it is imperative to use a well-devised tool to obtain consistency between the various faculty members who are scoring assignments. Furthermore, the process of assessing and revising assessment tools can be an opportunity to educate faculty about the goals of the assignment and the goals of assessment. 8, 9

The overall goal of this project was to develop a more efficient tool for faculty scoring and to optimize feedback provided to students on their written case description papers. First, we evaluated the current scoring rubric used to score students' written case description papers in a family medicine third-year clerkship. Second, we developed a revised scoring rubric and tested for scoring consistency among faculty using both the old and new versions of the rubric. Finally, we obtained qualitative feedback from students regarding rubric revisions and overall course evaluation.
Using a process outlined by Green and colleagues, 8 we focused both on the development of the tool itself and on the process of creating and validating a scoring rubric in our setting.

This study used a mixed methods design employing both quantitative and qualitative data collection methods.

Participants

This study took place among third year clerkship students in a major medical school in New England, which has two main clerkship sites.

Sampling method

This study used a convenience sample of third year medical students enrolled in our academic program. There were no exclusion criteria. One third of the students are in the satellite program. All students participate in a Family Medicine clerkship, and ten percent of their grade is based on a written assignment. This scoring rubric study focused on the case description papers written for the rotation by all of the students. A total of 83 papers were submitted.

Data collection

All faculty members scored papers using the written scoring rubrics. For the student interviews, one faculty member recorded answers to open-ended questions. SurveyMonkey was used to survey students who were unable to be interviewed in person. Data were sent to the principal investigator (PI) for analysis and storage. This study was sent to both institutions’ Institutional Review Boards, where it was exempted from review. Informed consent was waived per the IRBs’ approval.

Developing the Revised Scoring Rubric (RSR1)

The study began with an evaluation of the scoring rubric then being used to evaluate students’ written case description papers (Original Scoring Rubric, OSR). The OSR included seven criteria: organization/clarity, focused discussion of key points, knowledge of topic, relevance of topic to family medicine, psycho-social determinants of health, appropriate references to literature, and awareness of how cost influences care. The OSR was rated on a five-point Likert scale anchored with logic and sequencing words. On the OSR, a student could receive 35 possible points. There was also a section at the end of the rubric for faculty to provide narrative comments.

Based on this initial review of the current scoring form, the PI devised a Revised Scoring Rubric Version 1 (RSR1), which shifted from Likert scoring to an anchored rubric with keyword descriptors. The categories were renamed to be more descriptive but kept the basic spectrum of evaluative topics: writing conventions, depth of knowledge/focus, logical sequencing, topic relevance, biopsychosocial determinants of health, references, and cost issues. After the RSR1 was developed, faculty members from both sites used both rubrics (OSR and RSR1) to score their students’ written case description papers during the 2009-2010 academic year. Each student’s paper was evaluated by two faculty members from their respective site.
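As an illustrative sketch only (the criterion names, anchor wording, and point levels below are hypothetical examples, not the actual RSR1), an anchored rubric of this kind can be modeled as criteria whose point values map to keyword descriptors rather than to bare Likert numbers:

```python
# Hypothetical anchored rubric: each criterion maps point levels to
# keyword descriptors, in the spirit of (but not identical to) the RSR1.
RUBRIC = {
    "writing_conventions": {
        3: "clear prose; correct grammar, spelling, and citation format",
        2: "minor lapses that do not impede reading",
        1: "frequent errors that interfere with readability",
    },
    "depth_of_knowledge": {
        3: "focused, well-supported discussion of the key clinical points",
        2: "covers the main points but with limited depth or focus",
        1: "superficial or off-topic discussion",
    },
}

def score_paper(ratings):
    """Validate that each assigned level has an anchor, then sum levels."""
    for criterion, level in ratings.items():
        if level not in RUBRIC[criterion]:
            raise ValueError(f"no anchor for level {level} in {criterion}")
    return sum(ratings.values())
```

The point of the structure is that a rater never assigns a number without a matching descriptor, which is what anchoring contributes over a plain Likert scale.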

Revised Scoring Rubric Development

The Revised Scoring Rubric 2 (RSR2) was developed through feedback from a teleconference held to discuss the strengths and weaknesses of the RSR1. The revised draft was subject to further comments and revisions prior to re-implementation. The RSR2 was used to examine scoring consistency and inter-rater reliability among all five faculty members on a subset of seven papers. The seven papers were a purposeful sample selected by the PI to represent higher- and lower-scoring papers under the OSR. The purpose of selecting papers of both higher and lower quality was to establish benchmarks for scoring various types of papers and to assess the RSR2’s ability to differentiate at both ends of the scoring range. All papers were de-identified and randomly distributed to all faculty members for rescoring using the RSR2.

Final revisions and feedback

All five faculty members met in person to discuss the results from the revision (RSR2) and to finalize the scoring rubric. Detailed discussions of discrepancies among scoring within each subcategory revealed additional changes to be made. Final changes were incorporated and the Final Revised Version (FRV) was approved by all faculty members.

The process for evaluating the present scoring rubric, developing and piloting the new scoring rubric (RSR1), revising the scoring rubric (RSR2) and finalizing the format with feedback from student participants took place over one academic year.

Qualitative evaluation

Lastly, qualitative evaluation was obtained from medical students at one site. Students were offered the option of providing feedback via individual in-person interview, telephone interview, or an online survey. A research team member (not the PI) conducted five individual interviews; four students participated by telephone interview, and three students provided feedback via an online survey. All students were asked for input regarding their knowledge of how they were being evaluated, the appropriateness of the scoring rubric categories, whether or not they had reviewed the evaluation tools for the written case report, and whether the scoring rubric added to their understanding of the assignment. These questions were answered on a Likert scale of 1-5. The students were also asked open-ended questions to ascertain which parts of the paper were the hardest to write, suggestions for additional categories and scoring of the rubric, and any suggestions for making the assignment more interesting to them.

Data analysis

All quantitative data analysis was performed using the Statistical Package for the Social Sciences (SPSS) version 16.0. Consistency among faculty scoring was examined by comparing the scores given by each faculty member to the same paper using the two scoring rubrics (OSR and RSR1). The closer the scores are to each other, the higher the consistency between raters. This closeness of scores, inter-rater reliability (IRR), was assessed by calculating intraclass correlation coefficients (ICC), as described in Kenny et al. 10 The higher the ICC, the higher the inter-rater reliability among pairs; that is, the more likely it is that the score one trained rater gives would match that of any other similarly trained rater.

Consistency in the RSR2 was examined using the same methods, comparing overall scores and scores from each subcategory for each of the seven papers. Ranges of overall scores and individual category scores were computed to examine for discrepancies among faculty scores. Inter-rater reliability among all five faculty scores was assessed by calculating the ICC. Answers to the open-ended questions in the qualitative analysis were analyzed thematically and for focused areas of concern.
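The study ran its ICC analysis in SPSS; purely as a sketch of the underlying calculation, one common variant, ICC(2,1) (two-way random effects, absolute agreement, single rater, in the Shrout-Fleiss taxonomy), can be computed from a papers-by-raters score matrix as below. The function name and the example matrices are illustrative, not the study's data, and the study does not specify which ICC variant it used:

```python
def icc_2_1(ratings):
    """ICC(2,1): two-way random-effects, absolute-agreement, single-rater
    intraclass correlation. ratings: one row per paper, one column per rater.
    """
    n = len(ratings)       # number of papers (targets)
    k = len(ratings[0])    # number of raters
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]
    # Two-way ANOVA sums of squares
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)              # mean square, papers
    msc = ss_cols / (k - 1)              # mean square, raters
    mse = ss_err / ((n - 1) * (k - 1))   # mean square, error
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical data: two raters in perfect agreement yield an ICC of 1;
# disagreement between raters pulls the ICC below 1.
perfect = [[10, 10], [14, 14], [18, 18]]
noisy = [[10, 12], [14, 13], [18, 15]]
```

Values near 1 indicate that trained raters largely agree on each paper's score; values near 0 indicate that rater identity explains as much of the score as the paper itself.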

Eighty-three student papers were graded by one of three faculty pairs using each of the two scoring rubrics, the OSR and RSR1. Quantitative results are reported for each faculty pair in Table 1 . Each student score is reported as the average between the two faculty raters for the OSR and for the RSR1, and the faculty ICC was calculated and is shown in Table 1 . The table shows that pairs of faculty raters had a lower ICC when using the OSR than when using the RSR1.

*p-value < 0.05 determined to be statistically significant

Qualitative analysis on the RSR1

The qualitative analysis of the RSR1 from all five faculty members indicated areas of inconsistent interpretation of the scoring criteria. Qualitative descriptions of these areas of discrepancy were clarified through individual and group discussions using thematic analysis. Adjustments were made to the rubric after each discussion and sent to the group by email for further refinement and verification. This led to the development of the second version of the scoring rubric (RSR2). The faculty approved the new structure and format, which allowed for more objective scoring. Table 2 shows the general themes that arose from these discussions; some of the solutions to these thematic issues were incorporated into the anchors. Additions and subtractions to the RSR1 were made after the faculty met to review its strengths and limitations.

RSR1 Revision results (RSR2)

Seven papers were selected and scored by all five faculty members using only the RSR2. Scores were consistent for both higher-quality papers (mean score = 19.05, SD = 2.52) and lower-quality papers (mean score = 12.73, SD = 3.57) out of a possible 21 points. Results revealed significant ICCs among all five faculty raters’ overall scores for each paper and in each subcategory on all seven papers using the RSR2 ( Table 3 ). This shows that the RSR2 supports consistent scoring by trained faculty reviewers.

Final revisions of RSR2 (FRV)

After the RSR2 was piloted, a faculty meeting was held to discuss the results. Concerns were raised about specific criteria and about the form and content of the RSR2. Further refinements were incorporated, and a Final Revised Version (FRV) was created (Appendix A). The new rubric was approved for use on both campuses with new rotations starting March 2010.

Student feedback

Twelve of the satellite site students supplied feedback via survey questions conducted after the course. Thematic analysis revealed that a majority of students strongly agreed that they were aware of how they were being evaluated on their case description papers (7/12, 58.3%) and had looked at the evaluation tools while writing their papers (6/12, 50%). Students also strongly agreed that the scoring rubric categories seemed appropriate and added to their understanding of the assignment (9/12, 75%).

In open-ended questions, students indicated concerns about three categories: Biopsychosocial Determinants of Health, Cost Issues, and References. Students considered the biopsychosocial category too broad and identified it as the hardest to write. This issue was addressed by revising the rubric to include actual descriptors of the biopsychosocial aspects of a case: family, living situation, impact of disease on life, perspective on the illness, and state of psychological health (see Appendix A). Regarding Cost Issues, students recommended a change in the instructions for this category, which was addressed by including a clear definition of cost issues with specific examples in the FRV. Finally, several students reported difficulty understanding what types of references were required. The final revision of this category clarifies the requirement to use current literature from the last five years and to show that the references support the conclusions of the paper. At least one student indicated interest in having top papers submitted for publication.

Discussion and conclusions

Our analysis of the OSR indicated a need to improve reliability and to develop more useful descriptors of students’ work. Our process of revising the scoring rubric yielded greater internal consistency and reliability as well as improved guidance for students. Finally, our analysis of student feedback provided additional insights into how students interpret assignments and scoring systems. Future work will need to use independent faculty who have never scored these papers to assess the ease of use of the RSR2 tool.

Implications

Results from this study highlight the importance of clearly defined anchoring criteria in scoring rubrics to ensure consistency among scorers. Furthermore, revising and testing scoring rubrics with content experts is a labor-intensive, multi-phase process that results in a more reliable tool. This is not a novel idea regarding written work, since studies have shown that inter-rater reliability can be achieved when experienced evaluators meet regularly to refine criteria. 11 Although our curriculum does not include teaching about writing skills, the scoring rubric and written instructions can set the standard for improving the quality of written work by medical students. Posting exemplar papers for students to access and read serves as an additional resource without requiring formal didactic instruction. The student feedback provided important ideas for implementing the assignment with more concise written instructions. Students and faculty identified the Biopsychosocial Determinants of Health and Cost Issues categories as the most difficult to write and the most difficult to score, respectively.

Medical students are continuously assessed in multiple realms of performance. Ongoing evaluation of our assessment tools should become common practice in Family Medicine clerkships. This study highlights the importance of developing clear criteria for scoring rubrics used to evaluate medical student writing. Well-defined scoring rubrics assist students in completing complex performance assignments. Assessing inter-rater reliability of scoring among faculty strengthens the internal consistency of the tool. Scoring rubrics should be evaluated and validated by expert faculty and by the medical students who use them. These efforts will lead to a more robust tool. Having faculty and students work collaboratively enhances medical student education.

Limitations

There are a few limitations to this study that merit discussion. First, the initial revision of the scoring rubric was done solely by the principal investigator; making the initial revisions a joint effort of multiple experts may have resulted in a superior tool. This was also a single-medical-school study with a small convenience sample, which limits generalizability. Student feedback would have been strengthened by information collected both before and after the change in the scoring rubric. Nevertheless, our approach served as a valuable faculty development process and a student curriculum development process.

Conflict of Interest

The authors declare that they have no conflict of interest.

Acknowledgements

The authors would like to thank the faculty from the University of Vermont for their assistance with data collection and their thoughtful feedback and advice throughout this study: Dr. David Little, Dr. Candace Fraser and Martha Seagrave, PA, from University of Vermont Medical School and Julie Schirmer, LCSW from Maine Medical Center. They would also like to thank Cassandra Sweetser, from Maine Medical Center’s Department of Family Medicine, for her assistance in editing, reformatting, and submitting their manuscript.

Scoring Rubric: Final Revised Version (FRV)

Developed after iterative evaluation and development of anchored scoring rubrics, family medicine clinical core clerkship.

Case Description Paper Evaluation (Total Points Available = 70)

NAME: DATE: TOPIC:

COMMENTS
