The University of Edinburgh

Reflection Toolkit

Assessing assignments

Reflective assignments can be assessed in different ways; below you will find information about summative, formative, peer, and self-assessment of assignments.

Criteria and rubrics can help you in your assessment

As highlighted on the ‘Should I assess?’ page, different levels of assessment will either require or benefit from explicit criteria and rubrics. These will help you in your assessment and will particularly support reflectors in producing their reflections. Moreover, if you decide to use peer- or self-assessment, criteria and rubrics will be of great help as part of the guidelines students should be given for the assessment process.

Should I assess? (within the Facilitators’ Toolkit)

Assessment types that work well for reflective assignments

Reflective assignments lend themselves well to most types of assessment.

Classic summative assessment

In contrast to reflective activities, reflective assignments work particularly well for summative assessments that might carry high proportions of the overall course marks. This would be similar to a final essay in a course.

For example, this could be:

  • A reflective journal
  • A report that draws on evidence from a reflective journal
  • A reflective blog
  • A reflective essay on the student’s development in the course
  • A reflective essay on meeting benchmark statements
  • A reflective essay on a particular experience (for example a critical incident in an experiential learning course)
  • A skills-development log

Peer-assessment can be used, but summative assessment might lend itself better to assessment by tutors or course organisers. While self-assessment is strictly possible, it might not suit summative work; it is recommended for formative work instead.

Summative assessments are high-stakes. It is therefore important that students receive support on how to reflect and perform well. For instance, having a chance to practise in a low-stakes environment such as formative assessment can be valuable.

Formative assessment

Reflection is an excellent way of checking-in partway through an initiative and supporting students with their further development. Any kind of formative assessment is a valuable way of practising for a summative assignment and therefore smaller or interim versions of final assessments are great for formative feedback.

  • Individual entries from a reflective journal
  • A reflective blogpost
  • Interim essays on development during the course or on benchmark statements
  • Drafts of reflective summative assessments
  • Reflective workbooks

As mentioned, formative assessment is low-stakes and can be a good way of engaging either peers or students themselves in the assessment process.

For completion or pass/fail

Reflective assignments can easily be implemented ‘for completion’ or ‘pass/fail’. Including reflection ‘for completion’ will ensure that students start the process, but not necessarily engage with it fully. By creating a ‘pass/fail’ option you ensure that students will engage at least to the point of ‘good enough’ with the reflective process.

Types of reflective assignments that can work well ‘for completion’ or ‘pass/fail’:

  • Reflective journals/diaries
  • Skills-development logs
  • Reflective videos/audio recordings

While ‘pass/fail’ assessment is lower stakes than many other forms of summative assessment, and ‘for completion’ is generally very low stakes, you still have the responsibility of ensuring that students have enough information on how to complete the assignment satisfactorily. For ‘pass/fail’, just like any other summative assessment, this means having both criteria and a rubric.

When assessing reflective assignments it is essential to have clear guidelines and criteria. The higher the stakes of the assessment (for example summative versus formative), the more important clear guidelines and rubrics become.

You can use both formative and summative assessment for reflective assignments. When setting a summative assessment, it is important to allow students to practise, or you must be extremely clear about what you want and provide detailed guidance.

Reflective assignments lend themselves better to summative assessment than activities do.

Where next?

To get a sense of typical assessment criteria to include when assessing reflection, head to the assessment criteria page. For sample rubrics, see the rubrics page.

Assessment criteria (within the Facilitators’ Toolkit)

Assessment rubrics (within the Facilitators’ Toolkit)

Teaching, Learning, & Professional Development Center


How Do I Create Meaningful and Effective Assignments?

Prepared by Allison Boye, Ph.D., Teaching, Learning, and Professional Development Center.

Assessment is a necessary part of the teaching and learning process, helping us measure whether our students have really learned what we want them to learn. While exams and quizzes are certainly favorite and useful methods of assessment, out-of-class assignments (written or otherwise) can offer similar insights into our students' learning. And just as creating a reliable test takes thoughtfulness and skill, so does creating meaningful and effective assignments. Undoubtedly, many instructors have been on the receiving end of disappointing student work, left wondering what went wrong… and often, those problems can be remedied in the future by some simple fine-tuning of the original assignment. This paper will take a look at some important elements to consider when developing assignments, and offer some easy approaches to creating a valuable assessment experience for all involved.

First Things First…

Before assigning any major tasks to students, it is imperative that you first define a few things for yourself as the instructor:

  • Your goals for the assignment. Why are you assigning this project, and what do you hope your students will gain from completing it? What knowledge, skills, and abilities do you aim to measure with this assignment? Creating assignments is a major part of overall course design, and every project you assign should clearly align with your goals for the course in general. For instance, if you want your students to demonstrate critical thinking, perhaps asking them to simply summarize an article is not the best match for that goal; a more appropriate option might be to ask for an analysis of a controversial issue in the discipline. Ultimately, the connection between the assignment and its purpose should be clear to both you and your students to ensure that it is fulfilling the desired goals and doesn't seem like “busy work.” For some ideas about what kinds of assignments match certain learning goals, take a look at this page from DePaul University's Teaching Commons.
  • The levels of your students. Consider what experience and preparation your students bring to the assignment. For example: Have they experienced “socialization” in the culture of your discipline (Flaxman, 2005)? Are they familiar with any conventions you might want them to know? In other words, do they know the “language” of your discipline, generally accepted style guidelines, or research protocols?
  • Do they know how to conduct research?  Do they know the proper style format, documentation style, acceptable resources, etc.? Do they know how to use the library (Fitzpatrick, 1989) or evaluate resources?
  • What kinds of writing or work have they previously engaged in?  For instance, have they completed long, formal writing assignments or research projects before? Have they ever engaged in analysis, reflection, or argumentation? Have they completed group assignments before?  Do they know how to write a literature review or scientific report?

In his book Engaging Ideas (1996), John Bean provides a great list of questions to help instructors focus on their main teaching goals when creating an assignment (p.78):

1. What are the main units/modules in my course?

2. What are my main learning objectives for each module and for the course?

3. What thinking skills am I trying to develop within each unit and throughout the course?

4. What are the most difficult aspects of my course for students?

5. If I could change my students' study habits, what would I most like to change?

6. What difference do I want my course to make in my students' lives?

What your students need to know

Once you have determined your own goals for the assignment and the levels of your students, you can begin creating your assignment.  However, when introducing your assignment to your students, there are several things you will need to clearly outline for them in order to ensure the most successful assignments possible.

  • First, you will need to articulate the purpose of the assignment. Even though you know why the assignment is important and what it is meant to accomplish, you cannot assume that your students will intuit that purpose. Your students will appreciate an understanding of how the assignment fits into the larger goals of the course and what they will learn from the process (Hass & Osborn, 2007). Being transparent with your students and explaining why you are asking them to complete a given assignment can ultimately help motivate them to complete the assignment more thoughtfully.
  • If you are asking your students to complete a writing assignment, you should define for them the “rhetorical or cognitive mode/s” you want them to employ in their writing (Flaxman, 2005). In other words, use precise verbs that communicate whether you are asking them to analyze, argue, describe, inform, etc. (Verbs like “explore” or “comment on” can be too vague and cause confusion.) Provide them with a specific task to complete, such as a problem to solve, a question to answer, or an argument to support. For those who want assignments to lead to top-down, thesis-driven writing, John Bean (1996) suggests presenting a proposition that students must defend or refute, or a problem that demands a thesis answer.
  • It is also a good idea to define the audience you want your students to address with their assignment, if possible – especially with writing assignments. Otherwise, students will address only the instructor, often assuming little requires explanation or development (Hedengren, 2004; MIT, 1999). Further, asking students to address the instructor, who typically knows more about the topic than the student, places the student in an unnatural rhetorical position. Instead, you might consider asking your students to prepare their assignments for alternative audiences such as other students who missed last week's classes, a group that opposes their position, or people reading a popular magazine or newspaper. In fact, a study by Bean (1996) indicated that students often appreciate and enjoy assignments that vary elements such as audience or rhetorical context, so don't be afraid to get creative!
  • Obviously, you will also need to articulate clearly the logistics or “business aspects” of the assignment. In other words, be explicit with your students about required elements such as the format, length, documentation style, writing style (formal or informal?), and deadlines. One caveat, however: do not allow the logistics of the paper to take precedence over the content in your assignment description; if you spend all of your time describing these things, students might suspect that is all you care about in their execution of the assignment.
  • Finally, you should clarify your evaluation criteria for the assignment. What elements of content are most important? Will you grade holistically or weight features separately? How much weight will be given to individual elements, etc.? Another precaution to take when defining requirements for your students is to take care that your instructions and rubric also do not overshadow the content; prescribing each element of an assignment too rigidly can limit students' freedom to explore and discover. According to Beth Finch Hedengren, “A good assignment provides the purpose and guidelines… without dictating exactly what to say” (2004, p. 27). If you decide to utilize a grading rubric, be sure to provide it to the students along with the assignment description, prior to their completion of the assignment.
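The choice between grading holistically and weighting features separately can be made concrete with a quick calculation. The sketch below is illustrative only; the criterion names and weights are assumptions for the example, not a recommended rubric:

```python
# Illustrative only: compute a weighted assignment grade from
# per-criterion scores (each out of 100) and instructor-chosen weights.
def weighted_grade(scores, weights):
    """Return the weighted average of criterion scores.

    scores  -- dict mapping criterion name to a 0-100 score
    weights -- dict mapping the same criterion names to relative weights
    """
    if set(scores) != set(weights):
        raise ValueError("scores and weights must cover the same criteria")
    total_weight = sum(weights.values())
    # Multiply each score by its weight, then normalise by the total weight.
    return sum(scores[c] * weights[c] for c in scores) / total_weight

# Hypothetical rubric: content counts twice as much as any other feature.
scores = {"content": 90, "organization": 80, "style": 70, "mechanics": 60}
weights = {"content": 2, "organization": 1, "style": 1, "mechanics": 1}
print(weighted_grade(scores, weights))  # 78.0
```

Sharing such a weighting scheme alongside the rubric makes explicit to students how much each element counts toward the final grade.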

A great way to get students engaged with an assignment and build buy-in is to encourage their collaboration on its design and/or on the grading criteria (Hudd, 2003). In his article “Conducting Writing Assignments,” Richard Leahy (2002) offers a few ideas for building in said collaboration:

  • Ask the students to develop the grading scale themselves from scratch, starting with choosing the categories.
  • Set the grading categories yourself, but ask the students to help write the descriptions.
  • Draft the complete grading scale yourself, then give it to your students for review and suggestions.

A Few Do's and Don'ts…

Determining your goals for the assignment and its essential logistics is a good start to creating an effective assignment. However, there are a few more simple factors to consider in your final design. First, here are a few things you should do:

  • Do provide detail in your assignment description. Research has shown that students frequently prefer some guiding constraints when completing assignments (Bean, 1996), and that more detail (within reason) can lead to more successful student responses. One idea is to provide students with physical assignment handouts, in addition to or instead of a simple description in a syllabus. This can meet the needs of concrete learners and give them something tangible to refer to. Likewise, it is often beneficial to make explicit for students the process or steps necessary to complete an assignment, given that students – especially younger ones – might need guidance in planning and time management (MIT, 1999).
  • Do use open-ended questions. The most effective and challenging assignments focus on questions that lead students to thinking and explaining, rather than simple yes or no answers, whether explicitly part of the assignment description or in the brainstorming heuristics (Gardner, 2005).
  • Do direct students to appropriate available resources. Giving students pointers about other venues for assistance can help them get started on the right track independently. These kinds of suggestions might include information about campus resources such as the University Writing Center or discipline-specific librarians, suggesting specific journals, books, or even sections of their textbook, or providing them with lists of research ideas or links to acceptable websites.
  • Do consider providing models – both successful and unsuccessful models (Miller, 2007). These models could be provided by past students, or be models you have created yourself. You could even ask students to evaluate the models themselves using the determined evaluation criteria, helping them to visualize the final product, think critically about how to complete the assignment, and ideally, recognize success in their own work.
  • Do consider including a way for students to make the assignment their own. In their study, Hass and Osborn (2007) confirmed the importance of personal engagement for students when completing an assignment. Indeed, students will be more engaged in an assignment if it is personally meaningful, practical, or purposeful beyond the classroom. You might think of ways to encourage students to tap into their own experiences or curiosities, to solve or explore a real problem, or to connect to the larger community. Offering variety in assignment selection can also help students feel more individualized, creative, and in control.
  • If your assignment is substantial or long, do consider sequencing it. Far too often, assignments are given as one-shot final products that receive grades at the end of the semester, eternally abandoned by the student. By sequencing a large assignment, or essentially breaking it down into a systematic approach consisting of interconnected smaller elements (such as a project proposal, an annotated bibliography, a rough draft, or a series of mini-assignments related to the longer assignment), you can encourage thoughtfulness, complexity, and thoroughness in your students, as well as emphasize process over final product.

Next are a few elements to avoid in your assignments:

  • Do not ask too many questions in your assignment.  In an effort to challenge students, instructors often err in the other direction, asking more questions than students can reasonably address in a single assignment without losing focus. Offering an overly specific “checklist” prompt often leads to externally organized papers, in which inexperienced students “slavishly follow the checklist instead of integrating their ideas into more organically-discovered structure” (Flaxman, 2005).
  • Do not expect or suggest that there is an “ideal” response to the assignment. A common error for instructors is to dictate content of an assignment too rigidly, or to imply that there is a single correct response or a specific conclusion to reach, either explicitly or implicitly (Flaxman, 2005). Undoubtedly, students do not appreciate feeling as if they must read an instructor's mind to complete an assignment successfully, or that their own ideas have nowhere to go, and can lose motivation as a result. Similarly, avoid assignments that simply ask for regurgitation (Miller, 2007). Again, the best assignments invite students to engage in critical thinking, not just reproduce lectures or readings.
  • Do not provide vague or confusing commands. Do students know what you mean when they are asked to “examine” or “discuss” a topic? Return to what you determined about your students' experiences and levels to help you decide what directions will make the most sense to them and what will require more explanation or guidance, and avoid verbiage that might confound them.
  • Do not impose impossible time restraints or require the use of insufficient resources for completion of the assignment.  For instance, if you are asking all of your students to use the same resource, ensure that there are enough copies available for all students to access – or at least put one copy on reserve in the library. Likewise, make sure that you are providing your students with ample time to locate resources and effectively complete the assignment (Fitzpatrick, 1989).

The assignments we give to students don't simply have to be research papers or reports. There are many options for effective yet creative ways to assess your students' learning! Here are just a few:

Journals, Posters, Portfolios, Letters, Brochures, Management plans, Editorials, Instruction manuals, Imitations of a text, Case studies, Debates, News releases, Dialogues, Videos, Collages, Plays, PowerPoint presentations

Ultimately, the success of student responses to an assignment often rests on the instructor's deliberate design of the assignment. By being purposeful and thoughtful from the beginning, you can ensure that your assignments will not only serve as effective assessment methods, but also engage and delight your students. If you would like further help in constructing or revising an assignment, the Teaching, Learning, and Professional Development Center is glad to offer individual consultations. In addition, look into some of the resources provided below.

Online Resources

“Creating Effective Assignments” http://www.unh.edu/teaching-excellence/resources/Assignments.htm This site, from the University of New Hampshire's Center for Excellence in Teaching and Learning, provides a brief overview of effective assignment design, with a focus on determining and communicating goals and expectations.

Gardner, T. (2005, June 12). Ten Tips for Designing Writing Assignments. Traci's Lists of Ten. http://www.tengrrl.com/tens/034.shtml This is a brief yet useful list of tips for assignment design, prepared by a writing teacher and curriculum developer for the National Council of Teachers of English. The website will also link you to several other lists of “ten tips” related to literacy pedagogy.

“How to Create Effective Assignments for College Students.” http://tilt.colostate.edu/retreat/2011/zimmerman.pdf This PDF is a simplified bulleted list, prepared by Dr. Toni Zimmerman from Colorado State University, offering some helpful ideas for coming up with creative assignments.

“Learner-Centered Assessment” http://cte.uwaterloo.ca/teaching_resources/tips/learner_centered_assessment.html From the Centre for Teaching Excellence at the University of Waterloo, this is a short list of suggestions for the process of designing an assessment with your students' interests in mind.

“Matching Learning Goals to Assignment Types.” http://teachingcommons.depaul.edu/How_to/design_assignments/assignments_learning_goals.html This is a great page from DePaul University's Teaching Commons, providing a chart that helps instructors match assignments with learning goals.

Additional References

Bean, J.C. (1996). Engaging ideas: The professor's guide to integrating writing, critical thinking, and active learning in the classroom. San Francisco: Jossey-Bass.

Fitzpatrick, R. (1989). Research and writing assignments that reduce fear lead to better papers and more confident students. Writing Across the Curriculum, 3.2, pp. 15–24.

Flaxman, R. (2005). Creating meaningful writing assignments. The Teaching Exchange. Retrieved Jan. 9, 2008 from http://www.brown.edu/Administration/Sheridan_Center/pubs/teachingExchange/jan2005/01_flaxman.pdf

Hass, M. & Osborn, J. (2007, August 13). An emic view of student writing and the writing process. Across the Disciplines, 4.

Hedengren, B.F. (2004). A TA's guide to teaching writing in all disciplines. Boston: Bedford/St. Martin's.

Hudd, S. S. (2003, April). Syllabus under construction: Involving students in the creation of class assignments. Teaching Sociology, 31, pp. 195–202.

Leahy, R. (2002). Conducting writing assignments. College Teaching, 50.2, pp. 50–54.

Miller, H. (2007). Designing effective writing assignments. Teaching with writing. University of Minnesota Center for Writing. Retrieved Jan. 9, 2008, from http://writing.umn.edu/tww/assignments/designing.html

MIT Online Writing and Communication Center (1999). Creating writing assignments. Retrieved January 9, 2008 from http://web.mit.edu/writing/Faculty/createeffective.html


Brooks and Kirk

Assessment Methods – What Exactly Are They?

By Steve Kirk

The main part of an assessor’s job role is to assess learners. To assess your learners, you’ll need to use a variety of different assessment methods. Here we have a handful of the main assessment methods you will need as an NVQ / On-Programme Assessor. You’ll be putting at least half of these assessments into practice during your assessor course if you’re a Brooks and Kirk trainee assessor!

The main assessment methods are:

  • Observation
  • Professional Discussion
  • Questioning
  • Projects and Assignments
  • RPL (Recognition of Prior Learning)
  • Witness Testimony
  • Work Products

Observation

If you’re a qualified assessor and gained your qualification with us at Brooks and Kirk, you will know exactly what an observation is from your CAVA study day. For those of you who aren’t so sure, an observation is quite self-explanatory: the Assessor watches (or observes) their candidate carrying out a task or presenting a specific skill.

These tasks/skills have to be relevant to the criteria the Assessor is looking for. Observation is probably the most popular of the assessment methods. It’s reliable and an easy way to cover lots of criteria in a fairly short amount of time. This reduces stress levels for the learner and saves you time as the assessor. Hypothetically speaking, your learner could cover everything in a whole unit in just one observation.

Professional Discussion

Professional discussions are a great assessment method for a candidate to show they have a really good level of understanding of the criteria. Not to be confused with the questioning assessment method (which will be explained shortly), a professional discussion is where the Assessor has a chat with the candidate about the criteria, to try and get to grips with their understanding of it. You can guarantee a quality level of assessment evidence from a professional discussion, as it shows that the knowledge is organic, coming straight from the candidates themselves.

It’s easy for a professional discussion to slip into a Q&A session. If your learner won’t engage in the conversation with you, or is giving you ‘yes/no’ answers, then you’re not having a professional discussion; you’re having a question and answer session. It’s really important not to let this happen, as that is a completely different assessment method, which we’ll go on to now…

Questioning / Question & Answer

As mentioned before, and not to be confused with the professional discussion assessment method, questioning is pretty much what it says on the tin: asking your candidate questions to figure out their understanding of the criteria.

Now, this can be oral or written questioning. It can even be in the form of quizzes or exams. It will show you exactly what your candidate does and doesn’t know, and therefore is a great way to look at areas for improvement and how to move forward from then on.

Projects and Assignments

By setting projects and assignments, you can gather a lot of information on your candidate’s knowledge in one document. The project/assignment assessment method can be put into practice with reports or essays, perhaps even some relevant research for the candidate to cover. Any mistakes or parts that aren’t quite right in, for example, an essay will highlight gaps in the candidate’s knowledge. This will allow you to support them with developing their knowledge further, so that all the necessary criteria can be covered at a later date.

Recognition of Prior Learning

Or, for short, RPL. This is one of the more popular assessment methods with candidates. RPL takes into consideration any qualifications, awards, or certificates that the candidate gained prior to this qualification. It involves you, as the Assessor, cross-referencing any previous work with the criteria you are looking for, to see if the learner has already covered it. This is why it’s a favourite with candidates: it may mean less work for them to do.

This assessment method tends to be quite tricky from the assessor’s point of view. There are many questions to be asked. Is the prior learning actually relevant? How long ago did they complete it? Does it prove they still have the required level of knowledge? Generally, assessors will ask learners to provide a statement when using the RPL assessment method. The statement will back up what they have studied previously, and explain how this means they have already met the criteria.

Witness Testimony

As an Assessor, you’ll have to have a lot of trust in your candidate to use this assessment method. To get a valid Witness Testimony, you’ll require an occupationally competent professional who works with your candidate to, for example, write a report about a time they witnessed your candidate carry out a task/skill, and confirm their competence.

The candidate would also need to write a report on when they carried out this task or skill. Then, you as the Assessor would need to compare these with each other. If both reports describe the same situation perfectly, then it’s very likely to be a valid report that can be used to tick off certain assessment criteria.

Work Products

If your candidate is working in a job role relevant to their desired qualification (for example, an Apprenticeship), then it’s likely that they are producing work on a daily basis that could be used as evidence for some criteria. If the candidate is carrying out tasks that meet the criteria daily, why make them repeat it in an assignment? Not surprisingly, this is another of the more popular assessment methods from the candidate’s point of view: ‘two birds with one stone’.

A work product can be as simple as, for example, a business administration learner screenshotting emails to internal and external contacts. Of course, it depends on what the criteria are asking for, but if a screenshot can evidence ‘be able to’ criteria, then why not?

These are the main assessment methods used as an NVQ assessor or On-Programme assessor. There are strengths and limitations to all of these assessment methods, so take a look at our blog on Strengths and Limitations to Assessment Methods.

If you’re looking at becoming an Independent End-Point Assessor, there’s a strong chance you’ll run into some other assessment methods. To find out more about Assessment Methods Used in End-Point Assessment, take a look at our blog.


Steve Kirk

Steve is a Chartered Manager and a Fellow of the Chartered Management Institute. He provides Educational Consultancy to the 19+ sector as well as being an Assessor, IQA, EPA and Digital Marketing Professional. When not doing any of these he finds time, every now and then, to write blogs and articles.



6 Types of Assessment (and How to Use Them)


Written by Maria Kampen

Reviewed by Stephanie McEwan, B.Ed.


What's the purpose of different types of assessment?


How do you use the different  types of assessment  in your classroom to promote student learning?

School closures and remote or hybrid learning environments have posed some challenges for educators, but motivating students to learn and grow remains a constant goal.

Some students have lost a portion of their academic progress. Assessing students in meaningful ways can help motivate and empower them to grow as they become agents of their own learning. 

But testing can contribute to math anxiety for many students. Assessments can be difficult to structure properly and time-consuming to grade. And as a teacher, you know that student progress isn't just a number on a report card.

There’s so much more to assessments than delivering an end-of-unit exam or prepping for a standardized test. Assessments help shape the learning process at all points, and give you insights into student learning. As John Hattie, a professor of education and the director of the Melbourne Education Research Institute at the University of Melbourne, Australia, puts it:

The major purpose of assessment in schools should be to provide interpretative information to teachers and school leaders about their impact on students, so that these educators have the best information possible about what steps to take with instruction and how they need to change and adapt. So often we use assessment in schools to inform students of their progress and attainment. Of course this is important, but it is more critical to use this information to inform teachers about their impact on students. Using assessments as feedback for teachers is powerful. And this power is truly maximized when the assessments are timely, informative, and related to what teachers are actually teaching.

Six common types of assessment are:

  • Diagnostic assessments
  • Formative assessments
  • Summative assessments
  • Ipsative assessments
  • Norm-referenced assessments
  • Criterion-referenced assessments

Let’s find out how assessments can analyze, support and further learning.


What's the purpose of different types of assessment?

Different types of assessments can help you understand student progress in various ways. This understanding can inform the teaching strategies you use, and may lead to different adaptations.

In your classroom, assessments generally have one of three purposes:

  • Assessment  of  learning
  • Assessment  for  learning
  • Assessment  as  learning

Assessment of learning

You can use assessments to help identify if students are meeting grade-level standards. 

Assessments of learning are usually grade-based, and can include:

  • Final projects
  • Standardized tests

They often have a concrete grade attached to them that communicates student achievement to teachers, parents, students, school-level administrators and district leaders. 

Common types of assessment of learning include summative assessments, norm-referenced assessments and criterion-referenced assessments.

Assessment for learning

Assessments for learning provide you with a clear snapshot of student learning and understanding as you teach -- allowing you to adjust everything from your classroom management strategies to your lesson plans as you go.

Assessments for learning should always be  ongoing and actionable . When you’re creating assessments, keep these key questions in mind:

  • What do students still need to know?
  • What did students take away from the lesson?
  • Did students find this lesson too easy? Too difficult?
  • Did my teaching strategies reach students effectively?
  • What are students most commonly misunderstanding?
  • What did I most want students to learn from this lesson? Did I succeed?

There are lots of ways you can deliver assessments for learning, even in a busy classroom.  We’ll cover some of them soon!

For now, just remember these assessments aren’t only for students -- they’re to provide you with actionable feedback to improve your instruction.

Common types of assessment for learning include formative assessments and diagnostic assessments. 

Assessment as learning

Assessment as learning actively involves students in the learning process. It teaches critical thinking and problem-solving skills, and encourages students to set achievable goals for themselves and objectively measure their progress.

It can help engage students in the learning process, too! One study "showed that in most cases the students pointed out the target knowledge as the reason for a task to be interesting and engaging, followed by the way the content was dealt with in the classroom."

Another found:

“Students develop an interest in mathematical tasks that they understand, see as relevant to their own concerns, and can manage.  Recent studies of students’ emotional responses to mathematics suggest that both their positive and their negative responses diminish as tasks become familiar and increase when tasks are novel”

Douglas B. McLeod

Some examples of assessment as learning include ipsative assessments, self-assessments and peer assessments.

There’s a time and place for every type of assessment. Keep reading to find creative ways of delivering assessments and understanding your students’ learning process!

6 types of assessment to use in your classroom

1. Diagnostic assessment


Let’s say you’re starting a lesson on two-digit multiplication. To make sure the unit goes smoothly, you want to know if your students have mastered fact families, place value and one-digit multiplication before you move on to more complicated questions.

When you structure diagnostic assessments around your lesson, you’ll get the information you need to understand student knowledge and engage your whole classroom.

Some examples to try include:

  • Short quizzes
  • Journal entries
  • Student interviews
  • Student reflections
  • Classroom discussions
  • Graphic organizers (e.g., mind maps, flow charts, KWL charts)

Diagnostic assessments can also help benchmark student progress. Consider giving the same assessment at the end of the unit so students can see how far they’ve come!
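As a purely illustrative sketch of this benchmarking idea, a short prerequisite quiz (here, the one-digit multiplication facts mentioned above) could even be generated programmatically. The question count, number ranges, and function name are assumptions, not part of any particular tool:

```python
# Hypothetical sketch: generate a short diagnostic quiz on one-digit
# multiplication before starting a two-digit multiplication unit.
import random

def diagnostic_quiz(n_questions=5, seed=None):
    """Return (a, b, answer) triples; the seed makes the quiz repeatable,
    so the same quiz can be re-given at the end of the unit."""
    rng = random.Random(seed)
    return [(a, b, a * b)
            for a, b in ((rng.randint(2, 9), rng.randint(2, 9))
                         for _ in range(n_questions))]

for a, b, answer in diagnostic_quiz(seed=1):
    print(f"{a} x {b} = ?")  # the answer stays on the teacher's key
```

Re-giving the same seeded quiz at the end of the unit gives students a concrete before-and-after comparison of their own progress.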

Using Prodigy for diagnostic assessments

One unique way of delivering diagnostic assessments is to use a game-based learning platform that engages your students.

Prodigy’s assessments tool  helps you align the math questions your students see in-game with the lessons you want to cover.

Screenshot of assessment pop up in Prodigy's teacher dashboard.

To set up a diagnostic assessment, use your assessments tool to create a Plan that guides students through a skill. This adaptive assessment will support students with prerequisites when they need additional guidance.

Want to give your students a sneak peek at the upcoming lesson?  Learn how Prodigy helps you pre-teach important lessons .

2. Formative assessment

Just because students made it to the end-of-unit test doesn’t mean they’ve mastered the topics in the unit. Formative assessments help teachers understand student learning while they teach, and provide them with information to adjust their teaching strategies accordingly.

Meaningful learning involves processing new facts, adjusting assumptions and drawing nuanced conclusions. As researchers  Thomas Romberg and Thomas Carpenter  describe it:

“Current research indicates that acquired knowledge is not simply a collection of concepts and procedural skills filed in long-term memory. Rather, the knowledge is structured by individuals in meaningful ways, which grow and change over time.”

In other words, meaningful learning is like a puzzle — having the pieces is one thing, but knowing how to put it together becomes an engaging process that helps solidify learning.

Formative assessments help you track how student knowledge is growing and changing in your classroom in real time. While this requires a bit of a time investment — especially at first — the gains are more than worth it.

A March 2020 study found that providing formal formative assessment evidence such as written feedback and quizzes within or between instructional units helped enhance the effectiveness of formative assessments.

Some examples of formative assessments include:

  • Group projects
  • Progress reports
  • Class discussions
  • Entry and exit tickets
  • Short, regular quizzes
  • Virtual classroom tools like  Socrative  or  Kahoot!

When running formative assessments in your classroom, it’s best to keep them short, easy to grade and consistent. Introducing students to formative assessments in a low-stakes way can help you benchmark their progress and reduce math anxiety.

Find more engaging formative assessment ideas here!

How Prodigy helps you deliver formative assessments

Prodigy makes it easy to create, deliver and grade formative assessments that help keep your students engaged with the learning process and provide you with actionable data to adjust your lesson plans. 

Use your Prodigy teacher dashboard to create an  Assignment  and make formative assessments easy!

Assignments  assess your students on a particular skill with a set number of questions and can be differentiated for individual students or groups of students.

For more ideas on using Prodigy for formative assessments, read:

  • How to use Prodigy for spiral review
  • How to use Prodigy as an entry or exit ticket
  • How to use Prodigy for formative assessments

3. Summative assessment


Summative assessments measure student progress as an assessment of learning. Standardized tests are a type of summative assessment and provide data for you, school leaders and district leaders.

They can assist with communicating student progress, but they don’t always give clear feedback on the learning process and can foster a “teach to the test” mindset if you’re not careful. 

Plus, they’re stressful for teachers. One Harvard survey found that 60% of teachers said “preparing students to pass mandated standardized tests” either “dictates most of” or “substantially affects” their teaching.

Sound familiar?

But just because it’s a summative assessment doesn’t mean it can’t be engaging for students and useful for your teaching. Try creating assessments that deviate from the standard multiple-choice test, like:

  • Recording a podcast
  • Writing a script for a short play
  • Producing an independent study project

No matter what type of summative assessment you give your students, keep some best practices in mind:

  • Keep it real-world relevant where you can
  • Make questions clear and instructions easy to follow
  • Give a rubric so students know what’s expected of them
  • Create your final test after, not before, teaching the lesson
  • Try blind grading: don’t look at the name on the assignment before you mark it

Use these summative assessment examples to make them effective and fun for your students!

Preparing students for summative assessments with Prodigy

Screenshot of Prodigy's test prep tool in the Prodigy teacher dashboard.

Did you know you can use Prodigy to prepare your students for summative assessments — and deliver them in-game?

Use  Assignments  to differentiate math practice for each student or send an end-of-unit test to the whole class.

Or use our  Test Prep  tool to understand student progress and help them prepare for standardized tests in an easy, fun way!

See how you can benchmark student progress and prepare for standardized tests with Prodigy.

4. Ipsative assessments

How many of your students get a bad grade on a test and get so discouraged they stop trying? 

Ipsative assessments are a type of assessment as learning: they compare a student’s current results with their previous attempts, motivating students to set goals and improve their skills.

When a student hands in a piece of creative writing, it’s treated as just a first draft. Students practice athletic skills and musical talents to improve, but they don’t always get the same chance in other subjects like math.

A two-stage assessment framework helps students learn from their mistakes and motivates them to do better. Plus, it shifts the focus away from instant results and teaches students that learning is a process.

You can incorporate ipsative assessments into your classroom with:

  • A two-stage testing process
  • Project-based learning  activities

One study on ipsative learning techniques found that when it was used with higher-education distance learners, it helped motivate students and encouraged them to act on feedback to improve their grades.

In her book Ipsative Assessment: Motivation Through Marking Progress, Gwyneth Hughes writes: "Not all learners can be top performers, but all learners can potentially make progress and achieve a personal best. Putting the focus onto learning rather than meeting standards and criteria can also be resource efficient."

While educators often use ipsative assessment to compare pre- and post-test results, they can also use it in reading instruction. Depending on your school's policy, for example, you can record a student reading a book and discussing its contents, then repeat the process at another point in the year. Finally, listen to the recordings together and discuss the student's improvement.
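The core ipsative idea, measuring a student only against their own record rather than against classmates, can be sketched in a few lines. The function name and the scores below are hypothetical:

```python
# Minimal sketch of ipsative scoring: a student competes only
# against their own previous attempts, not against peers.

def personal_best_report(attempts):
    """Given scores in chronological order, report the latest score,
    the previous personal best, and the change between them."""
    latest = attempts[-1]
    previous_best = max(attempts[:-1])
    return {
        "latest": latest,
        "previous_best": previous_best,
        "improvement": latest - previous_best,
    }

report = personal_best_report([62, 70, 68, 78])
print(report)  # {'latest': 78, 'previous_best': 70, 'improvement': 8}
```

Even when the latest score dips, framing the result as distance from a personal best keeps the feedback about progress rather than rank.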

What could it look like in your classroom?

5. Norm-referenced assessments


Norm-referenced assessments  are tests designed to compare an individual to a group of their peers, usually based on national standards and occasionally adjusted for age, ethnicity or other demographics.

Unlike ipsative assessments, where the student is only competing against themselves, norm-referenced assessments draw from a wide range of data points to make conclusions about student achievement.

Types of norm-referenced assessments include:

  • Physical assessments
  • Standardized college admissions tests like the SAT and GRE

Proponents of norm-referenced assessments point out that they accentuate differences among test-takers and make it easy to analyze large-scale trends. Critics argue they don’t encourage complex thinking and can inadvertently discriminate against low-income students and minorities. 

Norm-referenced assessments are most useful when measuring student achievement to determine:

  • Language ability
  • Grade readiness
  • Physical development
  • College admission decisions
  • Need for additional learning support

While they’re not usually the type of assessment you deliver in your classroom, chances are you have access to data from past tests that can give you valuable insights into student performance.

6. Criterion-referenced assessments

Criterion-referenced assessments compare the score of an individual student to a learning standard and performance level, independent of other students around them.

In the classroom, this means measuring student performance against grade-level standards and can include end-of-unit or final tests to assess student understanding. 

Outside of the classroom, criterion-referenced assessments appear in professional licensing exams, high school exit exams and citizenship tests, where the student must answer a certain percentage of questions correctly to pass. 

Criterion-referenced assessments are most often compared with norm-referenced assessments. While they’re both considered types of assessments of learning, criterion-referenced assessments don’t measure students against their peers. Instead, each student is graded to provide insight into their strengths and areas for improvement.
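To make the contrast concrete, here is a minimal sketch in which the same score is read both ways: as a percentile rank within a peer group (norm-referenced) and as pass/fail against a fixed standard (criterion-referenced). The peer scores and the 70% cutoff are made up for illustration:

```python
# Two readings of one score: relative to peers vs. relative to a standard.

def percentile_rank(score, peer_scores):
    """Percentage of peers scoring at or below this score (norm-referenced)."""
    at_or_below = sum(1 for s in peer_scores if s <= score)
    return 100 * at_or_below / len(peer_scores)

def meets_standard(score, cutoff=70):
    """Pass/fail against a fixed criterion, independent of peers."""
    return score >= cutoff

peers = [55, 60, 65, 72, 80, 85, 90, 95]
score = 72
print(percentile_rank(score, peers))  # 50.0 -- middle of this peer group
print(meets_standard(score))          # True -- clears the 70% standard
```

Note how the norm-referenced reading changes if the peer group changes, while the criterion-referenced reading does not; that is exactly the distinction drawn above.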

How to create effective assessments

You don’t want to use a norm-referenced assessment to figure out where learning gaps in your classroom are, and ipsative assessments aren’t the best for giving your principal a high-level overview of student achievement in your classroom.

When it comes to your teaching, here are some best practices to help you identify which type of assessment will work and how to structure it, so you and your students get the information you need.

Make a rubric

Students do their best work when they know what’s expected of them and how they’ll be marked. Whether you’re assigning a cooperative learning project or an independent study unit, a rubric communicates clear success criteria to students and helps teachers maintain consistent grading.

Ideally, your rubric should have a detailed breakdown of all the project’s individual parts, what’s required of each group member and an explanation of what different levels of achievement look like.

A well-crafted rubric lets multiple teachers grade the same assignment and arrive at the same score. It’s an important part of assessments for learning and assessments of learning, and teaches students to take responsibility for the quality of their work. 

There are plenty of  online rubric tools  to help you get started -- try one today!
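As a rough sketch of why a rubric supports consistent grading, the criteria and point levels can be treated as data: any marker who awards the same levels reaches the same total. The criteria names and point values below are illustrative only, not a recommended scheme:

```python
# A rubric as data: each criterion maps level descriptors to points.

RUBRIC = {
    "argument":     {"excellent": 4, "good": 3, "developing": 2, "beginning": 1},
    "evidence":     {"excellent": 4, "good": 3, "developing": 2, "beginning": 1},
    "organization": {"excellent": 4, "good": 3, "developing": 2, "beginning": 1},
}

def score_submission(levels):
    """Map each criterion's awarded level to points and total them."""
    return sum(RUBRIC[criterion][level] for criterion, level in levels.items())

marks = {"argument": "good", "evidence": "excellent", "organization": "developing"}
print(score_submission(marks))  # 9 out of a possible 12
```

Because the levels, not the total, are what a marker decides, two teachers applying the rubric to the same assignment arrive at the same score, which is the consistency property described above.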

Ask yourself  why  you're giving the assessment


While student grades provide a useful picture of achievement and help you communicate progress to school leaders and parents, the ultimate goal of assessments is to improve student learning. 

Ask yourself questions like:

  • What’s my plan for the results?
  • Who’s going to use the results, besides me?
  • What do I want to learn from this assessment?
  • What’s the best way to present the assessment to my students, given what I know about their progress and learning styles?

This helps you effectively prepare students and create an assessment that moves learning forward.

Don't stick with the same types of assessment — mix it up!


End-of-unit assessments are a tried and tested (pun intended) staple in any classroom. But why stop there?

Let’s say you’re teaching a unit on  multiplying fractions . To help you plan your lessons, deliver a diagnostic assessment to find out what students remember from last year. Once you’re sure they understand all the prerequisites, you can start teaching your lessons more effectively. 

After each math class, deliver short exit tickets to find out what students understand and where they still have questions. If you see students struggling, you can re-teach or deliver intervention in small groups during  station rotations . 

When you feel students are prepared, give them an assessment of learning. If students do not meet the success criteria, provide additional support and scaffolding to help them improve their understanding of the topic. You can foster a growth mindset by reminding students that mistakes are an important part of learning!

Now your students are masters at multiplying fractions! And when standardized testing season rolls around, you know which of your students need additional support — and where. 

Build your review on the data you’ve collected through diagnostic, formative, summative and ipsative assessments so students perform well on their standardized tests.

Final thoughts about different types of assessment

Remember: learning extends well beyond a single score or assessment!

It’s an ongoing process, with plenty of opportunities for students to build a  growth mindset  and develop new skills. 

Prodigy is a fun, digital game-based learning platform used by over 100 million students and 2.5 million teachers. Join today to make delivering assessments and differentiating math learning easy with a free teacher account!

Types of Assignments and Assessments

Assignments and assessments are much the same thing: an instructor is unlikely to give students an assignment that does not receive some sort of assessment, whether formal or informal, formative or summative; and an assessment must be assigned, whether it is an essay, case study, or final exam. When the two terms are distinguished, "assignment" tends to refer to a learning activity that is primarily intended to foster or consolidate learning, while "assessment" tends to refer to an activity that is primarily intended to measure how well a student has learned.

In the list below, some attempt has been made to put the assignments/assessments into logical categories. However, many of them could appear in multiple categories, so to prevent the list from becoming needlessly long, each item has been allocated to just one category.

Written Assignments:

  • Annotated Bibliography : An annotated bibliography is a list of citations or references to sources such as books, articles, websites, etc., along with brief descriptions or annotations that summarize, evaluate, and explain the content, relevance, and quality of each source. These annotations provide readers with insights into the source's content and its potential usefulness for research or reference.
  • Summary/Abstract : A summary or abstract is a concise and condensed version of a longer document or research article, presenting the main points, key findings, and essential information in a clear and brief manner. It allows readers to quickly grasp the main ideas and determine whether the full document is relevant to their needs or interests. Abstracts are commonly found at the beginning of academic papers, research articles, and reports, providing a snapshot of the entire content.
  • Case Analysis : Case analysis refers to a systematic examination and evaluation of a particular situation, problem, or scenario. It involves gathering relevant information, identifying key factors, analyzing various aspects, and formulating conclusions or recommendations based on the findings. Case analysis is commonly used in business, law, and other fields to make informed decisions and solve complex problems.
  • Definition : A definition is a clear and concise explanation that describes the meaning of a specific term, concept, or object. It aims to provide a precise understanding of the item being defined, often by using words, phrases, or context that distinguish it from other similar or related things.
  • Description of a Process : A description of a process is a step-by-step account or narrative that outlines the sequence of actions, tasks, or events involved in completing a particular activity or achieving a specific goal. Process descriptions are commonly used in various industries to document procedures, guide employees, and ensure consistent and efficient workflows.
  • Executive Summary : An executive summary is a condensed version of a longer document or report that provides an overview of the main points, key findings, and major recommendations. It is typically aimed at busy executives or decision-makers who need a quick understanding of the content without delving into the full details. Executive summaries are commonly used in business proposals, project reports, and research papers to present essential information concisely.
  • Proposal/Plan : A piece of writing that explains how a future problem or project will be approached.
  • Laboratory or Field Notes:  Laboratory/field notes are detailed and systematic written records taken by scientists, researchers, or students during experiments, observations, or fieldwork. These notes document the procedures, observations, data, and any unexpected findings encountered during the scientific investigation. They serve as a vital reference for later analysis, replication, and communication of the research process and results.
  • Research Paper : A research paper is a more extensive and in-depth academic work that involves original research, data collection from multiple sources, and analysis. It aims to contribute new insights to the existing body of knowledge on a specific subject. Compare to "essay" below.
  • Essay : A composition that calls for exposition of a thesis and is composed of several paragraphs including an introduction, a body, and a conclusion. It is different from a research paper in that the synthesis of bibliographic sources is not required. Compare to "Research Paper" above. 
  • Memo : A memo, short for memorandum, is a brief written message or communication used within an organization or business. It is often used to convey information, provide updates, make announcements, or request actions from colleagues or team members.
  • Micro-theme : A micro-theme refers to a concise and focused piece of writing that addresses a specific topic or question. It is usually shorter than a traditional essay or research paper and requires the writer to present their ideas clearly and concisely.
  • Notes on Reading : Notes on reading are annotations, comments, or summaries taken while reading a book, article, or any other written material. They serve as aids for understanding, retention, and later reference, helping the reader recall essential points and ideas from the text.
  • Outline : An outline is a structured and organized plan that lays out the main points and structure of a written work, such as an essay, research paper, or presentation. It provides a roadmap for the writer, ensuring logical flow and coherence in the final piece.
  • Plan for Conducting a Project : A plan for conducting a project outlines the steps, resources, timelines, and objectives for successfully completing a specific project. It includes details on how tasks will be executed and managed to achieve the desired outcomes.
  • Poem : A poem is a literary work written in verse, using poetic devices like rhythm, rhyme, and imagery to convey emotions, ideas, and experiences.
  • Play : A play is a form of literature written for performance, typically involving dialogue and actions by characters to tell a story or convey a message on stage.
  • Choreography : Choreography refers to the art of designing dance sequences or movements, often for performances in various dance styles.
  • Article/Book Review : An article or book review is a critical evaluation and analysis of a piece of writing, such as an article or a book. It typically includes a summary of the content and the reviewer's assessment of its strengths, weaknesses, and overall value.
  • Review of Literature : A review of literature is a comprehensive summary and analysis of existing research and scholarly writings on a particular topic. It aims to provide an overview of the current state of knowledge in a specific field and may be a part of academic research or a standalone piece.
  • Essay-based Exam : An essay-based exam is an assessment format where students are required to respond to questions or prompts with written, structured responses. It involves expressing ideas, arguments, and explanations in a coherent and organized manner, often requiring critical thinking and analysis.
  • "Start" : In the context of academic writing, "start" refers to the initial phase of organizing and planning a piece of writing. It involves formulating a clear and focused thesis statement, which presents the main argument or central idea of the work, and creating an outline or list of ideas that will support and develop the thesis throughout the writing process.
  • Statement of Assumptions : A statement of assumptions is a declaration or acknowledgment made at the beginning of a document or research paper, highlighting the underlying beliefs, conditions, or premises on which the work is based. It helps readers understand the foundation of the writer's perspective and the context in which the content is presented.
  • Summary or Precis : A summary or precis is a concise and condensed version of a longer piece of writing, such as an article, book, or research paper. It captures the main points, key arguments, and essential information in a succinct manner, enabling readers to grasp the content without reading the full text.
  • Unstructured Writing : Unstructured writing refers to the process of writing without following a specific plan, outline, or organizational structure. It allows the writer to freely explore ideas, thoughts, and creativity without the constraints of a predefined format or order. Unstructured writing is often used for brainstorming, creative expression, or personal reflection.
  • Rough Draft or Freewrite : A rough draft or freewrite is an initial version of a piece of writing that is not polished or edited. It serves as an early attempt by the writer to get ideas on paper without worrying about perfection, allowing for exploration and creativity before revising and refining the final version.
  • Technical or Scientific Report : A technical or scientific report is a document that presents detailed information about a specific technical or scientific project, research study, experiment, or investigation. It follows a structured format and includes sections like abstract, introduction, methods, results, discussion, and conclusion to communicate findings and insights in a clear and systematic manner.
  • Journal article : A formal article reporting original research that could be submitted to an academic journal. Rather than a format dictated by the professor, the writer must use the conventional form of academic journals in the relevant discipline.
  • Thesis statement : A clear and concise sentence or two that presents the main argument or central claim of an essay, research paper, or any written piece. It serves as a roadmap for the reader, outlining the writer's stance on the topic and the key points that will be discussed and supported in the rest of the work. The thesis statement provides focus and direction to the paper, guiding the writer's approach to the subject matter and helping to maintain coherence throughout the writing.

Visual Representations:

  • Brochure : A brochure is a printed or digital document used for advertising, providing information, or promoting a product, service, or event. It typically contains a combination of text and visuals, such as images or graphics, arranged in a visually appealing layout to convey a message effectively.
  • Poster : A poster is a large printed visual display intended to catch the attention of an audience. It often contains a combination of text, images, and graphics to communicate information or promote a particular message, event, or cause.
  • Chart : A chart is a visual representation of data or information using various formats such as pie charts, bar charts, line charts, or tables. It helps to illustrate relationships, trends, and comparisons in a concise and easy-to-understand manner.
  • Graph : A graph is a visual representation of numerical data, usually presented using lines, bars, points, or other symbols on a coordinate plane. Graphs are commonly used to show trends, patterns, and relationships between variables.
  • Concept Map : A concept map is a graphical tool used to organize and represent the connections and relationships between different concepts or ideas. It typically uses nodes or boxes to represent concepts and lines or arrows to show the connections or links between them, helping to visualize the relationships and hierarchy of ideas.
  • Diagram : A diagram is a visual representation of a process, system, or structure using labeled symbols, shapes, or lines. Diagrams are used to explain complex concepts or procedures in a simplified and easy-to-understand manner.
  • Table : A table is a systematic arrangement of data or information in rows and columns, allowing for easy comparison and reference. It is commonly used to present numerical data or detailed information in an organized format.
  • Flowchart : A flowchart is a graphical representation of a process, workflow, or algorithm, using various shapes and arrows to show the sequence of steps or decisions involved. It helps visualize the logical flow and decision points, making it easier to understand and analyze complex processes.
  • Multimedia or Slide Presentation : A multimedia or slide presentation is a visual communication tool that combines text, images, audio, video, and other media elements to deliver information or a message to an audience. It is often used for educational, business, or informational purposes and can be presented in person or virtually using software like Microsoft PowerPoint or Google Slides.
  • ePortfolio : An ePortfolio, short for electronic portfolio, is a digital collection of an individual's work, accomplishments, skills, and reflections. It typically includes a variety of multimedia artifacts such as documents, presentations, videos, images, and links to showcase a person's academic, professional, or personal achievements. ePortfolios are used for self-reflection, professional development, and showcasing one's abilities to potential employers, educators, or peers. They provide a comprehensive and organized way to present evidence of learning, growth, and accomplishments over time.

Test and Exam Questions:

  • Multiple-Choice Questions : These questions present a statement or question with several possible answer options, of which one or more may be correct. Test-takers must select the most appropriate choice(s). See CTE's Teaching Tip "Designing Multiple-Choice Questions."
  • True or False Questions : These questions require test-takers to determine whether a given statement is true or false based on their knowledge of the subject.
  • Short-Answer Questions : Test-takers are asked to provide brief written responses to questions or prompts. These responses are usually a few sentences or a paragraph in length.
  • Essay Questions : Essay questions require test-takers to provide longer, more detailed written responses to a specific topic or question. They may involve analysis, critical thinking, and the development of coherent arguments.
  • Matching Questions : In matching questions, test-takers are asked to pair related items from two lists. They must correctly match the items based on their associations.
  • Fill-in-the-Blank Questions : Test-takers must complete sentences or passages by filling in the missing words or phrases. This type of question tests recall and understanding of specific information.
  • Multiple-Response Questions : Similar to multiple-choice questions, but with multiple correct options. Test-takers must select all the correct choices to receive full credit.
  • Diagram or Image-Based Questions : These questions require test-takers to analyze or interpret diagrams, charts, graphs, or images to answer specific queries.
  • Problem-Solving Questions : These questions present real-world or theoretical problems that require test-takers to apply their knowledge and skills to arrive at a solution.
  • Vignettes or Case-Based Questions : In these questions, test-takers are presented with a scenario or case study and must analyze the information to answer related questions.
  • Sequencing or Order Questions : Test-takers are asked to arrange items or events in a particular order or sequence based on their understanding of the subject matter.

Projects intended for a specific audience :

  • Advertisement : An advertisement is a promotional message or communication aimed at promoting a product, service, event, or idea to a target audience. It often uses persuasive techniques, visuals, and compelling language to attract attention and encourage consumers to take specific actions, such as making a purchase or seeking more information.
  • Client Report for an Agency : A client report for an agency is a formal document prepared by a service provider or agency to communicate the results, progress, or recommendations of their work to their client. It typically includes an analysis of data, achievements, challenges, and future plans related to the project or services provided.
  • News or Feature Story : A news story is a journalistic piece that reports on current events or recent developments, providing objective information in a factual and unbiased manner. A feature story, on the other hand, is a more in-depth and creative piece that explores human interest topics, profiles individuals, or delves into issues from a unique perspective.
  • Instructional Manual : An instructional manual is a detailed document that provides step-by-step guidance, explanations, and procedures on how to use, assemble, operate, or perform specific tasks with a product or system. It aims to help users understand and utilize the item effectively and safely.
  • Letter to the Editor : A letter to the editor is a written communication submitted by a reader to a newspaper, magazine, or online publication, expressing their opinion, feedback, or comments on a particular article, topic, or issue. It is intended for publication and allows individuals to share their perspectives with a broader audience.

Problem-Solving and Analysis :

  • Taxonomy : Taxonomy is the science of classification, categorization, and naming of organisms, objects, or concepts based on their characteristics, similarities, and differences. It involves creating hierarchical systems that group related items together, facilitating organization and understanding within a particular domain.
  • Budget with Rationale : A budget with rationale is a financial plan that outlines projected income and expenses for a specific period, such as a month or a year. The rationale provides explanations or justifications for each budget item, explaining the purpose and reasoning behind the allocated funds.
  • Case Analysis : Case analysis refers to a methodical examination of a particular situation, scenario, or problem. It involves gathering relevant data, identifying key issues, analyzing different factors, and formulating conclusions or recommendations based on the findings. Case analysis is commonly used in various fields, such as business, law, and education, to make informed decisions and solve complex problems.
  • Case Study : A case study is an in-depth analysis of a specific individual, group, organization, or situation. It involves thorough research, data collection, and detailed examination to understand the context, challenges, and outcomes associated with the subject of study. Case studies are widely used in academic research and professional contexts to gain insights into real-world scenarios.
  • Word Problem : A word problem is a type of mathematical or logical question presented in a contextual format using words rather than purely numerical or symbolic representations. It challenges students to apply their knowledge and problem-solving skills to real-life situations.

Collaborative Activities

  • Debate : A debate is a structured discussion between two or more individuals or teams with differing viewpoints on a specific topic or issue. Participants present arguments and counterarguments to support their positions, aiming to persuade the audience and ultimately reach a resolution or conclusion. Debates are commonly used in academic settings, public forums, and formal competitions to foster critical thinking, communication skills, and understanding of diverse perspectives.
  • Group Discussion : A group discussion is an interactive conversation involving several individuals who come together to exchange ideas, opinions, and information on a particular subject. The discussion is typically moderated to ensure that everyone has an opportunity to participate, and it encourages active listening, collaboration, and problem-solving. Group discussions are commonly used in educational settings, team meetings, and decision-making processes to promote dialogue and collective decision-making.
  • Oral Report : An oral report is a form of communication in which a person or group of persons present information, findings, or ideas verbally to an audience. It involves speaking in front of others, often in a formal setting, and delivering a structured presentation that may include visual aids, such as slides or props, to support the content. Oral reports are commonly used in academic settings, business environments, and various professional settings to share knowledge, research findings, project updates, or persuasive arguments. Effective oral reports require clear organization, articulation, and engaging delivery to effectively convey the intended message to the listeners.

Planning and Organization

  • Inventory : An inventory involves systematically listing and categorizing items or resources to assess their availability, quantity, and condition. In an educational context, students might conduct an inventory of books in a library, equipment in a lab, or supplies in a classroom, enhancing their organizational and data collection skills.
  • Materials and Methods Plan : A materials and methods plan involves developing a structured outline or description of the materials, tools, and procedures to be used in a specific experiment, research project, or practical task. It helps learners understand the importance of proper planning and documentation in scientific and research endeavors.
  • Plan for Conducting a Project : This learning activity requires students to create a detailed roadmap for executing a project. It includes defining the project's objectives, identifying tasks and timelines, allocating resources, and setting milestones to monitor progress. It enhances students' project management and organizational abilities.
  • Research Proposal Addressed to a Granting Agency : A formal document requesting financial support for a research project from a granting agency or organization. The proposal outlines the research questions, objectives, methodology, budget, and potential outcomes. It familiarizes learners with the process of seeking funding and strengthens their research and persuasive writing skills.
  • Mathematical Problem : A mathematical problem is a task or question that requires the application of mathematical principles, formulas, or operations to find a solution. It could involve arithmetic, algebra, geometry, calculus, or other branches of mathematics, challenging individuals to solve the problem logically and accurately.
  • Question : A question is a sentence or phrase used to elicit information, seek clarification, or provoke thought from someone else. Questions can be open-ended, closed-ended, or leading, depending on their purpose, and they play a crucial role in communication, problem-solving, and learning.

More Resources

CTE Teaching Tips

  • Personal Response Systems
  • Designing Multiple-Choice Questions
  • Aligning Outcomes, Assessments, and Instruction

Other Resources

  • Types of Assignments . University of Queensland.

If you would like support applying these tips to your own teaching, CTE staff members are here to help.  View the  CTE Support  page to find the most relevant staff member to contact.

Center for Teaching

Student Assessment in Teaching and Learning


Much scholarship has focused on the importance of student assessment in teaching and learning in higher education. Student assessment is a critical aspect of the teaching and learning process. Whether teaching at the undergraduate or graduate level, it is important for instructors to strategically evaluate the effectiveness of their teaching by measuring the extent to which students in the classroom are learning the course material.

This teaching guide addresses the following: 1) defines student assessment and why it is important, 2) identifies the forms and purposes of student assessment in the teaching and learning process, 3) discusses methods in student assessment, and 4) makes an important distinction between assessment and grading.

What is Student Assessment and Why is it Important?

In their handbook for course-based review and assessment, Martha L. A. Stassen et al. define assessment as “the systematic collection and analysis of information to improve student learning.” (Stassen et al., 2001, pg. 5) This definition captures the essential task of student assessment in the teaching and learning process. Student assessment enables instructors to measure the effectiveness of their teaching by linking student performance to specific learning objectives. As a result, teachers are able to institutionalize effective teaching choices and revise ineffective ones in their pedagogy.

The measurement of student learning through assessment is important because it provides useful feedback to both instructors and students about the extent to which students are successfully meeting course learning objectives. In their book Understanding by Design , Grant Wiggins and Jay McTighe offer a framework for classroom instruction—what they call “Backward Design”—that emphasizes the critical role of assessment. For Wiggins and McTighe, assessment enables instructors to determine the metrics of measurement for student understanding of and proficiency in course learning objectives. They argue that assessment provides the evidence needed to document and validate that meaningful learning has occurred in the classroom. Assessment is so vital in their pedagogical design that their approach “encourages teachers and curriculum planners to first ‘think like an assessor’ before designing specific units and lessons, and thus to consider up front how they will determine if students have attained the desired understandings.” (Wiggins and McTighe, 2005, pg. 18)

For more on Wiggins and McTighe’s “Backward Design” model, see our Understanding by Design teaching guide.

Student assessment also buttresses critical reflective teaching. Stephen Brookfield, in Becoming a Critically Reflective Teacher, contends that critical reflection on one’s teaching is an essential part of developing as an educator and enhancing the learning experience of students. Critical reflection on one’s teaching has a multitude of benefits for instructors, including the development of rationale for teaching practices. According to Brookfield, “A critically reflective teacher is much better placed to communicate to colleagues and students (as well as to herself) the rationale behind her practice. She works from a position of informed commitment.” (Brookfield, 1995, pg. 17) Student assessment, then, not only enables teachers to measure the effectiveness of their teaching, but is also useful in developing the rationale for pedagogical choices in the classroom.

Forms and Purposes of Student Assessment

There are generally two forms of student assessment that are most frequently discussed in the scholarship of teaching and learning. The first, summative assessment , is assessment that is implemented at the end of the course of study. Its primary purpose is to produce a measure that “sums up” student learning. Summative assessment is comprehensive in nature and is fundamentally concerned with learning outcomes. While summative assessment is often useful to provide information about patterns of student achievement, it does so without providing the opportunity for students to reflect on and demonstrate growth in identified areas for improvement and does not provide an avenue for the instructor to modify teaching strategy during the teaching and learning process. (Maki, 2002) Examples of summative assessment include comprehensive final exams or papers.

The second form, formative assessment , involves the evaluation of student learning over the course of time. Its fundamental purpose is to estimate students’ level of achievement in order to enhance student learning during the learning process. By interpreting students’ performance through formative assessment and sharing the results with them, instructors help students to “understand their strengths and weaknesses and to reflect on how they need to improve over the course of their remaining studies.” (Maki, 2002, pg. 11) Pat Hutchings refers to this form of assessment as assessment behind outcomes. She states, “the promise of assessment—mandated or otherwise—is improved student learning, and improvement requires attention not only to final results but also to how results occur. Assessment behind outcomes means looking more carefully at the process and conditions that lead to the learning we care about…” (Hutchings, 1992, pg. 6, original emphasis). Formative assessment includes course work—where students receive feedback that identifies strengths, weaknesses, and other things to keep in mind for future assignments—discussions between instructors and students, and end-of-unit examinations that provide an opportunity for students to identify important areas for necessary growth and development for themselves. (Brown and Knight, 1994)

It is important to recognize that both summative and formative assessment indicate the purpose of assessment, not the method . Different methods of assessment (discussed in the next section) can be either summative or formative in orientation depending on how the instructor implements them. Sally Brown and Peter Knight, in their book Assessing Learners in Higher Education , caution against conflating the purpose of assessment with its method. “Often the mistake is made of assuming that it is the method which is summative or formative, and not the purpose. This, we suggest, is a serious mistake because it turns the assessor’s attention away from the crucial issue of feedback.” (Brown and Knight, 1994, pg. 17) If an instructor believes that a particular method is formative, he or she may fall into the trap of using the method without taking the requisite time to review the implications of the feedback with students. In such cases, the method in question effectively functions as a form of summative assessment despite the instructor’s intentions. (Brown and Knight, 1994) Indeed, feedback and discussion are the critical factors that distinguish formative from summative assessment.

Methods in Student Assessment

Below are a few common methods of assessment identified by Brown and Knight that can be implemented in the classroom. [1] It should be noted that these methods work best when learning objectives have been identified, shared, and clearly articulated to students.

Self-Assessment

The goal of implementing self-assessment in a course is to enable students to develop their own judgement. In self-assessment students are expected to assess both the process and the product of their learning. While the assessment of the product is often the task of the instructor, implementing self-assessment in the classroom encourages students to evaluate their own work as well as the process that led them to the final outcome. Moreover, self-assessment facilitates a sense of ownership of one’s learning and can lead to greater investment by the student. It enables students to develop transferable skills in other areas of learning that involve group projects and teamwork, critical thinking and problem-solving, as well as leadership roles in the teaching and learning process.

Things to Keep in Mind about Self-Assessment

  • Self-assessment is different from self-grading. According to Brown and Knight, “Self-assessment involves the use of evaluative processes in which judgement is involved, where self-grading is the marking of one’s own work against a set of criteria and potential outcomes provided by a third person, usually the [instructor].” (Pg. 52)
  • Students may initially resist attempts to involve them in the assessment process. This is usually due to insecurities or lack of confidence in their ability to objectively evaluate their own work. Brown and Knight note, however, that when students are asked to evaluate their work, frequently student-determined outcomes are very similar to those of instructors, particularly when the criteria and expectations have been made explicit in advance.
  • Methods of self-assessment vary widely and can be as eclectic as the instructor. Common forms of self-assessment include the portfolio, reflection logs, instructor-student interviews, learner diaries and dialog journals, and the like.

Peer Assessment

Peer assessment is a type of collaborative learning technique in which students evaluate the work of their peers and have their own work evaluated in turn. This dimension of assessment is significantly grounded in theoretical approaches to active learning and adult learning . Like self-assessment, peer assessment gives learners ownership of learning and focuses on the process of learning as students are able to “share with one another the experiences that they have undertaken.” (Brown and Knight, 1994, pg. 52)

Things to Keep in Mind about Peer Assessment

  • Students can use peer assessment as a tactic of antagonism or conflict with other students by giving unmerited low evaluations. Conversely, students can also provide overly favorable evaluations of their friends.
  • Students can occasionally apply unsophisticated judgements to their peers. For example, students who are boisterous and loquacious may receive higher grades than those who are quieter, reserved, and shy.
  • Instructors should implement systems of evaluation in order to ensure that peer assessment is valid and based on evidence and identifiable criteria.

Essays

According to Euan S. Henderson, essays make two important contributions to learning and assessment: the development of skills and the cultivation of a learning style. (Henderson, 1980) Essays are a common form of writing assignment in courses and can be either a summative or formative form of assessment depending on how the instructor utilizes them in the classroom.

Things to Keep in Mind about Essays

  • A common challenge of the essay is that students can use them simply to regurgitate rather than analyze and synthesize information to make arguments.
  • Instructors commonly assume that students know how to write essays and can encounter disappointment or frustration when they discover that this is not the case for some students. For this reason, it is important for instructors to make their expectations clear and be prepared to assist or expose students to resources that will enhance their writing skills.

Exams and Time-Constrained, Individual Assessment

Examinations have traditionally been viewed as a gold standard of assessment in education, particularly in university settings. Like essays they can be summative or formative forms of assessment.

Things to Keep in Mind about Exams

  • Exams can make significant demands on students’ factual knowledge and can have the side-effect of encouraging cramming and surface learning. On the other hand, they can also facilitate student demonstration of deep learning if essay questions or topics are appropriately selected. Different formats include in-class tests, open-book exams, take-home exams, and the like.
  • In the process of designing an exam, instructors should consider the following questions. What are the learning objectives that the exam seeks to evaluate? Have students been adequately prepared to meet exam expectations? What are the skills and abilities that students need to do well? How will this exam be utilized to enhance the student learning process?

As Brown and Knight assert, utilizing multiple methods of assessment, including more than one assessor, improves the reliability of data. However, a primary challenge of the multiple-methods approach is how to weigh the scores produced by the different methods. When particular methods produce a higher range of marks than others, instructors can potentially misinterpret their assessment of overall student performance. When multiple methods produce different messages about the same student, instructors should be mindful that the methods are likely assessing different forms of achievement. (Brown and Knight, 1994)
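Brown and Knight do not prescribe a particular weighting scheme, so the sketch below illustrates one common statistical option for the range problem described above: standardizing each method's marks as z-scores before combining them, so that a wide-ranging exam does not dominate a narrow-banded essay. All marks are hypothetical.

```python
from statistics import mean, stdev

def standardize(marks):
    """Convert raw marks to z-scores so that methods with
    different mark ranges and spreads contribute comparably."""
    m, s = mean(marks), stdev(marks)
    return [(x - m) / s for x in marks]

# Hypothetical marks for three students, assessed by two methods:
# essay marks cluster in a narrow band, exam marks spread widely.
essay_marks = [62, 65, 68]
exam_marks = [40, 60, 80]

# Naive averaging lets the wider-ranging exam dominate the spread.
raw_combined = [mean(p) for p in zip(essay_marks, exam_marks)]
# → [51.0, 62.5, 74.0]

# Standardizing first gives each method equal weight in the spread.
z_combined = [mean(p) for p in zip(standardize(essay_marks),
                                   standardize(exam_marks))]
# → [-1.0, 0.0, 1.0]
```

Z-scoring is only one option: rescaling to a common band or rank-based combination are alternatives, and in the spirit of transparent assessment, any rescaling applied should be explained to students.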

For additional methods of assessment not listed here, see “Assessment on the Page” and “Assessment Off the Page” in Assessing Learners in Higher Education .

In addition to the various methods of assessment listed above, classroom assessment techniques also provide a useful way to evaluate student understanding of course material in the teaching and learning process. For more on these, see our Classroom Assessment Techniques teaching guide.

Assessment is More than Grading

Instructors often conflate assessment with grading. This is a mistake. Student assessment is more than just grading. Remember that assessment links student performance to specific learning objectives in order to provide useful information to instructors and students about student achievement. Traditional grading, on the other hand, according to Stassen et al., does not provide the level of detailed and specific information essential to link student performance with improvement. “Because grades don’t tell you about student performance on individual (or specific) learning goals or outcomes, they provide little information on the overall success of your course in helping students to attain the specific and distinct learning objectives of interest.” (Stassen et al., 2001, pg. 6) Instructors, therefore, must always remember that grading is an aspect of student assessment but does not constitute its totality.

Teaching Guides Related to Student Assessment

Below is a list of other CFT teaching guides that supplement this one. They include:

  • Active Learning
  • An Introduction to Lecturing
  • Beyond the Essay: Making Student Thinking Visible in the Humanities
  • Bloom’s Taxonomy
  • How People Learn
  • Syllabus Construction

References and Additional Resources

This teaching guide draws upon a number of resources listed below. These sources should prove useful for instructors seeking to enhance their pedagogy and effectiveness as teachers.

Angelo, Thomas A., and K. Patricia Cross. Classroom Assessment Techniques: A Handbook for College Teachers . 2nd edition. San Francisco: Jossey-Bass, 1993. Print.

Brookfield, Stephen D. Becoming a Critically Reflective Teacher . San Francisco: Jossey-Bass, 1995. Print.

Brown, Sally, and Peter Knight. Assessing Learners in Higher Education . 1st edition. London; Philadelphia: Routledge, 1998. Print.

Cameron, Jeanne et al. “Assessment as Critical Praxis: A Community College Experience.” Teaching Sociology 30.4 (2002): 414–429. JSTOR . Web.

Gibbs, Graham and Claire Simpson. “Conditions under which Assessment Supports Student Learning.” Learning and Teaching in Higher Education 1 (2004): 3-31.

Henderson, Euan S. “The Essay in Continuous Assessment.” Studies in Higher Education 5.2 (1980): 197–203. Taylor and Francis . Web.

Maki, Peggy L. “Developing an Assessment Plan to Learn about Student Learning.” The Journal of Academic Librarianship 28.1 (2002): 8–13. ScienceDirect . Web.

Sharkey, Stephen, and William S. Johnson. Assessing Undergraduate Learning in Sociology . ASA Teaching Resource Center, 1992. Print.

Wiggins, Grant, and Jay McTighe. Understanding By Design . 2nd Expanded edition. Alexandria, VA: Assn. for Supervision & Curriculum Development, 2005. Print.

[1] Brown and Knight discuss the first two in their chapter entitled “Dimensions of Assessment.” However, because this chapter begins the second part of the book that outlines assessment methods, I have collapsed the two under the category of methods for the purposes of continuity.



Assessment Design: Perspectives and Examples Informed by Universal Design for Learning

Section 2.2: Assessment Methods and Examples – Exams and Assignments

Quizzes and Exams

Quizzes and exams can be formative or summative depending on their design and are considered necessary in many educational contexts. Common question types include multiple choice, true or false, matching, and short answer. Quizzes help learners practice existing knowledge and can be effective in recalling what they have learned. Besides midterm or final exams, instructors can also design pre-tests or knowledge checks through which learners can assess what they already know and don’t know. Applying a UDL approach to quizzes and exams can provide a supportive environment for all learners.


Written Assignments

Written assignments are commonly used to assess learners’ ability to understand a topic in a text-based format. To be effective in this area, instructors need to be clear on what they are assessing and if written assessment is the most suitable format. From a UDL perspective, if you are assessing the students’ ability to understand the topic, you could allow the learner to demonstrate their learning in other formats such as in a video, podcast, or PowerPoint presentation. If you are assessing the students’ quality of writing, you may allow students to choose their topic of interest. As an instructor, you can add flexibility in the assessment choices once you identify the purpose of the assessment. As learners work towards the same learning goal, it is important that educators offer opportunities and methods for learning to be demonstrated in a variety of ways.


A Comprehensive Guide to Applying Universal Design for Learning by Dr. Seanna Takacs; Junsong Zhang; Helen Lee; Lynn Truong; and David Smulders is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Assignments

What to consider when using assignments as an assessment method for a course.

An assignment is a piece of (academic) work or task. It provides an opportunity for students to learn, practice, and demonstrate that they have achieved the learning goals, and it provides evidence for the teacher that the students have achieved them. The output can be judged using sensory perception (observing, reading, tasting, etc.). The assignment can focus on a product as output (e.g. a research report, design, or prototype), on a process (e.g. a research process or group process), and/or on the performance of individual skills or competences (e.g. professional skills, communication skills).


When assessing with assignments, we should pay attention to:

  • Validity : we really test what we want to test; the assignment and the way we assess the results are aligned with the learning goals.
  • Reliability : based on the results, we make a right, fair, and objective distinction between pass and fail, or provide the just grade; our scoring or grading is done in a consistent way and the judgments or grades are meaningful.
  • Transparency : it is clear upfront for the students what they will learn, what they have to do (what evidence to deliver or show), how they will be assessed, and what to expect during the process.
  • Support for learning : the assignment and the feedback provided will support the learning process.

With the toolbox below, related to the questions and issues mentioned above, we hope to offer you useful tips and guidelines for designing and assessing assignments.


  • Top 10 tips on designing assessment tasks , with a particular focus on learning outcomes and assessment criteria. Resource: Learnhigher .


  • Assessment Criteria . About: characteristics; threshold or marking criteria; hidden criteria. (University of Kent)
  • Know what it is that you are assessing: writing assessment criteria . Things to remember when writing assessment criteria, and an example format. (University of Reading)


Useful resources to learn more about rubrics, to find templates or examples:

  • What are rubrics and why are they important?  Explanation of the purpose of rubrics and of different types of rubrics. (ASCD, by Susan M. Brookhart)
  • Introduction to Rubrics . By Danielle Stevens and Antonia Levi from Portland State University. Including templates and examples.
  • Grading and Performance Rubrics . Explanation and some very nice examples. Eberly Center.
  • More Examples of Rubrics and Other Resources . Examples for specific purposes, like class participation, team work, multidisciplinary work, research papers and more. DePaul University Teaching Commons.

The main disadvantage of assignments is that scoring and grading usually take a lot of time, especially if you want to give students detailed feedback. The resources below may give you some (new) ideas and tips to assess and provide feedback in a way that is both efficient and effective.

  • Clare Furneaux of the University of Reading (UK) offers her tip for assessing large numbers of students while still providing elaborate feedback. Short video .
  • Stimulate success. Tips on providing ‘Feed Forward’ guidance (from the University of Reading, UK).
  • Grading Student Papers: Reducing Faculty Workload While Improving Feedback to Students . An article by Kathy Pezdek with tips (e.g. using a coding system).
  • If you are working at the University of Twente and would like some support, or just want to discuss your ideas or plans, please turn to the Technology Enhanced Learning & Teaching group .
  • The Centre for Teaching Excellence of the University of Waterloo developed a useful webpage about fast and equitable grading.


  • Helping Students to Reflect on their Group Work. With useful instruments and tips. (UNSW)
  • Methods for Assessing Group Work. A worthwhile site about ways to assess group work, with advantages and disadvantages of different methods and formulas for deriving scores/grades. (University of Waterloo, Centre for Teaching Excellence)
  • Group Work and Group Assessment. Handbook/guidelines and some useful instruments. (Centre for Academic Development, Victoria University of Wellington)

Academic integrity is important, and most students will agree and act accordingly. Nevertheless, fraud occurs occasionally, and as an examiner you are expected to detect it, whether it is outright cheating (such as submitting work someone else produced), plagiarism, or free-riding. But how can you detect it, and what should you do next? Plagiarism or free-riding does not always happen with bad intentions; circumstances may have influenced what happened. It is better to look for ways to prevent it, but what can be done? Below you will find some useful resources dealing with these issues. NB: specific rules and regulations may apply to your educational programme. At the University of Twente, check the Educational Examination Rules (EER) for your own educational programme and the Rules & Regulations of the Examination Board for your programme or faculty. Be aware that you have to report fraud to the Examination Board!

  • Top 10 tips on deterring plagiarism (LearnHigher). Includes tips on how to prevent plagiarism and reduce its appeal; ideas for task and assessment design are suggested, with a particular focus on the research process.
  • Reduce the risk of plagiarism in just 30 minutes! A leaflet with tips. (ASKe, Oxford Brookes University)
  • A short note with 10 tips to prevent free-riding.


This exercise was developed specifically for the course Testing & Assessment, offered by the Centre of Expertise in Learning and Teaching (CELT), University of Twente. The course is part of the UTQ (BKO) and UEQ (BKE) trajectories. Copyright CELT-UT / Expertise team T&A. The material may be used by other parties provided that reference is made. If you would like us to give a workshop on this subject, in English or Dutch, face-to-face or online, please contact us: [email protected]


Assessment methods

To effectively evaluate whether your students meet a subject's learning outcomes, you need to choose an appropriate assessment method.

Different assessment methods allow you to assess different skills. For example, while one method may ask students to demonstrate analytical skills, another may focus on collaboration. The method of assessment chosen will then inform the selection of an appropriate task.

To choose an appropriate assessment method, you must understand:

  • the subject's learning outcomes
  • the skills and knowledge associated with those learning outcomes
  • which assessment methods will allow your students to demonstrate the skills and knowledge.

Considering these three aspects puts the student and their learning at the centre of learning design.

Definitions and examples

You can also download a PDF table of the definitions and examples.

Please note that these are examples and not an exhaustive and complete list.

Source: Teaching @UNSW | Assessment Toolkit Aligning Assessment with Outcomes Document Version Date 07/08/2015 teaching.unsw.edu.au/aligning-assessment-learning-outcomes (Dunn, 2010, adapted from Nightingale et al., 1996). University of New South Wales

Methods of assessment and assessment types

Methods of assessment can also be aligned with types of assessment, allowing the type of assessment to be altered at the offering level (i.e. in subject outlines) without changing the assessment method.

If your criteria and standards focus on the skills and knowledge to be assessed, you can change elements such as:

  • detail of task
  • form of task.

Other elements should remain the same. These include:

  • number of assessment tasks

See the full list of assessment types

Center for Teaching Innovation

Resource library.

  • Direct & Indirect Measures Summary
  • Rice University Workload Calculator

Measuring student learning

Assessment methods should help the instructor answer the questions, “How do I know the required learning has taken place? What might I need to modify about the course to best support student learning?”  

Information about student learning can be assessed through both direct and indirect measures. Direct measures may include homework, quizzes, exams, reports, essays, research projects, case study analysis, and rubrics for oral and other performances. Examples of indirect measures include course evaluations, student surveys, course enrollment information, retention in the major, alumni surveys, and graduate school placement rates. 

Approaches to measuring student learning 

Methods of measuring student learning are often characterized as summative or formative assessments: 

  • Summative assessments – tests, quizzes, and other graded course activities used to measure student performance. They are cumulative and often reveal what students have learned at the end of a unit or course. Within a course, summative assessment includes the system for calculating individual student grades.
  • Formative assessments – any means by which students receive input and guiding feedback on their relative performance to help them improve. Feedback can be provided face-to-face in office hours, in written comments on assignments, through rubrics, and through emails.

Formative assessments can be used to measure student learning on a daily, ongoing basis. These assessments reveal how and what students are learning during the course and often inform next steps in teaching and learning. Rather than asking students if they understand or have any questions, you can be more systematic and intentional by asking students at the end of the class period to write the most important points or the most confusing aspect of the lecture on index cards. Collecting and reviewing the responses provides insight into what themes students have retained and what your next teaching steps might be. Providing feedback on these themes to students gives them insight into their own learning. 
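The index-card check described above produces simple categorical data that is quick to tally. A minimal sketch in Python; the topic labels and responses are invented for illustration:

```python
from collections import Counter

# Each index card's "most confusing aspect", coded into a topic label.
# (The labels and responses below are invented for illustration.)
cards = [
    "recursion", "recursion", "pointers", "recursion",
    "pointers", "big-O", "recursion",
]

tally = Counter(cards)

# The most frequently cited confusion suggests what to revisit first.
topic, count = tally.most_common(1)[0]
print(f"Revisit first: {topic} ({count} of {len(cards)} cards)")
```

Sorting the full tally (rather than looking only at the top item) also shows which themes students have retained, which can inform the feedback you give back to the class.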

You can also ask students to reflect and report on their own learning. Asking students to rate their knowledge about a topic after taking your course as compared to what they believe they knew before taking your course is an example.  

Considerations for Measuring Student Learning

As you develop methods for assessing your students, consider:

  • including indirect and direct assessments as well as formative and summative assessments
  • evaluating whether or not the assessment aligns directly with a learning outcome
  • ensuring the measurement is sustainable and reasonable in terms of time and resources, both for the students and the instructors (e.g., grading, response time, and methods). To estimate the time that students need to complete different assignments, see the Rice University workload calculator
  • using a mid-semester student survey, such as the CTI's Mid-Semester Feedback Program, which is a great way to gather feedback on what students are learning and what is helping them learn
  • using the results of the assessments to improve the course. Examples include revising course content in terms of depth vs. breadth, realignment between goals and teaching methods, employment of more appropriate assessment methods, or effective incorporation of learning technologies
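The sustainability point above hinges on estimating student time realistically. A minimal sketch of such an estimate; the reading and writing rates here are illustrative assumptions, not the Rice workload calculator's actual parameters:

```python
# Rough per-assignment workload estimate. The rates below are
# illustrative assumptions, not the Rice calculator's parameters.
READING_PAGES_PER_HOUR = 20   # dense text, careful reading
WRITING_HOURS_PER_PAGE = 1.5  # drafted and revised prose

def estimated_hours(pages_to_read, pages_to_write):
    """Hours a student might need for one reading-plus-writing assignment."""
    reading = pages_to_read / READING_PAGES_PER_HOUR
    writing = pages_to_write * WRITING_HOURS_PER_PAGE
    return reading + writing

# A 40-page reading plus a 4-page response paper:
print(f"{estimated_hours(40, 4):.1f} hours")  # 8.0 hours
```

Summing such estimates across all assignments in a course gives a quick check that the total load is reasonable before the syllabus is finalized.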

Getting started with measuring student learning

At the course level, it is helpful to review course assignments and assessments by asking: 

  • What are the students supposed to get out of each assessment?
  • How are the assessments aligned with learning outcomes?
      • Knowledge acquired?
      • Skill development?
      • Values clarification?
      • Performance attainment?
  • How are homework and problem sets related to exams? 
  • How are the exams related to each other? 
  • What other forms of assessment (besides exams) can be used as indicators of student learning? 
  • If writing assignments are used, are there enough of them for students to develop the requisite skills embedded in them? 
  • How is feedback on student work provided to help students improve? 
  • Are the assessments structured in a way to help students assess their own work and progress? 
  • Does the assignment provide evidence of an outcome that was communicated? Is the evidence direct or indirect? 

Variety in assignment and assessment methods


There are several good reasons to consider offering a variety of assessment methods, beyond the typical quiz/test/exam:

  • Students need to understand concepts deeply, as opposed to memorizing information and reproducing it on an exam, so they can handle advanced course work and later work effectively in their chosen field.
  • Students need to be able to apply knowledge in authentic learning and assessment activities to develop the skills necessary for work in their chosen field.
  • Students have diverse abilities, backgrounds, interests, and learning styles, so assessment variety puts all students on a level playing field in terms of demonstrating what they know and can do.

This statement is typical: “In addition to knowledge and technical proficiency in core content areas, … professionals need well-developed oral and written communication skills, need to be able to work well in interdisciplinary teams (either as leaders or team members), need “people” skills to successfully interact with a diverse set of colleagues and stakeholders, and need a well-developed appreciation of professionalism and ethics” (Abbott, 34).

To develop these types of abilities, students need to engage in authentic learning assessment activities. Authentic assessments:

  • Require application of knowledge and skills in a “real world” context (realistic, even if it’s an artificial learning environment).
  • Involve unstructured, complex problems that may have multiple solutions. Such problems contain both relevant and irrelevant factors, unlabelled, just like real life, and students need to decide what’s relevant and to develop a solution they can explain and defend.
  • Require students to “perform” discipline-specific activities or procedures, drawing on a wide range of knowledge and skills.
  • Provide feedback, practice, and opportunities to revise and resubmit solutions, so they can refine their skills, rather like an apprenticeship between the instructor/TA experts and students.

Authentic assessments include such things as performance demonstrations of specific skills, use and manipulation of tools and instruments, oral and/or poster presentations, debates, panel discussions, role plays, teaching others, conducting experiments, and conducting interviews. Also included are “product assessments” such as essays, research reports, annotated bibliographies, data analysis and interpretation, argument construction and analysis, reviews, critiques and analysis of written work, problem analysis, planning, mapping, budget development, experimental design, peer editing, portfolios, posters, games, and podcast, video, and multimedia productions (Abbott, 37).

Begin assignment and assessment design by focusing on learning outcomes: what do you want students to remember, understand, apply, analyze, evaluate, or create (Davis, 362)? The table below outlines a variety of assignment and assessment options with rationales for using them and implementation details.

Assessment Item 1:  Regular practical application work

Assessment Item 2:  Application cards

Assessment Item 3:  Final exams

Assessment Item 4:  Online tests or exams

Assessment Item 5:  Oral exam option

Assessment Item 6:  Open book & take-home tests/ exams

Assessment Item 7:  Group exams

Assessment Item 8:  Essays and assignments

Assessment Item 9:  Article review

Assessment Item 10:  Field reports

Assessment Item 11:  Portfolios

Assessment Item 12:  Performance & presentations

Assessment Item 13:  Case studies, scenarios, problem-based assessment, role plays

Assessment Item 14:  Projects

Assessment Item 15:  Independent study

Assessment Item 16:  Peer assessment

Abbott, L. (2012). Tired of teaching to the test? Alternative approaches to assessing student learning. Rangelands, 34(3).

Brothen, T., & Wambach, C. (2004). The value of time limits on internet quizzes. Teaching of Psychology, 31(1).

Daniel, D. B., & Broida, J. (2004). Using web-based quizzing to improve exam performance: Lessons learned. Teaching of Psychology, 31(3).

Davis, B. G. (2009). Tools for Teaching (2nd ed.). San Francisco: Jossey-Bass.

Oxford Brookes University Centre for Staff and Teaching Development. (n.d.). Selecting Methods of Assessment.

University of New Brunswick


Assessment Methods: A Look at Formal and Informal Techniques

How we assess what students have learned is one of the most hotly debated topics in education today. High-stakes testing determines whether students get into the college of their choice and helps school districts judge the effectiveness of their teaching staff.

But all this focus on testing raises concerns that the urge to test is overwhelming what really matters: whether children are actually getting the education they need to thrive in an increasingly sophisticated, knowledge-driven economy.

We asked Joseph McDonald , professor of teaching and learning at NYU’s Steinhardt School of Culture, Education, and Human Development, about the best ways to assess the success of teaching. McDonald notes that the best teachers are always assessing – that is, doing things deliberately to figure out whether they are getting through to their students – even when they are not handing out weekly quizzes and final exams.

In this Q&A, McDonald talks about:

  • The most effective assessment methods
  • The advantages of doing summative and formative assessments
  • Scheduling tests to avoid over- or undertesting
  • Analyzing the data teachers gather from assessments
  • Adapting teaching to diverse classrooms

Let’s go to Professor McDonald’s answers:

What are the most effective ways to assess a child’s learning?

When people think of assessment today, they often think first of standardized tests – that is, ones developed by testing companies and used by states, schools, and districts in standardized ways to measure what students have learned with respect to some criteria. These tests are an important part of U.S. education, and are likely to remain important for the foreseeable future. But these tests are also likely to change in format and prevalence.

For example, they are likely to be increasingly administered online, and to be adaptive – that is, adapt the level of prompts to an estimated level of a particular test taker’s understanding or skill level. This change will save time in both testing itself and in test prep. But the time saved won’t be enough by itself to deal with widespread concerns today about overtesting.

As the standardized testing opt-out movement has proved powerful in states such as New York and Illinois, some policymakers are ready to cut back on testing demands. Meanwhile, there are signs too that a new kind of standardized testing system may emerge – at least for teenagers and young adults. This system would support – and provide legitimacy and validity for – a growing interest in digital badging, which uses digital credentials to convey core academic knowledge and other competencies that can’t be measured by traditional assessments.

The individualized pursuit of badging is likely to emerge as an important design element in initiatives to reimagine 21st-century high schooling – so long as it can be undergirded with an authentic and valid system of assessment.

For now, however, the most adventurous area of assessment is assessment in teaching. This interest in assessing while teaching is encouraged by research on learning which reveals that learning is never simply additive. Learning nearly always involves unlearning too – subtracting some previous understanding.

So even as they teach toward the goal of new understanding, good teachers probe continuously for current understanding and emerging understanding. Many do such probing through questioning, but this use of questioning is still rarer than it should be.

One culprit here is that many teachers – even those who are well versed in the content they teach – are untutored in the typical patterns of misunderstanding that are characteristic of their content area. As most of us know from our own experiences as students, content expertise by itself is not enough to make a good teacher.

But there are other ways besides questioning to assess knowledge while teaching. A simple technique is to pause in the middle of a lesson, and to ask everyone to write down or tweet or otherwise share briefly their understanding of [such and such] at this instant.

A similar and favorite device of new teachers, because it’s relatively easy to use, is called entry and exit tickets. Students have to “pay” their way into class by quickly writing down – or otherwise sharing – accounts of their current understanding of something on the day’s learning agenda. This is the entry ticket, which has the advantage of previewing the learning agenda.

At the end of the class, the students pay their way out by accounting for changes in their understanding: “Take no more than one minute to write down your current understanding of [such and such], and don’t sign your name.” This is the exit ticket, which has the advantage of introducing metacognition into the learning process – something that research on learning suggests plays a big role in learning. After class, the teacher can quickly read and compare entry and exit tickets to estimate the range of cognitive change, and the relative need to reteach, review, or move on in the curriculum.
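The entry/exit comparison above can be made systematic if responses are coded into rough levels of understanding. A minimal sketch; the level labels and ticket data are invented for illustration:

```python
# Entry and exit tickets coded into rough understanding levels.
# ("secure", "partial", "missing" are illustrative labels, not a
# standard scheme.)
entry_tickets = ["missing", "partial", "missing", "partial", "missing", "secure"]
exit_tickets = ["partial", "secure", "partial", "secure", "missing", "secure"]

def share(tickets, level):
    """Fraction of tickets coded at the given level."""
    return tickets.count(level) / len(tickets)

# A positive shift in the "secure" share suggests the class can move on;
# a flat or negative shift suggests reteaching or review.
gain = share(exit_tickets, "secure") - share(entry_tickets, "secure")
print(f"Change in 'secure' share: {gain:+.0%}")
```

Because the tickets are unsigned, this estimates the range of cognitive change across the class rather than tracking individual students.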

How do you define, interpret, and strike the right balance between formal and informal means of assessment?

Instead of formal and informal, I prefer summative assessment – at the end of some unit of instruction, or some “gate” like the end of fourth grade, or the completion of the unit on gasses in the chemistry course – and formative assessment.

The summative ones should be much less frequent in a student’s education than the formative ones. Students should always feel that a summative assessment is an appropriate capstone of some kind, and have time to prepare for it – and summon up metacognition for the purpose.

The best summative assessments ask students to construct a response or a set of responses – rather than select right answers from a list as in multiple-choice exams. The best summative assessments are also ones that seem authentic – that is, true in some fashion to how the assessed knowledge is actually used in the world. It’s much better, for example, to ask students to write a movie review than to ask them to list the major requirements of a movie review.

Formative assessments are really a kind of teaching. What students understand is not, after all, confined to how they cognitively enter and exit a particular instructional period. It evolves during the period, too, and nearly every one of a teacher’s moves can help it evolve in the right direction.

This learning progression only happens, however, if the teacher teaches in ways that continually inquire about the evolution – not just en masse as in exit tickets (What do my students understand now?) but also in individualized ways (What does Jose understand at this moment? How about Mirabelle?). Good teachers are alert to signs of learning – Mirabelle’s look, Jose’s posture – and, of course, they literally ask individual students to explain their understanding.

These are the kind of teachers, however, who do not ask questions to fish for right answers, and who do not discount wrong answers. They explore whatever answers they get in order to unearth misunderstandings – so that these can be cleared away, and so that scaffolds can be communally erected to reach higher levels of understanding across the class.

And they teach students to ask questions, too, as a good way to put them in touch with each other’s emerging understandings. Sadly, however, too few teachers know how to do all of these methods well. The gap may be in part an ironic artifact of the fixation of much assessment on right answers.

How often should assessments occur? Can you easily over- or undertest and/or assess, whether it’s formal or informal?

How often? Summative assessments should be infrequent, while formative ones should be highly frequent. Undertest? Yes. A college professor who lectures incessantly and assigns only one long paper is guilty of this.

Overtesting is much more prevalent, however – at least P-12. Overtesting with standardized tests, including test prep, is particularly dangerous. Standardized tests use a relatively small sample of items drawn from a large domain.

Overtesting can invalidate the sample by displacing attention from the larger domain. Educators may think, “This test doesn’t ask questions about Asian history, so we’re not going to cover it.” Overtesting can also displace actual practice in the domain – for example, by encouraging fourth-graders to spend so much time reading short passages and answering multiple-choice questions about them, in preparation for a reading test, that they don’t read any actual books.

What should teachers do with the data they get back from these assessments to modify their teaching approach to enhance student performance?

I’m studying data use in schools right now, and one of the things my team and I are learning is that the phrasing of your question is misguided when it comes to the heart of the process of using data in teaching.

What I mean is that only a small proportion of the data that’s useful in teaching is data that teachers “get back.” This is standardized test data – whether from annual testing, or so-called benchmark testing (periodic tests that claim to predict outcomes on annual tests). But good teachers understand that the data they derive from teaching itself – from pressing for understanding, unearthing misunderstanding, and so on – is much richer. And so is data that they gather themselves in formative assessments of various kinds and in collecting other purposeful samples of student work – and that in the best schools, they examine together with their teaching colleagues.

Indeed, what we’re learning from our study is that opportunities to look together at student work with colleagues, and regularly talk together about teaching and learning – is one of four key things that teachers need in order to use data to benefit their students’ learning. The other three things are:

  •   An inquiry-guided teaching habit
  •   A good data management system, including tech-savvy colleagues, at the school level
  •   A smart policy environment that doesn’t oversell the power of standardized testing or downplay the measurement error inherent in it

In a diverse, inclusive classroom, how do you adapt and/or tailor assessment methods to address differing needs and ways of learning?

Lots of formative assessment helps, because it makes diverse levels of understanding and skill development discernible and thus more addressable. It helps a lot, too, to build a classroom culture that acknowledges diversity as a learning asset for the whole group, and that encourages and creates opportunities for all students to work on “hard stuff” with the understanding that one person’s hard is nearly always another person’s easy – but any group of humans is always “smarter” as a whole than any of its parts.

And it helps to create summative assessments that to the furthest extent possible permit a range of ways to express one’s understanding of what they cover.

Finally, it helps to incorporate a range of technology supports into teaching, and formative and summative assessment, too – for example, smartphones and apps that can turn text into sound and vice versa, post questions that others can read and respond to, and respond in an instant to teacher inquiries about what students know – even graph the results. And all students should have access to such tools – not just students whose Individualized Education Plans say they should. Most classrooms are far more diverse and inclusive than most people imagine.

Now Accepting Applications for the Teacher Residency program

Our embedded approach to teacher education places you in a high-need urban classroom from the first day of school. In just over one year, you’ll earn your master of arts in teaching through an immersive teacher residency experience paired with online course work.

Visit the Teacher Residency program page  to explore our innovative teacher preparation model.

Back to Articles

Take the Next Step

Request Information

Yes! By submitting this form, I consent to be contacted on behalf of NYU Steinhardt to receive information regarding the Teacher Residency program. I agree that I may be contacted by email, text, or phone call and agree that if I am called, automated technology may be used to dial the number(s) I provided. I understand that this consent is not required to enroll in the program.

Pediaa.Com


What is the Difference Between Assignment and Assessment

The main difference between assignment and assessment is that an assignment refers to the allocation of a task or set of tasks that are marked and graded, while assessment refers to methods for establishing whether students have achieved a learning outcome, or are on their way toward a learning objective.

Assignments and assessment are two important concepts in modern education. Although the two words are similar, they have different meanings. Assignments are the pieces of coursework or homework students are expected to complete. Assessment, on the other hand, refers to the method of evaluating the progress of students. Sometimes, assignments can act as tools of assessment.

Key Areas Covered

1. What is an Assignment – Definition, Goals, Characteristics
2. What is an Assessment – Definition, Characteristics
3. Difference Between Assignment and Assessment – Comparison of Key Differences


What is an Assignment

Assignments are the pieces of coursework or homework given to the students by teachers at school or professors at university. In other words, assignments refer to the allocation of a task or set of tasks that are marked and graded. Assignments are essential components in primary, secondary and tertiary education.

Assignments have several goals, as described below:

– gives students a better understanding of the topic being studied

– develops students’ learning and understanding skills

– helps students in self-study

– develops research and analytical skills

– teaches students time management and organization

– clears up students’ problems or ambiguities regarding a subject

– enhances the creativity of students


Generally, educators assign such tasks to be completed at home and submitted after a certain period of time; the time allowed may depend on the nature of the task. Essays, posters, presentations, annotated bibliographies, book reviews, summaries, and charts and graphs are some examples of assignments. Writing assignments develop students’ writing skills, while creative assignments such as creating posters, graphs, and charts or giving presentations enhance their creativity. Ultimately, assignments help to assess students’ knowledge and skills, as well as their understanding of the topic.

What is an Assessment

Assessment refers to methods for establishing whether students have achieved a learning outcome, or are on their way toward a learning objective. In other words, it is the method of evaluating the progress of students. Assessment helps educators determine what students are learning and how well they are learning it, especially in relation to the expected learning outcomes of a lesson. It therefore helps the educator understand how students have understood the lesson, and determine what changes need to be made to the teaching process. Moreover, assessment focuses on learning as well as teaching, and can be termed an interactive process. Sometimes, assignments can act as tools of assessment.


There are two main types of assessment: formative and summative. Formative assessments occur during the learning process, whereas summative assessments occur at the end of a learning unit. Quizzes, discussions, and having students write summaries of a lesson are examples of formative assessment, while end-of-unit tests, term tests, and final projects are examples of summative assessment. Moreover, formative assessments aim to monitor student learning, while summative assessments aim to evaluate it.

Difference Between Assignment and Assessment

Definition

An assignment refers to the allocation of a task or set of tasks that are marked and graded, while assessment refers to methods for establishing whether students have achieved a learning outcome, or are on their way toward a learning objective.

Assignments are the pieces of coursework or homework students have to complete, while assessment is the method of evaluating the progress of students.

Goal

Moreover, assignments aim to give students a more comprehensive understanding of the topic being studied and to develop their learning and understanding skills. The main goal of assessment, however, is monitoring and evaluating student learning and progress.

Assignments are the pieces of coursework or homework students have to complete while assessment refers to the method of assessing the progress of students. This is the main difference between assignment and assessment. Sometimes, assignments can also act as tools of assessment.

Image Courtesy:

1. “Focused schoolgirl doing homework and sitting at table” (CC0) via Pexels 2. “Assessment” By Nick Youngson (CC BY-SA 3.0) Alpha Stock Images

' src=

About the Author: Hasa

Hasanthi is a seasoned content writer and editor with over 8 years of experience. Armed with a BA degree in English and a knack for digital marketing, she explores her passions for literature, history, culture, and food through her engaging and informative writing.



Methods of assessment: An Introduction

CanMedEd-ipedia: The CORAL Collection. Concepts as Online Resources for Accelerated Learning.

Gwenna Moss Centre for Teaching and Learning Jul 4, 2018

Introduction

Assessment drives learning (1). How we assess learners is a major force in ensuring that learners are prepared for the next stage of training: it drives the studying they do and provides feedback on their learning. This Cell is designed to give you an overview of the major types of assessments along with their strengths and weaknesses, so you can make better decisions about how to assess your learners. You will also end up remembering and understanding the ABCs of assessment: assess at an appropriate level, be wary of weaknesses, and consider complementarity.

Learning Objectives (what you can reasonably expect to learn in the next 15 minutes):

  • List and classify different assessment methods according to Miller’s pyramid of clinical competence.
  • Given a testing situation (or goal), select and justify an assessment method.
  • Explain the ABCs of assessment and apply them to testing situations.

To what extent are you now able to meet the objectives above? Please record your self-assessment. (0 is Not at all and 5 is Completely)

To get started, please take a few moments to list the assessment methods that you are familiar with. Can you think of a weakness or two? Write them out here so you can refer back to them shortly: 

Now proceed to the rest of this CORAL Cell.

An Overview of Common Assessment Methods Based on Miller’s Pyramid


A simple way to classify assessment methods is to use Miller’s pyramid (1):

The bottom or first two levels of this pyramid represent knowledge. The “Knows” level refers to facts, concepts and principles that learners can recall and describe. In Bloom’s taxonomy this would be remembering and understanding. The “Knows how” level refers to learners’ ability to apply these facts, concepts, and principles to analyse and solve problems. Again, in Bloom’s taxonomy, this would match applying, analyzing, evaluating, and creating of the cognitive domain. The “Shows” level refers to requiring students to demonstrate performance in a simulated setting. Finally, the “Does” level refers to what learners do in the real world, typically in the clinical setting for medical education. The “Does” level is addressed in more detail in the companion Cell, Methods of Workplace-based Assessment.

Clearly, the higher the level on the pyramid, the more authentic and complex the task is likely to be, and the less standardised. Typically, as you move up the pyramid, it also takes more time per case or question to assess attainment of the objective. This is a practical issue because clinical competence varies across cases, a phenomenon called case-specificity (2). Good multiple-choice questions may be less authentic than real patient interactions, but you can test a broader range of cases in a given amount of time with well-written MCQs than with the traditional long-case oral exam or even the OSCE. Every testing method has strengths and weaknesses, so it is usually a good idea to use a variety of methods to achieve complementarity (ABCs!). The accompanying table briefly describes and explains a variety of testing methods.
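For readers who think in code, the classification of methods by pyramid level can be sketched as a simple lookup table. This is an illustrative sketch only: the method groupings below are examples drawn from this Cell, not an authoritative or exhaustive taxonomy.

```python
# Illustrative mapping of common assessment methods to Miller's pyramid
# levels, based on the discussion in this Cell. Groupings are examples,
# not an authoritative taxonomy.
MILLER_LEVELS = {
    "Knows": ["factual MCQ", "short-answer recall question"],
    "Knows how": ["case-based MCQ", "key-feature question"],
    "Shows": ["OSCE", "simulated office oral"],
    "Does": ["mini-CEX", "ITER/ITAR"],
}

def classify(method: str) -> str:
    """Return the Miller level at which a given method typically assesses."""
    for level, methods in MILLER_LEVELS.items():
        if method in methods:
            return level
    raise ValueError(f"Unknown method: {method}")

print(classify("OSCE"))      # Shows
print(classify("mini-CEX"))  # Does
```

The point of the structure is the one the ABCs make: no single level (or row) suffices, so a program of assessment draws methods from several rows.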

“Knows” and “Knows how”

To test knowledge, you can use written or oral questions. These can be selected-response, where learners have to pick one or several answers from a pre-specified list (multiple-choice questions and pick-N questions, where students must pick a given number ‘N’ of options, fall in this category), or constructed-response, where students must generate their answer from scratch (short-answer questions, for example). Table 1 contains a more complete presentation of various testing approaches. Whether a question tests at the first or second level of the pyramid is determined not by its format but by the task implied by the question. Compare the two questions below: which one tests at a higher level?

Selected-response: single-best-answer multiple-choice question

You are called to examine Mrs. Peters, a 65-year-old woman with a history of diabetes and hypertension. Her current illness started with a sore throat but got steadily worse. She has been in bed for 3 days with a temperature of 39 °C and a dry cough. On examination, her temperature is 39.5 °C, HR 90 bpm, BP 120/80, RR 18/min. Her pharynx is red but the rest of the examination, including chest auscultation, is normal. Which of the following is the most likely pathogen?


  • Mycoplasma pneumoniae
  • Streptococcus A
  • Streptococcus pneumoniae

Constructed-response: short answer question

What are the three most common symptoms of influenza?

The MCQ tests at a higher level: analysis and application of knowledge are required to select the best response, whereas the short-answer question asks only for recall.

To test knowledge application (“Knows how”), regardless of question format, the task must involve some sort of problem or case. Case-based multiple-choice questions are often more difficult to write, so many of us have been exposed to the easier factual-recall multiple-choice questions, which has created the erroneous perception that MCQs only test trivia. In fact, MCQs can be used to test clinical reasoning and have the advantage of enabling assessment of a broad range of cases in a short time. Of course, a written examination will never fully capture the complexities of actual clinical work, with challenges such as history-taking in patients who may be upset or have hearing issues, in a clinical setting characterised by noise and multiple interruptions. In other words, written examinations overestimate competence in clinical reasoning.


“Shows”

The Objective Structured Clinical Examination (OSCE) is a typical method used here. The OSCE enables a fairly standardized assessment of clinical competence. By breaking down clinical work into tasks that can be performed in 5-15 minutes, OSCEs can test several cases (typically 10-15). However, this tends to reduce clinical competence to discrete skills and fails to capture the integration of skills required in actual clinical practice. The Simulated Office Oral, in which the examiner both plays the patient and asks probing questions, is another method designed to assess the “Shows” level. See Table 1 for a more complete listing and description of various approaches to testing.

“Does”

Assessment in the clinical setting can focus on a single event (e.g. a single patient interaction) or be based on multiple observations aggregated across time. There are many tools - with as many acronyms - to assess single events: e.g. the mini-CEX (mini Clinical Evaluation Exercise), PMEX (Professionalism Mini-Evaluation Exercise), and O-SCORE (Ottawa Surgical Competence Operating Room Evaluation). Tools that aggregate across multiple observations include end-of-clinic forms (e.g. field notes) and end-of-rotation forms (often referred to as ITERs, In-Training Evaluation Reports, or ITARs, In-Training Assessment Reports). What is observed can also vary, from patient encounters to procedures, case presentations, interactions with nurses and other healthcare professionals, notes, letters, and more. Please consider completing the sister Cell in the CORAL Collection specifically on the topic of workplace-based assessment.

Choosing an assessment method


In choosing an assessment method, multiple factors come into play, including feasibility. However, the goal is to maximise validity within the constraints of your setting. Validity does not reside in the method alone; a test is not, in and of itself, valid or invalid. The same method can be implemented well or poorly, appropriately or not, and this determines whether scores are being used appropriately (in a valid manner). Many factors influence whether the same tool is used in a valid way. An IQ test might help predict academic success but not marital or relationship success. The same multiple-choice exam used in an invigilated versus a non-invigilated setting, where students may collaborate or check answers in textbooks, may not be valid for summative decisions about individuals’ clinical knowledge.


In summary, there is no silver bullet for assessment in the health professions. Different methods have their purposes, strengths and weaknesses (see again Table 1; a similar table for the “Does” level is available in the Workplace-based Assessment Cell). In making decisions about which method (and ideally methods) to use, you should (here again are the ABCs but with more detail):

  • Assess at the Appropriate level of Miller’s pyramid (and Bloom’s taxonomy)
  • Beware of the weaknesses of each method and the factors that will influence validity in your own setting
  • Consider the Complementarity of different methods in the overall program of assessment, to leverage the strengths of different methods and compensate for their individual weaknesses

Table 1: Methods of assessment, Levels 1-3 of Miller’s pyramid

Adapted from faculty development materials, Faculty of Medicine, McGill University

Check for Understanding

1. Classify each testing situation under one of Miller’s stages (Knows, Knows how, Shows, Does). The learner is asked to …

a. … describe how to mirror a patient’s emotional state.

Knows (Miller) BONUS: Remembering (Bloom)

b. … select the name of the most appropriate drug to treat a Plasmodium parasite.

c. … demonstrate on a mannequin the insertion of a central line.

Shows (Miller) BONUS: Beyond Bloom cognitive domain (psychomotor)

d. … explain how to draw blood from a patient

Knows How (Miller) BONUS: Understanding/Applying (Bloom)

2. Answer each of the following True/False questions and briefly explain your answer referring where helpful to the ABCs of Assessment.

a. If you want to test clinical reasoning, the best method is directly observing a patient encounter. True/False

Expert answer: False. There are multiple ways of testing clinical reasoning. Directly observing a learner performing a history and physical examination can provide clues about what hypotheses they are considering. It can also avoid pitfalls in assessing case presentations, where learners can sometimes - whether deliberately or not - reconstruct their encounter to portray a case consistent with their hypotheses when in fact they have missed something that direct observation would reveal. Well-written multiple-choice, extended-matching, or pick-N questions can be very effective and efficient means of testing clinical reasoning across a broad range of cases, even though they are less authentic and complex. BONUS: think Complementarity (ABCs of Assessment: using multiple methods may help create the most accurate assessment of an individual’s skill level).

b. OSCEs are the most valid way of testing clinical skills. True/False

Expert answer: False. Validity does not reside in the tool or method alone. OSCEs can test clinical skills across several cases and in a standardized way. However, the quality of the stations, of the portrayal of cases by standardized patients, and of the scoring will all influence how valid the uses of OSCEs are. BONUS: there are many factors that may influence the quality (validity) of any method of assessment. ABCs of Assessment: Be wary of weaknesses.

c. MCQs should only be used to test simple recognition of facts. True/False

Expert answer: False. While MCQs are frequently used to test at the knowledge level of Bloom’s taxonomy or at the Knows level of Miller’s pyramid, they can be written to assess at higher levels by, for example, first laying out a patient case and then asking a series of MCQs based on the case. BONUS: it is important to assess at the level of the objectives. ABCs of Assessment: Assess at the Appropriate level.

3. Please explain the meaning and importance of the three maxims of the ABCs of Assessment (refer back if needed):

A: Assess at the Appropriate level. There must be alignment among the objectives, the teaching, and the assessment. We do our students a disservice when we expect them to show us how but only test at the “Knows” level. We frustrate our learners when we expect one level in our objectives and tests but teach at a different level.

B: Beware of weaknesses. All tests have flaws and limitations.

C: Consider Complementarity. Use a set of different types of tests to address the weaknesses of each.

Self-assessment

Please complete the following very short self-assessment on the objectives of this CORAL cell.

To what extent are you NOW able to meet the following objectives? (0 is not at all and 5 is completely)

To what extent WERE you able, the day before beginning this CORAL Cell, to meet the following objectives? (0 is not at all and 5 is completely)

Thank you for completing this CORAL Cell.

We are interested in improving this and other cells and would like to use your answers (anonymously of course) along with the following descriptive questions as part of our evaluation data.


Thanks again, and come back soon!

The CORAL Cell Team

References:

1. Miller GE. The assessment of clinical skills/competence/performance. Academic Medicine. 1990;65(9).

2. Norman G, Bordage G, Page G, Keane D. How specific is case specificity? Medical Education. 2006;40(7):618-23.

Further reading:

Epstein R. Assessment in Medical Education. New England Journal of Medicine. 2007;356:387-96

Wass V, Van der Vleuten C, Shatzer J, Jones R. Assessment of clinical competence. The Lancet. 2001;357(9260):945-9.

van der Vleuten CPM, Schuwirth LWT, Scheele F, Driessen EW, Hodges B. The assessment of professional competence: building blocks for theory development. Best Practice & Research Clinical Obstetrics & Gynaecology. 2010;24(6):703-19.

Norcini J, Anderson B, Bollela V, Burch V, Costa MJú, Duvivier R, et al. Criteria for good assessment: Consensus statement and recommendations from the Ottawa 2010 Conference. Medical Teacher. 2011;33(3):206-14.

Author: Valérie Dory, McGill University. Series Editor: Marcel D’Eon, University of Saskatchewan.



Center for Advancing Teaching and Learning Through Research

Course Assessment


“Assessment” refers to a variety of processes for gathering, analyzing, and using information about student learning to support instructional decision-making, with the goal of improving student learning. Most instructors already engage in assessment processes all the time, ranging from informal (“hmm, there are many confused faces right now; I should stop for questions”) to formal (“nearly half the class got this quiz question wrong; I should revisit this concept”).

When approached in a formalized way, course-level assessment is a process of systematically examining and refining the fit between the course activities and what students should know at the end of the course. Conducting a course-level assessment involves considering whether all aspects of the course align with each other and whether they guide students to achieve the desired learning outcomes. Course-level assessment can be a practical process embedded within course design and teaching that provides substantial benefits to instructors and students.


Over time, as the process is followed iteratively over several semesters, it can help instructors find a variety of pathways to designing more equitable courses in which more learners develop greater expertise in the skills and knowledge of greatest importance to the discipline or topic of the course.

Differentiating Grading from Assessment

“Assessment” is sometimes used colloquially to mean “grading,” but there are distinctions between the two. Grading is a process of evaluating individual student learning for the purposes of characterizing that student’s level of success at a particular task (or the entire course). The grade of an assignment may provide feedback to students on which concepts or skills they have mastered, which can guide them to revise their study approach, but may not be used by the instructor to decide how subsequent class sessions will be spent. Similarly, a student’s grade in a course might convey to other instructors in the curriculum or prospective employers the level of mastery that the student has demonstrated during that semester, but need not suggest changes to the design of the course as a whole for future iterations.

In contrast to grading, assessment practices focus on determining how many students achieved which course learning outcomes, and to what level of mastery, for the purpose of helping the instructor revise subsequent lessons or the course as a whole for subsequent terms. Since final course grades may include participation points and aggregate student mastery of all course learning objectives into a single measure, they rarely clarify which elements of the course have been most or least successful in achieving the instructor’s goals. Differentiating assessment from grading allows instructors to plot a clear course toward making the changes that will have the greatest impact in the areas they define as most important, based on the results of the assessment.
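A toy computation makes the contrast concrete: an aggregate grade can look respectable while hiding which outcome was missed, whereas per-outcome achievement rates point directly at what to revise. All student data, outcome names, and the 0.7 mastery threshold below are invented for illustration.

```python
# Toy data: each student's mastery (0-1) on two course outcomes, plus
# participation. Names and numbers are invented for illustration only.
students = [
    {"outcome_A": 0.90, "outcome_B": 0.4, "participation": 1.0},
    {"outcome_A": 0.80, "outcome_B": 0.5, "participation": 0.9},
    {"outcome_A": 0.95, "outcome_B": 0.3, "participation": 1.0},
]

# Grading: one aggregate number per student -- useful for a transcript,
# but it hides which outcome was the problem.
grades = [(s["outcome_A"] + s["outcome_B"] + s["participation"]) / 3
          for s in students]
print(round(grades[2], 2))  # 0.75 -- a passable grade despite weak outcome B

# Assessment: achievement rate per outcome, e.g. the share of students
# reaching an (invented) mastery threshold of 0.7.
def achievement_rate(outcome: str, threshold: float = 0.7) -> float:
    met = sum(1 for s in students if s[outcome] >= threshold)
    return met / len(students)

print(achievement_rate("outcome_A"))  # 1.0 -> outcome A is going well
print(achievement_rate("outcome_B"))  # 0.0 -> revise teaching for outcome B
```

The same raw work products feed both computations; the difference is whether the numbers are aggregated per student (grading) or per outcome (assessment).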

Course learning outcomes are measurable statements that describe what students should be able to do by the end of a course. Let’s parse this statement into its three component parts: student-centered, measurable, and course-level.

Student-Centered

First, learning outcomes should focus on what students will be able to do, not what the course will do. For example:

  • “Introduces the fundamental ideas of computing and the principles of programming” says what a course is intended to accomplish. This is perfectly appropriate for a course description but is not a learning outcome.
  • A related student learning outcome might read, “Explain the fundamental ideas of computing and identify the principles of programming.”

Second, learning outcomes are measurable, which means that you can observe the student performing the skill or task and determine the degree to which they have done so. This does not need to be measured in quantitative terms—student learning can be observed in the characteristics of presentations, essays, projects, and many other student products created in a course (discussed more in the section on rubrics below).

To be measurable, learning outcomes should not include words like understand, learn, and appreciate, because these qualities occur within the student’s mind and are not observable. Rather, ask yourself, “What would a student be doing if they understand, have learned, or appreciate?” For example:

  • “Learners should understand US political ideologies regarding social and environmental issues,” is not observable.
  • “Learners should be able to compare and contrast U.S. political ideologies regarding social and environmental issues,” is observable.


Course-Level

Finally, learning outcomes for course-level assessment focus on the knowledge and skills that learners will take away from a course as a whole. Though the final project, essay, or other assessment that will be used to measure student learning may match the outcome well, the learning outcome should articulate the overarching takeaway from the course, rather than describing the assignment. For example:

  • “Identify learning principles and theories in real-world situations” is a learning outcome that describes skills learners will use beyond the course.
  • “Develop a case study in which you document a learner in a real-world setting” describes a course assignment aligned with that outcome but is not a learning outcome itself.

Identify and Prioritize Your Higher-Order End Goals

Course-level learning outcomes articulate the big-picture takeaways of the course, providing context and purpose for day-to-day learning. To keep the workload of course assessment manageable, focus on no more than 5-10 learning outcomes per course (McCourt, 2007). This limit is helpful because each of these course-level learning objectives will be carefully assessed at the end of the term and used to guide iterative revision of the course in future semesters.

This is not meant to suggest that students will only learn 5-10 skills or concepts during the term. Multiple shorter-term and lower-level learning objectives are very helpful to guide student learning at the unit, week, or even class session scale (Felder & Brent, 2016). These shorter-term objectives build toward or serve as components of the course-level objectives.

Bloom’s Taxonomy of Educational Objectives (Anderson & Krathwohl, 2001) is a helpful tool for deciding which of your objectives are course-level, which may be unit- to class-level objectives, and how they fit together. This taxonomy organizes action verbs by complexity of thinking, resulting in the following categories:

Remembering, understanding, applying, analyzing, evaluating, and creating.

Download a list of sample learning outcomes from a variety of disciplines.

Typically, objectives at the higher end of the spectrum (“analyzing,” “evaluating,” or “creating”) are ideal course-level learning outcomes, while those at the lower end of the spectrum (“remembering,” “understanding,” or “applying”) are component parts and day, week, or unit-level outcomes. Lower-level outcomes that do not contribute substantially to students’ ability to achieve the higher-level objectives may fit better in a different course in the curriculum.


Consider Involving Your Learners

Depending on the course and the flexibility of the course structure and/or progression, some educators spend the first day of the course working with learners to craft or edit learning outcomes together. This practice of giving learners an informed voice may lead to increased motivation and ownership of learning.

Alignment, where all components work together to bolster specific student learning outcomes, occurs at multiple levels. At the course level, assignments or activities within the course are aligned with the daily or unit-level learning outcomes, which in turn are aligned with the course-level objectives. At the next level, the learning outcomes of each course in a curriculum contribute directly and strategically to programmatic learning outcomes.

Alignment Within the Course

Since learning outcomes are statements about key learning takeaways, they can be used to focus the assignments, activities, and content of the course (Wiggins & McTighe, 2005). Biggs & Tang (2011) note that, “In a constructively aligned system, all components… support each other, so the learner is enveloped within a supportive learning system.”


For example, for the learning outcome, “learners should be able to collaborate effectively on a team to create a marketing campaign for a product,” the course should: (1) intentionally teach learners effective ways to collaborate on a team and how to create a marketing campaign; (2) include activities that allow learners to practice and progress in their skillsets for collaboration and creation of marketing campaigns; and (3) have assessments that provide feedback to the learners on the extent that they are meeting these learning outcomes.

Alignment With Program

When developing your course learning outcomes, consider how the course contributes to your program’s mission and goals (especially if such decisions have not already been made at the programmatic level). If course learning outcomes are set at the programmatic level, familiarize yourself with possible program sequences to understand the knowledge and skills learners are bringing into your course and the level and type of mastery they may need for future courses and experiences. Explicitly sharing your understanding of this alignment with learners may help motivate them and provide more context, significance, and/or impact for their learning (Cuevas, Matveev, & Miller, 2010).

If relevant, you will also want to ensure that a course with NUpath attributes addresses the associated outcomes. Similarly, for undergraduate or graduate courses that meet requirements set by external evaluators specific to the discipline or field, reviewing and assessing these outcomes is often a requirement for continuing accreditation.

See our program-level assessment guide for more information.

Transparency

Sharing course learning outcomes with learners makes the benchmarks for learning explicit and helps learners make connections across different elements within the course (Cuevas, Matveev, & Miller, 2010). Consider including course learning outcomes in your syllabus, so learners know what is expected of them by the end of a course and can refer to the outcomes as the term progresses. When educators refer to learning outcomes during the course, before introducing new concepts or assignments, learners receive the message that the outcomes are important and are more likely to see the connections between the outcomes and course activities.

Formative Assessment

Formative assessment practices are brief, often low-stakes (minimal grade value) assignments administered during the semester to give the instructor insight into student progress toward one or more course-level learning objectives (or the day- to unit-level objectives that stair-step toward the course objectives). Common formative assessment techniques include classroom discussions, just-in-time quizzes or polls, concept maps, and informal writing techniques like minute papers or “muddiest points,” among many others (Angelo & Cross, 1993).

Refining Alignment During the Semester

While it requires a bit of flexibility built into the syllabus, student-centered courses often use the results of formative assessments in real time to revise upcoming learning activities. If students are struggling with a particular outcome, extra time might be devoted to related practice. Alternatively, if students demonstrate accomplishment of a particular outcome early in the related unit, the instructor might choose to skip activities planned to teach that outcome and jump ahead to activities related to an outcome that builds upon the first one.
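The decision rule described above can be sketched as a tiny function. The mastery thresholds are invented placeholders for illustration, not a recommended policy.

```python
# Sketch of the in-semester adjustment rule: struggle -> add practice,
# early mastery -> skip ahead, otherwise proceed. Thresholds are
# illustrative assumptions, not recommendations.
def next_activity(class_mastery: float,
                  low: float = 0.5, high: float = 0.85) -> str:
    """Pick the next learning activity from a formative-assessment result."""
    if class_mastery < low:
        return "add extra practice on this outcome"
    if class_mastery >= high:
        return "skip ahead to the outcome that builds on this one"
    return "proceed as planned"

print(next_activity(0.3))  # add extra practice on this outcome
print(next_activity(0.9))  # skip ahead to the outcome that builds on this one
```

In practice the "mastery" input would come from whatever formative technique the course uses (a poll, a minute paper, a quiz), and the flexibility to act on it has to be built into the syllabus.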

Supporting Student Motivation and Engagement

Formative assessment, and the subsequent refinements to alignment that support student learning, can be transformative for student motivation and engagement in the course, with the greatest benefits likely for novices and for students worried about their ability to accomplish the course outcomes, such as those impacted by stereotype threat (Steele, 2010). Consider an instructor who sees that students are struggling with an outcome and decides to dedicate more time and learning activities to it. If that instructor were instead to move on to instruction and activities that built upon the prior learning objective, students who did not reach it would become increasingly lost, likely recognize that their efforts at learning the new content or skill were not helping them succeed, and potentially disengage from the course as a whole.


Artifacts for Summative Assessment

To determine the degree to which students have accomplished the course learning outcomes, instructors often assign some form of project, essay, presentation, portfolio, renewable assignment, or other cumulative final. The final product of these activities can serve as the “artifact” that is assessed. In this context, alignment is particularly critical: if the assignment does not adequately guide students to demonstrate their achievement of the learning outcomes, the instructor will not have concrete information to guide course design for future semesters. To keep assessment manageable, aim to design a single final assignment that creates the space for students to demonstrate their performance on multiple (if not all) course learning outcomes.

Not all courses are designed with a final assignment that allows students to demonstrate their highest level of achievement of all course learning outcomes; in such cases, the assessment process can use the course assignment that represents the highest level of achievement students had an opportunity to demonstrate during the term. However, learning objectives that do not come into play during the final may be better categorized as unit-level, rather than course-level, objectives.

Direct vs. Indirect Measures of Student Learning

Some instructors also use surveys, interviews, or other methods that ask learners whether and how they believe they have achieved the learning outcomes. This type of “indirect evidence” can provide valuable information about how learners understand their progress but does not directly measure students’ learning. In fact, novices commonly have difficulty accurately evaluating their own learning (Ambrose et al., 2010). For this reason, indirect evidence of student learning (on its own) is not considered sufficient for summative assessment.

Together, direct and indirect evidence of student learning can help an instructor determine whether to bolster student practice in certain areas or whether to simply focus on increasing transparency about when students are working toward which learning outcome.

Creating and Assessing Student Work with Analytic Rubrics

One tool for assessing student work is the analytic rubric: a matrix of characteristics and descriptions of what it might look like for student products to demonstrate those characteristics at different levels of mastery. Analytic rubrics are commonly recommended for assessment purposes, since they provide more detailed feedback to guide course design in more meaningful ways than holistic rubrics. Pre-existing analytic rubrics such as the AAC&U VALUE Rubrics can be tailored to fit your course or program, or you can develop an outcome-specific rubric yourself (Moskal, 2000 is a useful reference, or contact CATLR for a one-on-one consultation). The process of refining a rubric often involves multiple iterations of applying the rubric to student work and identifying the ways in which it does or does not capture the characteristics representing the outcome.
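Since an analytic rubric is essentially a matrix of criteria by mastery levels, a minimal sketch of one as a data structure may help; the criteria, levels, and descriptors below are invented examples, not a real rubric.

```python
# A minimal analytic rubric as a data structure: criteria x mastery
# levels, each cell a descriptor. Criteria and descriptors are invented
# examples for illustration.
rubric = {
    "argument": {1: "no clear claim",
                 2: "claim without support",
                 3: "claim supported with evidence"},
    "organization": {1: "no discernible structure",
                     2: "some structure",
                     3: "clear, logical structure"},
}

def score(ratings: dict) -> dict:
    """Map per-criterion ratings to descriptors plus a total score."""
    report = {c: rubric[c][level] for c, level in ratings.items()}
    report["total"] = sum(ratings.values())
    return report

result = score({"argument": 3, "organization": 2})
print(result["argument"])  # claim supported with evidence
print(result["total"])     # 5
```

Because each criterion is scored separately, the per-criterion columns (not just the total) are what feed course-level assessment: low scores concentrated in one row of the matrix point at a specific outcome to revise.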


Summative assessment results can inform changes to any of the course components for subsequent terms. If students have underperformed on a particular course learning objective, the instructor might choose to revise the related assignments or provide additional practice opportunities related to that objective, and formative assessments might be revised or implemented to test whether those new learning activities are producing better results. If the final assessment does not provide sufficient information about student performance on a certain outcome, the instructor might revise the assessment guidelines or even implement a different assessment that is more aligned to the outcome. Finally, if an instructor notices during the assessment process that an important outcome has not been articulated, or would be more clearly stated a different way, that instructor might revise the objectives themselves.

For assistance at any stage of the course assessment cycle, contact CATLR for a one-on-one or group consultation.

Ambrose, S. A., Bridges, M. W., DiPietro, M., Lovett, M. C., & Norman, M. K. (2010).  How learning works: Seven research-based principles for smart teaching . San Francisco, CA: John Wiley & Sons.

Anderson, L. W., & Krathwohl, D. R. (2001).  A taxonomy for learning, teaching and assessing: A revision of Bloom’s Taxonomy of Educational Objectives . New York, NY: Longman.

Bembenutty, H. (2011). Self-regulation of learning in postsecondary education.  New Directions for Teaching and Learning ,  126 , 3-8. doi: 10.1002/tl.439

Biggs, J., & Tang, C. (2011).  Teaching for Quality Learning at University . Maidenhead, England: Society for Research into Higher Education & Open University Press.

Cauley, K. M., & McMillan, J. H. (2010). Formative assessment techniques to support student motivation and achievement.  The Clearing House: A Journal of Educational Strategies, Issues and Ideas ,  83 (1), 1-6. doi: 10.1080/00098650903267784

Cuevas, N. M., Matveev, A. G., & Miller, K. O. (2010). Mapping general education outcomes in the major: Intentionality and transparency.  Peer Review ,  12 (1), 10-15.

Felder, R. M., & Brent, R. (2016).  Teaching and learning STEM: A practical guide . San Francisco, CA: John Wiley & Sons.

Krathwohl, D. R. (2002). A revision of Bloom’s taxonomy: An overview.  Theory into practice ,  41 (4), 212-218. doi:  10.1207/s15430421tip4104_2

Millis, B. J. (2007).  Writing and Assessing Course-Level Student Learning Outcomes . Office of Planning and Assessment, Texas Tech University.

Moskal, B. M. (2000). Scoring rubrics: What, when and how?  Practical Assessment, Research & Evaluation ,  7 (3).

Setting Learning Outcomes . (2012). Center for Teaching Excellence at Cornell University. Retrieved from  https://teaching.cornell.edu/teaching-resources/designing-your-course/setting-learning-outcomes .

Steele, C. M. (2010).  Whistling Vivaldi: How Stereotypes Affect Us and What We Can Do . New York, NY: WW Norton & Company, Inc.

Wiggins, G., & McTighe, J. (2005).  Understanding by Design (Expanded) . Alexandria, US: Association for Supervision & Curriculum Development (ASCD).


Assignment Method: Examples of How Resources Are Allocated


What Is the Assignment Method?

The assignment method is a way of allocating organizational resources in which each resource is assigned to a particular task. The resource could be monetary, personnel, or technological.

Understanding the Assignment Method

The assignment method is used to determine which resources are assigned to which department, machine, or center of operation in the production process. The goal is to assign resources in such a way as to enhance production efficiency, control costs, and maximize profits.

The assignment method has various applications in maximizing resources, including:

  • Allocating the proper number of employees to a machine or task
  • Determining the number of jobs that a given machine, factory, or manufacturing plant can handle
  • Assigning a number of salespersons to a given territory or territories
  • Assigning new computers, laptops, and other expensive high-tech devices to the areas that need them most, while lower-priority departments receive the older models

Companies can make budgeting decisions using the assignment method since it can help determine the amount of capital or money needed for each area of the company. Allocating money or resources can be done by analyzing the past performance of an employee, project, or department to determine the most efficient approach.

Regardless of the resource being allocated or the task to be accomplished, the goal is to assign resources to maximize the profit produced by the task or project.
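The allocation logic described above is the classic assignment problem: choose a one-to-one pairing of resources and tasks that maximizes total profit. A minimal brute-force sketch in Python follows; the profit figures are invented for illustration, and real-world instances with many resources would use the Hungarian algorithm rather than exhaustive search:

```python
from itertools import permutations

def best_assignment(profit):
    """Exhaustively search all one-to-one assignments of n resources
    to n tasks and return the pairing with the highest total profit.
    Fine for small n; O(n!) growth makes larger problems a job for
    the Hungarian algorithm instead."""
    n = len(profit)
    best, best_total = None, float("-inf")
    for perm in permutations(range(n)):
        total = sum(profit[r][t] for r, t in enumerate(perm))
        if total > best_total:
            best, best_total = perm, total
    return best, best_total

# Hypothetical profit (in $k) of placing each of three salespeople
# in each of three territories.
profit = [
    [90, 75, 60],
    [35, 85, 55],
    [125, 95, 90],
]
assignment, total = best_assignment(profit)
# assignment[r] is the territory given to salesperson r;
# here the optimum is (2, 1, 0) with total profit 270.
```

Note that the optimum sends salesperson 0 to their *third*-best territory: maximizing the overall total can override any individual's best match, which is exactly why the method looks at the whole matrix rather than assigning greedily.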

Example of Assignment Method

A bank is allocating its sales force to grow its mortgage lending business. The bank has over 50 branches in New York but only ten in Chicago. Each branch has a staff that is used to bring in new clients.

The bank's management team decides to perform an analysis using the assignment method to determine where its newly hired salespeople should be allocated. Past performance shows that the Chicago branches have produced fewer new clients than those in New York, a result of the bank's smaller market presence in Chicago.

As a result, management decides to allocate the new hires to the New York region, where the bank's greater market share can maximize new-client growth and, ultimately, revenue.



ORIGINAL RESEARCH article

A control banding method for chemical risk assessment in occupational settings in France

Abir Aachimi

  • 1 Department of Pollutant Metrology, Institut National de Recherche et de Sécurité (INRS), Paris, France
  • 2 Department of Expertise and Technical Consulting, Institut National de Recherche et de Sécurité (INRS), Paris, France
  • 3 Univ Rennes, EHESP, Inserm, Irset (Institut de recherche en santé, environnement et travail), Rennes, France

Background: This study describes a method whose aim is to help companies assess the chemical occupational risks related to labeled products and industrial chemical emissions. The method is intended to be used by industrial hygienists at the scale of one company. Both inhalation and cutaneous exposure routes are taken into account.

Methods: The method relies on a control-banding scheme. A work situation is described by exposure parameters such as the process or the local exhaust ventilation and by the hazard of the product. Each possible value of the parameters is associated with a “band,” which is associated with an integer value. The multiplication of these values results in a score, which represents a priority for intervention. The higher the score, the more the situation warrants investigation for implementing prevention measures, such as chemical substitution and the addition of local exhaust ventilation. To simplify communication, the priority is associated with a colored priority band: red for “very high priority,” orange for “high priority,” and green for “moderate priority.” The priority bands are computed for all work situations performed in a company.

Results: An example of the use of this method is described in a French façade insulation company.

Conclusion: A tool named Seirich was developed to implement this method and promote good practices for helping industrial hygienists in the prioritization of interventions for reducing chemical risk in France.

1 Introduction

Occupational health and safety consists of identifying, assessing, prioritizing, and reducing health risks related to exposure to workplace hazards to ensure the safety of employees. In the specific case of chemical risk assessment, a four-step approach is commonly used: identification of the hazard, characterization of the hazard, exposure assessment, and risk characterization (United States Environmental Protection Agency). 1 In this context, the combination of hazard and exposure data available at the workplace is used. The most accurate way to assess risk is, first, to identify all chemical products found at the workplace and estimate their potential adverse effects with dose–response relationships and, second, to measure workers' personal exposure through biomonitoring or atmospheric sampling, according to Landberg et al. ( 1 ). Nevertheless, this approach is often difficult for companies to implement in practice due to a lack of competencies, information, and resources. Indeed, the time and money required to conduct exposure measurements within the normative constraints ( 2 ), and the many uncertainties associated with characterizing the products' potential hazards, are not always tractable, or even suitable, for a company using thousands of chemical products. The “control banding” method can be used as an alternative solution, as it relies on simplified and more accessible parameters.

Control banding is a qualitative method to assess and manage workplace risks. It consists of matching the “class” for health hazards, exposure potential, and risk mitigation measures. The result of this matching is the generation of a “risk band” that represents the level of risk, which helps the hygienist prioritize and determine prevention action plans as described in Zalk and Nelson ( 3 ) and Zalk and Heussen ( 4 ). According to Naumann et al. ( 5 ), this method was first developed in the 1980s within the pharmaceutical industry to ensure the safety of workers regarding the use of products for which little information was available. To make this method user-friendly and accessible to all companies and to determine an appropriate control strategy for occupational risks, several tools were then developed. As an example, 30 years ago, the UK Health and Safety Executive developed “COSHH Essentials” described in Brooke ( 6 ) and Garrod et al. ( 7 ) and in the Health and Safety Executive ( 8 ) guidance, which is a control-banding tool that determines, through advice and guidance, a control approach to monitor substances that may affect workers’ health. More recently, in 2008, in the context of a Dutch program to reinforce the working conditions policy on hazardous substances, the web-based tool “Stoffenmanager,” described by Cherrie et al. ( 9 ) and Marquart et al. ( 10 ), was developed to identify chemical hazards and control exposure in the workplace. The hazard banding scheme consists of allocating substances to particular hazard groups based on their toxicological classification and labeling under the CLP regulation, as mentioned by Garrod et al. ( 7 ). In 2010, “EMKG” (Einfaches Maßnahmenkonzept Gefahrstoffe) was developed by the German Federal Institute for Occupational Safety and Health ( 11 ). 
As with the other tools, EMKG offers a simple approach to evaluate occupational risks and identify management measures requiring only a minimal number of input parameters.

In 2005, the French National Research and Safety Institute for occupational risk prevention (INRS) ( 12 ), in collaboration with the National Prevention and Protection Centre (CNPP), developed a simplified control banding method described by Vincent et al. ( 13 ). The method is intended to be used by anyone with minimal knowledge of chemical risks, using simple and easily accessible parameters. It evaluates the chemical risks resulting from the potential hazard of, and exposure to, the products used during a task. Later, in 2008, the EU CLP regulation was introduced, and the method was updated to use the “H” hazard statements instead of the “R-phrases.” The method thus remains based on a qualitative assessment of chemical risks, and its output is a relative prioritization of products and industrial chemical emissions for each task performed in the company. The aim of this prioritization is to identify work situations that warrant investigation for implementing prevention measures. Concretely, a hazard band and score are assigned to each product used with regard to its “H” hazard statements. Then, an exposure band and score are assigned, based on sub-scores for each descriptive parameter influencing exposure (process, protective equipment, etc.). Finally, the hazard score and the exposure score are multiplied, and the resulting score provides a relative prioritization of the chemical product ( Figure 1 ).


Figure 1 . Principles of assessment for chemical risks using the control-banding method.

In the first part of this article, the control banding method mostly used by French companies is described. In the second part of this article, a case study of a French insulation and house façade repair company is presented. The workstation chosen for the assessment was the “installation of thermal insulation,” which includes numerous tasks conducted with different products used or emitted.

2 Materials and methods

The proposed control banding method has a broad domain of applicability. Since it focuses on chemical products (mixtures of substances), it can prioritize any CLP-labeled chemical product used in the company, whatever its toxicity, since the starting point is the H statements. Chemical products not subject to the CLP regulation (for example, cosmetics, food products, or waste) and industrial chemical emissions can also be prioritized. The method consists of three main steps: (I) assignment of the hazard class and score; (II) assignment of the exposure class and score; and (III) calculation of the priority score and assignment of the colored priority band: red for “very high priority,” orange for “high priority,” and green for “moderate priority.” This method must be followed by the set-up of a prevention action plan to eliminate or reduce the risks threatening the health and safety of employees.

2.1 Step 1: assignment of the hazard class and score

In a preliminary task, a map of working areas, workstations, and tasks performed at the company must be prepared. Then, the chemical hazards for each task can be inventoried. For each product, the hazard may be related to a labeled product covered by the European labeling regulation (CLP; i.e., paints, inks, and solvents), a product not covered by the CLP labeling (i.e., flour, sugar, and cosmetic products) or industrial chemical emissions during a particular process without a precise description of products (i.e., wood sanding dust or welding fumes). The hazard is expressed as a hazard class and its corresponding score is expressed as an integer. The hazard class is attributed differently depending on the nature of the chemical:

• For the labeled products covered by CLP labeling, the hazard class is determined through the H and EUH statements available in the SDS or on the product label. Each H or EUH statement is associated with a hazard score according to gravity and potential for immediacy of effect mentioned by the statement. If a product has several hazard statements, the most severe is considered. An overview of the hazard classification for the inhalation route is presented in Table 1 . The same principle is used for dermal exposure (data not shown).


Table 1 . Overview of inhalation hazard classification according to gravity and potential for immediacy of effect.

• For the chemical products not covered by CLP labeling and the industrial chemical emissions, the hazard is defined by a consensus of a group of experts in the field of chemical risk prevention. The substances emitted, their toxicity and reactivity, as well as their generation are considered to determine these hazard classes.

In both cases, the assignment of hazard classes was carried out over several months by a group of more than 20 experts in the field of chemical risk prevention. The results are directly inspired by those of the Health and Safety Executive (HSE), 2008 ( 8 ) and are, in the end, very similar to those proposed by ( 14 ).
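The mechanics of Step 1 can be sketched as a lookup from H statements to scores. The actual class and score values live in Table 1 and are not reproduced in the text, so the numeric mapping below is hypothetical; only the most-severe-statement rule comes from the method itself:

```python
# Hypothetical sketch of Step 1. The H statements listed are real CLP
# statements, but the scores attached to them are invented placeholders:
# the article's real class/score values are given in Table 1, not here.
HAZARD_SCORE = {
    "H335": 1,     # may cause respiratory irritation
    "H373": 10,    # may cause damage to organs (repeated exposure)
    "H351": 100,   # suspected of causing cancer
    "H330": 1000,  # fatal if inhaled
}

def hazard_score(h_statements):
    """If a product carries several hazard statements, the most
    severe one determines the score (the rule stated in the method).
    Unknown statements fall back to the lowest score."""
    return max(HAZARD_SCORE.get(h, 1) for h in h_statements)

score = hazard_score(["H335", "H351"])  # the more severe H351 wins
```

The same lookup structure would apply to the dermal route, with a separate table of statement-to-score assignments.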

2.2 Step 2: assignment of the exposure class and score

For the inhalation route, five parameters are needed to evaluate the exposure score ( Figure 2 ). The different modalities of these parameters and their relative classification are listed in Table 2 .


Figure 2 . Exposure characterization parameters for the inhalation route.


Table 2 . Modalities and classes for the inhalation exposure parameters.

• The physical state can be “liquid,” “solid,” or “gas.” It is used to describe the potential of the substance to become airborne. For a liquid, this potential is defined by the vapor pressure; if the vapor pressure is unavailable, the temperature of use and the boiling temperature can be used instead (EUSES, the European Union System for the Evaluation of Substances). For a solid, including powders, the potential is related to dustiness: the finer the powder, the higher the potential. A gas is always assigned the maximum level, because gases are considered to generate maximum exposure.

• The type of process is used to define the level of dispersion of the product in the workplace. It can be defined by using the REACH process reference framework (PROC) defined in the European Chemicals Agency ( 15 ) guidance or by using the four modalities defined in the Technical Guidance Document on Risk Assessment ( 16 ).

• Collective protective equipment concerns the installation of ventilation controls and local exhaust ventilation, which contributes to the protection of employees’ health. These measures help to reduce the levels of exposure to chemicals for employees.

• The daily amount corresponds to the amount of product used during a specific task over a day (8 h) or during a work sequence. The daily amount is only used with dispersive processes; it defines the amount of product dispersed voluntarily in the work atmosphere.

• The duration of the task performed by the employee is considered when the most severe hazard occurs after repeated exposure over time (chronic exposure, i.e., carcinogenic products). On the contrary, the duration of exposure is not considered when the most severe hazard occurs after acute exposure (i.e., highly toxic products that can cause immediate irreversible effects).

For the dermal route, which includes both the skin and eyes, four parameters are needed to assess the exposure ( Figure 3 ). The different modalities of these parameters and their relative classification are listed in Table 3 .


Figure 3 . Exposure characterization parameters for dermal exposure.


Table 3 . Modalities and classes for dermal exposure parameters.

• The exposure scenario corresponds to the nature of the operations performed by the employee. There are four modalities for the exposure scenario describing a part of the exposure level.

• The exposed surface corresponds to the total surface area of skin that can be exposed to the product without considering personal protective equipment.

• The daily amount is taken into account in the same way as for the inhalation route. This parameter is considered when the effects appear because of exposure through skin penetration (systemic effects). It is not used when the product produces local effects.

• The duration is considered in the same way, with the same modalities, as for the inhalation route.

An integer value is allocated to each modality of the entry parameters listed above, and the exposure score is the product of these integer values.
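This multiplicative scheme can be sketched for the inhalation route as follows. The parameter modalities and their integer class values are invented placeholders, since the real values are given in Table 2 rather than in the text:

```python
# Hypothetical sketch of Step 2 (inhalation route). The five parameters
# match those named in the text; the modalities and integer class values
# are invented placeholders for the real ones listed in Table 2.
EXPOSURE_CLASS = {
    "physical_state": {"gas": 10, "volatile_liquid": 5, "coarse_powder": 1},
    "process":        {"open_dispersive": 10, "closed": 1},
    "protection":     {"none": 10, "local_exhaust": 1},
    "daily_amount":   {"large": 5, "small": 1},
    "duration":       {"full_shift": 5, "short": 1},
}

def exposure_score(situation):
    """The exposure score is the product of the integer class values
    assigned to each parameter's modality."""
    score = 1
    for parameter, modality in situation.items():
        score *= EXPOSURE_CLASS[parameter][modality]
    return score

score = exposure_score({
    "physical_state": "volatile_liquid",  # 5
    "process": "open_dispersive",         # 10
    "protection": "none",                 # 10
    "daily_amount": "large",              # 5
    "duration": "full_shift",             # 5
})
# 5 * 10 * 10 * 5 * 5 = 12500
```

Because the score is a product, any single low-class parameter (e.g., adding local exhaust ventilation) shrinks the whole score, which is what makes each parameter an actionable lever for prevention.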

2.3 Step 3: calculation of the priority score and assignment of the priority band

The priority score is calculated by multiplying the hazard score by the exposure score; a separate score is computed for the inhalation route and for the dermal route. The hazard score carries more weight in this product than the exposure score. The inhalation and dermal priority bands are then assigned according to their respective priority scores: “moderate priority” (green), “high priority” (orange), or “very high priority” (red). The priority bands are calculated for each work situation in the company, and the work situations are then sorted by priority.
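Step 3 then reduces to one multiplication and a threshold lookup per route. The band cut-offs below are invented for illustration; the article does not state the exact thresholds:

```python
# Hypothetical sketch of Step 3: priority score = hazard score x
# exposure score, mapped to a colored band. The thresholds (1_000 and
# 100_000) are invented cut-offs, not the method's real ones.
def priority_band(hazard_score, exposure_score,
                  high=1_000, very_high=100_000):
    """Return (priority score, colored band) for one exposure route."""
    score = hazard_score * exposure_score
    if score >= very_high:
        return score, "red (very high priority)"
    if score >= high:
        return score, "orange (high priority)"
    return score, "green (moderate priority)"

# e.g. a suspected carcinogen (hypothetical hazard score 100) handled
# in a dispersive, unventilated situation (exposure score 12500):
score, band = priority_band(100, 12500)
# score = 1_250_000, which lands in the red band under these cut-offs
```

Run once per route per work situation, this yields the sorted red/orange/green lists that drive the prevention action plan.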

3 Results. Example of application for a workplace: installation of thermal insulation

In 2019, a visit was conducted to a company specializing in the insulation and repair of house façades. The company was identified following a request from a hygienist at the French public health insurance service, who explained that the company's director wanted to evaluate the potential chemical risks within his company and establish an action plan to reduce them. The aim of the visit was to contact the director, understand his needs, and explain the usefulness of the method and how to use it. To facilitate this, the authors suggested carrying out an assessment of one of the company's workstations, from the inventory to the action plan, following the three steps defined above.

3.1 Step 1: assignment of the hazard class and score

Different workstations using chemical products were identified in the company: scaffolding, installation of thermal insulation, repair and renovation of façades, painting, and coating. The workstation chosen for the assessment was the “installation of thermal insulation,” due to the numerous tasks conducted with different products used or emitted. Information concerning the tasks and the products was collected during the company visit. Table 4 lists the eight tasks performed, together with the inventory of labeled products and industrial chemical emissions.

3.2 Step 2: assignment of exposure class and score

The details of the priority-level calculation for both the inhalation and dermal routes are shown in Table 5 for all labeled products used in the workstation. For industrial chemical emissions, the corresponding details are shown in Table 6.


Table 4 . Tasks performed in the workstation with labeled products and industrial chemical emissions.


Table 5 . Hazard and exposure data and levels assigned by the method to calculate the inhalation and dermal chemical priority scores for all labeled products used in the workstation.

3.3 Step 3: calculation of the priority score and assignment of the priority band

Figure 4 represents the sorted list of products used during each task according to their respective inhalation and dermal priority bands.


Figure 4 . Prioritization results according to the risk scores for inhalation and dermal (skin and eyes) exposure for labeled products and industrial chemical emissions.

Regarding the priorities for the inhalation route illustrated in this example, four products were used during tasks with “very high inhalation priority”: (1) the expanding foam used to fill fractional gaps, (2) the surface hardener, (3) the bonding resin used for the façade coating, and (4) the primer used as a fixative between the lattice and the plaster. Moreover, two industrial chemical emissions also showed “very high inhalation priority”: the dust emitted (1) during the surface preparation and installation of the starting rails and (2) during the treatment of protruding angles. The next seven products, along with the plastic combustion fumes released during the cutting of polystyrene insulation boards, had a high priority, as shown in orange in Figure 4 . Regarding priorities related to the dermal route, six product uses showed “very high priority”: the façade coat (used in both the façade coating and the finishing tasks), the surface hardener and the bonding resin used for the façade coating, the hydrochloric acid used for the finishing task, and the expanding foam used to fill the fractional gaps. Moreover, one industrial chemical emission also showed “very high dermal priority”: the dust released during the treatment of protruding angles.

The aim of this prioritization is to guide the development and follow-up of a preventive or corrective action plan that reduces occupational risks in the most problematic situations. To help the company determine the appropriate actions, an occupational hygienist from the French public health insurance service was asked to review the results, and a precise action plan was established. In particular, the substitution of the expanding foam, the surface hardener, and the bonding resin was required because their use was considered a “very high priority” for both the inhalation and dermal routes. The dust emitted during the surface preparation and the treatment of protruding angles presented a very high inhalation priority; since these tasks are performed outdoors, collective protective equipment is not applicable, so personal respiratory protective equipment is highly recommended to avoid the related risks. In addition, given the very high dermal priority of the treatment of protruding angles, the use of dermal protective equipment (goggles and gloves) is recommended during that task ( Table 6 ).


Table 6 . Hazard and exposure data and levels assigned by the method to calculate the inhalation and dermal chemical risk scores for all industrial emissions released in the workstation.

4 Discussion

The method can be used to attribute an intervention priority to work situations involving exposure to chemical products through inhalation and dermal routes. This method’s domain of applicability extends to almost all types of products except for non-specific powders (i.e., without CLP statements). Moreover, the priorities can be attributed according to any type of working situation, whatever the task or the process involved.

To evaluate the hazard, labeled products are associated with hazard classes based on their H and EUH statements. In addition to the major sources mentioned previously in this article, other tools such as Stoffenmanager, EMKG, and Ecetoc TRA, described in Bögi et al. ( 17 ), use similar schemes. As there is no reference methodology for assigning each hazard statement to a specific band, the assignments differ from tool to tool. In these tools, carcinogenic, mutagenic, and reprotoxic hazards are often associated with the most severe hazard band; in the proposed methodology, the most severe band refers to lethal acute toxicity. A common point between these tools is that products capable of harming unborn children or negatively affecting fertility are classified just below the most severe hazards. The qualitative identification of hazards involves subjectivity related to the use of expert judgments, which are based on training and experience. As with risks, hazard perceptions depend on many variables, such as personal and socio-demographic aspects and the professional experience of the evaluators, as noted by Skjong et al. ( 18 ). Since these tools are developed by different institutions and individuals, this may explain the differences in their hazard ranking tables. Moreover, as control banding is a relative method, the prioritization of hazards into five classes helps rank products according to their level of dangerousness, but the least severe class in the hazard table does not mean that the hazard is negligible.

To assess exposure, most of the models cited above evaluate the concentration of the substances contained in the products in the worker's breathing zone. This concentration is compared to occupational exposure limits (OELs) to assess the chemical risk, expressed as “above OEL” or “below OEL.” By comparison, in this method, the risk assessment concerns the use of products rather than individual substances. This is considered more convenient for field practitioners, since workers are usually exposed to the mixtures of substances that constitute the products and not to the substances individually. However, even though this method provides a risk assessment of the products used in the company, it does not replace the regulations on monitoring occupational exposure, which, in all cases, require employers to carry out exposure measurements for regulated substances considered to be of concern and to compare them with occupational exposure limit values.

In this method, the input parameters must be easily accessible. The parameters that are difficult to access, but which are essential for evaluation, are simplified. For example, the air change rate is represented by the type of mitigation system used and the product volatility, which is defined by the vapor pressure, and can be estimated by using the boiling point and the temperature of use if the vapor pressure is not available. Moreover, the frequency of use of products is not considered relevant because the aim is to evaluate the risk resulting from the exposure of the worker during the task (at the time he/she performs the work operation) and not at the workplace in general. The number of exposed workers in the workplace is an important parameter in risk management. However, regardless of the number of workers in the area of potential damage, the severity of this damage must be the same: this parameter does not influence the risk assessment. The volume and/or the surface area of the work zone is also not considered because it is not easily accessible to all users.

Even with a robust control banding methodology, chemical risk assessment remains difficult, and some issues related to particular substances can still be improved. First, when the product evaluated does not have an SDS or is not classified for health hazards under the CLP regulation, the method always assigns the minimum chemical risk level. Among these unclassified products are powders with non-specific effects (i.e., calcium carbonate, amorphous silica, and alumina). This type of chemical agent can cause various respiratory system pathologies resulting from pulmonary overload, or contain carcinogenic, allergenic, or irritant substances, as mentioned in a report by the French Agency for Food, Environmental and Occupational Health and Safety (ANSES) ( 19 ). The method underestimates these effects because such products have no classification under the CLP regulation; this limitation was reported during use, and a solution is currently being developed to rectify it. Second, endocrine disruptors are difficult to identify, and the evaluation of their health effects is a scientific challenge and an important public health issue, as noted by ANSES ( 20 ) and the ECHA ( 21 ). Despite these uncertainties, a preventive approach should be implemented to limit workers' exposure to the lowest possible level, particularly for pregnant women and women of childbearing age, as recognized in the INRS ( 22 ) report; a solution to address this issue will be proposed in the future. Third, the quality of the assessment depends on the quality of the information in the SDSs, which often do not provide complete or accurate information. For example, physicochemical properties such as vapor pressure are sometimes missing, and product descriptions of health effects need further improvement, as noted in the European Chemicals Agency ( 23 ) report.
This lack of data in the SDSs mainly concerns powders, especially nanometric ones. These powders are not always well identified in the SDSs, and information on their composition or potential hazards is often unavailable, which leads to misjudged risk assessments for this type of product. Hodson et al. ( 24 ) evaluated the reliability and accuracy of a sample of SDSs specific to engineered nanomaterials and showed that their information quality is insufficient to provide adequate data on the inherent health and safety hazards of these materials. Thus, relying on SDSs alone to characterize product hazards can be considered a limitation: even though each user is asked to verify that each SDS is adequate and up to date, the method cannot confirm the accuracy and quality of the data provided on a product's effects.

This method is implemented in software named “Seirich,” developed by the INRS in partnership with the French Ministry of Labor, the national health insurance, and French professional organizations. In addition to the control banding chemical risk assessment, the Seirich software guides users in the development and follow-up of a preventive or corrective action plan to reduce risks at work. A risk assessment is also provided for fire and explosion hazards, and the software offers regulatory information and good practices to guide the user in implementing preventive actions. It is available free of charge on the web (in French and English).

5 Conclusion

For more than 20 years, and particularly since the EU CLP regulation came into force for mixtures in 2015, the presented method has evolved constantly, with several improved versions implemented in the Seirich software. These revisions incorporate regulatory updates, ergonomic improvements, and new features. The method is now widely used for occupational chemical risk assessment in France, with more than 30,000 users, and the INRS is committed to promoting the tool and ensuring its continuous improvement. The tool represents a very important step in the risk prevention process, as it allows the identification and evaluation of the chemical risks to which employees are exposed in the workplace. This must be followed by the implementation of a specific prevention action plan based on the results obtained, with the aim of eliminating or reducing the identified risks as far as possible. Finally, to allow foreign companies to use it easily, the tool is also available in an English version, although it remains adapted to French regulations.

Data availability statement

Publicly available datasets were analyzed in this study. These data can be found at: https://doi.org/10.1080/15459624.2021.2023161 .

Author contributions

AA: Data curation, Investigation, Methodology, Software, Validation, Visualization, Writing – original draft. FM: Conceptualization, Formal analysis, Funding acquisition, Methodology, Project administration, Resources, Validation, Writing – review & editing. NB: Conceptualization, Project administration, Resources, Supervision, Validation, Writing – review & editing. FC: Conceptualization, Formal analysis, Funding acquisition, Project administration, Resources, Supervision, Validation, Writing – review & editing.

Funding

The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

The reviewer JM declared a past co-authorship with the author FV to the handling editor. The handling editor US declared a past co-authorship with the author FC.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

1. ^ https://www.epa.gov/risk/human-health-risk-assessment

2. ^ https://echa.europa.eu/fr/support/dossier-submission-tools/euses

3. ^ https://www.seirich.fr/

1. Landberg, HE , Westberg, H , and Tinnerberg, H . (2018). Evaluation of risk assessment approaches of occupational chemical exposures based on models in comparison with measurements. Available at: https://www.sciencedirect.com/science/article/pii/S0925753517315631 (accessed 1.22.20).


2. EN 689 . Workplace exposure – Measurement of exposure by inhalation to chemical agents – Strategy for testing compliance with occupational exposure limit values. Brussels, Belgium: European Committee for Standardization (2018). EN 689:2018.

3. Zalk, D , and Nelson, D . History and evolution of control banding: a review. J Occup Environ Hyg . (2008) 5:330–46. doi: 10.1080/15459620801997916


4. Zalk, DM , and Heussen, GH . Banding the world together; the global growth of control banding and qualitative occupational risk management. Saf Health Work . (2011) 2:375–9. doi: 10.5491/SHAW.2011.2.4.375

5. Naumann, BD , Sargent, E , Starkman, BS , Fraser, WJ , Becker, GT , and Kirk, GD . Performance-based exposure control limits for pharmaceutical active ingredients. Am Ind Hyg Assoc J . (1996) 57:33–42. doi: 10.1080/15428119691015197

6. Brooke, IM . A UK scheme to help small firms control health risks from chemicals: toxicological considerations. Ann Occup Hyg . (1998) 42:377–90. doi: 10.1093/annhyg/42.6.377


7. Garrod, ANI , Evans, PG , and Davy, CW . Risk management measures for chemicals: the “COSHH essentials” approach. J Expo Sci Environ Epidemiol . (2007) 17:S48–54. doi: 10.1038/sj.jes.7500585

8. Health and Safety Executive (HSE) . COSHH essentials: easy steps to control chemicals; [under the] Control of Substances Hazardous to Health Regulations; [control guidance sheets], HSG . Sudbury, Suffolk: HSE Books (1999).

9. Cherrie, J , Schneider, T , Spankie, S , and Quinn, M . A new method for structured, subjective assessments of past concentrations. Occup Hyg . (1996) 3:75–83.

10. Marquart, H , Heussen, H , Le Feber, M , Noy, D , Tielemans, E , Schinkel, J, et al. “Stoffenmanager”, a web-based control banding tool using an exposure process model. Ann Occup Hyg . (2008) 52:429–41. doi: 10.1093/annhyg/men032

11. Kindler, P , and Winteler, R . Anwendbarkeit von Expositionsmodellen für Chemikalien auf Schweizer Verhältnisse. Teilproject 1: Überprüfung der Modelle ‘EASE’und ‘EMKG-EXPO-TOOL’. Eidgenossisches Volkwirtschaftsdepartement EDV, Staatsekretariat fur Wirtschaft SECO—Arbeitsbedingungen, Chemikalien und Arbeit. (2010). Available at: https://www.baua.de/EN/Topics/Work-design/Hazardous-substances/EMKG/Easy-to-use-workplace-control-scheme-EMKG.html (accessed 9.28.21).

12. INRS . (2005). Méthodologie d’évaluation simplifiée du risque chimique: un outil d’aide à la décision - Article de revue. Available at: https://www.inrs.fr/media.html?refINRS=ND%202233 (accessed 7.20.21).

13. Vincent, R , Bonthoux, F , Mallet, G , Iparraguirre, JF , and Rio, S . Méthodologie d’évaluation simplifiée du risque chimique: un outil d'aide à la décision. Hygiène et Sécurité au Travail – Cahiers de notes documentaires . (2005) ND 2233:39–62. ⟨hal-03752064⟩

14. Arnone, M , Koppisch, D , Smola, T , Gabriel, S , Verbist, K , and Visser, R . Hazard banding in compliance with the new globally harmonised system (GHS) for use in control banding tools. Regul Toxicol Pharmacol . (2015) 73:287–95. doi: 10.1016/j.yrtph.2015.07.014

15. European Chemicals Agency . Guidance on information requirements and chemical safety assessment: Chapter 12: Use description, version 3.0 December 2015 . LU: Publications Office (2015).

16. Institute for Health and Consumer Protection. European Chemicals Bureau . (2003). Technical Guidance Document on Risk Assessment- Part I. Available at: https://www.scribd.com/document/442040269/tgdpart1-2ed-en-pdf

17. Bögi, C , Jacobi, S , Chang, H-Y , and Noij, D . ECETOC TRA version 3: Background and rationale for the improvements. Brussels, Belgium: European Centre for Ecotoxicology and Toxicology of Chemicals (2012).

18. Skjong, R , and Wentworth, BH . Expert judgment and risk perception In: ISOPE International Ocean and Polar Engineering Conference . ISOPE (2001).

19. ANSES . (2019). AVIS et RAPPORT de l’Anses relatif à la proposition de valeurs limites d’exposition à des agents chimiques en milieu professionnel - Evaluation des effets sur la santé sur le lieu de travail pour les poussières dites sans effet spécifique (PSES). Available at: https://www.anses.fr/fr/content/avis-et-rapport-de-lanses-relatif-%C3%A0-la-proposition-de-valeurs-limites-dexposition-%C3%A0-des (accessed 11.29.21).

20. ANSES . (2013). ANSES’s work and involvement in the area of endocrine disruptors. Available at: https://www.anses.fr/en/content/ansess-work-and-involvement-area-endocrine-disruptors (accessed 1.28.22).

21. ECHA and EFSA, with the technical support of the JRC; Andersson, N , Arena, M , Auteri, D , Barmaz, S, et al. Guidance for the identification of endocrine disruptors in the context of Regulations (EU) No 528/2012 and (EC) No 1107/2009. EFSA Journal . (2018) 16:e05311.

22. INRS . (2021). Perturbateurs endocriniens. Ce qu’il faut retenir. Available at: https://www.inrs.fr/risques/perturbateurs-endocriniens/ce-qu-il-faut-retenir.html (accessed 2.22.22).

23. European Chemicals Agency (EU body or agency) . Report on improvement of quality of SDS: WG “joint initiative ECHA forum – ECHA ASOs on improvement of the quality of SDS”: Forum . LU: Publications Office of the European Union (2019).

24. Hodson, L , Adrienne, E , and Herbers, R . An evaluation of engineered nanomaterial safety data sheets for safety and health information post implementation of the revised hazard communication standard. J Chem Health Saf . (2019) 26:12–8. doi: 10.1016/j.jchas.2018.10.002

Keywords: chemical risk assessment, control banding, chemical product, industrial hygiene, priority of intervention

Citation: Aachimi A, Marc F, Bonvallot N and Clerc F (2023) A control banding method for chemical risk assessment in occupational settings in France. Front. Public Health . 11:1282668. doi: 10.3389/fpubh.2023.1282668

Received: 24 August 2023; Accepted: 03 November 2023; Published: 13 December 2023.

Copyright © 2023 Aachimi, Marc, Bonvallot and Clerc. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Frederic Clerc, [email protected]
