
An automatically graded SQL query assignment handed in by a student

Creating an automatically graded SQL assignment in CodeGrade

In 30 seconds....

For a web development course in CodeGrade, I created an SQL assignment that was automatically graded. Using SQL in CodeGrade is easy and autograding SQL queries is rather intuitive. After some initial research, I was able to set up new SQL assignments in CodeGrade in a matter of minutes. In this blog, I will go over my considerations while creating and designing this assignment and explain how you can easily set up AutoTests for an SQL assignment.

Choosing SQL software

There are many relational database management systems available. From my experience working with teachers from many different universities and schools, the most popular relational database systems in education are MySQL, PostgreSQL and SQLite. Even though any of these could be an excellent choice for teaching SQL, we have to research which one is best suited for instant automatic grading via CodeGrade AutoTest.

The best suited database system is one with the shortest start-up time that is not error-prone and allows for quick prototyping. As CodeGrade AutoTest runs instantly for each student submission, building a fresh sandboxed environment every time, a system with the quickest set-up time is preferable. Furthermore, for educational purposes, we prefer a system that allows students to prototype quickly and is not error-prone (as I want to teach students SQL, not a specific database system).

To research this, I found some great comparisons online, for instance one from DigitalOcean. There we find that SQLite is our best option, as it is a 'serverless' database: instead of loading a database server, it reads from and writes to a database disk file directly. This simplifies the setup process and makes it quicker, since it eliminates the need to configure a server. It also allows for quick and error-resistant prototyping, as students do not need to configure a server and import the database on their local machine either, something that is necessary with MySQL and PostgreSQL.

Designing the SQL CodeGrade assignment

We can now design our SQL CodeGrade assignment. For this, we of course need an interesting sample database. I want a database that simulates a real-world example, to engage the students more, and that is timeless enough that I will not have to change my assignments in the near future.

I found the open-source Chinook database to be a good choice for this assignment. The Chinook data model represents a media webshop, including tables for artists, albums, media tracks, invoices and customers. It can be found on GitHub at https://github.com/lerocha/chinook-database. The SQLite database we need is in the ChinookDatabase/DataSources/Chinook_Sqlite.sqlite file.

Using this database, I designed my assignment to have students write a separate SQL query for each task and save it in its own file, e.g. customers_from_canada.sql. Some of the tasks I designed are:

  • Get a list of all customers who are from Canada; write your query in customers_from_canada.sql.
  • Get a list of all tracks composed by Jimi Hendrix, returning only the song names; write your query in songs_from_hendrix.sql.
  • Find the top 5 most sold songs, returning the name of the song and the number of times sold, sorted by number of times sold first and song name (ascending) second; write your query in top_5_songs.sql.
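For illustration, possible solutions to the first and last tasks might look as follows. This is only a sketch assuming the standard Chinook Customer, Track and InvoiceLine tables; column names should be checked against the actual schema:

```sql
-- customers_from_canada.sql
SELECT * FROM Customer WHERE Country = 'Canada';

-- top_5_songs.sql
SELECT t.Name, SUM(il.Quantity) AS TimesSold
FROM InvoiceLine AS il
JOIN Track AS t ON t.TrackId = il.TrackId
GROUP BY t.TrackId, t.Name
ORDER BY TimesSold DESC, t.Name ASC
LIMIT 5;
```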

With this setup, CodeGrade can nicely instruct students to hand in only these files, using Hand In Requirements. As I want students to hand in only the files I specify, while still allowing partial submissions (i.e. if they only have a couple of tasks finished), I chose the Deny All Files policy by default and allowed only these files to be handed in.

Finally, before moving on to the autograder configuration, I turned on the option for GitHub/GitLab uploads, as I encourage students to complete this assignment using version control.

Setting up automatic grading

One of the best features of CodeGrade’s autograder is that it allows you to install and run any software you like. Many packages are installed by default, but SQLite is not. So, the first thing I have to do is install SQLite in the AutoTest of my assignment, by adding the simple command `sudo apt install -y sqlite3` to the Setup Script of the AutoTest. While we chose SQLite for its performance and ease of use, we could just as easily have installed MySQL or PostgreSQL in this step instead.

Next, we have to upload our SQLite database as a fixture to our AutoTest. In this case, I upload the downloaded Chinook_Sqlite.sqlite file, which then becomes available in our AutoTest. One of the benefits of SQLite, and one of the reasons I prefer it for educational purposes, is that I do not have to install or load the database file before I can start testing; this greatly improves the performance of our AutoTest configuration while reducing its complexity.

With CodeGrade’s IO (Input and Output) Tests and SQLite, it is now very easy to test the different queries handed in by the student. In our IO Test, we simply run `sqlite3 $FIXTURES/Chinook_Sqlite.sqlite`, and in the different input and expected output pairs, we redirect the content of the handed-in query files to SQLite, for instance with `< songs_from_hendrix.sql` as input. The expected output can then simply be typed in or copied and pasted.
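To see locally what such an input/expected-output pair checks, here is a minimal sketch of the same redirection pattern, using a throwaway table as a stand-in for the Chinook fixture (the table contents and file name here are illustrative):

```shell
# Stand-in for the fixture: a tiny database with one table.
sqlite3 demo.sqlite "CREATE TABLE Customer(FirstName TEXT, Country TEXT);
INSERT INTO Customer VALUES ('Alice', 'Canada'), ('Bob', 'USA');"

# A student's handed-in query file.
echo "SELECT FirstName FROM Customer WHERE Country = 'Canada';" > customers_from_canada.sql

# What the IO Test effectively runs: redirect the query file into sqlite3
# and compare the printed rows against the expected output.
sqlite3 demo.sqlite < customers_from_canada.sql   # prints: Alice
```

In the real assignment, the database argument is the uploaded fixture, `$FIXTURES/Chinook_Sqlite.sqlite`.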

For my assignment, I turn on the Ignore All Whitespace option and turn off the Substring match option, as an SQL query that produces superfluous output is incorrect.

Automatic SQL assignment test by CodeGrade autograder

Job is done!

It took me only a couple of minutes to create tests for all 7 queries that students can hand in. Now that I have turned on my AutoTest, students see immediately whether their queries return the correct results every time they hand in, engaging them and motivating them to keep improving their answers until they have as many right as they can.

In addition to the automatic grading, I have also added a rubric category to manually assess query style and readability.

So that’s it, the job is done! We have created an automatically graded SQL assignment in CodeGrade. After some initial research (which you can skip now!) I was able to quickly set up automatic tests for many questions and assignments. Feel free to reach out to me via [email protected] if you have any questions regarding autograding SQL assignments, CodeGrade in general or if you would like to receive a copy of the assignments and queries I have created for this blog.

Devin Hillenius


International Conference on Database Systems for Advanced Applications

DASFAA 2017: Database Systems for Advanced Applications, pp. 352–363

Task Assignment of Peer Grading in MOOCs

  • Yong Han, Wenjun Wu & Yanjun Pu
  • Conference paper
  • First Online: 22 March 2017

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 10179)

In a massive online course with hundreds of thousands of students, it is infeasible to provide an accurate and fast evaluation for each submission. Researchers have proposed peer-grading algorithms for richly structured assignments. These algorithms can deliver fairly accurate evaluations by aggregating peer-grading results, but they do not improve the effectiveness of allocating submissions. Allocating submissions to peers is an important step before the peer-grading process. In this paper, inspired by the Longest Processing Time (LPT) algorithm often used in parallel systems, we propose a Modified Longest Processing Time (MLPT) algorithm, which can improve allocation efficiency. The dataset used in this paper consists of two parts: one part is collected from our MOOC platform, and the other is manually generated as a simulation dataset. We show experimental results to validate the effectiveness of MLPT on both types of datasets.

  • Task assignment
  • Crowdsourcing
  • Peer grading



Acknowledgments

This work was supported in part by grant from State Key Laboratory of Software Development Environment (Funding No. SKLSDE-2015ZX-03) and NSFC (Grant No. 61532004).

Author information

Authors and affiliations.

State Key Laboratory of Software Development Environment, School of Computer Science, Beihang University, Beijing, China

Yong Han, Wenjun Wu & Yanjun Pu

Corresponding author

Correspondence to Yong Han.

Editor information

Editors and affiliations.

Royal Melbourne Institute of Technology, Melbourne, Australia

Zhifeng Bao

Northwestern University, Evanston, Illinois, USA

Goce Trajcevski

University of New South Wales, Sydney, New South Wales, Australia

Lijun Chang

The University of Queensland, Brisbane, Queensland, Australia

Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper.

Han, Y., Wu, W., Pu, Y. (2017). Task Assignment of Peer Grading in MOOCs. In: Bao, Z., Trajcevski, G., Chang, L., Hua, W. (eds) Database Systems for Advanced Applications. DASFAA 2017. Lecture Notes in Computer Science(), vol 10179. Springer, Cham. https://doi.org/10.1007/978-3-319-55705-2_28

DOI: https://doi.org/10.1007/978-3-319-55705-2_28

Published: 22 March 2017

Publisher Name: Springer, Cham

Print ISBN: 978-3-319-55704-5

Online ISBN: 978-3-319-55705-2


Peer-graded Assignment: Milestone Report

Ali Nikseresht

This is a milestone report for the Week 2 peer-graded assignment of the Data Science Capstone course from Coursera. The objectives of this document are as follows:

1. Demonstrate that you’ve downloaded the data and have successfully loaded it in.
2. Create a basic report of summary statistics about the data sets.
3. Report any interesting findings that you amassed so far.
4. Get feedback on your plans for creating a prediction algorithm and Shiny app.

This report will also serve as a base for the next assignment report, hence it should be as clear and concise as possible. The content of this report is structured into 5 sections, as per the objectives mentioned above.

(1/5) Fetching and Loading the Training Dataset

The training dataset will be the basis for most of the capstone. To start, it must be downloaded from the link below and not from external websites: https://d396qusza40orc.cloudfront.net/dsscapstone/dataset/Coursera-SwiftKey.zip

Let’s set our working directory, specify the URL from which the data is to be downloaded, then download the data and uncompress it.
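The original code chunk is not shown here; a minimal sketch of this step, assuming the zip is saved in the working directory and extracts into a `final` directory, could look like:

```r
url <- "https://d396qusza40orc.cloudfront.net/dsscapstone/dataset/Coursera-SwiftKey.zip"

# Download and uncompress the dataset only if we have not done so already.
if (!file.exists("Coursera-SwiftKey.zip")) {
  download.file(url, destfile = "Coursera-SwiftKey.zip", mode = "wb")
}
if (!dir.exists("final")) {
  unzip("Coursera-SwiftKey.zip")
}
```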

(2/5) Examining Training Dataset: Data Statistics

The dataset should now be in the correct directory, as we set above. It contains data in 4 different languages: German, English, Finnish and Russian. Each language dataset contains data from 3 different sources: blogs, news and Twitter. While we may use all the different language corpora, for now let’s explore just the English dataset.

We have done a basic exploration of the English dataset. From the table above, we can observe that the datasets from the 3 different sources are more or less of equal size. The dataset from Twitter has the smallest file size (~159 MB) but double the number of lines compared to the other two sources. The Twitter data also has the fewest words and the lowest mean (average number of words per line). This is expected, because Twitter data is composed of short texts (with a defined character limit), and we also expect more ‘garbage words’ to clean from the Twitter data than from the blogs and news data.

Before we do some data exploratory analysis, let’s clean the data in the next section.

(3/5) Dataset Cleaning: Data Preprocessing

This is an important step towards a more accurate model. Because the datasets are quite big (up to 200 MB and 2 million lines from each source), to speed up the data exploration and to test the data cleaning, let’s take just 1% of the dataset as a sample. First, let’s load the sample data into a Corpus (a collection of documents), which is the main data structure used by tm.
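A sketch of this sampling and loading step (file paths assume the standard layout of the SwiftKey download; the sampling fraction and seed are illustrative):

```r
library(tm)

set.seed(1234)  # illustrative seed, for a reproducible sample
sample_lines <- function(path, frac = 0.01) {
  lines <- readLines(path, encoding = "UTF-8", skipNul = TRUE)
  sample(lines, length(lines) * frac)
}

blogs   <- sample_lines("final/en_US/en_US.blogs.txt")
news    <- sample_lines("final/en_US/en_US.news.txt")
twitter <- sample_lines("final/en_US/en_US.twitter.txt")

# Each sampled line becomes one document in the (virtual) corpus.
corpus <- VCorpus(VectorSource(c(blogs, news, twitter)))
```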

The result is a structure of type VCorpus (‘virtual corpus’ that is, loaded into memory) with 42,695 documents (each line of text in the source is loaded as a document in the corpus). The VCorpus object is a nested list, or list of lists. At each index of the VCorpus object, there is a PlainTextDocument object, which is essentially a list that contains the actual text data (content), as well as some corresponding metadata (meta) which can help to visualize a VCorpus object and to conceptualize the whole thing. Now, we can start with the data cleaning process to this corpus.

The tm package provides several functions to carry out these tasks, which are applied to the document collection as transformations via the tm_map() function. We will perform several data cleaning tasks through tm_map, as follows:

  • Converting the corpus documents to lower case
  • Removing stopwords (extremely common words such as “and”, “or”, “not”, “in”, “is”, etc.). We’ll first look at the standard English stop words.

We can see that there are still some modal verbs that are not in the standard list. Let’s add a few more stopwords.

  • Removing punctuation marks (periods, commas, hyphens etc).
  • Removing numbers.
  • Removing extra whitespace.
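The cleaning steps above can be sketched with tm’s standard transformations (the extra stopwords here are illustrative):

```r
library(tm)

extra_stopwords <- c("can", "will", "just", "get")  # illustrative additions

corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, removeWords, stopwords("english"))
corpus <- tm_map(corpus, removeWords, extra_stopwords)
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, removeNumbers)
corpus <- tm_map(corpus, stripWhitespace)
```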

We can improve further by removing unwanted list of words such as profanity/foul language, ‘garbage words’ or informal/slang languages (commonly used in twitter with less formal meaning). However, this should be done with extra care since it might alter the context or meaning of original sentences. Hence, we won’t do this as for now in the data cleaning steps.

(4/5) Data Exploratory Analysis: Interesting Findings

In this step, we will plot histograms of the most common words remaining in the data after the cleaning. We will use the RWeka library’s n-gram functions to create different n-grams from the corpus, then construct term-document matrices for the various n-gram tokens. We will then use the ggplot2 library to plot histograms of frequency vs. n-gram words.

The document-term matrix is used when you want each document represented as a row. This can be useful if you are comparing authors within rows, or if the data is arranged chronologically and you want to preserve the time series. We also removed sparse terms with a sparsity threshold of 0.9999. Sparsity refers to the threshold of relative document frequency for a term, above which the term will be removed; with our threshold, only terms that are more sparse than 0.9999 are removed.
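A sketch of the tokenization and matrix construction, shown here for bigrams (the min/max parameters of Weka_control set the n-gram order):

```r
library(tm)
library(RWeka)

# Tokenizer that splits each document into bigrams (pairs of words).
BigramTokenizer <- function(x) NGramTokenizer(x, Weka_control(min = 2, max = 2))

tdm <- TermDocumentMatrix(corpus, control = list(tokenize = BigramTokenizer))
tdm <- removeSparseTerms(tdm, 0.9999)  # drop terms sparser than 0.9999

# Term frequencies for plotting, most common first.
freq <- sort(rowSums(as.matrix(tdm)), decreasing = TRUE)
```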

The plot below shows histograms of the ten most frequent words (unigrams, bigrams and trigrams) in the corpus. The plots use distinct colors to make it easy to distinguish between the n-gram statistics. Let’s start with the unigrams.

We can observe that some of the most common unigram words are ‘said’, ‘now’ and ‘time’. This is interesting and somewhat expected, because the data source is a collection of news, blogs and tweets, which mostly report on what other people said about certain events and on the timing, when it happened. Let’s now look at the bigrams (pairs of words that appear together).

For the bigrams, we again observe that the most common words are about timing, such as ‘last year’ and ‘right now’, and also about places, such as ‘new york’ and ‘high school’. Notice also that several bigrams starting with ‘last’ appear at different ranks. Hence, when we design the text prediction, we can take a range of, e.g., the 10-20 most common words and rank them, which would result in suggestion options listed in the order ‘last year’, ‘last night’, ‘last week’. Let’s continue with the trigrams.

For the trigrams, we can see some congratulatory messages related to occasional events, such as ‘mother’s day’ and ‘new year’, which we expect to appear a lot in conversational text like tweets. We can also see expressions of time, place and certain subjects (e.g. ‘president barack obama’), which usually appear a lot in news or tweets when the subject is a popular topic.

Alternatively, the n-gram statistics can also be computed with the term_stats function from the corpus library, by specifying the number of n-grams to be calculated (below are the trigram results without using the sparse document matrix).

(5/5) Next Steps: Plan for prediction algorithm and Shiny app

We are done examining the dataset and have gathered some interesting findings from the exploratory analysis. Now we are ready to train and create our first predictive model. Machine learning is an iterative process in which we preprocess the training data, then train and evaluate the model, and repeat these steps iteratively to get a better-performing model based on our evaluation metrics.

The Shiny app we will build on top of our trained predictive model will have similar functionality to the SwiftKey app. It will have a text field for users to input their text, and it will pop up 3-5 suggestions for what the next word might be; the user can choose one, and it will be appended automatically to what they have already typed (saving a lot of typing!).

Before we end this report, it is important to note that each of the steps (data cleaning, preprocessing, model training and evaluation) is important, and each step needs to be re-evaluated continuously to get a really working and accurate ML model for our predictive text app. We look forward to the next report, on the predictive model and Shiny app we are going to build!

Am J Pharm Educ. v.79(8); 2015 Oct 25

Assessment of a Revised Method for Evaluating Peer-graded Assignments in a Skills-based Course Sequence

Objective. To evaluate the modified peer-grading process incorporated into the SOAP (subjective, objective, assessment, plan) note sessions in a skills-based pharmacy course sequence.

Design. Students assessed a de-identified peer’s SOAP note in a faculty-led peer-grading session followed by an optional grade challenge opportunity. Using paired t tests, final session grades (peer-graded with challenge opportunity) were compared with the retrospective faculty-assigned grades. Additionally, students responded to a survey using 4-point Likert scale and open-answer items to assess their perceptions of the process.

Assessment. No significant difference was found between mean scores assigned by faculty members vs those made by student peers after participation in 3 SOAP note sessions, which included a SOAP note-writing workshop, a peer-grading workshop, and a grade challenge opportunity. The survey data indicated that students generally were satisfied with the process.

Conclusion. This study provides insight into the peer-grading process used to evaluate SOAP notes. The findings support the continued use of this assessment format in a skills-based course.

INTRODUCTION

The formulation and documentation of patient care plans is a skill central to the provision of patient-centered care, supported by the Center for Advancement of Pharmacy Education (CAPE) 2013 Educational Outcomes. 1 To teach and evaluate this skill, Midwestern University College of Pharmacy – Glendale (MWU-CPG) incorporated a series of active-learning experiences relevant to pharmacy practice using the “subjective, objective, assessment, plan” (SOAP) written documentation format. Supplementing each SOAP note-writing workshop, students assessed and scored an anonymous peer’s written SOAP assignment using a faculty-developed rubric in a required faculty-led workshop, followed by a self-reflection opportunity.

This assessment method was designed to meet the Accreditation Council for Pharmacy Education (ACPE) Draft Standards 2016 recommendation that schools and colleges should “utilize teaching/learning methods that facilitate achievement of learning outcomes, actively engage learners, promote self-directed learning, and foster collaborative learning”. 2 In addition, this method helps address Domain 4 of the CAPE 2013 Outcomes, which focuses on educating future pharmacists to be self-aware by providing a structured opportunity to develop the skills of peer- and self-assessment. 1

Peer assessment not only serves as a grading procedure, but a method to practice self-evaluation skills. Peer assessment is a valuable form of assessment in higher education for a variety of assignments, and students generally perceive it as beneficial. 3 Peer assessment is used in pharmacy education for student case presentations, laboratory courses, medication management assignments, patient interviews, and advanced practice experiences. 4-9

A proposed benefit of the peer assessment strategy is providing an enhanced learning environment for students in which a high order of thinking is utilized and guided by evaluation. Other benefits include self-reflection, peer interaction, and a decreased faculty workload. 4 Finally, peer assessment can help to foster higher levels of responsibility among students by requiring them to be fair and accurate with the feedback and evaluations they make of their peers. 3 Limitations identified in previous research include a lack of student understanding of the purpose of the peer-grading session, lack of student experience in grading, and lack of confidentiality. In these studies, peer grades were generally higher than faculty grades for the assignments. 4-9

A previous study comparing student and faculty scores for 3 peer-graded assignments at the college resulted in the students assigning significantly lower scores than faculty members to student peers, which was inconsistent with other research. 10 One goal of using peer grading was to replace the need for traditional faculty SOAP note grading, so several changes were made to improve the consistency of faculty and peer-assigned scores. A standardized format, titled the Comprehensive Medication Management Plan (CMMP), was adopted for guiding SOAP-note writing, which detailed the expectations for each section and served as the basis for the grading checklists used in the peer-grading sessions. This approach attempted to provide the students clear and consistent expectations among different faculty members when formulating their written SOAP note.

Additionally, a “challenge” opportunity was implemented to allow each student the chance to self-evaluate their peer-graded note prior to the final grade assignment and submit a grade challenge if appropriate. 10 Outcomes of a “challenge” strategy, including the accuracy of final grades after this opportunity, has not been formally evaluated in the literature. The intent of these changes was to improve the scoring process so the peer-assigned scores would be similar to those assigned by a faculty grader, thus validating the peer-grading process used for the SOAP notes sessions as a stand-alone assessment.

The purpose of this study was to evaluate the modified peer-grading process incorporated into the SOAP note sessions. The specific aims were to compare the final recorded student grade (given after the SOAP note session and “challenge” opportunity) with a retrospective traditional faculty-assigned grade, and to gather the student’s perceptions about the process.

Professional Skills Development (PSD) is a required 8-quarter, workshop-based course sequence in a 3-year didactic curriculum (PSD 1-8). In the second year of the sequence, in 2012-2013, there were 3 SOAP note sessions. Each session consisted of 3 parts: a writing workshop, a peer-grading workshop, and a challenge opportunity, which built on the foundational knowledge taught in the therapeutics course sequence (PSD 5, 6, and 8). The final score received after the 3 sessions is considered a summative assessment of this knowledge.

This study involved a single cohort of students completing the second year of their PSD sequence. All students enrolled in PSD were required to participate in the 3 SOAP note sessions. The case content for each session corresponded to the material taught by the subject matter expert in the concurrent therapeutics course during each of the 3 quarters (hyperlipidemia, psychiatry, and cardiovascular). The subject matter expert also designed the case and grading checklist, led the peer-grading workshop, was present at the challenge opportunity, and graded the challenge submissions.

Organization of Sessions

For each SOAP note writing workshop, students were given a template prior to writing their SOAP note, which included a list of medical problems students were required to address for the patient case. Given the subjective and objective data, students were asked to individually write the assessment and plan for the patient. The first SOAP note-writing workshop was completed outside of class and submitted via Safe Assign, a program found in the Blackboard learning system (Blackboard Inc., Washington, DC), to check for plagiarism. The other 2 SOAP note-writing workshops were completed within a 2-hour time block in the university’s testing center. The subject matter expert was not present at the writing workshop. To aid in developing their assessment and plan, students were given a CMMP grid ( Appendix I ). Completion of the CMMP was optional during the SOAP note-writing workshops, although highly encouraged, as the grading checklists for each note were based on the same categories.

Peer-grading workshops occurred during class time within one week of completing the writing workshop. Each of the 3 peer-grading workshops was held in 2 sections (2 hours each) on the same day to accommodate one-half of the class at a time. This ensured that students who wrote the SOAP notes being evaluated were not present in that particular section of the peer-grading workshop. To hold students accountable for attending the session, for grading the SOAP note to the best of their ability, and for providing quality feedback to their peers, students wrote their names on the cover sheet of the evaluation form. The course coordinator kept track of who graded each SOAP note, but students and the subject matter expert were blinded throughout the entire process. The grading checklist was distributed first, prior to the cases to be graded. The subject matter expert directed the students to review the grading checklist point by point. This gave students an opportunity to reflect on what they had written on their own note and to ask questions for clarification before being held responsible for grading a peer’s note. Peer notes were then distributed, and students were given approximately 20 minutes to read through their assigned note and grade it based on the supplied grading checklist. Once students had completed grading, they again reviewed the rubric. At this time, students could ask the subject matter expert facilitating the session to consider, for points, an alternative written response not included in the grading checklist. Once the grading was complete, students were instructed to provide anonymous constructive written feedback to their peers.

A 1-hour challenge opportunity was offered within 24 hours after each peer-grading workshop for students to review their peer-graded note and to challenge their peer-assigned grade. Only students who attended the challenge opportunity had the ability to submit a challenge form. If students wanted to proceed with a challenge, they completed a preprinted form specifying which point(s) they wanted to challenge and providing justification for each. The subject matter expert was present at the challenge opportunity to answer additional student questions regarding the case or their responses. After the challenge opportunity, the subject matter expert reviewed the challenge forms and addressed only the inaccuracies noted by the student. However, the subject matter expert reserved the right to regrade the assignment if necessary. Grade challenges were not accepted after this session was completed.

Evaluation of Sessions

To evaluate the scores assigned during each of the 3 SOAP note sessions, the subject matter expert retrospectively graded each assignment using the standard grading checklist from the peer-grading workshop, which modeled the traditional grading method used at the college. This traditional method did not include a peer-grading workshop or challenge opportunity and resulted only in a scored rubric with written comments. The subject matter expert was blinded to student names and to all results of the peer-grading workshop and challenge opportunity.

The final session grades (peer-graded with challenge opportunity) were compared to the retrospective faculty-assigned grades for each session. Comparisons were made using paired t tests, with a p value of ≤0.05 considered significant. An anonymous survey was also administered to all students enrolled in the final quarter of the PSD course sequence. The survey was completed voluntarily during a course period after the last of the 3 SOAP note sessions. The survey assessed students’ opinions of the SOAP note sessions using 4-point Likert scale statements, multiple-choice questions, and open-ended questions. Descriptive statistics were used to analyze the survey findings. Approval for the study was obtained from the Institutional Review Board (IRB) at Midwestern University.
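The paired comparison described above can be sketched in a few lines. The scores below are invented for illustration (they are not the study data); the t statistic is computed from the per-submission differences, and in practice the p value would be read from the t distribution with n-1 degrees of freedom (e.g., via scipy.stats.ttest_rel).

```python
import math
import statistics

# Hypothetical scores for the same 6 SOAP notes, graded two ways
# (placeholder values, not data from the study).
peer_final_scores = [18.0, 19.5, 17.0, 20.0, 18.5, 19.0]
faculty_scores    = [17.5, 19.0, 17.5, 19.5, 18.0, 19.5]

# Paired t test: work on the per-pair differences.
diffs = [p - f for p, f in zip(peer_final_scores, faculty_scores)]
n = len(diffs)
t_stat = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))

# The p value would come from the t distribution with n - 1 degrees
# of freedom; p <= 0.05 would indicate a significant difference.
print(round(t_stat, 3))  # → 0.791
```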

EVALUATION AND ASSESSMENT

Scores for 453 peer-graded assignments (151 for session 1, 152 for session 2, and 150 for session 3) were analyzed. No difference was found between the mean scores for final recorded grades and retrospective faculty-assigned grades (Table 1). Differences between the mean scores for each of the 3 individual sessions were 0.8, 0.9, and 1.3, respectively. A subgroup analysis was performed to compare the final recorded grades and the retrospective faculty-assigned grades based on participation in the challenge opportunity (Table 2).

Final Recorded Grades Using Peer-Grading Method vs Retrospective, Traditional Faculty Grades for 3 Sessions


Final Recorded Grades Using Peer-Grading Method vs Retrospective, Traditional Faculty Grades for 3 Sessions Based on Participation in the Challenge Opportunity


We found that for all 3 sessions combined, 158 students (35%) challenged their peer-graded SOAP note for faculty review (93% of these challenges resulted in a score change), 243 (54%) reviewed their peer-graded note but did not challenge it, and 52 (11%) did not review their assignment. No difference was found for students who reviewed their submission, whether or not they opted to challenge. However, scores assigned to students who did not review their assignments in the challenge opportunity were significantly lower.
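The participation breakdown above follows directly from the reported counts; a quick check:

```python
# Reported participation counts across the 3 sessions combined.
challenged, reviewed_only, not_reviewed = 158, 243, 52
total = challenged + reviewed_only + not_reviewed  # 453 analyzed assignments

# Percentage shares, rounded to whole percent as in the text.
shares = [round(100 * n / total)
          for n in (challenged, reviewed_only, not_reviewed)]
print(total, shares)  # → 453 [35, 54, 11]
```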

One hundred twenty students (80%) participated in the survey assessing their perspectives on the SOAP note sessions (Table 3). Responses from 2 surveys were not included because the respondents did not meet the criterion of having attended at least 2 challenge opportunities. Regarding the SOAP note-writing workshop and subsequent peer-grading workshop, the majority of students either “agreed” or “strongly agreed” that the CMMP helped them structure their SOAP notes (89%), the instructions for participating in the sessions were clear (96%), the grading checklists were easy to follow (95%), the peer-grading sessions enhanced learning (70%), and faculty guidance during the peer-grading workshops allowed them to effectively grade a peer’s note (90%). Regarding the challenge opportunity, the majority of students either “agreed” or “strongly agreed” that the challenge opportunity allowed them to self-reflect on their work (73%), that the challenge sessions were necessary to receive a fair score on peer-graded assignments (89%), and that faculty members awarded points fairly after the grade challenge opportunity (77%). The lowest score was in response to whether peer-provided comments were constructive and useful in improving SOAP note skills; however, 54% of the students “agreed” or “strongly agreed” with this statement as well.

Students’ Opinions Regarding SOAP Note Sessions in the Professional Skills Development Course Sequence


The open-ended questions asked what the students liked most and least about the SOAP note sessions and solicited suggestions for improvement (Table 4). Students most commonly stated that their learning was enhanced by exposure to the various ways their peers approached the case. Other commonly reported responses included enhancement of disease state knowledge through immediate faculty feedback, fairness in grading, and improvement of general SOAP note-writing skills.

Student Comments Regarding Peer Grading Process in the Professional Skills Development Course Sequence


The majority of responses about what students liked least concerned the process taking too long or classmates asking too many questions. Other common responses included inconsistency in points awarded by peer graders, rubrics being too strict, and the rubric review before grading taking too much time or being too repetitive. The most frequent suggestions for improving the process were structuring the sessions to take less time and/or limiting the number of questions asked by other students. Other suggestions included revising the rubrics to be less stringent, holding graders accountable for inaccurate grading, and allowing notes to be typed instead of handwritten.

The revised peer-grading process for SOAP note sessions in the PSD course sequence resulted in the same overall scores as traditional faculty grading. This contrasts with a previous study, which showed higher traditional faculty-assigned scores than student peer-assigned scores.8 This result strengthens our confidence in using the process in place of traditional faculty grading. The most likely contributor to this result was the institution of the challenge opportunity, which allowed students to self-assess their work, review peer comments, and submit a challenge form if inaccuracies were noted. In the subgroup analysis, we noted a difference between the final recorded grades and the traditional faculty-assigned grades for those students who did not participate in the challenge session.

The use of the CMMP document was another possible contributor to the equalized scores, since it served as a structure by which all grading checklists were formatted. This may have streamlined the grading process by improving the clarity of the grading checklists used in the peer grading sessions.

Advantages and disadvantages of the peer-grading process were noted from both student and faculty perspectives. Students cited learning from their peers’ approaches to the case, immediate faculty feedback that enhanced disease state knowledge, and fair grading as advantages. From the faculty perspective, an advantage of this process was the reduced grading workload. Scoring the student-submitted challenge forms (n=158) took the subject matter experts a total of approximately 3 hours. In contrast, scoring each submission separately (n=453) for the purposes of this research took approximately 10 minutes per submission.
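The workload savings can be estimated from the figures above. The 3-hour total and the 10-minutes-per-submission rate are reported in the study; the total for grading every submission is derived here for illustration.

```python
# Faculty time under the challenge-based process vs grading every note.
challenge_forms = 158      # challenge forms scored by subject matter experts
all_submissions = 453      # total peer-graded assignments across 3 sessions

challenge_hours = 3.0                            # reported total
traditional_hours = all_submissions * 10 / 60.0  # at 10 minutes each

print(round(traditional_hours, 1))  # → 75.5 hours, vs 3.0 with challenges
```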

This process is not without disadvantages. Some students said the peer-grading workshop took too long because other students asked too many questions, even though the workshop was limited to 2 hours and was conducted during normal course time. Some also noted inconsistency among peer graders and may have viewed the challenge as an opportunity to double-check their peers rather than an opportunity for self-reflection. Finally, some students indicated a lack of constructive feedback from peers despite receiving examples from instructors of appropriate vs inappropriate feedback. One disadvantage from a faculty perspective was the negative connotation implied by the terms “peer grading” and “challenge opportunity.” Despite a formal orientation on the benefits of peer and self-assessment, some students perceived this process as a way for faculty members to decrease their workload and transfer the burden to students.

There are several limitations to this research. The 3 SOAP note sessions were each designed, conducted, and graded by 3 different subject matter experts and focused on a wide array of therapeutic topics. Given this, the students’ perceptions of the process could have been influenced by the therapeutic topic or faculty member. Additionally, the SOAP note sessions were conducted in the fall, winter, and summer quarters, while the survey was administered at the end of the final session, so the responses may reflect students’ opinions of the most recent session. Finally, this study reflects a specific cohort of pharmacy students enrolled in a workshop-based course sequence, and the findings may not be applicable to other schools or disciplines.

The SOAP note sessions in PSD have continued with subsequent student cohorts, without the use of traditional faculty grading. Several adjustments have been instituted based on the findings of this study. For example, in addition to the formal orientation, a more formal method for providing peer feedback has been instituted: after the peer-grading workshop, students are asked to state 3 things the author did well and 3 areas for improvement. Additionally, the procedure during the peer-grading workshop has been streamlined to allow students to grade peers’ submissions at the beginning of each session in pencil, eliminating the initial faculty review of the rubric with the class. Once all individual grading is complete, the subject matter expert reviews the rubric and entertains student questions and feedback. This change was based on feedback that students found it redundant to review the rubric twice and that doing so lengthened the overall session time. Finally, the description and name of this process have changed from “peer grading” to “peer assessment” to further convey the positive academic intent of the process. In addition, the term “challenge” opportunity was changed to “self-reflection and review workshop” to promote the individual benefits of participating in the process.

This study provides insight into the peer-grading process used in the PSD course sequence at the college. Our findings indicated no difference between grades assigned to students using the peer-grading process and traditional faculty grades for all 3 sessions. Additionally, students perceived that the process was beneficial to their learning, supporting the continued use of this assessment format in our curriculum. We feel the process has the potential to be successfully used in other professional programs as well.

Appendix 1.

Comprehensive Medication Management Plan (CMMP)



Data Science (Archived) — User16309093021964293586 asked a question.

In this assignment I need to solve SQL queries, but the Db2 service credentials are not working.

I need to enter the "user-id:password@hostname:port:security....etc. I created the service credentials in Db2, but they are not working.

Every time I execute the code, it says:

Connection info needed in SQLAlchemy format, example:

postgresql://username:password@hostname/dbname

or an existing connection: dict_keys([])

Can't load plugin: sqlalchemy.dialects:ibm_db_sa

Connection info needed in SQLAlchemy format, example:

or an existing connection: dict_keys([])

I don't know why I am getting this output. I have tried many times and also created new credentials, but the problem is not solved.

Please help if possible; I need to complete the peer-graded assignment as soon as possible.


Jose A (Courserian)

Hello @User16309093021964293586, you may also want to look or post in your course’s discussion forums, since more people there may know how to help you with this.

You can check out this article that explains how to find and use your course discussion forums.
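A note on the error itself: "Can't load plugin: sqlalchemy.dialects:ibm_db_sa" usually means the ibm_db_sa dialect (and its ibm_db driver) is not installed in the notebook environment, and the %sql magic then cannot parse the connection string. The sketch below only assembles the SQLAlchemy-format URL that the magic expects; every credential value is a placeholder, and the real hostname, port, and database name must come from your own Db2 service credentials.

```python
# Placeholder Db2 service-credential fields (not real values).
creds = {
    "username": "db2user",
    "password": "s3cret",
    "hostname": "example.databases.appdomain.cloud",
    "port": 32733,
    "database": "BLUDB",   # common default database name on Db2 on Cloud
}

# ibm_db_sa expects a standard SQLAlchemy URL; hosted Db2 instances
# typically also require security=SSL as a query parameter.
url = ("ibm_db_sa://{username}:{password}@{hostname}:{port}/{database}"
       "?security=SSL").format(**creds)
print(url)
```

In a notebook this URL would then be passed to the magic (e.g. `%sql $url`) after installing the packages (`pip install sqlalchemy ibm_db ibm_db_sa`), but confirm the exact package versions against your course instructions.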
