If a term or explanation is not provided below, please Contact Us.

# Correct: The number of students who answered this question correctly.

% Correct: The percentage of students who answered this question correctly.

% Grade: The percentage of questions the student answered correctly.

% -tile: The percentage of all the scores on the exam that are at or below that student's score. For example, if a student is at the 55th percentile, then 55% of the students scored at or below that student's score.
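For the curious, the calculation amounts to counting how many scores fall at or below a given score. Here is a rough Python sketch with made-up scores; the grading program's exact convention may differ slightly:

```python
def percentile_rank(score, all_scores):
    """Percent of scores at or below the given score (illustrative only)."""
    at_or_below = sum(1 for s in all_scores if s <= score)
    return 100 * at_or_below / len(all_scores)

# Made-up example: a raw score of 42 among these ten exam scores
scores = [30, 35, 38, 40, 42, 42, 45, 48, 50, 50]
print(percentile_rank(42, scores))  # 60.0 -> 60% of the scores are at or below 42
```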

Answer Key: The answers for the exam must be filled out on a bubble sheet. In the name field, fill in KKEY, a space, and your last name. Then fill in the answer bubble corresponding to each item.

Asterisk *: Indicates a bad response, i.e., no response or multiple responses.

Bad Responses: This reports which students marked multiple answers on a question or left a question unanswered. It also helps us spot possible problems with the exam when there is a high number of bad responses; for example, a student may have answered only a couple of questions and left the rest blank, or the answer key may have been filled out for 40 responses while the students only filled out 30.

Form or FM: This is used for those instructors who use the MAP. If you only have one answer key, then Form will state A for everyone.

High Score: Highest raw score of the exam with its percent.

ID Number: This reports the students' scores by ID rather than by name. Have students use their WSU ID.

Item: The question number.

Item Deleted by Instructor: This will appear on the Item Analysis Report when the instructor has left an item on the answer key blank or marked with multiple answers. (Our grading program can only read one correct answer per item.)

Items Missed: This reports which questions the student missed and the answer they marked for each.

Key: The answer given on the answer key.

Low Score: Lowest raw score of the exam with its percent.

MAP: This is used for instructors who have multiple answer keys for the same exam (e.g., version A and version B). See the MAP document for further explanation.

Mean: The average of the scores with its percent.

Median: The score with as many scores above it as below it. The report gives the median raw score and the median percent score.

N: Number of exams graded.

Point Biserial Coefficients (PBSR):

Broken down by item (found with the Response Analysis): This gives the PBSR for the correct response (*) and the PBSR each of the other responses would have had if it had been the correct choice. For example, let's say the correct response is B; 95.0% chose it, with a PBSR of .254, meaning that students who got this item correct probably did well on the exam. However, for the same item, some students chose A, which was incorrect; 2.5% chose it, and it had a negative PBSR, meaning that if A had been the correct response, those who chose it would have done poorly on the exam. It is reassuring to see negative PBSRs for the responses that are not the correct choice.

Not broken down by item (found with the Item Analysis): This tells you the relationship (correlation) between a specific item and the overall test score. A positive PBSR means that students who answered that item correctly had higher overall scores on the exam. A negative PBSR means that students who answered the item correctly had lower overall scores on the exam. (The opposite, of course, also holds: for a positive PBSR, students who answered the item incorrectly did worse on the exam; for a negative PBSR, students who answered incorrectly had higher scores on the exam.)
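For readers who want to see how the statistic itself is computed: the point biserial is simply the Pearson correlation between a 0/1 "answered this item correctly" variable and the students' total scores. The sketch below uses made-up data and is not the grading program's actual code:

```python
import math

def point_biserial(correct, totals):
    """Pearson correlation between a 0/1 item-correct vector and total scores (illustrative)."""
    n = len(correct)
    mean_c = sum(correct) / n
    mean_t = sum(totals) / n
    cov = sum((c - mean_c) * (t - mean_t) for c, t in zip(correct, totals)) / n
    sd_c = math.sqrt(sum((c - mean_c) ** 2 for c in correct) / n)
    sd_t = math.sqrt(sum((t - mean_t) ** 2 for t in totals) / n)
    return cov / (sd_c * sd_t)

# 1 = answered this item correctly, 0 = did not; totals are overall exam scores (made up)
correct = [1, 1, 1, 0, 1, 0, 1, 1]
totals  = [48, 45, 44, 30, 42, 28, 47, 40]
print(round(point_biserial(correct, totals), 3))  # 0.939: students who got the item right scored higher overall
```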

PPRM: If you would like any of your questions to be worth more than 1 point, you will need to fill out a bubble sheet giving the point value for each question. In the name field, write PPRM and fill in the corresponding bubbles. For each question that should be worth additional points, fill in the bubble for its point value. For example, if you want your 25-question exam to be worth 50 points, fill in the B or 2 bubble for every question. The PPRM must be accompanied by the answer key.

Quartiles: The distribution of scores is broken into quarters. The scores reported are the scores at each of the breaks.
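As an illustration of the idea, Python's built-in quantile function reports the same three cut points (Q1, median, Q3); its interpolation convention may not match the grading program's exactly:

```python
import statistics

scores = [30, 35, 38, 40, 42, 42, 45, 48, 50, 50]   # made-up example scores
# The three cut points that split the scores into quarters
print(statistics.quantiles(scores, n=4))   # [37.25, 42.0, 48.5] with Python's default method
```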

Raw Score: The number of questions the student got correct.

Reliability: Extent to which each item measures the same thing (internal consistency).
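The report does not spell out which internal-consistency formula it uses. Purely as an illustration, here is a sketch of KR-20, one common reliability estimate for right/wrong items, using a made-up response matrix:

```python
def kr20(item_matrix):
    """KR-20 internal-consistency estimate for 0/1 (right/wrong) items.
    item_matrix[s][i] is 1 if student s answered item i correctly, else 0.
    Illustrative only; the grading report may use a different formula."""
    n_students = len(item_matrix)
    k = len(item_matrix[0])
    totals = [sum(row) for row in item_matrix]
    mean_total = sum(totals) / n_students
    var_total = sum((t - mean_total) ** 2 for t in totals) / n_students
    sum_pq = 0.0
    for i in range(k):
        p = sum(row[i] for row in item_matrix) / n_students   # proportion correct on item i
        sum_pq += p * (1 - p)
    return (k / (k - 1)) * (1 - sum_pq / var_total)

# Four students, three items (made-up data)
responses = [
    [1, 1, 1],
    [1, 1, 0],
    [1, 0, 0],
    [0, 0, 0],
]
print(round(kr20(responses), 2))   # 0.75
```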

Response Analysis: This section, which is found with the Descriptive Statistics, is to help you in understanding the quality of your questions. For further explanation, please see our document on Response Analysis.

Skewness: A negative skewness score means that the majority of the students' scores were on the high end of the exam. If it is positive, then the majority were on the low end.
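To see why the sign reads that way, here is a small sketch that computes skewness as the average cubed standardized deviation, using made-up scores; the report's exact formula may differ:

```python
import math

def skewness(xs):
    """Average cubed standardized deviation (population form); negative when
    scores are bunched toward the high end, positive when bunched toward the low end."""
    n = len(xs)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / n)
    return sum(((x - mean) / sd) ** 3 for x in xs) / n

high_heavy = [50, 49, 48, 47, 46, 45, 30]   # most students scored near the top
print(round(skewness(high_heavy), 2))       # -1.78: negative, scores bunched at the high end
```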

Standard Deviation: How much, on average, the scores differ from the mean.
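A quick illustration with made-up scores (the report may use the population or the sample convention, so the exact value can differ slightly):

```python
import statistics

scores = [30, 35, 38, 40, 42, 42, 45, 48, 50, 50]   # made-up raw scores; the mean is 42
print(round(statistics.pstdev(scores), 2))          # about 6.21 (population form)
```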

Total Points: This is how many points the exam was worth. If there are 25 questions, the exam is worth 25 points, since each question is worth 1 point. If you would like your exam to be worth more points, then a PPRM must be filled out.

WT (Weight): How much each question is worth.

Z-score: This shows where the student's score fell relative to the mean, measured in standard deviation units.
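Concretely, the z-score is (student's score - mean) / standard deviation. A small sketch with made-up numbers:

```python
import statistics

scores = [30, 35, 38, 40, 42, 42, 45, 48, 50, 50]   # made-up raw scores
mean = statistics.mean(scores)                      # 42
sd = statistics.pstdev(scores)                      # about 6.21 (population convention assumed)

z = (48 - mean) / sd                                # z-score for a student who scored 48
print(round(z, 2))                                  # 0.97: nearly one standard deviation above the mean
```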