1. Define “validity” of a test. Describe why it is a significant concept and how test developers/teachers can make a test more valid. Use two specific examples to strengthen your statements/arguments.

The validity of a test refers to the degree to which the test measures what it is intended to measure. For example, a test may be set with the aim of measuring writing but turn out to be measuring grammar. Validity is significant because a test acts as a device for measuring particular skills, and an invalid test reports something other than the skill it claims to report. By focusing on the various types of validity, a developer or a teacher can make a test more valid. The teacher or developer should also strive to ensure that the test is valid in its appearance, that is, that it looks like a measure of what it claims to measure. For example, there should be a clear relationship between the tasks on the test and the results obtained once the test has been taken. A test becomes invalid when it is intended to measure a certain skill but ends up measuring a different one. For instance, a test meant to measure the students’ mastery of vocabulary may end up measuring the learners’ reading skills instead.

2. Define “reliability” of a test. Describe why it is a significant concept and how test developers/teachers can make a test more reliable. Use two specific examples to strengthen your statements/arguments.

The reliability of a test refers to the consistency of its results for the same group of students. Reliability is significant because it indicates that the scores reflect the learners’ ability rather than problems in the test itself. Test developers or teachers can check and improve the reliability of a test by examining the same group of students with the same test without altering the examination conditions. If the results lack consistency, this is an indication that the test is not reliable, and the examiners may have to set a new test. For example, a test developer may set a test intended to measure the mastery of grammar among groups of students who are viewed as being at the same level of performance. If one group performs exceptionally well compared to the others, this is a signal that there was a problem with the test. Similarly, another example arises when similar results are obtained after the examination has been taken repeatedly by one group of students; this demonstrates that the test can be relied upon.
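Consistency of results can also be expressed as a number. The sketch below is only an illustration of that idea, assuming made-up scores from two sittings of the same test by the same students; the Pearson correlation used here is one common way to estimate test-retest reliability and is an assumption of mine, not part of the coursework.

```python
# Illustrative sketch: test-retest reliability as the correlation between
# two sittings of the same test by the same students (scores are made up).

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

first_sitting = [72, 65, 88, 54, 91, 77]    # hypothetical grammar-test scores
second_sitting = [70, 68, 85, 58, 93, 75]   # same students, same test, later date

r = pearson_r(first_sitting, second_sitting)
print(f"test-retest reliability estimate: r = {r:.2f}")  # close to 1.0 -> consistent results
```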

 

 

3. Compare and contrast two types of tests: “progress tests” and “achievement tests.” Describe the features, strengths, and weaknesses, and provide two examples for each test. Especially select one instance for each test and analyze how/why it can be an appropriate example.

A progress test measures the learners’ progression over the course of their studies. One feature of this test is that it is administered in the middle of a semester. Its strength is that it enables instructors to understand how far the students have gone toward achieving the intended goals of the discipline. Its weakness is that it focuses only on a sample of what has been covered in class and therefore may not give a true account of the skill being measured. An example of a progress test is a midterm examination. On the other hand, an achievement test is administered at the conclusion of a semester. Its feature is that it covers all the topics taught in the classroom. Thus, its strength is that it gives an accurate account of the students’ mastery of the skill being measured. Its weakness is that it may not be a good measure of the students’ attainment of the discipline’s goals. The final examinations given at the end of a semester are an example of an achievement test. Final exams are an appropriate example of an achievement test because they are intended to measure the learner’s attainment of the course objectives.


 

4. Define “washback” as a cornerstone of testing. How can a test influence students’ learning positively and negatively, and why can this occur? Provide your own experiences about positive and negative influences of tests (one for each).

“Washback” refers to the impact of a test on the way students learn and the manner in which they are taught. A test can influence learners positively if it leads them to study toward the attainment of the discipline’s objectives. It can influence them negatively if it compels them to study only with the goal of attaining high marks in the examination. Both situations arise because of pressure to perform, whether from parents or from the demands of a particular dream career. For instance, in high school I worked hard to obtain high marks so that I could join the university of my choice. In doing so, I disregarded the goals of the disciplines themselves, which makes this an example of negative washback. An example of positive washback is that I read extensively in order to gain mastery of the English language, which was one of the objectives of taking the IELTS.

 

 

 

5. Compare and contrast two types of assessments: “summative assessments” and “formative assessments.” Define both summative assessments and formative assessments, and describe the features, strengths, and weaknesses, and provide three examples (tasks/tests) for each assessment (six in total). Describe the (potential) use of each example and clarify how each example can measure students’ learning.

Formative assessment refers to assessment that is carried out with the goal of providing feedback to the learners. Its strength is that it is conducted to establish the areas of improvement needed to boost the students’ performance. Its weakness is that it is time-consuming. An ideal example of a formative assessment is the diagnostic test, which is used to measure a learner’s knowledge of a particular topic being covered in the classroom. For instance, when a teacher requires a thesis statement from the students, this is helpful in the learners’ learning process because it gives the instructor an overview of each learner’s ability to put forward a central claim. Likewise, if the teacher asks the students for a summary, this improves their writing abilities.

On the other hand, summative assessments are conducted at the conclusion of a semester with the goal of determining whether the learners have achieved the purpose of the subject. The strength of this assessment is that it presents the students with a range of questions, which is important for measuring the attainment of the set objectives. Its flaw is that many of the questions may be irrelevant to measuring the learners’ understanding. Final projects completed toward the conclusion of a course are an ideal example of a summative assessment. They measure the students’ learning by giving teachers an opportunity to gauge each student’s overall attainment of the set goals.

 

 

6. Compare and contrast “computerized testing” and “web-based testing.” Define “computer-based testing” (CBT), “computer-adaptive testing” (CAT), and “web-based testing,” and describe the features, advantages, and drawbacks. Describe the current and potential use of these tests in L2 assessment processes. Specific examples (one or more for each) should be provided.

 

Computer-based testing (CBT) is essentially a paper test delivered on a computer. Its feature is that the computer presents the test through its own layouts, colors, and fonts. Computer-adaptive testing (CAT) requires the learners to respond to questions one at a time as they are presented on the computer. In this case, the computer’s selection of the next question depends on how the student responded to the previous one. For instance, if the student gives the correct answer to the first question, the next question will be more difficult. A simple sketch of this adaptive selection is given below.
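The following Python fragment is only a rough illustration of that adaptive idea, using a tiny made-up item bank and a simple move-up-on-correct, move-down-on-incorrect rule; operational CAT systems rely on statistical item-selection models rather than a rule of thumb like this.

```python
# Rough illustration of adaptive item selection (items and rules are made up).
# 1 = easiest level, 3 = hardest level.
item_bank = {
    1: ("Choose the correct article: ___ apple (a/an)", "an"),
    2: ("Choose the correct form: She ___ to school every day (go/goes)", "goes"),
    3: ("Choose the correct tense: By 2010 he ___ here for ten years (had lived/has lived)", "had lived"),
}

def run_adaptive_test(responses):
    """Present one item per response, moving to a harder item after a
    correct answer and to an easier item after an incorrect one."""
    level, score = 1, 0
    for given in responses:                 # 'responses' stands in for live student input
        question, correct = item_bank[level]
        print(f"[level {level}] {question}")
        if given.strip().lower() == correct:
            score += 1
            level = min(level + 1, max(item_bank))   # correct -> harder next item
        else:
            level = max(level - 1, min(item_bank))   # incorrect -> easier next item
    return score

print("items answered correctly:", run_adaptive_test(["an", "goes", "has lived"]))
```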

Web-based testing is formed by writing the test in Hypertext Markup Language (HTML) and delivering it over the internet. In this case, the students answer the questions through software known as a web browser. An example of this type of test is an online placement test.
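As a minimal sketch of how such an item reaches a learner’s browser, the fragment below assumes the Flask library and a single made-up grammar question; it is an illustration only, not part of the coursework or of any particular testing product.

```python
# Minimal sketch of one web-based test item served over HTTP (assumes Flask).
from flask import Flask, request

app = Flask(__name__)

QUESTION = "Choose the correct form: She ___ to school every day."
CORRECT = "goes"

@app.route("/item", methods=["GET", "POST"])
def item():
    if request.method == "POST":
        # The browser submits the learner's answer; the server scores it.
        answer = request.form.get("answer", "").strip().lower()
        verdict = "correct" if answer == CORRECT else "incorrect"
        return f"<p>Your answer is {verdict}.</p>"
    # The item itself is plain HTML rendered by the learner's web browser.
    return (
        f"<form method='post'><p>{QUESTION}</p>"
        "<input name='answer'><button type='submit'>Submit</button></form>"
    )

if __name__ == "__main__":
    app.run()  # serve locally; a browser pointed at /item acts as the test client
```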

 

 

7. Discuss authenticity of tests. Discuss why test authenticity is important, and how test developers/teachers can provide authentic tests to students. Offer two questions/tasks: one authentic question/task and one unauthentic question/task. For each, describe why/how it is authentic or unauthentic.

 

Test authenticity refers to the degree to which a test reflects real-life situations and so prepares the learners for them. Authenticity is important because it gives learners the motivation to face their day-to-day, real-life environment. Teachers can improve the authenticity of tests by including questions that capture the students’ day-to-day activities. For instance, the students may be asked to elaborate an answer by drawing on their own real-life experiences; this task is authentic because it is directly related to the students’ real lives. An unauthentic task, by contrast, is one in which the teacher requires the students to speak about the experiences of others, for instance to describe an experience related to elderly people, because it is removed from the students’ own lives.

 

 

 

8. Compare and contrast three assessment types: “selected-response,” “productive-response,” and “personal-response.” Describe the features, advantages, and drawbacks. Develop two questions for each assessment type (six in total), describe how your questions are consistent with the features, and evaluate strengths and weaknesses of your questions.

 

A selected-response assessment measures a student’s mastery of grammar or vocabulary by asking the student to choose an answer rather than produce one. Its strength is that it gives clear guidelines on how to study. However, it has the weakness of being unsuitable for some language skills. For instance, it works well when a student is required to identify words and their meanings in a given scenario on a vocabulary test, but the same question type is unsuitable for a writing test.

 

The difference between selected-response and productive-response assessment is that a productive-response assessment looks at how the learners’ production skills may be improved, whereas a selected-response assessment does not. The strength of productive-response assessment is that it is well suited to measuring learners’ abilities in the spoken and written language skills. Its weakness is that its application is limited because it cannot be used to assess grammar.

 

Personal-response assessment differs from productive-response and selected-response assessment in that it concentrates on tasks that ask for the students’ own opinions and views. Its strength is that it strengthens the learners’ thinking abilities. Its weakness is that it cannot be used as the only tool for developing language skills. An ideal example of a personal-response assessment is an assignment in which the students are required to give their views on a speech by somebody else. Such questions are significant to the learner because they are motivating and boost the learner’s confidence.