Thursday, November 17, 2011

Chapter 5: Assessing Listening


The assessment of listening abilities is one of the least understood, least developed and yet most important areas of language testing and assessment (Alderson & Bachman, 2001). In fact, Nunan (2002) calls listening comprehension “the poor cousin amongst the various language skills” because it is the most neglected skill area. As teachers we recognize the importance of teaching and then assessing the listening skills of our students, but, for a number of reasons, we are often unable to do this effectively. One reason for this neglect is the lack of culturally appropriate listening materials suitable for EFL/ESL contexts. The biggest challenges for teaching and assessing listening comprehension center on the production of listening materials. Indeed, listening comprehension is often avoided because of the time, effort and expense required to develop, rehearse, record and produce high-quality audio tapes or CDs.

Approaches to Listening Assessment


The discrete-point approach: became popular during the early 1960s with the advent of the Audiolingual Method. This approach isolated listening into separate elements. Question types used in this approach included phonemic discrimination, paraphrase recognition and response evaluation. Phonemic discrimination assesses students by their ability to distinguish minimal pairs like ship/sheep. Paraphrase recognition is a format that requires students to listen to a statement and then select the option closest in meaning to it. Response evaluation is an objective format that presents students with a question and then four response options. The underlying rationale for the discrete-point approach stemmed from two beliefs: first, that it was important to be able to isolate one element of language from a continuous stream of speech; second, that spoken language is simply written language presented orally.

The integrative approach: starting in the early 1970s called for integrative testing. The underlying rationale for this approach is best explained by Oller (1979:37) who stated “whereas discrete items attempt to test knowledge of language one bit at a time, integrative tests attempt to assess a learner’s capacity to use many bits at the same time.” Proponents of the integrative approach to listening assessment believed that the whole of language is greater than the sum of its parts. Common question types in this approach were dictation and cloze.
The communicative approach: arose at approximately the same time as the integrative approach as a result of the Communicative Language Teaching movement. In this approach, the listener must be able to comprehend the message and then use it in context. Communicative question formats must be authentic in nature.

Considerations in Designing Listening Tasks

A number of issues make the assessment of listening different from the assessment of other skills. Buck (2001) has identified several issues that need to be taken into account. They are: background, test content, texts, vocabulary, test structure, formats, item writing, timing, and skill contamination. Each is briefly described below and recommendations are offered.

Background
Background or prior knowledge needs to be taken into account because research suggests that background knowledge affects comprehension and test performance.

Test Content

The test specification might provide you with information about the following:

·         Text type
·         Speech type to be used
·         Mode of input
·         Varieties of English to be used
·         Scripted or unscripted input
·         Length of input

Texts
Many teachers feel that the unavailability of suitable texts is listening comprehension’s most pressing issue. The reason for this is that creating scripts which have the characteristics of oral language is not an easy task. Some teachers simply take a reading text and ‘transform’ it into a listening script. The transformation of reading texts into listening scripts results in contrived and inauthentic listening tasks because written texts often lack the redundant features which are so important in helping us understand speech.



Vocabulary
Research suggests that students must know between 90% and 95% of the words in a text or script in order to understand it. Indeed, the level of the vocabulary you use in your scripts can affect the difficulty of the task and hence students' comprehension. If your institution employs word lists, it is recommended that you seed vocabulary from your own word lists into listening scripts whenever possible.
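The 90-95% figure lends itself to a quick mechanical check. The sketch below is illustrative only and not part of the chapter: it tallies what fraction of a script's tokens appear on an institutional word list, using a made-up word list and script as placeholders.

```python
# Illustrative sketch: estimate what fraction of a listening script's
# words appear on a given word list. Word list and script are invented
# placeholders, not examples from the chapter.

import re

def coverage(script: str, word_list: set[str]) -> float:
    """Return the fraction of script tokens found on the word list."""
    tokens = re.findall(r"[a-z']+", script.lower())
    if not tokens:
        return 0.0
    known = sum(1 for t in tokens if t in word_list)
    return known / len(tokens)

word_list = {"the", "bus", "leaves", "at", "nine", "every", "morning"}
script = "The bus leaves at nine every morning."
print(f"{coverage(script, word_list):.0%}")  # prints 100%
```

A real implementation would also need to decide how to treat inflected forms (e.g. whether "leaves" counts as known when only "leave" is listed), which this sketch ignores.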


Test Structure
The way a test is structured depends largely on who constructs it. There are generally two schools of thought on this: the British and the American perspectives. British exam boards generally grade input from easy to difficult across a test and mix formats within a section; the easier sections come first, with the more difficult sections later. American exam boards, on the other hand, usually grade question difficulty within each section of an exam and follow the 30/40/30 rule. This rule states that 30% of the questions within a test or test section are of an easy level of difficulty; 40% of the questions represent mid-range levels of difficulty; and the remaining 30% of the questions are of an advanced level of difficulty. American exam boards usually use one format within each section. The structure you use should be consistent with the external benchmarks used in your program. It is advisable to start the test with an 'easy' question, as this will lower students' test anxiety by relaxing them at the outset of the test.
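The 30/40/30 rule above amounts to simple arithmetic. As an illustration only (the chapter does not specify how to handle rounding, so the policy here is an assumption), a test writer could allocate question counts like this:

```python
# Sketch of the 30/40/30 rule: allocate 30% easy, 40% mid-range and
# 30% advanced questions. Rounding policy is an assumption; the
# remainder is folded into the mid-range band to keep the total exact.

def thirty_forty_thirty(n_questions: int) -> dict[str, int]:
    easy = round(n_questions * 0.30)
    hard = round(n_questions * 0.30)
    mid = n_questions - easy - hard  # remainder keeps the total exact
    return {"easy": easy, "mid": mid, "hard": hard}

print(thirty_forty_thirty(20))  # {'easy': 6, 'mid': 8, 'hard': 6}
```

For a 20-question section this yields 6 easy, 8 mid-range and 6 advanced questions, matching the 30/40/30 split exactly.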


Formats
Perhaps the most important piece of advice here is that students should never be exposed to a new format in a testing situation. If new formats are to be used, they should be first practiced in a teaching situation and then introduced into the testing repertoire. Objective formats like MCQs and T/F are often used because they are more reliable and easier to mark and analyze. When using these formats, make sure that the N option is dropped from T/F/N and that three response options instead of four are utilized for MCQs. Remember that with listening comprehension, memory plays a role. Since students don’t have repeated access to the text, more options add to the memory load and affect the difficulty of the task and question. Visuals are often used as part of listening comprehension assessment. When using them as input, make certain that you use clear copies that reproduce well.



Timing
The length of a listening test is generally determined by one of two things: the length of the tape or the number of repetitions of the passages. Most published listening tests do not require the proctor to attend to timing. He/she simply inserts the tape or CD into the machine. The test is over when the proctor hears a pre-recorded “this is the end of the listening test” statement. For teacher-produced listening tests, the timing of a test will usually be determined by how many times the test takers are permitted to hear each passage. Proficiency tests like the TOEFL usually allow one repetition whereas achievement tests usually repeat the input twice. Buck (2001) recommends that if you’re assessing main idea, input should be heard once and if you’re assessing detail, input should be heard twice. According to Carroll (1972), listening tests should not exceed 30 minutes.
It is important to remember to give students time to pre-read the questions before the test and answer the questions throughout the test. If students are required to transfer their answers from the test paper to an answer sheet, extra time to do this should be built into the exam.


Skill contamination
Skill contamination is an issue that is regularly discussed with regard to listening comprehension. It is the idea that a test-taker must use other language skills in order to answer questions on a listening test. For example, a test-taker must first read the question and then write the answer. Whereas skill contamination used to be viewed negatively in the testing literature, it is now viewed more positively and termed ‘skill integration.’
