
Types of tests used in English Language Teaching Bachelor Paper

relevant and efficient for her and her colleague’s future teaching. The students were divided according to their English language abilities: the students with better knowledge were put together, whereas the weaker students formed their own group. This did not mean discrimination against the students. The teachers explained to the students why such a division was necessary: they wanted to provide appropriate teaching for each student, taking his/her abilities into account. The teachers also altered their syllabi to meet the demands of the students. The result proved satisfying. The students with better knowledge progressed, as no one held them back, while the weaker students gradually improved their knowledge, for they received more attention than they would have in a mixed group.

3.3 Progress test

Having discussed two types of tests that are usually used at the beginning, we can approach the test typically employed during the study year to check the students’ development: the progress test. According to Alderson (1996:217), a progress test will show the teacher whether the students have learnt the recently taught material successfully. Basically, the teacher intends to check certain items, not the general topics covered during the school or study year. Commonly, such a test is not very long and is designed to check the recent material; therefore, the teacher might expect his/her learners to get rather high scores. This type of test is meant to be used after the students have either worked through a set of units on a theme or covered a definite topic of the language. It will show the teacher whether the material has been successfully acquired or whether the students need additional practice before starting new material.

A progress test basically consists of activities based on the material the teacher intends to check. To evaluate it, the teacher can work out a system of points that will later be converted into a mark. Typically, such tests do not influence the students’ final mark at the end of the year.
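To illustrate what such a system of points might look like, the following minimal sketch (in Python, purely for illustration) converts a raw score into a mark; the 10-point scale and the band boundaries are assumptions made for the example, not taken from any of the cited sources.

def score_to_mark(points_earned: int, points_possible: int) -> int:
    """Convert raw progress-test points into a mark on an assumed 10-point scale."""
    percent = 100.0 * points_earned / points_possible
    # (minimum percentage, mark) pairs, highest band first; thresholds are illustrative
    bands = [(95, 10), (85, 9), (75, 8), (65, 7), (55, 6),
             (45, 5), (35, 4), (25, 3), (15, 2)]
    for minimum, mark in bands:
        if percent >= minimum:
            return mark
    return 1

# Example: a 20-point progress test in which a student earned 16 points (80%).
print(score_to_mark(16, 20))  # prints 8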

School authorities also require the teachers to conduct progress tests; however, the teachers themselves decide on the necessity of applying them. Nevertheless, we can claim that a progress test is an inevitable part of the learning process. We can even venture to declare that progress tests facilitate material acquisition in a way: the students preparing for the test look through the material again, and there is a chance it will be transferred to their long-term memory.

Further, we can turn to Alderson (ibid.), who presumes that such testing could function as a motivating factor for the learners, for success will develop the students’ confidence in their own knowledge and motivate them to study further more vigorously. If there are two or three students whose scores are rather low, the teacher should encourage them by providing support in future and convey the idea that studying hard will allow them to catch up with the rest of the students sooner or later. The author of the paper, drawing on her experience, agrees with this statement, for she noticed that weaker students who managed to write their test successfully became proud of their achievement and started working better.

However, if the majority of the class scores rather low, the teacher should be cautious. This could be a signal that either something is wrong with the teaching or the students are poorly motivated or lazy.

3.4 Achievement tests

Apart from a progress test, teachers employ another type – the achievement test. According to the Longman Dictionary of LTAL (3), an achievement test is a test which measures the language someone has learned during a specific course of study or program. Here the progress is significant and, therefore, is the main point tested.

Alderson (1996:219) posits that achievement tests are “more formal”, whereas Hughes (1989:8) assumes that this type of test will fully involve teachers, for they will be responsible for preparing such tests and giving them to the learners. He largely repeats the dictionary definition of achievement tests, adding only that they measure the success of individual students, groups of students, or the courses themselves.

Furthermore, Alderson (ibid.) notes that achievement tests are mainly given at definite times of the school year. Moreover, they can be extremely important for the students, for they determine whether the students pass or fail.

At this point the author of the paper intends to compare progress and achievement tests. At first glance these two types might seem similar; however, it is not so. Drawing on the facts listed above (see sub-chapter 3.3), we can state that a progress test is typically used during the course to check the acquisition of a selected portion of the material. An achievement test checks the acquisition of the material as well; however, it differs greatly in its time of application. We basically use an achievement test at the end of the course to check the acquisition of the material covered during the whole study year, not bits of it as is the case with a progress test.

Quoting Hughes (ibid.) we can differentiate between two kinds of

achievement tests: final and progress tests. Final tests are the tests that

are usually given at the end of the course in order to check the students’

achieved results and whether the objectives set at the beginning have been

successfully reached. Further, Hughes highlights that ministries of education, official examining boards, school administrations and even the teachers themselves design these tests. The tests are based on the curriculum and the course that has been studied. We assume it is a well-known fact that teachers are usually responsible for composing such tests, and that this requires careful work.

Alternatively, Alderson (ibid.) mentions two uses of achievement tests: formative and summative. The notion of a formative test denotes the idea that, after evaluating the results of the test, the teacher will be able to reconsider his/her teaching and syllabus design, and even slow down the pace of studying to consolidate the material if necessary. Notwithstanding, these reconsiderations will not affect the present students who have taken the test; they will be applied to future syllabus design.

Summative use deals precisely with the students’ success or failure. The teacher can immediately take up remedial activities to improve the situation.

Further, Alderson (ibid.) and Heaton (1990:14) stipulate that designing an achievement test is rather time-consuming, for an achievement test is basically devised to cover a broad range of the material covered during the course. In addition, one and the same achievement test can be given to more than one class at school to check both the students’ progress and the teachers’ work. At that point it is essential to consider the material covered by different classes or groups: you cannot ask the students about what they have not been taught. Heaton (ibid.) emphasises the close cooperative work of the teachers as a crucial element in test design. However, in the school where the author of the paper used to work, the teachers did not cooperate in designing achievement tests; each teacher was free to write the test that best suited his/her children.

Developing the topic, we can focus on Hughes’ idea that there is an approach to designing a test called the syllabus-content approach. The test is based on the syllabus studied or a book used during the course. Such a test can be described as a fair test, for it focuses mainly on the detailed material that the students are supposed to have studied. Hughes (ibid.) points out that if the test is inappropriately designed, it can result in its unsuccessful accomplishment. Sometimes the demands of the test may differ from the objectives of the course; therefore, the test should be based directly on the objectives of the course. Consequently, this will influence the choice of books appropriate to the syllabus and the syllabus itself. The backwash will then be positive not only for the test, but also for the teaching. Furthermore, we should mention that the students have to know the criteria according to which they are going to be evaluated.

To conclude we shall state again that achievement tests are meant to

check the mastery of the material covered by the learners. They will be

great helpers for the teacher’s future work and will contribute a lot to

the students’ progress.

3.5 Proficiency tests

The last type of test to be discussed is the proficiency test. According to the Longman Dictionary of LTAL (292), a proficiency test is a test which measures how much of a language a person knows or has learnt. It is not bound to any curriculum or syllabus, but is intended to check the learner’s language competence. Although some preparation and administration is done before taking the test, it is the test’s results that are the focus. Examples of such tests include the American Test of English as a Foreign Language (further in the text, TOEFL), which is used to measure learners’ general knowledge of English in order to allow them to enter higher educational establishments or to take up a job in the USA. Another proficiency test is the Cambridge First Certificate test, which has almost the same aim as TOEFL.

Hughes (1989:10) gives a similar definition of proficiency tests, stressing that it is not the training that is emphasised, but the language. He adds that ‘proficient’ in the case of proficiency tests means possessing a certain ability to use the language for an appropriate purpose. This denotes that the learner’s language ability can be tested in various fields or subjects (art, science, medicine, etc.) in order to check whether the learner can meet the demands of a specific field or not. This could refer to TOEFL tests. Apart from TOEFL we can speak about the Cambridge First Certificate test, which is general and does not concern any specific field. The aim of this test is to reveal whether the learners’ language abilities have reached a certain set standard. The test can be taken by anyone who is interested in testing his/her level of language knowledge. There are special test levels, which can be chosen by a candidate; if a candidate has passed the exam, s/he can take another one of a different level. However, none of these tests are free of charge, and in order to take them an individual has to pay.

Following Hughes (ibid.), who supposes that the only common factor of such tests is that they are not based on any courses but are intended to measure the candidates’ suitability for a certain post or course at university, we can add that in order to pass these tests a candidate has to attend special preparatory courses.

Moreover, Hughes (ibid.) believes that proficiency tests affect learners more in a negative way than in a positive one.

The author of the paper both agrees and disagrees with Hughes’ statement. Definitely, such a test could leave the testee depressed and exhausted, as it is rather long. Moreover, proficiency tests are rather impartial; they are not testee-friendly. However, there is a useful factor amongst the negative ones: the preparation for proficiency tests, for it covers all the language material, starting from grammar and finishing with listening comprehension. All four skills are practised during the preparation course: various reading tasks and activities are incorporated; writing is stressed, focusing on all possible types of essays, letters, reviews, etc.; and speaking is practised as well. The whole material is consolidated many times.

To summarise, we can claim that there are different types of tests serving different purposes. Moreover, they are all necessary for the teacher’s work, for all of them, apart from the proficiency test, can contribute to successful material acquisition by learners.

Chapter 4

Ways of testing

In this chapter we will attempt to discuss various types of testing

and if possible compare them. We will start with the most general ones and

move to more specific and detailed ways of testing.

4.1 Direct and indirect testing

The first types of testing we intend to discuss are direct and indirect testing. First, we will try to define each of them; secondly, we will endeavour to compare them.

We will commence our discussion with direct testing, which, according to Hughes (1989:14), means directly involving the skill that is supposed to be tested. This means that when applying direct testing the teacher is interested in testing one particular skill; e.g. if the aim of the test is to check listening comprehension, the students will be given a test that checks their listening skills, such as listening to a tape and doing the accompanying tasks. Such a test will not engage the testing of other skills. Hughes (ibid.) emphasises the importance of using authentic materials, though we stipulate that the teacher is free to decide for him/herself what kind of material the students should be provided with. If the teacher’s aim is to teach the students to comprehend real, native speech, s/he will apply authentic material in teaching and later, logically, in tests. Developing the idea, we can cite Bynom (2001:8), who assumes that direct testing introduces real-life language through authentic tasks. Consequently, it will lead to the use of role-plays, summarising the general idea, providing missing information, etc.

Moving further and analysing the statements made by the linguists (Bynom, 2001; Hughes, 1989), we can posit that direct testing will be task-oriented, effective and easy to manage if it tests such skills as writing or speaking. This can be explained by the fact that the tasks intended to check the skills mentioned above give us precise information about the learners’ abilities. Moreover, when testing writing the teacher requires the students to complete a certain task, such as an essay, a composition or a reproduction, and this will be precisely the point the teacher intends to check. There will be certain demands imposed on a writing test; the teacher might be interested only in the students’ ability to produce the right layout of an essay without taking grammar into account or, on the contrary, might be more concerned with grammatical and syntactic structures. As concerns testing speaking skills, the author of the paper does not support the idea promoted by Bynom that it can be treated as direct testing. Definitely, there will be a certain task to involve the speaking skills; however, speaking is not possible without the employment of listening skills. This in turn means that apart from speaking skills the teacher will test the students’ ability to understand the speech they hear, thus involving listening skills as well.

It is said that the advantage of direct testing is that it is intended to test certain specific abilities, and preparation for it usually involves persistent practice of those skills. Nevertheless, the skills tested are deprived of the authentic situation, which may later cause difficulties for the students in using them.

Now we can shift to another notion – indirect testing. It differs from the direct one in that it measures a skill through some other skill. This could mean the incorporation of various skills that are connected with each other, e.g. listening and speaking skills.

Indirect testing, according to Hughes, tests the use of the language in real-life situations. Moreover, it suits all situations, whereas direct testing is bound to certain tasks intended to check a certain skill. Hughes (ibid.) assumes that indirect testing is more effective than the direct one, for it covers a broader part of the language. This means that the learners are not constrained to one particular skill and one relevant exercise; they are free to employ all four skills, and what is checked is their ability to operate with those skills and apply them in various, even unpredictable situations. This is the true indicator of the learner’s real knowledge of the language.

Indirect testing has more advantages than disadvantages, although the only drawback, according to Hughes, is that this type of testing is difficult to evaluate. It can be frustrating to decide what to check and how to check it; whether grammar should be weighted higher than composition structure, or vice versa. The author of the paper agrees with that; however, drawing again on her experience at school, she must claim that it is not so easy to apply indirect testing. It can be rather time-consuming, for it is a well-known fact that the duration of a class is just forty minutes; moreover, it is rather complicated to construct an indirect test – it demands a lot of work, and our teachers are usually overloaded with a variety of other duties. Thus, we can only rely on the course books that supply us with a variety of activities involving the cooperation of all four skills.

4.2 Discrete point and integrative testing

Having discussed the kinds of testing that deal with general aspects, such as certain skills and a variety of skills in cooperation, we can come to more detailed types such as discrete point and integrative testing. According to the Longman Dictionary of LTAL (112), a discrete point test is a language test that is meant to test a particular language item, e.g. tenses. The basis of this type of test is that we can test components of the language (grammar, vocabulary, pronunciation, and spelling) and language skills (listening, reading, speaking, and writing) separately. We can state that the discrete point test is a common test used by teachers in our schools. Having studied a grammar topic or new vocabulary and practised it a great deal, the teacher typically gives a test based on the covered material. This test usually includes the items that were studied and will never include anything from an entirely different field. The same concerns the language skills: if the teacher’s aim is to check reading skills, the other skills will be neglected. The author of the paper has used such tests herself, especially after a definite grammar topic was studied. She had to construct the tests herself, basing them on the examples displayed in various grammar books. These were usually gap-filling exercises, multiple-choice items or cloze tests. Sometimes a creative task was offered, where the students had to write a story involving the particular grammar theme that was being checked. According to her observations, the students who studied hard were able to complete them successfully, though there were cases when students failed. Now, having discussed the theory on validity, reliability and types of testing, it is even more difficult to determine who was really to blame for the test failures: whether the tests were wrongly designed or there was a problem in the teaching. Notwithstanding, this type was and still remains the most common and accepted type in the schools of our country, for it is easy to design, it concerns a certain aspect of the language and it is easy to score. If we speak about types of tests, we can say that this way of testing relates more to a progress test (examples of this type of test can be seen in Appendix 2).

Nevertheless, according to Bynom (2001:8), there is a certain drawback of discrete point testing, for it tests only separate parts and does not show us the whole language. This is true if our aim is to incorporate the whole language; however, if we are to check the exact material the students were supposed to learn, then why not use it?

Moving on, we come to integrative tests. According to the Longman Dictionary of LTAL, an integrative test is intended to check several language skills and language components together, or simultaneously. Hughes (1989:15) stipulates that integrative tests display the learners’ knowledge of grammar, vocabulary and spelling together, not as separate skills or items.

Alderson (1996:219) posits that, by and large, most teachers prefer integrative testing to the discrete point type. He explains this by the fact that teachers basically either do not have enough spare time to check each separate item being tested, or the purpose of the test is simply to review the whole material. Moreover, some language skills, such as reading, do not require precise investigation of whether the students can cope with definite fragments of the text or not. We can interpret these statements as meaning that teachers are mostly concerned with general language knowledge, not with bits and pieces of it; separate items are usually not capable of showing the real state of the students’ knowledge. As for the author of the paper, she finds integrative testing very useful, though she believes the discrete point test to be the more habitual one. She assumes that the teacher should incorporate both types of testing for an effective evaluation of the students’ true language abilities.

4.3 Criterion-referenced and norm-referenced testing

The next types of testing to be discussed are criterion-referenced and norm-referenced testing. They are not focused directly on the language items, but on the scores the students can get. Again we should consult the Longman Dictionary of LTAL (17), which states that a criterion-referenced test measures the knowledge of the students against set standards or criteria. This means that there will be certain criteria according to which the students are assessed, with various criteria for different levels of the students’ language knowledge. Here the aim of testing is not to compare the results of the students; it is connected with the learners’ knowledge of the subject. As Hughes (1989:16) puts it, criterion-referenced tests check the actual language abilities of the students; they distinguish the students’ weak and strong points. The students either manage to pass the test or fail it; however, they never feel better or worse than their classmates, for it is their own progress that is focused on and checked. At this point we can speak about the centralised exams at the end of the twelfth and ninth forms. As far as the author of the paper is concerned, the results of these exams are reliable, and after passing them the learners are awarded various levels relevant to their language ability. Apart from that, once a year in Latvian schools the students are given tests designed by the officials of the Ministry of Education to check the level of the students and, most importantly, the work of the teacher. These are called diagnostic tests, though in light of the material discussed above this label is rather arguable. Nevertheless, we can accept that criterion-referenced testing can be used in the form of diagnostic tests.

Advancing further, we come to the norm-referenced test, which measures the knowledge of the learner and compares it with the knowledge of the other members of his/her group; the learner’s score is compared with the scores of the other students. According to Hughes (ibid.), this type of test does not show us what exactly the student knows. Therefore, we presume that the best test format for this type of testing could be a placement test, for it concerns the students’ placement and division according to their knowledge of the foreign language. There, too, the score is vital.
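The contrast can be made concrete with a minimal sketch in Python; the student names, the scores and the pass threshold below are invented purely for illustration.

# Criterion-referenced view: every student is compared with a fixed standard.
# Norm-referenced view: every student is compared with the other students.
scores = {"Anna": 78, "Boris": 64, "Clara": 91, "Daina": 55}
CRITERION = 70  # assumed pass mark

for name, score in scores.items():
    verdict = "meets the criterion" if score >= CRITERION else "does not meet it"
    print(f"{name}: {score} - {verdict}")        # criterion-referenced reading

ranking = sorted(scores, key=scores.get, reverse=True)
for place, name in enumerate(ranking, start=1):
    print(f"{place}. {name} ({scores[name]})")   # norm-referenced reading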

4.4 Objective and subjective testing

It is worth mentioning that apart from scoring and testing the learners’ abilities, an essential role is also played by indirect factors that influence evaluation. These are the objective and subjective issues in testing. According to Hughes (1989:19), the difference between these two types lies in the way of scoring and in the presence or absence of the examiner’s judgement. If no judgement is involved, the test is objective; on the contrary, a subjective test involves the personal judgement of the examiner. The author of the paper sees it thus: when testing the students objectively, the teacher usually checks just the knowledge of the topic, whereas testing subjectively may involve the teacher’s own ideas and judgements. This can be encountered during a speaking test, where the student can make either a positive or a negative impression on the teacher. Moreover, the teacher’s impression and his/her knowledge of the students’ true abilities can seriously influence the assessment process. For example, a student has failed the test; however, the teacher knows the true abilities of that student and, therefore, will assess his/her work differently, taking all the factors into account.

4.5 Communicative language testing

Referring to Bynom (ibid.), this type of testing has been popular since the 1970s–80s. It involves the knowledge of grammar and how it can be applied in written and oral language; the knowledge of when to speak and what to say in an appropriate situation; and the knowledge of verbal and non-verbal communication. All these types of knowledge should be successfully used in a given situation. It is based on the functional use of the language. Moreover, communicative language testing helps the learners feel as if they are in a real-life situation and acquire the relevant language.

Weir (1990:7) stipulates that this type of testing tests precisely the “performance” of communication. Further, he develops the idea of “competence”, since an individual usually acts in a variety of situations. Afterwards, reconsidering Bachman’s ideas, he arrives at another notion – ‘communicative language ability’.

Weir (1990:10-11) assumes that in order to work out a good communicative language test we have to bear in mind the issue of precision: both the skills and the performance should be accurate. Besides, their combination is vital for placing the students in the so-called ‘real-life situation’. However, without a context the communicative language test would not function. The context should be as close to real life as possible; it is required in order to help the student feel him/herself in a natural environment. Furthermore, Weir (ibid.) stresses that language ‘fades’ if deprived of context.

Weir (ibid., p.11) says: “to measure language proficiency adequately in each situation, account must be taken of: where, when, how, with whom, and why the language is to be used, and on what topics, and with what effect.” Moreover, Weir (ibid.) emphasises the crucial role of schemata (prior knowledge) in communicative language tests.

The tasks used in communicative language testing should be authentic and ‘direct’ so that the student is able to perform as s/he does in everyday life.

According to Weir (ibid.), the students have to be ready to speak in any situation; they have to be ready to discuss topics in groups and be able to overcome difficulties met in the natural environment. Therefore, tests of this type are never simplified, but are given as they might be encountered in the surroundings of a native speaker. Moreover, the student has to possess certain communicative skills, that is, knowing how to behave in a certain situation, how to use body language, etc.

Finally, we can repeat that communicative language testing involves the learner’s ability to operate with the language s/he knows and apply it in the particular situation s/he is placed in. S/he should be capable of behaving in a real-life situation with confidence and be ready to supply the information required by that situation. Therefore, we can speak about communicative language testing as testing the student’s ability to behave as he or she would do in everyday life; we evaluate their performance.

To conclude, we will repeat that there are different types of testing used in language teaching: discrete point and integrative testing, direct and indirect testing, etc. All of them are vital for testing the students.

Chapter 5

Testing the Language Skills

In this chapter we will attempt to examine the various elements or formats of tests that can be applied for testing the four language skills: reading, listening, writing and speaking. First, we will look at multiple-choice tests; after that we will come to cloze tests and gap-filling, then to dictations, and so on. Ultimately, we will attempt to draw a parallel between these formats and the skills they can be used for.

5.1 Multiple choice tests

It is not surprising that we have started precisely with multiple-choice tests (further in the text, MCQs). In the author’s experience, these tests are widely used by teachers in their teaching practice and, moreover, are favoured by the students (here the author is supported by a similar idea of Alderson’s (1996:222)). Heaton (1990:79) believes that multiple-choice questions are basically employed to test vocabulary. However, we can argue with this statement, for multiple-choice tests can be successfully used for testing grammar, as well as for testing listening or reading skills.

It is well known what a multiple-choice test looks like:

1. ---- not until the invention of the camera that artists

correctly painted horses racing.

A) There was

B) It was

C) There

D) It

(“Cambridge Preparation for the TOEFL Test”)

A task is basically represented by a number of sentences which should be completed with the right variant, which, in its turn, is usually given below. Furthermore, apart from the right variant the students are offered a set of distractors, which are normally introduced in order to “deceive” the learner. If the student knows the material that is being tested, s/he will spot the right variant, supply it and successfully accomplish the task. The distractors, or wrong options, basically differ only slightly from the correct variant and are sometimes even funny. Nevertheless, they may often be synonyms of the correct answer whose differences are known only to those who encounter the language more frequently in their job or field of study. In that case they can hardly be differentiated, and the students become frustrated. Certainly, such cases can be appropriate when testing vocabulary and, consequently, will demand the students’ ability to use the right synonym. The author of the paper has given multiple-choice tests to her students and must confess that, despite the difficulties in preparing them, the students found them easier to do. They explained their preference by saying that it was rather convenient to find the right variant, certainly if they knew what to look for. We presume that such a test format somewhat motivated the learners and supplied them with additional support which they would otherwise lack during a test where nobody could hope for the teacher’s help.
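As an illustration of how such an item – stem, key and distractors – and its purely mechanical scoring might be represented, here is a minimal sketch in Python; the data structure and helper names are assumptions made for the example, not taken from any cited source, and the item shown is the TOEFL-style example quoted above.

from dataclasses import dataclass

@dataclass
class MCQItem:
    stem: str                  # the sentence containing the gap
    options: dict[str, str]    # letter -> option text (the key plus the distractors)
    key: str                   # letter of the correct variant

item = MCQItem(
    stem="---- not until the invention of the camera that artists "
         "correctly painted horses racing.",
    options={"A": "There was", "B": "It was", "C": "There", "D": "It"},
    key="B",
)

def score(items: list[MCQItem], answers: list[str]) -> int:
    """Objective scoring: one point per matched key, no marker judgement involved."""
    return sum(1 for it, ans in zip(items, answers) if ans == it.key)

print(score([item], ["B"]))  # prints 1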

Everything mentioned above has raised the author’s interest in the theory on the multiple-choice test format and, therefore, she finds extremely useful the following list of advantages and disadvantages drawn from Weir. He (1990:43) lists four advantages and six disadvantages of multiple-choice question tests. Let us look at the advantages first:

. According to Weir, multiple-choice questions are structured in such a form that there is no possibility for the teacher or, as he puts it, the “marker” to apply his/her personal attitude to the marking process.

The author of the paper finds this very significant, for when employing a test of this format we see only what the student knows or does not know; the teacher cannot raise or lower the mark based on additional ideas the student displays in the work. Furthermore, the teacher, though knowing the strong and weak points of his/her students, cannot apply this information to influence the mark either. What s/he gets are the pure facts of the students’ knowledge.

Another advantage is:

. The use of a pre-test, which can be helpful for establishing the level of difficulty of the items and of the test as a whole. This will reduce the probability of the test being inadequate or too complicated, both for completing and for marking.

This means that the teacher can insure his/her students and him/herself against failure. For this purpose s/he just has to try out the multiple-choice test in advance to avoid troubles connected with its inadequacy, which can otherwise end in disaster for the students, who would receive bad marks because the test’s items were too complicated or too ambiguous.

The next advantage concerns the format of the test, which clearly conveys what the learner should do. The instructions are clear and unambiguous; the students know what they are expected to do and do not waste their precious time trying to figure it out.

The last advantage noted by Weir is that in certain contexts MCQs are better than open-ended or short-answer questions, for the learners are not required to employ their writing skills. This eliminates the students’ fear of the mistakes they might make while writing; moreover, the task does not demand any creative activity, but only checks the exact knowledge of the material.

Having considered the advantages of MCQs, it is worth speaking about their disadvantages. We will not present all of them, only those we find of the utmost interest and value.

The first disadvantage concerns the students’ guessing the answers; because of it, we cannot objectively judge their true knowledge of the topic. We are not able to see whether a student knows the material or has just luckily ticked or circled the right variant. This is connected with another shortcoming of this test format: when scoring, the teacher will not get a true picture of what the students really know.

Another interesting point that could be mentioned is that multiple-choice items differ from real-life situations in the choice of alternatives. Usually, in our everyday life we have to choose between two alternatives, whereas multiple-choice testing might confuse the learner with options s/he has not even thought about. That will definitely lead to frustration and, consequently, to the student’s failure to accomplish the task successfully.

Besides, following Weir (ibid.), who quotes Heaton (1975), we can stipulate that in some cases multiple-choice tests are not adequate and it is better to use open-ended questions to avoid prolonged lists of multiple-choice items. This will probably concern subjects which require a more precise description and explanation on the students’ part.

To finish with the drawbacks of MCQs, we can state that they are relatively costly and time-consuming to prepare. The test designer should carefully select and analyse each item to be included in the test to avoid ambiguity and imprecision. Furthermore, s/he should check for all possible grammar, spelling and punctuation mistakes, evaluate the quality of the information offered in the learners’ tasks and choose correct and relevant distractors so as not to confuse the students during the test.

To conclude, we can cite Heaton (1990:17), who stipulates that designing a multiple-choice test is not as fearsome and hard as many teachers think. The only thing one needs is practice accompanied by a bit of theory. He suggests that an inexperienced teacher use no more than three options if s/he encounters difficulties in supplying more examples for the distractors. The options should be grammatically correct and of equal length. Moreover, the context should be appropriate to illustrate the item and help the student guess right.

5.2 Short answer tests

A further format worth mentioning is the short answer test format. According to Alderson (1996:223), short answer tests can be substitutes for multiple-choice tests. The only difference is that, instead of selecting from given options, the students have to provide short answers themselves. The author of the paper has not used this test format, thus she cannot draw on her experience. Therefore, she will just list the ideas produced by other linguists, to be more exact Alderson’s suggestions.

Alderson (ibid.) believes that short answer tests will contribute to the students’ results, for the students will be able to support their answers and, if necessary, clarify why they responded one way and not another. In other words, the students will have an opportunity to justify their answers and support them if necessary.

Nevertheless, short answer tests are relatively complicated for the teacher to design. The teacher has to consider a variety of ideas and thoughts to create a fairly relevant test with fairly relevant items. Maybe that could explain why this test format is not as common as MCQs are.

At this point we have come to the advantages and drawbacks of short answer tests. Weir (1990:44) says that this type of testing differs from MCQs by the absence of given answers: the students have to provide the answer themselves. That will give the marker a clear idea of whether the students know what they are writing about or not. Certainly, the teacher will be certain about the students’ knowledge, whereas with MCQs s/he may doubt whether the students knew the answer or have just guessed it. Moreover, a short answer test can make the students apply the various language skills and techniques they use while dealing with any reading, listening or speaking activity.

Finally, Weir (ibid.) stipulates that if the questions are well formulated, there is a high chance the student will supply a short, well-formulated answer. Therefore, a variety of questions can be included in the test to cover a broader field of the student’s knowledge, though this will certainly require a great deal of work from the teacher.

Nevertheless, there are certain drawbacks to this test format. One of the major disadvantages could be the students’ involvement in writing: if we intend to check the students’ reading abilities, it is not appropriate to give them writing tasks, due to the high possibility of spelling and grammar mistakes occurring in the process. Therefore, we have to decide upon our priorities – what we want to test. Furthermore, while writing, the students can produce answers far different from those expected, and it will be rather complicated to decide whether to count them as mistakes or not.

5.3 The cloze test and gap-filling tests

Before coming to the theory on cloze tests, we assume it is necessary to speak about the term “cloze”. Weir (1990:46) informs us that it was coined by W.L. Taylor (1953) from the word ‘closure’ and refers to the individual’s ability to complete an incomplete pattern.

However, to complete the pattern one has to possess certain skills. Hence, we can speak about the introduction of a skill that Weir calls deduction. Deduction is important for dealing with anything that is unknown and unfamiliar. Thus, before giving a cloze test, the teacher has to be certain that his/her students are familiar with the deduction technique.

Alderson (1996:224) assumes that there are two cloze test techniques: the pseudo-random and the rational cloze technique. In a pseudo-random test the test designer deletes words at a definite rate or, as Heaton (1990:19) puts it, systematically; for example, every 7th word is deleted, occasionally with the initial letter of the omitted word left as a prompt:

Although you may think of Britain as England, i__ is really four countries in one. There a__ _____ four very distinct nations within the British I_____: England, Scotland, Wales and Ireland, each with their o__ unique culture, history, cuisine, literature a__ even languages.

(Discovering Britain, Pavlockij B. M., 2000)

However, the task can be made more demanding if the teacher does not assist the learners’ guesses and does not provide any hints:

Scotland is in the north and Wales in the west were ________ separate countries. They have different customs, ________, language and, in Scotland’s case, different legal and educational ________.

(ibid.)

The examples shown above do not claim to be ideal at all. Without doubt, the material used in the task should more or less provide the students with appropriate clues for correct guessing. Nevertheless, the author of the paper has used such tests in her practice and, according to her observations, she can conclude that the tasks with the first letter left are highly motivating for the students and give them a lot of help. Moreover, having discussed this test format with the students, the teacher found that they like it and take real pleasure in being able to confirm their guess and find the right variant.
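The pseudo-random deletion procedure described above can be sketched in a few lines of Python; the deletion rate and the prompt style are parameters the teacher would choose, and the function name and the simplified handling of punctuation are assumptions made purely for this illustration.

def make_cloze(text: str, n: int = 7, keep_first_letter: bool = True) -> str:
    """Blank out every n-th word, optionally leaving its first letter as a prompt."""
    words = text.split()
    gapped = []
    for position, word in enumerate(words, start=1):
        if position % n == 0:
            prompt = word[0] if keep_first_letter else ""
            gapped.append(prompt + "_" * max(len(word) - len(prompt), 3))
        else:
            gapped.append(word)
    return " ".join(gapped)

sample = ("Although you may think of Britain as England, it is really "
          "four countries in one.")
print(make_cloze(sample, n=7))  # every 7th word is blanked, first letter kept

A rational cloze (gap-filling) version would instead delete a hand-picked list of content words connected with the topic being checked, rather than counting words mechanically.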

However, according to Alderson (ibid.), the teacher commonly does not intend to check specific material with a cloze test. The main point here is the independence of the student and his/her ability to apply all the necessary techniques to fill in the blank spaces. Following the above-mentioned scholars, we have to agree that this type of test is actually quite challenging, for it demands broad language knowledge from the student. Heaton (ibid.) believes that deleting every third or fourth word can become a handicap for the learner due to the lack of prompting devices, such as collocations, prepositions, etc., whereas the removal of every ninth word may even make the reading process exhausting.

By contrast, the rational cloze technique, or gap-filling as it is usually called, is based on the deletion of words connected with the topic the teacher wants or intends to check. Here the teacher controls the procedure more than in the pseudo-random test discussed above. Moreover, s/he tries to delete every fifth or sixth word, but does so rather carefully so as not to distort the meaning and mislead the learner. Besides, a significant factor in this type of testing is that the teacher removes exactly the main words that are supposed to be checked, i.e.:
