Evaluating the Reliability and Validity of an English Achievement Test for Third-year Non-major Students at the University of Technology, Ho Chi Minh National University and Some Suggestions for Changes

CHAPTER 1: INTRODUCTION

1.1 Rationale for choosing this topic

English plays an especially important role in the rapid development of science, technology and international relations, which has resulted in a growing need for English language learning and teaching in many parts of the world. English has become a compulsory subject in national education in many countries, and Vietnam in particular considers learning and teaching English a major strategic tool for developing human resources and keeping up with other countries. Therefore, at every level of education, from primary school to university and postgraduate study, learners study English either as a compulsory subject or as a means of access to information technology and better employment. English teaching and learning is therefore essential for job training.

Fully aware of the importance of the English language, the University of Technology, Ho Chi Minh National University has encouraged and required its students to learn English as a compulsory subject during the first three academic years. English has therefore been taught at the University of Technology since the university was established, with the aim of equipping students with an essential tool for engaging with the wider world. However, little attention is paid to evaluating what students acquire when they learn a foreign language, how well they use what they have been taught, and what level of English they have reached. Evaluation only counts the percentage of students who pass the English tests and therefore says nothing about the validity, reliability or discrimination of the tests; the results of the English tests are not fully exploited. In addition, during my time as a teacher of English at the University of Technology, I have heard teachers and learners complain about the English achievement test in terms of its content and structure. As a result, the English section has decided to renew the item bank in order to make it more valid and more reliable.

In this context, the author was encouraged to undertake this study, entitled “Evaluating the Reliability and Validity of an English Achievement Test for Third-year Non-major Students at the University of Technology, Ho Chi Minh National University and some suggestions for changes”, with the intention of finding out how valid and reliable the test is. More importantly, the writer hopes that the results of the study can then be applied to improve the current testing practice and to create a new, truly reliable item bank. It is also intended to encourage both teachers and learners in their teaching and learning.

1.2 Scope of study

The scope of this thesis is limited to examining the existing achievement test for third-year non-English major students at the University of Technology, Ho Chi Minh National University, in terms of its validity and reliability. The study presents analyzed statistical data on the currently used test and proposes practical suggestions to improve it. Due to limitations of time and research conditions, it is impossible for the author to cover all the achievement tests used with third-year students; instead, only one test is studied.

1.3 Aims of study

The major aim of the study is to evaluate the achievement test currently used with third-year non-English major students at the University of Technology, with a special focus on the test's reliability and validity. The specific aims of the research are:

- To evaluate the test's validity and reliability through score statistics obtained from the achievement test results of third-year students;
- To pinpoint the strengths and weaknesses of the test; and
- To provide practical suggestions for improving the test.

1.4 Method of study

A quantitative methodology was used to collect and analyze the data. After collecting the data, the author employed statistical software to interpret it and to present the findings.

1.5 Research questions

This study is implemented to find answers to the following research questions:

1. Is the achievement test for third-year non-English major students at the University of Technology, Ho Chi Minh National University reliable?

2. Is the achievement test for third-year non-English major students at the University of Technology, Ho Chi Minh National University valid?

3. Is it necessary to make some changes to the test? If yes, what are the changes?

1.6 Design of study

The thesis is organized into four major chapters:

Chapter 1 - Introduction presents such basic information as the rationale, the aims, the method, the research questions and the design of the study.

Chapter 2 - Literature Review presents the theoretical background on evaluating a test, which includes language testing, the criteria of a good test, theoretical ideas on test reliability and validity, and achievement tests.

Chapter 3 - The Study is the main part of the thesis, presenting the context of the study, the detailed results obtained from the collected tests, and the findings in response to the research questions.

Chapter 4 - Conclusion offers conclusions and practical implications for improving the test. In this part, the author also proposes some suggestions for further research on the topic.

CHAPTER 2: LITERATURE REVIEW

This chapter provides an overview of the theoretical background of the study. It includes four main sections. Section 2.1 discusses the importance of testing in education. Section 2.2 is about language testing. It is followed by Section 2.3, in which the author provides a brief review of the major characteristics of a good test, with a focus on test reliability and validity. Finally, in Section 2.4, the achievement test and its types are explored.

2.1 The importance of testing in education

Testing is an important part of every teaching and learning experience. It is a tool to measure learners' ability, and it may create positive or negative attitudes toward the teaching and learning process. Testing reflects the teaching process and the overall training objectives. Through testing, administrators can make important decisions about the course, the syllabus, the course book, teachers, learners and administration.

Testing plays a very important part in the teaching and learning process, and it is the last stage in educational technology. Therefore, to take advantage of testing to measure the quality of education, administrators must build a sound testing system. This makes it possible to evaluate learners' ability, the suitability of teaching methods, teaching and learning materials and conditions, and the suitability of the stated training objectives.

Testing and Teaching

Testing and teaching are closely related because it is impossible to work in either field without being constantly concerned with the other (Heaton, 1998: 5). In other words, Heaton implied that teaching and learning provide a great source of language material for testing to make use of. In turn, testing reinforces, encourages and refines the teaching and learning process. Hughes (1989: 2) summarizes the relationship as follows: “The proper relationship between teaching and testing is surely that of partnership”. To explain this, Hughes referred to the effect of testing on teaching as backwash. If testing has a good effect on teaching, the backwash is said to be beneficial. However, there may be occasions when the teaching is good and appropriate and the testing is not; we are then likely to suffer from harmful backwash. Test results give information to both teachers and learners for their future action, such as improving knowledge and skills, revising knowledge, or applying a new teaching method. As Brown (1994: 375) put it, through testing “teachers measure or judge learners' competence all the time and, ideally, learners measure and judge themselves”.

In short, it is undeniable that testing is an integral part of teaching and cannot be separated from the program or from the course goals. Testing can have both positive and negative impacts on teaching. It provides the teacher with information on how effective his teaching has been, and the teacher can use tests to diagnose his own efforts as well as those of his students.

Testing and Learning

Testing is a tool to “pinpoint strengths and weaknesses in the learned abilities of the student” (Henning, 1987: 1). That is, through testing, learners can find out what level they have reached and what difficulties they face. As a result, they can adjust their learning and explore more effective ways of learning. At the same time, the teacher can rely on test results to understand learners' ability better and then improve his teaching methods or revise knowledge. Thus, Read (1982: 2) notes that “a test can help both teachers and learners to clarify what the learners really need to know”. It is clear that not only the teacher but also the learners benefit from testing.

To sum up, tests can benefit students, teachers and even administrators by confirming the progress that has been made and showing how they can best redirect their future efforts. In addition, good tests can sustain or enhance class morale and aid learning.

2.2 Language Testing

Language testing is one form of testing and also one form of measurement. Its importance in English learning has been described as follows: “properly made English tests can help create positive attitudes toward instruction by giving students a sense of accomplishment and a feeling that the teacher's evaluation of them matches what he has taught them. Good English tests also help students learn the language by requiring them to study hard, emphasizing course objectives, and showing them where they need to improve” (Davies, 1996: 5).

McNamara (2000) presented three main roles of language testing, which apply not only in education but in other fields as well. Firstly, language testing is considered a key to success, as it plays a decisive role in recruitment. Secondly, it serves educational goals; according to McNamara, tests are used to place learners in a suitable course. The third role of language testing is to support research: every researcher who wishes to carry out research on a language needs to evaluate standard tests or to design tests in that language.

Henning suggested six purposes of language tests as follows:

- Diagnosis and Feedback: to explore strengths and weaknesses of the learners.
- Screening and Selection: to assist in the decision of who should be allowed to participate in a particular program of instruction.
- Placement: to identify a particular performance level of the student and to place him at an appropriate level of instruction.
- Program Evaluation: to provide information about the effectiveness of programs of instruction.
- Providing Research Criteria: to provide a standard of judgment in a variety of other research contexts based on language test scores.
- Assessment of Attitudes and Sociopsychological Differences: to determine the nature, direction, and intensity of attitudes related to language acquisition.

(Henning, 1987: 1)

2.3 Major characteristics of a good test

In order to design a good test, teachers have to take into account a variety of factors such as the purpose of the test, the content of the syllabus, the pupils' background, the goals of administrators and so forth. Moreover, test characteristics play a very important role in constructing a good test.

The most important consideration in determining whether a test is good or not is the use for which it is intended; that is to say, the most important quality of a test is its usefulness. It is believed that test usefulness provides a kind of metric by which test developers can evaluate not only the tests that they develop and use, but also all aspects of test development and use. Generally speaking, usefulness includes six components: reliability, construct validity, authenticity, interactiveness, impact and practicality. However, it should be pointed out that rather than emphasizing the tension among the different qualities, test developers need to recognize their complementarity.

Bachman and Palmer (1996) consider these criteria as qualities of test usefulness rather than as individual factors. Their idea of usefulness can be visually presented as in Figure 2.1:

Usefulness = reliability + validity + impact + authenticity + interactiveness + practicality

Figure 2.1 Usefulness

(Bachman and Palmer, 1996)

Henning (1987) added more test characteristics, which he summarized in a table called A Checklist for Test Evaluation. The checklist is used to rate the adequacy of a test for any given purpose.

Table 2.1 A checklist for test evaluation

Name of test / Purpose intended

Test characteristic | Rating (0 = highly inadequate, 10 = highly adequate)

(Adapted from Henning, 1987: 14)

Other leading scholars in testing share the view of the two scholars mentioned above about test characteristics. Among these characteristics, they all agree that reliability and validity are essential to the interpretation and use of measures of language ability and are the primary qualities to be considered in developing and using tests. For this reason, the author employs these essential measurement qualities in this study to evaluate the test taken by a large number of third-year non-English major students at the University of Technology. A brief discussion of reliability and validity follows.

2.3.1 Test Reliability

Reliability has been defined in different ways by different authors. Perhaps the best way to look at reliability is as the extent to which the measurements resulting from a test reflect the characteristics of those being measured. For example, reliability has been defined elsewhere as “the degree to which test scores for a group of test takers are consistent over repeated applications of a measurement procedure and hence are inferred to be dependable and repeatable for an individual test taker” (Berkowitz, Wolkowitz, Fitch, and Kopriva, 2000). This definition is satisfied if the scores are indicative of properties of the test takers; otherwise they will vary unsystematically and will not be repeatable or dependable.

Test reliability refers to the consistency of the scores students would receive on alternate forms of the same test. Due to differences in the exact content assessed on the alternate forms, environmental variables such as fatigue or lighting, and student error in responding, no two tests will consistently produce identical results. This is true regardless of how similar the two tests are. For example, a test that includes a translation part would probably produce different scores from one administration to another because its scoring is subjective, and it would thus be less reliable.

Henning (1987: 10) claimed that all tests are subject to inaccuracies: the final scores gained by the test-takers provide only approximate estimates of their true abilities. While some measurement error is unavoidable, it is possible to quantify and greatly minimize it. A test on which the scores obtained are generally similar when it is administered to the same students with the same ability, but at a different time, is said to be reliable. Since test reliability is related to test length, with longer tests tending to be more reliable than shorter ones, knowledge of the importance of the decision to be based on the examination results can lead us to use tests with different numbers of items.

Test reliability is considered “a quality of test scores” by Bachman (1990: 24). He makes the further point that if a student receives a low score on a test one day and a high score on the same test two days later, the test does not yield consistent results, and the score cannot be considered a reliable indicator of the individual's ability.

Reliability can also be viewed as an indicator of the absence of random error when the test is administered. When random error is minimal, scores can be expected to be more consistent from administration to administration.

Sources of Error

According to Bachman (1990: 165), there are four factors that affect language test scores. The effects of these various factors on a test score can be illustrated as in Figure 2.2.

Figure 2.2 Factors that affect language test scores

We can infer from the figure that a score on a language test primarily reflects communicative language ability. However, the test score is also affected by factors other than communicative language ability. They are:

- Test method facets: systematic factors to the extent that they are uniform from one test administration to another (Appendix 1).
- Personal attributes: individual characteristics such as cognitive style and knowledge of particular content areas, together with group characteristics such as sex, race and ethnic background. These are also systematic.
- Random factors: unsystematic factors, including unpredictable and largely temporary conditions such as the test taker's mental alertness or emotional state.

Thus, a test is considered reliable if it meets conditions such as the following:

- The results obtained by the same candidate on the same test at two different times are consistent.
- Candidates are not allowed too much freedom.
- Clear and explicit instructions are provided.

- The same test scores are given by two or three administrators.
- The test results measure the learners' true ability.

The reliability of a test is indicated by its reliability coefficient: the higher the reliability coefficient, the lower the standard error; and the lower the standard error, the more reliable the test scores.
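As a minimal illustration of this relationship, the following Python sketch computes the standard error of measurement using the common expression SEM = s * sqrt(1 - Rtt), where s is the standard deviation of the observed scores; this expression and the sample values are assumptions for illustration, not taken from the test data analysed in this thesis.

```python
import math

def standard_error_of_measurement(score_sd: float, reliability: float) -> float:
    """Standard error of measurement: SEM = s * sqrt(1 - Rtt).

    score_sd    -- standard deviation of the observed test scores (s)
    reliability -- reliability coefficient Rtt, between 0 and 1
    """
    return score_sd * math.sqrt(1.0 - reliability)

# Invented values: the same score spread under two reliability levels.
print(standard_error_of_measurement(score_sd=6.0, reliability=0.90))  # ~1.90
print(standard_error_of_measurement(score_sd=6.0, reliability=0.60))  # ~3.79
```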

Types of reliability estimates

According to Henning (1987), there are several types of reliability estimates, each influenced by different sources of measurement error, which may arise from bias in item selection, from bias due to the time of testing, or from examiner bias. These three major sources of bias may be addressed by corresponding methods of reliability estimation:

a. Selection of specific items:
- Parallel Form Reliability
- Internal Consistency Reliability estimates (Split Half Reliability)
- Rational Equivalence

b. Time of testing:
- Test-retest Method

c. Examiner bias:
- Inter-rater Reliability

Parallel form reliability indicates how consistent test scores are likely to be if a person takes two or more forms of a test. A high parallel form reliability coefficient indicates that the different forms of the test are very similar, which means that it makes virtually no difference which version of the test a person takes. On the other hand, a low parallel form reliability coefficient suggests that the different forms are probably not comparable; they may be measuring different things and therefore cannot be used interchangeably.

A formula for this method may be expressed as follows:

(2) Rtt = rA,B (Henning, 1987)

(In which Rtt is the reliability coefficient and rA,B is the correlation of form A with form B of the test when administered to the same people at the same time.)
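As a quick illustration of formula (2), the sketch below estimates parallel-form reliability as the Pearson correlation between scores on two forms; the scores are invented for illustration.

```python
from statistics import correlation  # Python 3.10+

# Invented scores of the same ten students on form A and form B of a test.
form_a = [12, 15, 9, 18, 14, 11, 16, 13, 10, 17]
form_b = [13, 14, 10, 19, 13, 12, 15, 14, 9, 18]

# Parallel-form reliability estimate: Rtt = r(A, B).
r_tt = correlation(form_a, form_b)
print(round(r_tt, 3))
```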

Internal consistency reliability indicates the extent to which the items on a test measure the same thing. A high internal consistency reliability coefficient for a test indicates that the items of the test are very similar to each other in content. It is important to note that the length of a test can affect internal consistency reliability.

Split-half reliability is one variety of the internal consistency methods. The test may be split in a variety of ways; the two halves are then scored separately and are correlated with each other.

A formula for the split-half method may be expressed as follows:

(3) Rtt = 2rA,B / (1 + rA,B) (Henning, 1987)

(In which Rtt is the reliability estimated by the split-half method and rA,B is the correlation of the scores from one half of the test with those from the other half.)
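A minimal sketch of the split-half estimate, assuming the Spearman-Brown corrected form shown above and using invented half-test scores:

```python
from statistics import correlation  # Python 3.10+

def split_half_reliability(half_a, half_b):
    """Split-half reliability with the Spearman-Brown correction:
    Rtt = 2 * r(A, B) / (1 + r(A, B)), where r(A, B) is the correlation
    between scores on the two halves of the test."""
    r_ab = correlation(half_a, half_b)
    return 2 * r_ab / (1 + r_ab)

# Invented half-test scores (e.g. odd-numbered vs even-numbered items).
odd_half = [6, 8, 4, 9, 7, 5, 8, 7, 5, 9]
even_half = [6, 7, 5, 9, 7, 6, 8, 6, 5, 8]
print(round(split_half_reliability(odd_half, even_half), 3))
```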

Rational equivalence is another method, which provides us with a coefficient of internal consistency without having to compute reliability estimates for every possible split-half combination. This method focuses on the degree to which the individual items are correlated with each other.

(4) (Henning, 1987)
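One widely used rational-equivalence estimate is the Kuder-Richardson formula 20 (KR-20); the sketch below, offered as an assumed illustration rather than a reproduction of formula (4), computes it from a 0/1 item-response matrix with invented data.

```python
def kr20(responses):
    """KR-20 internal-consistency estimate from 0/1 item scores.

    responses -- list of examinee rows, each row a list of 0/1 item scores
    Rtt = (k / (k - 1)) * (1 - sum(p*q) / variance of total scores)
    """
    k = len(responses[0])                       # number of items
    n = len(responses)                          # number of examinees
    totals = [sum(row) for row in responses]    # total score per examinee
    mean_total = sum(totals) / n
    var_total = sum((t - mean_total) ** 2 for t in totals) / n
    sum_pq = 0.0
    for i in range(k):
        p = sum(row[i] for row in responses) / n    # proportion correct on item i
        sum_pq += p * (1 - p)
    return (k / (k - 1)) * (1 - sum_pq / var_total)

# Invented responses of five examinees to four items (1 = correct, 0 = incorrect).
data = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
]
print(round(kr20(data), 3))
```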

Test-retest reliability indicates the repeatability of test scores with the passage of time. This estimate also reflects the stability of the characteristics or constructs being measured by the test.

The formula for this method is as follows:

(5) Rtt = r1,2 (Henning, 1987)

(In which Rtt is the reliability coefficient using this method and r1,2 is the correlation of the scores at time one with those at time two for the same test used with the same person.)

Inter-rater reliability is used when scores on the test are independent estimates by two or more judges or raters. In this case, reliability is estimated as the correlation of the ratings of one judge with those of another. This method is summarized in the following formula:

(6) Rtt = rA,B

One way to improve the reliability of a test is to become aware of the test characteristics that may affect reliability. Among these characteristics are test difficulty, discriminability, item quality, and so on.

Test difficulty is calculated by the following formula:

p = Cr / N

(In which p is the difficulty index, Cr is the sum of correct responses, and N is the number of examinees.)

According to Heaton (1988: 175), the scale for test difficulty is as follows:

p: 0.81-1.00: very easy (the percentage of correct responses is 81%-100%)
p: 0.61-0.80: easy (the percentage of correct responses is 61%-80%)
p: 0.41-0.60: acceptable (the percentage of correct responses is 41%-60%)
p: 0.21-0.40: difficult (the percentage of correct responses is 21%-40%)
p: 0.00-0.20: very difficult (the percentage of correct responses is 0%-20%)

The range of discriminability is from 0 to 1: the greater the D index is, the better the discriminability is.
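The sketch below computes the difficulty index p = Cr/N defined above, together with a discrimination index D computed by the common upper/lower-group method (the difference in proportion correct between high-scoring and low-scoring examinees); the D method and the data are assumptions for illustration.

```python
def item_difficulty(item_scores):
    """Difficulty p = Cr / N: proportion of examinees answering the item correctly."""
    return sum(item_scores) / len(item_scores)

def discrimination_index(item_scores, total_scores, group_fraction=0.27):
    """Discrimination index D as the difference in item difficulty between the
    top-scoring and bottom-scoring groups (upper/lower-group method, assumed here).

    item_scores  -- 0/1 scores on one item, aligned with total_scores
    total_scores -- each examinee's total test score
    """
    n = len(total_scores)
    size = max(1, int(n * group_fraction))
    order = sorted(range(n), key=lambda i: total_scores[i])
    lower = [item_scores[i] for i in order[:size]]
    upper = [item_scores[i] for i in order[-size:]]
    return item_difficulty(upper) - item_difficulty(lower)

# Invented data: one item's 0/1 scores and the examinees' total test scores.
item = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
totals = [18, 9, 16, 20, 8, 15, 11, 17, 19, 10]
print(item_difficulty(item))                  # difficulty p
print(discrimination_index(item, totals))     # discrimination D
```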

The item properties of a test can be shown visually in the table below:

Table 2.2 Item property

Item property      | Index      | Interpretation
Difficulty         | 0.0-0.33   |
                   | 0.33-0.67  |
                   | 0.67-1.00  |
Discriminability   | 0.0-0.3    | Very poor
                   | 0.3-0.67   | Low
                   | 0.67-1.00  | Acceptable

2.3.2 Test Validity

It is stated in the Standards for Educational and Psychological Testing (1985: 9) that “Validity is the most important consideration in test evaluation. The concept refers to the appropriateness, meaningfulness, and usefulness of the specific inferences from the test scores. Test validation is the process of accumulating evidence to support such inferences”. Thus, to be valid, a test needs to assess learners' ability in the specific area proposed on the basis of the aim of the test. For instance, a listening test with written multiple-choice options may lack validity if the printed choices are so difficult to read that the exam actually measures reading comprehension as much as it does listening comprehension.

Validity is classified into the following subtypes:

Content validity

This is a non-statistical type of validity that involves “the systematic examination of the test content to determine whether it covers a representative sample of the behavior domain to be measured” (Anastasi & Urbina, 1997: 114). A test has content validity built into it by careful selection of which items to include: items are chosen so that they comply with the test specification, which is drawn up through a thorough examination of the subject domain. Content validity is very important in evaluating a test, since “the greater a test's content validity, the more likely it is to be an accurate measure of what it is supposed to measure” (Hughes, 1989: 22).

Construct validity

A test has construct validity if it demonstrates an association between the test scores and the prediction of a theoretical trait. Intelligence tests are one example of measurement instruments that should have construct validity. Construct validity is viewed from a purely statistical perspective in much of the recent American literature, for example Bachman and Palmer (1981a). It is seen principally as a matter of the posterior statistical validation of whether a test has measured a construct that has a reality independent of other constructs.

To determine whether a piece of research has construct validity, three steps should be followed. First, the theoretical relationships must be specified. Second, the empirical relationships between the measures of the concepts must be examined. Third, the empirical evidence must be interpreted in terms of how it clarifies the construct validity of the particular measure being tested (Carmines & Zeller, 1991: 23).

Face validity

A test is said to have face validity if it looks as if it measures what it is supposed to measure. Anastasi (1982: 136) pointed out that face validity is not validity in the technical sense; it refers not to what the test actually measures, but to what it appears superficially to measure.

Face validity is very closely related to content validity. While content validity depends on a theoretical basis for judging whether a test assesses all domains of a certain criterion, face validity relates to whether a test appears to be a good measure or not.

Criterion-related validity

Criterion-related validity is used to demonstrate the accuracy of a measure or procedure by comparing it with another measure or procedure which has been demonstrated to be valid. In other words, the concept is concerned with the extent to which test scores correlate with a suitable external criterion of performance. Criterion-related validity consists of two types (Davies, 1977): concurrent validity, where the test scores are correlated with another measure of performance, usually an older established test, taken at the same time (Kelly, 1978; Davies, 1983), and predictive validity, where test scores are correlated with some future criterion of performance (Bachman and Palmer, 1981).

2.3.3 Reliability and Validity

Reliability and validity are the two most vital characteristics of a good test. However, the relationship between them is rather complex. On the one hand, it is possible for a test to be reliable without being valid: a test can give the same result time after time but not measure what it was intended to measure. For example, an MCQ test could be highly reliable in the sense of testing individual vocabulary items, but it would not be valid if it were taken to indicate the students' ability to use the words productively. Bachman (1990: 25) says: “While reliability is a quality of test scores themselves, validity is a quality of test interpretation and use”.

On the other hand, if a test is not reliable, it cannot be valid at all. According to Hughes (1988: 42), to be valid “a test must provide consistently accurate measurements. It must therefore be reliable. A reliable test, however, may not be valid at all”. For example, in a writing test, candidates may be required to translate a text of 500 words into their native language. This could well be a reliable test, but it cannot be a valid test of writing.

Thus, there will always be some tension between reliability and validity. The tester has to balance gains in one against losses in the other.

2.4 Achievement test

Achievement tests play an important role in school programs, especially in evaluating the language knowledge and skills students acquire during a course, and they are widely used at different school levels.

Achievement tests are also known as attainment or summative tests. According to Henning (1987: 6), “achievement tests are used to measure the extent of learning in a prescribed content domain, often in accordance with explicitly stated objectives of a learning program”. These tests may be used for program evaluation as well as for certification of learned competence. It follows that such tests normally come directly after a program of instruction.

Davies (1999: 2) shares a similar idea: “achievement refers to the mastery of what has been learnt, what has been taught or what is in the syllabus, textbook, materials, etc. An achievement test therefore is an instrument designed to measure what a person has learnt within or up to a given time”.

Similarly, Hughes (1989: 10) said that achievement tests are directly related to language courses, their purpose being to establish how successful individual students, groups of students, or the courses themselves have been in achieving objectives. Achievement tests are usually administered after a course to the group of learners who took it. Sharing the same view of achievement tests as Hughes, Brown (1994: 259) suggests: “An achievement test is related directly to classroom lessons, units or even total curriculum”. Achievement tests, in his opinion, “are limited to a particular material covered in a curriculum within a particular time frame”.

There are two kinds of achievement tests: final achievement tests and progress achievement tests.

Final achievement tests are those administered at the end of a course of study. They may be written and administered by ministries of education, official examining boards, or by members of teaching institutions. Clearly the content of these tests must be related to the courses with which they are concerned, but the nature of this relationship is a matter of disagreement amongst language testers.

According to some testing experts, the content of a final achievement test should be based directly on a detailed course syllabus or on the books and other materials used. This has been referred to as the syllabus-content approach. It has an obvious appeal, since the test contains only what the pupils are thought to have actually encountered, and thus can be considered, in this respect at least, a fair test. The disadvantage of this approach is that if the syllabus is badly designed, or the books and other materials are badly chosen, then the results of the test can be very misleading: successful performance on the test may not truly indicate successful achievement of the course objectives.

The alternative approach is to base the test content directly on the objectives of the course, which has a number of advantages. Firstly, it forces course designers to make the course objectives explicit. Secondly, pupils can show through the test how far they have achieved those objectives. This in turn puts pressure on those responsible for the syllabus and for the selection of books and materials to ensure that these are consistent with the course objectives. Tests based on course objectives work against the perpetuation of poor teaching practice, which course-content-based tests, almost as if part of a conspiracy, fail to do. It is the author's belief that test content based on course objectives is much preferable: it provides more accurate information about individual and group achievement, and is likely to promote a more beneficial backwash effect on teaching.

Progress achievement tests, as the name suggests, are intended to measure the progress that learners are making. Since “progress” means progress toward course objectives, these tests should also be related to objectives, and they should make a clear progression toward the final achievement tests based on course objectives. If the syllabus and teaching methods are appropriate to the objectives, progress tests based on short-term objectives will fit well with what has been taught. If not, there will be pressure to create a better fit. If it is the syllabus that is at fault, it is the tester's responsibility to make clear that the change is needed there, not in the tests.

In addition, while more formal achievement tests need careful preparation, teachers should feel free to devise their own ways of making a rough check on students' progress and keeping students on their toes. Since such informal tests do not form part of formal assessment procedures, their construction and scoring need not be geared strictly to the intermediate objectives on which more formal progress achievement tests are based. However, they can reflect the particular “route” that an individual teacher is taking towards the achievement of objectives.

In this chapter, the writer has presented a brief literature review that sets the ground for the thesis. Due to the limited time and volume of this thesis, the writer wishes to focus only on evaluating the reliability and the validity of a chosen achievement test. Therefore, this chapter deals only with those points on which the thesis is based.

CHAPTER 3: THE STUDY

This chapter is the main part of the study. It provides the practical background for the study and an overview of English teaching, learning and testing at the University of Technology, Ho Chi Minh National University. More importantly, it presents the data analysis of the chosen test and the findings drawn from that analysis.

3.1 English learning and teaching at the University of Technology, Ho Chi Minh National University

3.1.1 Students and their backgrounds

Students at the University of Technology are at different levels of English because of their different backgrounds. Typically, those from big cities and towns have greater ability in English than those from rural areas, where foreign language learning receives little attention. In addition, some students have had over ten years of English before entering university, some have studied it for only a few years, and others have never learned English before. Moreover, the entry requirements of the University of Technology are rather low because applicants do not have to take any entrance exams; instead, they simply submit their dossiers to be considered and evaluated. As a result, their attitude towards learning English in particular, and other subjects in general, is not very positive.

3.1.2 The English teaching staff

The English section of the University of Technology is small, with only five teachers. They are responsible for teaching both Basic English and English for Specific Purposes (ESP) in Computing. All the English teachers have been trained in Vietnam and none of them has studied abroad; one has obtained a Master's degree in English and three are taking an MA course. They prefer using Vietnamese in class, as they find it easier to explain lessons in Vietnamese given the limitations of the students' English ability. Furthermore, they are fully aware of the need to adapt suitable methods for teaching homogeneous classes, and they have been applying technology in their ESP teaching. This results in students' high involvement in the lessons.

3.1.3 Syllabus and its objectives

The English syllabus for Information Technology students is designed by the teachers of the English section of the University of Technology and has been applied for over five
