
DOCUMENT INFORMATION

Basic information

Title: The relationship between speaking features and band descriptors: A mixed methods study
Authors: Paul Seedhouse, Andrew Harris, Rola Naeb, Eda Üstünel
Institution: Newcastle University
Field: Applied Linguistics
Type: Research Report
Year: 2014
Pages: 30
File size: 0.98 MB

Structure

  • 1.1 Background information on the IELTS speaking test
  • 1.2 Research focus and questions
  • 1.3 Relationship to existing research literature
  • 1.4 Methodology
  • 1.5 Data information
  • 2.1 Quantitative analysis
    • 2.1.1 Descriptive analysis
    • 2.1.2 Association between measures and score bands
      • 2.1.2.1 Total number of words
      • 2.1.2.2 Accuracy
      • 2.1.2.3 Fluency
      • 2.1.2.4 Complexity
      • 2.1.2.5 Grammatical range
    • 2.1.3 MANOVA
  • 2.2 Qualitative analysis: Speaking features that have the potential to influence candidate scores
    • 2.2.1 Answering the question: Inter-turn speaking features that can influence candidate scores
      • 2.2.1.1 Candidate requests repetition of the examiner’s question
      • 2.2.1.2 Candidate trouble with a question leads to a lack of an answer
      • 2.2.1.3 A candidate produces a problematic answer
      • 2.2.1.4 Features of answers by high-scoring candidates
    • 2.2.2 Speaking features that have the potential to influence candidate scores – ‘intra-turn’
      • 2.2.2.1 Functionless repetition
      • 2.2.2.2 Hesitation markers
      • 2.2.2.3 Candidate’s identity construction
      • 2.2.2.4 Candidate’s lexical choice
      • 2.2.2.5 Candidate’s ‘colloquial delivery’
    • 2.2.3 How clusters of speaking features distinguish tests rated at different levels from each other
  • 3.1 Research question 1
  • 3.2 Research question 2
    • 3.2.1 Speaking features which have the potential to impact upon candidate scores
  • 4.1 Combining the answers to the research questions: Findings
  • 4.2 Discussion, implications and recommendations
  • Appendix 1: Operationalising the complexity measure
  • Appendix 2: Verb forms for grammatical range
  • Appendix 3: Transcription conventions
  • Appendix 4: IELTS speaking band descriptors

Content

Background information on the IELTS speaking test

IELTS speaking tests are encounters between one candidate and one examiner and are designed to take between 11 and 14 minutes. There are three main parts.

Each part fulfils a specific function in terms of interaction pattern, task input and candidate output.

Part 1 (Introduction): Candidates answer general questions about themselves, their homes/families, their jobs/studies, their interests, and a range of familiar topic areas.

The examiner begins by introducing themselves and verifying the candidate's identity. They then conduct an interview using verbal questions from familiar topic areas, which typically lasts between four and five minutes.

In Part 2 of the speaking test, candidates receive a prompt on a card and have one minute to prepare their response. They are expected to speak for one to two minutes on the given topic. After the candidate's speech, the examiner poses one or two follow-up questions to conclude the segment.

Part 3 (Two-way discussion): The examiner and candidate engage in a discussion of more abstract issues and concepts which are thematically linked to the topic prompt in Part 2.

Examiners are provided with comprehensive guidelines to enhance the reliability and validity of the test, emphasizing the critical role of standardization in effective test management. The IELTS speaking test requires examiners to adhere strictly to a predetermined script, known as the examiner frame, which must be followed without deviation. It is essential that examiners maintain consistency by not rephrasing the rubrics, refraining from unsolicited comments, and avoiding any remarks on a candidate's performance, as outlined in the IELTS Examiner Training Material.

The control over phrasing varies across the test's three parts. Parts 1 and 2 feature standardized wording for the frame, ensuring all candidates receive consistent input. In contrast, Part 3 allows examiners more flexibility to adapt their questions to the candidate's level, though unscripted comments are discouraged. Performance descriptors have been established to outline spoken performance across the nine IELTS bands, based on specific evaluation criteria.

Fluency and coherence are essential for effective communication, encompassing the ability to speak with a natural flow, appropriate pace, and minimal effort while connecting ideas seamlessly. Key indicators of fluency include speech rate and continuity, whereas coherence is marked by logical sentence sequencing, clear transitions in discussions or arguments, and the use of cohesive devices such as connectors, pronouns, and conjunctions.

Lexical Resource encompasses the breadth of vocabulary a candidate employs and the accuracy with which they convey meanings and attitudes. Key indicators include the diversity of vocabulary, the appropriateness and adequacy of word choices, and the ability to paraphrase effectively, even in the presence of vocabulary gaps, with minimal hesitation.

Grammatical Range and Accuracy evaluates a candidate's ability to use grammatical resources effectively and appropriately. Key indicators of grammatical range include the complexity and length of spoken sentences, the correct application of subordinate clauses, diverse sentence structures, and the skill to rearrange elements for emphasis. Grammatical accuracy, in turn, is assessed by the frequency of errors in speech and their impact on communication.

Pronunciation is the ability to produce clear and understandable speech, which is crucial for meeting the speaking test criteria. Key factors include the strain placed on the listener, the amount of unintelligible speech, and the degree of influence from the speaker's first language (L1).

The IELTS speaking band descriptors are available in Appendix 4. In this project, only the constructs of Fluency and Grammatical Range and Accuracy were investigated.

Research focus and questions

This research investigates the relationship between candidate discourse features and their corresponding scores in the IELTS speaking test (IST), aiming to identify specific speaking characteristics that differentiate the proficiency levels. The study is guided by two key research questions.

1) To what extent do tests rated at levels 5, 6, 7 and 8 differ in the ways described in the speaking band descriptors?

In order to answer this research question, quantitative measures of the constructs in the band descriptors (fluency, grammatical complexity, range and accuracy) are applied to the spoken data.

2) Which speaking features distinguish tests rated at levels 5, 6, 7 and 8 from each other?

This question is answered by working inductively from the spoken data, applying Conversation Analysis (CA) to transcripts of the speaking tests.

Relationship to existing research literature

This study expands upon previous research in two key areas: the IST and oral proficiency interviews (OPIs), and the relationship between candidate discourse features and their assigned scores. The first area encompasses a diverse array of research methodologies and interests, including investigations into test taker characteristics and aspects of cognitive, scoring, and criterion-related validity (Taylor, 2011).

Interest in the connection between candidate speaking features and their scores emerged in the late 1980s, as researchers began to explore the authenticity of Oral Proficiency Interviews (OPIs) (Weir et al., 2013). This interest was initiated in part by van Lier’s (1989) now seminal call to investigate the interaction which takes place in the OPI. Nonetheless, Lazaraton (2002) highlights the scarcity of research on the connection between candidate speech output and assigned ratings. Understanding this relationship is crucial for several reasons: test developers can utilize discourse analysis of candidate data to create effective rating scales (Fulcher, 1996; 2003), and demonstrating the link between candidate talk and grading criteria can significantly enhance validation processes.

Douglas’s (1994) study of the AGSPEAK test revealed minimal correlation between candidate scores and the categories of grammar, vocabulary, fluency, content, and rhetorical organization, suggesting possible inconsistencies in rating. Brown (2006a) expanded on this by developing analytic categories for three of the four rating criteria used in the IST and conducted a quantitative analysis of 20 ISTs. Her findings indicated that while discourse features varied by proficiency level, only the total amount of speech showed significant differences. Brown concluded that no single measure dominates the assessment; instead, a combination of performance features influences the overall evaluation of a candidate’s proficiency. Her research involved identifying key discourse features beforehand and analyzing their presence in the ISTs through a quantitative lens.

Young (1995) also took a quantitative approach to a comparison of different levels of candidates and their respective speaking features (in the First Certificate in English), and found that the high-level candidates produced more speech at a faster rate, and speech which was more elaborated, than those at the lower level.

Researchers have also utilized qualitative methodologies to enhance understanding of OPIs. Lazaraton (2002) advocates a Conversation Analysis (CA) approach to validating OPIs, emphasizing that qualitative methods can provide insights into the assessment process itself, rather than solely focusing on outcomes. In a previous study of 20 tests of the IST (Lazaraton, 1998), it was found that higher-level candidates exhibit fewer instances of repair and use a wider range of speculative expressions; grammatical errors are more frequent at lower levels, while complex structures and appropriate conversational responses are more prevalent among higher-scoring candidates.

Seedhouse and Harris’s (2010) CA study of the IST found that the characteristics of high-scoring and low-scoring tests in relation to topic are as follows.

Candidates with higher scores demonstrate extended turns in parts 1 and 3, while weaker candidates often produce shorter responses with lengthy pauses in part 2. There is a notable correlation between test scores and the frequency of interactional trouble requiring repair; high-scoring candidates exhibit fewer instances of such difficulties. This aligns with Lazaraton’s (1998) findings regarding the previous IST version. High-scoring candidates effectively engage with topics by providing detailed information and multiple examples, allowing examiners to explore the topic further. Conversely, low-scoring candidates often struggle to articulate coherent arguments. Successful candidates use cohesive markers to connect ideas and may employ less common lexical items, reflecting a higher level of education and social status. Those achieving very high scores tend to present themselves as intellectuals and future high-achievers, while low-scoring candidates often convey modest, localized aspirations. Examiners also consider various aspects of monologic topic development in part 2.

Seedhouse and Harris (2010) identify a structured interaction model in the IST, characterized by a combination of turn-taking, adjacency pairs, and topic development. Examiner questions consist of two key elements: an adjacency pair requiring a response and a topic component that demands elaboration. This structure, termed the ‘topic-scripted question-answer (Q-A) adjacency pair’, ensures that topics are always introduced through questions, in contrast with natural conversation. To achieve a high score, candidates must comprehend the question, provide an answer, identify the inherent topic, and develop it further. This interactional framework allows for clear differentiation between high- and low-scoring responses, highlighting the importance of topic development as noted in the band descriptors for Fluency and Coherence, while the ability to answer questions remains less emphasized.

Research indicates that much remains to be understood about the speaking features that differentiate IST proficiency levels. The relationship between a candidate's score and their interaction characteristics is complex, influenced by various factors impacting examiner ratings (Brown, 2006a; Douglas, 1994). While some studies have focused on specific discourse features using quantitative methods, others, like Seedhouse and Harris (2010), looked inductively in the data for differences using a qualitative approach. However, no studies have so far tried to combine both of these approaches using a mixed methods design.

Methodology

This study employs a mixed methods approach that integrates both qualitative and quantitative methods to achieve a comprehensive understanding of the speaking features that differentiate IST proficiency levels (Johnson et al., 2007). This dual approach enhances the analysis by allowing for independent yet concurrent investigations: Rola Naeb conducted the quantitative analysis, while Andrew Harris focused on the qualitative conversation analysis (CA). It was only at the final stage of the project that the findings from these two methodologies were combined, providing a richer perspective on the data.

Exploring various perspectives on the research process can enhance understanding (Richards et al., 2012). The mixed methods design examines the data from two distinct angles: first, it operationalizes the grading criteria by defining fluency, grammatical complexity, range, and accuracy, enabling the coding of a transcript corpus across four performance bands. Second, it analyzes the audio recordings and transcripts inductively to identify differences in speaking features among test performances at these four levels.

The first research question asked: to what extent do the grading criteria distinguish between levels 5, 6, 7 and 8 in the ways described in the speaking band descriptors, in terms of Fluency and Coherence, Lexical Resource, Grammatical Range and Accuracy, and Pronunciation? A matching methodology was used to answer this research question. The descriptors (see Appendix 4) anticipate the differences which will emerge in ISTs at these different levels. The descriptors were operationalised and matched against the evidence in the recordings and transcripts.

Given the restricted scope and budget of the project, we investigated only the descriptors for Fluency and Grammatical Range and Accuracy, by adapting standard tests for these constructs (Ellis and Barkhuizen, 2005).

This approach is appropriate as it utilizes established measures that have demonstrated validity in assessing the targeted constructs. To evaluate accuracy, we analyzed the number of errors per 100 words.

To evaluate grammatical range and complexity, two distinct measures were utilized, reflecting constructs found in the band descriptors. For grammatical complexity, we modified Foster et al.’s (2000) measure of subordination. Additionally, we employed Yuan and Ellis’s (2003) measure to analyze the variety of verb forms used, thereby assessing the range of structures. Fluency was measured using an adapted version of Skehan and Foster’s (1999) pause measurement, specifically pause length per 100 words.
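At bottom, these measures are simple ratio computations over coded transcripts. The Python sketch below illustrates them with invented toy numbers; the function names and data are assumptions for demonstration, not the project's actual coding tools.

```python
# Illustrative sketch of the four quantitative measures described above.
# The helper names and toy figures are assumptions for demonstration only.

def errors_per_100_words(n_errors: int, n_words: int) -> float:
    """Accuracy: errors per 100 words (cf. Mehnert, 1998)."""
    return 100.0 * n_errors / n_words

def pause_per_100_words(total_pause_secs: float, n_words: int) -> float:
    """Fluency: summed intra-turn pauses (> 0.5 s) per 100 words."""
    return 100.0 * total_pause_secs / n_words

def complexity_ratios(n_a_units: int, n_as_units: int, n_words: int):
    """Complexity: A units (subordinate clauses) per AS unit and per word."""
    return n_a_units / n_as_units, n_a_units / n_words

def verb_form_range(verb_forms_used: list[str]) -> int:
    """Grammatical range: number of distinct verb forms used accurately."""
    return len(set(verb_forms_used))

# Toy candidate: 250 words, 12 errors, 20 s of pausing,
# 18 subordinate clauses over 40 AS units.
print(errors_per_100_words(12, 250))   # 4.8 errors per 100 words
print(pause_per_100_words(20.0, 250))  # 8.0 s of pause per 100 words
print(complexity_ratios(18, 40, 250))  # (0.45, 0.072)
print(verb_form_range(["present simple", "past simple",
                       "present perfect", "past simple"]))  # 3 distinct forms
```

Normalising per 100 words makes candidates who produce very different amounts of speech comparable on the same scale.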

The research team encountered significant challenges in adapting constructs and measures originally designed for L1 written texts to the analysis of L2 speaking data. To address this, they operationalized grammatical complexity by measuring subordination, utilizing Foster et al.’s (2000) concepts to code the transcripts. They calculated the total number of clauses and subordinate clauses based on AS units, although the original study lacked comprehensive details on unit boundaries and hesitation markers. To ensure inter-rater reliability (IRR), the team conducted three workshops in which raters independently coded transcripts, leading to the development of additional rules when initial IRR was unsatisfactory. Complexity was assessed using two sub-measures: the ratio of A units (subordinate clauses) to AS units and the ratio of A units to total words. After refining the coding approach, a satisfactory IRR of 90% was achieved in the final workshop.
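The IRR figures quoted in this section (e.g. 90%) are consistent with simple percentage agreement between two raters. A minimal sketch, assuming straightforward agreement over coding decisions (the report does not specify its exact IRR formula):

```python
# Sketch of percentage inter-rater agreement over coding decisions.
# Assumes two raters code the same items; the report's exact formula
# is not specified, so this is an illustrative assumption.

def percent_agreement(rater_a: list[str], rater_b: list[str]) -> float:
    """Share of coding decisions on which two raters agree, as a percentage."""
    if len(rater_a) != len(rater_b):
        raise ValueError("raters must code the same number of items")
    agree = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100.0 * agree / len(rater_a)

# Toy example: 10 clause-coding decisions, the raters disagree on one.
a = ["AS", "A", "AS", "AS", "A", "A",  "AS", "A", "AS", "AS"]
b = ["AS", "A", "AS", "AS", "A", "AS", "AS", "A", "AS", "AS"]
print(percent_agreement(a, b))  # 90.0
```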

The study adapted Yuan and Ellis’s (2003) measure of syntactic variety to assess grammatical range in terms of different verb forms, including tense, modality, and voice. Using the IELTS grammar preparation book published by Cambridge University Press (Hopkins and Cullen, 2007), a resource intended to cover the grammar needed for success in the test, a comprehensive list of targeted verb forms for the IST was created (Appendix 2). The measure involved counting the first accurate use of each verb form by candidates; inter-rater reliability (IRR) was established through two workshops, with the second workshop achieving a score of 80%.

Defining fluency is complex due to the various aspects it encompasses, such as speech rate, flow, smoothness, and the presence of pauses (Luoma, 2004; Koponen, 1995). This study utilized Skehan and Foster’s (1999) measures of candidate pause length, focusing on intra-turn pauses exceeding 0.5 seconds, to assess overall fluency. The inter-rater reliability (IRR) rate in the initial workshop was 98.9%. Ultimately, fluency was quantified as pause length per 100 words, since raw pause totals are directly affected by the total number of words a candidate produces.

In this study, accuracy was evaluated using two key measures: the total word count generated by each candidate during the test and the total number of errors made. Accuracy was determined by calculating the number of errors per 100 words produced by the candidates (Mehnert, 1998).

While self-corrected candidate errors were excluded from the count, this exclusion does not eliminate the inherent challenges for analysts in defining what constitutes an error (Ellis and Barkhuizen, 2005), particularly when these measures are applied to spoken interactional data. In the first workshop, IRR rates were as follows: word count 98.4%; errors 87.4%.

The second research question set out to identify the speaking features that distinguish tests rated at levels 5, 6, 7, and 8 from each other. To answer this, the methodology employed was Conversation Analysis (CA).

This methodology is effective for two main reasons: it connects the organization of interactions to the primary institutional goals, emphasizing the importance of rational design in language assessment. Additionally, the analysis is conducted in a bottom-up, data-driven manner, avoiding any preconceived theoretical assumptions or irrelevant contextual details.

The initial phase of Conversation Analysis (CA) is characterized by an unmotivated approach to observation, as noted by Psathas (1995). This stage emphasizes a willingness to uncover patterns and phenomena organically, rather than imposing preconceived notions or hypotheses regarding the speaking features that differentiate the levels.

After conducting an inductive database search, the next step involves identifying regularities and patterns related to the phenomenon, demonstrating that these are systematically produced and guided by participants. Following an initial exploratory phase, the analysis shifted to examining the dataset to address the specific research questions. This involved treating the different score bands as distinct collections and investigating the patterns and trends of individual speaking features, including their occurrence and distribution. We emphasized particular speaking features and analyzed their prevalence across the score bands. The identification of distinguishing speaking features between score bands incorporated informal quantification, using terms like ‘commonly’ and ‘overwhelmingly’ to convey the analyst's sense of frequency and distribution, while maintaining a qualitative approach rather than a formal quantitative analysis. As Schegloff (1993, 118) has stated, CA and ‘formal’ quantification “are not simply weaker and stronger versions of the same undertaking; they represent different sorts of accounts”, and in this study we employ them as such.

This project emphasizes the qualitative analysis of how candidate speaking features are integrated into the design of turns-at-talk. According to Conversation Analysis (CA), turns-at-talk consist of one or more turn construction units (TCUs). A TCU can consist of a single embodied action, such as a head nod, or a stretch of talk that delivers a complete action. A Transition Relevance Place (TRP) marks the end of a TCU and indicates the potential for a change of speaker: at a TRP, the current speaker can invite another speaker to take the floor, another speaker can self-nominate, or the current speaker can continue their turn. The design of candidate turns through TCUs is crucial for the qualitative analysis in this study.

Data information

The dataset for this study consisted of 60 audio recordings of IELTS speaking tests: 26 previously digitized and transcribed tests from an earlier project, along with 34 new tests selected by UCLES and sent digitally to Newcastle University. These audio recordings were transcribed by Andrew Harris, an experienced CA transcriber, adhering to strict conventions. The final dataset included the audio recordings of 60 ISTs and their transcripts, with 15 tests transcribed for each score band (5, 6, 7, 8+), covering recordings from 2004 to 2011. Quantitative measurements were conducted on fluency, accuracy, and grammatical complexity, while qualitative CA analysis addressed the second research question. The sample comprised 22 male and 38 female candidates, who came from different L1 backgrounds, as summarised in Table 1.

The following sections present the analytic findings of this study. The first outlines the quantitative analysis (2.1); the second presents the findings of the qualitative analysis (2.2).

Quantitative analysis

Descriptive analysis

Table 2 shows the descriptive statistics for the four measures. Looking at the mean scores for each measure, it is evident that:

1. The total number of words per test increased in direct proportion to the scores, band by band.

2. The percentage of errors per 100 words decreased as the scores got higher, band by band. This suggests that accuracy increases in direct proportion to score.

3. The measure of pause length relates to the construct of fluency. Pause length is highest at level 5 and lowest at level 8, following the expectations set out in the IELTS descriptors. In the raw data, there is a higher level of pause at level 7 than at level 6.

The analysis indicates that shorter pause length per 100 words corresponds to higher scores. Additionally, as scores rise, the standard deviation reveals a reduction in variability within the same performance band.

4. Both measures for grammatical complexity showed the same trend. While complexity is lowest for band 5, those at band 7 showed more complexity than those at band 8.

5. The same trend was seen in the grammatical range measure. While band 5 shows the lowest number of verb forms, those who scored 7 used a wider range of verb forms than those at band 8.


Table 2: Descriptive analysis across the four measures
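Descriptive statistics of this kind reduce to per-band means and standard deviations. A minimal sketch using Python's statistics module; the values below are invented for illustration and are not the report's figures:

```python
# Per-band descriptive statistics (mean and sample SD) for one measure.
# The toy errors-per-100-words values are invented, not the report's data.
from statistics import mean, stdev

errors_per_100 = {
    5: [9.0, 11.0, 10.0],
    6: [7.0, 8.0, 9.0],
    7: [5.0, 6.0, 7.0],
    8: [3.0, 4.0, 5.0],
}
for band, values in errors_per_100.items():
    print(f"band {band}: mean={mean(values):.2f}, sd={stdev(values):.2f}")
```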

Association between measures and score bands

In order to verify whether the differences in mean scores across the four levels are statistically significant, inferential statistics were used.

To explore differences in the amount of speech across band scores, measured as the total number of words spoken by the candidate, ANOVA was used.

It revealed that the differences were highly significant among the four groups, with the amount of speech increasing with higher scores, F(3, 56) = 13.18, p < 0.001.
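The one-way ANOVA F statistic used throughout this section is simply the ratio of between-group to within-group mean squares. A self-contained sketch with invented word counts (three tests per band here, rather than the study's fifteen):

```python
# Pure-Python one-way ANOVA F statistic: between-groups MS / within-groups MS.
# The toy total-word counts per band are invented for illustration only.

def one_way_anova_f(groups: list[list[float]]) -> float:
    n_total = sum(len(g) for g in groups)
    k = len(groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-groups sum of squares: weighted squared deviations of group means.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-groups sum of squares: squared deviations from each group's mean.
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)       # df1 = k - 1
    ms_within = ss_within / (n_total - k)   # df2 = N - k
    return ms_between / ms_within

bands = [
    [900.0, 950.0, 1000.0],    # band 5
    [1100.0, 1150.0, 1200.0],  # band 6
    [1300.0, 1350.0, 1400.0],  # band 7
    [1500.0, 1550.0, 1600.0],  # band 8
]
print(one_way_anova_f(bands))  # 80.0
```

With four bands and 60 candidates, the degrees of freedom would be k − 1 = 3 between groups and N − k = 56 within groups.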

Figure 1: Total number of words ANOVA

It is obvious from the boxplot that candidates at level 8 generated a higher word count than those at level 7, although some level 7 candidates produced more words than some level 8 candidates. Overall, however, level 8 candidates produced more speech than level 7 candidates.

An ANOVA test revealed that the differences among the four band scores were statistically significant, F(3, 56) = 30.6, p < 0.001.

Looking firstly at the raw measure (pause length), the differences among the four band scores were not significant, F(3, 56) = 1.04, p = 0.38.

Figure 3: Pause length and pause length per 100 ANOVA

The analysis of pause length per 100 words showed significant differences among the four band scores, F(3, 56) = 2.92, p = 0.04. Post hoc Tukey tests indicated that the only significant difference was between score bands 5 and 8 (p = 0.03).

Complexity was assessed through two key metrics: the ratio of A units to AS units and the ratio of A units to the total word count. ANOVA revealed that the differences across the four score bands in the ratio of A units to AS units were significant, F(3, 56) = 3.95, p < 0.01.

Figure 4: Complexity A to AS units ANOVA

ANOVA also showed that differences were significant for the ratio of A units to total number of words, F(3, 56) = 3.58, p < 0.01.

Figure 5: Complexity A to total number of words

ANOVA also revealed that the differences were significant among the four groups in the total number of verb forms used, F(3, 56) = 8.06, p < 0.01.

MANOVA

To prevent the overestimation of differences from multiple ANOVA tests, a one-way multivariate analysis of variance (MANOVA) was employed in this study. This approach assesses differences among independent groups across multiple continuous dependent variables, unlike a one-way ANOVA, which focuses on a single dependent variable. The MANOVA results indicated statistically significant differences in the four measures across the four IELTS band scores, F(3, 56) = 5.33, p < .001.

Qualitative analysis: Speaking features that have the potential to influence candidate scores

Answering the question: Inter-turn speaking features that can influence candidate scores

In many institutional settings, interactions are primarily structured around question and answer sequences, as noted by Drew and Heritage (1992). Specifically, parts 1 and 3 of the ISTs rely on a topic-scripted Q-A adjacency pair (Seedhouse and Harris, 2010). To achieve a high score, candidates must: a) comprehend the question posed; b) provide a relevant answer; c) identify the topic within the question; and d) elaborate on this topic. Although the band descriptors do not specifically address a candidate’s interactional ability, their performance in this area likely influences the examiner's overall impression, potentially affecting their score. This section examines the various ways candidates can respond to questions and how these responses may impact their scores.

2.2.1.1 Candidate requests repetition of the examiner’s question

There are a number of ways in which candidates can mark trouble with an examiner’s question Extract 1 below illustrates a candidate (C) explicitly requesting repetition of the examiner’s question (E)

46 E: what traditions for na::ming babies are there (.) in you::r

49 C:! >er uh uh< hh (0.6) please HHH HHH repeat the qu.hh.est.hh.ion

52 E: it’s okay< what kind of

In this examination interaction, the candidate's response to the examiner's question was marked by a lengthy pause and an extended floor-holding token, indicating uncertainty. The candidate then made a clarification request that included laughter and sound stretching, focusing on an unfamiliar person, which did not clearly relate to the examiner's question. The examiner promptly latched onto their initial question, repeating it, but after another pause, the candidate attempted to reformulate their request. However, the examiner acknowledged the candidate's difficulty with a slow, reassuring response and proceeded to the next question, adhering strictly to the examination guidelines. This shift suggests the examiner's reluctance to engage further with the candidate's unclear response, which is likely to impact the candidate's overall score negatively.

Candidates often struggle to provide appropriate answers to examiner questions, as demonstrated in extract 3. In this instance, the candidate's response is deemed unsuitable, leading the examiner to proceed to the next question.

95 E: hh would you prefer to pick (0.2) to bu::y a picture postcard

96 or take a photo of a new place

100 E: now I’m going to give you a topic

In the examiner's question, the candidate must select between two options: purchasing a postcard or taking a photo. After a lengthy pause, the candidate begins with a floor-holding token, followed by a prolonged and emphatic "YEs::." This inappropriate response is met with another long pause before the examiner smoothly transitions to part 2 of the test.

In extract 4 above, the candidate (score 5.0) provides a direct answer to the question, but it is monosyllabic and does not develop the topic inherent in the question in any way

144 E: hhh thank you (0.6) hhh d’you have? "many "neighbours?

146 C: hhh er I do (0.5) °(yeah ha)° [th]ere (has) abou:t er:: four-

147 E: [t-]

148 C: =to five of them? (0.2) but er ((name omitted))’s wife who is

149 of:: er our age yeah >they’re they’re< slightly elderly so::

In extract 5, the candidate, who achieved a score of 8.0, responds to the same question posed in extract 4 but enhances their answer by including extra relevant information. This additional detail not only addresses the question but also enriches the discussion, which could positively impact the overall score.

7 E: what subjects (0.2) are you studying

9 C: er::m: (0.3) I study chemistry hh eleven (0.3) erm:: (.) maths

10 eleven hhh and english (.) an::d erm (0.4) ((inaudible)) study

13 E: "why did you decide to study these subjects

15 C: uhm (.) because::e (.) I wi::ll go to- (0.2) I study hhh er::

16 (.) canadian hh er:: subjects I will go to:: hh erm canada

In extract 6, the candidate, who received a score of 5.0, has difficulty articulating a clear rationale for their choice of subjects in lines 15 and 16, resulting in a lack of coherent topic development.

238 E: [so] >we have< a lot of international travel now: (0.5)

242 (1.1) it oo- (0.3) takes lon::g ti:me (0.7) er::: (0.4) and for

243 example if you travel to another country hh er::: and it's a

244 takes th::e (0.5) about er:: (0.5) (airbus) (0.3) to europe

245 (0.5) for about ten:: hours

246 E: =[uhu ]

247 C: [time] hhh and er the also (0.2) the ticket is very

250 E: so the negative what is the negative impact (0.5)

certain birds that do have

92 special meaning for instance the> crow:: (0.5) is erm ((clears

93 throat)) (0.8) hh the crow: is considered to be a bad omen (.)

94 hh in most cases hhh bu:t (1) sometimes it’s also:: er:m:

95 (1.6) revered in the sense that er: (0.2) they believe that our

96 (0.7) there’s some sort of ancestral connection with the bird

97 and the spirit and hhh (0.3) yeah (0.9) so that’s one of the

In extract 20, the candidate achieves a score of 8.0 by effectively exploring the dual significance of the crow within their culture. The response is well structured and incorporates sophisticated vocabulary, such as "omen," "revere," and "ancestral," to convey deep cultural insights.

2.2.3 How clusters of speaking features distinguish tests rated at different levels from each other

In section 2.2, we explored various speaking features that could impact candidate scores. However, focusing on individual features in isolation may give a misleading interpretation of the data: in our view, no single speaking feature can reliably differentiate between test scores across different bands. Rather, it is clusters of speaking features that distinguish candidates within these bands. This is illustrated in the following extract.

52 E: is unhappiness:: (.) always a bad thing?

54 C: "not "necessarily (0.7) bu:t (.) you have to limit it (0.7)

55 like you can be: unhappy like on::e (0.8) a dear frie::nd or

56 someone that you know have passed away (.) you can you know (1)

57 have some grief (0.3) it’s something you know healthy for you

58 to grieve (1.2) but like it’s y’know it’s just a process and

59 then you have to go y’know get back (.) to life (.) and you

60 know (0.2) start finding your happiness again

The analysis of extract 22 (score 8.0) highlights the complexity of identifying specific speaking features that differentiate scores. While hesitation and repetition, such as the phrase "you know", are noted in the band descriptors as indicators of lower scores, their presence in this extract does not detract from the coherent and accurate development of the topic. In fact, the pauses and casual language may enhance the impression of a genuine, native-like conversation about a weighty topic like unhappiness, rather than signalling a lack of competence. This underscores the point that no single speaking feature can definitively determine a high or low score. A mixed methods approach is essential for a comprehensive understanding of how these features function in interaction, allowing a deeper exploration of their significance in response to specific questions.

This study's qualitative analysis provides insights into the complex relationship between speaking features and candidate scores. It suggests that clusters of speaking features, rather than isolated ones, differentiate high- and low-scoring candidates. The analysis emphasises speaking attributes related to fluency, grammatical range, complexity, and accuracy, as explored in the quantitative component of the research. Specifically, it examines the formulation and turn design of responses to the initial work-related questions in the IST, such as "Do you work or are you a student?" and "Do you enjoy the work?". The analysis begins with a detailed examination of a candidate scoring in band 5, followed by a candidate in band 8.

4 E: =d’you work or are you a student

6 C: er: actually I’m both I’m er:: (0.3) I study and I: er: work

7 hh
8 E: =.hh alright h so what work do you do:

10 C: hh er I’m avi- an aviation engineer I just graduate

12 E: hh (0.3) hh hh and d’you enjoy the work? hh

14 C: yeah (1) I enjoy it er well- (.) HHH

18 C: hh because I:: er:: (0.3) studied f:- about er fixing

19 aeroplanes hh and now I’m doing that

In extract 23, the candidate begins their response to the first question with a hesitation marker ("er:") before providing an answer. The examiner's question anticipates a response focused on either work or study; however, the candidate's reply diverges from this expectation.

The candidate's response, "actually I'm both", is a dispreferred response, deviating from the expected choice of either work or study; this is reflected in the hesitation marker at the beginning of the reply. The candidate then attempts to clarify the answer, saying "I'm er:: (0.3) I study", demonstrating self-monitoring of their speech and making a grammatical correction. This self-repair includes additional hesitation markers and pauses, which are typical of such instances. Analysing these interactional features in the light of the examiner's criteria suggests that this response could negatively impact the candidate's score. Although the speech rate is not too slow, the hesitation markers and self-repair may indicate problems with the continuity of the candidate's speech.

The examiner's next question prompts a sequential response from the candidate, with both participants' in-breaths marking the transition of speaking turns. The examiner's in-breath signals an intention to take the floor, which limits the length of the candidate's response. The candidate begins the answer with an in-breath and a hesitation marker, followed by a self-repair, identifying themselves as an aviation engineer, which may enhance their perceived status as a high achiever. However, the response includes a grammatical error ("I just graduate"), together with further hesitation markers and self-repairs that could detract from the overall score. Additionally, the candidate fails to elaborate on the answer, mirroring the brevity of their previous response.

The candidate's response to the third question reveals several features that may negatively affect their score. They begin with a confirmation but follow it with a lengthy pause, indicating disfluency. The answer includes a hesitation marker and a non-standard collocation, "I enjoy it er well-", which further detracts from the performance. The candidate again fails to elaborate, prompting the examiner to request a reason. In this follow-up, the candidate once more displays detrimental features, including two hesitation markers and a word search with a false start ("studied f:- about er fixing"), which also contains a non-standard grammatical construction. Despite these issues, the candidate does use appropriate tense structures in the response.

The analysis of a candidate who scored 5.0 reveals several speaking features that likely contributed to the low score, particularly in relation to fluency, grammatical range, and accuracy. Key issues include frequent hesitation markers, self-initiated repairs, false starts, word searches, and grammatical errors. Additionally, the candidate's responses are brief and fail to expand on the topics raised by the questions, even when prompted by the examiner.

The following extract, from a candidate who scored 8.5, illustrates how a radically different combination of speaking features can cluster to place the candidate in a high score band.

1 E: so in this first part of the test I’d just like to ask you

2 some questions about yourself [.hhh] erm let’s talk about
3 C: [okay]

4 E: =what you do hhh do you work or are you a student:?
5 C: =I’m a student in university? er::.
6 E: =and what subject are you studying

8 C: hh I’m studying business human resources

10 E: H "ah and why did you decide to study this subject

12 C: I’ve always loved business it’s something I’ve always wanted to

13 do:: since I was a little gir::l I used to pretend like I was a

14 business woman [.hHHH] $A.HH.nd huh huh HH sit around with
15 E: [°mhm°]

16 C: =a sui:::t n:: wear some glasses: n: pre[tend] like I’m doing
17 E: [°mhm°]

18 C: =statistics: so yea:h$ it’s something:: I’ve always wanted to

During the delivery of the examiner's first question in extract 24 above, the candidate orients to a potential turn transition: the candidate's acknowledgement token ("okay"), produced in overlap with the examiner's in-breath, shows that the candidate recognises the upcoming change of speakers and responds appropriately, demonstrating interactional fluency.

This can be seen as back-channelling, as opposed to the minimal turns frequent in low-scoring tests (see extract

The examiner asks the first question and the candidate responds promptly, latching onto the examiner's query. The answer is grammatically well structured and delivered at a native-like pace, showcasing the candidate's fluency. The ability to latch onto the examiner's question and respond smoothly, combined with grammatical accuracy and a native-like delivery rate, are features that can be assessed as positive markers of fluency.

The candidate then utters a floor-holder or hesitation marker. As in extract 12, the examiner overlaps and takes the floor, asking the next question in the sequence (line

The candidate's response is grammatically correct, direct, and fluent, and also works to establish their identity as a future high achiever. This performance displays speaking attributes that align with the higher score bands of the IST, particularly for fluency and grammatical accuracy.

Regarding the accuracy categories, the candidate has had limited opportunity to display grammatical range, since the examiner took the floor twice, curtailing further elaboration. Nevertheless, these initial exchanges in the IST already display speaking features indicative of higher proficiency levels, and the candidate's subsequent response clearly exemplifies their placement in the highest proficiency band analysed in this study.

The candidate's response showcases grammatical proficiency through the use of the present perfect tense and appropriate adjectives, delivered at a native-like speech rate. The second turn-constructional unit (TCU) concludes with a sound stretch and a slight drop in intonation, signalling the end of the turn. Notably, the candidate transitions smoothly into the next TCU without pausing, demonstrating considerable skill in grammatical accuracy. She uses varied sentence structures, such as "since I was" and "I used to", to construct a coherent narrative. Additionally, the inclusion of "like" contributes to her conversational style.

While some of these structures might count as inaccurate in written discourse, here they contribute to a colloquial tone in her delivery. This enhances the speaking features she exhibits, aligning her with high-scoring candidates.

At the end of the candidate's first TCU in line 14, her audible in-breath is overlapped by the examiner's quiet continuer ("°mhm°"), which keeps the current speaker in role. The candidate then produces a connective, incorporating an in-breath and laughter tokens ("$A.HH.nd huh huh HH"). This laughter comes across as natural and confident, serving as an interactional precursor to the forthcoming 'humorous' part of the narrative. This is formulated, in smile voice, as a list of things 'she used to wear' and

Research question 1

The quantitative analysis examined the grading criteria for levels 5, 6, 7, and 8, as outlined in the speaking band descriptors. The primary research question asked to what extent the distinctions specified in the band descriptors are reflected in tests rated at these levels.

Table 2 showed the descriptive statistics for the four measures. Looking at the mean scores for each measure, it is evident that:

1. The total number of words per test increased in direct relation to the scores across bands, supporting Brown's (2006a, p 84) finding that the quantity of speech varies notably among proficiency levels.

2. The number of errors per 100 words decreases as scores rise; that is, accuracy increases in proportion to score level, supporting Brown's (2006a, p 82) findings.

3. Fluency, measured as pauses per 100 words, improves with higher scores across the four bands: the ratio of pauses falls as scores rise. However, post hoc Tukey tests identified significant differences only between score bands 5 and 8. This echoes Brown's (2006a, p 80) observation that the ratio of pauses decreased with increasing scores, although the differences across levels were not statistically significant.

4. Both measures of grammatical complexity showed the same trend: while complexity is lowest for band 5, candidates at band 7 showed more complexity than those at band 8.

5. The measure of grammatical range reveals the same trend: band 5 exhibits the fewest verb forms, while candidates scoring 7 use a broader variety of verb forms than those at band 8. The findings for grammatical range and complexity therefore align with each other and with Brown's (2006a, p 82) conclusion that "Band 8 utterances were on average less complex than those of Band 7".

Overall, all four measures vary in the expected directions when bands 5 and 8 are compared, providing evidence of validity. However, the measures did not all progress linearly across the four bands.
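Measures 2 and 3 above are simple per-100-word normalisations of raw counts. As a minimal sketch of that arithmetic (the counts below are hypothetical illustrations, not figures from the study's data):

```python
def rate_per_100_words(event_count: int, total_words: int) -> float:
    """Normalise a raw count (errors, pauses, ...) to a per-100-words rate."""
    if total_words <= 0:
        raise ValueError("total_words must be positive")
    return 100.0 * event_count / total_words

# Hypothetical candidate: 540 words produced, 27 errors, 38 unfilled pauses.
errors_per_100 = rate_per_100_words(27, 540)   # accuracy measure -> 5.0
pauses_per_100 = rate_per_100_words(38, 540)   # fluency measure  -> ~7.04
print(errors_per_100, round(pauses_per_100, 2))
```

Normalising per 100 words lets candidates who talk different amounts be compared on the same scale, which is why the raw error and pause counts are not used directly.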

Research question 2

Speaking features which have the potential to impact upon candidate scores


The features identified as having the potential to influence candidate scores include the quality of answers to questions, hesitation markers, functionless repetition, identity construction, lexical choice, colloquial delivery, and the occurrence of trouble and repair. Additionally, Seedhouse and Harris (2010) noted that engaging with and developing a topic, constructing an argument, and turn length also play significant roles in assessment.

The analysis revealed localised trends and patterns specific to individual candidates' ISTs; however, these patterns did not recur consistently across score bands and so could not differentiate between them. None of the speaking features examined in section 2.2 above demonstrated simple, generalisable patterning across the score bands investigated that would allow a robust analytic finding to be drawn from the data. For every localised pattern that was identified, a significant number of counter-cases were found that contradicted it when viewed across the whole corpus.

Individual speaking features, then, do not effectively differentiate between bands; rather, clusters of features are indicative of a candidate's band level. An atomistic approach linking single features to band ratings was unsuccessful with this dataset. By contrast, a qualitative approach examining how groups of speaking features cluster at different band levels proved more successful, as detailed in section 2.2.3. Band ratings are therefore more closely associated with clusters of features than with individual characteristics.

Combining the answers to the research questions: Findings

The features identified as having the potential to influence scores include a candidate's ability to answer questions effectively, hesitation markers, functionless repetition, identity construction, lexical choice, colloquial delivery, and the occurrence of trouble and repair. Seedhouse and Harris (2010) additionally highlighted engaging with and developing a topic, constructing an argument, and managing turn length in part 2. The qualitative analysis indicates that establishing a clear, direct correlation between the features outlined in the band descriptors and the interaction observed in the IST is very difficult.

The analysis reveals that no single speaking feature distinctly differentiates between score bands in the IST; rather, it is combinations of assessable speaking features that contribute to the achievement of a specific score. Notably, the speaking features vary consistently from bands 5 to 8, offering validation for the IST.

These findings align with the quantitative evidence: variation in all four measures is observed in the expected directions when bands 5 and 8 are compared, supporting validity. However, not all measures exhibited linear progression across the four bands, although accuracy and fluency did improve in direct proportion to the scores.

Both measures of grammatical range and complexity revealed the same trend, with the lowest values at band 5 but higher complexity at band 7 than at band 8. Thus, for two of the four measures analysed, a clear linear progression across the four bands was not evident. Brown (2006a, p 83) similarly noted that while overall changes aligned with expectations across levels, the differences between some adjacent levels were not always as anticipated.

Both the qualitative and the quantitative analyses therefore reveal a broadly consistent relationship between speaking features and band descriptors, although individual performances may vary considerably across the four band descriptors. Notably, anomalies exist between adjacent bands, as evidenced by the finding that band 8 utterances can be less complex than those of band 7. This indicates that the speaking features of a band 8 performance may not align perfectly with the descriptors for that band. Both analyses suggest that band ratings are more closely linked to clusters of features than to isolated ones, echoing Brown's (2006a) finding that no single measure dominates the assessment; instead, a variety of performance features shape the overall impression of a candidate's proficiency. This mixed methods study supports the conclusion that an atomistic approach to identifying specific components of performance is unlikely to succeed.

Discussion, implications and recommendations

The absence of a direct correlation between individual band descriptors and specific features of candidate performance in IST ratings may stem from the unique discoursal organisation of the test. Notably, candidates' responses in parts 1 and 3 of the IST can exhibit variation that does not align perfectly with the criteria outlined in, say, the band 6 descriptor: features expected in a 6-rated performance may not manifest consistently across all segments of the test.

The institutional goal of the IST is for candidates' talk to be assessed as a speech sample according to the band descriptors. However, as the conversation analysis of the data demonstrates, candidates must continuously manage demanding discoursal requirements, dictated by the task structure and the topic-scripted question-answer adjacency pair.

As Seedhouse and Harris (2010) showed, candidates aiming for a high score must not only answer the question but also develop the topic it introduces. Our proposed 'discourse involvement hypothesis' suggests that during parts 1 and 3, candidates may be oriented primarily to the discoursal requirements of the topic-based question-and-answer format rather than to the rating scales.

Ideally, the examiner's prompts serve as a neutral platform from which candidates can display a range of lexical and grammatical structures, pronunciation, and discourse features, generating assessable qualities that align with specific descriptors. In practice, however, candidates must both answer the question directly and develop the topic sufficiently, balancing brevity and depth in their responses.

In the transcripts, candidates often do not display clearly defined characteristics of talk that map neatly onto the grading criteria, because of the discourse involvement load created by the topic-scripted QA adjacency pair. This is not a criticism of the structure; all talk inherently engages participants within a specific discoursal framework, and the topic-scripted QA adjacency pair does differentiate candidate performance effectively. Rather, the 'discourse involvement hypothesis' explains why it is difficult to correlate band descriptors directly with features of talk in the IST: the organisation of the discourse influences and mediates the talk, transforming institutional intentions into actual interaction.

Similarly, Seedhouse (2004, p 252) argues that interaction in the L2 classroom plays a crucial role in bridging pedagogy and learning: it transforms planned tasks into dynamic, ongoing talk, converting intended pedagogy into actual interaction.

We now consider how the discoursal requirements of the IST relate to the band descriptors. In parts 1 and 3 of the IST, the topic-scripted QA adjacency pair requires candidates to answer the examiner's questions, yet this expectation is not explicitly stated in the grading criteria. While the descriptors mention fluency, coherence, and topic development, the specific ability to answer questions remains unaddressed. It is unclear to what extent examiners factor candidates' fulfilment of these discoursal requirements into their ratings. Although Brown's study did not highlight this issue, it did reference the additional criterion of coping with diverse functional demands. If candidates are indeed partially evaluated on their participation in the IST's unique speech exchange system, this could significantly affect how they prepare. Kasper (2013) argues that real-world pragmatic competence may not transfer directly to the requirements of oral proficiency interviews (OPIs).

Indeed, real-world pragmatic competence can work against a candidate in the OPI when the assessment demands a different kind of pragmatic understanding, for instance regarding the critical focus of the interviewer's instructions. Learning how to participate discursively in the IST may therefore be essential, and may influence the ratings process; future research should explore examiner perspectives on this issue. Additionally, this study indicates that band ratings are more closely associated with clusters of features than with individual ones; however, there is currently no evidence that examiners orient to such clusters, as their perspectives were not included in the methodology.

We recommend incorporating the quality of candidates' discourse participation and interactional competence into the ratings process, specifically by including 'the ability to answer questions' in the band descriptors, as this is crucial to the discourse structure. According to the examiner instructions for part 1, examiners must use the exact words in the question frame, and if a candidate misunderstands, the examiner may repeat the question only once, which underlines the importance of understanding and responding accurately. Brown (2006b) notes that examiners already comment on whether candidates are 'on task' or 'answering the question' in relation to fluency and coherence, suggesting that our recommendation would formalise existing practice for examiners.

The Band Descriptors for Examiners state (Note i):

To be awarded a given band, candidates should display all of the positive features of the descriptor for that level. However, our quantitative and qualitative analyses indicate that rated performances do not consistently align with this requirement.

Furthermore, the manner in which speaking features are distributed in clusters means that it would be very difficult to follow this instruction in practice

Note: The following publications are not referenced as they are confidential and not publicly available:

Brown, A, 2006a, ‘Candidate discourse in the revised IELTS Speaking Test’, IELTS Research Reports Vol 6, IELTS Australia and British Council, Canberra, pp 71-89

Brown, A, 2006b, ‘An examination of the rating process in the revised IELTS Speaking Test’, IELTS Research Reports Vol 6, IELTS Australia and British Council, Canberra

Douglas, D, 1994, ‘Quantity and quality in speaking test performance’, Language Testing, 11, pp 125-144

Drew, P, and Heritage, J, 1992, ‘Analyzing talk at work: an introduction’ in Talk at Work: Interaction in Institutional Settings, eds P Drew and J Heritage, Cambridge University Press, Cambridge, pp 3-65

Ellis, R, and Barkhuizen, G, 2005, Analysing Learner Language, Oxford University Press, Oxford

Foster, P, Tonkyn, A, and Wigglesworth, G, 2000, ‘Measuring spoken language: a unit for all reasons’, Applied Linguistics, 21, pp 354-375

Fulcher, G, 1996, ‘Does thick description lead to smart tests? A data-based approach to rating scale construction’, Language Testing, 13, pp 208-238

Fulcher, G, 2003, Testing Second Language Speaking

Hopkins, D, and Cullen, P, 2007, The IELTS Grammar Preparation Book, Cambridge University Press, Cambridge

Johnson, R, Onwuegbuzie, A, and Turner, LA, 2007, ‘Towards a definition of mixed methods research’, Journal of Mixed Methods Research, 1, pp 112-138

Kasper, G, 2013, ‘Managing task uptake in oral proficiency interviews’ in Assessing Second Language Pragmatics, eds S Ross and G Kasper, Palgrave Macmillan

Lazaraton, A, 1998, An analysis of differences in linguistic features of candidates at different levels of the IELTS Speaking Test, report prepared for the EFL Division, University of Cambridge Local Examinations Syndicate

Lazaraton, A, 2002, A qualitative approach to the validation of oral language tests, Cambridge University Press, Cambridge

Luoma, S, 2004, Assessing Speaking, Cambridge University Press, Cambridge

Mehnert, U, 1998, ‘The effects of different lengths of time for planning on second language performance’, Studies in Second Language Acquisition, 20, pp 52-83

Psathas, G, 1995, Conversation Analysis: The Study of Talk-in-Interaction, Sage, London

Richards, K, Ross, S, and Seedhouse, P, eds, 2012, Research Methods for Applied Language Studies

Schegloff, EA, 1993, ‘Reflections on quantification in the study of conversation’, Research on Language and Social Interaction

Seedhouse, P, 2004, The Interactional Architecture of the Language Classroom: A Conversation Analysis Perspective, Blackwell, Malden, MA

Seedhouse, P, and Harris, A, 2010, ‘Topic Development in the IELTS Speaking Test’, IDP: IELTS Australia and British Council, Melbourne, pp 55-110

Skehan, P, and Foster, P, 1999, ‘The influence of task structure and processing conditions on narrative retellings’, Language Learning, 49, pp 93-120

Taylor, L, ed, 2011, Examining Speaking: Research and Practice in Assessing Second Language Speaking, Cambridge University Press, Cambridge

van Lier, L, 1989, ‘Reeling, writhing, drawling, stretching and fainting in coils: oral proficiency interviews as conversations’, TESOL Quarterly, 23, pp 480-508

Weir, CJ, Vidakovic, I, and Galaczi, ED, 2013, Measured Constructs: A History of Cambridge English Language Examinations 1913-2012, Cambridge University Press, Cambridge

Young, R, 1995, ‘Conversational styles in language proficiency interviews’, Language Learning, 45, 1, pp 3-42

Young, RF, and He, A, eds, 1998, Talking and Testing: Discourse Approaches to the Assessment of Oral Proficiency, Benjamins, Amsterdam

Yuan, F, and Ellis, R, 2003, ‘The effects of pre-task planning and on-line planning on fluency, complexity and accuracy in L2 monologic oral production’, Applied Linguistics

Operationalising the complexity measure

The following rules apply when coding the transcript:

“An AS unit is identified as an independent clause or sub-clausal unit together with any subordinate clause(s) associated with either” (Foster et al, 2000, p 365)

AS units are syntactic units, so the boundaries of an AS unit are those of a full utterance, including the main and subordinate clauses.

• An independent clause is a clause with at least a finite verb

C: [My surname and middle name is quite unusual]

An independent sub-clausal unit consists of one or more phrases that can be expanded into a complete clause by recovering ellipted elements from the surrounding context. That is, an independent sub-clausal unit may lack a verb, provided a full clause can be reconstructed by retrieving the ellipted elements.

E: so what’s your job then?

C: a nurse (one AS unit: an independent sub-clausal unit)

A units are subordinate clauses consisting of at least one finite or non-finite verb together with at least one other clause element (subject, object, complement, or adverbial). Unlike AS units, A units must have a verb.

1 C: /I’m basically/ /since I’m a freshman/ I do general nursing (two A units: although the first lacks a complement, it has a subject and a finite verb)

2 C: /when i use them for lon::g/: they sprain the eyes (one AS unit with one A unit and an independent clause)
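The bracket-and-slash notation in these examples suggests how a complexity ratio (A units per AS unit) could be read off a coded transcript. The sketch below is our own toy reading of that notation, assuming '[' opens an AS unit and each A unit is enclosed in a pair of '/' markers; the study's coding was done by hand, not by this function:

```python
def complexity_from_notation(coded_transcript: str) -> float:
    """Estimate A units per AS unit from bracket/slash coding.

    Assumptions (not the study's actual tooling): every '[' opens one
    AS unit, and A units appear as /.../ so slashes come in pairs.
    """
    as_units = coded_transcript.count("[")
    a_units = coded_transcript.count("/") // 2  # each A unit contributes 2 slashes
    if as_units == 0:
        raise ValueError("no AS units found in the coded transcript")
    return a_units / as_units

# Two AS units containing three A units in total -> ratio 1.5
coded = "[/I've always loved business/] [/it's something/ /I've always wanted to do/]"
print(complexity_from_notation(coded))  # 1.5
```

A real coder would also need to handle the '//' adjacent-boundary and single-slash cases seen in the hand-coded examples, which this simplified pair-counting ignores.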

Identifying boundaries of AS and A units

Falling or rising intonation accompanied by a (0.5) pause typically marks the boundary of a new AS unit. In cases of uncertainty, treat the utterance as a distinct A unit to credit its complexity.

Difficulty with identifying the boundary of AS units

C: [I’ve always loved business][ it’s something/ I’ve always wanted to do/ :: /since I was a little girl// I used to pretend like// I was business woman/]

The utterance can be interpreted in two ways: ‘since I was a little girl’ may be a subordinate clause within the preceding AS unit (a long-held desire since childhood), or it may initiate a new AS unit, crediting the speaker with greater complexity.

Difficulty with identifying the boundary of A unit

In the case of the non-finite progressive participle, there should be at least one other clause element for it to be counted as an A unit.


C: /the things I’ve recently done //that have put me into thinking (2 A units)

Whenever there is a series of independent clauses, rising or falling intonation plus a (0.5) pause breaks the AS unit and a new one begins.

A noun phrase without a verb is considered a separate AS unit if it is separated from the following phrase by falling intonation and a (0.5) pause.

C: [and some children] !(0.5) [they are playing the ball] (two AS units)

If an AS or A unit can only be identified as such by relying on inaudible material, we do not count it. We do count AS and A units containing inaudible material where the inaudible portion makes no difference to the analysis.

E: Can I see your ID please?

C: here’s (inaudible) (AS unit: we can still reconstruct the structure to a full clause regardless of the inaudible)

When counting ellipted clauses, we ask: ‘could we reconstruct a whole clause from the ellipted utterance?’

If yes, and we can relate it to what comes before it, we count the utterance as an AS unit.

C: (name omitted) I prefer my (inaudible)
E: (name omitted) is it?

C: yes (we count it as an AS unit)

If not, we don’t count it

C:[ /it is a hospital/ /located at ::: (0.4) erm (Kesin city/)]

C: yeah (we don’t count it as AS unit)

The rule above applies only if the ellipted forms are produced as full utterances, as in the example above.

However, if the ellipted utterances are produced with other clauses, we use the (0.5) pause to identify the boundary of the AS unit.

C: [no: (0.4) as much as my mother wanted to ]

C: [no: (0.6) as much as my mother wanted to ]

If the examiner's interjection is backchannelling and falls within a (0.5) gap, what the candidate says afterwards is integrated into the preceding utterance.

C: [/I’m basically// since I’m a freshman/ and (inaudible) (one AS unit, with 2 A units within it)

If the examiner's contribution is a complete turn, the structure is broken and what is said after it is a new clause

If there are three hesitation markers or more, they also break the boundary of an AS unit and we start a new one after the hesitation markers

C: It’s something I’d love and hhh so:: (.) it’s following my dream Two AS units
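As a rough illustration (not part of the original coding manual), the two boundary rules above — a timed pause of (0.5) or more, and a run of three or more hesitation markers — could be approximated mechanically over a transcript annotated with CA pause notation. This is a minimal sketch under that assumption; the function name, the hesitation-marker inventory, and the regular expressions are all hypothetical, and real coding also requires the intonation judgments described earlier, which no surface pattern can capture.

```python
import re

# Hypothetical sketch: split an annotated transcript line into candidate
# AS-unit segments. A timed pause of >= 0.5s, e.g. "(0.5)", or a run of
# three or more hesitation markers ("er", "erm", "uhm", "hh...") is
# treated as an AS-unit boundary, following the rules above.
PAUSE = re.compile(r"\((\d+(?:\.\d+)?)\)")
HESITATION = re.compile(r"(?:\b(?:er+m*|u+h+m*)\b[\s:.]*|hhh*[\s:.]*){3,}")

def split_as_units(utterance: str, min_pause: float = 0.5) -> list[str]:
    # First break at pauses of min_pause seconds or longer...
    parts, buf = [], []
    for token in re.split(r"(\(\d+(?:\.\d+)?\))", utterance):
        m = PAUSE.fullmatch(token)
        if m and float(m.group(1)) >= min_pause:
            if buf:
                parts.append("".join(buf).strip())
                buf = []
        else:
            buf.append(token)
    if buf:
        parts.append("".join(buf).strip())
    # ...then break each part at runs of 3+ hesitation markers.
    units = []
    for part in parts:
        units.extend(s.strip() for s in HESITATION.split(part) if s.strip())
    return units
```

On the coordination example above, `split_as_units` would keep the clauses joined by a short (0.4) pause in one segment but split at (0.5), mirroring the contrast between the two transcript examples.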

• False starts, repetition and self corrections are not counted

• Repetition is not counted if it is within the same turn

One difficulty in implementing the repetition analysis is that repetition is rarely exact

Where the candidate repeats a clause that is not exactly the same, and at some distance from the original, we do count it

C: I’ve always loved business it’s something I’ve always wanted to do since I was a little girl I used to pretend like

I was a business woman sit around with a suit wear some glasses pretend like I’m doing statistics so yeah it’s something I’ve always wanted to do as my dream

In this example, the repetition is in fact a summary and shows complexity, so we count it even though it occurs within the same turn

Where the candidate repeats the same clause but the two instances have different functions, we count both

C: I do love my name: : [e hh] hh I love my name because

In this example, the first instance is an answer to the question while the second is the start of an explanation so the two clauses have different functions

• We do not count fillers or use of phatic communion such as ‘you know, you see, well’, as they don’t show complexity and AS units are syntactic units

Coordinated clauses are counted as two AS units if there is a pause of (0.5) and the first is marked with rising or falling intonation

C: you have/ to go upstairs/ and you have /to take the stairs two independent clauses, 1 AS unit, 2 A units

C: [Last year I just graduated from bachelor of science in nursing] (0.5) and [right now I was hired by the national transplant institute] two AS units

If the two coordinated elements are verb phrases and the first carries falling or rising intonation followed by a pause of (0.5) or more, they are counted as separate units

If we have a subordinate clause followed by ‘and’ and another clause which we could relate to the main clause, we are counting them as two A units (subordinate clauses)

C: there’s also the variety of seafood because we do live in a gulf country and we basically live in the sea

When calculating the total number of clauses, the same rules above apply

Do not count repetition (exception listed above)

Do not count phatic communion

Do not count false starts.
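The exclusion rules above amount to a filtering step applied before clause totals are taken. A minimal sketch of that filter follows; the function name and the filler list are hypothetical, and it deliberately handles only the mechanical cases — the manual's exception (repetitions whose two instances serve different functions are counted) and false-start detection require human judgment and are not attempted here.

```python
# Hypothetical sketch: drop material the manual says not to count
# (phatic fillers, and exact within-turn repetitions) before clauses
# are tallied. Functional repetitions and false starts still need a
# human coder.
FILLERS = {"you know", "you see", "well", "i mean"}

def countable_clauses(clauses: list[str]) -> list[str]:
    kept, seen = [], set()
    for clause in clauses:
        norm = " ".join(clause.lower().split())
        if norm in FILLERS:
            continue  # phatic communion: not a syntactic unit
        if norm in seen:
            continue  # exact within-turn repetition: not counted
        seen.add(norm)
        kept.append(clause)
    return kept
```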

Verb forms for grammatical range

Present continuous: He is promoting
Past continuous: He was promoting
Used to (repeated action): He used to promote
Would (repeated action): He would promote
Present perfect simple: He has promoted
Present perfect continuous: He has been promoting
Past perfect simple: He had promoted
Past perfect continuous: He had been promoting
Will future: He will promote
Going to future: He is going to promote
Future continuous: He will be promoting
Future perfect simple: He will have promoted
Future perfect continuous: He will have been promoting
Will (willingness and habits): He will promote
Would (willingness, future in past): He would promote
Ought to: He ought to promote
Need: He needs to promote
Present simple passive: He is promoted
Present continuous passive: He is being promoted
Past simple passive: He was promoted
Past continuous passive: He was being promoted
Present perfect passive: He has been promoted
Past perfect passive: He had been promoted
Going to passive: He is going to be promoted
Will passive: He will be promoted
Modal passive: He could be promoted (also can, may, might, must, will, would, shall, should, ought to, need)

Verb + object + infinitive without to

Zero conditional: If you heat water to 100°C, it boils
First conditional: If I invest my money, it will grow
Second conditional: If I invested my money, it would grow
Third conditional: If I had invested my money, it would have grown

Reported speech (many verbs can be used to report instead of ‘said’)

Past simple reported: She said he promoted
Past continuous reported: She said he was promoting
Past perfect reported: She said he had promoted
Past perfect continuous reported: She said he had been promoting
Will reported: She said he would promote
Is going to reported: She said he was going to promote
Modal reported: She said he could promote (also can, may, might, must, will, would, shall, should, ought to, need)
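The inventory above could, very roughly, be operationalised with surface patterns when screening transcripts for grammatical range. The sketch below flags just a handful of the listed forms; the function name and regular expressions are hypothetical, and such pattern matching is only a crude proxy for the careful manual coding the report describes.

```python
import re

# Hypothetical sketch: detect a few of the verb forms listed above in a
# candidate's transcript, as a rough proxy for grammatical range.
FORM_PATTERNS = {
    "present perfect continuous": r"\b(?:has|have) been \w+ing\b",
    "past perfect simple":        r"\bhad (?!been\b)\w+(?:ed|en)\b",
    "going to future":            r"\b(?:is|are|am) going to \w+\b",
    "present simple passive":     r"\b(?:is|are) \w+(?:ed|en)\b",
}

def grammatical_range(transcript: str) -> set[str]:
    text = transcript.lower()
    return {name for name, pattern in FORM_PATTERNS.items()
            if re.search(pattern, text)}
```

Note the limitation: regular-form suffixes like `-ed`/`-en` miss irregular verbs ("He had grown"), which is one reason the study relied on human coders rather than automatic tagging.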

Transcription conventions

A full discussion of Conversation Analysis (CA) transcription notation is available in Atkinson and Heritage (1984)

Punctuation marks are used to capture characteristics of speech delivery, not to mark grammatical units

[ indicates the point of overlap onset

] indicates the point of overlap termination

= ‘equals’ signs indicate latching: the second turn follows the first with no discernible pause. One is placed at the end of the first speaker's turn and another at the beginning of the next speaker's adjacent turn

(3.2) an interval between utterances (3 seconds and 2 tenths in this case)

(.) a very short untimed pause

word underlining indicates speaker emphasis

e:r the::: colons indicate lengthening of the preceding sound

- a single dash indicates an abrupt cut-off

? rising intonation, not necessarily a question

! an animated or emphatic tone

, a comma indicates low-rising intonation, suggesting continuation

. a full stop (period) indicates falling (final) intonation

CAPITALS especially loud sounds relative to surrounding talk

° ° utterances between degree signs are noticeably quieter than surrounding talk

↑ ↓ indicate marked shifts into higher or lower pitch in the utterance following the arrow

> < indicate that the talk they surround is produced more quickly than neighbouring talk

( ) a stretch of unclear or unintelligible speech

((inaudible 3.2)) a timed stretch of unintelligible speech

(guess) indicates transcriber doubt about a word

.hh speaker in-breath

hh speaker out-breath

hhHA HA heh heh laughter transcribed as it sounds

→ arrows in the left margin pick out features of especial interest

non-English words are italicised and followed by an English translation in double brackets; inaccurate pronunciation of an English word is marked in square brackets

[æ] phonetic transcriptions of sounds are given in square brackets

< > indicate that the talk they surround is produced slowly and deliberately

(typical of teachers modelling forms)
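Since several of the coding rules hinge on timed intervals like (0.5), a tiny helper for extracting them from CA-notated lines may clarify how the notation supports quantitative measures such as pause totals. This is a sketch with a hypothetical function name; untimed pauses "(.)" carry no number and are ignored.

```python
import re

# Hypothetical sketch: pull timed intervals such as "(3.2)" or "(0.5)"
# out of a CA-transcribed line, e.g. to total pause time per turn.
def timed_pauses(line: str) -> list[float]:
    return [float(m) for m in re.findall(r"\((\d+\.\d+)\)", line)]
```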
