Iranian Journal of English for Academic Purposes

Differential Impacts of State and Islamic Azad University ELT Professors’ E-Diagnostic Assessment on EFL Undergraduates’ Collaborative Writing: Teaching Experience and Affiliation as Moderators (Research Paper)

Document Type: Original Article

Authors
Islamic Azad University, Karaj Branch

Article Title (translated from Persian)

The Differentiating Impact of E-Diagnostic Assessment by English Language Professors at State and Islamic Azad Universities on EFL Students' Collaborative Writing Development: The Moderating Roles of Professors' Teaching Experience and University Affiliation

Authors (translated from Persian)

Hajieh Ghalajian Moghadam
Natasha Pourdana
Kobra Tavassoli
Islamic Azad University, Karaj Branch
Abstract (translated from Persian)

Advanced educational technology recommends that English language instructors use e-diagnostic assessment to provide timely, personalized feedback and enhance language learning, particularly in collaborative settings. This study cross-examined the impact of e-diagnostic assessment by ELT professors at Iranian State and Islamic Azad universities on EFL students' collaborative writing, with the professors' teaching experience and university affiliation as moderators. In addition, the strength of the association between teaching experience, university affiliation, and the ELT professors' e-diagnostic assessment was empirically examined. To this end, from 108 ELT professors who had previously participated in a large-scale survey, two novice and two experienced professors from State universities, as well as two novice and two experienced professors from Islamic Azad University branches, were purposefully selected to provide e-diagnostic assessment to 160 EFL students over one academic semester. The students' collaborative writing underwent diagnostic assessment for eight weeks on the Google Meet platform. Analysis of the collected data showed that receiving continuous, weekly e-diagnostic assessment improved the EFL students' collaborative writing. Moreover, a strong association between the ELT professors' teaching experience, university affiliation, and successful e-diagnostic assessment was confirmed, with the improvement in the students' collaborative writing serving as a proxy for this performance. These findings highlighted the importance of supplementing university instructors' in-service courses with training in language assessment methods, and the benefits of integrating technology into continuous assessment to enhance language learning.

Keywords (translated from Persian)

ELT professors
E-diagnostic assessment
University affiliation
Teaching experience
Collaborative writing

Differential Impacts of State and Islamic Azad University ELT Professors’ E-Diagnostic Assessment on EFL Undergraduates’ Collaborative Writing: Teaching Experience and Affiliation as Moderators

Hajieh Ghalajian Moghadam

Natasha Pourdana*

Kobra Tavassoli

Research Paper                                             IJEAP- 2408-2074

Received: 2024-08-18                          Accepted: 2024-10-31                      Published: 2024-12-20

 

Abstract: Advancements in educational technology promote the use of e-diagnostic assessment, which provides timely, personalized feedback that enhances language learning outcomes, especially in collaborative settings. This study critically compared the impacts of State and Islamic Azad University (IAU) ELT professors’ e-diagnostic assessment on EFL undergraduates' collaborative writing moderated by their teaching experience and affiliations. To do so, a cohort of novice (N = 4) and experienced (N = 4) ELT professors from State and Islamic Azad Universities were purposefully selected to cross-examine their relative professional assessment practice. They provided a session-wise e-diagnostic assessment of 160 EFL undergraduates’ collaborative writing performance for eight weeks on the Google Meet platform. Data analysis suggested that regular e-diagnostic assessments improved EFL undergraduate collaborative writing, moderated by ELT professors’ teaching experience and affiliation. Accordingly, State University ELT professors with less teaching experience could outperform their experienced counterparts in e-diagnostic assessment practice, and so did the experienced IAU ELT professors to their novice colleagues. Also, strong associations were found among ELT professors’ teaching experience, university affiliation, and e-diagnostic assessment, with EFL undergraduates’ collaborative writing improvement serving as their proxy. The findings highlighted the importance of supplementing in-service language assessment training to the curricula of university teacher education programs and the benefits of integrating technology into assessment to enhance language learning.

Keywords: Affiliation, Collaborative Writing, E-Diagnostic Assessment, ELT Professors, Teaching Experience

Introduction

In line with many education researchers, Berry et al. (2017) believe that effective assessment is crucial to learning a second or foreign language. If teachers do not achieve professionalism in assessment, they cannot monitor whether learning has occurred, arguably the first and foremost problem in any teaching and learning context (Farrell, 2013; Jin, 2010; Lam, 2015; López Mendoza & Bernal Arandia, 2009; McNamara et al., 2002; Vogt & Tsagari, 2014). English as a Foreign Language (EFL) classrooms are embracing new approaches to assessment and no longer rely solely on written tests. Today's educators recognize the need for reliable and valid assessments that reflect student progress and individual strengths. This focus on effective assessment empowers students to take charge of their learning, making the journey more engaging. However, recent studies have revealed that teachers remain under-educated in classroom assessment (Alavi et al., 2024; Azizpour et al., 2023; Campbell-Evans, 2013; Toshiyuki & Mei-Shiu, 2024). Furthermore, because assessment is regarded as a device to measure learning outcomes, scholars such as Taylor (2013) have repeatedly called for L2 teachers' mastery of various assessment methods and skills.

As the demand for effective teaching methods increases, educators are constantly exploring innovative assessment practices to gauge student progress and tailor instruction accordingly (Alderson et al., 2014). Surveying Iranian EFL teachers, Ahmadi and Mirshojaee (2016) revealed many issues, including the effect of testing on teaching and the provision of diagnostic feedback to students. According to Alderson (2005), diagnostic assessment, as an alternative to traditional tests, is designed to track the points of strength and weakness in a learner's knowledge and use of language. It is especially helpful for collaborative writing (CW; Storch, 2019), where students produce a text together, a skill that is crucial for communication but tricky to assess.

As Ismail et al. (2019) defined it, collaborative language learning (CLL) encompasses all the reciprocal attempts made by pairs or groups of students to co-construct a message and convey it in the L2. CW, in turn, is a form of writing in which students work together to produce a single text, and it fosters the essential communication and teamwork skills valued by employers (Abrahams & Reiss, 2015). In collaborative writing tasks, L2 learners collaborate intensively at various steps of the writing process, such as drafting, editing, reviewing, and revising, and receive assistance from their teacher and/or classmates (Storch, 2019). Such collaboration therefore involves both the process and the product of learning. Electronic diagnostic (e-diagnostic) assessment can unlock the complex process of CW and provide valuable insights to EFL educators seeking to enhance collaborative writing instruction. In this study, therefore, selected EFL undergraduates were grouped to collaborate on writing tasks while their performances were subject to the participating ELT professors' regular diagnostic assessment in virtual classrooms.

The COVID-19 pandemic accelerated the adoption of e-learning platforms in language education, necessitating a closer examination of L2 teachers' performance in e-diagnostic assessment (Pourdana, 2022) and of the integration of e-diagnostic assessment and collaborative learning in heterogeneous classrooms (Rafi et al., 2022). While several studies have explored various aspects of e-learning in L2 educational settings, there is a shortage of research comparing the diagnostic assessment practices of instructors from different types of universities, including the two most popular in Iran: State and Islamic Azad universities. Despite the diversity of opinions among Iranian high school graduates who plan to enter either of these universities, a common argument concerns the qualifications and credentials of their faculty members.

To fill the gap in the literature, this study investigates how the e-diagnostic assessment performances of ELT professors at State and IAU universities could have differential impacts on EFL undergraduates' collaborative writing. We were particularly interested in the moderating role of ELT professors' teaching experience and affiliations and their associations with successful e-diagnostic assessment performance. We hypothesized that ELT professors with more years of experience might better understand their students' needs and use more effective diagnostic assessments to tailor their teaching. We also assumed that ELT professors at IAU might perform more rigorous and regular diagnostic assessments as a result of receiving more intensive and systematic in-service training courses and workshops on general assessment methods.

Professors with extensive teaching experience may possess a deeper understanding of how to integrate diagnostic assessment into their curriculum for maximum impact on collaborative writing skills (Farrell, 2014). Additionally, university affiliation, in this case, State University versus Islamic Azad University (IAU), might introduce differences in teaching styles, resources, and student demographics that could influence the effectiveness of e-diagnostic assessments. To address the objectives of the present study, the researchers raised the following research questions:

Research Question One:  Does the e-diagnostic assessment performed by Iranian State and Islamic Azad University ELT professors of different teaching experiences have any differential impacts on EFL undergraduates’ collaborative writing performances?

Research Question Two: Is there any association between Iranian State and Islamic Azad University ELT professors’ teaching experience, e-diagnostic assessment performances, and EFL undergraduates’ collaborative writing task performances?

Literature Review

Diagnostic Assessment

Assessment is a key professional responsibility of teachers: it enables them to make more effective judgments about what learners know or can do, to decide whether their teaching has been successful, and to select what to do next to accelerate learners' improvement (Tsagari et al., 2018). According to Black and Wiliam (1998), assessment is a continuous process in which learners' development is monitored and tracked by obtaining, analyzing, recording, and applying data on their performance in educational activities. Also, according to Eyal (2012), an effective assessment process involves making sound judgments, usually based on the researchers' interpretation, through both "systematic and non-systematic collections of any information that may contribute to understanding the learners' place in terms of knowledge acquisition" (p. 11).

In recent years, assessment literacy (AL) has been the subject of much concern and debate. Since assessment is dynamic and multifaceted, it demands teachers' close attention and extensive inquiry. AL plays a significant role in optimizing teacher roles and leading teachers to professionalism, which ultimately serves their final purpose: 'decision-making' for the sake of learners' learning (Kremmel & Harding, 2020). Arguing against eliminating standardized tests, Kazemi et al. (2022) believe that multiple-assessment approaches are more appropriate for discriminating learners' learning based on the specific curriculum they have covered, for which alternative assessment procedures are necessary.

Alternative assessment requires learners to show what they can do. In other words, learners are evaluated on what they can integrate, analyze, or produce rather than on what they can recall, combine, or reproduce. Hidri (2021, p. 23) summarized alternative assessments as "self-assessment, portfolio assessment, student-designed tests, learner-centered assessment, dynamic assessment, diagnostic assessment, academic projects, and classroom presentations". He also introduced self-evaluation, checklists, logs, video/audio tapes, journals, and teacher observations as recent methods of alternative assessment.

As one of the most effective methods of alternative assessment, diagnostic assessment identifies learner strengths and weaknesses to inform instruction (Rafi et al., 2022). It focuses on the learning process, not just the outcome, and provides immediate, low-pressure feedback (Alderson et al., 2014; Jang & Wagner, 2014). This feedback includes setting goals (feed-up), addressing performance (feedback), and supporting future progress (feed-forward) (Brookhart, 2008). Standardized diagnostic tests can be a worthwhile way of discovering the sources of learners' errors. Nonetheless, few diagnostic tests have been developed so far, and most of those are computer-based and not well suited to classroom contexts for diagnosing students' weaknesses.

Diagnostic feedback, which pinpoints error causes and suggests remedies (Nikmard & Tavassoli, 2023; Pourdana & Tavassoli, 2022), goes beyond regular feedback. It offers insights into learners' thinking processes and suggests improvement strategies (Alderson et al., 2014; Nikmard et al., 2023; Nour et al., 2021). These strategies can be cognitive (information processing) or metacognitive (self-regulation). In short, diagnostic feedback empowers language learners by helping them understand how they learn. Harding et al. (2015) provided a framework for diagnostic assessment with the following process steps: (1) Listening/Observation, (2) Initial assessment, (3) Hypothesis testing, and (4) Decision-making (Figure 1).

Figure 1

 The Diagnostic Assessment Process (Harding et al., 2015, p. 319)

 

As Figure 1 illustrates, in the Listening/Observation step, teachers observe learners' performance (e.g., through classroom observations, tests, and conferences) to identify general language ability. In the Initial Assessment step, teachers combine observations with their experience to pinpoint learning difficulties, while in the Hypothesis Testing step, teachers consult with other experts to verify their initial assessment. Finally, in the Decision-Making step, teachers diagnose the cause of learner strengths/weaknesses and choose appropriate diagnostic feedback (e.g., Lei & Medwell, 2021; Pourdana & Rad, 2017). This framework is commonly used in various areas such as the formative assessment of collaborative learning practice, especially via writing skills (Rafi et al., 2022; Rafi & Pourdana, 2023).

From a different perspective, various attributes of L2 learning on real and web-based platforms such as Google Meet and Zoom have emerged as popular topics of educational research during and after the COVID-19 pandemic, including L2 teachers' performance in e-diagnostic assessment (i.e., diagnostic assessment in online interfaces) (Pourdana, 2022), the integration of e-diagnostic assessment with collaborative learning in mixed-ability classrooms (Rafi et al., 2022), the management of computer-mediated diagnostic assessment of collaborative writing (Kazemi et al., 2022), and the development of effective virtual resources for university professors (Hidri, 2021). However, the gap in the literature on ELT professors' professional development in alternative assessment demands further research. The case becomes more critical in a vast country like Iran, with its very large population of university students.

Collaborative Writing in L2 Contexts

Writing is essential for academic achievement and language learning. Effective writers integrate language skills and self-regulation while actively using teacher/peer feedback. Modern L2 classes emphasize both thinking and collaboration, with teachers guiding students through the writing process to ensure clear communication (Keshanchi et al., 2022). L2 writing tasks involve various skills like comprehension, manipulation, production, and interaction to convey meaning (Ayatollahi et al., 2017).

In collaborative writing (CW) tasks, L2 learners collaborate and interact substantively at all stages of the writing process, such as drafting, editing, reviewing, and revising, and receive assistance from the teacher and/or peers (Storch, 2019). Such collaboration therefore integrates both the process and the product of language learning. CW promotes critical thinking, audience awareness, and reflection on language use as learners work together (Pourdana, 2023). Learners benefit from negotiation, feedback, and scaffolding (Pourdana & Asghari, 2021). Collaboration likely fosters a sense of accomplishment and positive attitudes (Basterrechea & Gallardo-del-Puerto, 2023), and it is considered a deeply engaging modality. Ample research shows the effectiveness of CW for actively engaging learners at all proficiency levels (Bonner et al., 2023; Pourdana et al., 2014). A common argument is that orchestrating meaning-focused and language/form-focused CW tasks can encourage critical thinking (Selcuk & Jones, 2022). In this study, therefore, the selected EFL undergraduates were paired to collaborate on writing tasks while their performances were subject to the ELT professors' regular e-diagnostic assessment in virtual classrooms.

Methodology

Design

In this study, the researchers’ objectives were to explore (1) the differential impacts of e-diagnostic assessments performed by State and Islamic Azad University ELT professors on EFL undergraduates’ collaborative writing task performances, (2) the possible association among the ELT professors’ teaching experience, affiliation, and EFL students’ collaborative writing achievement as a proxy to their successful e-diagnostic assessment practices, and (3) the role of ELT professors’ teaching experiences and their university affiliations as moderators. The researchers employed a mixed-methods research design to examine the effects of the e-diagnostic assessment on EFL undergraduates’ collaborative writing performance while exploring the moderating roles of ELT professors’ teaching experience and university affiliation.

Participants

In a large-scope survey by Ghalajian et al. (in press), 108 Iranian State and Islamic Azad University ELT professors were non-randomly selected through convenience sampling to make the best use of their voluntary participation. However, the collected survey data are beyond the scope of this study. Later, out of the pool of 108 ELT professors, four State and IAU ELT professors agreed to teach four groups of EFL undergraduates and to provide them with diagnostic assessment of their collaborative writing performances in virtual classroom settings. The demographic information of the survey professors is provided in Table 1.

Table 1

Demographic Information of the Survey ELT Professors

Factor                      Level                                           F     %
Gender                      Male                                            35    32.4
                            Female                                          73    67.6
Age                         25-30                                           10     9.3
                            31-40                                           56    51.8
                            41-50                                           20    18.5
                            51-60                                           19    17.6
                            > 60                                             3     2.8
Residence                   Rasht                                           13    12.0
                            Lahijan                                          7     6.5
                            Karaj                                           51    47.2
                            Tehran                                          17    15.7
                            Others                                          20    18.5
Educational Degree          Ph.D. Candidate                                 65    60.2
                            Ph.D. Holder                                    43    39.8
Major                       Teaching English as a Foreign Language (TEFL)   93    86.1
                            English Language and Literature                  7     6.5
                            Pure Linguistics                                 8     7.4
Teaching Courses            BA                                              94    87.0
                            MA                                              28    25.9
                            Ph.D.                                           27    25.0
Affiliation                 Islamic Azad University, Rasht Branch            4     3.7
                            University of Guilan                            28    26.7
                            Islamic Azad University, Karaj Branch           43    39.1
                            Islamic Azad University, Tonekabon Branch        2     1.9
                            University of Tehran                            25    24.1
                            Islamic Azad University, Lahijan Branch          6     4.5
Teaching Experience         < 3 years (Novice)                              55    50.9
                            > 3 years (Experienced)                         53    49.1
Pre-service Assessment      Taken                                           73    67.6
  Training                  Untaken                                         35    32.4
Professional Assessment     Taken                                           78    72.2
  Training                  Untaken                                         30    27.8

To sum up the data in Table 1, 32.4% of the ELT professors (N = 35) were male and 67.6% (N = 73) were female, and the majority (51.8%) of both genders were between 31 and 40 years of age. The largest group of the 108 professors (N = 51, 47.2%) resided in Karaj, Alborz Province. Fifty-three (49.1%) ELT professors taught at State universities and 55 (50.9%) at IAU branches. Moreover, about half of the professors (N = 55, 50.9%) had less than three years of teaching experience (hereafter, the novice), and the rest (N = 53, 49.1%) had more than three years of experience in their teaching profiles (hereafter, the experienced). Finally, the majority of the survey participants had taken pre-service (N = 73, 67.6%) and professional assessment training (N = 78, 72.2%), respectively.

Out of this large pool of participating professors, four were purposefully selected to provide session-wise e-diagnostic assessment of EFL undergraduates' collaborative writing tasks. The two participating State University ELT professors were female, from the University of Tehran, with six months (the novice) and 11 years (the experienced) of teaching experience. The two IAU ELT professors, one male and one female, were from Islamic Azad University, Karaj Branch, with one year (the novice) and four years (the experienced) of teaching experience.

Also, 160 EFL undergraduates voluntarily participated in this study. They were majoring in English Translation and had already passed the mandatory four-unit elementary and advanced English writing courses at State and Islamic Azad universities. The participants, non-randomly selected through convenience sampling, were 103 females (64.4%) and 57 males (35.6%), with an age range of 22 to 32 (M = 25, SD = .231). To determine their level of language proficiency, they took the 2001 version of the Oxford Placement Test (OPT). Their scores fell between 40.00 and 65.00 (B1) in the OPT scoring system.

The participants received session-wise e-diagnostic assessment of their collaborative (pair-work) writing performances for eight weeks. The collected collaborative writings were rated twice, separately, by the ELT professors following the IELTS writing rating guidelines, which assign each essay four independent ratings from 1.00 to 9.00. The 160 selected EFL student participants were later divided into four subgroups of equal size (N = 40):

  1. 40 State University students who received the e-diagnostic assessment on their collaborative writing by an experienced ELT professor (ESS)
  2. 40 State University students who received the e-diagnostic assessment on their collaborative writing by a novice ELT professor (NSS)
  3. 40 Islamic Azad University students who received the e-diagnostic assessment on their collaborative writing by an experienced ELT professor (EAS)
  4. 40 Islamic Azad University students who received the e-diagnostic assessment on their collaborative writing by a novice ELT professor (NAS)

In addition to student participants, four State and Islamic Azad University ELT professors took part:

  1. An experienced State University ELT professor who provided the e-diagnostic assessment of EFL undergraduates’ collaborative writing on Google Meet;
  2. A novice State University ELT professor who provided the e-diagnostic assessment of EFL undergraduates’ collaborative writing on Google Meet;
  3. An experienced IAU ELT professor who provided the e-diagnostic assessment of EFL undergraduates’ collaborative writing on Google Meet;
  4. A novice IAU ELT professor who provided the e-diagnostic assessment of EFL undergraduates’ collaborative writing on Google Meet.

Instruments

Google Meet™ Online Platform

Google Meet™, formerly known as Hangouts Meet, is a secure video-conferencing service developed by Google that allows up to 100 participants per meeting with a Google account. Designed for virtual meetings, it lets users share video, text messages, photos, and their screens, while every participant can mute their own audio/video feed. The platform facilitates communication between classmates and teachers, enabling instruction, group discussion, and a sense of community in a virtual classroom setting, so that people can connect and collaborate from anywhere in the world. At the outset of the study, the researchers introduced the Google Meet application to the participating professors, who in turn briefed their students on signing in.

Oxford Placement Test (Allen, 2001)

The Oxford Placement Test (OPT; Allen, 2001) is a standardized test designed to assess the language proficiency of non-native speakers of English. It aligns with the Common European Framework of Reference (CEFR) and reports scores on a scale from Pre-A1 to C2. In this study, the OPT was administered to the EFL undergraduates to determine their pre-intervention language proficiency. Given the participants' obtained scores (40.00 to 65.00), their level of language proficiency was determined to be B1.

Collaborative Writing Tasks

After performing a collaborative writing task as the pretest of the study, the EFL undergraduates were required to perform selected writing tasks collaboratively for seven sessions before attending a writing posttest in the last session of the course. The prompts were selected from Cambridge IELTS Book 12 (2017). The learners were allocated 45 minutes to write each essay. The e-diagnostic assessment of the submitted assignments was provided on the Google Meet online platform.

 

IELTS Writing Rating Rubric

In the e-diagnostic assessment process, the collected collaborative writings were rated following the IELTS Writing Band Descriptors: Task 2 (Public Version). Using the IELTS writing rubric, each script could be assigned a score in the range of 1.00-9.00 on each of four domains: task response, coherence and cohesion, lexical resource, and grammatical range and accuracy, each of which requires the learners to meet a specific set of requirements.
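As a rough illustration of how the four domain scores combine, an overall band can be computed by averaging them; the rounding-to-the-nearest-half-band step in this sketch follows common IELTS practice and is an assumption, not a procedure reported in this paper.

```python
def overall_band(task_response, coherence_cohesion, lexical_resource, grammar):
    """Average the four IELTS Writing domain scores (1.00-9.00 each) and
    snap to the nearest half band (assumed rounding convention)."""
    scores = (task_response, coherence_cohesion, lexical_resource, grammar)
    for score in scores:
        if not 1.0 <= score <= 9.0:
            raise ValueError("each domain score must be between 1.00 and 9.00")
    mean = sum(scores) / 4
    return round(mean * 2) / 2  # e.g., 5.625 -> 5.5

# Hypothetical mid-level collaborative essay
print(overall_band(6.0, 5.5, 6.0, 5.0))  # → 5.5
```

A band profile like the one above (6.0/5.5/6.0/5.0) would thus yield an overall 5.5, within the B1-B2 range typical of the participants in this study.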

Procedure

At the outset of the study, the volunteer ELT professors (N = 4) and the EFL undergraduates (N = 160) signed the consent form before receiving a four-hour online training workshop on the selection of diagnostic assessment strategies based on Alderson et al. (2014) and Nicol and Macfarlane-Dick (2006). The strategies included:

  • Asking the students about the type of support they would rather receive;
  • Asking the students to identify the exact parts of the writing task that were most difficult for them to perform;
  • Appreciating the strong points in their writing, such as an effective choice of words, fluency and coherence, complex structures, and so on;
  • Assisting the students in talking about the reasons for their incorrect responses.

The EFL undergraduates completed the 2001 version of the OPT. They were then divided into four groups of 40 who collaborated on writing tasks for eight weeks. The 80 EFL undergraduates at State universities were equally divided into the NSS and ESS groups, which received e-diagnostic assessment from a novice and an experienced ELT professor on Google Meet, respectively.

Similarly, the 80 EFL undergraduates at Islamic Azad universities were equally divided into the NAS and EAS groups, which received e-diagnostic assessment from a novice and an experienced ELT professor on Google Meet, respectively. Every session, each pair of students joined the Google Meet platform through a single Google account, followed the professor's instructions, collaborated on the writing task, and uploaded the completed writing assignment.

After collecting the assignments, the ELT professors provided feedback on grammatical, semantic, and discourse errors and highlighted the strong points in the written scripts. Moreover, writing strategies such as brainstorming, outlining, and using effective discourse markers were introduced to improve writing quality. The ELT professors returned the submitted assignments to the students, along with their diagnostic feedback, the following session. Students were required to collaboratively proofread their assignments following the scripted feedback. The ELT professors also rated the collaborative writing tasks separately following the IELTS Writing Band Descriptors: Task 2 (Public Version). The collected ratings of the first writing task (the pretest), the successive writing tasks in sessions 2 to 7, and the final writing task (the posttest) were subjected to thorough statistical analysis.

Data Analysis

To examine the research questions, the researchers checked the normal distribution of the OPT scores and the pre- and post-writing task ratings using One-Sample Kolmogorov-Smirnov tests. Once the normality of the data was confirmed, Pearson product-moment correlation coefficients were computed to examine the inter-rater reliability of the writing task ratings (Cronbach's α = .982, representing strong inter-rater consistency). Finally, inferential statistics were used to address the research questions, allowing the researchers to examine the main effect of e-diagnostic assessment as the independent variable and its possible interaction with the moderators (ELT professors' teaching experience and university affiliation). The quantitative data were analyzed with the Statistical Package for the Social Sciences (SPSS, Version 21).
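The inter-rater reliability step described above can be sketched in Python; the ratings below are invented placeholders (the actual analysis was run in SPSS), and the function is the standard product-moment formula rather than the authors' script.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two raters' ratings."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical IELTS-band ratings of the same five essays by two raters
rater1 = [5.0, 5.5, 6.0, 4.5, 6.5]
rater2 = [5.0, 6.0, 6.0, 4.5, 7.0]
r = pearson_r(rater1, rater2)
print(f"inter-rater r = {r:.3f}")
```

An r close to 1 (as the study's reported r = .901 for the pre-writing ratings) indicates that the two sets of ratings rank and space the essays very similarly.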

Ethical Considerations

In line with ethical principles in research, the researchers informed the ELT professors and EFL undergraduates about the purpose and significance of the study to secure their full consent and cooperation. Transparency was maintained throughout the research process, with no attempt to manipulate or coerce participants into producing the writing task output the researchers desired. Furthermore, confidentiality was ensured by safeguarding the participants' identities through the omission of any personal identifiers, as part of the researchers' commitment to ethical research conduct.

Results

To test research null hypothesis 1, the possible differential impacts of the e-diagnostic assessment performed by novice and experienced Iranian State and IAU ELT professors on EFL undergraduates' collaborative writing task performances were investigated over eight sessions on Google Meet. To decide on the appropriateness of parametric versus non-parametric statistical tests, the normal distributions of the OPT scores and the overall pre- and post-writing task ratings were examined using One-Sample Kolmogorov-Smirnov tests. The indices were non-significant (p = .960, .550, and .100, respectively), confirming the normality of the data. Next, a Pearson product-moment correlation coefficient was computed to examine the inter-rater reliability of the pre-writing ratings (r = .901, p = .021). Then, to examine the presence of any between-group differences in the EFL undergraduates' OPT scores, descriptive statistics and an independent-samples t-test were computed on the OPT scores.

Table 2

Descriptive Statistics: OPT Scores of EFL Undergraduates at State and Islamic Azad Universities

 

 

                           N     Mean    Std. Deviation
OPT Scores   NSS and ESS   80    42.05   5.35
             NAS and EAS   80    42.50   5.40

As Table 2 shows, EFL undergraduates at State and Islamic Azad universities performed similarly on the OPT (X̄State = 42.05 ± 5.35; X̄IAU = 42.50 ± 5.40). The significance of this difference was evaluated with an independent-samples t-test.

Table 3

Independent-Samples T-Test: Between-Group Differences of State and Islamic Azad University EFL Undergraduates on OPT Scores

 

                                      Levene's Test for            t-test for Equality of Means
                                      Equality of Variances
                                      F        Sig.                t       df       Sig. (2-tailed)
OPT   Equal variances assumed         .04      .83                 -.78    88       .43
      Equal variances not assumed                                  -.78    76.91    .43

As reported in Table 3, the detected between-group difference was insignificant (t (158) = -.78, p = .43 > α). The descriptive statistics of the pre-writing and post-writing task scores are presented in Table 4.
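As a hedged check, the t-test in Table 3 can be re-derived directly from the rounded summary statistics of Table 2 with scipy's `ttest_ind_from_stats`; because the inputs are rounded group summaries rather than the raw scores, the result is only an approximation of the SPSS output.

```python
from scipy import stats

# Summary statistics from Table 2 (State vs. IAU undergraduates' OPT scores).
t_res = stats.ttest_ind_from_stats(
    mean1=42.05, std1=5.35, nobs1=80,   # NSS and ESS
    mean2=42.50, std2=5.40, nobs2=80,   # NAS and EAS
    equal_var=True,                     # Levene's test in Table 3 was non-significant
)
print(f"t = {t_res.statistic:.2f}, p = {t_res.pvalue:.2f}")
```

Either way, the p-value stays well above .05, consistent with the paper's conclusion that the two cohorts were comparable in proficiency.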


Table 4

Descriptive Statistics: Pre-Writing and Post-Writing Task Scores of EFL Undergraduates at State and Islamic Azad Universities

 

Group (N)                          Pre-writing Task Rating    Post-writing Task Rating
State University      NSS (40)    Mean   5.52                6.38
                                  SD     .47                 .34
                      ESS (40)    Mean   5.22                5.95
                                  SD     .30                 .51
Islamic Azad          NAS (40)    Mean   5.85                6.41
University                        SD     1.08                1.17
                      EAS (40)    Mean   6.33                6.56
                                  SD     1.31                1.36

The data in Table 4 can be summarized as follows:

  1. The mean pre-writing task rating of EFL undergraduates who received e-diagnostic assessment from the novice ELT professor (NSS) at State Universities improved in the post-writing task (X̄pre-writing = 5.52 ± .47; X̄post-writing = 6.38 ± .34).

  2. Similarly, the mean pre-writing task rating of EFL undergraduates who received e-diagnostic assessment from the experienced ELT professor (ESS) at State Universities improved in the post-writing task (X̄pre-writing = 5.22 ± .30; X̄post-writing = 5.95 ± .51).

  3. Comparing the mean post-writing task ratings of the NSS and ESS groups, the EFL undergraduates who received e-diagnostic assessment from the novice ELT professor (NSS) outperformed those who received it from the experienced ELT professor (ESS) at State Universities.

  4. The mean pre-writing task rating of EFL undergraduates who received e-diagnostic assessment from the novice ELT professor (NAS) at Islamic Azad universities improved in the post-writing task (X̄pre-writing = 5.85 ± 1.08; X̄post-writing = 6.41 ± 1.17).

  5. Likewise, the mean pre-writing task rating of EFL undergraduates who received e-diagnostic assessment from the experienced ELT professor (EAS) at Islamic Azad universities improved in the post-writing task (X̄pre-writing = 6.33 ± 1.31; X̄post-writing = 6.56 ± 1.36).

  6. Comparing the mean post-writing task ratings of the NAS and EAS groups, the EFL undergraduates who received e-diagnostic assessment from the experienced ELT professor (EAS) outperformed those who received it from the novice ELT professor (NAS) at Islamic Azad universities.

  7. Finally, comparing the mean post-writing task ratings of the EFL undergraduates at State and Islamic Azad universities, the EFL undergraduates at Islamic Azad Universities outperformed those at State universities. Yet, the teaching experience of the ELT professors at both types of university appeared to be a determining factor. The significance of these differences was investigated by running a repeated-measures two-way ANOVA.
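The repeated-measures two-way ANOVA was produced in SPSS. As a rough sketch of the same logic, the within-subject pre/post factor can be collapsed into gain scores and the affiliation × teaching-experience design examined with a balanced two-way between-subjects ANOVA. The cell means below come from Table 4, but the individual gains (and their SD of .5) are simulated assumptions, so the printed F-values are illustrative only.

```python
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(7)
n = 40  # students per cell, as in Table 4

# Mean pre-to-post gains implied by Table 4; individual gains are simulated
# around them (SD = .5 is an assumption) purely for illustration.
cell_means = {
    ("State", "Novice"): 6.38 - 5.52,
    ("State", "Experienced"): 5.95 - 5.22,
    ("IAU", "Novice"): 6.41 - 5.85,
    ("IAU", "Experienced"): 6.56 - 6.33,
}
data = {cell: rng.normal(mu, 0.5, n) for cell, mu in cell_means.items()}

affs, exps = ["State", "IAU"], ["Novice", "Experienced"]
grand = np.concatenate(list(data.values())).mean()
aff_mean = {a: np.concatenate([data[(a, e)] for e in exps]).mean() for a in affs}
exp_mean = {e: np.concatenate([data[(a, e)] for a in affs]).mean() for e in exps}

# Sums of squares for a balanced 2 x 2 between-subjects design on the gains.
ss_aff = 2 * n * sum((aff_mean[a] - grand) ** 2 for a in affs)
ss_exp = 2 * n * sum((exp_mean[e] - grand) ** 2 for e in exps)
ss_int = n * sum((data[(a, e)].mean() - aff_mean[a] - exp_mean[e] + grand) ** 2
                 for a in affs for e in exps)
ss_err = sum(((data[c] - data[c].mean()) ** 2).sum() for c in data)

df_err = 4 * (n - 1)  # = 156, matching the error df cited in the text
ms_err = ss_err / df_err

for name, ss in [("Affiliation", ss_aff), ("Experience", ss_exp), ("Interaction", ss_int)]:
    F = ss / ms_err  # each effect has df = 1 in a 2 x 2 design
    p = f_dist.sf(F, 1, df_err)
    print(f"{name}: F(1, {df_err}) = {F:.2f}, p = {p:.3f}")
```

The gain-score approach tests the same pre/post-by-group interactions as the mixed ANOVA for a two-level within factor, which is why it serves as a reasonable sanity check here.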


Table 5

Within-Subjects Effects: EFL Undergraduates’ Writing Task Ratings * ELT Professors’ Teaching * ELT Professors’ Affiliation

Source                                                          Type III SS   df   Mean Square   F      Sig.   Partial Eta Squared
Pre/Post Writing Ratings                                        5.43          1    5.43          6.92   .01*   .08
Pre/Post Writing Ratings * Affiliation                          1.13          1    1.13          1.44   .03*   .02
Pre/Post Writing Ratings * Teaching Experience                  .18           1    .18           1.24   .02*   .05
Pre/Post Writing Ratings * Affiliation * Teaching Experience    .00           1    .00           .95    .03*   .03

Note. All effects computed with sphericity assumed.

To summarize Table 5, the results of the two-way repeated-measures ANOVA indicated that, after receiving e-diagnostic assessment, EFL undergraduates showed significant improvement in their writing performance from the pre-writing to the post-writing task at both State and Islamic Azad universities (F (1, 156) = 6.92, p = .01 < α, partial η² = .08, a weak effect size).

Table 5 also shows the significant impact of ELT professors' affiliation on their students' collaborative writing improvement (F (1, 156) = 1.44, p = .03 < α, partial η² = .02, a weak effect size). Likewise, ELT professors' teaching experience played a significant role in EFL undergraduates' writing improvement (F (1, 156) = 1.24, p = .02 < α). Moreover, the observed interaction effect of the ELT professors' teaching experience and affiliation indicates their confounding impact on the students' writing task improvement (F (1, 156) = .95, p = .03 < α, partial η² = .03, a weak effect size). In other words, the effect of e-diagnostic assessment on the writing ratings over eight sessions should be qualified in terms of ELT professors' teaching experience at State and Islamic Azad Universities. These findings are illustrated in Figures 2 and 3.

Figure 2

Teaching Experience Marginal Means: NSS and ESS

 

As Figure 2 shows, the State University EFL undergraduates whose e-diagnostic assessment was performed by a novice ELT professor showed greater writing improvement than their counterparts who received e-diagnostic assessment from an experienced ELT professor.

Figure 3

Teaching Experience Marginal Means: NAS and EAS


As Figure 3 shows, the IAU EFL undergraduates whose e-diagnostic assessment was performed by an experienced ELT professor showed greater writing improvement than those who received it from the novice ELT professor. In other words, the EFL undergraduates' writing improvement was driven by the session-wise e-diagnostic assessment but moderated by the interaction between the ELT professors' teaching experience and their affiliation.

To test research null hypothesis 2 and explore the possible association among (1) ELT professors' affiliation (State vs. IAU), (2) ELT professors' teaching experience (novice vs. experienced), and (3) ELT professors' success in e-diagnostic assessment, operationalized as improved EFL undergraduates' collaborative writing task ratings on Google Meet, a correspondence analysis (CA) was conducted. CA is a statistical technique for identifying and visualizing hidden patterns and associations between categorical variables in multivariate data. Because the session-wise writing task ratings were continuous rather than categorical, they were codified into three performance categories: below X̄ − 1SD (Low), within X̄ ± 1SD (Mid), and above X̄ + 1SD (High). Accordingly, the mean and standard deviation of the EFL undergraduates' post-writing task ratings across the NSS, ESS, NAS, and EAS groups were calculated as X̄ = 6.19, SD = .94.
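Under the stated cut-points (X̄ = 6.19, SD = .94), the codification and a plain chi-square test of independence on the resulting contingency table can be sketched as follows. The ratings below are simulated stand-ins, and note that scipy reports df = (rows − 1)(columns − 1) = 6 for a 4 × 3 table, which differs from the df convention in the SPSS correspondence-analysis summary (Table 7).

```python
import numpy as np
from scipy.stats import chi2_contingency

MEAN, SD = 6.19, 0.94  # pooled post-writing mean and SD reported in the text

def codify(score: float) -> str:
    """Map a rating to Low / Mid / High relative to the mean +/- 1 SD band."""
    if score < MEAN - SD:
        return "Low"
    if score > MEAN + SD:
        return "High"
    return "Mid"

# Simulated stand-in ratings, codified into the three performance levels.
rng = np.random.default_rng(3)
levels = [codify(s) for s in rng.normal(MEAN, SD, 80)]

# The Low/Mid/High tallies of Table 6, tested for association with group
# membership (NSS, ESS, NAS, EAS rows; Low, Mid, High columns).
table_6 = np.array([
    [0, 20, 0],   # NSS
    [4, 14, 2],   # ESS
    [8, 8, 4],    # NAS
    [7, 4, 9],    # EAS
])
chi2, p, df, expected = chi2_contingency(table_6)
print(f"chi-square({df}) = {chi2:.2f}, p = {p:.4f}")
```

The significant chi-square points in the same direction as the CA: group membership (and hence the assessing professor) is not independent of the codified performance level.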


Table 6

Correspondence Table: Post-Writing Task Performances of EFL Undergraduates at State and Islamic Azad Universities

Groups               Codified Virtual Post-Writing Task Performances (Pair-wise)
                     Low     Mid     High    Active Margin
NSS                  0       20      0       20
ESS                  4       14      2       20
NAS                  8       8       4       20
EAS                  7       4       9       20
Total (Pair-wise)    19      46      15      80

Table 6 presents the tallies of the pairs whose post-writing task ratings were codified as Low, Mid, or High in the NSS, ESS, NAS, and EAS groups. Accordingly, most pairs performed at the Mid level on the post-writing tasks.

Table 7

The Summary Table of Correspondence Analysis

Dimension                     Singular Value   Inertia   Chi Square        Sig.   Proportion of Inertia
                                                                                  Accounted for   Cumulative
1 Teaching Experience         .54              .29                                .70             .70
2 Post-writing Performance    .35              .12                                .29             1.00
Total                                          .41       41.38 (df = 76)   .00*   1.00            1.00

As indicated in Table 7, the teaching experience of the ELT professors at State and Islamic Azad Universities (inertia = .29) accounted for 70% of the total inertia (.41), while the EFL undergraduates' post-writing task performances (inertia = .12) accounted for the remaining 29%. Accordingly, a significant association was found between the teaching experience of the ELT professors and the post-writing task performance of the EFL undergraduates in the NSS, ESS, NAS, and EAS groups (χ²(76) = 41.38, p = .00).

Figure 4

Joint Distribution of Teaching Experience and Post-writing Task Performance

 

The illustrated data in Figure 4 can be summarized as follows:

  1. A strong association existed between Islamic Azad University ELT professors' high teaching experience (Dimension 1: novice vs. experienced) and EFL undergraduates' high post-writing task performance (Dimension 2: High, Mid, Low performance).
  2. A strong association was observed between Islamic Azad University ELT professors' low teaching experience (Dimension 1) and EFL undergraduates' low post-writing task performance (Dimension 2).
  3. A weak association was found between State University ELT professors' high and low teaching experience (Dimension 1) and EFL undergraduates' mid post-writing task performance (Dimension 2).

As Figure 4 illustrates, the two dimensions of ELT professors' teaching experience and EFL undergraduates' writing performance were closely associated at Islamic Azad universities but only weakly associated at State Universities.

Discussion

The core objective of this study was to cross-examine Iranian State and Islamic Azad University ELT professors' practice of e-diagnostic assessment with respect to their teaching experience and university affiliation. These constructs were operationalized by examining the impact of the ELT professors' e-diagnostic assessment on 160 EFL undergraduates' collaborative writing performance in the virtual classroom setting of Google Meet.

It was found that e-diagnostic assessment had differential impacts on collaborative writing ratings, moderated by the ELT professors' affiliation (State vs. IAU) and teaching experience (novice vs. experienced) in the virtual classroom setting. In other words, e-diagnostic assessment was considerably effective in improving EFL learners' collaborative writing performance in virtual university classrooms, and this improvement was moderated by the ELT professors' affiliation and teaching experience.

The findings are partially supported by Rafi and Pourdana (2023), who examined the impact of e-diagnostic assessment on EFL learners' collaborative and individual speaking performances and reported satisfactory results. Similarly, Rafi et al. (2022) and Pourdana (2022) examined the effectiveness of computer-mediated diagnostic assessment, run on the Google Meet platform, on EFL learners' word pronunciation and their modes of engagement in language learning. According to the results of both statistical and content analyses, the learners' speaking performance improved significantly, and they were content with receiving online teacher feedback (Pourdana, 2022). Finally, Csapó and Molnár (2019) investigated how technology-based diagnostic assessment could extend the possibilities of educational research on teaching and learning and reported a highly positive impact.

The findings of the study contrast with those of Ölmezer-Öztürk and Aydin (2018), who claimed that teachers' demographic characteristics, including gender and affiliation, did not play a critical role in their language assessment knowledge. In their large-scale survey, they reported that L2 teachers' gender, level of experience, place of graduation, academic degree (BA, MA, or PhD), type of school (private or state), educational level of the workplace (primary, middle, or high school), taking a language assessment course in pre-service education, and attending professional development programs had no significant relationship with their knowledge of language assessment.

In addressing the second research question, the researchers argued for an association between ELT professors' university affiliation and teaching experience, on the one hand, and EFL undergraduates' collaborative writing task performance, serving as a proxy for the ELT professors' e-diagnostic assessment practice, on the other. The view of university professors' teaching experience as a considerably influential factor in EFL learners' achievement is strongly supported by Azizpour et al. (2023) and Esfandiari and Nour (2018). More specifically, Azizpour et al. (2023) reported teaching experience as a necessity for ELT university professors to develop language teacher immunity (LTI) in virtual language classrooms. In their argument, teaching experience “allows teachers to maintain professional equilibrium and instructional effectiveness” and guarantees their successful teaching performance (p. 2).

Conclusions and Implications

The concept of language assessment literacy has become a critical issue in the literature in recent years. Many researchers and educators believe that language assessment literacy can empower education stakeholders, especially language school teachers, researchers, and university professors (Bigverdi & Khalili Sabet, 2024), “to make sound decisions about the development, administration, and dissemination of assessment tests/tasks (i.e., tesk)” (Rafi & Pourdana, 2023, p. 4).  Although many stakeholders in different scientific fields need to improve their assessment literacy, language teachers seem to be the focus group “as they are the acting agents of many assessment procedures in the real context of education” (p. 11). Therefore, based on the findings in this study, several concluding remarks can be offered to them.

Firstly, irrespective of academic affiliation, the role of ELT professors' teaching experience in the context of higher education in Iran is paramount. Teaching experience can likely determine their successful performance in applying alternative methods of assessment, such as diagnostic assessment, and it is an invaluable asset that novice L2 teachers lack. Although teaching experience may have a dual impact on emerging burnout, occupational stress, or maladaptive teacher immunity (Azizpour et al., 2023), it can be closely associated with mature and successful assessment performances and effective teaching strategies.

Secondly, the findings of the study support the constructive role of diagnostic assessment in improving EFL learners' writing performance. L2 teachers who “regularly perform diagnostic assessments can discover their students’ needs by monitoring their learning progress, alleviating their learning problems, and enhancing their ongoing achievement” (Pourdana, 2022, p. 5). This impact is further amplified when learners collaborate in performing writing tasks and proofreading their written assignments. Once more, the importance of applying alternative methods of assessment for learning (AfL) in L2 classrooms and the catalyst role of collaborative learning carry critical implications for L2 teachers and practitioners.

Therefore, the findings of this research have strong implications for L2 teacher educators and policymakers worldwide. The results imply the need to reform language teacher education programs at universities. As university “policymakers have to make sensitive and global decisions about all aspects of language education including language assessment, they should constantly become familiar with new assessment approaches to be able to make informed policies” (Rafi et al., 2022, p. 4). Accordingly, domain-specific in-service professional courses on technology-integrated assessment can be implemented, and the use of appropriate technological tools and software, automated feedback applications, or artificial intelligence tools can be promoted in educational environments in Iran. This study was limited by the researchers' choice of research methodologies and the sampling procedure, which may open new horizons for further research. Firstly, the ELT professors' teaching experience and university affiliation were the only two grouping/moderating variables in this study; further differentiation is possible between cohorts of teachers in terms of gender, age, academic degree, and teaching context. Also, the ELT professors and EFL undergraduates were not trained in digital literacy and knowledge of technology, which could have negatively affected the ELT professors' e-diagnostic assessment practice and the students' collaborative writing performance on the Google Meet platform. Future researchers are recommended to incorporate digital literacy training into similar studies and control its confounding impact.

Acknowledgments

We would like to sincerely thank the editorial team and the reviewers of the Iranian Journal of English for Academic Purposes for their invaluable insights, which enhanced the quality of this manuscript.

 

Declaration of Conflicting Interests

The authors declare that there are no conflicts of interest regarding the publication of this article. This includes any financial, personal, or professional relationships. The integrity and objectivity of this research and adherence to ethical standards of scientific inquiry have been maintained.

 

Funding Details

It is declared that no funding was received for the conduct of this research.

References

Abrahams, I., & Reiss, M. J. (2015). The assessment of practical skills. The School Science Review, 96(345), 40-44. https://www.researchgate.net/publication/291344198_The_assessment_of_practical_skills

Ahmadi, A., & Mirshojaee, S. B. (2016). Iranian English language teachers’ assessment literacy: The case of public school and language institute teachers. The Iranian EFL Journal, 12(2), 6-32.

Alavi, S. Y., Rezvani, R., & Yazdani, S. (2024). Examining classroom assessment literacy of English teachers in Iran's language institutes: Curricular gap analysis of Iranian universities' programs. Iranian Journal of English for Academic Purposes, 13(1), 18-35. https://journalscmu.sinaweb.net/article_193105.html

Alderson, J. C. (2005). Diagnosing foreign language proficiency: The interface between learning and assessment. A & C Black.

Alderson, J. C., Haapakangas, E. L., Huhta, A., Nieminen, L., & Ullakonoja, R. (2014). The diagnosis of reading in a second or foreign language. Routledge.

Alderson, J. C., Brunfaut, T., & Harding, L. (2015). Towards a theory of diagnosis in second and foreign language assessment: Insights from professional practice across diverse fields. Applied Linguistics, 36(2), 236-260. https://doi.org/10.1093/applin/amt046

Allen, D. (2001). Oxford placement test. Oxford: Oxford University Press.

Ayatollahi, M. A., Taghinezhad, A., & Azadikhah, M. (2017). Interactive versus collaborative writing instruction: An experimental study. Journal of Modern Research in English Language Studies, 4(3), 1-18. https://jmrels.journals.ikiu.ac.ir/article_1287.html

Azizpour, S., Pourdana, N., & Nour, P. (2023). Immunized Iranian EFL teachers during COVID-19 Pandemic: The mediating role of teacher occupational stress, enjoyment, and experience. Interchange, 54(3), 317-335. DOI:10.1007/s10780-023-09497-5

Basterrechea, M., & Gallardo-del-Puerto, F. (2023). Collaborative writing and patterns of interaction in young learners: The interplay between pair dynamics and pairing method in LRE production. Vigo International Journal of Applied Linguistics, (20), 49-76. https://doi.org/10.35869/vial.v0i20.4354

Berry, V., Sheehan, S., & Munro, S. (2017). Exploring teachers’ language assessment literacy: A social constructivist approach to understanding effective practices. In ALTE (2017). Learning and Assessment: Making the Connections–Proceedings of the ALTE 6th International Conference (pp. 201-207).

Bigverdi, A., & Khalili Sabet, M. (2024). The effects of online teacher feedback and online peer feedback on writing development and language mindset of the EFL learners. Iranian Journal of English for Academic Purposes, 13(3), 1-17. https://journalscmu.sinaweb.net/article_210968.html

Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5(1), 7-74. https://doi.org/10.1080/0969595980050102

Bonner, E., Garvey, K., Miner, M., Godin, S., & Reinders, H. (2023). Measuring real-time learner engagement in the Japanese EFL classroom. Innovation in Language Learning and Teaching, 17(2), 254-264. https://doi.org/10.1080/17501229.2021.2025379

Campbell-Evans, G. (2000). Teacher learning through story: Building professional communities. Future School Administration: Western and Asian perspectives, 40(40), 95-114. http://dx.doi.org/10.14221/ajte.2015v40n11.7.

Csapó, B., & Molnár, G. (2019). Online diagnostic assessment in support of personalized teaching and learning: The eDia system. Frontiers in Psychology, 10(3), 15-22. https://doi.org/10.3389/fpsyg.2019.01522.

Esfandiari, R., & Noor, P. (2018). Iranian EFL raters’ cognitive processes in rating IELTS speaking tasks: The effect of expertise. Journal of Modern Research in English Language Studies, 5(2), 41-76. https://doi.org/10.30479/jmrels.2019.9383.1248

Eyal, L. (2012). Digital assessment literacy: The core role of the teacher in a digital environment. Educational Technology & Society, 15(2), 37-49. https://www.jstor.org/stable/jeductechsoci.15.2.37.

Farrell, T. S. C. (2013). Reflective practice in ESL teacher development groups: From practices to principles. New York: Palgrave Macmillan.

Farrell, T. S. C. (2014). Promoting teacher reflection in second language education: A framework for TESOL professionals. New York: Routledge.

Ghalajian, H., Pourdana, N., & Tavassoli, K. (in press). Cross-examination of the Iranian State and Islamic Azad University ELT professors’ self-perception and knowledge of language assessment literacy and their real vs. virtual diagnostic assessment of EFL undergraduates’ performance on collaborative writing tasks. Unpublished PhD Dissertation. Islamic Azad University, Karaj Branch, Iran.

Harding, L., Alderson. J. C., & Brunfaut, T. (2015). Diagnostic assessment of reading and listening in a second or foreign language: Elaborating on diagnostic principles. Language Testing, 32 (3), 317 –336. http://dx.doi.org/10.1177/0265532214564505.

Hidri, S. (2021). Language Assessment Literacy: Where to go? In S. Hidri (Ed.). Perspectives on language assessment literacy: Challenges for improved student learning (pp. 3-12). Routledge.

Ismail, S. A. A., & Al Allaq, K. (2019). The nature of cooperative learning and differentiated instruction practices in English classes. SAGE Open, 1–17. DOI:10.1177/2158244019856450

Jang, E. E., & Wagner, M. (2014). Diagnostic feedback in the classroom. The companion to language assessment approaches and development, Volume II (pp. 1-19). John Wiley & Sons, Inc.

Jin, Y. (2010). The place of language testing and assessment in the professional preparation of foreign language teachers in China. Language Testing, 27(4), 555–584. https://doi.org/10.1177/0265532209351431.

Kazemi, P., Pourdana, N., Khalili, G.F., & Nour, P. (2022). Microgenetic analysis of written languaging attributes on form-focused and content-focused e-collaborative writing tasks in Google Docs. Educ Inf Technol, 27, 10681–10704. https://doi.org/10.1007/s10639-022-11039-y

Keshanchi, E., Pourdana, N., & Khalili, G. F. (2022). Correction to: Differential impacts of pair and self-dynamics on written languaging attributes and translation task performance in EFL context. English Teaching & Learning, 61(9), 927-935. DOI:10.1007/s42321-022-00117-6

Kremmel, B., & Harding, L. (2020). Towards a comprehensive, empirical model of language assessment literacy across stakeholder groups: Developing the language assessment literacy survey. Language Assessment Quarterly, 17(1), 100-120. http://dx.doi.org/10.1080/15434303.2019.1674855.

Lam, R. (2015). Language assessment training in Hong Kong: Implications for language assessment literacy. Language Testing, 32(2), 169–197. https://doi.org/10.1177/0265532214554321.

Lei, M., & Medwell, J. (2021). Impact of the COVID-19 pandemic on student teachers: How the shift to online collaborative learning affects student teachers’ learning and future teaching in a Chinese context. Asia Pacific Education Review, 22(2), 169-179. http://dx.doi.org/10.1007/s12564-021-09686-w

López Mendoza, A., & Bernal Arandia, R. (2009). Language testing in Colombia: A call for more teacher education and teacher training in language assessment. PROFILE Issues in Teachers Professional Development, 11(2), 55-70. https://www.redalyc.org/pdf/1692/169216301005.pdf

McNamara, T., Hill, K., & May, L. (2002). Discourse and assessment. Annual Review of Applied Linguistics, 22, 221-242. http://dx.doi.org/10.1017/S0267190502000120

Nicol, D. J., and Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: a model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-218. http://dx.doi.org/10.1080/03075070600572090.

Nikmard, F., & Tavassoli, K. (2023). The impact of test length on raters’ mental processes during scoring test-takers’ writing performance. Journal of Language Horizons, 7(1), 159-182. https://doi.org/10.22051/lghor.2022.37340.1545

Nikmard, F., Tavassoli, K., & Pourdana, N. (2023). Designing and validating a scale for evaluating the sources of unreliability of a high-stakes test. Language Testing in Asia, 13(1), 2. https://doi.org/10.1186/s40468-023-00215-7

Nour, P., Esfandiari, R., & Zarei, A. A. (2021). Development and validation of a metamemory maturity questionnaire in the context of English as a foreign language. Language Testing in Asia, 11(1), 24. https://doi.org/10.1186/s40468-021-00141-6

Ölmezer-Öztürk, E., & Aydin, B. (2018). Toward measuring language teachers’ assessment knowledge: Development and validation of language assessment knowledge scale (LAKS). Language Testing in Asia, 8(20), 1–15. https://doi.org/10.1186/s40468-018-0075-2

Pourdana, N. (2022). Impacts of computer-assisted diagnostic assessment on sustainability of L2 learners’ collaborative writing improvement and their engagement modes. Asian-Pacific Journal of Second and Foreign Language Education7(1), 11-26. https://doi.org/10.1186/s40862-022-00139-4.

Pourdana, N., & Tavassoli, K. (2022). Differential impacts of e-portfolio assessment on language learners’ engagement modes and genre-based writing improvement. Language Testing in Asia, 12(1), 7. https://doi.org/10.1186/s40468-022-00156-7

Pourdana, N., & Asghari, S. (2021). Different dimensions of teacher and peer assessment of EFL learners’ writing: descriptive and narrative genres in focus. Language Testing in Asia, 11(6). https://languagetestingasia.springeropen.com/articles/10.1186/s40468-021-00122-9.

Pourdana, N., & Rad, M. S. (2017). Differentiated instructions: Implementing tiered listening tasks in mixed-ability EFL context. Journal of Modern Research in English Language Studies, 4(4), 45–63. https://doi.org/10.30479/jmrels.2017.1566.

Pourdana, N., Sahebzamani, S., & Rajeski, J. S. (2014). Metaphorical awareness: A new horizon in vocabulary retention by Asian EFL learners. International Journal of Applied Linguistics and English Literature, 3(4), 154-161. http://dx.doi.org/10.7575/aiac.ijalel.v.3n.4p.213

Rafi, F., & Pourdana, N. (2023). E‑diagnostic assessment of collaborative and individual oral tiered task performance in differentiated second language instruction framework. Language Testing in Asia, 13:6, 1-18. https://doi.org/10.1186/s40468-023-00223-7.

Rafi, F., Pourdana, N., & Ghaemi, F. (2022). Computer-mediated diagnostic assessment of mixed-ability EFL learners’ performance on tiered tasks: Differentiating mediation on Google Meet™. Journal of Modern Research in English Language Studies, 9(2), 1-26. http://dx.doi.org/10.1186/s40468-023-00223-7.

Selcuk, H., & Jones, J. (2022). Turkish EFL learner perceptions of using a social network environment for collaborative writing: Creating a Trustful affinity space. International Journal of Smart Education and Urban Society (IJSEUS)13(1), 1-14. DOI: 10.4018/IJSEUS.297063

Storch, N. (2019). Collaborative writing in L2 classrooms. Bristol: Multilingual Matters.

Taylor, L. (2013). Communicating the theory, practice and principles of language testing to test stakeholders: Some reflections. Language Testing, 30(3), 403-412. http://dx.doi.org/10.1177/0265532213480338.

Toshiyuki, H., & Mei-Shiu, Ch. (2024). Technology-enhanced language learning in English language education: Performance analysis, core publications, and emerging trends. Cogent Education, 11(1). DOI:10.1080/2331186X.2024.2346044

Tsagari, D., Vogt, K., Froelich, V., Csépes, I., Fekete, A., Green, A., ... & Kordia, S. (2018). Handbook of assessment for language teachers. Teacher Association Literacy Enhancement Publication.

Vogt, K., & Tsagari, D. (2014). Assessment literacy of foreign language teachers: Findings of a European study. Language Assessment Quarterly, 11(4), 374-402. https://doi.org/10.1080/15434303.2014.960046

[1] PhD Candidate of TEFL, ghalagian@yahoo.com; Department of Teaching English Language and Translation, Karaj Branch, Islamic Azad University, Karaj, Iran.

[2] Associate Professor of TEFL, Natasha.pourdana@iau.ac.ir; Department of Teaching English Language and Translation, Karaj Branch, Islamic Azad University, Karaj, Iran.

[3] Assistant Professor of TEFL, Kobra.tavassoli@iau.ac.ir; Department of Teaching English Language and Translation, Karaj Branch, Islamic Azad University, Karaj, Iran.


Supplementary File

  • Receive Date 18 August 2024
  • Revise Date 24 October 2024
  • Accept Date 31 October 2024