The Comparative Effect of Dynamic and Negotiated Assessment on EFL learners’ Writing Complexity and Fluency

The purpose of the present study was to investigate the comparative effect of dynamic and negotiated assessment on EFL learners' writing complexity and fluency. To this end, 72 female intermediate EFL learners, selected from a larger group of 103 based on their performance on a piloted PET, at the Tak language institute in Dezful, Iran, participated in the study and received dynamic assessment, negotiated assessment, or traditional instruction over one term. Both experimental treatments were process-oriented; however, in the dynamic assessment group negotiation took place through the teacher's provision of feedback, whereas in the negotiated assessment group peer negotiation was encouraged. The participants' writing complexity and fluency were measured before and after the instruction through an essay-writing pretest and posttest scored in accordance with Larsen-Freeman's (2006) T-unit protocol. A Multivariate Analysis of Variance (MANOVA) was run on the posttest scores to test the null hypotheses of the study. The results indicated that while dynamic assessment was significantly effective in improving writing complexity (p = 0.007 < 0.05), negotiated assessment yielded significantly better results in boosting writing fluency than both the control group (p = 0.000 < 0.05) and the dynamic assessment group (p = 0.042 < 0.05). Nevertheless, dynamic assessment did not show significantly better results than negotiated assessment in improving writing complexity (p = 0.084 > 0.05). Learners, teachers, and syllabus designers engaged in language pedagogy may use these results: depending on the focus of their learning, i.e., fluency or complexity, they may choose between these two types of assessment accordingly.


INTRODUCTION
Writing is one of the important skills in the process of learning a second or foreign language; Hapsari (2011) notes that it is generally known as the most difficult of the four skills. The difficulty lies in generating and organizing ideas and in mastering the different aspects of writing such as grammar, spelling, word choice, and punctuation. In addition, writing is an inseparable part of any language learning process, as Adam (2003) argued that written production and feedback are very important in every language learning process. Feedback is used to express an idea or a reflection of an individual's performance (Mackey, Gass & McDonough, 2000). Researchers in the area of second/foreign language learning (Ellis, 2008; Ellis & Barkhuizen, 2005; Housen & Kuiken, 2009; Larsen-Freeman, 2009; Norris & Ortega, 2009) are now in agreement that L2 proficiency in general, and writing proficiency in particular, are multi-componential in nature, and that their principal dimensions can be adequately and comprehensively captured by the notions of complexity, accuracy, and fluency (CAF) (Housen & Kuiken, 2009). While assessment usually focused on the accuracy of writing, the distinction between these three components was made in the late 1980s; complexity was added in the 1990s, following Skehan (1989), who proposed an L2 model that for the first time included CAF as the three principal proficiency dimensions. Ellis (2003) defined complexity as the degree to which the language produced in performing a task is elaborate and varied; it also refers to the extent to which learners are willing to take risks and stretch their linguistic knowledge, which eventually leads to restructuring (Ellis & Barkhuizen, 2005). On the other hand, as Wolfe-Quintero et al. (1998) stated, accuracy involves the amount of divergence from a particular norm, which is treated as error.
Also, fluency refers to learners' general language proficiency, which is exemplified by impressions of ease, smoothness, and expressiveness in speech or writing (Freed, 2000; Hilton, 2008).
Moreover, most traditional methods of language testing focused on the product, or what the learner has already learned, which is called "static assessment" (Feuerstein, Rand, & Hoffman, 2012). Dynamic assessment, in contrast, looks at an individual's ability to acquire skills or knowledge during the evaluation. Dynamic Assessment (DA) refers to "an assessment of thinking, perception, learning, and problem solving by an active teaching process aimed at modifying cognitive functioning" (Tzuriel, 2001). Haywood and Tzuriel (2002) defined dynamic assessment as a subset of interactive assessment that includes deliberate and planned mediatory teaching and the assessment of the effects of that teaching on subsequent performance.
The other type of assessment, negotiated assessment (NA), involves a procedure in which the assessor and the assessee are encouraged to negotiate and agree on the feedback provided and on the use of the assessment mechanism and criteria, in the light of learning objectives, activities, and outcomes (Anderson, Boud, & Sampson, 1996). NA, which is classified as a formative (as opposed to summative) assessment, is considered a useful method for encouraging teacher learning due to its participative and interactive components (Gosling, 2000). NA focuses on the learning process rather than the product: it is a process of negotiation between the assessor and the assessee to arrive at an agreed assessment of the student (Brna, Self, Bull & Pain, 1999, as cited in Bull, 2016). Anderson et al. (1996) assumed that, in negotiated teacher assessment, negotiations could increase teachers' active involvement in choosing their own learning objectives, outcomes, activities, and evidence, which carries over into their teaching during the assessment procedure. When a teacher does not hold this active role him- or herself, the assessor must challenge the assessee to take responsibility for his or her own learning and assessment (De Eça, 2005).
Writing, as the visual channel and dynamic mode of language, is a vital skill for L2 learners to improve their language knowledge, and the teaching of this skill has become prominent in second language classes (Hyland, 2003). Even though the role of writing in language learning is no less important than that of the other three language skills, it has long been disregarded. In comparison with the other skills, writing seems too challenging and time-consuming to teach, so not enough attention has been paid to teaching and practicing it in class (Zeng, 2005). Researchers have found that foreign language learners find it painstaking to compose in the target language, producing less fluent sentences and encountering difficulties in revising their written work (Fatemi, 2008). Moreover, in the Iranian educational system, even in language institutes, students seem to receive little practice in writing in English. Their problems with writing may be attributed to the limited time they spend on this skill or to their poor motivation, as they are not regularly asked for written production. The researchers experienced these challenges while learning English themselves and have dealt with them during their years of teaching it. Therefore, the first impetus behind this research was the researchers' intention to find a practical way to help learners deal with the challenges of writing more effectively.
One of the ways to improve writing is systematic feedback that learners may receive from teachers, as it signifies the importance of learning as a process. It can be said that effective instruction needs assessment, because instruction must be sensitive to what the individual is able to achieve when performing a task independently (Sharafi & Abbasnasab Sardareh, 2016). Xiaoxiao and Yan (2010) stated that this approach provides chances for learners to perform better by receiving help and support through instruction. Another problem lies in the type of assessment used in language classes. The summative evaluation of students at the end of the semester is often not reliable, as different extraneous variables may be involved in a one-time performance (Richards & Schmidt, 2010). Formative assessments like negotiated assessment were introduced in response to this limitation, the idea being to involve both teachers and learners in the process of evaluation by helping the learner create an index of their learning (Gareis, 2006). Consequently, the negotiations between the assessor and the assessee can improve the assessees' involvement in their assessments (McMahon, 2010). Among the assessment approaches applied to developing learners' writing ability, dynamic assessment suggests a new way of assessing and evaluating that integrates instruction and assessment. Thus, according to Xiaoxiao and Yan (2010), DA is more practical for the writing process because the teacher can act as a supporter and provide immediate, situated feedback throughout the whole procedure.
Since the focus of DA and NA is on the process rather than the product, they can give learners a substantial amount of feedback and help them develop their abilities and address their weaknesses. The effectiveness of both DA and NA in language classes has been explored by scholars, although the former has attracted more attention, especially in the context of Iran. Moghadam and Rad (2015), for example, showed that negotiated assessment of metacognitive listening strategies enhances listening comprehension. Abbasi and Fatemi (2015) showed the effectiveness of dynamic assessment on Iranian pre-intermediate English as a foreign language (EFL) learners' acquisition of English tenses. Other studies (e.g., Ahmadi Safa, Donyaie, & Malek Mohammadi, 2015; Jafary, Nordin, & Mohajeri, 2012; Malmeer & Zoghi, 2014; Minaabad, 2017) showed that DA was also significantly effective in improving different aspects of language, such as syntactic knowledge, reading, and speaking. However, no study, to the best of the researchers' knowledge, has attempted to explore the effect of these two assessment types on writing components, nor has any study compared their effects in this regard. Considering Iranian EFL learners' problems with writing ability in general, and with its dimensions of complexity and fluency in particular, inspired by this gap in the literature, and hoping to provide a practical example of using process-oriented assessment in the context of Iran, where the dominant assessment in educational settings is static, this study investigates the comparative effect of DA and NA on EFL learners' writing complexity and fluency. Stimulated by the above-mentioned issues, the current study was carried out to answer the following research questions:
Q1: Does using dynamic assessment have any significant effect on EFL learners' writing complexity?
Q2: Does using dynamic assessment have any significant effect on EFL learners' writing fluency?
Q3: Does using negotiated assessment have any significant effect on EFL learners' writing complexity?
Q4: Does using negotiated assessment have any significant effect on EFL learners' writing fluency?
Q5: Is there any significant difference between the effect of using dynamic assessment and negotiated assessment on EFL learners' writing complexity?
Q6: Is there any significant difference between the effect of using dynamic assessment and negotiated assessment on EFL learners' writing fluency?

Writing Complexity and its Measurement
Writing is a basic communicative skill and at the same time a complex skill to master. As Richards and Renandya (2002) stated, there is no doubt that writing is the most difficult skill for learners to master, and this difficulty lies not only in generating and organizing ideas but also in translating them into readable text. Richards and Schmidt (2010) defined writing as "the strategies, procedures, and decision making employed by the writer in writing and it is viewed as the result of complex processes of planning, drafting, reviewing and revising" (p. 592). One dimension of writing is complexity, which is regarded as the most difficult dimension in the process of language development. According to Skehan (1996), complexity is "the stage and elaboration of the underlying interlanguage system" (p. 46), that is, the development of a more elaborate and structured interlanguage. Moreover, Pallotti (2009) describes complexity as "more advanced" or "challenging" language; he argues that complexity is not a property of language production but a sign of the development of proficiency. Complexity has also been defined as "elaborated language" (Ellis & Barkhuizen, 2005, p. 139). Ellis (2008) also distinguishes between accuracy and complexity: accuracy deals with learners' attempts to avoid formal mistakes, while complexity is the learners' tendency to produce more complex sentences and clauses. According to Skehan (1996), complexity concerns the elaboration or ambition of the language produced; it emphasizes the organization of what is said and draws attention to progressively more elaborate language and a greater range of syntactic patterns. Skehan suggested that complexity may reflect a willingness, on the learner's part, to engage in restructuring as more complex subsystems of language develop. As stated by Vercellotti (2012), these definitions have at least three problems.
First, under these definitions only learners could produce complex language, while native speakers with fully internalized, automatic language could not. Second, they seem to conflate complexity with fluency, as fluent language is also defined as automatic. Third, they seem to tie complexity to recently acquired but not fully learned structures, although fully learned structures can also be used to produce complex language. Complexity can be realized in two categories: lexical and grammatical complexity. Grammatical and lexical variation measures are often used to ascertain the level of complexity when measuring development in writing: "[Grammatical/Lexical] complexity means that a wide variety of both basic and sophisticated [structures/words] are available and can be accessed quickly, whereas a lack of complexity means that only a narrow range of basic [structures/words] are available or can be accessed" (Wolfe-Quintero et al., 1998, p. 69). T-unit analysis examines complexity from a syntactic point of view; given the nature of this research, it is regarded as the most suitable tool for analyzing the complexity of the participants' writings.

Writing Fluency and its Measurement
Fluency is supposed to be more like language proficiency (Koponen & Riggenbach, 2000). Schmidt (1992) defined fluency "as a part of language performance, exactly the delivery of speech" (p. 358). As stated by Segalowitz (2007), fluency has two aspects: access flexibility and attention control. Access flexibility is associated with the learner's ability to relate words and phrases to their meaning, while attention control refers to how the learner directs attention in the real time of communication. Raters often measure fluency holistically. Nevertheless, this kind of assessment has some limitations and weaknesses. According to Schmidt (1992), raters can be biased in their assessments, influenced by the student's accuracy as well as by temporal fluency measures. Furthermore, they may be especially susceptible to reacting to their own construct of fluency (Koponen & Riggenbach, 2000), such as a comprehensive sense of fluency, which may comprise lexical choices, grammatical complexity, and pragmatics. Fluency also refers to the learner's ability to construct language in real time without too much pausing or hesitation, and it reflects the primacy of meaning and the ability to deal with real-time communication (Skehan, 1996). Consequently, it may rest on lexicalized language, which also reflects the effectiveness of the planning process and the way plans can be turned into effective, ongoing discourse (Foster & Skehan, 1996).
Writing fluency has also been assessed holistically, meaning raters assign a single score based on the overall impression of the writing. A rating scale or a scoring rubric that provides guidelines on the scoring criteria is used in a typical holistic assessment (Weigle, 2002). This method, however, suffers from the same drawback mentioned in the section on the measurement of accuracy. Fluency in writing can be defined in different ways; for instance, Polio (2001) stated that one way to define it is through inspecting how native-like the writing sounds. Another way is to consider the amount of production in a writing sample. In this research, Larsen-Freeman's (2006) profile is used to measure fluency, which it defines as the average number of words per T-unit.

Dynamic Assessment
Haywood and Tzuriel (2002) defined dynamic assessment as a subset of interactive assessment that includes deliberate and planned mediational teaching and the assessment of the effects of that teaching on subsequent performance. The term dynamic assessment refers to an assessment of thinking, perception, learning, and problem solving by an active teaching process aimed at modifying cognitive functioning. Dynamic assessment differs from conventional static tests regarding its goals, processes, instruments, test situation, and interpretation of results (p. 40). According to Poehner (2008), the term static test refers to a test in which the examiner presents items to the child and records his/her response without any effort to mediate in order to change, guide, or improve the child's performance. In other words, the mediational strategies used within the dynamic assessment procedure are more closely related to learning processes in school and in other life contexts than are conventional static methods.
Lidz (1987, as cited in Poehner, 2008) has defined dynamic assessment as "an interaction between an examiner-as-intervener and a learner-as-active participant, which seeks to estimate the degree of modifiability of the learner and the means by which positive changes in cognitive functioning can be induced and maintained" (p. 19).

Negotiated Assessment
According to Gosling (2000), negotiated assessment is a useful method for developing teacher learning because of its participative and interactive elements (Boud, 1992; Day, 1999). Negotiated assessment is identified by the considerable involvement of participants in their assessment and the exchange of views between the assessee and the assessor. Anderson et al. (1996) believed that the negotiations enhance teachers' active involvement in choosing their learning objectives, outcomes, activities, and evidence, which boosts their learning process during the assessment procedure. When a teacher does not take this role actively, the assessor must challenge the assessee to take responsibility for his or her learning and assessment (Anderson et al., 1996; De Eça, 2005). The learning contract includes the negotiated learning objectives, learning activities, and the evidence to be arranged during the assessment procedure. The learning contract is a guideline for the assessee's learning process and can be renegotiated over time (Gosling, 2000) across assessment meetings shaped by reflective dialogues. In such exchanges, the assessor gives feedback about the progress of the assessee's practice, and both parties negotiate about it. In addition, McMahon (2010) introduced an essential element, "the collecting of evidence" by the assessee, to demonstrate the assessed skills. The negotiations between the assessor and the assessee can improve the assessees' involvement in their assessments, which fits with other literature on formative assessment emphasizing participation and control by the assessee on the one hand, and the social, interactive, and contextual nature of learning on the other (e.g., Gulikers).

Related Studies
Waddell (2004) investigated the effects of negotiated written feedback within formative assessment on fourth-grade students' motivation and goal orientations. Seventy-nine fourth-grade students from five elementary classrooms participated in two studies. The first study intended to provide support for a cause-and-effect relationship between feedback scores and feedback effectiveness; the second intended to demonstrate the relationship between feedback scores and feedback effectiveness. An analysis of covariance confirmed that the experimental group reported a significantly higher level of Learning Goal Orientation, one aspect of academic motivation. A General Linear Model Repeated Measures procedure found support for relationships between feedback scores and feedback effectiveness, and between assignment grade and feedback scores. The research was unable to demonstrate a relationship between feedback effectiveness and academic performance. Moreover, Jafary et al. (2012) investigated the effect of dynamic assessment on learners' syntactic knowledge. Sixty students were assigned to experimental and control groups. The students in the experimental group received mediation in the dynamic assessment model, which involved strategies like looking for clues, eliminating answers that do not fit, and comparison strategies. The control group received deductive grammatical rules over twelve sessions. The results showed that dynamic assessment was more effective in improving the learners' syntactic knowledge. In addition, Malmeer and Zoghi (2014) explored the effect of an interactionist model of DA on Iranian EFL adult learners' grammar performance. Eighty students were assigned to teenage and adult groups, and an interactionist model of DA was implemented in both groups.
The results displayed a significant difference between the two age groups in terms of grammar performance: the adult EFL learners benefited from DA more than the teenage EFL learners.
Ahmadi and Barabadi (2014) examined the difference between dynamic and non-dynamic tests, sought to understand test-takers' potential for learning, and aimed to find out how mediation works for high- and low-ability students. To achieve these aims, computer software was developed, and its efficiency in employing dynamic assessment was tested with 83 Iranian university students. The results indicated that the computerized dynamic test made a significant contribution both to enhancing students' grammar ability and to obtaining information about their potential for learning; the use of dynamic assessment can simultaneously develop test-takers' ability and provide a more comprehensive picture of learning potential. Besides, Moghadam and Rad (2015) examined the role of negotiated assessment of metacognitive listening strategies in enhancing listening comprehension. To this aim, 60 Iranian EFL learners at the intermediate level of language proficiency were assigned to an experimental and a control group. The teacher in the experimental group attempted to raise students' awareness of metacognitive strategies both before and after listening comprehension tasks over eight weeks, whereas the control group followed a conventional product-oriented approach to listening instruction; that is, no attempt was made to engage them in metacognitive instruction. Listening comprehension of both groups was evaluated with the listening section of the IELTS at the onset and end of the study. The results revealed that negotiated metacognitive assessment significantly increased gains in listening comprehension, and the experimental group significantly outperformed the control group, lending further credence to the positive role of the process-based approach to teaching listening comprehension.
Abbasi and Fatemi (2015) investigated the effect of dynamic assessment on Iranian pre-intermediate English as a foreign language (EFL) learners' acquisition of English tenses. Fifty-eight students were assigned to experimental and control groups. The participants in the experimental group received mediation in the dynamic assessment model, while the control group received deductive grammatical rules. The results indicated that the learners in the dynamic group not only outperformed the other group in learning English tenses but also had positive attitudes toward learning through dynamic assessment. Ahmadi Safa et al. (2015) also conducted a study to investigate the effects of dynamic assessment procedures on advanced Iranian EFL learners' speaking proficiency. To this end, 40 advanced EFL learners were divided into three groups: two DA groups and one non-DA group. The first DA group's participants were assessed and given the required assistance through interaction-based DA procedures, while the second DA group received DA-based intervention following Lantolf and Poehner's (2011) scale to assess and assist the participants' speaking proficiency in their discussions. The results indicated that both the interactionist and the interventionist models of DA had statistically significant positive effects on Iranian EFL learners' speaking ability. Additionally, the three groups, namely interactionist DA, interventionist DA, and non-DA, showed statistically significantly different effects on learners' speaking ability, with the interactionist DA group outperforming the others.
Sharafi and Abbasnasab Sardareh (2016) investigated the effect of dynamic assessment on elementary EFL students' grammar learning. To this end, forty-six male adult elementary EFL learners in two groups participated in the study. While the experimental group underwent treatment in the form of dynamic assessment, the control group experienced its routine classroom activities. At the end of the treatment sessions, both groups took a grammar posttest. The results indicated a significant effect of dynamic assessment on elementary EFL learners' learning of prepositions of time and place. Moreover, Hamavandi, Rezai, and Mazdayasna (2017) studied the effect of dynamic assessment of morphological awareness on reading comprehension and examined which method of assessing morphological knowledge could predict and account for EFL learners' reading ability. Fifty intermediate EFL learners participated in their study. The participants in the experimental group were assessed using a dynamic assessment procedure, while the participants in the control group were taught morphology following the methodology proposed by the institute. The Nelson-Denny Reading Test and the Test of Morphological Structure were administered as posttests. The findings indicated that dynamic assessment of morphology developed EFL learners' reading comprehension; additionally, the dynamic assessment task predicted EFL learners' reading comprehension over and above the static assessment task of morphology.

Participant Characteristics
The participants of this study were 72 female intermediate EFL learners who were studying English in a language institute in Dezful, Iran. They were selected from a larger group of 103 learners, and their ages ranged from 18 to 28. All participants were Iranian, and their native language was Farsi. In addition, a group of 25 EFL learners with almost the same characteristics as the participants in the main study took part in a pilot study, through which the reliability of the instruments was ensured.

Sampling Procedure
The 72 participants were selected out of a group of 103 EFL learners based on their performance on the Preliminary English Test (PET) by Cambridge ESOL (2014). An intact-group sampling method was used, as the institute did not allow random assignment of students to classes. The initial 103 learners were already placed in 12 classes based on the placement test administered by the institute. First, the researcher assigned 4 classes (N = 35) to the dynamic assessment group, 4 classes (N = 34) to the negotiated assessment group, and 4 classes (N = 34) to the control group. The administration of the language proficiency test resulted in the identification of 25 homogeneous learners in the dynamic assessment group, 22 in the negotiated assessment group, and 25 in the control group. Homogeneous learners were identified as those whose scores fell within one standard deviation above or below the mean. All 103 learners were present in the classes, but only the scores of those identified as homogeneous were used in the data analyses.
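For illustration only, the one-standard-deviation selection criterion described above can be sketched as follows. The scores below are hypothetical, since the actual PET data are not reproduced here, and the helper name is the author's own:

```python
from statistics import mean, stdev

def select_homogeneous(scores: dict[str, float]) -> list[str]:
    """Keep only learners whose score lies within one sample
    standard deviation of the group mean (the criterion used
    in the sampling procedure)."""
    m = mean(scores.values())
    sd = stdev(scores.values())
    return [name for name, s in scores.items() if m - sd <= s <= m + sd]

# Hypothetical PET scores for five learners
pet = {"A": 62, "B": 71, "C": 55, "D": 90, "E": 68}
homogeneous = select_homogeneous(pet)  # learners C and D fall outside the band
```

Applied to the full set of 103 scores per intact group, this filter yields the 25, 22, and 25 homogeneous learners reported above.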

Preliminary English Test (PET)
In order to be assured of the homogeneity of the participants in terms of English language proficiency, and to ensure that they were all at the intermediate level, a 2016 version of the PET was administered. It should be noted that the speaking part of the test was excluded for ease of administration.
Before the main study, the test was administered to the pilot group; the results showed a high index of reliability (alpha = .91) and consistency between the raters in scoring the writing section (r = .877, p < .001).

Writing pretest
After homogenizing the participants based on their PET scores, the participants were asked to write two five-paragraph essays, one descriptive and one expository, on two predetermined topics based on their coursebook. The essays were to consist of 150 to 250 words, and the participants had 50 minutes to write on each topic (the time limit was recommended in their writing books). The essays were to have three parts: Introduction, Body Paragraphs, and Conclusion. It should be noted that the learners had already been taught how to write five-paragraph essays in their previous term at the institute, and they practiced this type of writing further during the study.

Writing posttest
The same topics used in the pretreatment test of writing were given to the participants, who were asked to write two compositions on them.

Writing rubric
In the analysis of the writing ability of the participants regarding complexity and fluency, the following formula was used. These measures are adopted from Larsen-Freeman (2006) and are analyzed based on T-units.
Complexity: the total number of clauses divided by the total number of T-units; the higher the ratio, the more complex the writing can be considered.
Fluency: the total number of words divided by the total number of T-units; the higher the ratio, the more fluent the writing can be considered.
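As a minimal sketch of how these two ratios are computed once a writing sample has been segmented into T-units and clauses (the counts and function name below are illustrative, not part of Larsen-Freeman's original materials):

```python
def writing_measures(n_clauses: int, n_tunits: int, n_words: int) -> dict:
    """T-unit-based ratios adopted from Larsen-Freeman (2006):
    complexity = clauses per T-unit; fluency = words per T-unit."""
    if n_tunits == 0:
        raise ValueError("a writing sample must contain at least one T-unit")
    return {
        "complexity": n_clauses / n_tunits,
        "fluency": n_words / n_tunits,
    }

# Hypothetical essay: 180 words segmented into 12 T-units containing 19 clauses
scores = writing_measures(n_clauses=19, n_tunits=12, n_words=180)
```

In this hypothetical sample, complexity is about 1.58 clauses per T-unit and fluency is 15 words per T-unit; higher values indicate more complex and more fluent writing, respectively.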

Experimental Intervention
The study started with piloting the instruments to ensure reliability and inter-rater consistency. Then, the piloted PET was administered to the 103 learners, and 72 learners homogeneous in terms of language proficiency were identified. The initial 103 learners had already been assigned by the institute to 12 classes; therefore, the researcher used intact class assignment, through which 4 classes (n = 25 homogeneous learners) were assigned to dynamic assessment, 4 classes (n = 22) to negotiated assessment, and 4 classes (n = 25) to the control group. The participants who were not identified as homogeneous were also present in the classes, but their scores were not included in the data analysis. The first experimental group received treatment through dynamic assessment, the second received instruction through negotiated assessment, and the third served as the control group. To measure the learners' writing ability, all groups were asked to write two essays on the given topics, and their scores were used as the pretreatment test.
The whole procedure in the negotiated assessment group was adopted from Kim's (2005) work. The students were assigned to groups of three or four to provide feedback. They were asked to write an essay based on the topic of their coursebook and on the structural points, such as the use of conditionals, perfect progressive tenses, etc., taught in that session; each student then provided the other students with a copy of her writing assignment. Feedback was first given within the groups and then in the class. Students were expected to participate in correcting errors and to express their ideas about how the errors could be corrected. Then, the participants wrote their final drafts. At the last step of the post-writing process, the teacher proofread the papers again for spelling, vocabulary, and grammar and gave more feedback if necessary. Through the teacher's feedback, the students gained information about their strengths and weaknesses in these aspects of their writing. Then, at home, the students revised and redrafted their writing based on their peers' comments and returned it to the teacher in the following session. It is worth mentioning that the students kept all their drafts until the last session, when they handed them to the teacher.
In the second experimental group, the teacher asked the students to write about the topic presented in their coursebook. Based on the interactionist approach to DA, as Poehner (2008) indicated, any necessary help or feedback emerged from the interaction between instructor and learner, moving from implicit to explicit in a manner highly sensitive to the learner's ZPD. In fact, during the mediation phase (which occurred in both the during-writing and post-writing steps), the teacher constantly monitored all the students' work, answered their questions, and provided appropriate hints and feedback, while no peer negotiation or feedback was allowed in this group. The discussions between the students and the teacher concerned the structure, the correct use of lexis, and other grammatical points taught in that session. The students were asked to think about the exact location of and the reason for each error in their sentences, and to correct the errors in their writings after locating them. The learners received DA following Lantolf and Poehner's (2011) scale, which was implemented to offer mediation based on each student's answer. If the student's answer was correct, no mediation was provided. If it was incorrect, the instructor selected one of the 8 forms offered by the scale, which are as follows: (1) Teacher pauses; (2) Teacher repeats the whole phrase questioningly; (3) Teacher repeats just the erroneous part of the sentence; (4) Teacher asks a question, for example, "What is wrong with this sentence?"; (5) Teacher points out the incorrect word; (6) Teacher asks either... or... questions; (7) Teacher identifies the correct answer; (8) Teacher explains why.
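The graduated mediation scale above can be sketched as an ordered list running from most implicit to most explicit; the function and variable names here are illustrative, not from the original study:

```python
# The 8 mediation forms of Lantolf and Poehner's (2011) scale, ordered
# from most implicit (index 0) to most explicit (index 7).
MEDIATION_SCALE = [
    "Teacher pauses",
    "Teacher repeats the whole phrase questioningly",
    "Teacher repeats just the erroneous part of the sentence",
    "Teacher asks, e.g., 'What is wrong with this sentence?'",
    "Teacher points out the incorrect word",
    "Teacher asks an either/or question",
    "Teacher identifies the correct answer",
    "Teacher explains why",
]

def next_prompt(level: int) -> str:
    """Return the mediation prompt for a given level (0 = most implicit).

    Levels beyond the scale clamp to the most explicit form.
    """
    return MEDIATION_SCALE[min(level, len(MEDIATION_SCALE) - 1)]
```

In practice, the teacher would move one level down the list each time the learner failed to self-correct, stopping as soon as the learner produced the correct form.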
In the control group, the same materials were taught using a conventional method. The students received instruction from the same teacher and practiced the same writing topics. The only difference was that the assessment in this group was done at the end of the treatment (static assessment) and no process-oriented assessment was used. The writing activities were dealt with only within the scope provided by the book, and as much time was spent on these activities as on the other parts of the book. The writings produced for the book activities were collected by the teacher and not returned until the end of the term, to avoid the provision of feedback; they were used collectively as part of the final scores.
Finally, at the end of the semester, the students were asked to write another two compositions, as in the pretest. The researcher then scored their papers based on the T-unit measures.

Participants Selection and Descriptive Statistics
The main study started with the intact assignment of the participants' 12 classes to three groups. Then, based on their performance on the language proficiency test (PET), only those whose scores fell within the range of the mean ± 1 SD (41.9 to 61.8) were selected as the main participants; the scores of the rest were not included in the measurements. Moreover, in the first and last sessions of the study, the participants were asked to write two essays, which were coded based on Larsen-Freeman's (2006) protocol. The descriptive statistics of the scores obtained from all these phases are reported in Table 1. Note that one case was discarded from the NA group's posttest as it showed the characteristics of an outlier (Mahalanobis distance = 16.67 > 13.82).
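The mean ± 1 SD selection step can be illustrated as follows; the PET scores below are hypothetical, not the study's data (the study's own band was 41.9 to 61.8):

```python
from statistics import mean, stdev

# Hypothetical PET scores for twelve learners, for illustration only.
pet_scores = [38, 45, 52, 60, 63, 47, 55, 41, 58, 70, 50, 44]

m = mean(pet_scores)
sd = stdev(pet_scores)  # sample standard deviation
lower, upper = m - sd, m + sd

# Keep only learners whose scores fall within mean ± 1 SD.
selected = [s for s in pet_scores if lower <= s <= upper]
```

For this toy sample the band is roughly 42.3 to 61.5, so the outlying scores (38, 41, 63, 70) are dropped and eight learners remain.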
To check the pre-treatment homogeneity of the participants, a parametric Analysis of Variance (ANOVA) and a non-parametric Kruskal-Wallis test were run on the fluency and complexity scores, respectively. The results of both the ANOVA (F (2, 69) = 0.38, p = 0.68 > 0.05) and Kruskal-Wallis (H = 1.328, p = .515 > .05) tests showed no significant difference among the three groups in terms of fluency and complexity at the outset. Therefore, the researcher could rest assured that any significant changes in the posttest could be attributed to the effects of the treatments.
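These two homogeneity checks can be reproduced with SciPy; the score lists below are hypothetical stand-ins for the groups' pretest scores:

```python
from scipy.stats import f_oneway, kruskal

# Hypothetical pretest scores for the DA, NA, and control groups.
da = [10.1, 11.0, 9.8, 10.5, 10.9]
na = [10.3, 10.8, 9.9, 10.6, 10.4]
control = [10.0, 10.7, 10.2, 10.5, 10.8]

# Parametric one-way ANOVA (used on the fluency scores in the study).
f_stat, p_anova = f_oneway(da, na, control)

# Non-parametric Kruskal-Wallis H test (used on the complexity scores).
h_stat, p_kw = kruskal(da, na, control)

# Non-significant p-values (> 0.05) indicate pre-treatment homogeneity.
```

With group means this close, both tests return p-values well above 0.05, mirroring the homogeneity found in the study.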

Answering the Research Questions
In order to answer the six research questions, a Multivariate Analysis of Variance (MANOVA) was run on the posttest scores. Before running the test, the assumptions were checked: the maximum Mahalanobis distance, after the exclusion of one outlier, was safely below the critical value of 13.82; skewness ratios indicated normality for all distributions of scores; no non-linear relationship among the groups' fluency and complexity scores was found through inspection of the scatterplots; the equality of covariance matrices was met (Box's M = 4.817, F = .768, p = .595 > 0.05); and Levene's test of equality of error variances based on the median was non-significant for both the fluency (F (2, 68) = 0.509, p = 0.604) and complexity (F (2, 68) = 0.622, p = 0.54) scores. With all the assumptions in place, the MANOVA was run (Table 2). The result of the Wilks' Lambda test (F = 8.51, p = 0.000 < 0.05) indicated that there were statistically significant differences among the three groups. As illustrated in Table 3, the three groups turned out to differ significantly in both writing complexity (F (2, 68) = 5.61, p = 0.006 < 0.05) and fluency (F (2, 68) = 12.17, p = 0.000 < 0.05) after the treatment. Moreover, the effect sizes using Partial Eta Squared were 0.142 and 0.264 for writing complexity and fluency, respectively, indicating that the type of instruction accounted for 14.2% and 26.4% of the overall variance of the corresponding dependent variables. Both of these values signify large effect sizes.
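The multivariate statistic behind Table 2, Wilks' Lambda, is the ratio det(E)/det(E + H), where E and H are the within-group (error) and between-group (hypothesis) SSCP matrices. A minimal NumPy sketch with hypothetical two-variable (complexity, fluency) data, not the study's scores:

```python
import numpy as np

# Toy posttest scores (complexity, fluency) for three groups;
# the values are illustrative only.
groups = [
    np.array([[1.2, 10.0], [1.4, 11.0], [1.3, 10.5], [1.5, 12.0]]),
    np.array([[1.6, 13.0], [1.7, 14.0], [1.5, 13.5], [1.8, 15.0]]),
    np.array([[1.1, 9.0],  [1.2, 9.5],  [1.0, 8.5],  [1.3, 10.0]]),
]

all_scores = np.vstack(groups)
grand_mean = all_scores.mean(axis=0)

# Within-group (error) SSCP matrix E.
E = sum((g - g.mean(axis=0)).T @ (g - g.mean(axis=0)) for g in groups)

# Between-group (hypothesis) SSCP matrix H.
H = sum(len(g) * np.outer(g.mean(axis=0) - grand_mean,
                          g.mean(axis=0) - grand_mean) for g in groups)

# Wilks' Lambda lies in (0, 1]; smaller values indicate a larger
# multivariate group effect.
wilks_lambda = np.linalg.det(E) / np.linalg.det(E + H)
```

In practice the statistic is converted to an approximate F value, as reported above; statistical packages such as SPSS perform this conversion automatically.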
Finally, in order to locate the differences, a Scheffé post hoc test was run (Table 4, below). The results showed that:
A. There was a significant difference (p = 0.007 < 0.05) between the complexity posttest scores of the dynamic assessment and control groups, the former outperforming the latter (MD = 0.299, SE = 0.092, 95% CI [0.069, 0.529]); therefore, the first null hypothesis, which stated "using dynamic assessment does not have any significant effect on EFL learners' writing complexity", was rejected.
B. There was no significant difference (p = 0.055 > 0.05) between the fluency posttest scores of the dynamic assessment and control groups (MD = 0.928, SE = 0.377, 95% CI [-0.016, 1.873]); therefore, the second null hypothesis, which stated "using dynamic assessment does not have any significant effect on EFL learners' writing fluency", was retained.
C. There was no significant difference (p = 0.084 > 0.05) between the complexity posttest scores of the negotiated assessment and control groups (MD = -0.218, SE = 0.096, 95% CI [-0.459, 0.023]); therefore, the third null hypothesis, which stated "using negotiated assessment does not have any significant effect on EFL learners' writing complexity", was retained.
D. There was a significant difference (p = 0.000 < 0.05) between the fluency posttest scores of the negotiated assessment and control groups, the former outperforming the latter (MD = 1.948, SE = 0.394, 95% CI [0.960, 2.937]); therefore, the fourth null hypothesis, which stated "using negotiated assessment does not have any significant effect on EFL learners' writing fluency", was rejected.
E. There was no significant difference (p = 0.754 > 0.05) between the complexity posttest scores of the dynamic assessment and negotiated assessment groups (MD = 0.080, SE = 0.096, 95% CI [-0.160, 0.321]); therefore, the fifth null hypothesis, which stated "there is not any significant difference between the effect of using dynamic assessment and negotiated assessment on EFL learners' writing complexity", was retained.
F. There was a significant difference (p = 0.042 < 0.05) between the fluency posttest scores of the dynamic assessment and negotiated assessment groups, the latter outperforming the former (MD = 1.020, SE = 0.394, 95% CI [0.032, 2.008]); therefore, the sixth null hypothesis, which stated "there is not any significant difference between the effect of using dynamic assessment and negotiated assessment on EFL learners' writing fluency", was also rejected.

Discussion
Based on the above-mentioned results, it can be concluded that the mediation in DA was effective in developing the learners' writing complexity. Therefore, it can be argued that a good way of improving EFL learners' writing complexity is making use of DA in language classes. In fact, DA helps learners reach their potential competencies through the teacher's mediation and scaffolding, making them aware of their capabilities. According to Vygotsky's socio-cultural theory, DA is both a teaching and an assessment procedure in which interaction, presented in the form of mediation, plays a crucial role. The highly purposeful interaction between the learners and the teacher in interactionist DA might have led this group to gain more from the procedure. Indeed, the valuable effects of interactionist DA on learning might result from the developmentally useful role of instruction within the learners' zone of proximal development (Lantolf & Thorne, 2006).
The results also indicated that negotiated assessment is an effective method for improving EFL learners' writing fluency; learners receiving NA also significantly outperformed the DA group in this regard. Since negotiated assessment engaged the participants in an active process of learning and assessment, it is assumed to have been effective during the writing process and, consequently, in the development of their writing fluency. As stated by Thompson (2006, as cited in Verberg, Tigelaar, & Verloop, 2015), negotiation is an interpersonal communication process through which two or more people engage in discussion to reach an agreement with a positive result for both parties, so it can contribute to learners' language learning. Indeed, in negotiated assessment, learners can better monitor the writing process and are treated as more active collaborators in learning writing skills, because this type of assessment is characterized by the extensive involvement of participants in their own assessment. These findings suggest that negotiated assessment, by facilitating writing and providing regular peer and teacher feedback on it, can encourage a significant development in students' writing fluency. Another point about the effectiveness of these two process-based assessments is the context in which they were applied. According to Jahanbakhsh and Ajideh (2018), Iranian learners are characterized as both individualistic and competitive with regard to their learning culture. They often take part in classes with product-based approaches in which their performances are evaluated individually based on final exams. However, as these authors state, choosing an appropriate method can help change this culture.
Both dynamic and negotiated assessment seem to be effective methods, since they accommodate the competitive culture of these learners by providing multiple opportunities for evaluation and, especially in the case of negotiated assessment, involve them in interaction, directing their individualistic inclinations towards working together, which has proved to be a more effective way to learn.
Comparing the results of this study with previous empirical work, the findings support the effectiveness of the two assessments. Although previous studies focusing on writing were very limited, the effectiveness of these assessments in improving language skills and components is supported by the findings of the present study. Examples of previous studies on DA are Malmeer and Zoghi (2014), who revealed that DA plays an effective role in promoting participants' grammar knowledge, and Jafary et al. (2012), who showed that DA both improves the acquisition of English tenses and leads to positive attitudes towards language learning.
Concerning negotiated assessment, there is a dearth of studies focusing on the effect of this treatment on language learning. Previous empirical studies mostly focused on interdisciplinary attributes, such as motivation and goal-orientation (e.g., Waddell, 2004) and improving the use of learning strategies (e.g., Moghadam & Rad, 2015), and reported significant positive effects. The results obtained here can be regarded as a starting point for further research in this area, although those interdisciplinary attributes may also have played a role in the results obtained.
The final issue to be discussed is the difference between the two types of assessment in improving the complexity and fluency of writing. The results suggested that DA was better suited to improving writing complexity and NA to improving writing fluency. This can be discussed in light of the theory and practice of the two assessments. DA can offer authentic information about students' progress and can be used as a means of helping them overcome their L2 writing problems, because the dynamic assessment technique allowed participants to create a bridge between their teacher and themselves (Lantolf & Thorne, 2006). The students are exposed to feedback provided by the teacher, which is enriched with higher linguistic knowledge. Such feedback thus provides good examples of writing in which complex structures are often embedded, so it is not surprising that this treatment worked effectively in improving writing complexity.
On the other hand, a significant element of the assessment process in the negotiated assessment group is negotiation. In the student learning context, the student acts as both assessor and assessee; since the two parties are equal in power, this may expand the quantity of negotiation between them and make it more beneficial, and such negotiations may encourage interactivity among students. As a result, the students become more openly involved in interaction with other students and the teacher, which, in turn, provides more opportunities for exercising language production. The practices and discussions in such classes can be considered an effective factor in improving fluency.

CONCLUSION
The results lead the researcher to draw the following conclusions. First is the overall effectiveness of both types of treatment. Although each treatment failed to show a significant improvement in one aspect of writing or the other, the overall improvement of the experimental groups' mean scores was evident in both cases. As Larson-Hall (2012) states, non-significant results should not be dismissed as unimportant or as indicating the lack of any effect. Accordingly, the non-significant results of dynamic assessment on writing fluency and of negotiated assessment on writing complexity should not be taken as evidence of no effect. As supported by the literature, process-based assessment has proved to be a very effective approach to teaching English. Accordingly, the results of this study suggest that these two types of assessment can be regarded as good techniques to be used in the language classroom to make language learners more motivated and to turn writing from a challenging into an interesting task. Meanwhile, research (e.g., Khodashenas & Rakhshi, 2017) has shown that traditional ways of teaching and assessing writing, as well as often-boring writing classes, make language learners less interested in developing their writing abilities.
The difference between the two treatments was significant with regard to writing fluency, where the NA group outperformed the DA group. This can be attributed to the nature of the feedback exchanged in the NA group: the participants of this group were allowed to share their ideas and assess each other, so it is no wonder that fluency of production improved in the NA classes. Alternatively, comparing the performance of the two experimental groups with regard to complexity showed no significant difference. However, comparing each group against the control group shows that DA was more effective than NA in improving writing complexity, as only DA significantly outperformed the control group in this respect.
This can also be attributed to the more accurate and complex nature of the feedback provided in the DA classes. In other words, the learners' superior performance in terms of complexity might be explained by the scaffolding support they received from their teacher, which seems to have assisted them in improving the complexity of their writing and enhanced their ability to regulate it. Moreover, given the different effects of the two treatments on either aspect of writing, they can be recommended for integrated use; by combining the two, it is hoped that an optimal result can be reached with regard to both complexity and fluency.
Teachers, as primary sources of knowledge in language classes, may decide to employ teaching techniques in their practice that lead to the use of dynamic and negotiated assessment. This way, more effective results could be expected in improving learners' writing ability. Consequently, teacher training courses may focus on familiarizing teacher trainees with how to provide students with the kind of assessment that best helps them improve their writing. In this way, teachers will gain more awareness, which will ultimately affect their teaching practices and the learning process positively. This, in turn, provides students with exposure to the right procedure while focusing on learning the language. The learners would thus have more opportunities to become familiar with the two kinds of assessment, realizing how they could improve their writing performance. Being exposed to process-based approaches, Iranian learners may also experience a gradual shift in their individualistic and competitive inclinations in learning (Jahanbakhsh & Ajideh, 2018). The direct beneficiaries of such changes would be the students themselves. Syllabus designers can incorporate writing activities that facilitate both dynamic and negotiated assessment to not only develop learners' interaction and engagement but also increase their writing proficiency. Correspondingly, syllabus designers can also include in English textbooks some sections on how dynamic and negotiated assessment operate and what their benefits are.
Finally, it should be noted that this research was conducted under certain limitations. More research is needed to reach a comprehensive understanding of the effects of the two assessments used in this study, as well as to determine the optimal ways of improving writing and its essential components.