The Role of Automated Corrective Feedback in Improving EFL Learners’ Mastery of the Writing Aspects

Purpose. Automated Corrective Feedback (ACF) is one of the techniques used in EFL writing instruction and assessment, and it has been widely employed to improve students' writing skills over the last few decades. Adopting a mixed-methods design with data triangulation, this study investigated the effect of utilizing WRITER, an ACF software tool, on critical writing aspects including use and mechanics, vocabulary, structural organization, and content.


Introduction
The rapid advancements in computer technology and associated software have generated a great deal of interest in Automated Corrective Feedback (ACF), a collection of programs that automatically evaluates students' work. Barrot (2021) defined ACF as the information supplied by an automatic writing evaluation program to its users regarding grammatical inaccuracies in their written work. ACF was implemented in the English as a foreign language (EFL) classroom setting to help reduce the instructor's burden of managing their students' writing and to provide students with more opportunities to obtain instant, regular feedback even outside the classroom environment (Woodworth & Barkaoui, 2020). ACF tools are promoted primarily because of their capacity to supply analytical feedback on various areas of writing, such as meaning, sentence structure, spelling, grammar, word choice, content, tone, and plagiarism, as well as scoring options that allow instructors to assess and evaluate students' essays with linguistic aspects in mind, and elements that allow instructors to develop and manage writing activities (Ranalli, Link & Chukharev-Hudilainen, 2017).
Students could also use this knowledge to autonomously revise their compositions, enabling them to take part in the writing, feedback, and editing process (Hockly, 2019). These ACF tools support students in applying a variety of drafting and writing assets (Chen & Cheng, 2008). Furthermore, students enjoy utilizing ACF to build a more diverse vocabulary (Shang, 2022). They can access writing samples and online dictionaries to draft, revise, and self-edit their essays, which can help them improve their writing competency and enhance instructional strategies (Hockly, 2019; Muftah, 2022a, b; Muftah, 2023). Woodworth and Barkaoui (2020) have argued that ACF tools, such as Criterion, Project Essay Grade, My Access, and Pigai, offer multiple drafting and immediate feedback on different aspects of writing. They also produce sample essays, assessment rubrics, visual organizers, dictionaries, and thesauri (Warschauer & Ware, 2006). However, there is no conclusive evidence to support their effectiveness in improving EFL students' writing quality and accuracy.
The effectiveness of ACF has been the subject of recent research, most of which has concentrated on the potential implementation of this technology in the classroom (Barrot, 2020; Ghufron & Rosyida, 2018; Li, Link & Hegelheimer, 2015). Other research examined the educational value of such technology for students (Ranalli, 2018), assessed the forms of feedback provided by ACF applications (Luo & Liu, 2017), or compared the effects of teacher feedback and ACF on writing proficiency development (Link, Mehrzad & Rahimi, 2020; Wang, 2022; Wang & Han, 2022), while other studies examined the influence of online peer editing on students' academic writing performance (Shang, 2022; Zhang et al., 2022). Only a few studies have examined the potential of ACF applications to enhance students' writing skills (Qassemzadeh & Soleimani, 2016).
These studies applied various research approaches, targeted multiple elements of writing skills, explored various writing styles, or investigated the impact of automated editing on instruction or on students' attitudes. It is still up for debate, however, whether students' use of these ACF tools to edit their own or their peers' writing can raise awareness of different writing-related aspects and, as a result, produce higher-quality writing outputs. The current study seeks to fill this research gap by examining the impact of WRITER, an ACF software tool, on writing aspects under peer- and self-editing conditions. It also investigates how students perceive ACF and their automated editing experience. Warschauer and Ware (2006) identified three strands in the research agenda on electronic feedback for the second language classroom. The first explored the usefulness of electronic feedback for replacing or enhancing human feedback. The second compared automated feedback with non-automated feedback. The third examined dissimilarities among types of electronic feedback. The following section sheds light on the first and second strands of research on ACF, focusing on studies that integrated ACF with peer and/or self-feedback.

Literature Review
The Use of ACF with Peer and Self-feedback

Some studies in the literature on automated peer and/or self-feedback reflected on students' attitudes towards its usage and effect. Liaqat et al. (2021) reported that EFL student-writers' evaluations of the ACF provided by peers reflected the same strong negative beliefs found in the literature on traditional peer CF. However, Chen and Cheng (2008) reported that students who received ACF accompanied by peer and teacher feedback showed better attitudes towards ACF than those who received ACF alone. In an attempt to resolve the dilemma of whether students prefer ACF accompanied by peer feedback, Cote (2013) explored several aspects affecting EFL student-writers' attitudes towards their peers' automated reviews. He found that attitudes depended on several academic, social, and cultural factors, and advised EFL instructors to use anonymous peer ACF accompanied by proper training to dilute the effects of such factors. In a similar vein, Lai (2010) compared students' attitudes toward ACF and peers' CF. She found that her participants preferred their trained peers' CF to the ACF provided by a program named "My Access". Luo and Liu (2017) compared three kinds of CF: individual, group, and automated, given to 61 non-English-major Chinese college students at a low-intermediate level of proficiency. The analysis of feedback provided across the three groups identified the following types: non-corrective, direct, indirect, global, and local. The experimental design was accompanied by qualitative data collected from a questionnaire and interviews. The findings revealed that ACF offered most of the non-corrective feedback, while group correction provided most of the corrective types of feedback, including the direct and indirect types. There was an imbalance among the three modes of CF in the results for local and global types of CF. Group CF, followed by ACF, dominated the scoring in aspects of writing in the following order: linguistic performance, organization, content, and format. Concerning students' attitudes towards group and automated feedback, the responses varied from appreciating to not appreciating certain aspects of both modes of CF. Li (2021) conducted a survey to find out why Chinese EFL learners commonly used AWE to edit their writing. It was found that ease of use was the most important reason behind learners' continued use of ACF for editing.
The above studies shed light on how ACF was received by students. To a large extent, students' attitudes towards using technology-assisted peer and/or self-CF were positive and sensitive to the skill and training of human editors.

Impact of ACF Supported Peer and Self-feedback on Writing
Several studies combined ACF with human feedback to gauge their impact on students' writing. Other studies compared ACF integrated with peer and/or self-CF with traditional CF methods. Daweli (2008), in an EFL context, examined peer ACF; the results reflected improved student writing. Law and Baer (2017) investigated the impact of structured peer ACF on revising students' writing and concluded that both the revision and overall writing of students improved. Ebadi and Rahimi (2017) compared the effect of ACF alone with peer ACF via Google Docs on students' academic writing. Their findings revealed greater improvements in the writing of the group that implemented ACF provided by peers. Wihastyanang et al. (2020) conducted an experimental study in an Indonesian EFL setting on 55 English-major students, comparing an experimental group and a control group. The experimental group received both automated and non-automated peer and teacher CF, while the control group received only ACF. They found that the performance of students in the control group was better than that of the experimental group. The reasons they gave to justify their results were the delay of the CF, difficulties in locating the essay in the program used, lack of clarity of the CF, and the absence of discursive interaction. Lazic (2020) investigated the effect of technology-assisted peer feedback on paragraph structure and content as well as students' uptake, and also explored students' perceptions of the treatment. She reported improvements in the targeted aspects and students' positive impressions of the ACF. In conclusion, the majority of the above studies recommend that ACF accompanied by peer editing has a positive impact on students' writing performance. Hojeij and Hurley (2017) gauged the impact of Edmodo, Notability, and Powtoon on peer- and self-editing processes as well as learners' motivation and engagement. They reported enhanced motivation and overall improvement in their students' writing. Al-Wasy and Mahdi (2016) examined mobile applications' impact on students' self-editing. They found improvements in self-editing in the two areas of grammar and punctuation; however, no improvement was detected in the areas of spelling and capitalization. A recent study by Sherafati and Largani (2022) examined the impact of ACF on 42 Iranian male upper-intermediate EFL learners' writing performance, self-regulation, and self-efficacy, and reported significant improvement in these aspects of the students' writing competence. Elola and Oskoz (2010) compared the performance of peer-writers with self-editors using wikis and chats and found no major differences in the two groups' writing complexity, accuracy, and fluency. In a similar context, a more recent study by Al-Inbari and Al-Wasi (2022) conducted a comprehensive comparison between ACF accompanied by peer or self-editing on the one hand and traditional peer and self-feedback on the other. Their findings revealed overall improvements in the writings of the experimental groups that used peer and self-ACF; however, they reported no significant differences between the groups implementing peer and self-ACF.
In conclusion, most of the above studies compared ACF accompanied by human feedback with human-only CF. However, none of the reviewed studies attempted a detailed, statistically based, comprehensive comparison among different aspects of writing proficiency across different modes of CF. The present study attempts to fill this gap left by previous research. As this is a crucial area of study that can have insightful ramifications for the use and effect of ACF applications in improving EFL students' writing skills, the present study aims at investigating the effect of using WRITER, a well-known ACF application, on enhancing critical writing aspects, including use and mechanics, vocabulary, structural organization, and content, among Saudi EFL undergraduate students. Specifically, the study seeks to answer the following research questions.
RQ 1. Do learners who have received automated corrective feedback from their peers perform significantly better than those who have self-edited using automated corrective feedback?
RQ 2. Which writing aspect is improved most by using automated corrective feedback?

Methodology
Participants

A random sample of forty-four students, aged between 20 and 24, was selected for this study: 9 males and 35 females. They were selected from Level 8 English Department students at Najran University, Saudi Arabia. At this level, students were expected to have detailed knowledge of the different aspects of writing, since all of them had been studying English for approximately nine years at the pre-secondary, secondary, and university levels. All of them had completed all the writing courses in the English program and had effectively used the skill of writing in the exams of all other department courses. Another aspect of the participants' homogeneity lies in the fact that none of them had ever been to an English-speaking country.

The ACF Software
To achieve the aims of this research paper, the researchers used an ACF tool named WRITER. The reason behind the selection of this tool was that it could check and correct different types of writing errors, from spelling and punctuation errors to errors in clarity and style. All types of grammatical errors were also covered by WRITER. The software provided users with clear feedback and a detailed explanation of each error in a way that helped them not only to correct the error but also to understand the need for the correction. Besides, the software was supported by other tools, such as tools for checking plagiarism, building writing styles, and detecting redundancies.
Another important feature of this software was that it dealt with errors using artificial intelligence techniques. For example, when dealing with word-choice errors, the software provided users with different options from which they could choose, and these options were presented in sentences for further clarification. When dealing with grammatical errors, the software presented several examples of the relevant grammatical rules. The examples given by the software, whether in vocabulary or in grammar, enhanced the users' ability to understand different ways of correcting various types of errors. In addition, the software allowed users to save their work so that they could come back to it whenever needed. Users could also recall styles saved by other users and share their own styles.
To draw users' attention to the errors, the detected errors are underlined and the suggested corrections are shown. The user can then choose the best correction by clicking on it, so that it immediately appears in the text. After that, the software goes through the whole text again for a final revision; then, the text can be submitted. To conclude, these features show that this software was appropriate for the experiment and helped both the researchers and the participants achieve the aims of the current research paper.

The Writing Test
To answer the two research questions, a writing test was designed. The test was used as a pre-test before the experiment and a post-test after the experiment, and was administered to all participants. It consisted of one writing question, in which students had to write a cause-effect essay explaining the effects of coronavirus on people in Saudi Arabia. To determine its validity, the test was sent to three experts to be reviewed for clarity. Following their feedback, the test question was revised and finalized. Regarding rating, three professional EFL instructors were asked to rate the essays based on the scale of Klimova (2011). Their scores were compared to measure inter-rater reliability. The reliability coefficient of the writing test (Cronbach's alpha) was 0.86, which was high, as shown in Table 1.
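As background on how an inter-rater reliability figure of this kind is obtained, Cronbach's alpha treats each rater as an "item" and compares the raters' individual score variances with the variance of the summed scores. The sketch below illustrates the computation in Python; the rater scores are fabricated for illustration only and are not the study's data.

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for an (essays x raters) score matrix."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                         # number of raters
    item_vars = ratings.var(axis=0, ddof=1)      # variance of each rater's scores
    total_var = ratings.sum(axis=1).var(ddof=1)  # variance of the summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical scores from three raters on five essays (illustrative only)
scores = np.array([
    [78, 80, 77],
    [65, 63, 66],
    [90, 88, 91],
    [72, 74, 70],
    [85, 84, 86],
])
print(round(cronbach_alpha(scores), 2))
```

A value near 1 indicates that the raters ranked the essays very consistently; alpha above roughly .80, as reported here (0.86), is conventionally treated as acceptable inter-rater reliability.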

Procedures
The experiment was conducted by following several procedures. First, the four groups of the experiment were formed: one control group for peer editing (n=8), one experimental group for peer editing (n=14), one control group for self-editing (n=9), and one experimental group for self-editing (n=13). Second, all participants were instructed to perform a pre-test by writing an essay for the writing test. Rubrics developed by Klimova (2011), following Bacha's (2001) model, which in turn drew on Jacobs (1981), were used to score the pre-test. From these rubrics, five elements were considered: content 30%, organization 20%, vocabulary 20%, language use 25%, and mechanics 5%. The whole experiment lasted for ten weeks.
The experiment started by explaining the features of the software (WRITER) to the members of the two experimental groups through a virtual classroom. The explanation included all the details about the program and how it could be used effectively. Then, the two groups were instructed to use the software to edit their own essays (for self-editing) or their respective partners' essays (for peer-to-peer editing). They were required to use the program's feedback to correct the detected errors and to keep copies of the edited essays after correction. At the same time, the two control groups were asked to edit their own essays or their peers' essays in the traditional way. The final drafts of the edited essays of the four groups, considered the post-test of the experiment, were scored according to the rubrics mentioned above. The scores of the pre-test and the post-test were then compared and analyzed.

Data Analysis Procedure
To examine the effect of the different levels of the independent variable (i.e., giving and receiving peer-editing and self-editing corrective feedback) on the students' writing accuracy, the mean scores and standard deviations for the writing performance post-test in all writing aspects were calculated for the four groups. A MANOVA test was also applied to the overall scores of the students who participated in this study to determine the effect of ACF on EFL learners' writing performance and explore how it supported EFL writing skills. Similarly, post-hoc Scheffe tests were conducted to locate any statistically significant differences among the groups. Descriptive statistics, including mean scores and standard deviations, as well as ANOVA tests, were used to determine the effect of ACF on the individual writing aspects (use and mechanics, vocabulary, organization, and content).
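For readers who wish to replicate this pipeline, the per-aspect group comparison can be sketched as a one-way ANOVA in Python with SciPy. The scores below are fabricated (simulated around plausible group means) and serve only to illustrate the procedure, not to reproduce the study's statistics.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Fabricated vocabulary post-test scores for the four groups
# (group sizes match the study; the scores themselves are simulated)
groups = {
    "self_editing_experimental": rng.normal(17.4, 1.5, 13),
    "peer_editing_experimental": rng.normal(16.6, 1.5, 14),
    "self_editing_control":      rng.normal(15.9, 1.5, 9),
    "peer_editing_control":      rng.normal(13.6, 1.5, 8),
}

# One-way ANOVA across the four groups for a single writing aspect
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

The MANOVA over all four aspects jointly and the post-hoc Scheffe comparisons follow the same pattern on the full score matrix; the one-way ANOVA above corresponds to the per-aspect analyses reported in Tables 5-8.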

Results
The first research question sought to explore the effect of using ACF on the students' writing performance in both conditions (peer editing and self-editing). Table 2 shows the means and standard deviations for the post-test scores. As shown in Table 2, the MANOVA test revealed that students in the peer-editing experimental group obtained higher scores than the other three groups in use and mechanics (M = 25.21); the results indicated a statistically significant difference among the four groups (p = .041). The results also showed that the self-editing experimental group obtained higher scores than the other three groups in vocabulary (M = 17.38), with p = .000, indicating a significant difference among them. The peer-editing experimental group likewise obtained higher scores than the other three groups in organization (M = 16.71), with p = .008, indicating a significant difference among the four groups. Furthermore, the self-editing control group obtained higher scores than the other three groups in content (M = 24.55), with p = .019, indicating a significant difference among all groups. The significance values in the four conditions were all smaller than .05, indicating significant differences between the scores of the four groups in the four aspects.
To determine the overall significance and effect of ACF on writing performance among EFL learners, Pillai's Trace was examined. The results from the MANOVA are reported in Table 3, which shows that the overall mean of students' scores in the experimental groups was significantly higher than that of the students in the control groups on the writing post-test (F(4, 112) = 2.452, p = .00; Pillai's Trace = .768, partial η² = .25), indicating a large effect size between groups and outcomes on the writing post-test.
To determine the mean differences between the groups on the four aspects of writing, a post-hoc Scheffe test was performed; the results are presented in Table 4. The results showed a significant difference between the self-editing experimental group and the peer-editing control group in vocabulary (p = .000) as well as in organization (p = .003). Similarly, there was a significant difference between the peer-editing experimental group and the peer-editing control group in use and mechanics (p = .05), vocabulary (p = .00), and organization (p = .012). Moreover, there was a significant difference between the self-editing control group and the peer-editing control group in vocabulary (p = .00) and content (p = .03).
To determine which writing aspect improved most with ACF, ANOVA tests were performed. The results of the post-test were analyzed and are presented in the following section. Table 5 displays the results of the ANOVA analysis of the post-test scores across all groups in terms of language use and mechanics. The ANOVA results revealed a significant difference among the participants across the four groups (F = 3.026, p = .041). The post-test mean scores of the self-editing experimental group (M = 24.76), the peer-editing experimental group (M = 25.21), the self-editing control group (M = 24.33), and the peer-editing control group (M = 21.87) suggested that the students in the peer-editing experimental group, who utilized ACF to improve language use and mechanics, outperformed those in the other groups.
In addition, the ANOVA results for the vocabulary aspect of the students' post-test scores in all groups are presented in Table 6.
The ANOVA test revealed a statistically significant difference among the participants in the four groups (F = 15.75, p = .000). The post-test mean scores of the self-editing experimental group (M = 17.38), the peer-editing experimental group (M = 16.57), the self-editing control group (M = 15.88), and the peer-editing control group (M = 13.62) revealed that the students in the self-editing experimental group, who adopted ACF to improve vocabulary, did better than the students in the other groups. The results of the ANOVA test of the post-test scores of all groups concerning the organization aspect of writing are presented in Table 7. Again, the ANOVA test revealed a significant difference among the students in all groups (F = 4.52, p = .008). The post-test mean scores of the self-editing experimental group (M = 16.30), the peer-editing experimental group (M = 16.71), the self-editing control group (M = 16.00), and the peer-editing control group (M = 13.50) indicated that the peer-editing experimental group, which utilized ACF to enhance the organization aspect of writing, outperformed the other groups.
As to the content aspect of writing, Table 8 shows the results of the ANOVA test of the post-test scores of the students in the four groups. The ANOVA results revealed a significant difference among the students in the four groups (F = 3.69, p = .019). The post-test mean scores of the self-editing experimental group (M = 23.61), the peer-editing experimental group (M = 23.42), the self-editing control group (M = 24.55), and the peer-editing control group (M = 19.00) indicated that students in the self-editing control group, who did not implement any kind of ACF to improve the content aspect, performed better than those in the other groups.

Discussion
The present study was designed to investigate the effect of using ACF software in developing the aspects of writing that play a vital role in conveying meaning in the written texts of both self-editors and peer editors. The study tested the effect of this software on four aspects of writing: use and mechanics, vocabulary, organization, and content. The results revealed a significant effect of the ACF software on the quality of writing in general and on the different aspects of writing targeted in this study. This is in line with Woodworth and Barkaoui (2020), who claimed that ACF is effective as it offers students more opportunities to receive immediate and consistent feedback even outside the classroom.
The first research question asked whether learners who received automated corrective feedback from their peers performed significantly better than those who self-edited using automated corrective feedback. Using peer editing in the classroom setting attracted students' attention to good writing skills while also assisting them in constructing new insights through peer mediation. The most important finding relevant to this question was that peer editors showed better improvement in usage and organization, whereas self-editors showed better improvement in vocabulary and content. This is consistent with previous research (see, e.g., Wu & Schunn, 2020; Pham et al., 2020), which reported that the effectiveness of peer feedback in improving L2 students' writing accuracy has yielded mixed results. This may be because students can catch their partners' grammatical errors more easily than they can notice their own. Similarly, structural organization errors can be caught more easily by someone other than the writer. This finding confirms the results obtained by Chen and Cheng (2008), who indicated that using ACF alone did not improve students' writing performance in the same manner as the combination of ACF with peer and teacher feedback. All the above explanations can also be interpreted through the principles of sociocultural theory, which underscores the importance of interaction in second language acquisition (Vygotsky & Cole, 1978). Learners get more opportunities for interaction in peer editing than in self-editing. Similarly, combining ACF with both peer editing and teacher feedback provides learners with more interactive situations.
Another important finding was that the ACF software proved effective in developing the use and mechanics, vocabulary, and structural organization writing aspects of peer editors, while showing no significant differences in the content writing aspect for these editors. A possible explanation for this finding is that such software provides precise clarifications of usage errors, practical suggestions for vocabulary development, and beneficial guidelines regarding text organization. This interpretation does not agree with that of Lazic (2020), who found a positive effect of ACF on developing the content of peer editors.
Regarding the writing aspects of self-editors, vocabulary developed better than the other writing aspects. Contrary to the authors' expectations, the ACF software did not prove to have much effect on developing all the targeted writing aspects; in the content aspect, the self-editing control group received the highest mean among all four groups. The students in the self-editing experimental group did not show any improvement when compared with the students in the self-editing control group. This finding may be attributed to the self-editors' view of their own essays and their self-confidence, which may cause them to ignore the corrections suggested by the software and depend entirely on themselves. However, this result has not previously been reported; Al-Wasy and Mahdi (2016), for example, confirmed the positive effect of technology integration on self-editing.
The second research question inquired about the writing aspect that improved most with the ACF software. It is interesting to note that, among the four aspects of writing targeted in this study, language use and organization were better developed by the participants in the experimental peer-editing group than in the other three groups, with the strongest performance in language use. It was also revealed that vocabulary was better developed by the participants in the experimental self-editing group than in the other three groups. Surprisingly, the results of this study did not show any positive effect of the ACF software on the improvement of the content of either peer or self-editors. This particular finding might be attributed to the fact that such software is designed mainly to improve accuracy. Bitchener (2012), reflecting on the benefits of CF, observed that it improved mainly accuracy; that is, improvement in content, ideas, and so on has not been documented in the research on CF. A similar finding was confirmed by Karim and Nassaji (2019) in their critical synthesis of research on written CF. They concluded that research on CF needed to go beyond the focus on accuracy. From the point of view of the theoretical framework, Skill Acquisition Theory (DeKeyser, 2020) presents a reasonable interpretation of this result. The three stages of this theory can be used to interpret the finding in this way: certain grammatical rules have been understood by learners (declarative knowledge), these rules were previously applied in different contexts (procedural knowledge), and these rules were automatically used by learners (fully automated knowledge). There are no comparable specific rules or particular organizational patterns for content.
When comparing the mean scores of the four aspects, one can observe that language use improved the most with the help of the ACF software among the four aspects of writing adopted in this study. This improvement could be attributed to the students' greater familiarity with language use and mechanics than with the other aspects of writing. It may also be explained by the fact that these students benefited more from the ACF software with regard to language use than with regard to the other three writing aspects. This result can be explained from the point of view of the Noticing Hypothesis (Schmidt, 1990): learners can notice errors in language use by comparing their version with the corrected one provided by the ACF software. This will not only develop learners' knowledge of the target language but also direct their attention to the form of that language. The present finding is in agreement with the findings of other studies in which language use and mechanics improved after using ACF. For example, Luo and Liu (2017) concluded that linguistic performance was the most improved aspect among all the aspects of writing. On the other hand, it does not support the findings of AbuSeileek and Abualsha'r (2014), which showed an equal or approximately equal improvement in all writing aspects, including content, organization, language use, and mechanics.

Conclusion
In general, the results confirm that ACF should be effectively integrated with human feedback (peer and self) in order to generate positive effects on writing performance. However, these findings should be generalized with some caution and interpreted in light of several acknowledged limitations. First, all the participants were at similar proficiency levels; the results might differ if different levels were compared. For example, high- and low-level students in the self-editing and peer-editing groups could be compared. Future research could therefore include students from a wider range of proficiency levels and backgrounds to investigate the utility of ACF in teaching writing aspects. Second, only the scores on one essay were compared in this study. The results might be more valid if different types of essays were used, varying in type, length, and complexity; more assignments with comparable features should therefore be included in future studies. Third, the results of this study are limited to the use of one specific ACF tool (i.e., WRITER). Further studies are needed using other ACF software and applications.

Pedagogical Implications
The findings of this study show that learners who received automated corrective feedback together with peer editing performed significantly better than those who self-edited using automated corrective feedback. Peer editing is one strategy that can be implemented to enhance students' language competence by reducing the incidence of errors and raising the quality of aspects such as language use, mechanics, and organization. By encouraging students to provide error feedback to their peers, instructors can help them think about language as they produce it and develop the required knowledge.
This study has pedagogical implications for the writing classroom because it encourages educators to integrate ACF into peer editing to help their students improve their language. Further studies are also needed to explore whether content can be improved after using editing software. Research should therefore look more carefully at the content students produce in their assessments, as well as their awareness of the steps needed to revise written material. It is possible that the feedback supplied does not accurately target the students' zone of proximal development, or that it addresses only a few writing-related issues while ignoring others. Furthermore, ACF may have been challenging for some students while being too simple for others. Future investigations should also consider the possibility that students need additional guidance and assistance on how to employ ACF to enhance their writing, rather than making minor modifications that align more with editing than with effective revision. Likewise, it would be interesting to investigate whether extra peer-editing time and sessions, and thus more practice with linguistic structures and other aspects of meaning, result in fewer errors in students' revised texts. In addition, studies should continue to examine the attributes of students that might reduce the impact of automated instructional feedback; major considerations include self-control abilities, writing-related knowledge, motivation, and concentration.

Appendix 1
The Participants' Pre-and Post-test Scores in the First Experimental Group (Self-editing)

Table 1
Reliability Statistics for the Writing Test

Table 2
MANOVA of Students' Post-test Scores for Corrective-feedback Aspects

Table 3
Multiple Analysis of Variance of Automated Corrective Feedback
Note: a. Design: Intercept + group. b. Exact statistic. c. The statistic is an upper bound on F that yields a lower bound on the significance level. d. Computed using alpha = .05.

Table 4
Post-hoc Scheffe Tests of the Students' Post-test Scores
Note: PE-E = peer-editing experimental group, SE-C = self-editing control group, PE-C = peer-editing control group, SE-E = self-editing experimental group.

Table 5
ANOVA of Students' Post-test Scores for Language Use and Mechanics

Table 6
ANOVA of Students' Post-test Scores for the Vocabulary Aspect

Table 7
ANOVA of Students' Post-test Scores for the Organization Aspect

Table 8
ANOVA of Students' Post-test Scores for the Content Aspect