This study aimed to examine the feedback provided by ChatGPT on cohesion in students' writing and to compare it with feedback given by a human rater. The students whose essays were analyzed did not participate directly in the research; instead, the researcher asked a lecturer to provide essays written by students of the English Language Education Department, specifically from the Creative Nonfiction class, batch of 2023, at Atma Jaya Catholic University. During data collection, the essay, together with instructions for completing the feedback table, was given to a human rater. The researcher also prompted ChatGPT to identify cohesion issues in the essay and to generate a corresponding feedback table. The findings showed that ChatGPT provided a substantial amount of corrective feedback across various cohesion categories, including reference (personal, demonstrative, and comparative), substitution (nominal, verbal, and clausal), ellipsis (nominal, verbal, and clausal), conjunction (additive, adversative, causal, and temporal), and reiteration. Most of ChatGPT's feedback took the form of direct correction, such as revising, replacing, or rephrasing sentences. In contrast, the human rater's feedback reflected several corrective strategies from Nassaji's classification, including metalinguistic cues, direct correction, and recasts. These forms of feedback reflect a more interactive, learner-centered approach, promoting greater awareness and independent revision.