AI's Role in Academic Writing: Plagiarism Detection, Bias, and the Evolution of Scholarly Authenticity
Revolution in Kazakhstan: Artificial Intelligence Takes Over University Classrooms
Michael Jones examines Kazakhstan's education system, revealing a quiet revolution driven by artificial intelligence (AI). From writing centers to dorm rooms, AI tools such as ChatGPT, Grammarly, and QuillBot have become go-to resources for students seeking help with their academic writing.
While some students use these tools for simple editing or idea generation, others rely on them to craft entire essays. With these AI resources becoming increasingly accessible, the debate over their appropriateness in educational settings is dwindling. Instead, universities in Kazakhstan are grappling with the question of how to effectively incorporate AI into their academic landscape.
AI offers a wealth of potential benefits for Kazakhstan's students, particularly multilingual learners who face demanding writing requirements in Kazakh, Russian, and English. AI can provide personalized, instant feedback, acting as a valuable resource for learners. Alongside this promise, however, improper use of AI raises worrying ethical issues related to plagiarism and bias.
Plagiarism in an AI-driven Age
The issue of plagiarism has persisted in education for decades, but AI is shifting our understanding of the practice. Traditionally, plagiarism means copying another person's work without proper attribution. With AI, the boundaries become murky. Is it plagiarism when a student uses an AI model to generate a 500-word essay on the causes of World War I and submits it unaltered? What if the text is modified slightly? What if AI is used only for structure and transitions?
This is less a disciplinary challenge than a pedagogical one. Students who rely on AI to do their intellectual work miss essential opportunities to learn how to think, synthesize, and analyze. Universities in Kazakhstan, like educational institutions worldwide, need to update their academic integrity policies to account for the subtleties of AI use. These policies should also foster transparency, clearly outlining when and how students must disclose their AI use, and should take intent into account.
Many students understand that copying and pasting AI-generated content constitutes academic misconduct. Uncertainty persists, however, because students often do not know their university's specific policies on AI use, and institutional policy on AI is still evolving.
This uncertainty is amplified by inconsistency. One instructor might welcome modest use of AI tools for idea generation or language assistance, while another might prohibit them outright. In the absence of a unified institutional policy, each student must navigate these gray areas individually. In response, universities in Kazakhstan should draw on international institutions that are developing transparent, nuanced guidelines, including citation practices for AI-generated content.
Confronting the Hidden Biases of Neutral Technology
Another significant ethical concern, often overlooked, is bias. Although AI is powered by algorithms, it is not neutral. AI models are trained on vast amounts of data, primarily in English and drawn from Western sources; companies such as OpenAI, the maker of ChatGPT, acknowledge this on their website. As a result, AI reflects the Western cultural, linguistic, and ideological assumptions embedded in that data.
For students, this creates two interconnected challenges. First, there is a real risk that AI-assisted writing reinforces Anglo-American scholarly practices at the expense of regional systems of knowledge. Text produced by such models tends to favor linear, thesis-driven argument structures, Western citation practices, and critical styles that may not match the conventions of scholarly writing in other regions or for multilingual students. If Kazakh students lean on AI tools to support their writing, they may inadvertently adopt these practices, hindering the development of a distinctly Kazakh or regional voice in academia.
Second, AI can exacerbate established inequalities. Students from rural areas, or those more comfortable working in Kazakh or Russian, may struggle when AI tools favor English or Western examples. The result is an uneven playing field in which linguistic ability and access to global discourse determine the quality of AI assistance a student receives. These disparities could deepen educational inequalities, privileging students already fluent in the dominant discourse of global academia.
Paving the Way Forward: Call to Action for Ethical Leadership
Universities in Kazakhstan have the opportunity to lead the way in addressing these AI-related ethical challenges. With a unique multilingual and multicultural environment, coupled with investment in education, the country can create AI policy tailored to its local realities. This requires reforming academic integrity policies to account for AI-generated content and investing in comprehensive training for faculty, staff, and students.
Institutions can also host regular workshops on the ethical use of AI, establish standardized institutional guidelines for disclosing and citing AI assistance, and integrate discussions of digital ethics and algorithmic bias into the curriculum.
A blanket ban on AI in classrooms would be impractical and counterproductive, hindering both students and educators. Instead, we must openly discuss how AI is reshaping the way students learn and think. This includes emphasizing the importance of originality, critical inquiry, and fostering assignments that prioritize individual voice and reflection. AI should function as a supportive tool in the learning process, not as a substitute for it. Fairness, equity, and intellectual honesty must remain central to education.
The author is Michael Jones, a writing and communications instructor at the School of Social Science and Humanities, Nazarbayev University (Astana).
Additional Insights:
- Kazakhstan is developing a national AI strategy that includes standards for labeling AI-generated content and ethical norms, which may shape academic policies.
- Universities can draw on international best practices: emphasizing transparency and disclosure, teaching originality and authorship, establishing ethical AI governance, addressing bias, and protecting data privacy.
- Embedding these practices, together with ongoing education in digital ethics, algorithmic bias, and responsible AI use for faculty, staff, and students, will help institutions address concerns about plagiarism and bias over the long term.
