AI vs. Academic Integrity: Cheating or Innovation?
In a rapidly evolving digital landscape, academic institutions are grappling with new ethical and
procedural questions around technology, notably artificial intelligence (AI). A recent legal battle
highlights the tension between traditional academic assessment standards and the role of emerging
technologies. A law student at OP Jindal Global University, Shakkarwar, has filed a lawsuit contesting a failing grade he received after detection software flagged his end-term exam answers as 88% AI-generated. The university’s Unfair Means Committee alleged that AI tools contributed significantly to his responses and classified the submission as an unfair practice. This case brings to the forefront complex questions about
the use of AI in academia, definitions of academic integrity, and the rights of students in an AI-influenced
educational landscape.
Shakkarwar, a law student with notable experience, filed the lawsuit after the Controller of Examinations at OP Jindal Global University upheld the Unfair Means Committee’s decision. The
student argued that his answers were solely his work and that the university’s allegations were unfounded,
citing the lack of clear guidelines against using AI tools. According to him, the institution failed to
communicate any restrictions or regulations regarding AI in academic settings. Moreover, he contended
that using AI could not be considered plagiarism unless it involved copyright infringement.
This case, now under consideration by the Punjab and Haryana High Court, raises questions about
academic policy, the legitimacy of AI-detection software, and the grounds on which academic misconduct
is judged.
Academic misconduct has traditionally been defined as practices like plagiarism, cheating, and
unauthorized collaboration that jeopardize academic integrity. Universities have long maintained strict standards to protect the credibility of their degrees. In Shakkarwar’s case, however, the
question isn’t about classic forms of cheating, but rather whether AI use constitutes an act of dishonesty
or gives an unfair advantage.
As AI tools become more integrated into students’ academic routines, a gray area emerges: what constitutes misuse? Is it plagiarism if a student uses AI to help formulate answers or paraphrase ideas? Because many universities lack clear policies or standards on the subject, students struggle to understand where the boundaries lie. Shakkarwar bases his legal case on this gap, arguing that his conduct cannot be penalized retroactively in the absence of clear regulations.
Plagiarism has traditionally been defined as the unauthorized use of another person’s work without proper acknowledgment. Shakkarwar’s petition frames this narrowly, as essentially a breach of copyright or intellectual property law: he argued
that AI is simply a tool, much like any other digital aid, and using it should not inherently constitute
academic dishonesty. Since AI can generate responses from existing databases without directly copying
text, he claimed there was no copyright infringement, as his answers did not mirror any pre-existing text.
However, universities may argue that plagiarism is broader than copyright infringement. Academic
integrity policies generally cover any form of intellectual dishonesty, which can include unauthorized
assistance even if it does not technically infringe on someone’s copyright. If AI content creation is
deemed “unauthorized assistance,” then universities might classify it as misconduct. This hinges on
whether policies explicitly mention AI use—something many institutions have yet to address
comprehensively.
The reliability of AI-detection software is a central factor in this case. As generative AI has improved, tools such as Turnitin’s AI-writing detector and the GPT-2 Output Detector have been developed to flag AI-generated content. These tools are not infallible, however: false positives, nuanced language use, and misinterpretation of complex responses can all yield inaccurate results.
Shakkarwar contends that the university’s reliance on an AI-detection tool, without any corroborating evidence, undermines the legitimacy of its findings. This raises an important legal question: can universities
impose academic penalties based solely on probabilistic software assessments? The accuracy of AI-
detection algorithms varies, and in a high-stakes setting like academia, the consequences of false
accusations can be significant. Without independent verification, it can be argued that relying solely on
AI detection may infringe on a student’s rights to fair assessment.
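To see why sole reliance on a detector is risky, consider a simple base-rate calculation. The figures below are illustrative assumptions, not measured properties of any real tool: even a detector that sounds accurate can produce a large share of false accusations when most submissions are honest.

```python
# Illustrative Bayes' rule calculation: given that a detector flags an answer,
# how likely is it that the answer was actually AI-generated?
# All numbers here are hypothetical assumptions for the sake of the example.

def flag_reliability(base_rate: float, sensitivity: float,
                     false_positive_rate: float) -> float:
    """Return P(AI-generated | flagged) via Bayes' rule."""
    true_flags = base_rate * sensitivity                 # AI answers correctly flagged
    false_flags = (1 - base_rate) * false_positive_rate  # honest answers wrongly flagged
    return true_flags / (true_flags + false_flags)

# Suppose 10% of submissions use AI, the detector catches 90% of them,
# and it wrongly flags 5% of honest work.
ppv = flag_reliability(base_rate=0.10, sensitivity=0.90, false_positive_rate=0.05)
print(f"Probability a flagged answer is actually AI-generated: {ppv:.0%}")  # ~67%
```

Under these assumed figures, roughly one in three flagged students would be innocent, which is precisely why software-based findings arguably need corroboration before they justify a failing grade.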
In any academic misconduct proceeding, students are generally entitled to due process. This means they
should be informed of the allegations against them, allowed to present a defense, and given a transparent
explanation of the evidence used. Shakkarwar’s petition argued that the university failed to provide him
with “concrete evidence” supporting its claims. In the legal context, due process is a fundamental right,
and denying a student the opportunity to challenge evidence infringes upon this right.
Transparency is especially important when dealing with AI-related cases, given that most students—and
faculty—lack an in-depth understanding of AI technology. The opacity surrounding AI-detection methods
can make it difficult for students to defend themselves effectively. Shakkarwar’s lawsuit, therefore,
underscores the necessity of transparent AI policies and evidence-based assessments in academic settings.
For universities, Shakkarwar’s case highlights the urgent need to revisit academic integrity policies to
address emerging technologies like AI. Establishing clear guidelines on AI usage is not only a practical
response to technological advancements but also crucial for protecting both student rights and academic
standards.
An effective AI-use policy should outline acceptable and unacceptable uses, provide guidelines for using
generative tools, and distinguish between permissible digital aids and those that are prohibited. It should
also clarify the standard of proof in AI-related misconduct cases, emphasizing the necessity of
supporting software-based findings with additional evidence.
This case carries significant implications for educational institutions globally, as AI continues to shape
academic practices. If courts rule in favor of Shakkarwar, it may set a precedent requiring universities to
adopt explicit AI policies before penalizing students. This could also prompt a re-evaluation of academic
policies to address AI’s place within the learning environment.
Conversely, if the ruling favors the university, it could affirm the discretion of academic institutions to
interpret and enforce integrity standards, even in the absence of explicit policies. This would underscore
the judiciary’s support for academic autonomy, albeit at the potential cost of individual rights.
In either scenario, universities are likely to face increased pressure to integrate AI training into their
curricula, helping students understand responsible use while outlining the boundaries of academic
integrity. Additionally, the ruling may influence the adoption and refinement of AI-detection tools,
prompting developers to improve accuracy and accountability in detecting AI-generated content.
The lawsuit filed by Shakkarwar against OP Jindal Global University brings to light critical issues about
AI’s role in academia, the boundaries of academic misconduct, and student rights to due process and
transparency. As academic institutions and courts grapple with the complex interplay between technology
and ethics, this case stands as a pivotal moment that could redefine academic integrity standards.
Ultimately, the case underscores the necessity for universities to craft policies that reflect the realities of a
digitalized educational environment, ensuring that students are held to fair, transparent, and well-defined
standards. With AI’s role in academia only set to grow, institutions will need to balance the enforcement
of integrity with the evolving tools and knowledge systems that students have at their disposal. Only then
can they foster a truly equitable learning environment, where academic rigor coexists with technological
innovation.
As the boundaries of academia continue to shift, cases like Shakkarwar’s remind us of the challenge—and
opportunity—posed by AI in education. In an age where technology is woven into every aspect of our
lives, students and institutions alike face the complex task of blending the age-old values of academic
honesty with the possibilities that new tools provide.
For students, this moment is a call to master their digital toolkit wisely, knowing not only how to wield AI effectively but also how to stay grounded in genuine learning and critical thinking. They are the pioneers in
navigating this uncharted terrain, and how they respond could shape the norms of the future. Just as
calculators once revolutionized math education without undermining its principles, so too can AI
transform learning, as long as it’s guided by integrity.
For universities, Shakkarwar’s case is a wake-up call, a chance to reimagine what academic policies might look like in the digital age. Rather than shying away from technology, institutions have an opportunity to teach students about responsible AI use, establishing policies that both embrace innovation and
reinforce values that foster trust. By crafting clear guidelines and transparent processes, universities can
create a fair academic environment where technology enhances education without compromising ethical
standards.
This case ultimately underscores that the journey forward is one of balance: to uphold tradition while
embracing progress, to instill knowledge while inspiring innovation. In a world increasingly shaped by
AI, how we navigate these questions in academia will set the stage for broader ethical practices in our
society, making the stakes higher than a single grade or course. The resolution of this case may be a
turning point, laying down a foundation for a new era where integrity and technology can coexist, shaping
a generation prepared not just for the jobs of the future, but for the values that will guide them.