The Plagiar-Ezy Project: Testing AI Capabilities in Academic Writing

Executive Summary

The Plagiar-Ezy project is a comprehensive academic research initiative designed to systematically evaluate the current capabilities of Large Language Models (LLMs) in producing university-level academic writing. This research addresses critical questions about AI’s impact on higher education assessment methods and provides empirical data to inform educational policy and practice.

Transparency Declaration: This website, including this document, has been developed using Large Language Models as part of our methodology to demonstrate the current state of AI-assisted content creation in academic contexts.

Research Objectives

Primary Objective

To quantitatively and qualitatively assess how effectively current AI systems can produce academic work that meets university assessment standards across multiple disciplines and academic levels.

Secondary Objectives

  • Document the time efficiency of AI-assisted academic writing compared to traditional student work
  • Categorize levels of human intervention required for AI-generated academic content
  • Provide empirical evidence for discussions about AI’s impact on educational assessment
  • Inform the development of AI-resistant evaluation strategies
  • Contribute to policy discussions about academic integrity in the age of artificial intelligence

Methodology Overview

The Plagiar-Ezy Test Framework

The Plagiar-Ezy test involves a systematic simulation of university assessment scenarios where AI systems are tasked with producing academic work equivalent to what would be expected from students. The methodology includes:

Test Parameters

  • Scope: Multiple university modules across various disciplines
  • Assessment Types: Essays, reports, analysis papers, and other written assignments
  • AI Role: Acting as a sophisticated academic writing assistant
  • Human Role: Varied engagement, ranging from minimal prompting to expert refinement, simulating different levels of student involvement
  • Output Goal: Work that would achieve strong passing grades in university settings

Evaluation Metrics

  1. Academic Quality: Assessed using standard university marking criteria
  2. Time Efficiency: Measured from task initiation to submission-ready work
  3. Human Intervention: Categorized levels of human input required
  4. Disciplinary Competence: Evaluation across different academic fields
  5. Assessment Vulnerability: Analysis of which assessment types are most susceptible to AI assistance

Data Collection

For each test module, we systematically record:

  • Module code and academic level
  • Assignment specifications and requirements
  • AI-generated submission
  • Official academic feedback and grades received
  • Time elapsed from start to completion
  • Category of human intervention employed
  • Video documentation of the process
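The fields above can be thought of as one record per test module. The following is a minimal sketch of such a record in Python; the class name, field names, and types are our own illustrative assumptions, not a published schema:

```python
from dataclasses import dataclass
from datetime import timedelta
from typing import Optional

@dataclass
class TestModuleRecord:
    """One Plagiar-Ezy test case: a single AI-assisted assignment."""
    module_code: str                 # e.g. "HIST2041" (hypothetical code)
    academic_level: str              # "undergraduate" or "graduate"
    assignment_spec: str             # assignment brief and requirements
    submission_path: str             # path to the AI-generated submission
    intervention_category: str = "minimal"        # minimal/guided/collaborative/expert
    grade_received: Optional[str] = None          # official grade, once returned
    feedback: Optional[str] = None                # official academic feedback
    time_elapsed: Optional[timedelta] = None      # task initiation to submission-ready
    video_url: Optional[str] = None               # link to process documentation

# Example: a record created at submission time, before feedback arrives
record = TestModuleRecord(
    module_code="HIST2041",
    academic_level="undergraduate",
    assignment_spec="2,000-word essay on assigned topic",
    submission_path="submissions/hist2041_essay.docx",
)
```

Grade, feedback, and elapsed-time fields default to `None` because they are only filled in once official assessment results are returned.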

Intervention Categories

We classify human involvement in AI-assisted academic work across several categories:

Minimal Intervention: Basic prompting and formatting, representing students who simply request AI to complete assignments

Guided Intervention: Strategic prompting with some content direction, representing students who understand how to effectively collaborate with AI

Collaborative Intervention: Substantial human input in planning, reviewing, and refining AI outputs, representing sophisticated AI-human partnerships

Expert Intervention: Extensive human expertise applied to AI outputs, representing the upper bound of AI-assisted academic work
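The four categories above form an ordered scale of human involvement. A small enumeration, sketched below as an assumption of ours rather than a formal taxonomy, makes that ordering explicit and allows records to be filtered or compared by intervention level:

```python
from enum import IntEnum

class InterventionLevel(IntEnum):
    """Ordered scale of human involvement in AI-assisted academic work."""
    MINIMAL = 1        # basic prompting and formatting
    GUIDED = 2         # strategic prompting with some content direction
    COLLABORATIVE = 3  # substantial planning, reviewing, and refining
    EXPERT = 4         # extensive human expertise applied to AI outputs

# Because IntEnum values are ordered, comparisons work directly,
# e.g. selecting cases with more than guided-level involvement:
levels = [InterventionLevel.MINIMAL, InterventionLevel.GUIDED,
          InterventionLevel.EXPERT]
high_touch = [lv for lv in levels if lv > InterventionLevel.GUIDED]
```

Using an ordered type rather than free-text labels keeps analyses consistent, e.g. when comparing grades achieved at or above a given intervention level.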

Academic Context and Significance

The Challenge of AI in Higher Education

The rapid advancement of Large Language Models presents unprecedented challenges to traditional academic assessment methods. Current AI systems demonstrate remarkable capabilities in:

  • Synthesizing complex academic arguments
  • Applying disciplinary terminology and methodologies
  • Following academic writing conventions
  • Engaging with scholarly literature
  • Producing properly formatted and cited work

Research Gap

While anecdotal evidence and isolated examples of AI-generated academic work exist, there has been insufficient systematic research examining:

  • Consistent performance across multiple disciplines
  • Variation in effectiveness across different assessment types
  • Time efficiency compared to human work
  • The role of human intervention in optimizing AI outputs
  • Quantitative data on academic standards achieved

Policy Implications

Our research directly informs several critical policy areas:

  • Assessment Design: Understanding which evaluation methods remain AI-resistant
  • Academic Integrity: Developing nuanced policies for AI use in education
  • Pedagogical Adaptation: Adapting teaching methods for an AI-enhanced world
  • Institutional Response: Guiding university policies on AI tools in academic work

Scope and Limitations

Project Scope

  • Temporal: Testing conducted during the 2024-2025 academic year
  • Geographical: Focused on UK higher education standards and practices
  • Technological: Primarily using Claude 4 family models, with comparative analysis of other leading LLMs
  • Disciplinary: Spanning humanities, social sciences, and interdisciplinary studies
  • Academic Levels: Undergraduate and graduate coursework

Acknowledged Limitations

  • Sample Size: Limited to modules accessible to the research team
  • AI Evolution: Rapid development means findings reflect specific temporal capabilities
  • Assessment Variability: Different marking standards across institutions and assessors
  • Ethical Constraints: Research conducted within boundaries of academic integrity
  • Detection Evasion: We do not test or develop AI detection evasion techniques

Ethical Framework

Research Ethics

This research operates under strict ethical guidelines:

  • No Actual Submission Fraud: All test submissions are conducted with appropriate disclosure or in controlled environments
  • Educational Purpose: Research aims to improve rather than undermine educational systems
  • Transparency: Full disclosure of AI involvement in all research outputs
  • Responsible Disclosure: Results shared to inform rather than enable academic misconduct

Academic Integrity Commitment

While this research explores AI capabilities in academic writing, we strongly support:

  • Institutional academic integrity policies
  • Transparent use of AI tools in education
  • Development of AI-adapted assessment methods
  • Continued value of human learning and development

Target Audiences

Academic Community

  • Educators: Understanding current AI capabilities to adapt teaching and assessment
  • Administrators: Informing institutional policies and procedures
  • Researchers: Contributing to scholarship on AI in education
  • Assessment Designers: Developing AI-resistant evaluation methods

Policy Makers

  • Educational Regulators: Informing sector-wide guidance on AI in higher education
  • Standards Bodies: Contributing to discussions about academic standards in the AI era
  • Government Officials: Supporting evidence-based policy development

Media and Public

  • Education Journalists: Providing empirical data for informed reporting
  • Students and Parents: Contributing to public understanding of AI’s role in education
  • Technology Critics: Informing balanced discussions about AI capabilities and limitations

Expected Outcomes and Impact

Immediate Outcomes

  • Comprehensive dataset of AI performance in academic contexts
  • Empirical evidence for policy discussions
  • Identification of vulnerable assessment types
  • Documentation of best practices for AI-resistant evaluation

Long-term Impact

  • Contribution to educational methodology reform
  • Enhanced understanding of human-AI collaboration in academic contexts
  • Improved assessment design across higher education
  • More nuanced policies regarding AI use in academic work

Knowledge Dissemination

Results will be shared through:

  • Academic publications in education and technology journals
  • Conference presentations at educational technology symposia
  • Policy briefings for educational institutions
  • Public engagement through media coverage and this website

Research Team and Collaboration

This research is conducted by faculty and researchers committed to improving educational practices in the age of artificial intelligence. We welcome collaboration with:

  • Educational researchers studying AI impact
  • Assessment designers developing new evaluation methods
  • Policy makers working on AI in education guidelines
  • Technology developers creating educational AI tools

Looking Forward

The Plagiar-Ezy project represents an essential step in understanding and adapting to the realities of AI in higher education. By providing systematic, empirical data about current AI capabilities, we aim to support the development of educational practices that preserve academic integrity while embracing beneficial uses of AI technology.

Our commitment extends beyond simply documenting problems to contributing solutions that help higher education evolve effectively in response to technological advancement.


Contact and Collaboration

For academic collaboration, press inquiries, or questions about this research:

Primary Contact: [To be added]
Institution: [To be added]
Email: [To be added]
Research Ethics Approval: [Reference to be added]

Acknowledgments

This research benefits from ongoing discussions within the academic community about AI’s role in education. We acknowledge the complex challenges facing educators, students, and institutions as they navigate this technological transition.


This document was created using Large Language Models as part of the Plagiar-Ezy project methodology, demonstrating the current capabilities that this research seeks to understand and address.

Last Updated: June 2025
Version: 1.0
Website: plagiar-ezy.org