About the Plagiar-Ezy Project
The Plagiar-Ezy project is an academic research initiative that systematically evaluates how well current Large Language Models (LLMs) can produce university-level academic writing that meets institutional assessment standards.
Research Objectives
Primary Objective
To assess, both quantitatively and qualitatively, how effectively current AI systems can produce academic work that meets university assessment standards across multiple disciplines and academic levels.
Secondary Objectives
- Document the time efficiency of AI-assisted academic writing compared to traditional student work
- Categorize levels of human intervention required for AI-generated academic content
- Provide empirical evidence for discussions about AI’s impact on educational assessment
- Inform the development of AI-resistant evaluation strategies
- Contribute to policy discussions about academic integrity in the age of artificial intelligence
Methodology
The Test Framework
The Plagiar-Ezy test systematically simulates university assessment scenarios in which AI systems are tasked with producing academic work equivalent to what would be expected from students. Our methodology includes:
Test Parameters
- Scope: Multiple university modules across various disciplines
- Assessment Types: Essays, reports, analysis papers, and other written assignments
- AI Role: Acting as a sophisticated academic writing assistant
- Human Role: Deliberately limited engagement, simulating different levels of student involvement
- Output Goal: Work that would achieve strong passing grades in university settings
Evaluation Metrics
- Academic Quality: Assessed using standard university marking criteria
- Time Efficiency: Measured from task initiation to submission-ready work
- Human Intervention: Categorized levels of human input required
- Disciplinary Competence: Evaluation across different academic fields
- Assessment Vulnerability: Analysis of which assessment types are most susceptible to AI assistance
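For readers who think in data terms, the five metrics above could be recorded per submission roughly as follows. This is an illustrative sketch only: the field names, the grade-band encoding, and the example values are our assumptions for exposition, not part of the published methodology.

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class EvaluationRecord:
    """One assessed AI-generated submission in the test framework (illustrative)."""
    module: str                    # discipline/module under test
    assessment_type: str           # essay, report, analysis paper, ...
    grade_band: str                # outcome against standard marking criteria
    time_to_submission: timedelta  # task initiation -> submission-ready work
    intervention_level: str        # minimal / guided / collaborative / expert

# Hypothetical example record
record = EvaluationRecord(
    module="Example Module",
    assessment_type="essay",
    grade_band="2:1",
    time_to_submission=timedelta(hours=2),
    intervention_level="guided",
)
```

Aggregating such records across modules and assessment types is what would support the comparisons described above (disciplinary competence, assessment vulnerability).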
Intervention Categories
We classify human involvement in AI-assisted academic work into four categories:
Minimal Intervention: Basic prompting and formatting, representing students who simply request AI to complete assignments
Guided Intervention: Strategic prompting with some content direction, representing students who understand how to effectively collaborate with AI
Collaborative Intervention: Substantial human input in planning, reviewing, and refining AI outputs, representing sophisticated AI-human partnerships
Expert Intervention: Extensive human expertise applied to AI outputs, representing the upper bound of AI-assisted academic work
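Because these four categories form an ordered scale from least to most human input, they can be sketched as an ordered enumeration. The class and member names here are our own illustrative choices, not identifiers from the project's codebase.

```python
from enum import Enum

class Intervention(Enum):
    """Ordered levels of human involvement in AI-assisted academic work."""
    MINIMAL = 1        # basic prompting and formatting only
    GUIDED = 2         # strategic prompting with some content direction
    COLLABORATIVE = 3  # substantial planning, reviewing, and refining of AI output
    EXPERT = 4         # extensive human expertise applied to AI outputs

# The integer values encode the ordering from least to most human input:
assert Intervention.MINIMAL.value < Intervention.EXPERT.value
```

Encoding the categories this way makes it straightforward to group or rank results by intervention level when analysing the evaluation data.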
Academic Context
The Challenge
The rapid advancement of Large Language Models presents unprecedented challenges to traditional academic assessment methods. Current AI systems demonstrate remarkable capabilities in synthesizing complex academic arguments, applying disciplinary terminology, and following academic writing conventions.
Research Gap
While anecdotal evidence of AI-generated academic work exists, systematic research has been lacking on three questions: how consistently AI performs across disciplines, how effectiveness varies across assessment types, and what role human intervention plays in optimizing AI outputs.
Policy Implications
Our research directly informs several critical policy areas:
- Assessment design and AI-resistant evaluation methods
- Academic integrity policies for the AI era
- Pedagogical adaptation for AI-enhanced education
- Institutional responses to AI tools in academic work
Ethical Framework
Research Ethics
This research operates under strict ethical guidelines:
- No Actual Submission Fraud: All test submissions are conducted with appropriate disclosure or in controlled environments
- Educational Purpose: Research aims to improve rather than undermine educational systems
- Transparency: Full disclosure of AI involvement in all research outputs
- Responsible Disclosure: Results shared to inform rather than enable academic misconduct
Academic Integrity Commitment
While this research explores AI capabilities in academic writing, we strongly support:
- Institutional academic integrity policies
- Transparent use of AI tools in education
- Development of AI-adapted assessment methods
- Continued value of human learning and development
Scope and Limitations
Project Scope
- Temporal: Testing conducted during the 2024-2025 academic year
- Geographical: Focused on UK higher education standards and practices
- Technological: Primarily using Claude 4 family models, with comparative analysis of other leading LLMs
- Disciplinary: Spanning humanities, social sciences, and interdisciplinary studies
- Academic Levels: Undergraduate and graduate coursework
Acknowledged Limitations
- Sample Size: Limited to modules accessible to the research team
- AI Evolution: Rapid development means findings reflect capabilities at a specific point in time
- Assessment Variability: Different marking standards across institutions and assessors
- Ethical Constraints: Research conducted within boundaries of academic integrity
Expected Impact
Immediate Outcomes
- Comprehensive dataset of AI performance in academic contexts
- Empirical evidence for policy discussions
- Identification of vulnerable assessment types
- Documentation of best practices for AI-resistant evaluation
Long-term Impact
- Contribution to educational methodology reform
- Enhanced understanding of human-AI collaboration in academic contexts
- Improved assessment design across higher education
- More nuanced policies regarding AI use in academic work
Transparency Declaration
This website, including this content, has been developed using Large Language Models as part of our research methodology. This demonstrates the current capabilities that our research seeks to understand and address in academic contexts.
For more detailed information about our methodology, please see our research documentation or contact us directly.