Plagiar-Ezy

Systematic Research into AI Capabilities in Academic Writing

Providing empirical data to inform educational policy and assessment reform in the age of artificial intelligence.

Understanding AI's Impact on Higher Education

The Plagiar-Ezy project conducts systematic testing of Large Language Models' capabilities in producing university-level academic writing. Our research provides crucial empirical data for educators, policymakers, and institutions navigating the challenges and opportunities of AI in academic contexts.

Through controlled testing across multiple disciplines and assessment types, we document AI performance, time efficiency, and the varying levels of human intervention required to produce work that meets academic standards.

4 Modules Tested
4 Intervention Categories
100% Transparency

Research Highlights

Systematic Methodology

Rigorous testing across multiple disciplines, academic levels, and assessment types, designed to build a comprehensive picture of AI capabilities.

Empirical Evidence

Quantitative data on AI performance, including grades achieved, time efficiency, and the level of human intervention required across different academic contexts.

Policy Impact

Research designed to inform educational policy, assessment design, and institutional responses to AI's role in academic work.

Transparency Declaration

This website and its content were developed using Large Language Models as part of our research methodology. This reflects the very capabilities the project seeks to understand and address in academic contexts.