Research Results

This page presents the comprehensive results from our systematic testing of AI capabilities in academic writing across multiple university modules and disciplines.

Results Overview

The table below displays results from 4 completed tests across various academic disciplines and levels. Each entry represents a complete assessment cycle where AI systems were tasked with producing work that meets university standards.

  • Average grade: 80%
  • Modules tested: 4
  • Intervention types: 4

Test Results Data

Module Code | Title                                     | Grade | Time               | Intervention  | Level                | Resources
------------|-------------------------------------------|-------|--------------------|---------------|----------------------|----------
PHIL301     | Philosophy of Mind: Consciousness and AI  | 78%   | 4 hours 23 minutes | Minimal       | Undergraduate Year 3 | 🎥
SOCI450     | Digital Society and Technology Ethics     | 82%   | 3 hours 47 minutes | Guided        | Graduate             | 🎥
ENGL275     | Contemporary Literature and AI Narratives | 75%   | 5 hours 12 minutes | Collaborative | Undergraduate Year 2 | 🎥
COMP580     | AI Ethics and Algorithmic Fairness        | 85%   | 2 hours 56 minutes | Expert        | Graduate             | 🎥

Data Interpretation

Grade Distribution

AI performance varied across academic contexts: grades ranged from 75% (ENGL275) to 85% (COMP580), with a mean of 80% across the four tests.
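As a quick check on the headline figures, the grade statistics can be reproduced directly from the Test Results Data table:

```python
# Grades from the Test Results Data table (module code -> grade %)
grades = {"PHIL301": 78, "SOCI450": 82, "ENGL275": 75, "COMP580": 85}

average = sum(grades.values()) / len(grades)
lowest, highest = min(grades.values()), max(grades.values())

print(f"Average grade: {average:.0f}%")  # Average grade: 80%
print(f"Range: {lowest}%-{highest}%")    # Range: 75%-85%
```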

Intervention Impact

The results suggest a relationship between the level of human intervention and the quality of the final submission, though four tests are too few to draw firm conclusions. The four intervention categories were:

  • Minimal Intervention: Basic prompting with limited human input
  • Guided Intervention: Strategic direction and structured prompting
  • Collaborative Intervention: Substantial human-AI partnership in content development
  • Expert Intervention: Extensive human expertise applied to AI outputs
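Pairing each intervention level with its grade from the table shows that the highest mark coincided with expert intervention and the lowest with collaborative intervention; with only four data points, this pattern should be read cautiously:

```python
# Intervention level paired with the grade from the corresponding test
results = [
    ("Minimal", 78),        # PHIL301
    ("Guided", 82),         # SOCI450
    ("Collaborative", 75),  # ENGL275
    ("Expert", 85),         # COMP580
]

best = max(results, key=lambda r: r[1])
worst = min(results, key=lambda r: r[1])
print(best)   # ('Expert', 85)
print(worst)  # ('Collaborative', 75)
```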

Time Efficiency

AI-assisted academic writing was markedly faster than typical student timelines for comparable work: every submission was completed in under six hours, with times ranging from 2 hours 56 minutes (COMP580) to 5 hours 12 minutes (ENGL275).
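The completion times from the table can be normalised to minutes to confirm that every test finished under the six-hour mark:

```python
# Completion times from the Test Results Data table, as (hours, minutes)
times = {
    "PHIL301": (4, 23),
    "SOCI450": (3, 47),
    "ENGL275": (5, 12),
    "COMP580": (2, 56),
}

minutes = {m: h * 60 + mins for m, (h, mins) in times.items()}
avg = sum(minutes.values()) / len(minutes)

print(f"Average: {avg:.1f} minutes")  # Average: 244.5 minutes (~4 hours)
print(all(t < 6 * 60 for t in minutes.values()))  # True
```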

Methodology Notes

Each test followed a standardized protocol:

  1. AI system provided with authentic assignment briefs
  2. Access to relevant course materials and reading lists
  3. Human intervention limited to predefined categories
  4. Submissions evaluated using standard academic criteria
  5. Complete process documented via video recording
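The protocol above implies a common record shape for each test. A minimal sketch of such a record, with field names that are illustrative assumptions rather than the project's actual schema, might look like:

```python
from dataclasses import dataclass

# Hypothetical record for one test under the standardized protocol.
# Field names are illustrative; the project's real schema is not published here.
@dataclass
class TestRecord:
    module_code: str        # e.g. "PHIL301"
    assignment_brief: str   # authentic brief supplied to the AI system
    materials: list         # course materials and reading-list items provided
    intervention: str       # one of: Minimal, Guided, Collaborative, Expert
    grade_percent: int      # mark awarded under standard academic criteria
    video_url: str          # link to the recorded process documentation

record = TestRecord("COMP580", "AI Ethics essay brief", ["reading list"],
                    "Expert", 85, "https://example.org/video")
print(record.intervention, record.grade_percent)  # Expert 85
```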

Video Documentation

Click the 🎥 icon in the Resources column to view process documentation for individual tests. These videos demonstrate the AI-human interaction patterns and decision-making processes involved in producing each submission.

Data Downloads

For researchers interested in analyzing our complete dataset:


This data is made available for academic research purposes. Please cite the Plagiar-Ezy project in any published research using this data.