How Large Language Models Work — and What Schools Need to Know
Large language models (LLMs) are built to predict the most likely next token in a sequence; they do not hold beliefs, intent, or understanding (T. B. Brown et al., "Language Models are Few-Shot Learners"). They work by calculating the probability of the next word or token, given the sequence of words already written. This process is a form of pattern-matching based on the statistical regularities in the training data. The output appears fluent because it mirrors those patterns, not because of genuine comprehension.
Figure 1: Simplified overview of the LLM process, from input to next-token prediction
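To make the prediction step concrete, here is a minimal sketch in Python, assuming the Hugging Face transformers library and the small open GPT-2 model (chosen purely for illustration; modern LLMs are far larger but make predictions the same way). It prints the model's probability distribution over the next token:

```python
# A minimal sketch of next-token prediction, assuming the Hugging Face
# `transformers` library and the public GPT-2 model (illustration only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits  # shape: (1, seq_len, vocab_size)

# The last position holds a score for every possible next token;
# softmax turns those scores into a probability distribution.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)

# Show the five most likely continuations and their probabilities.
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id)):>10}  p={prob:.3f}")
```

Note that the model never "knows" the answer; it only ranks tokens by how likely they are to follow the prompt in its training data.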
This process is fundamentally different from human writing, which involves conscious understanding and intent; LLMs produce language through prediction, not reflection. Because they follow statistical patterns rather than verifying information, they often produce confident-sounding false statements known as "hallucinations": cases where the model perceives patterns or objects that are nonexistent or imperceptible to human observers, creating output that is nonsensical or simply inaccurate. Since the model's core function is to produce statistically plausible text, it can invent facts, dates, or citations.
When such systems are promoted as "almost super-intelligent" (Sam Altman, "The Gentle Singularity"), students and professionals place unwarranted confidence in generated output that seems authoritative but is ultimately unverified. The result is a growing gap between expression and verification: when students rely on these tools, their writing often adopts a uniform style and can include invented citations, dates, and claims.
The model's reliance on pattern-matching also leaves recognizable traces in the generated language. A 2024 study analyzing PubMed articles from 2010-2024 found that certain words rose sharply in frequency after the release of ChatGPT ("Delving into LLM-assisted writing in biomedical publications through excess vocabulary"). Words like "delve," "tapestry," and "intricate" became notably more common, suggesting their prevalence in LLM-generated text. The overuse of such stock phrases is not proof on its own, but it can be a useful clue for identifying machine-generated content.
Figure 2: Visualization of "excess words" whose frequency rose notably in 2024, with many linked to common LLM writing styles
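The idea behind the study can be approximated with a simple frequency comparison. The sketch below is a toy version in Python; the tiny corpus and the 5x threshold are illustrative assumptions, not the study's actual data or method:

```python
# A minimal sketch of the "excess vocabulary" idea: compare how often each
# word appears in abstracts before and after ChatGPT's release.
# `abstracts_by_year` is a toy stand-in, not the study's PubMed corpus.
from collections import Counter

abstracts_by_year = {
    2021: ["we measured protein levels in mice",
           "the results were statistically significant"],
    2024: ["we delve into the intricate tapestry of protein regulation",
           "this study delves into intricate gene expression patterns"],
}

def word_rates(texts):
    """Per-word frequency normalized by total word count."""
    counts = Counter(w for t in texts for w in t.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

before = word_rates(abstracts_by_year[2021])
after = word_rates(abstracts_by_year[2024])

# Flag "excess" words whose usage rate is at least 5x higher after 2022;
# the small floor avoids division by zero for words absent earlier.
for word, rate in sorted(after.items(), key=lambda kv: -kv[1]):
    ratio = rate / max(before.get(word, 0), 1e-6)
    if ratio >= 5:
        print(f"{word:>12}  ~{ratio:.0f}x increase")
```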
Detection Tools
Tools that label text as "AI-generated" (so-called AI detectors) are generally unreliable; OpenAI even shut down its own AI detection software because of its poor accuracy (Nelson, 2023). These tools claim to search for stylistic or statistical fingerprints, yet independent evaluations show many false positives and wide variation in accuracy. Performance on GPT-4-generated content in particular was notably less consistent: while some AI-generated content was correctly identified, there were several false negatives and uncertain classifications. As one evaluation reported, "AI content detection tools developed by OpenAI, Writer, Copyleaks, GPTZero, and CrossPlag were used to evaluate paragraphs generated by language models and paragraphs written by humans. Findings reveal that the AI detection tools were more accurate in identifying content generated by GPT-3.5 than GPT-4."
Figure 3: The responses of five AI text content detectors for GPT-4 generated contents
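Many detectors rely on a statistical fingerprint such as perplexity: how predictable a text is to a reference model, with "too predictable" text flagged as machine-written. The sketch below, assuming GPT-2 as the scoring model, shows the bare mechanism; real detectors combine more signals, and the threshold here is arbitrary, which is exactly why such tools misfire:

```python
# A minimal sketch of perplexity-based detection, assuming GPT-2 as the
# scoring model. Real detectors are more elaborate; the threshold below
# is arbitrary, and plain, formulaic human prose also scores low, which
# is one source of false positives.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def perplexity(text: str) -> float:
    """Exponentiated average negative log-likelihood under the model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return torch.exp(loss).item()

text = "The mitochondria is the powerhouse of the cell."
ppl = perplexity(text)
print(f"perplexity={ppl:.1f} ->",
      "flagged as AI" if ppl < 50 else "likely human")
```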
Automated flags alone should not be used to punish students. When applied to human-written control responses, the tools exhibited inconsistencies, producing false positives and uncertain classifications. Multiple peer-reviewed tests and reviews document problematic error rates under adversarial conditions, and detectors have been shown to disproportionately flag work by non-native English speakers. At the same time, high-profile news coverage has reported students facing investigations or expulsion after institutions relied on detector results, and several detector services have mislabelled well-known human texts, including passages from the Bible, as "AI-generated" ("Did AI Write the Bible? 66.6% of AI Detectors Say Yes!"). Together these cases illustrate how unreliable tools can cause real harm when used as final evidence.
Figure 4: The responses of five AI text content detectors for human-written contents
What to Do as a Teacher
Because detection is unreliable, assessment must shift to value process and reasoning. Rather than policing final reports, instructors should design tasks that create verifiable records of learning: require drafts, schedule in-class writing, hold oral defenses of ideas, ask for annotated sources, and keep project logs. These practices make it harder to substitute machine text for student work and provide clearer evidence of understanding. By redesigning assessment this way, educators reduce dependence on brittle detectors while preserving the goal of authentic learning.
A more practical step is to teach students how LLMs work and how to use them responsibly; explicit instruction on when and how to disclose assistance can reduce both misuse and anxiety. When students know that LLMs are pattern machines that can hallucinate, they learn to verify facts and cite assistance, and classrooms can adopt clear, humane policies that focus on learning rather than policing.
Practical steps include:
- Require process artifacts: grade drafts, revisions, and the steps students took (Birks & Clare, "Linking artificial intelligence facilitated academic misconduct to existing prevention frameworks").
Ask students to submit outlines, early drafts, revision notes, and short logs; grading these rewards process over a polished final product and makes substitution with off-the-shelf LLM text harder. Research and institutional guidance recommend process-focused assessment as a key prevention strategy.
- Use in-person verification: incorporate short in-class writing and oral checks (NMU Center for Teaching and Learning, "Creating AI-Resistant Assignments, Activities, and Assessments").
Add low-stakes in-class tasks, brief oral defenses, or short presentations tied to submitted work; these real-time checks reveal whether students understand and can discuss their own claims. Schools and studies point to oral and in-person assessments as effective complements to take-home writing.
- Set clear policies and expectations of transparency: require disclosure, annotated sources, and peer review (Miriam Moore, "Reflection Assignments and Open Disclosure of AI").
Publish a simple AI policy for the course, ask students to annotate where they used tools, and build peer review into the workflow; transparency plus community review reduces misuse and helps instructors make fair, evidence-based judgments. Educational advisories recommend clarity and open disclosure over punitive secrecy.
- Design local and reflective tasks: ask for personal data, local examples, and reflections ("AI case study: Collaborative project with personal learning reflection"; "Expanding AI Awareness Through Everyday Interactions with AI: A Reflective Journal Study").
Create assignments that require local facts, classroom data, or a student's own observations, plus a short reflective note on how they used (or rejected) AI; personal and contextual work reduces the usefulness of generic LLM output and supports authentic learning. Case studies and reflection-based reports show this approach increases engagement and traceability.
- Teach verification and prompt literacy: train students to critique and fact-check LLM output ("Teaching with Generative AI").
Give simple exercises where students prompt an LLM, extract its claims, and then verify those claims against primary sources; treat the model as a tool to be challenged, not an oracle. Practical guides from university libraries and teaching centers stress active verification and prompt literacy.
These hands-on strategies protect academic integrity and help students become better thinkers. By shifting assessment toward process, verification, and context, instructors both reduce dependence on brittle detectors and teach the skills students will need in a world where LLMs are tools rather than substitutes.
Stay Informed & Get Involved
Are you interested in how students use AI in education?
Join our study on how students use AI or subscribe to our newsletter to receive updates, insights, and opportunities to participate in ongoing research.
👉 Sign up here to learn more and get involved!