A.I. Writing and Effective Learning
What A.I. Writing Software Is:
- Large Language Models (LLMs): Algorithms designed to produce sequences of words in response to an input prompt, based upon hundreds/thousands of human-designed* categories and classifications of words, phrases, sentences, and paragraphs garnered from analyzing large corpora of texts.
- Probabilistic: They output “guesses” at what you want to hear/see in response to your input prompt, chosen according to statistical probabilities (again, gleaned from the aforementioned human-designed categories).
- Partial / biased: An LLM is only as good as the source text / data that it is trained on, which is typically proprietary information. We have no way of knowing whether the model was trained on language structures that incorporate a diverse array of written styles / culturally indexed voices, or simply the hegemonic imaginary of “standard” American English.
- Products: Keep in mind that LLMs are designed as services in their own market economy with a wide array of use-cases outside of helping students “cheat.” Be mindful that the hype surrounding them may be generated by people with a vested interest in others’ belief that they are bullet-proof (because such mass public belief will boost investor/consumer confidence, stock prices, etc.).
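The “probabilistic” point above can be made concrete with a deliberately tiny sketch. This is not how production LLMs actually work (they use vastly larger models and training data); it is a toy bigram model, with a made-up three-sentence “corpus,” that shows the core idea of guessing the next word from statistical counts:

```python
import random
from collections import Counter, defaultdict

# Toy illustration only: a bigram model that tallies which word tends to
# follow which, then generates text by sampling a statistically likely
# next word. The corpus below is a hypothetical example.
corpus = (
    "the model predicts the next word . "
    "the model guesses the likely word . "
    "the output is a statistical guess ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word(word):
    """Sample a next word in proportion to how often it followed `word`."""
    counts = following[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

# Generate a short sequence starting from "the".
word, output = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

Note that the model has no idea what any of these words mean: it only “knows” that, in its training data, “model” was followed by “predicts” half the time and “guesses” the other half, so it picks between them at those odds. Scale the counts up by billions and you have the intuition behind an LLM’s fluent-but-uncomprehending output.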
What A.I. Writing Software Is Not:
- Magic
- “Intelligent” in exactly the same way that you are
- Going to take your job (an LLM is not an adaptive teacher)
- Completely “artificial” and autonomous (i.e. operating without any human input/error)
- Able to flawlessly execute any writing task that you present it with – see below
* While it is true that some LLMs create classifications from more abstracted, probabilistic models (known colloquially as “neural networks,” built via “machine learning,” though these terms are not without contest), all of them require some degree of human attention to check for reliability, which necessarily alters the way that the neural net / machine learning functions. In other words, even LLMs with categories and classifications based on neural networks and machine learning are not fully “autonomous.” They don’t learn what “good writing” is on their own – humans still have to evaluate whether their output is good or bad, helpful or unhelpful, etc.