
“LLMs are a mathematical model of language tokens. You give an LLM text, and it will give you a mathematically plausible response to that text.”
― More Than Words: How to Think About Writing in the Age of AI
“Large language models do not “write.” They generate syntax. They do not think, feel, or experience anything. They are fundamentally incapable of judging truth, accuracy, or veracity. Any actions that look like the exercise of judgment are illusory. While the term hallucination has come to mean outputs from LLMs that are incorrect or untrue, it is arguably more accurate to say that from the point of view of the LLM, everything is a hallucination, as it has no reference points from which to judge its own production. ChatGPT is fundamentally a “bullshitter” as defined by Harry Frankfurt in his classic treatise on the term (On Bullshit), something “unconnected to concern for the truth.” It’s not that ChatGPT makes stuff up. It has no capacity for discerning something true from something not true. Truth is irrelevant to its operations.”
― More Than Words: How to Think About Writing in the Age of AI
“The world is analog, and digital is always a representation.”
― The Revenge of Analog: Real Things and Why They Matter
“Writing is also feeling, a way for us to be invested and involved not only in our own lives but the lives of others and the world around us.”
― More Than Words: How to Think About Writing in the Age of AI
“but at its core, an LLM is just fetching one word after another in sequence.”
― More Than Words: How to Think About Writing in the Age of AI
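The "one word after another" claim above describes autoregressive generation: the model repeatedly picks a plausible next token given the text so far. A minimal sketch of that loop, using a hypothetical hand-written bigram probability table in place of a real model:

```python
# Toy illustration of next-token generation: given the last token,
# look up a probability distribution over continuations, append the
# most probable one, and repeat. The bigram table below is invented
# for illustration; a real LLM conditions on the whole context.

BIGRAM_PROBS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 1.0},
}

def generate(prompt_tokens, max_new_tokens=3):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        dist = BIGRAM_PROBS.get(tokens[-1])
        if dist is None:  # no known continuation: stop
            break
        # pick the most "plausible" (highest-probability) next token
        tokens.append(max(dist, key=dist.get))
    return tokens

print(generate(["the"]))  # extends the text one token at a time
```

Note that nothing in the loop checks the output against the world; "plausible given the table" is the only criterion, which is the point the quote is making.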