And the Machines Take Over
The Matrix. Gran Torino. Klarna. Westworld. Red Clay Strays. Ollama. Ralph Nader. And Malaprop's.
I’ve been building... stuff... or things. Rather, it’s code. There’s this scene in The Matrix where what’s left of the human race is holed up in an underground city called Zion. The city’s leader walks their savior, Neo, played by that Keanu Reeves guy, on a tour of the scaffolding and houses. Halfway through, he points to enormous machines that provide oxygen to the inhabitants, who live deep underground after Earth’s armies blocked out the sky. Grim, isn’t it?
And the mayor comments (somewhat paraphrased), “Not really sure how it works. These machines do the work.”
If you don’t remember the scene, it’s wedged halfway through the second of four movies (yes, a fourth happened), the one where the Architect goes on a philosophical ten-minute tirade that reminds me of the John Galt soliloquy in the opus Atlas Shrugged.
And the machines, they’re cleaning the air and fixing themselves without intervention. Isn’t that amazing? I suppose that’s the film’s warning. Or maybe the screenplay is a thesis on simulation theory, or a tribute to Plato’s cave allegory. Take your pick.
In contrast to sci-fi glory (at least, the first movie), let’s take a scene from the movie Gran Torino. Here, Clint Eastwood plays a lonely man living in a changing neighborhood. He’s a remnant of sorts, a snapshot of a prior era. This is a tale of cultural acceptance, redemption, mentorship, and the impact of violence. My memory tells me this is an epic movie, but I only want to discuss Clint’s garage.
Inside, it’s brimming with tools, a mechanical lawn mower, and a pristine 1972 Ford Gran Torino Sport. This particular model is a two-door fastback with a 351-cubic-inch V8 engine, making it a classic American muscle car. And who doesn’t like the idea of a mechanical lawn mower?
What’s great about Clint in this scene is that he’s the polar opposite of Zion’s leader, a fictional version of my grandfather. He knows how everything inside the car, house, and garage operates. He can rip the car apart, piece by piece, and put it all back together again. And yes, if the oxygen went out in Zion, he’d find a way to fix that machine too.
A Valid Theoretical Conundrum
This brings me to Large Language Models (LLMs), better known as AI, the ultimate buzzword across industry these days. Some believe it’s overhyped, but the impact will be felt. Vehicles are becoming computers on wheels, headed down an autonomous path: self-driving, fixing bugs, etc. There are recent examples where systems-of-record applications such as Salesforce and Workday have been ripped out and replaced by purpose-built solutions. Notably, Klarna made waves in its recent regulatory filings by removing both SaaS juggernauts.
These LLMs are amazing at face value. Yet we’re not close to “five-nines accuracy.” Humans aren’t either, but we have constraints and controls to check our work. And within the underlying code an AI produces, understanding how it functions, through documentation or otherwise… let’s hold that thought.
Used as a writing hack, that’s the problem with the machine. Cut and paste blindly, and you lose the thread, the character. It’s like meeting an old high school friend you haven’t seen in years: you recognize them, and the memories come roaring back. Sure, you can relate. But how have they changed? What happened after homecoming dates and basketball games?
No, the latest AI models (Llama, OpenAI, Anthropic, Mistral, Phi, and the other children and offshoots) aren’t the destroyers of worlds. I don’t believe they ruin writing or coding as we know it. Heck, I’m not sure these prediction engines are great writers. I’ve been tinkering with models for years, and what I love is that sometimes I find an angle, a thought. Yet I don’t think these tools save time. If anything, contrary to what Tech Titans and Venture Capitalists push, I think using them takes more of it through fact-checking and review.
The ultimate challenge is what I call the void, the missing thread. What did you miss? In building applications, what is in this code? What does it do?
LLMs push us to the land in-between. We’re moving from a world where we know how everything works. To a new place, where it’s close. But not quite.
Being Intentional with Workflow
To my AI overlord, I’m not sure where you fit in my workflow.
Why? Reasons:
The little things matter when you write. And I’m more of a sculptor. Typically, I have this flawed tendency to describe too many details in a first draft. When I pull back, I cut ruthlessly, or at least make a valid attempt. Each object, each character description, is there for a reason. If not, it should be cut. When I place a kitchen knife on a coffee table, it’s there for a reason. The reader will think the same. Sure, misdirection exists in storytelling. Intentionality too.
If I were to contract out the creative process, the AI might describe the brand of the knife in Chapter 1, by design of its programming, only to forget about it by Chapter 2. Or change the knife into a Swiss Army combo. Consistency matters.
The human mind forgets. In a long work, think 50K words plus, finding the Victorinox change is hard. Worse, the writer might spend more time combing through the work, sweating the small things, than was gained in creation. How do you know it’s there if you didn’t cry over each word?
The “write what you know” problem. This is common advice. In a previous corporate life, I often preached that everyone’s voice matters. Their background. Education. Geographic location. All of this shapes us growing up. And it shapes our work. If someone grew up on a farm in the Midwest, they’ll know the smell, how the tree line slopes. This is why writers describe certain settings a certain way. Their point of view makes the work better.
An LLM is different. The tool has literally learned the entire corpus of the Internet, every chat, every datapoint, all of it algorithmically connected. When you type into a Claude or ChatGPT interface, the model statistically predicts the optimal response. It’s fast. But it can be thin with answers. I forget what I had for lunch yesterday. But if I do remember, it was amazing, whether through the conversation, the food, or the location. Something made that grilled cheese special. The AI doesn’t have that luxury, necessarily.
Yes, knowing everything isn’t necessarily an asset.
Maybe, The Problem Is Me
Taylor Swift phrases that line better: “I’m the problem, it’s me.” But oh, the potential of having the all-knowing at one’s fingertips. I have literally hundreds of article ideas in my backlog on world trade, economic theory, political pandering, and the perfect chocolate chip cookie. Wouldn’t it be great if they were just done?
Now that I’ve babbled on about it, I can assure you everything in this newsletter is written by yours truly. But what would happen if I eliminated yours truly from the equation? What if, instead of being Clint, I leaned into becoming Zion’s mayor?
So I built an overly complex series of Python applications to do just that, using a set of input files. In HBO’s Westworld, the writers suggest that humans can be reduced to 10,000 lines of code. Turns out, I’m more complicated, but that’s before any optimization.
It’s Alive!
My primary design constraint was that I wanted the application to take on a personality. I crafted a biography like any politician would, a “what could have been” person. Here, I pictured myself graduating from college, a different degree in hand, and taking a job at a ’90s-era mid-market newspaper (these existed before being gutted by philanthropic Big Tech).
And I figured all newsrooms have a boss or editor who assigns the topic. Another input file. Here, I used a different LLM too.
I also added another input file that assigned specifics, including verified facts and subtopics. However, I want to be clear that I wrote the application to give the AI the option to use these; the agency exists to keep or ignore. Why? Usually, the through line in my books is the power of choice and destiny, so I wanted a similar principle in code. This turned out to be an Achilles’ heel of sorts; more on that in a later article.
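Wiring-wise, it looked something like this. A minimal sketch only: the file names and the specifics format here are hypothetical stand-ins, not the actual program.

```python
import json
from pathlib import Path

def load_inputs(folder: str) -> dict:
    """Read the three input files: the persona biography, the
    editor's topic assignment, and the optional facts/subtopics."""
    base = Path(folder)
    return {
        "persona": (base / "persona_bio.txt").read_text(),
        "assignment": (base / "assignment.txt").read_text(),
        "specifics": json.loads((base / "specifics.json").read_text()),
    }

def build_prompt(inputs: dict) -> str:
    """Fold everything into one prompt. Note the facts are offered,
    not mandated: the model keeps the agency to use or ignore them."""
    facts = "\n".join(f"- {f}" for f in inputs["specifics"]["facts"])
    return (
        f"You are this writer:\n{inputs['persona']}\n\n"
        f"Your editor's assignment:\n{inputs['assignment']}\n\n"
        f"Optional facts and subtopics (keep or ignore as you see fit):\n{facts}"
    )
```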
The application also had a few other constraints. I designed it to generate content within specified word-count limits; my posts run in the 1,500-word range. The code aims to reach a minimum word count (a soft cap) and ensures it doesn’t exceed a maximum limit (a hard cap), so the article is neither too brief nor overly long.
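The gate itself is the simple part. A minimal sketch, with illustrative thresholds rather than my actual numbers:

```python
SOFT_CAP = 1200  # minimum words before a draft passes (illustrative)
HARD_CAP = 1800  # absolute ceiling; drafts past this get trimmed

def check_length(draft: str) -> str:
    """Classify a draft against the soft and hard word-count caps."""
    words = len(draft.split())
    if words < SOFT_CAP:
        return "too_short"  # route back to the writer for expansion
    if words > HARD_CAP:
        return "too_long"   # route back for cuts
    return "ok"
```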
Iteration. Yes, the program can spew out any number of articles. But I built a check where it re-reviews what it writes. I figured, I do this, so my improved version should have the option to revise and edit.
Editing. I then created separate streams, run sequentially, that reviewed the drafts made by my alternate-universe ego. Here, a purpose-built editor essentially completes a line edit. Then my alter ego reviews and decides whether to accept the feedback. Another pass. And then a different LLM completes a copy edit, reviewing for grammar mistakes, etc. I often review, set aside, review, and repeat. When writing a book, I often have a dozen drafts; that’s not necessarily a best practice. In my program hack, I found diminishing returns in quality around draft three. Again, the challenge is in the details.
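Strung together, the whole loop looks roughly like this. Again, a sketch: llm, editor_llm, and copyeditor_llm are stand-ins for whichever models get wired in, and the draft-three cap reflects where I saw quality plateau.

```python
def produce_article(topic: str, llm, editor_llm, copyeditor_llm,
                    max_drafts: int = 3) -> str:
    """Draft, take a line edit, let the writer accept or reject the
    notes, repeat, then finish with a copy edit by a different model."""
    draft = llm(f"Write a ~1,500-word article on: {topic}")
    for _ in range(max_drafts - 1):
        notes = editor_llm(f"Give line-edit notes on this draft:\n{draft}")
        # The alter ego decides which notes to take; quality plateaued
        # around draft three, hence the cap.
        draft = llm(
            f"Here are your editor's notes:\n{notes}\n\n"
            f"Accept or ignore them as you see fit, then revise:\n{draft}"
        )
    # Final pass: a different model checks grammar and mechanics only.
    return copyeditor_llm(f"Copy-edit for grammar and typos:\n{draft}")
```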
And The Results? To Be Continued …
Well, I can tell you: the alternative J. Scott didn’t work out quite like I expected. But kudos for its speed. I’ll publish a couple of its articles in the JPLA’s next installment. So we’ll call this a cliffhanger, for now. Or a Choose Your Own Adventure: turn to page 76.
Again, there is promise in the technology. It’s just a matter of defining the constraints (a prior topic) and envisioning how it best fits your workflow. That’s my takeaway. Draw the lines. Finish the project. And then think through what worked and what didn’t. I mean, welcome to Westworld, where nothing can go wrong ... go wrong ... go wrong. Right?
Notes:
The headline picture has been modified through Photoshop. Instead of tinkering with the posterization tools, I used the AI built into Adobe’s product to fine-tune and adjust. Is it good? You be the judge.
Memory is a fallacy. Is the Matrix Reloaded better than you remember?
In HBO’s Westworld, it’s 10,247 lines of code. The human mind definitely isn’t written in Rust or C++.
Be Cool, Pass The JPLA On …
What I Was Watching (Cardinals Wrap-Up Edition):
Alas, the Cardinals ended the season about where I thought they’d be—slightly better than .500. I had them at one game over, with the pitching puts and takes. So they exceeded expectations, but a -47 run differential isn’t going to take home any trophies. That doesn’t mean it hasn’t been done before (here’s to the 1987 Twins), but it’s no way to run a railroad.
What team has the best run differential this year? The Dodgers. Ohtani’s 50/50 performance is unheard of in the modern game. Or any game, for that matter.
But that doesn’t mean the Dodgers will win. That’s why we play the games.
Red Clay Strays (What my headphones are streaming):
No One Else Like Me. Brandon Coleman is a force of nature with unmatched intensity.
What I’m Tinkering With (LLM Edition):
How much do you tell your chatbot friend (OpenAI, Claude, Meta, Gemini)? Well, if you use them, I’m guessing more than you think. Be cautious; there are few governing rules around data use in the US, and nobody is better than Zuck at data mining. So I’ve been playing around with Ollama of late, which runs locally. Depending on your machine’s specifications, the performance varies, but the potential is here.
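If you want to kick the tires, Ollama exposes a local API once it’s running; here’s a minimal sketch (the model name is whatever you’ve pulled, e.g. with ollama pull llama3.2):

```python
import json
import urllib.request

# Ollama serves a local API on port 11434 by default; nothing
# leaves your machine. Swap in whichever model you've pulled.
payload = json.dumps({
    "model": "llama3.2",
    "prompt": "In two sentences, why does Clint's garage matter?",
    "stream": False,
}).encode()

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```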
What I’m Reading (Random Article Edition):
Ralph Nader's Pens. Part conspiracy theory. Part Occam’s Razor.
The light saber. What could have been, Hasbro.
With a burn rate in the billions, OpenAI raises $6 billion to keep the train going. And Liquid AI creates a new type of model, the LFM.
Real estate shopping for the apocalypse.
Writing with the machine, New Yorker edition. Just learning that George R.R. Martin writes with WordStar made this a worthy read.
When There Are No Words:
A sign outside of Malaprop’s, Asheville’s beloved downtown bookstore, read: “Closed. This is Katie. Be safe. I will try and contact you when I am able.”
Credit: The NY Times