Before the Machine Decides: When Tools Begin to Ask for Our Allegiance
A formation-focused response to fears about artificial superintelligence
This article is my response to reflections originally written by Lily Rose Dawson. You can read her original piece, “The Technology That Needs to Be Illegal Right Now: Why Artificial Superintelligence Is More Dangerous Than Nuclear War.”
Dawson’s article is written like a siren. It’s meant to wake people up, and on one level, I understand why. But I think the most honest way to engage with what she’s saying is to notice that her warning doesn’t begin with a hypothetical superintelligence. It begins with something we already live inside every day: tools becoming masters.
Months ago, I published a piece called “The Scrolls of Silence” with a simpler claim: we touch our screens more than we touch each other. We’re not just “using” technology; we’re being shaped by it, and our children are the victims. Dawson argues that when these same dynamics are scaled up by AI, they become catastrophic: if we build systems smarter than us, we may not be able to steer them at all.
So, here’s the connection, stated plainly and factually:
The common ground: incentives beat intentions
Dawson’s core point is that the greatest risk is not a “bad machine,” but human systems that reward reckless deployment. Competition, profit, national security, and speed consistently overpower caution. This is not science fiction. It is how modern technology usually scales—fast, unevenly, and with consequences discovered only after adoption.
That’s exactly the pattern I was warning about at the family level:
communication mistaken for communion
presence replaced by participation
relationship displaced by consumption
Different subject. Same mechanism: what we build trains us—then it starts ruling us.
What’s already observable vs. what’s speculative
There’s an important distinction worth keeping honest:
Observable now: attention fragmentation, social isolation, shortened patience, reduced face-to-face conversation, and the use of screens as substitutes for presence, especially in homes with children. These are measurable cultural outcomes, not theoretical fears.
Speculative (but not irrational): the claim that a future ASI would become uncontrollable and existentially dangerous. That scenario is not proven. It’s a forecast built on assumptions: that intelligence scales into autonomous agency, that control fails at a certain capability threshold, and that alignment won’t be solved in time.
You can agree with Dawson that incentives are dangerous without pretending the endgame is already certain.
Where the two warnings connect most closely: parenting
If we’re being factual, the clearest “front line” isn’t server farms; it’s living rooms.
Parents have to recognize something uncomfortable: when devices become the default babysitter, the child is trained daily to seek stimulation instead of connection. That’s not a moral panic. It’s basic formation.
A generation formed to avoid silence, discomfort, patience, and conversation becomes a generation more vulnerable to any system that offers:
constant engagement
frictionless guidance
automated answers
outsourced judgment
The danger isn’t only “what AI might do later.” It’s what dependence does to us now.
Why this echoes Revelation’s pressure patterns (without turning it into panic)
For Christians, this also has a spiritual shape that Scripture already trains us to discern.
Revelation’s pressure isn’t first about “new gadgets.” It’s allegiance testing: quiet, reasonable compromises that slowly shift trust away from Christ and into systems that promise security, access, or peace.
So the discernment question isn’t, “Is this technology evil?” It’s: “What is this training me to trust, obey, and depend on?”
And Revelation gives a second corrective that matters here: it calls believers to endurance and worship, not fear and obsession. Fear-based narratives can become a form of captivity in themselves. The goal is not to panic; it’s to stay awake.
The most honest takeaway
Dawson is shouting about the cliff edge. FaithBindsUs has been writing about the slow drift toward it. If there’s a factual bridge between our warnings, it’s this: We don’t wake up one day under tyranny. We practice our way into dependence.
Whether that dependence is on a glowing screen at dinner or on an automated system that mediates work, truth, or belonging, the pattern is the same.
So yes—ask hard questions about advanced AI. Demand real oversight. But don’t miss the nearer question that decides whether we’re even capable of wise oversight: Have we already surrendered the basic human disciplines of attention, presence, conversation, and conscience that are needed to remain free? Because the future doesn’t just happen to us. It forms in us, one habit at a time.
Original inspiration for this article is credited to Lily Rose Dawson.

