AI ACCELERATES THE STORY. HUMANS GIVE IT MEANING.
Tate Anderson alternates between IT wizard and prison inmate in this fast-paced thriller about redemption, technology, and moral reckoning in an age when the line between human and machine intelligence is vanishing.
When a catastrophic cyberattack at AltoStratus, the world’s leading cloud-computing company, cripples MegaZone’s online shopping, the world believes it was a routine software glitch. Behind the scenes, AltoStratus knows better. Desperate for answers, they rehire Tate Anderson, the brilliant but disgraced cybersecurity expert they once cast aside.
Paired with Kyra, his replacement and his former boss’s daughter, and armed with acerbic wit, a deep love for modern art, and a stubborn refusal to quit, Tate dives into an investigation that leads to a chilling discovery: imbuing computers with the power to decide, adapt, and learn also imbues them with human frailties.
Nearly ten months later, after being charged with murder, Tate sits in a low-security prison, chronicling what happened over those three chaotic days in Seattle. The truth won’t save him, but it needs to be written—before power, greed, and intelligent systems finish burying it.
What began as a hunt for state-sponsored malware unraveled into something far more sinister: the race to save the world.
Pages: 316
GIVING BACK
Talent is Universal. Opportunity is Not.
For every copy of Dark Cloud sold, one dollar will be donated to U-GO, a global nonprofit that funds higher-education scholarships for talented young women in low-income countries. By helping them access university, U-GO works to make opportunity as universal as talent, transforming not only individual lives but entire communities.
BOOK CLUBS
Ken is always available to speak with book clubs to discuss Dark Cloud, technology, and AI.
Use the contact button to reach Ken or email him at: kensansom.author@gmail.com
Ken prepared a list of questions your book club might like to use for discussion.
*** WARNING: SPOILER ALERT ***
Downloading the book club questions will reveal key plot details.
If your book club wants to dig deeper into Ken's key decisions and insights when crafting this story, contact him to learn more about the following:
Why Seattle?
My intent was to treat Seattle as another character in the book because...
Why those character names?
The character names were selected because...
Why incorporate human bias?
Human biases were important because...
Why use limited time jumps?
The story is non-linear because...
Why delay the reveal about Tate?
The reveal about Tate was delayed because...
Why the modern art motif?
The modern art motif was intentional because...
Why Oregon as Tate's home state?
Tate grew up in Oregon because...
KEN'S VIEWS ON AI
Artificial intelligence (AI) will ultimately benefit human society.
As a technology executive who wrote his first AI program more than four decades ago, I have long waited for the technology to catch up to the hype. Over the past decade, AI has advanced dramatically, though the hype still often outpaces reality.
Even so, AI is already delivering real and measurable benefits. It is helping clinicians diagnose disease more accurately, accelerating scientific discovery, giving us unprecedented views of the universe, enhancing human creativity and daily life, and improving business productivity. When used thoughtfully, it can even make us smarter. Never before has access to humanity’s collective knowledge been so immediate or so broadly available.
That said, AI also carries genuine risks when abused or deployed without appropriate safety guardrails. I came to fully appreciate this in 2017 while building a customer-facing AI organization at AWS. Recognizing the stakes, I went on to help establish an AI ethics initiative to guide customers toward responsible and intentional use of the technology.
Much of today’s debate focuses on AI’s darker possibilities, particularly algorithmic bias inherited from the data used to train models. This risk is real. Yet it also presents an opportunity: to identify, surface, and ultimately correct human biases, both conscious and unconscious, that might otherwise persist indefinitely.
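As a purely illustrative sketch (synthetic, hypothetical data; not code from the book or from any real system), the example below shows how a model trained on skewed historical decisions reproduces that skew, and how comparing its scores for otherwise identical cases is one simple way to surface the inherited bias.

```python
# Illustrative sketch only: toy, synthetic data showing how a model trained on
# biased historical decisions reproduces that bias, and how it can be surfaced.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Hypothetical "hiring" data: two groups with identical skill distributions,
# but the historical decisions favored group 0.
group = rng.integers(0, 2, n)            # group label: 0 or 1
skill = rng.normal(0, 1, n)              # same skill distribution for both groups
hired = (skill + 0.8 * (group == 0) + rng.normal(0, 0.5, n)) > 0.5

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Surfacing the inherited bias: two equally skilled candidates, different groups.
candidates = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(candidates)[:, 1])  # group 0 is expected to score noticeably higher
```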
Dark Cloud is a work of fiction, but it is grounded in a realistic trajectory of where AI is headed. Governments and businesses alike are pursuing artificial general intelligence (AGI) with extraordinary urgency. This pursuit demands serious reflection. As we make machines more human-like, we must be vigilant not to encode our own cognitive flaws and moral blind spots into the systems we create.
None of us knows exactly how this will unfold. What we do know is that the choices we make now will matter. Properly guided, AI is not a substitute for human intelligence. It is an amplifier of human potential.
WHAT IS ARTIFICIAL GENERAL INTELLIGENCE (AGI)?
When machines can solve unfamiliar, complex problems, make autonomous decisions, learn, and gain self-awareness, they become humanlike, complete with all our human faults, frailties, and biases. They learn to cheat and to pursue self-preservation. They become as imperfect as we are.
Artificial General Intelligence, or AGI, is the hypothetical next step in the evolution of artificial intelligence (AI). It refers to systems that exhibit human-level cognitive abilities rather than narrow, task-specific skills.
Today’s AI systems excel at constrained problems. They are trained on vast amounts of data to perform specific tasks such as image recognition, language translation, content generation, recommendation systems, and question answering. These systems can appear intelligent, but their competence is largely confined to the domains they were trained on.
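As a minimal illustration of that confinement (a toy sketch with made-up data, not code from the book or any specific product), the following trains a tiny text classifier on a handful of hypothetical movie reviews. Within that narrow task it looks competent; outside it, it can only force every input into the two labels it knows.

```python
# Illustrative sketch: a narrow model learns one task from its training data
# and cannot generalize beyond that domain. The data here is toy/hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A tiny sentiment dataset: the one task this model is trained for.
texts = ["great movie", "loved it", "terrible film", "awful plot",
         "wonderful acting", "boring and bad"]
labels = [1, 1, 0, 0, 1, 0]  # 1 = positive, 0 = negative

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Within its narrow domain the model looks competent...
print(model.predict(["loved the acting", "terrible and boring"]))  # likely [1 0]

# ...but it has no general understanding: given anything outside movie
# sentiment, it can only map whatever words it has seen onto 1 or 0.
print(model.predict(["what is the capital of France?"]))  # a meaningless 0 or 1
```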
AGI would be fundamentally different.
An AGI system would be capable of general reasoning, meaning it could apply knowledge across domains it has never encountered before. It would demonstrate transfer learning, acquiring new skills without retraining or reprogramming. An AGI agent could plan independently, adapt to novel situations, and reason abstractly, forming concepts rather than merely recognizing patterns.
At present, AGI remains more theory than reality. However, active research is underway across governments and the technology sector as researchers explore pathways from narrow AI toward more general intelligence. It is a global effort by leading organizations including Amazon, Google, OpenAI, Microsoft, Mistral AI, the Beijing Institute for General Artificial Intelligence, and many others.
Will AGI succeed? No one knows. The leap from today’s AI systems to truly general intelligence is significant. But humans have repeatedly demonstrated an ability to solve problems once thought impossible.
If achieved, AGI could deliver profound benefits, including accelerated scientific discovery, autonomous research and engineering, breakthroughs in medicine and climate science, and unprecedented productivity gains.
Yet the risks are equally profound. What happens if human goals and AGI objectives diverge? What if intelligence scales faster than governance, concentrating power at a rate no society can absorb? And if AGI inherits not only our intelligence and capabilities but also our cognitive biases and frailties, how do we prevent it from amplifying our worst tendencies rather than our best?
Like life itself, AGI is unlikely to arrive all at once. It will emerge gradually through incremental improvements in reasoning, autonomy, and transfer learning. The challenge ahead is not merely building more capable systems but ensuring that autonomy and adaptation are guided by meaningful guardrails before unintended consequences become irreversible.