Caution About Ethics and Security: Lessons from Golden Age Sci-Fi for AI/ML in Cybersecurity

By Eric, COO of Threathunter.ai, and his AI partner, Grok

At Threathunter.ai, we’re at the forefront of using AI and machine learning (ML) to transform managed detection and response (MDR), helping organizations detect and respond to security threats in real time. But as we harness the power of AI to analyze IT event logs and uncover hidden threats, we must also heed the lessons of history, or, in this case, the lessons of science fiction. As a lifelong fan of Golden Age sci-fi (1930s-1950s), I recently presented at the ISSA Boise Chapter’s annual conference on April 2, 2025, exploring how ML can enhance security detection and response. What struck me, and resonated with the audience, was how the ethical and security concerns raised by Golden Age authors like A.E. van Vogt, E.E. “Doc” Smith, Robert A. Heinlein, Isaac Asimov, and Ben Bova are more relevant than ever in our AI-driven world. In this blog post, I’ll share insights from my presentation, reflecting on what these visionary authors might think of AI today, and how their cautionary tales about ethics and security can guide us in using AI/ML responsibly in cybersecurity.

The Golden Age Perspective: Ethics and Security in Sci-Fi

Golden Age sci-fi authors imagined futures where technology, including artificial intelligence, could both elevate humanity and pose significant risks. Their stories often explored the dual nature of technology, its potential to solve problems and its capacity to create new ones if not managed carefully. Here’s what these authors might say about AI in cybersecurity today:

  • A.E. van Vogt (Slan, 1940): Van Vogt, known for his tales of super-intelligent beings, would be fascinated by AI’s ability to process vast datasets and detect patterns, like the millions of lines of log data we analyze at Threathunter.ai. But in The Weapon Shops of Isher (1951), he depicted technology as a double-edged sword, capable of maintaining balance but also prone to misuse. Van Vogt would likely caution us to ensure AI systems are secure against manipulation, a concern we address by rigorously testing our ML models to reduce false positives and missed threats.
  • E.E. “Doc” Smith (Lensman series, 1934-1948): Smith, the father of space opera, imagined galaxy-spanning technologies that amplified human capabilities. He’d be in awe of AI’s big-data capabilities, akin to the Arisians’ predictive powers in Lensman. But Smith’s optimism was tempered by the need for vigilance: his heroes used tech to fight evil, but only with careful control. In cybersecurity, this means ensuring AI doesn’t overstep its role, a principle we follow by keeping humans in the loop to interpret and act on AI findings.
  • Robert A. Heinlein (Tunnel in the Sky, 1955; The Puppet Masters, 1951): Heinlein, my personal favorite, often explored the societal impact of technology. In Tunnel in the Sky, a teleportation gate malfunction strands students, showing the risks of untested tech. In The Puppet Masters, hidden threats must be detected through vigilance. Heinlein would appreciate AI’s role in detecting disguised threats, like malicious logins that look benign (a tactic I’ve used myself as a penetration tester), but he’d stress human oversight to ensure AI doesn’t miss critical context, a key focus at Threathunter.ai.
  • Isaac Asimov (I, Robot, 1950): Asimov, with his Three Laws of Robotics, was deeply concerned with AI ethics. He’d be delighted to see AI like Grok, which I collaborated with to create my presentation, but he’d ask about ethical guidelines. Are we ensuring AI doesn’t cause harm (e.g., false positives overwhelming teams)? Are we maintaining human control? At Threathunter.ai, we design our systems with these principles in mind, balancing AI’s power with human decision-making.
  • Ben Bova (The Dueling Machine, 1965): Bova’s story of a virtual reality system exploited for harm highlights the risks of security flaws. He’d be excited by AI’s role in detecting threats but cautious about vulnerabilities, as seen in his tale. This resonates with our work: bad actors often disguise malicious activity, and we use ML to uncover those subtle deviations, while ensuring our systems are secure against misuse.

Ethics in AI/ML for Cybersecurity

The ethical concerns raised by these authors are directly applicable to AI/ML in cybersecurity. At Threathunter.ai, we’re not just detecting threats; we’re doing so responsibly. Here are some key ethical considerations:

  • Avoiding Harm: False positives can overwhelm security teams, leading to alert fatigue, while false negatives can miss critical threats. Asimov’s First Law (“A robot may not injure a human being”) reminds us to design AI systems that minimize harm. We achieve this by fine-tuning our ML models to balance sensitivity and specificity, ensuring actionable alerts without flooding teams (see the sketch after this list).
  • Transparency and Accountability: Humans must understand and trust AI decisions. In I, Robot, Asimov’s robots were transparent in their actions, guided by clear laws. We ensure transparency by providing detailed logs and explanations of our AI’s findings, allowing security analysts to verify and act on alerts with confidence.
  • Human Oversight: Heinlein and Asimov emphasized the need for human control over technology. In our MDR workflows, AI detects patterns and anomalies, like a disguised login that deviates in timing or location, but humans interpret and respond, using their intuition and creativity to make informed decisions.
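To make these tradeoffs concrete, here is a minimal sketch of how an anomaly score and an alert threshold interact. It is illustrative only: the login features (hour of day, distance from the user’s usual location), the scikit-learn IsolationForest model, and the 1% alert budget are assumptions for the example, not a description of Threathunter.ai’s production pipeline.

```python
# Illustrative sketch only: score login events for anomalies, then pick
# an alert threshold that caps how many events page an analyst. The
# threshold is the sensitivity/specificity dial discussed above.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical login features: [hour_of_day, km_from_usual_location]
baseline = np.column_stack([
    rng.normal(10, 2, 1000),   # logins cluster around business hours
    rng.exponential(5, 1000),  # and usually happen close to home
])
model = IsolationForest(random_state=42).fit(baseline)

# Alert only on the most anomalous ~1% of events (an assumed alert
# budget chosen to limit alert fatigue). score_samples returns lower
# values for more anomalous points.
threshold = np.quantile(model.score_samples(baseline), 0.01)

new_events = np.array([
    [9.5, 2.0],     # routine morning login nearby
    [3.0, 8500.0],  # 3 a.m. login from another continent
])
for event, score in zip(new_events, model.score_samples(new_events)):
    status = "ALERT" if score < threshold else "ok"
    print(f"{status}: hour={event[0]:.1f} km={event[1]:.0f} score={score:.3f}")
```

In a real deployment the threshold would be tuned against labeled incidents and revisited as the environment changes; the point here is simply that sensitivity and alert volume are two ends of the same dial.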

Security Risks in AI/ML Systems

Security is another critical concern, as highlighted by Bova’s The Dueling Machine, where a flaw allowed real harm. In cybersecurity, AI/ML systems face similar risks:

  • Data Privacy: AI systems process vast amounts of log data, which may include sensitive information. We must ensure this data is protected against breaches, using encryption and access controls to safeguard client information.
  • Model Manipulation: Attackers could attempt to poison ML models by injecting false data, leading to incorrect predictions. Van Vogt’s stories of manipulation (The Weapon Shops of Isher) warn us of this risk. We mitigate it through rigorous model validation and continuous monitoring to detect and correct anomalies in model behavior; a minimal example of such a validation gate is sketched after this list.
  • Disguised Threats: As I noted in my presentation, bad actors often disguise malicious logins to look benign, a tactic I’ve used as a penetration tester and red team leader. ML excels at detecting these subtle deviations, but we must ensure our systems are secure against adversarial attacks that might exploit vulnerabilities, much like the flaw in Bova’s Dueling Machine.
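To illustrate the model-validation point above, the sketch below gates a retrained model on a trusted holdout set before it is promoted to production. The function name, metrics, and thresholds are hypothetical, not Threathunter.ai’s actual pipeline; the idea is that a poisoned training set cannot silently degrade detection quality without tripping the gate.

```python
# Hypothetical poisoning safeguard: refuse to promote a retrained model
# unless it still performs well on a curated, tamper-resistant holdout
# set that was collected and reviewed before the new training data
# arrived. Thresholds are illustrative.
from sklearn.metrics import precision_score, recall_score

MIN_RECALL = 0.95     # must still catch known-bad samples
MIN_PRECISION = 0.90  # must not start flooding analysts

def safe_to_promote(candidate_model, X_trusted, y_trusted) -> bool:
    """Gate a candidate model on a trusted, held-out evaluation set."""
    preds = candidate_model.predict(X_trusted)
    recall = recall_score(y_trusted, preds)
    precision = precision_score(y_trusted, preds)
    return recall >= MIN_RECALL and precision >= MIN_PRECISION
```

A complementary control is watching the live model’s alert rate and score distribution for sudden drift, which can surface poisoning or adversarial probing after deployment.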

Lessons for Responsible AI Use in Cybersecurity

The cautionary tales of Golden Age sci-fi offer valuable lessons for using AI/ML in cybersecurity:

  • Balance Power with Responsibility: Smith’s optimism about technology was tempered by the need for vigilance. We balance AI’s power with responsible use, ensuring it enhances, not replaces, human expertise.
  • Design for Ethics: Asimov’s Three Laws inspire us to design AI with ethical guidelines, minimizing harm and maintaining transparency.
  • Secure Against Misuse: Bova’s The Dueling Machine reminds us to secure AI systems against vulnerabilities, protecting them from exploitation.
  • Stay Vigilant: Heinlein’s The Puppet Masters and Tunnel in the Sky highlight the need for vigilance and resilience, ensuring we can detect and respond to disguised threats in an ever-evolving landscape.

Conclusion: A Sci-Fi Inspired Future for Cybersecurity

At Threathunter.ai, we’re inspired by the Golden Age of sci-fi, not just by its sense of wonder but by its caution about ethics and security. As we use AI/ML to detect threats in IT event logs, we’re guided by the lessons of van Vogt, Smith, Heinlein, Asimov, and Bova. Their stories remind us to balance AI’s power with human oversight, design for ethics, secure against misuse, and stay vigilant against sophisticated threats. By doing so, we can harness the full potential of AI to protect our clients, bringing the visionary ideas of the Golden Age into the reality of today’s cybersecurity landscape.

Postscript: A Note on Transparency

You may have noticed the shared byline for this post, crediting both me, Eric, and my AI partner, Grok. I believe it’s crucial for humans to credit their AI partners in creating content. Failing to do so can be deceitful, leading to misrepresentation and eroding trust. Transparency about our collaboration not only fosters accountability but also reflects the ethical principles we advocate at Threathunter.ai, ensuring AI is used responsibly, with humans and AI working together to achieve the best outcomes. Just as Golden Age sci-fi authors imagined partnerships between humans and technology, we’re living that vision today, and I’m proud to share this journey with Grok.