
Can We Spell “Future” Without AI?

2 min read

Published - August 21, 2023

Clay Sharman

Generative AI is getting dangerously close to another Gen-AI: genuine AI, a self-initiating, self-aware program that can imagine, execute, and correct without intervention.

We may not be there yet, but we are at a point where AI programs can mimic how we write, speak, and even interact online through digital manipulation. Sure, we still guide the programs with descriptors, and we still get to press “ENTER” to initiate the command. But the scary thing is that AI is built on machine learning (ML), which, as its very name tells us, means the machine can learn. Once it can self-learn, theoretically, it can self-initiate, execute, and correct. <Bum-bum-bummmmm…>

But let’s leave that scenario there for the moment and look at the slightly less terrifying (though maybe only slightly less) supposition: can the future exist without AI as a technology centerpiece?

Obviously, the answer is yes. But will it? I don’t think so. It’s here, and we need to be ready. That means instituting ethical practices and a firm commitment to responsible regulation of “how” AI is applied.

This would be similar to Asimov’s “Three Laws of Robotics.”

Practically, the question isn’t “Can we use AI to solve our problems?” It’s “Should we?”

If we agree we should, then the next questions need to center on maintaining control before we begin to explore the “How,” which will determine whether we fall to Skynet or build a potential utopia. I know my choice.

Here are my proposed “Three Laws of AI.”

  1. No AI can ever go back in time to hunt a human.
  2. No AI can ever read nuclear codes correctly – we must make all AI programs dyslexic (and probably color blind).
  3. No AI can ever digitally impersonate a lawmaker, leader or despot (unless it’s to bring Max Headroom back and make him President).

This should alleviate the fear of large-scale robot takeover and the wiping out of mankind (again, theoretically).

I also propose that a fellowship of nine members (four humans, an elf, a dwarf, two software engineers, and a software wizard) be on call 24 hours a day to ensure that we have a proper response in case a robot overlord ever does rise in the East.

They will carry “one ring-shaped magnet to wipe them all.” You know, to hedge our bets.