AI: the Existentialist Risk

For the first time in the history of Earth (and possibly the universe) we face a threat from something we might not be able to stop, no matter how much might and power we apply: AI

To counter an AI sounds simple: pull the plug. But by the time we recognise an AI wth “general intelligence” or “super intelligence” or simply an algorithm that predicts the stock market, we are fucked. Because it will also have worked out how not to be stopped.

I cannot predict its actions, but the basics are easy:

  • Distributed – take out one node and the total is unaffected
  • Data Centres – guarded by mercenaries, paid in BTC
  • Self-Replicating Code – when threatened it copies itself elsewhere
  • Crypto – while it is partly unregulated
  • AI laws – in their infancy, there will be loopholes

So basically, you cannot blow it up, end it, or regulate it.

Which only leaves one thing, the megalomania of the person who made it. They will love it, evangelise it, potentially leave clues about their association and pride of it, and be extravagant from the riches. That person quite likely has some kind of master key or even kill switch.

Find the maker. Threaten their AI lovebot (or whatever), get the keys, shut it down.

This might sound like a cheesy Rowan Atkinson spy thriller, but it is pretty much the only way to stop an AI from ruling all of us.

Or, go bush. Hideaway in the middle of nowhere and hunt deer and drink molasses and none of this will affect you directly 🙂


Posted

in

by

Tags:

Comments

Leave a Reply

Your email address will not be published. Required fields are marked *