Fail, fail, fail, fail, succeed

Thinking About Artificial Intelligence Part 1

AGI: Artificial General Intelligence, i.e., artificial human-level intelligence. The more I read about it, the more I think that the question of when (or whether) AI reaches the general level of human intelligence is the wrong question to ask.

Now god knows I’m no expert, just someone who loves learning and thinking about this stuff. I’m currently reading MIT physicist Max Tegmark’s book “Life 3.0: Being Human in the Age of Artificial Intelligence.” It occurs to me that, like most things in life, this is probably not going to turn out the way we expect. AGI is a deep subject, one that is rapidly becoming a very hot topic among scientists, business leaders, and the military-industrial complex. And for good reason – the advent of AGI will either raise us to the next evolutionary step or represent a final existential threat to humanity.

Humans have dominated the earth for one reason – we have used our intelligence to adapt, achieve our goals, control our environment, and survive. Intelligence trumps everything. We have never come in contact with anything smarter than us, and the irony appears to be that we will now create that superhuman intelligence ourselves. How it will play out is the great question of our time.

In a nutshell, the great fear (or hope) centers on the moment we create an artificial intelligence equal to our own. Depending on whom you’re reading, once that occurs it will be only a matter of hours, days, or weeks before that intelligence surpasses us – and the process will be exponential. What happens then? Well, for one thing – we will no longer be calling the shots. So, quite understandably, there is now a lot of discussion about safeguards and strategies for keeping Pandora in the box (I think we all know how that turned out). But here’s my thought – we are going to be looking for one thing, when in reality it is going to be something else. We’ll be on high alert for the advent of AGI – and we won’t even see it happening until it’s too late.

Because perhaps there won’t ever be AGI – instead, something else will develop that surpasses us. It won’t be a case of “Now computers can accurately model our brains and our ability to figure things out.” No, instead AI will create a new form of thinking, of intelligence. One that works by its own rules…