This is what allows us to understand the bigger picture from minimal input. It’s something humans do with little thought – and the ability to do so with problems of ever greater complexity is surely one indicator of intelligence. But how we do it remains a mystery in neuroscience – and it presents a significant hurdle on the path to general AI.
For example, if you’re driving down the street and see a ball roll out into traffic, you instantly realize that a child might be right behind it. Seems trivial, yes? But how do you encode that in the AI algorithms that will one day drive your car? We somehow recognize millions of such patterns without ever thinking about it.
Our ability to successfully navigate the world we live in depends largely on associative intelligence, which seems to be analogous to Daniel Kahneman’s idea of “system 1” thinking – the kind that happens without much conscious thought.
Of course, this kind of thinking is also often notoriously wrong, making the whole thing even more inscrutable. This is the dilemma for AI researchers: how do you reverse-engineer that which you don’t understand?
I’m guessing by letting the AI recursively figure it out for itself – a frightening prospect, unless it could somehow be contained in a safe and controlled environment while it was learning.
Just letting my mind wander freely here, folks. Best keep moving – probably nothing useful happening here.