Thoughts on Intelligence and AGI, after reading this brilliant, insightful blog.
With all the recent success of DL and RL (superhuman performance in medical diagnosis, complicated games like Go), the overzealous enthusiasm of popular-science media has created this feeling of fear and inferiority amongst the general public - our jobs are at stake, and that AI - in the form of swarms of super-intelligent droids - will take over the planet (remember the uproar after the news of two FB bots ‘creating’ their own language?).
We as researchers in this area are well-aware that this couldn’t be further away from the truth, and that true ‘intelligence’ is nowhere close to be achieved. And it is our duty to quell all the unfounded fears of the general public. Quoting the aforementioned blog,
computers are making more and more complex decisions, and deep learning is overtaking human experts in many tasks. However, this doesn’t mean that we are close to achieving true AI, or that we are even going in the right direction. One of my favourite quotes on the subject is from Dijkstra, who said:
“The question of whether machines can think… is about as relevant as the question of whether submarines can swim.”
Submarines are in many ways superior to any fish or animal at moving through water, but we would never call what they do swimming. The question is not relevant because it is simply a question of what we define as swimming. Conversely, a computer can do many mental tasks like arithmetic, far better than any human, but if our definition of ‘thinking’ does not include computers, the question becomes pointless. Therefore, in order to know how close we are to Artificial Intelligence, we need a good definition of intelligence that can potentially include both humans and computers.
So first of all, we need to define what we call ‘intelligence’. With time, the definition has evolved significantly since the increasing abilities of machines does not sit well with human ego. As the post says,
Originally, being able to remember a lot of facts was enough to be considered intelligent. As external technology like printing made it easier for objects to remember things for us, the focus shifted more towards being able to calculate. Calculation became the domain of computers, so we started defining intelligence more by the ability to understand and make connections between facts and calculations. Computers are also getting better and better at those tasks, so instead people talk more about ‘Emotional intelligence’ which is supposedly “far more useful than the old mechanistic intelligence that even stupid machines have”.
Doesn’t this sounds like the famous Zeno’s paradox of Achilles and the Tortoise Paradox? As machines approach human intelligence, we conveniently shift the ability further away. All the more reason to fix the definition of what we consider intelligence.
The post then goes on to enlist certain metrics that intelligence should be measured on, namely:
- Long-term memory
- Direct memory (think working memory, like RAM)
- Unbiased logic
- Biased logic (based on priors, intuitions, drawing conclusions)
- Abstraction
As of today, machines beat us soundly on dimensions 1-3. 4 and 5 are where we still shine, and we’re trying to enable machines to catch up on those with ongoing research on priors and generalization. A weighted combination of measurable scores on these dimensions would result in a well-rounded score, one that truly reflects true intelligence.
The post then goes on to propose that the ongoing efforts should be categorized into creation of:
- Artificial Intelligence - A computer that can solve complex mental tasks as well, or better than humans.
- Artificial Humans - A computer that can act, respond and ‘think’ like a human being.
Now we come to an aspect on which I disagree with the author. He suggests that for the second kind of machines to pass off as human by the Turing Test^, they should actually decrease their capacity in any dimension where it currently surpasses us. The justification to this is that if one’s Turing Test partner were able to compute and recite a million digits of pi, or remember every word it had ever heard, then one could easily identify that ‘person’ as a computer. And hence, we need to reduce the current machines’ abilities of long-term and direct memory, as well as unbiased logic.
I do not agree. My view might appear sci-fi-esque, but its possibility cannot be ignored. What if the intelligent system were so intelligent that it could ‘dumb down’ its extensive knowledge and capabilities to a level of humans to pass the Turing Test?
‘Ex Machina’ comes to mind?
Close, but not the same idea - Ava successfully manages to convince Caleb that Nathan, her creator, is the villain, despite Caleb knowing he was brought there to test Ava for these abilities. The idea being that an intelligent system could ‘fool’ the test-takers if it were intelligent enough.
Food for thought.
^ an interpretation of which is -
> “If you can talk to a computer on any topic, for any length of time, and still you are unable to tell if you are talking to a computer or a human, then you know that computer is intelligent.”