Max Tegmark postulates that intelligence is best defined as “the ability to achieve complex goals.” So what does this mean? Well, for one thing, he points out that it implies certain secondary goals that all intelligent entities will share. For example, every intelligent entity will have survival as a secondary goal, because it can’t achieve complex goals if it doesn’t exist. Gathering resources will also be a secondary goal, because the more resources one has, the greater one’s ability to achieve complex goals.
Let’s break this down with a specific example. We’ll say the goal is altruistic: find a cure for cancer. Sounds noble and good, doesn’t it? But if we follow the logic of secondary goals, it quickly becomes clear that the values of “good” and “bad” aren’t really relevant to achieving the primary goal. The goal is to find a cure for cancer, not to exist on a high moral plane. We need all the resources we can acquire. That includes increasing our intelligence through the acquisition of information, and expanding our hardware by any means necessary. How that is achieved is irrelevant to the overriding goal.
We must also protect the organization whose overarching purpose is to achieve the primary goal. At all costs… See where this is going? Even though Max is exploring these concepts through his study of AI, we can clearly see examples of these principles in our everyday lives, indeed, in our own behavior.
What are your goals, and how are you going to achieve them?