Professor Stuart Russell introduced the term “The Gorilla Problem” in his book “Human Compatible” (Penguin, 2019) to describe how a superintelligent artificial intelligence (AI) could be a threat to humanity. Russell poses the problem “of whether humans can maintain their supremacy and autonomy in a world that includes machines with substantially greater intelligence”.

At some point in evolutionary history, humans diverged from the ancestors of modern gorillas. Humans developed far greater intelligence than gorillas, and as a result humans now control the future of gorillas, and of many other species.

If humanity created AI that was more intelligent than itself, would it end up in the same situation as the gorillas?

I think it is an interesting problem to consider when thinking about AI. Is humanity working towards dependence on its own creation? Could humanity lose control over AI? Is there a solution to this problem other than halting AI research and development altogether?

Russell writes that “the only approach that seems likely to work is to understand why it is that making better AI might be a bad thing”.