A single run of the algorithm of life was enough to invent all of the complexity found in nature. Using open-ended techniques, we may be able to construct machines that keep learning forever. This may even be possible in the absence of any new data, much as new theorems can be proved by positing new axioms.
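As a purely illustrative sketch of what such an open-ended loop can look like, the toy below grows an archive of behaviours without any external data: variants are generated from the archive itself and kept only if they are sufficiently novel. The genome representation, mutation rule, novelty measure, and threshold are all arbitrary assumptions for illustration, not a description of our methods.

```python
import random

def mutate(genome):
    """Produce a variant by adding small Gaussian noise to each gene."""
    return [g + random.gauss(0.0, 0.1) for g in genome]

def novelty(candidate, archive, k=5):
    """Novelty = mean Euclidean distance to the k nearest archived behaviours."""
    dists = sorted(
        sum((a - b) ** 2 for a, b in zip(candidate, other)) ** 0.5
        for other in archive
    )
    nearest = dists[:k]
    return sum(nearest) / len(nearest)

random.seed(0)
archive = [[0.0, 0.0]]            # start from a single trivial behaviour
for step in range(5_000):         # no external data arrives in this loop
    parent = random.choice(archive)
    child = mutate(parent)
    if novelty(child, archive) > 0.2:   # keep only behaviour not seen before
        archive.append(child)

print(f"archive grew to {len(archive)} distinct behaviours without new data")
```

The only driver of growth here is the archive's own contents, which is the sense in which learning can continue indefinitely without new input.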
Moving beyond deep learning, designing the next AI paradigms will require a better mathematical understanding of learning itself. We start by studying the emergence of goals in agents, with the eventual aim of building a general framework for the problem of credit assignment.
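For readers unfamiliar with the term, the snippet below shows one textbook instance of credit assignment: temporal credit assignment via discounted returns in reinforcement learning, where a reward received late in an episode is spread backwards over the earlier actions that led to it. This is only a standard illustration of the problem, not the general framework referred to above.

```python
def assign_credit(rewards, gamma=0.99):
    """Credit each time step with the discounted sum of rewards that followed it."""
    returns = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

# A sparse reward at the final step is distributed over earlier actions.
print(assign_credit([0.0, 0.0, 0.0, 1.0]))
# -> [0.970..., 0.980..., 0.99, 1.0]
```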
Animal societies find value in continually transferring knowledge between individuals. Our research on societies of AI agents shows new ways to parallelize computation, to learn collectively from one another, and to invent mechanisms that solve problems at a collective scale.
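A toy picture of such knowledge transfer is gossip-style parameter sharing: each agent learns from its own local data and occasionally averages its parameters with a randomly drawn member of the population. The one-dimensional estimator, update rule, and communication schedule below are assumptions chosen only to make the idea concrete.

```python
import random

def local_update(params, data_point, lr=0.1):
    """Each agent fits a 1-D mean estimator to the data it sees locally."""
    return params + lr * (data_point - params)

def gossip(population):
    """Knowledge transfer: each agent averages with a randomly drawn member."""
    peers = population[:]
    random.shuffle(peers)
    return [(a + b) / 2 for a, b in zip(population, peers)]

random.seed(0)
population = [0.0] * 10           # ten agents, all starting from scratch
for step in range(200):
    # Every agent observes its own noisy sample around a shared target of 3.0.
    population = [local_update(p, 3.0 + random.gauss(0, 1)) for p in population]
    if step % 10 == 0:            # occasional communication round
        population = gossip(population)

print([round(p, 2) for p in population])  # agents converge to similar estimates near 3.0
```

The communication rounds let noisy individual estimates pool into a more reliable collective one, which is the basic mechanism the paragraph alludes to.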
By observing diverse forms of intelligence, from human brains to many other living systems, we can translate biological processes into fundamental mathematical principles of intelligence.