People have always found navigation challenges fun and entertaining. After all, there are thousands of books of maze puzzles, and countless games drop players into made-up worlds to explore and solve problems in. In both cases, players try to find the shortest path to the goal. Now machines are gaining this ability too. We're not talking about mindless machines that try every option until one works, but artificial intelligence that actually learns patterns to navigate unfamiliar spaces.
Scientists have identified three types of brain cells related to navigation ability: place cells, which memorize past locations; head-direction cells, which sense movement and direction; and grid cells, which divide the spatial environment into a honeycomb-like hexagonal grid, similar to the coordinate system on a map, and may support vector calculations for route planning. To see whether a machine could learn to navigate the way mammals do, DeepMind used AI techniques to test this vector-calculation hypothesis.
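To make the vector-calculation idea concrete, here is a minimal sketch (not DeepMind's code) of what such a computation looks like: given a current position and a goal position on a map-like coordinate grid, compute the straight-line distance and direction to steer. The function name and coordinates are illustrative assumptions.

```python
import math

def goal_vector(current, goal):
    """Illustrative sketch of a grid-cell-style vector computation:
    return the straight-line distance and heading (in radians) from
    the current position to the goal."""
    dx = goal[0] - current[0]
    dy = goal[1] - current[1]
    return math.hypot(dx, dy), math.atan2(dy, dx)

# An agent at (1, 1) heading for a goal at (4, 5):
distance, heading = goal_vector((1, 1), (4, 5))
print(round(distance, 2))               # → 5.0
print(round(math.degrees(heading), 1))  # → 53.1
```

A direct vector like this is exactly what lets an agent cut across open space instead of retracing a known route.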
Researchers trained a recurrent neural network with long short-term memory (LSTM) units to keep track of previous locations, directions, and speeds, using real-world movement data from studies of rodents. When the trained network was placed in a virtual-reality game environment, the AI actually learned to take shortcuts after discovering better routes, much like skilled human game players do.
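The core task the network learned is path integration: estimating where you are from a stream of self-motion signals (speed and heading), with no map. A minimal dead-reckoning sketch captures the idea; this is an illustrative stand-in, not the actual LSTM model, and the function name is an assumption.

```python
import math

def path_integrate(start, steps):
    """Dead-reckoning sketch: update an estimated (x, y) position from
    a sequence of (speed, heading) observations, the same self-motion
    signals the navigation network receives at each time step."""
    x, y = start
    for speed, heading in steps:
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
    return x, y

# Walk 2 units east, then 3 units north:
pos = path_integrate((0.0, 0.0), [(2.0, 0.0), (3.0, math.pi / 2)])
print(tuple(round(c, 2) for c in pos))  # → (2.0, 3.0)
```

In the DeepMind work, an LSTM learned this integration from data rather than being given the update rule, and grid-cell-like activity patterns emerged in its hidden units along the way.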