Chaos filter helps robots make sense of the world
The Oxford group’s FabMap software tackles those problems by having a robot assign a visual “vocabulary” of up to a thousand individual “words” to each scene it sees, once every two seconds.
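For readers who want a feel for what “assigning words to a scene” means in practice, here is a minimal sketch of the standard bag-of-visual-words step: each local image feature is snapped to its nearest entry in a fixed vocabulary and the scene becomes a tally of word counts. The descriptor format, the vocabulary, and the nearest-centroid quantisation are illustrative assumptions, not the group’s actual implementation.

```python
# Illustrative sketch only, not FabMap's real pipeline: reduce a scene to
# counts over a fixed visual "vocabulary" by snapping each local feature
# descriptor to its nearest visual word.
from collections import Counter

import numpy as np


def describe_scene(descriptors: np.ndarray, vocabulary: np.ndarray) -> Counter:
    """Return visual-word counts for one scene.

    descriptors: (n_features, d) local feature descriptors from the image
    vocabulary:  (n_words, d) cluster centres acting as the visual words
    """
    counts: Counter = Counter()
    for desc in descriptors:
        # Nearest visual word by Euclidean distance.
        word_id = int(np.argmin(np.linalg.norm(vocabulary - desc, axis=1)))
        counts[word_id] += 1
    return counts
```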
The “words” describe particular objects in a scene, for example a bicycle seat, and the software learns to link words that occur together into groups that are given words of their own. The word “bicycle seat”, say, is almost always found alongside the words “bicycle wheel” and “bicycle chain”, so the three are linked together into a so-called “bag of words” – “bicycle”.
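The article doesn’t say how FabMap learns these associations internally, so the sketch below only illustrates the grouping idea in the simplest possible way: count how often pairs of words appear in the same training scene and merge strongly co-occurring words into one group. The co-occurrence threshold and the union-find merge are assumptions made for this example, not FabMap’s actual learning procedure.

```python
# Minimal sketch of the grouping idea: visual words that show up together in
# many training scenes get merged into one higher-level group ("bicycle").
from itertools import combinations


def group_cooccurring_words(scenes: list[set[int]],
                            min_cooccurrence: int) -> dict[int, int]:
    """Map each word id to a group id, merging words that appear together
    in at least `min_cooccurrence` of the training scenes."""
    parent: dict[int, int] = {}

    def find(w: int) -> int:
        parent.setdefault(w, w)
        while parent[w] != w:
            parent[w] = parent[parent[w]]  # path halving
            w = parent[w]
        return w

    def union(a: int, b: int) -> None:
        parent[find(a)] = find(b)

    # Count how often each pair of words co-occurs in a scene.
    pair_counts: dict[tuple[int, int], int] = {}
    for words in scenes:
        for a, b in combinations(sorted(words), 2):
            pair_counts[(a, b)] = pair_counts.get((a, b), 0) + 1

    # Merge the strongly co-occurring pairs into shared groups.
    for (a, b), n in pair_counts.items():
        if n >= min_cooccurrence:
            union(a, b)

    return {w: find(w) for w in parent}
```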
That means when the robot revisits a scene that now lacks, say, a bicycle, it notes a single change rather than the disappearance of many smaller features. That prevents too much significance being attached to the bike’s disappearance and means the robot is more likely to recognise the scene as familiar, says Newman.
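To see why that helps at revisit time, the sketch below compares two scenes by which groups are present rather than by individual words, so a missing bicycle counts as one absent group instead of many absent features. The Jaccard similarity and the example word IDs are illustrative stand-ins, not FabMap’s actual place-recognition score.

```python
# Sketch of the payoff at revisit time: compare scenes by the groups present,
# so the loss of one object (a whole group of words) counts as a single change.
def scene_similarity(words_a: set[int], words_b: set[int],
                     word_to_group: dict[int, int]) -> float:
    groups_a = {word_to_group.get(w, w) for w in words_a}
    groups_b = {word_to_group.get(w, w) for w in words_b}
    if not groups_a and not groups_b:
        return 1.0
    return len(groups_a & groups_b) / len(groups_a | groups_b)


# Hypothetical example: words 1-3 (seat, wheel, chain) form the "bicycle"
# group; the revisited scene lacks all three but still matches strongly.
word_to_group = {1: 1, 2: 1, 3: 1}
original = {1, 2, 3, 10, 11, 12, 13}   # bicycle plus four background words
revisited = {10, 11, 12, 13}           # bicycle gone, background unchanged
print(scene_similarity(original, revisited, word_to_group))  # 0.8
```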
Video of this bot is posted below the break because its shitty ad autoplays.