Abstract: In the visual "teach-and-repeat" task, a mobile robot is expected to perform path following based on visual memory acquired along a route that it has previously traversed. Following a visually familiar route is also a critical navigation skill for foraging insects, which they accomplish robustly despite their tiny brains. Inspired by the mushroom body structure in the insect brain and its well-understood associative learning ability, we develop an embodied model that can accomplish visual teach-and-repeat efficiently. Critical to its performance is steering the robot body reflexively based on the relative familiarity of the left and right visual fields, which eliminates the need to stop and scan regularly for the optimal direction. The model is robust against noise in visual processing and motor control, and produces performance comparable to pure-pursuit or visual-localisation methods that rely heavily on position estimation. The model is tested on a real robot and is also shown to correct for significant intrinsic steering bias.
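The core steering idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the familiarity measure here is a simple nearest-memory pixel difference standing in for the mushroom-body associative network, and the function names (`familiarity`, `steering_command`) and the proportional gain are illustrative assumptions.

```python
import numpy as np

def familiarity(view_half, memory_bank):
    # Familiarity as the negative of the minimum mean absolute
    # difference to any stored view half (smaller difference =
    # more familiar). A crude stand-in for the learned
    # familiarity signal of the mushroom body model.
    diffs = [np.mean(np.abs(view_half - m)) for m in memory_bank]
    return -min(diffs)

def steering_command(current_view, memory_left, memory_right, gain=1.0):
    # Split the current view into left and right halves and
    # steer reflexively toward the more familiar side, so the
    # robot never has to stop and scan for the best direction.
    h, w = current_view.shape
    left = current_view[:, : w // 2]
    right = current_view[:, w // 2 :]
    f_left = familiarity(left, memory_left)
    f_right = familiarity(right, memory_right)
    # Positive command = turn left, negative = turn right.
    return gain * (f_left - f_right)
```

In this sketch the steering signal is continuous and proportional to the familiarity difference, which is what allows the closed-loop behaviour to absorb sensor noise and a constant motor bias: a biased robot simply settles at a heading where the familiarity difference cancels the bias.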