Types of Autonomous Robots

https://www.superdroidrobots.com/shop/custom.aspx/autonomous-types-of-autonomous-robots/117/

Summary

Unmanned Ground Vehicles (UGVs) exist on a broad spectrum of autonomous capability, ranging from simple to complex. Simpler autonomous robots use no sensors, or only a few inexpensive ones, while more advanced robots carry expensive high-end sensors and require many hours of programming, testing, and tuning.

Open Loop/Sensorless

The most basic autonomy possible. The robot blindly repeats a set of pre-programmed motions without any sensor feedback. Without sensors, the UGV doesn’t know where it is and can’t detect obstacles around itself. UGVs using this approach aren’t very useful, so open-loop autonomy is not common for these robots. Industrial robot arms, on the other hand, can be effective with open-loop operation in applications where they just need to perform the same repetitive motion. Even then, this requires extensive calibration and tuning, and the environment must be kept in the state the robot expects.

When we sell programmable robots without a remote control method, we often use an open-loop approach and program them to cycle through a set of preset movements, e.g. drive forward, drive backward, turn left, turn right. The robots in the following video are examples of this.
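
As a rough sketch of that pattern, the Python snippet below cycles through a hard-coded list of timed motions with no sensor feedback. The set_motors() helper and the speed values are hypothetical placeholders for whatever motor interface a given robot actually uses.

import time

# Hypothetical motor interface: set_motors(left, right) takes wheel speeds
# from -1.0 (full reverse) to 1.0 (full forward).
def set_motors(left, right):
    print(f"left={left:+.1f} right={right:+.1f}")  # replace with real motor driver calls

# Each preset movement is (left speed, right speed, duration in seconds).
PRESET_MOVES = [
    ( 0.5,  0.5, 2.0),   # drive forward
    (-0.5, -0.5, 2.0),   # drive backward
    (-0.3,  0.3, 1.0),   # turn left in place
    ( 0.3, -0.3, 1.0),   # turn right in place
]

def run_open_loop():
    # No sensors: the robot simply replays the same motions over and over.
    while True:
        for left, right, duration in PRESET_MOVES:
            set_motors(left, right)
            time.sleep(duration)
        set_motors(0.0, 0.0)  # brief pause between cycles
        time.sleep(1.0)

if __name__ == "__main__":
    run_open_loop()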

Line Following

Line following is a method of autonomous movement in which the UGV follows a line on the ground. Common approaches for detecting the line include optical sensors that detect the color of the line and magnetic sensors that detect magnetic tape. The line layout can simply go from point A to point B or scale up to a complex network of forks and waypoints. When manually driven, the robot may move freely off the line. This method is far easier and cheaper to develop than free-roaming navigation. The trade-offs are that the line or tape must be placed beforehand (and maintained) and that the robot’s autonomous movement is restricted to the lines.

The robot in the video below is a line follower that uses the RoboteQ MGS1600GY Magnetic Guide Sensor to follow magnetic tape on the ground. It demonstrates stopping at waypoints and selecting routes at forks.
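
To give a feel for how simple the control loop can be, here is a minimal proportional line-following sketch in Python. The read_line_offset() function is a hypothetical stand-in for a real sensor driver (not the MGS1600GY interface itself) that reports how far the detected line sits from the robot's centerline.

import random
import time

def read_line_offset():
    # Placeholder: return the detected line offset in meters
    # (negative = line is to the left, positive = line is to the right).
    return random.uniform(-0.05, 0.05)

KP = 4.0          # steering gain: rad/s of turn per meter of offset
BASE_SPEED = 0.3  # forward speed in m/s while tracking the line

def follow_line_step():
    offset = read_line_offset()
    angular = -KP * offset       # steer back toward the line
    return BASE_SPEED, angular   # drive command: (linear m/s, angular rad/s)

if __name__ == "__main__":
    for _ in range(10):
        linear, angular = follow_line_step()
        print(f"cmd: linear={linear:.2f} m/s, angular={angular:+.2f} rad/s")
        time.sleep(0.1)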

Person/Target Following

The UGV moves toward or follows a person or target. This is accomplished most cheaply and effectively using computer vision. This class of robot does not need a full positioning system; it only needs to know where it is relative to the target it’s following.

One robot with this capability is the Mini-Vision Robot, a tracked ROS robot whose vision sensors include the Intel RealSense T265 Tracking Camera and D435i Depth Camera.
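
The core control idea is simply to turn toward the target and hold a set following distance. The sketch below abstracts the vision pipeline behind a hypothetical detect_target() function; the gains and distances are illustrative assumptions, not values from the Mini-Vision Robot.

import math

FOLLOW_DISTANCE = 1.5   # desired standoff distance in meters
K_LINEAR = 0.8          # forward-speed gain per meter of range error
K_ANGULAR = 1.5         # turn-rate gain per radian of bearing error

def detect_target():
    # Placeholder: return (bearing_rad, range_m) to the target, or None if lost.
    return (math.radians(10.0), 2.4)

def follow_step():
    detection = detect_target()
    if detection is None:
        return 0.0, 0.0                                    # target lost: stop
    bearing, distance = detection
    linear = K_LINEAR * (distance - FOLLOW_DISTANCE)       # close the range gap
    angular = K_ANGULAR * bearing                          # turn to face the target
    return max(0.0, linear), angular                       # never back into the target

if __name__ == "__main__":
    print(follow_step())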

UGVs that don't use Mapping/SLAM

The UGV can move freely but doesn’t maintain an obstacle map of its environment. The UGV may implement an obstacle detection system and use this information to prevent collisions. The lack of a map allows the use of less powerful computers and sensors, but it prevents the robot from reliably navigating around obstacles or planning paths that avoid previously encountered ones. Navigation decisions are made using only the information currently visible to the robot’s sensors.

If the UGV has a positioning system, it can travel to waypoints. The video below shows our Mini-IPS Robot, which runs on ROS, traveling between user-defined waypoints. The robot positions itself using the Marvelmind Indoor Positioning System and wheel encoders, and it is also equipped with a 2D Lidar that it uses for obstacle detection.
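
A map-less waypoint controller can be quite small. The sketch below steers toward a goal using only the current position fix and halts when the lidar reports something too close; get_pose() and min_lidar_range() are hypothetical stand-ins for a positioning system and a lidar driver, not the Mini-IPS Robot's actual code.

import math

STOP_RANGE = 0.5      # halt if an obstacle is within half a meter
GOAL_TOLERANCE = 0.1  # consider the waypoint reached within 10 cm
K_ANGULAR = 1.2       # turn-rate gain per radian of heading error

def get_pose():
    return 0.0, 0.0, 0.0        # placeholder: (x, y, heading) from the positioning system

def min_lidar_range():
    return 3.0                  # placeholder: nearest lidar return in meters

def step_toward(goal_x, goal_y):
    if min_lidar_range() < STOP_RANGE:
        return 0.0, 0.0          # obstacle ahead: stop (no map, so no re-planning)
    x, y, heading = get_pose()
    distance = math.hypot(goal_x - x, goal_y - y)
    if distance < GOAL_TOLERANCE:
        return 0.0, 0.0          # waypoint reached
    bearing = math.atan2(goal_y - y, goal_x - x) - heading
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))  # wrap to [-pi, pi]
    return 0.3, K_ANGULAR * bearing   # drive forward while turning toward the goal

if __name__ == "__main__":
    print(step_toward(2.0, 1.0))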


UGVs that use Mapping/SLAM

The UGV moves freely while generating and maintaining a map of obstacles encountered in its environment, usually with a SLAM algorithm. Mapping inherently equips the robot with powerful positioning and obstacle detection systems: the positioning system enables real-time tracking of the robot’s location and waypoint travel, and the navigation system can use the map to plan a path to a waypoint that avoids any previously encountered obstacles. These features greatly enhance the capabilities of the robot.
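
On a mapped ROS robot, commanding a waypoint often amounts to sending a goal to the navigation stack and letting the planner route around mapped obstacles. The sketch below assumes a ROS 1 setup with the standard move_base stack; it is a generic example, not the exact software running on our robots.

#!/usr/bin/env python
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def go_to_waypoint(x, y):
    # Connect to the move_base action server provided by the navigation stack.
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = "map"      # goal expressed in the SLAM map frame
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = 1.0     # arbitrary arrival heading

    client.send_goal(goal)                        # planner finds a path around mapped obstacles
    client.wait_for_result()
    return client.get_state()

if __name__ == "__main__":
    rospy.init_node("waypoint_demo")
    go_to_waypoint(2.0, 1.0)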

Our Autonomous Security Robot uses a 3D Lidar for 3D SLAM, as well as an IMU, encoders, and a GPS, allowing positioning both relative to its environment and in GPS coordinates. With this system, the user can add waypoints and paths on the fly and save them to the robot to load and execute whenever needed.
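
Persisting those user-defined waypoints can be as simple as writing them to a file on the robot. The snippet below shows one way to do that with PyYAML; the file name and waypoint fields are assumptions for illustration, not the robot's actual storage format.

import yaml

WAYPOINT_FILE = "waypoints.yaml"   # hypothetical on-robot file

def save_waypoints(waypoints):
    # waypoints: list of dicts like {"name": "dock", "x": 1.2, "y": 3.4}
    with open(WAYPOINT_FILE, "w") as f:
        yaml.safe_dump(waypoints, f)

def load_waypoints():
    with open(WAYPOINT_FILE) as f:
        return yaml.safe_load(f) or []

if __name__ == "__main__":
    save_waypoints([{"name": "gate", "x": 5.0, "y": 2.0},
                    {"name": "dock", "x": 0.0, "y": 0.0}])
    for wp in load_waypoints():
        # Each loaded waypoint could then be sent as a navigation goal.
        print(f"waypoint {wp['name']}: ({wp['x']}, {wp['y']})")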

The Autonomous Agriculture Robot in the video below uses 2D Lidars for 2D SLAM when indoors and a ZED depth camera for 3D SLAM when outdoors. The positioning system also fuses measurements from encoders, an IMU, and RTK GPS. The robot can travel to user-defined waypoints while avoiding both expected and unexpected obstacles along the way.
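
To give a flavor of what that sensor fusion buys you, here is a toy 1D Kalman filter that blends dead-reckoned odometry with an occasional absolute fix such as RTK GPS. It only illustrates the idea; it is not the multi-sensor filter the robot actually runs, and the noise values are made up.

def predict(x, p, delta, motion_noise):
    # Advance the position estimate by the odometry increment; uncertainty grows.
    return x + delta, p + motion_noise

def update(x, p, measurement, measurement_noise):
    # Correct the estimate with an absolute position measurement.
    k = p / (p + measurement_noise)   # Kalman gain: how much to trust the fix
    x = x + k * (measurement - x)
    p = (1.0 - k) * p
    return x, p

if __name__ == "__main__":
    x, p = 0.0, 1.0                          # initial position estimate and variance
    odometry_steps = [0.10, 0.11, 0.09, 0.10]
    gps_fixes = [0.12, None, 0.31, None]     # the fix arrives more slowly than odometry
    for delta, fix in zip(odometry_steps, gps_fixes):
        x, p = predict(x, p, delta, motion_noise=0.02)
        if fix is not None:
            x, p = update(x, p, fix, measurement_noise=0.05)
        print(f"position={x:.3f} m, variance={p:.3f}")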

Additional Information

Our sensor support page provides more in-depth information about sensors that are suitable for autonomous projects. Which sensors you need depends on many factors, such as the operating environment and the autonomous actions the robot will be performing. Our goal is to provide detailed information on each sensor’s strengths and weaknesses so you can make informed decisions moving forward. Of course, if any questions or concerns arise during your research, don’t hesitate to post on our forums for help!