Hi! Thanks for all the questions, do keep them coming!
Answering both @Ardi123's and @bluskript's questions about localisation: each of the sensors (camera, mouse, light, TOF) could give us an approximate location and orientation of the robot on its own. However, by themselves they are inaccurate.
For example, the TOF readings can be unreliable if an object sits between our robot and the walls of the field, and the mouse sensor might not account for every displacement when the robot gets lifted up, etc.
Therefore we needed to combine these data to minimise the uncertainty. For the camera, we used the detected area and orientation of the field, as well as the goals, to estimate the approximate location. The mouse sensor returns a change in displacement, which is added to the previous estimate of the location. An accelerometer within the IMU complements the mouse sensor, since the mouse sensor is prone to “lifting up” and no longer tracking the ground. The light sensor indicates that the robot is on a line, which lets us narrow down the possible locations of the robot as well as the angle of the line relative to the robot.
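To give a feel for the mouse-sensor part, here is a minimal sketch of that dead-reckoning step, assuming the mouse sensor reports displacement in the robot's frame and the IMU gives a heading in radians. The function name and numbers are just for illustration, not our actual code:

```python
import math

def update_position(x, y, heading, mouse_dx, mouse_dy):
    """Dead-reckoning step: rotate the mouse sensor's displacement
    (measured in the robot's frame) by the IMU heading and add it
    to the previous position estimate (field frame)."""
    dx = mouse_dx * math.cos(heading) - mouse_dy * math.sin(heading)
    dy = mouse_dx * math.sin(heading) + mouse_dy * math.cos(heading)
    return x + dx, y + dy

# Example: robot at (100, 50) mm facing 90 degrees, mouse reports 5 mm forward
x, y = update_position(100.0, 50.0, math.radians(90), 5.0, 0.0)
```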
We then apply a Kalman filter. First we predict the location of our robot based on its controls (the instruction to move at a certain speed and direction), then we update the Kalman gain (the weight given to each sensor) based on the measurements we have taken and their associated uncertainties, and finally the Kalman gain is used to produce the new estimate of our robot's location.
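As a rough illustration of that predict/update loop (not our actual filter; the noise values here are placeholders and the state is reduced to just (x, y) position), a simplified linear Kalman filter might look like this:

```python
import numpy as np

dt = 0.02                       # loop period (s), assumed
x = np.zeros(2)                 # state: estimated position (x, y)
P = np.eye(2) * 100.0           # state covariance (initial uncertainty)
Q = np.eye(2) * 1.0             # process noise (uncertainty added by moving)
R = np.eye(2) * 25.0            # measurement noise (e.g. camera position estimate)

def predict(x, P, velocity_cmd):
    """Predict step: move the estimate according to the commanded velocity."""
    x = x + velocity_cmd * dt
    P = P + Q
    return x, P

def update(x, P, z):
    """Update step: compute the Kalman gain and blend in the measurement z."""
    K = P @ np.linalg.inv(P + R)   # Kalman gain: weight of measurement vs prediction
    x = x + K @ (z - x)
    P = (np.eye(2) - K) @ P
    return x, P

# One iteration: command to move at 500 mm/s in +x, camera says we are at (310, 400)
x, P = predict(x, P, np.array([500.0, 0.0]))
x, P = update(x, P, np.array([310.0, 400.0]))
```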
Unfortunately, we weren’t able to put our robot through intensive testing to determine which sensor is the most reliable and how much more effective it is compared to the conventional suite of sensors. We did find the TOF sensors to be superior to ultrasonic sensors, as they provided more accurate readings thanks to a much smaller beam angle. The mouse sensor has not been extensively tested yet, but it is very sensitive and we expect it to aid our localisation greatly. The Raspberry Pi outperforms other less powerful processors by a wide margin, as can be seen from its low latency. As described above, we feel these sensors complement each other to minimise error.
For the robot vision algorithm, we felt that minimal latency (16 ms) and high frame rate (90 FPS) were the most important factors, so our algorithm was made as lightweight as possible. To achieve that, we detect objects simply by colour. On top of that, we use a Kalman filter to predict the location of the ball when it is concealed by another object. However, we knew that finding the right colour thresholds would be very troublesome, so we created our GUI. This simplifies the process: the colour range for each object is determined simply by dragging over the object we would like to track, and the program updates in real time to reflect the new values.
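For anyone curious, colour-based detection along these lines can be as simple as the following OpenCV sketch. The HSV bounds here are placeholders (in our case they come from the GUI), and the function name is just for illustration:

```python
import cv2
import numpy as np

# Example HSV bounds for an orange ball; real values would come from the GUI
ORANGE_LO = np.array([5, 120, 120])
ORANGE_HI = np.array([20, 255, 255])

def find_ball(frame_bgr):
    """Threshold the frame by colour and return the centre of the largest blob."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, ORANGE_LO, ORANGE_HI)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None                                     # ball not visible this frame
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])   # (x, y) in pixels
```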