[Open] Transcendence from Singapore

How many years have you been competing in RCJ Soccer?

More than four years.

What’s the most interesting thing you’ve prepared for this year’s RoboCupJunior Soccer competition?

The most interesting thing would be developing our own camera system using the Raspberry Pi with the RPi camera, which includes the object detection algorithm and the GUI. Comparing our camera system with other commercially available camera modules (e.g. Pixy, OpenMV, Jevois), we are glad that it surpasses them all in terms of performance. Even under bad lighting conditions, we can track objects on the field consistently, including opponent robots.

Where can we find your poster?

It should be available at https://bozo.infocommsociety.com/assets/documents/poster.pdf

Also, the team has prepared this video: https://youtu.be/s2tOrOdC2fA


If you’d like to know more, please feel free to ask @bozotics by replying to this thread or check the team’s website/repository at https://bozo.infocommsociety.com/ !


So how do you combine sensor readings from the camera, mouse, TOF and line sensors?

I would like to ask where your simulated environment is available.
Thank you

@bozotics
Thank you so much for providing these resources! The robot looks incredible! I love the high-RPM dribbler design and the mind-blowing game simulator. Great job editing those videos - I enjoyed them very much. The tackling strategy appears to be very effective, especially in combination with a back dribbler. We wish you great success in RoboCup 2021.
I have some questions on behalf of my team.
First, we noticed that you have structural PCBs. Did you find that these limited your ability to make design iterations? The integrated solenoid circuits and motor drivers are very nice. Would you describe how these were designed? Why MOSFETs over relays?
Second, we were interested in how you manage robot vision. What camera input sanitization algorithms do you use? What criteria does your team use to identify goals?
Third, mouse sensors, TOF sensors, and Raspberry Pis are rather unusual. Have you found them to be more effective than more popular alternatives, such as various distance sensor variants and less powerful processors? The mouse sensor, TOF sensors, and camera add a lot of redundancy to your localisation ability. Which have you found the most reliable?
Finally, we would also like to know where to find your gameplay simulator.
If you have a discord account, please join the robocupjunior server: https://discord.gg/YN6w4rE
We would love to have more people join and make RobocupJunior even better.
Thank you again for sharing your amazing accomplishments.

Hi! Thanks for all the questions, do keep them coming!

Answering both @Ardi123’s and @bluskript’s questions about localisation: each of the sensors (camera, mouse, light, TOF) could give us an approximate location and orientation of the robot by itself. However, on their own they are inaccurate.
For example, the TOF might be unreliable if an object is in between our robot and the walls of the field; the mouse sensor might not accurately account for every displacement when the robot gets lifted up, etc.
Therefore we needed to combine their data to minimise the uncertainty. For the camera, we use the detected area and orientation of the field, as well as the goals, to estimate the approximate location. The mouse sensor returns a change in displacement, which is added to the previous location estimate. An accelerometer within the IMU complements the mouse sensor, since the mouse sensor is prone to “lifting up” and no longer tracking the ground. The light sensors indicate when the robot is on a line, which lets us narrow down the possible locations of the robot as well as the angle of the line relative to the robot.
We then implement a Kalman filter. We first predict the location of our robot based on its controls (the instruction to move at a certain speed and direction), then we update the Kalman gain (the weight of each sensor) based on the measurements we have taken and their associated uncertainty, and finally the Kalman gain is used to generate an estimate of our robot’s location.
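To make the predict/update cycle more concrete, here is a minimal Python sketch of the idea; the 2D position-only state, the noise values and the names are illustrative placeholders rather than our actual implementation.

```python
import numpy as np

# Minimal position-only Kalman filter sketch (illustrative values, not our real code).
class SimpleKalman:
    def __init__(self):
        self.x = np.zeros(2)          # estimated position (x, y) in cm
        self.P = np.eye(2) * 100.0    # estimate covariance: start very uncertain
        self.Q = np.eye(2) * 4.0      # process noise: how much we trust the controls

    def predict(self, velocity_cmd, dt):
        """Predict the new position from the movement command (speed and direction)."""
        self.x = self.x + velocity_cmd * dt
        self.P = self.P + self.Q

    def update(self, z, R):
        """Fuse one sensor's position estimate z, whose noise covariance is R."""
        S = self.P + R                     # innovation covariance
        K = self.P @ np.linalg.inv(S)      # Kalman gain: the weight of this sensor
        self.x = self.x + K @ (z - self.x)
        self.P = (np.eye(2) - K) @ self.P


kf = SimpleKalman()
kf.predict(velocity_cmd=np.array([30.0, 0.0]), dt=0.02)   # controls: 30 cm/s along x
kf.update(np.array([12.0, 3.0]), R=np.eye(2) * 25.0)      # e.g. camera: noisier
kf.update(np.array([11.0, 2.5]), R=np.eye(2) * 9.0)       # e.g. mouse sensor: tighter
print(kf.x)
```

Each `update` call pulls the estimate towards the sensor that currently has the smaller uncertainty, which is exactly the weighting behaviour described above.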

Unfortunately, we weren’t able to put our robot through intensive testing to determine which sensor is the most reliable and how much more effective it is compared to the conventional suite of sensors. We did find the TOF sensors to be superior to ultrasonic sensors, as they provided more accurate readings thanks to a much smaller beam angle. The mouse sensor has not been extensively tested yet, but it is very sensitive and we expect it to aid our localisation greatly. The Raspberry Pi outperforms less powerful processors by a wide margin, as can be seen from its low latency. And as described above, we feel these sensors complement each other to minimise error.

For the robot vision algorithm, we felt that minimal latency (16 ms) and a high frame rate (90 FPS) were the most important factors, so our algorithm was made as lightweight as possible. To achieve that, we detect objects simply by colour. In addition, we use a Kalman filter to predict the location of the ball in the event that it is concealed by another object. However, we knew that the process of finding the right colour thresholds would be very troublesome, so we created our GUI. This simplifies the process: the colour range for each object is found simply by dragging over the object we would like to track, and the program updates in real time to reflect the new values.
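As a rough illustration of detecting an object by colour, here is a short Python/OpenCV sketch; our actual pipeline runs on our own camera code, and the HSV range below is a placeholder of the kind the GUI would produce.

```python
import cv2
import numpy as np

# Hypothetical HSV range for the orange ball, as if exported from the GUI.
BALL_LOWER = np.array([5, 120, 120])
BALL_UPPER = np.array([20, 255, 255])

def find_ball(frame_bgr):
    """Return (cx, cy, area) of the largest orange blob, or None if nothing matches."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, BALL_LOWER, BALL_UPPER)
    # Remove small specks of noise so tiny reflections are not mistaken for the ball.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return m["m10"] / m["m00"], m["m01"] / m["m00"], cv2.contourArea(largest)
```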

To answer @bluskript’s questions:

We designed everything in CAD beforehand to ensure everything would fit together, and before purchasing the PCBs we cut acrylic plates in a local workshop just to check that our dimensions were right. Hence these “structural” PCBs, which act as plates holding our robot together, did not limit our ability to make design iterations.

We used MOSFETs over relays to switch the solenoid on and off as it is a DC circuit. We found that when using a relay, sparks would occur between the contact points within the relay due to the high power drawn by the solenoid, as well as the DC nature of the circuit, which means there is no zero-crossing point. MOSFETs are smaller and more suitable for switching DC circuits. In our design process, we needed to choose MOSFETs with a low Rds to reduce heat and power loss in the MOSFET, and the one we are currently using (75NF75 from ST) has an Rds below 10 mΩ. To achieve a low Rds we also needed a high Vgs, and the 3.3 V logic level was not sufficient, so we used another smaller MOSFET, driven from our 3.3 V logic level, to switch the 12 V battery voltage onto the gate and fully turn on this MOSFET. We also needed to ensure the inductive kickback from the solenoid (since it is basically a giant inductor) did not destroy our other circuitry every time the MOSFET switches off, so we used flyback diodes in series with Zener diodes for larger power dissipation, as well as an RC series (snubber) circuit at our MOSFET to dampen any transient voltages.
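To give a rough feel for why the low Rds matters, here is the back-of-the-envelope conduction-loss arithmetic; the 20 A pulse current is an assumed figure purely for illustration.

```python
# Conduction loss in the solenoid MOSFET during a kick (illustrative numbers).
I_PULSE = 20.0     # A, assumed solenoid pulse current for this example
R_DS_ON = 0.010    # ohms, ~10 mOhm as quoted for the 75NF75 at high Vgs
power_loss = I_PULSE ** 2 * R_DS_ON
print(f"{power_loss:.1f} W dissipated in the MOSFET during the pulse")  # 4.0 W
```

A MOSFET with ten times this on-resistance would dissipate ten times the power on every kick, which is why we picked one with Rds below 10 mΩ.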

The motor drivers are VNH5019s from ST, and we referred to the datasheets to design the circuit for them. Our design process was to find high-power motor drivers (with a current limit above 15 A) that could work with a 3.3 V logic level. Since we have had experience with ST’s motor drivers before, we went with the VNH5019. We tried the smaller VNH5050 before; however, its output voltage / duty cycle somehow did not scale linearly with the input duty cycle. We also considered constructing our own H-bridge; however, it would take up a lot more space and would probably be much less reliable.

For more information, our PCB design files can be found on our GitHub.

Lastly, to reply to @bluskript and @bukajlag: our simulation is available on our GitHub as well. However, do note that it is currently very primitive. We plan to improve upon it by adding a programmable API so other teams can use it more easily.

Congratulations, team! You have made quite good progress, all by yourselves.

I have these questions:

  1. I’m curious whether your playing algorithms include opponent or teammate detection, or collaboration.

  2. Can you give us more details about the sensor fusion algorithm you use? There are many sensor variables involved and just mixing them together seems to be quite a challenge.

Very nice work, guys! I especially liked your video! :slight_smile:


Hi there! Thanks so much for the questions.
To answer your 1st question:

We have come up with strategies that use opponent detection to our advantage. An example of this is shown in the bottom right corner of our poster as “Tackling”, where one of our robots positions itself between the ball and an opponent to reduce the situation to a 1v1. We have tested this strategy in our simulation software with some success.

In practical testing, we have been able to detect robots by detecting the field area, then finding large “non-green” areas that are neither the goals nor the ball. However, we have not been able to implement this on the actual robots yet due to the COVID-19 situation.

As for teammate collaboration, the robots can communicate with each other through a Bluetooth module. They send over their own estimated positions in absolute Cartesian coordinates. We use this information to help differentiate which “non-green” areas are our own robots and which are the opponents’.

They also send information about the ball’s absolute position to each other. Doing so ensures that both of them will know the ball’s position at all times, even if it is blocked from one of them.
Combining the information about the ball with the robot positions, we can also dynamically switch between attacker and defender roles based on which robot is closer to the ball. Such a role “switch” can be seen in the simulation in our video at 4:39.
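A minimal sketch of that decision might look like the following (hypothetical function and coordinates; the real robots exchange their estimated positions over Bluetooth and each run the check independently):

```python
import math

def assign_role(my_pos, teammate_pos, ball_pos):
    """Return 'attacker' if this robot is closer to the ball, otherwise 'defender'."""
    if math.dist(my_pos, ball_pos) <= math.dist(teammate_pos, ball_pos):
        return "attacker"
    return "defender"

# Example with absolute field coordinates in cm.
print(assign_role(my_pos=(30, 40), teammate_pos=(90, 100), ball_pos=(50, 60)))  # attacker
```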

To answer your second question: we have actually touched on sensor fusion above; however, we will try to go more in depth into the algorithm here.

We have many sensors on our robot; however, each sensor is inherently inaccurate. For example, our distance sensor readings can be useless when an opponent robot blocks them. Hence, we dynamically assign weights to each sensor based on the variance of its readings in the previous cycle. As an example, if the IMU suggests our robot’s bearing is straight and the TOF sensors’ readings add up to the total width of the field, we know that no robot is blocking the TOF sensors and that they are accurate. In this case, the TOF sensors’ data is therefore assigned a high weightage.
We also assign weights based on a sensor’s innate uncertainty. For example, through testing we know our light sensors consistently detect the line and do not give false positives. Hence, we assign a high weightage to the light sensors’ data, so whenever the light sensors detect a line we know that the robot is definitely near the edges of the field.
Upon assigning weights, we average the sensors’ data along that axis, and the sensor data with higher weights has a greater effect on the final result.
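For illustration, the weighted average along one axis could be sketched like this; the weights shown are placeholders, and in practice they are recomputed every cycle from each sensor’s recent variance.

```python
def fuse_axis(estimates):
    """Weighted average of (position_estimate, weight) pairs along one axis."""
    total_weight = sum(w for _, w in estimates)
    if total_weight == 0:
        return None
    return sum(p * w for p, w in estimates) / total_weight

# Example: the light sensors just saw a line, so their estimate dominates.
x = fuse_axis([
    (85.0, 0.2),   # camera estimate, low weight under bad lighting
    (91.0, 0.3),   # TOF estimate, partially blocked
    (95.0, 1.5),   # light-sensor-derived estimate near the line
])
print(round(x, 1))  # 93.4
```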

This also illustrates why we need to implement so many sensors. The light sensor, the most accurate tool for us, is only effective near the edges of the field. Another accurate sensor, the mouse sensor, is prone to the robot “jerking” and “lifting up”, which leaves it unable to detect the robot’s movement for long stretches of time. “Global” sensors, which can be effective anywhere on the field, also suffer from inaccuracies; for example, the camera is prone to lighting changes, the IMU (accelerometer) is prone to drifting after its values are integrated twice, and so on.

In reality we also add a Kalman filter for each sensor to take readings over time into account and produce more accurate estimates that are less prone to noise and other inaccuracies.
However, we have not been able to extensively test this algorithm yet due to the pandemic, so we cannot comment on its accuracy or effectiveness.

We hope these replies adequately answer your questions!
