Hi there! Great work on your robot, especially the neural networks.
I'd just like to know more about the latency of your ball detection. Have you run into any issues so far? Did you measure the latency of your system?
I'm asking because our team has tried using Python to program the RPi and has used USB cameras, and we found that both the USB bus and Python's interpreter were major bottlenecks.
Hello! First of all, thank you for checking out my work.
This is a good question. I use the CSI interface to communicate with the camera, and since I am using a Raspberry Pi 4 there are two USB 3.0 ports available, so USB is not a bottleneck. I also have a USB Coral Accelerator (https://coral.ai/products/accelerator/), which communicates with the Raspberry Pi through one of these USB 3.0 ports to accelerate the neural network. With this setup I can achieve 27 FPS, which is good enough. There were some compatibility issues caused by problems with compiling (https://coral.ai/docs/edgetpu/compiler/#system-requirements), but fortunately I was able to overcome them.
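To put a number like 27 FPS on a pipeline, it helps to time the whole capture-plus-inference step rather than the model alone. Here is a minimal, hardware-independent sketch of such a measurement; the `process_frame` callable is a hypothetical stand-in for the actual detection step, not code from the robot:

```python
import time

def measure_fps(process_frame, frames, warmup=5):
    """Measure average per-frame latency (ms) and FPS of a detection pipeline.

    process_frame: callable that runs detection on one frame (hypothetical).
    frames: iterable of frames to process.
    """
    frames = list(frames)
    # Warm-up runs so one-time costs (model load, caches) don't skew the timing
    for f in frames[:warmup]:
        process_frame(f)
    start = time.monotonic()
    for f in frames:
        process_frame(f)
    elapsed = time.monotonic() - start
    latency_ms = 1000.0 * elapsed / len(frames)
    fps = len(frames) / elapsed
    return latency_ms, fps
```

Averaging over many frames (after a warm-up) smooths out jitter from the USB bus and the Python interpreter, which is exactly where the questioner saw bottlenecks.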
Excellent question,
I haven’t used the omnidirectional mirror this year, but it will definitely come up next year.
Unfortunately, I don’t have more playing fields to test on, and there are no competitions I could go to because of the COVID-19 situation, but I will test it as soon as possible. So far, I have changed the environment by moving between different rooms in the house, and the results looked good. If a problem did appear, I could enlarge the dataset, which should help. I have also started working on my own architecture specializing in RoboCup, which could help as well.
Unfortunately, I was not able to find a teammate before the COVID-19 lockdown in my country, but I did not give up and have done all the work myself. I do not consider this an advantage over the other teams. However, I am aware that this is supposed to be a team competition, and I will try to find a teammate in future years; due to this year’s special circumstances, I sadly was not able to do so.
2.
As written here: https://www.kaggle.com/jakubgal/robocup-junior-open-xlc
we use this cheap camera, which has a large viewing angle that is still below the limit. We are able to detect our teammate, the ball, and the goal (distance and angle relative to the camera). We detect our teammate using a simple but unusual black-and-white pattern that is easy for our neural network to learn and, at the same time, makes our robot easy to distinguish from the opponent. Moreover, because we use a neural network for detection, neither lighting conditions nor most interfering colors are a problem for us. It also reduces the robot's setup time.
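The distance and angle mentioned above can be recovered from a detection's bounding box with the pinhole camera model. A minimal sketch follows; the frame width, field of view, and ball diameter are illustrative values I chose for the example, not the actual calibration of this robot:

```python
import math

# Illustrative values, not the robot's actual calibration
IMG_WIDTH_PX = 640          # camera frame width in pixels
FOV_DEG = 120.0             # horizontal field of view of the wide-angle lens
BALL_DIAMETER_M = 0.074     # an RCJ soccer ball is roughly 74 mm across

# Focal length in pixels from the pinhole model: f = (w / 2) / tan(FOV / 2)
FOCAL_PX = (IMG_WIDTH_PX / 2) / math.tan(math.radians(FOV_DEG / 2))

def ball_angle_deg(box_center_x):
    """Horizontal angle (degrees) from the camera axis to the detected ball."""
    offset_px = box_center_x - IMG_WIDTH_PX / 2
    return math.degrees(math.atan2(offset_px, FOCAL_PX))

def ball_distance_m(box_width_px):
    """Distance estimate (meters) from the apparent pixel size of the ball."""
    return BALL_DIAMETER_M * FOCAL_PX / box_width_px
```

A detection centered in the frame yields an angle of zero, and the smaller the bounding box, the farther away the ball is estimated to be.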
3.
We use ultrasonic sensors to verify that we did not accidentally cross to the other side of the line when the robot hits it. For example, when the robot hits a line at 90 degrees to its right, it checks whether the distance measured to the right is smaller than the one to the left; if it is not, it additionally checks whether the smaller of the two distances is below a set constant (as happens in front of the goals). Basically, it works as a checking mechanism for the light sensors.
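That checking mechanism can be sketched roughly as follows. The function name, the threshold constant, and the exact left/right interpretation are my assumptions for illustration, not the robot's actual code:

```python
# Assumed threshold: side-wall distance that indicates the robot is in a
# narrow region such as the area in front of the goals (illustrative value)
NEAR_WALL_THRESHOLD_CM = 30.0

def still_inside(hit_side, dist_left_cm, dist_right_cm):
    """Plausibility check after the light sensors report a line hit.

    hit_side: "left" or "right" -- the side on which the line was detected.
    Returns True if the ultrasonic readings are consistent with the robot
    still being on its own side of that line.
    """
    if hit_side == "right":
        hit_dist, other_dist = dist_right_cm, dist_left_cm
    else:
        hit_dist, other_dist = dist_left_cm, dist_right_cm
    # Expected case: the wall on the hit side is the closer one
    if hit_dist < other_dist:
        return True
    # Otherwise, accept the reading only if the closer wall is still nearer
    # than the threshold (as in front of the goals)
    return min(dist_left_cm, dist_right_cm) < NEAR_WALL_THRESHOLD_CM
```

Used this way, a line hit whose ultrasonic readings fail the check can be treated as a false trigger of the light sensors rather than as a real boundary crossing.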