Clarification on the Use of AI

Dear RoboCup Community,

We would like to provide a clear explanation regarding the use of AI tools in the competition, as we have noticed that many teams are still unsure about what is permitted under the current rules.
According to the official rules, the following section outlines the relevant definitions and conditions:

4.1. Terms and Definitions

  1. Tool: The term “tool” is a comprehensive concept that encompasses both hardware and software components essential for the operation of robots. These can include physical components such as sensors, actuators, or controllers, as well as software elements like algorithms or libraries.
  2. Calibration: Calibration refers to the process in which a team intervenes to adjust or fine-tune the settings of a tool.
  3. Development: Development refers to activities aimed at creating new solutions, technologies, or systems, as well as enhancing existing ones through innovation and creative problem-solving. In this case, for example, calibration is not considered development since it involves fine-tuning or configuring an existing system without introducing new features, technological advancements, or innovations.
  4. Tools are allowed as long as they are developed by the team, or when they cannot independently complete a task, or part of a task, that enables the robot to earn points by sending a signal to the controller without further development (e.g., color sensors, cameras, or libraries necessary for sensor operation).
  5. Tools that are not developed by the team and that can independently complete a task, or part of a task, that enables the robot to earn points by sending a signal to the controller without further development (e.g., line-following sensors, AI cameras, OCR libraries) are prohibited.

1. Use of AI Models
A frequently discussed topic is the use of pre-trained AI models.
According to the rules, pre-trained models are not allowed unless the team is able to demonstrate further development and a solid understanding of the model’s internal structure and functionality.
For example, using a model like YOLO is permitted only if the team understands how it works — including its architecture — and is capable of performing meaningful development. This can include modifying the network architecture, changing the loss function, or adapting the model in other significant ways.
Annotating images and retraining a model is a valuable and important task, but by itself, it does not qualify as development under the current rules, unless it is accompanied by deeper changes and improvements to the model.
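To make this concrete, the sketch below shows one way such architecture-level development might look with the Ultralytics API. It is purely illustrative, not an official template or a guarantee of compliance; the `custom_yolov8s.yaml` and `rescue_dataset.yaml` file names are placeholders for a team's own files.

```python
from ultralytics import YOLO

# Build the network from a team-modified architecture definition
# (a copy of the stock yolov8s.yaml with layers the team changed or added),
# then transfer the compatible pre-trained weights into it.
model = YOLO("custom_yolov8s.yaml").load("yolov8s.pt")

# Retrain on team-collected data. Note: this training step alone,
# without the architectural change above, would not count as development.
model.train(data="rescue_dataset.yaml", epochs=100, imgsz=640)
```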
In summary: a pre-trained model may be used only when the team can demonstrate meaningful development beyond annotation and retraining, together with a solid understanding of how the model works.

2. Use of AI Cameras
Another important topic is the use of cameras with built-in AI.
We understand that many modern camera modules now come with integrated AI functions by default, and that some teams may rely on such hardware.
After careful consideration, the committee has decided the following:

  • The hardware itself is not the issue
  • The concern lies in the use of built-in AI features that can perform tasks independently

Therefore, we will not fully prohibit the use of such cameras. However, to maintain fairness, teams that use cameras with built-in AI functionality will be subject to more detailed inspection during the competition to verify that those features are not being used.
We hope this post helps clarify the expectations regarding AI use in the competition.
If you have any questions or concerns, please reach out here on the forum.

Kind regards,
Csaba,
2025 Committee


Dear Committee,

I am part of a team that will be competing in the RoboCup Junior World Championships in the Rescue Line category.

Earlier this year, to detect victims and identify the silver line, we decided to retrain a YOLO model using images we collected ourselves, encouraged by the post “2025 RCJ Rescue Rule Changes”, which allowed the use of models such as YOLO.

We tested various YOLO models (YOLOv8s, YOLOv8n, YOLOv11s, YOLOv11n) and ultimately chose to use YOLOv8s. Although we understand how the model works, we haven’t modified it. We have summarised this entire process in our TDP, which we submitted on June 28.
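For reference, the evaluation looked roughly like the following simplified sketch (the dataset file name and epoch count are placeholders, not our exact configuration):

```python
from ultralytics import YOLO

# Candidate checkpoints as named by Ultralytics (YOLO11 drops the "v").
candidates = ["yolov8n.pt", "yolov8s.pt", "yolo11n.pt", "yolo11s.pt"]

for weights in candidates:
    model = YOLO(weights)
    # Fine-tune each candidate on our own images.
    model.train(data="rescue_line.yaml", epochs=50, imgsz=640)
    metrics = model.val()  # evaluate on the validation split
    print(weights, metrics.box.map)  # mAP50-95, used to compare the variants
```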
However, after the above clarification (published on June 29), we are no longer certain whether our solution complies with the rules. Given the limited time before the competition (approximately fifteen days), we are unable to change our approach (e.g., replace YOLO with other algorithms) and remain competitive.

We would therefore like to ask: what options do we have to comply with this rule clarification?

Sincerely,
Tobia Petrolini


Hi Tobia and Committee,

Our team is also participating in RoboCup Junior Rescue. Almost two weeks ago, we posted on this forum regarding our use of YOLO: “Using YOLO correctly?”.

In that post, we explained our implementation of YOLO, as we noticed some conflicting information across the forum. Some posts state that using YOLO is allowed, while others say that “fine-tuning” is not, even though fine-tuning is, as we understand it, an essential part of how YOLO is typically used.

We haven’t received a response from the organizers yet. As Tobia mentioned, there’s little time left before the competition, so we would really appreciate your guidance on this matter.

Best regards,
Ernesto González.


Dear Committee,

We are a team competing in the Rescue Maze category at this year’s world championship.

From the beginning, we have used EdgeImpulse to train our TensorFlow model for victim detection. Since the default CNN architecture provided strong results and we had a solid understanding of TensorFlow, we initially saw no reason to modify it.

However, following the clarification posted on June 29, we realized that some aspects of our setup might not fully align with the updated interpretation of the rules. In response, we carefully adjusted a few hyperparameters and added a small number of custom layers to the model architecture, as sketched below. All modifications are documented, and the documentation of these changes will be provided to the jury at the world championship.
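To make the nature of the changes concrete, here is a simplified Keras sketch of the kind of modification we made. The base network, input size, layer sizes, and class count are illustrative placeholders, not our exact configuration:

```python
import tensorflow as tf

# Stands in for the default feature extractor generated by EdgeImpulse;
# MobileNetV2 and the 96x96 input are assumptions for this sketch.
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights="imagenet")

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(64, activation="relu"),    # added custom layer
    tf.keras.layers.Dropout(0.3),                    # added custom layer
    tf.keras.layers.Dense(3, activation="softmax"),  # output classes (placeholder)
])

# Adjusted hyperparameters: e.g., a lower learning rate than the default.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
```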

We would like to ask whether these changes, made only after the clarification was published and with the intention of ensuring compliance, are acceptable within the scope of the updated rules.

Sincerely,
Florian Wiesner
B.Robots

Dear Teams,

We have received many inquiries regarding the interpretation and application of Rule 4.1 on AI usage and team classification.

Please refer carefully to the AI Compliance Buckets and criteria provided in the official communication sent to all teams. We ask that each team review the documentation and identify the category that best fits its implementation.

If you believe your team’s current classification does not fully reflect your work and that you qualify for a higher-level category, you are encouraged to submit an additional review request.

The deadline for submitting these requests is July 14, 23:59:59 UTC via the “AI Compliance Bucket Review Request” form on the CMS.

Furthermore, if your team undertakes additional development or work before the competition to meet the criteria for a higher category, as some teams have planned, we are open to reconsidering your classification based on those improvements.

Our goal is to maintain fairness and uphold the educational mission of RoboCupJunior, and we appreciate your continued cooperation and transparent communication.

Thank you for your understanding and dedication.

Best regards,
RoboCupJunior Rescue Committee

Dear RoboCup Junior Rescue Committee and Community,

First of all, I would like to thank the committee for all the effort and care put into organizing this year’s competition. I’m writing this post not as a competitor, since I have just completed my final year in Rescue Line and will be starting college next fall, but as someone who cares about the future of this league.

As was suggested during the competition in Salvador, I’m writing to share some thoughts on the topic of AI use, particularly in light of this year’s clarification, which was released less than two weeks before the competition. While this post isn’t about this year’s timing, I believe it’s important to reflect on how AI use could be handled going forward, especially as its role in robotics becomes increasingly relevant and unavoidable.

Our team, Airborne, used YOLO-based models for victim and silver line detection. We collected thousands of images using a custom capture system we developed directly on the robot, retrained the models multiple times with different configurations, and deployed them in real-world field conditions. Beyond that, we opened two issues in the Ultralytics repository related to bugs we encountered, and also submitted a pull request to improve another part of the framework. While these contributions weren’t changes to the model architecture itself, they were essential to making the models usable for our application. In other words, we contributed to the development of the surrounding infrastructure, something that is often just as important as model training. At the end of all of this, we were placed in bucket 5, meaning we received only a 7% bonus (out of a possible 10%).

Looking ahead to 2026 and beyond, I’d like to offer a few suggestions and reflections:

  • AI should not be restricted simply because it is effective and “simple to use” or “plug and play”. Penalizing or limiting the use of tools just because they perform well risks discouraging teams from applying real-world, cutting-edge solutions to relevant problems. RoboCup Junior should be a space for learning, creativity, and innovation, not for avoiding the best tools available. (I do believe that Neural Networks have already surpassed traditional computer vision models when it comes to perception).

  • Development should be understood more broadly. It’s not just about changing the architecture or the loss function of a model. Contributions like building custom data collection methods, solving integration bugs, contributing to open-source tools, or adapting models to embedded hardware all represent real, meaningful and time-consuming development work. These efforts should be recognized, not discouraged.

  • If the goal is to promote originality and avoid using too many pre-trained models or tested architectures, then the challenges themselves can be designed to require more context-aware or creative solutions. In other words, if there is a tool that can already easily solve a part of the competition ruleset, it might be a better idea to evolve the ruleset to be better solved using other “more desirable” methods.

  • Pre-trained models and standard architectures are not shortcuts when used responsibly. They are starting points, just like sensors, motors, or libraries, and should be treated as such. What matters is how teams adapt and integrate them, not whether they built everything from scratch. (I haven’t seen many teams messing around with copper coils to create their motors from scratch :wink:)

Finally, I know from experience how much effort teams put into preparing for this competition, and I believe clear, forward-thinking rules will help teams plan and innovate with confidence. I hope the committee continues to provide an environment where exploration of AI is welcomed, not penalized. A reasonable compromise would be to redesign parts of the competition so that they cannot be solved so effectively with AI/neural networks alone.

Thanks again for your attention and consideration! I’m open to discussing this further. Please note that this is just my opinion/suggestion and not intended as criticism in any way.

Best Regards,
Francisco Gaspar

Team Airborne

TL;DR: applying import taxes on the line “from ultralytics import YOLO” may not be the way forward. At this rate, we’ll need a visa to import YOLO and a license to debug it.
(This TLDR is a joke, please do read the full points made above, thank you : )
