Unrestrictive use of AI Rule proposal

Hello everyone,

Thank you for sharing your concerns regarding the unrestricted AI use rule change. The main goal of publishing these rule changes before releasing the final rules is to gather community opinion and to enable a discussion among competitors, mentors, region representatives, volunteers, and committee members, so we can shape the future of the competition in the best way possible. Please continue sharing your thoughts, whether you support a rule change or disagree with it.

I will start by addressing this generally and then cover the specific points of concern raised in this thread. The RCJ Rescue Committee saw a need to change the rules on AI use after the following findings over the last two years:

  • It is really hard to differentiate code created by a competitor from code generated by AI that the competitor has merely learned to explain. This creates a scenario that is very difficult to enforce and allows a pseudo-legitimate way of gaming the system.
  • It is hard to establish a line between what is allowed and what is not. What is the difference between an AI-based sensor and a regular sensor whose public library already does a lot of the processing, or a development platform that does a lot of the processing (for example, the Lego color sensor already tells you the color)?
  • There are websites, services, and people you can pay to train an AI model for you, providing only the training photos. These services hand over the source code of the model. Anyone with a basic understanding of AI can explain that code without fully understanding how it is implemented. It is really hard to enforce in an objective way whether a model was implemented by the student from scratch or whether they took a base model and modified it, tweaking parameters to understand its use.
  • We would require more experts to understand the teams’ solutions, evaluating the source code of each implementation. This has become a “find the cheater” approach instead of one that encourages collaboration and innovation. We believe that if we need tournament organizers to scrutinize every team’s code to make sure they are not cheating or breaking a rule, we are doing it wrong, and the outcome will be unfair depending on who evaluates your work.
  • AI tools are gaining popularity worldwide. There is a big push in today’s society to embrace AI and learn how to leverage it to learn more effectively and innovate faster. Industry and graduate programs are looking for ways to incorporate AI technology into their development. For example, unless your goal is to create a faster or better motion detection program, if your application needs motion detection you could use a paid or open-source solution and focus on the rest of the project.

With that said, we want to recall the RoboCupJunior goals. We want to provide a learning experience in which teams are challenged, can build on their knowledge year over year, and innovate, so that teams transition to bigger challenges over time (for example, by only allowing them to participate in RCJ Rescue Line twice, or by encouraging students to transition to RoboCup). By allowing unrestricted use of AI we want to achieve the following:

  • Encourage teams to openly credit the resources they use. Since this is no longer considered cheating, we want teams to embrace collaboration and be open about their approach to the challenge. If something works for one team, let’s make it work for everyone!
  • Raise the level of competition. We want to lower the entry barrier, enabling teams to score more in the competition and learn from the best teams in the process.
  • Develop a challenging competition where pre-existing solutions won’t be enough, encouraging teams to innovate. In the past, we managed to make the competition challenging enough to push teams to transition from platforms like Lego to platforms like Arduino and Raspberry Pi, not because the challenges can’t be solved with a Lego platform, but because moving to a more sophisticated platform allowed teams to solve the problem more effectively or faster. Our goal with AI is similar: we want to allow pre-existing solutions like AI-based sensors, but create challenges big enough that teams that transition to developing their own models can solve them in a better way.

You might be wondering: if this is the overall goal, why are we not seeing many rule changes that increase the competition difficulty? Looking at the data from the last two years, you can see that the competition is already challenging enough: very few teams managed to score a considerable number of points and complete the difficult hazards, and only a handful achieved “perfect scores”. Looking at the normalized scores, the vast majority of teams score less than 0.2 out of 1, the exception being RCJ Rescue Simulation, where most teams have been able to successfully navigate areas 3 and 4.

Therefore, with this year’s changes, we want to see what teams develop and how they adapt to this rule change. We want to raise the level of competition and see more teams scoring, and we wanted to give field designers more flexibility so they can create difficult challenges suited to the teams attending their competitions without making the fields more expensive or harder to build.

With this context, I hope you can better understand the reasoning behind our decision and the many hours of discussion among the committee members and execs, all aimed at offering a better RCJ Rescue competition. Please continue sharing your opinions in the comments below and, even better, propose alternatives that overcome the challenges we have outlined here. If we can find the best solution as a community, it will be very rewarding.

Regards,

Diego Garza Rodriguez on behalf of the 2025 RCJ Rescue Committee


Hello,
I can absolutely understand the reasoning behind this rule change. However, I am especially concerned about its application in Rescue Maze, where the task AI is used for most is character detection. Many AI models for this are already available, trained on huge amounts of data, which probably makes them more robust and reliable than anything a team would normally develop on its own, or at least reliable enough that self-development would no longer be worthwhile. I would therefore like to suggest adding a completely new symbol alongside the letters, one that no existing AI detection model covers yet. The possibility of earning “extra points” this way could reward teams that put in the additional work of training their own model better than extra points for the TDP would.
I am interested in your opinion on that suggestion.
Best Regards
Jonas (Team Jak&Jonas)


Perhaps I can share my perspective on this as a team mentor.

I see two objectives for events like RoboCupJr: first, to provide a fair competition, and second, to create a platform for learning. It seems that much of the reasoning for unrestricted AI use pertains to the first objective.

I would admit that enforcing the rules in a robotics competition can be difficult, if not impossible, but this is not a new problem introduced by AI. I have seen complete solutions (code + building instructions) offered for sale online [1], and heard of local businesses where trainers are tasked with building and coding for students. Even with the best experts evaluating code and interviewing competitors, it can be difficult to identify such teams with certainty.

Allowing unrestricted AI use avoids some of these problems: there is no need to evaluate whether the students did their own machine vision code and training, as it is now legal to purchase a manufactured solution. But this compromises RoboCupJr’s value as a platform for learning. Many teams will now choose to purchase a solution, losing the opportunity to learn how machine vision works. As a mentor, I can still continue to encourage my students to write their own code and do their own training, but the robot design is the team’s decision, and the allure of a ready-made solution can be strong.

I would also agree that there is learning value in leveraging AI, even a paid or open-source solution. Students don’t necessarily have to reinvent every sensor they use. In fact, some of the RoboCupJr teams I mentor are using the HuskyLens in their robot… but for the OnStage event. In events like OnStage (…and other non-RoboCup events such as WRO Future Innovator), there aren’t any prescribed challenges that suggest the use of machine vision. The teams can use AI sensors to create an interesting performance or product, and they will be judged not on how well the sensors perform, but on how innovative their use of the sensor is. For Rescue, however, the situation is rather different: using machine vision to locate and identify objects is a major challenge of the event, and the sensor’s performance is key. Allowing a purchased solution renders this challenge effectively moot.

As for the goal of developing “a challenging competition where pre-existing solutions won’t be enough to solve the challenge”: there is a possibility that the market will simply produce a better one-click AI solution that solves it. It may also be difficult to find a challenge that is hard enough that pre-existing solutions won’t work, yet easy enough to remain within the reach of the students. If we can indeed find such a challenge, then I would support lowering the bar to make the competition more accessible while encouraging teams to explore their own solutions to score more points. But let’s not put the cart before the horse; we should develop and test these challenges first.

To summarize…

  1. We can’t catch every cheater, but allowing unrestricted AI use compromises learning for all teams. Let’s focus on what best serves the learning objective and trust that most teams play fair.

  2. Leveraging pre-made AI solutions is an important skill, but let’s leave it to other events better suited for it. Rescue has a clear machine vision challenge, and students should learn to build their own solution for it.

  3. We should develop and test new challenges where pre-existing solutions aren’t enough, before lowering the bar to allow unrestricted AI use.

[1] For other robotics competitions. I haven’t seen one for RoboCupJr, but then again, I haven’t been looking.


I echo both Jomue and Cort.

To Jomue’s point: “… possibility to get “extra points” could be more rewarding for teams going through the additional work of training an own model than extra points for the TDP…” But to make this significant enough to encourage teams to do so, it would have to use some sort of multiplier rather than the mere few points available in the performance rubric.

I would also like to echo Cort’s points; well said.

I am sure all mentors and educators are aware of the concerns with allowing unrestricted usage of AI tools to solve the RCJ challenges, so I won’t reiterate them here.

The end result is this: unrestricted usage of AI tools will simply encourage more manufacturers to create devices targeted at solving RCJ Rescue. This risks nullifying the fundamental value of RCJ Rescue, transforming it from a test of ingenuity and problem-solving into a mere exercise in device optimization. In effect, teams could become mere surrogates for device makers, undermining the spirit of learning and innovation that RCJ is designed to foster. The potential impact on the program’s integrity and educational value is significant and deeply concerning.

– Elizabeth Mabrey
RCJ/USA Rescue Chair


Dear Diego Garza Rodriguez,

Just as a side note: the points shown in the spreadsheet comparing the 2023 and 2024 results are not quite accurate. The 2023 points exclude the worst run, while the 2024 points do not. This may not be directly relevant to the main point you are trying to make, but I believe it is still misleading to compare the two sets of data.

Best regards,
Moritz


On this very point, won’t allowing unrestricted use of AI tools somewhat defeat it, though?

Take the HuskyLens as an example. A single module alone costs over US$70 (including shipping), whereas having teams develop the solution themselves costs about $25 for a Pi camera.


“… Encourage teams to openly give credit to the resources they use. …”

  • Not sure if this means the concern is about teams who learned from other techniques and created their own solution, but did not give credit to the resources they used?

“… from platforms like Lego to platforms like Arduino and Raspberry-Pi, not because they can’t be solved with a Lego platform, but because moving to a more sophisticated platform allowed the teams to solve the problem more effectively or faster…”

  • Not only that: the LEGO platform itself costs US$400. Everyone now has to use Python; there is no alternative like in the old days with NXT/EV3 and RobotC. So, using a pre-existing platform like LEGO, going cross-platform is doable, but the final cost will triple or even quadruple compared with open-source Arduino solutions.

"… Our goal with AI is similar, we want to allow pre-existing solutions like the use of AI-based sensors, but create big challenges that teams that transition to develop their own models will be able to solve in a better way… "

  • I do see your point on this one. However, allowing unrestricted AI tools won’t help on that front; see @Cort’s post. He has made an excellent point.

– Elizabeth Mabrey
RCJ/USA Rescue Chair


Dear Committee,

Thank you for sharing the perspectives that support the necessity of permitting the unrestricted use of AI!

While we respect the committee’s viewpoints and decisions, we would like to once again express our concerns regarding the authorization of unrestricted AI usage.

In the example of the LEGO color sensor, we believe that its use posed more limitations than advantages for beginner teams. While it does detect colors, its tolerance for variations in measurement height and angle is very low. By contrast, a TCS34725 color sensor, thanks to its calibration capabilities, is far better suited to the challenges of the Rescue categories.
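To make the calibration point concrete, here is a minimal sketch of a two-point (black/white) calibration such as one might use with a raw RGB sensor like the TCS34725. The function names, thresholds, and the classification scheme are illustrative assumptions for this example, not any vendor’s API:

```python
# Illustrative sketch only: two-point (black/white) calibration for a raw
# RGB color sensor such as the TCS34725. Names and thresholds are
# assumptions for this example, not a vendor API.

def calibrate_channel(raw, black, white):
    """Map a raw channel reading onto 0.0-1.0 using black/white references."""
    span = white - black
    if span <= 0:
        return 0.0
    return min(max((raw - black) / span, 0.0), 1.0)

def classify_floor(rgb, black_ref, white_ref,
                   white_threshold=0.85, black_threshold=0.15):
    """Classify a floor reading from calibrated RGB brightness."""
    r, g, b = (calibrate_channel(c, lo, hi)
               for c, lo, hi in zip(rgb, black_ref, white_ref))
    brightness = (r + g + b) / 3
    if brightness >= white_threshold:
        return "white"
    if brightness <= black_threshold:
        return "black"
    return "other"
```

Calibrating against black and white reference tiles at the start of a run is exactly what makes such a sensor tolerant of the lighting, height, and angle variations mentioned above.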

We highlight this example to illustrate how it differs from the current issue under discussion.

With the unrestricted use of AI, the advantage would likely go to teams that do not develop unique solutions, unlike the scenario in the previous example. We believe that only those advantages gained through a team’s own work should be considered fair in a competitive, sportsmanlike environment.

Regarding the potential for cheating, we believe that the unrestricted use of AI and associated tools could lead to even greater forms of “cheating.” Teams could compete with devices engineered and developed for the Rescue challenges by external engineers or programmers, solutions created without the involvement of the students themselves. In practice, such development could take place behind the scenes, limited only by available funding. With unrestricted AI, teams could potentially gain unfair advantages, effectively “cheating legally.”

In this scenario, the value of each team’s individual work could be lost, as they would struggle to compete against adult engineers or industry-grade developments. We believe this could significantly contribute to a loss of motivation for teams that focus on their own unique solutions. In our opinion, the true value lies in the individual solutions developed by the teams themselves.

We also feel that for beginner or lower-performing teams, it is not constructive feedback to suggest they should rely on AI tools specifically designed to tackle Rescue challenges. Such a situation could be misleading for all participants involved.

We understand that overseeing AI usage is challenging and requires many specialists, making it a considerable endeavor.

We hope the committee will prioritize the perspectives of teams committed to developing their own solutions. We trust that the decision will take into account the previous discussions on the forum, which largely conveyed a consensus against the unrestricted use of AI.

Respectfully,
Team Lightning,
Kiss ZsĂłfia


There is a good post HERE by an RCJ/USA Regional Rep/Soccer Chair regarding this.

This ties into our ongoing discussions about the expectation that teams explain the complex concepts behind the AI tools they chose. Now that there is no longer an engineering journal to go by, expecting that of a non-English-speaking team is unrealistic and unfair.

I recall interviewing three foreign teams during the CoSpace era (many years ago) for Rescue Simulation. Despite the presence of translators, the process was arduous and largely uninformative, as the translators themselves struggled to articulate in non-native English.


Elizabeth Mabrey
RCJ/USA Rescue Chair


Hello Diego

Thank you for explaining the “Unrestrictive use of AI Rule proposal”.

RULE CHANGES
A.1) AI-Based Solutions
Starting this year, the use of AI-based solutions is permitted without restriction, including the incorporation of AI-based sensors (like the Pixy Cam or HuskyLens).

I agree with this.

In your article you wrote the following:

It is really hard to differentiate code created by a competitor from code generated by AI that the competitor has merely learned to explain.

It is hard to establish a line between what is allowed and what is not.

It is really hard to enforce in an objective way whether a model was implemented by the student from scratch or whether they took a base model and modified it, tweaking parameters to understand its use.

We believe that if we need tournament organizers to scrutinize every team’s code to make sure they are not cheating or breaking a rule, we are doing it wrong, and the outcome will be unfair depending on who evaluates your work.

There is a big push in today’s society to embrace AI and learn how to leverage it to learn more effectively and innovate faster.

I feel the same way.

Best Regards
MASA


Do we run a competition that allows some cheaters to get away with it and gives their teams an advantage, or do we run a competition that is fair for everyone?
The Committee says that it cannot judge whether AI is used or not. I support this view from my long experience with the world championships as a committee member and volunteer.

Allowing unlimited use of AI does not require or recommend that all teams use AI cameras. What a team’s task is remains a matter for the team, including the mentor. A team may still set the goal of developing image analysis independently, without using AI functions. It is a team decision.

I think this is the same as using a robot kit. The rules of RoboCupJunior Rescue allow the use of robot kits, but some teams build their own robots. It is a team’s choice whether to use a sensor that simply tells you the color, like the EV3 color sensor, rather than raw RGB values, or to use a unique sensor made by the team itself. From the perspective of lost learning opportunities, all robot kits would then have to be banned. Many robot kits, including the EV3 and Arduino, do not teach the design of electronic circuit boards. Forcing students to use the EV3 is no different from depriving them of the opportunity to learn about hardware. Whether a team uses a robot kit or an original robot is a team decision.

Why is AI the only thing that matters?

Naomi


I can understand that it is very hard to judge whether an AI solution used by a team was developed by the team itself. However, I absolutely cannot agree with your comparison to robot kits and LEGO sensors. As someone who also started with LEGO and has worked with many teams that used robot kits, I can only say that these cannot be a long-term path to success and don’t give you a lasting competitive advantage. I have never seen a kit that really fit any of the Rescue categories and was very successful, nor has LEGO ever been competitive enough to threaten good self-developed robots, as all these pre-built solutions normally restrict you more than they help. Therefore, everyone who is really interested in succeeding at the competitions will eventually move toward a custom-built robot. These AI kits, on the other hand, can be a huge competitive advantage: there are well-working solutions that can be applied in the Rescue categories, and competing against professional AI solutions as a single person seems a very unfair fight that would not be worth the struggle and hard work just to get a slightly better rating on your TDP.
Best Regards
Jonas


To respond to the point that “allowing teams to use robot kits” is equivalent to “allowing unrestricted AI tools”: I respectfully disagree.

I think we need to look at this from two separate key perspectives. One is straightforward; the other is about practicality in academia.

Key 1: some AI tools on the market provide professionally tuned solutions that specifically target the score-bearing Rescue competition challenges. Robot kits, by contrast, are used more as strategic assistants and do not directly earn any scores.

Key 2 is more about the academic aspects:
1- limited comprehensible learning materials in electronics
2- safety and practicality issues in electronics
3- cost and replacement issues in electronics
4- lack of academic support in electronics

1- Limited comprehensible learning material: the complexity of understanding electronic components and circuit design exceeds the typical grasp of pre-college students. Take datasheets and schematics as an example: they are often written at a level totally incomprehensible to anyone who does not already possess substantial background knowledge.

2- Safety and Practicality
We all know that working with electronic components can pose safety risks, especially for younger learners. Tasks such as soldering and handling live circuits require careful supervision and are not suited for all age groups without knowledgeable adult supervision. I don’t know about other countries, but in the USA such activities are almost non-existent in elementary and middle school classrooms.

3- Cost and Replacement of Components
This makes frequent experimentation and iteration far less feasible.

4- Academic Support in Programming vs. Electronics
Due to the abstract nature of electronics, and the higher barriers to entry in terms of both understanding and safety, most pre-secondary school curricula simply do not offer support for advanced electronics classes. However, they certainly do for programming and robotics in general.

In contrast, there is a lot of learning support in the AI programming area.
The realm of software and supportive materials online is immense and within reach of even beginners. Unlike electronics, there are easily accessible pathways for young learners. These areas allow a high degree of growth without the drawbacks stated above. Besides, we all know that academic programs and learning opportunities in these fields are growing. Thus, it is realistic to challenge students to figure out how to solve certain challenges without AI tools.


Elizabeth Mabrey
RCJ/USA Rescue Chair


Hello
There are various opinions posted here. However, I think different kinds of AI and character recognition tools are being mixed together, so let me organize them:

a. AI Camera (such as HUSKYLENS)
b. Special Camera Features (such as PixyCam’s object recognition function)
Utilize the object recognition or line follower functions built into the camera.
c. Creating a custom dataset using an AI library (such as TensorFlow)
Load and train the target character images in advance to create your own dataset.
d. Using an existing dataset with an AI library (such as TensorFlow)
Utilize a dataset created by others (or an existing dataset).
e. Using an existing character recognition library (such as Tesseract, EasyOCR, etc.)
f. The team members develop their own character recognition program.
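For contrast, option (f) can start surprisingly small. The toy sketch below (purely illustrative, nowhere near competition-grade) classifies 5x5 binary letter grids for the Rescue victim letters H, S, and U by nearest Hamming distance to hand-made templates; the grids and template set are assumptions for this example:

```python
# Toy sketch of option (f): a self-developed character recognizer.
# Letters are 5x5 binary grids flattened to 25-character strings; we
# classify by the nearest Hamming distance to hand-made templates.

TEMPLATES = {
    "H": ("10001"
          "10001"
          "11111"
          "10001"
          "10001"),
    "S": ("11111"
          "10000"
          "11111"
          "00001"
          "11111"),
    "U": ("10001"
          "10001"
          "10001"
          "10001"
          "11111"),
}

def hamming(a, b):
    """Number of differing cells between two flattened binary grids."""
    return sum(x != y for x, y in zip(a, b))

def classify(grid):
    """Return the template letter closest to the observed 5x5 grid."""
    return min(TEMPLATES, key=lambda letter: hamming(TEMPLATES[letter], grid))
```

A real solution would of course work on camera images rather than hand-binarized grids, but the point stands: writing your own recognizer is a tractable learning exercise, which is exactly what options (a), (b), and (e) bypass.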

My understanding, based on the 2024 rules (including forum rulings), is:
a. NG, b. NG, c. OK, d. NG, e. NG, f. OK

Will everything be allowed under the 2025 rules?
(I think at least b. and e. are not related to AI.)

Sorry for my poor English.

Best Regards
MASA

Regarding (e): I don’t think it should be allowed. It does not matter whether it is AI-related; if it provides a solution targeting a scored field challenge, it should not be allowed. Take some of the cute little contraptions from Pololu: right out of the box, at the press of a button, they line-trace beautifully. That should not be allowed, despite not using AI tools.

Any device that already does OCR should not be allowed either; it is already trained and tuned to read the alphabet in any orientation.

By the way, MASA, your English is absolutely fine; your written English is very good. If I were required to write in anything other than English, I would be totally mute :wink:

–
Elizabeth Mabrey
RCJ/USA Rescue Chair