Unrestricted use of AI rule proposal

Hello everyone,

Thank you for sharing your concerns regarding the unrestricted AI use rule change. The main goal of publishing these rule changes before releasing the final rules is to gather community opinion and to enable a discussion between competitors, mentors, region representatives, volunteers, and committee members, so we can shape the future of the competition in the best way possible. Please continue sharing your thoughts, whether you agree with a rule change or disagree with it.

I will start by addressing this generally and then cover the specific points of concern in this thread. The RCJ Rescue Committee saw the need to change how AI use is regulated after the following findings over the last two years:

  • It is really hard to differentiate code created by a competitor from code generated by AI that the competitor has then learned to explain. This creates a rule that is very difficult to enforce and leaves a pseudo-legitimate way of gaming the system.
  • It is hard to establish a line between what is allowed and what is not allowed. What is the difference between an AI-based sensor and a regular sensor with a public library that already does a lot of the processing, or a development platform that does a lot of the processing (for example, the Lego color sensor already tells you the color)?
  • There are websites, services, and people you can pay to train an AI model for you, providing only the training photos; these then hand back the source code of the model. Anyone with a basic understanding of AI will be able to explain that code without fully understanding how it is implemented. It is really hard to verify objectively whether a model was implemented by the student from scratch or whether they took a base model and merely tweaked its parameters.
  • We require more and more experts to understand the teams’ solutions, evaluating the source code of each implementation. This has become a “find the cheater” approach instead of one that encourages collaboration and innovation. We believe that if we need tournament organizers to constantly understand every team’s code to make sure they are not cheating or breaking a rule, we are doing it wrong and it will be unfair depending on who is evaluating your work.
  • AI tools are gaining popularity around the world. There is a big push in today’s society to embrace AI and learn how to leverage it to learn more effectively and innovate faster. Industry and graduate programs are looking for ways to incorporate AI technology into their development. For example, unless your goal is to create a faster/better motion detection program, if your application needs motion detection you could use a paid or open-source solution and focus on the rest of the project.

With that said, we want to recall the RoboCupJunior goals. We want to provide a learning experience in which teams are challenged, can build on their knowledge year over year, and innovate, so that they can transition to bigger challenges over time (for example, by only allowing them to participate in RCJ Rescue Line twice, or by encouraging students to transition to RoboCup). By allowing unrestricted use of AI we want to achieve the following:

  • Encourage teams to openly give credit to the resources they use. Since using these resources is not cheating, we want teams to embrace collaboration and be open about their approach to the challenge. If something works for one team, let’s make it work for everyone!
  • Raise the level of competition. We want to lower the entry barrier, enabling teams to score more in the competition and to learn from the best teams in the process.
  • Develop a challenging competition where pre-existing solutions won’t be enough to solve the challenge, encouraging teams to innovate. In the past, we were able to make the competition challenging enough to encourage teams to transition from platforms like Lego to platforms like Arduino and Raspberry Pi, not because the challenges could not be solved with a Lego platform, but because moving to a more sophisticated platform allowed teams to solve the problem more effectively or faster. Our goal with AI is similar: we want to allow pre-existing solutions such as AI-based sensors, but also create challenges big enough that teams that go on to develop their own models will be able to solve them in a better way.

You might be thinking: if this is the overall goal, why are we not seeing more rule changes to increase the competition’s difficulty? Looking at the data from the last two years, the competition is already challenging enough: very few teams managed to score a considerable number of points and complete the difficult hazards, and only a handful achieved “perfect scores”. Looking at the normalized scores, the vast majority of teams score less than 0.2 out of 1, the exception being RCJ Rescue Simulation, where most teams have been able to successfully navigate areas 3 and 4.

Therefore, with this year’s changes, we want to see what teams develop and how they adapt to this rule change. We want to raise the level of competition and see more teams scoring, and we wanted to give field designers more flexibility, so they can create difficult challenges based on the teams attending their competitions without making the fields more expensive or harder to build.

With this context, I hope you can better understand the reasoning behind our decision and the many hours of discussion among the committee members and execs, all aimed at offering a better RCJ Rescue competition. Please continue sharing your opinions in the comments below and, even better, propose alternatives to overcome the different challenges we addressed here. If we can find the best solution as a community, it will be very rewarding.

Regards,

Diego Garza Rodriguez on behalf of the 2025 RCJ Rescue Committee

Hello,
I can absolutely understand the reasoning behind this rule change. However, I am especially concerned about the application of this rule in Rescue Maze, since the task where AI is used the most is character detection, for which many AI models trained on huge amounts of data are already available. These are probably more robust and reliable than anything a team would normally develop on its own, or at least reliable enough that self-development would no longer be efficient. I would therefore like to suggest adding a completely new symbol in addition to the letters, one for which no AI detection model is available yet. The possibility of earning “extra points” this way could be more rewarding for teams that go through the additional work of training their own model than extra points for the TDP.
I am interested in your opinion on that suggestion.
Best Regards
Jonas (Team Jak&Jonas)

Perhaps I can share my perspective on this as a team mentor.

I see two objectives for events like RoboCupJr: firstly, to provide a fair competition, and secondly, to create a platform for learning. It seems that much of the reasoning for unrestricted AI use pertains to the first objective.

I would admit that enforcing the rules in a robotics competition can be difficult, if not impossible, but this is not a new problem introduced by AI. I have seen complete solutions (code + building instructions) offered for sale online [1], and heard of local businesses where trainers are tasked to build and code for students. Even with the best experts evaluating code and interviewing competitors, it can be difficult to identify such teams with certainty.

Allowing unrestricted AI use avoids some of these problems: there is no need to evaluate whether the students wrote their own machine vision code or did their own training, as it is now legal to purchase a manufactured solution. But this compromises RoboCupJr’s value as a platform for learning. Many teams will now choose to purchase a solution, losing the opportunity to learn how machine vision works. As a mentor, I can still continue to encourage my students to write their own code and do their own training, but the robot design is the team’s decision, and the allure of a ready-made solution can be strong.

I would also agree that there is learning value in leveraging AI, even if it is a paid or open-source solution. Students don’t necessarily have to reinvent every sensor they use. In fact, some of the RoboCupJr teams I mentor are using the Huskylens in their robot… but for the OnStage event. In events like OnStage (…and other non-RoboCup events such as WRO Future Innovator), there aren’t any prescribed challenges that suggest the use of machine vision. The teams can use AI sensors to create an interesting performance or product, and they will be judged not on how well the sensors perform, but on how innovative their use of the sensor is. For Rescue, however, the situation is rather different: using machine vision to locate and identify objects is a major challenge in the event, and the sensor’s performance is key. Allowing a purchased solution renders this challenge effectively moot.

As for the goal of developing “a challenging competition where pre-existing solutions won’t be enough to solve the challenge”: there is a possibility that the market will simply produce a better one-click AI solution that solves it. It may also be difficult to find a challenge that is hard enough that pre-existing solutions won’t work, yet easy enough to be within the reach of the students. If we can indeed find such a challenge, then I would be supportive of lowering the bar to make the competition more accessible, while encouraging teams to explore their own solutions to score more points. But let’s not put the cart before the horse; we should develop and test these challenges first.

To summarize…

  1. We can’t catch every cheater, but allowing unrestricted AI use compromises learning for all teams. Let’s focus on what produces the best learning outcomes and trust that most teams play fair.

  2. Leveraging pre-made AI solutions is an important skill, but let’s leave it to other events better suited for it. Rescue has a clear machine vision challenge, and students should learn to build their own solution for it.

  3. We should develop and test new challenges where pre-existing solutions aren’t enough, before lowering the bar to allow unrestricted AI use.

[1] For other robotics competitions. Haven’t seen one for RoboCupJr, but then again, I haven’t been looking.

I echo both Jomue and Cort.

To Jomue’s point: “… possibility to get “extra points” could be more rewarding for teams going through the additional work of training an own model than extra points for the TDP…” But in order to make this significant enough to encourage teams to do so, it will have to use some sort of multiplier, rather than the mere few points from the performance rubric.

I would like to second Cort’s points - well said.

I am sure all mentors and educators are aware of the concerns with allowing unrestricted usage of AI tools to solve the RCJ challenges, so I won’t reiterate them here.

The end result is this: unrestricted usage of AI tools will simply encourage more manufacturers to create devices targeted at solving RCJ Rescue. This risks nullifying the fundamental value of RCJ Rescue, transforming it from a test of ingenuity and problem-solving into a mere exercise in device optimization. In effect, teams could become mere surrogates for device makers, undermining the spirit of learning and innovation that RCJ is designed to foster. The potential impact on the program’s integrity and educational value is significant and deeply concerning.

– Elizabeth Mabrey
RCJ/USA Rescue Chair

Dear Diego Garza Rodriguez,

Just as a side note: the points shown in the spreadsheet comparing the results from 2023 and 2024 are not quite accurate. The 2023 points exclude the worst run, while the 2024 points do not. This may not be directly relevant to the main point you are trying to make, but I believe it is still misleading to compare the two sets of data.

Best regards,
Moritz

To this very point, won’t allowing unrestricted use of AI tools somewhat defeat this goal?

Take the HuskyLens as an example. A single module alone costs over US$70 (including shipping), whereas having teams develop the solution themselves costs about US$25 for a Pi camera.

“… Encourage teams to openly give credit to the resources they use. …”

  • Not sure if this means the concern is about “teams who learned from others’ techniques and created their own, but did not give credit to the resources they used”?

“… from platforms like Lego to platforms like Arduino and Raspberry-Pi, not because they can’t be solved with a Lego platform, but because moving to a more sophisticated platform allowed the teams to solve the problem more effectively or faster…”

  • Not only that… the LEGO platform itself costs US$400. Everyone has to use Python now; there is no alternative like the old days with NXT/EV3 and RobotC. So, using a pre-existing platform like LEGO, it is doable to go cross-platform, but the final cost will triple or even quadruple that of an open-source solution like Arduino.

"… Our goal with AI is similar, we want to allow pre-existing solutions like the use of AI-based sensors, but create big challenges that teams that transition to develop their own models will be able to solve in a better way… "

  • I do see your point on this one. However, allowing unrestricted AI tools won’t help on that front - see @Cort’s post. He has made an excellent point.

– Elizabeth Mabrey
RCJ/USA Rescue Chair

Dear Committee,

Thank you for sharing the perspectives that support the necessity of permitting the unrestricted use of AI!

While we respect the committee’s viewpoints and decisions, we would like to once again express our concerns regarding the authorization of unrestricted AI usage.

In the example of the LEGO color sensor, we believe that its use posed more limitations than advantages for beginner teams. While it does detect colors, its tolerance for variations in measurement height and angle is very low. By contrast, a TCS34725 color sensor, thanks to its calibration capabilities, is far better suited to the challenges of the Rescue categories.
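To make the contrast concrete, the kind of per-channel calibration a raw color sensor like the TCS34725 enables (and a fixed color-ID sensor does not) can be sketched as follows. This is a minimal illustration, not any team's actual code: the function names, reference readings, and thresholds are all made up for the example, and a real robot would take the raw counts from the sensor driver instead of hard-coded lists.

```python
def calibrate(raw, black_ref, white_ref):
    """Normalize raw per-channel counts to 0.0-1.0 using black/white
    reference readings taken at the robot's own mounting height and angle."""
    out = []
    for value, lo, hi in zip(raw, black_ref, white_ref):
        span = hi - lo
        norm = (value - lo) / span if span else 0.0
        out.append(min(1.0, max(0.0, norm)))  # clamp to [0, 1]
    return out

def classify(rgb, threshold=0.5):
    """Very rough color label from calibrated RGB (illustrative thresholds)."""
    r, g, b = rgb
    if r < threshold and g < threshold and b < threshold:
        return "black"
    if r > threshold and g > threshold and b > threshold:
        return "white"
    return "green" if g == max(rgb) else "other"

# Example: raw reading (300, 800, 200) against references captured on this robot
print(calibrate([300, 800, 200], [100, 100, 100], [900, 900, 900]))
# -> [0.25, 0.875, 0.125], which classify() labels "green"
```

Because the references are captured on the robot itself, the same code adapts to lighting, mounting height, and angle, which is exactly the flexibility a sealed color-ID sensor takes away.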

We highlight this example to illustrate how it differs from the current issue under discussion.

With the unrestricted use of AI, the advantage would likely go to teams that do not develop unique solutions, unlike the scenario in the previous example. We believe that only those advantages gained through a team’s own work should be considered fair in a competitive, sportsmanlike environment.

Regarding the potential for cheating, we believe that the unrestricted use of AI and associated tools could lead to even greater forms of “cheating.” Teams could compete with devices engineered and developed by external engineers or programmers for the Rescue challenges, solutions created without the involvement of the students themselves. In practice, such developments could take place behind the scenes, limited only by available funding. With unrestricted AI, teams could potentially gain unfair advantages, effectively “cheating legally.”

In this scenario, the value of each team’s individual work could be lost, as they would struggle to compete against adult engineers or industry-grade developments. We believe this could significantly contribute to a loss of motivation for teams that focus on their own unique solutions. In our opinion, the true value lies in the individual solutions developed by the teams themselves.

We also feel that for beginner or lower-performing teams, it is not constructive feedback to suggest they should rely on AI tools specifically designed to tackle Rescue challenges. Such a situation could be misleading for all participants involved.

We understand that overseeing AI usage is challenging and requires many specialists, making it a considerable endeavor.

We hope the committee will prioritize the perspectives of teams committed to developing their own solutions. We trust that the decision will take into account the previous discussions on the forum, which largely conveyed a consensus against the unrestricted use of AI.

Respectfully,
Team Lightning,
Kiss ZsĂłfia

Got a good post HERE by a RCJ/USA Regional Rep/Soccer Chair regarding this.

In light of our ongoing discussions about the expectation that teams explain the complex concepts behind the AI tools they have chosen: now that there is no longer an engineering journal to go by, expecting that from a non-English-speaking team is unrealistic and unfair.

I recall interviewing three foreign teams during the Co-Space era (many years ago) for Rescue Simulation. Despite the presence of translators, the process was arduous and largely uninformative, as the translators themselves struggled to articulate in non-native English.


Elizabeth Mabrey
RCJ/USA Rescue Chair

Hello Diego

Thank you for explaining the “Unrestricted use of AI rule proposal”.

RULE CHANGES
A.1) AI-Based Solutions
Starting this year, the use of AI-based solutions is permitted without restriction, including the incorporation of AI-based sensors (like the Pixy Cam or HuskyLens).

I agree with this.

In your article you wrote the following:

It is really hard to differentiate code created by a competitor from code generated by AI that the competitor has then learned to explain.

It is hard to establish a line between what is allowed and what is not allowed.

It is really hard to verify objectively whether a model was implemented by the student from scratch or whether they took a base model and merely tweaked its parameters.

We believe that if we need tournament organizers to constantly understand every team’s code to make sure they are not cheating or breaking a rule, we are doing it wrong and it will be unfair depending on who is evaluating your work.

There is a big push in today’s society to embrace AI and learn how to leverage it to learn more effectively and innovate faster.

I think the same way.

Best Regards
MASA
