2026 RCJ Rescue Simulation Rule Changes

Dear Teams,

After carefully reviewing and discussing the experiences and feedback from last year, we have drafted a set of proposed changes to the Rescue Simulation 2026 Draft Rules. Although the discussions took some time, we truly hope these updates will bring new and exciting challenges for all of you.

In the 2026 Simulation draft, we have outlined a few topics that we plan to revise in the final version of the rules. Before making any final decisions, we would like to hear your opinions, comments, and questions regarding these proposed changes.

Please take a moment to read through the document and share your feedback.
We wish every team a successful preparation for the upcoming season. We look forward to meeting you all in Incheon, South Korea!

1. Competition Setup Format

Referring to existing rule 3.1.3:

For the upcoming World Cup, we intend to streamline expectations and focus on Setup A as the standard competition setup. In this model, games are executed using a server-client architecture, where teams connect via an RJ-45 ethernet socket to a game server provided by the organizers. Teams must bring a compatible computer and ethernet cable to run their prepared programs. Detailed documentation is available on the Remote Controller page.

While Setups B and C (organizer-run and cloud-based execution, respectively) remain valid methods and may still be used by other organizers, we aim to evaluate these approaches further for potential future use. For now, our priority is to provide teams with clarity and consistency by standardizing on Setup A and removing Setup B and C from the rules.

2. Required Launch Video

To support the focus on Setup A, as outlined in Rule 3.1.3, we are introducing a new requirement:

All teams must submit a short video demonstrating how to execute their robot controller on a provided example map in a server-client setup. This video will be a formal part of the documentation submission. It ensures that teams are familiar with the competition setup and helps organizers verify that teams understand how the setup works.

The video should be submitted alongside the Technical Description Paper, Poster, and Project Video.

3. More realistic noise levels

In accordance with existing rule 4.3.2, the simulation platform will be updated to introduce more realistic sensor behavior, including noise characteristics aligned with those found in physical robots. These changes are not rule modifications, but platform improvements intended to reflect the original intent of the rule more accurately.

The goal is to better prepare teams for real-world robotics by encouraging the development of robust, noise-tolerant solutions that do not rely on perfect sensor data.
Organizers will not adjust noise levels during the competition, and all teams are expected to design their systems with these realistic conditions in mind.
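For teams starting to think about noise tolerance, the idea above can be sketched with a simple smoothing filter. This is a generic illustration, not part of the Erebus API; the sensor interface and the alpha value are assumptions.

```python
# Minimal sketch: smoothing a noisy range reading with an
# exponential moving average (EMA). The alpha value and the way
# readings arrive are illustrative assumptions, not part of the
# official simulation platform.

class EmaFilter:
    """Exponential moving average for a scalar sensor stream."""

    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha   # higher alpha = trust new readings more
        self.value = None    # no reading seen yet

    def update(self, reading: float) -> float:
        if self.value is None:
            self.value = reading
        else:
            self.value = self.alpha * reading + (1 - self.alpha) * self.value
        return self.value

# Example: noisy readings scattered around a true distance of 0.50 m
f = EmaFilter(alpha=0.3)
for raw in [0.52, 0.47, 0.55, 0.49, 0.51]:
    smoothed = f.update(raw)
```

An EMA is one of the simplest options; teams that need sharper outlier rejection may prefer a median filter or a Kalman filter instead.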

4. Victim Identification and AI Strategy

Regarding the victim and hazmat wall signs, the following changes are proposed:

  • The victims (H, S and U) are replaced with Greek letters (Φ, Ψ, Ω).
  • The hazmat signs are replaced with new “Cognitive targets”, as explained below:

1. Identification:
The robot must identify a circular target on a square sign measuring 2 cm on each side, with a small white margin between the edge of the sign and the target. The target consists of 5 concentric rings of equal width; the central ring is a solid circle whose radius equals the width of the other rings. Each ring has one of 5 possible colors: Black, Red, Yellow, Green, or Blue.

2. Calculation:
The robot’s primary task is to “read” the target and calculate a total value.
First, each color corresponds to a numerical value:

  • Black = -2
  • Red = -1
  • Yellow = 0
  • Green = 1
  • Blue = 2

Adjacent rings of the same color are not merged. The robot must always consider each of the 5 rings separately and sum the values of all 5 rings, regardless of whether colors repeat. Valid sums are 0, 1, 2, and 3; any other sum is invalid.

Example 1:

  1. Rings (from center outwards): Yellow | Black | Green | Yellow | Red
  2. The robot reads five separate color values.
  3. Calculation: Value(Yellow) + Value(Black) + Value(Green) + Value(Yellow) + Value(Red)
  4. Final sum: (0) + (-2) + (1) + (0) + (-1) = -2
  5. The robot must act based on this total (Sum = -2 → invalid).

Example 2:

  1. Rings (from center outwards): Blue | Yellow | Green | Red | Black
  2. The robot reads all five rings individually.
  3. Calculation: Value(Blue) + Value(Yellow) + Value(Green) + Value(Red) + Value(Black)
  4. Final sum: (2) + (0) + (1) + (-1) + (-2) = 0
  5. The robot must act based on this total (Sum = 0 → send the character ‘0’ to the supervisor).

Example 3:

  1. Rings (from center outwards): Green | Green | Blue | Green | Blue
  2. The robot reads all five rings individually.
  3. Calculation: Value(Green) + Value(Green) + Value(Blue) + Value(Green) + Value(Blue)
  4. Final sum: (1) + (1) + (2) + (1) + (2) = 7
  5. The robot must act based on this total (Sum = 7 → invalid).

3. Action:
The robot must perform a specific action based on the final calculated sum:

  • If the sum is between 0 and 3: the robot sends the number (as a character) to the supervisor.
  • Otherwise: the target is considered false. If the robot sends a message to the supervisor anyway, it counts as a misidentification, resulting in -5 points.

If the sum is between 0 and 3 and the robot sends an incorrect number, this also counts as a misidentification.
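Putting the calculation and action rules together, the logic can be sketched as follows. The color names and the return convention are illustrative assumptions; the actual supervisor message API is defined by the Erebus platform.

```python
# Sketch of the cognitive-target calculation/action logic described
# above. How the message is actually sent to the supervisor is up to
# the platform API; here we only compute what (if anything) to send.

RING_VALUES = {
    "black": -2,
    "red": -1,
    "yellow": 0,
    "green": 1,
    "blue": 2,
}

def evaluate_target(rings):
    """Sum the values of the 5 rings (listed center outwards).

    Returns the character to send to the supervisor if the sum is
    valid (0-3), or None if the target is false and the robot should
    stay silent (sending anyway would cost -5 points).
    """
    if len(rings) != 5:
        raise ValueError("a cognitive target always has 5 rings")
    total = sum(RING_VALUES[color] for color in rings)
    if 0 <= total <= 3:
        return str(total)  # valid: send this character
    return None            # invalid: do not report

# The three examples from the rules:
# evaluate_target(["yellow", "black", "green", "yellow", "red"]) -> None (sum -2)
# evaluate_target(["blue", "yellow", "green", "red", "black"])   -> "0"
# evaluate_target(["green", "green", "blue", "green", "blue"])   -> None (sum 7)
```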

4. Scoring (Cognitive Target):
The scoring follows the same calculation as with the old hazmats.

5. 3D Letter Distractors on Walls

The simulation field may contain large 3D letters mounted on walls that visually resemble the new lettered/symbolic victim tokens. These 3D elements are not valid wall tokens and must not be reported by the robot under any circumstances. They are intended as distractors to challenge systems that rely solely on camera input. To correctly distinguish valid tokens from these physical decoys, teams must combine multiple sensor inputs, such as LiDAR or distance sensors alongside image recognition.
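One hedged sketch of such a cross-check: a flat wall token lies on the wall plane, while a 3D letter protrudes from it, so a distance reading taken at the detection bearing should match the wall distance only for genuine tokens. The threshold and inputs below are assumptions for illustration, not official values.

```python
# Illustrative decoy rejection by fusing a camera detection with a
# distance measurement. All distances are in meters; the tolerance
# is an assumed value, not taken from the rules.

def is_valid_wall_token(camera_detected: bool,
                        distance_to_wall: float,
                        distance_to_object: float,
                        flushness_tolerance: float = 0.01) -> bool:
    """Accept a detection only if the measured object distance
    matches the wall distance within tolerance (i.e. the token is
    flush with the wall rather than protruding like a 3D letter)."""
    if not camera_detected:
        return False
    return abs(distance_to_object - distance_to_wall) <= flushness_tolerance

# A protruding 3D letter sits a few centimeters in front of the wall,
# so its distance reading is noticeably shorter than the wall's.
```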

6. Swamps as Strategic Penalty Tiles

To reinforce the role of swamps as hazardous and undesirable terrain, changes to their effect on gameplay are proposed. The aim is to make swamps a critical consideration in route planning and encourage teams to actively avoid them.

Swamp tiles will continue to consume additional simulation time, but the penalty will increase with each additional visit to the same swamp tile. For example, the simulation time multiplier may start at ×5 and grow to ×10 or higher if the robot crosses the tile repeatedly. This progressive slowdown penalizes inefficient routes and rewards teams that explore new areas instead of retracing swamp-heavy paths.
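The progressive penalty could be tracked per tile, for example as follows. The ×5 base and +5 step are taken from the illustrative numbers in the text above and are not final values.

```python
# Sketch of the proposed progressive swamp penalty: the time
# multiplier starts at x5 for a tile's first crossing and grows on
# each revisit of that same tile. Base and step are illustrative.

from collections import defaultdict

class SwampPenaltyTracker:
    def __init__(self, base=5, step=5):
        self.base = base
        self.step = step
        self.visits = defaultdict(int)  # tile -> times entered so far

    def enter(self, tile):
        """Return the simulation-time multiplier for this entry."""
        multiplier = self.base + self.step * self.visits[tile]
        self.visits[tile] += 1
        return multiplier

t = SwampPenaltyTracker()
t.enter((3, 4))  # first crossing of this tile  -> x5
t.enter((3, 4))  # second crossing of same tile -> x10
t.enter((7, 1))  # a different swamp tile starts back at x5
```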

7. Obstacle Representation in Maps

Robots will be expected to include obstacles in their submitted map matrices, using a designated symbol to represent these features. The mapping specification will be updated to define a specific symbol for obstacles, consistent with how existing elements are currently encoded.

The center point of an obstacle will never be placed on another scoring tile, such as swamps, checkpoints, passages, or black tiles. This ensures that obstacle placement does not conflict with the current structure and interpretation of the map matrix.

8. Emphasis on Mapping and Strategy in Scoring

The scoring system may be adjusted to place greater value on environment mapping and strategic path planning, while reducing the weight given to victim and hazmat identification. Robots that effectively explore and document the environment will be rewarded more significantly, encouraging the development of autonomous exploration capabilities. The existing mapping bonus multiplier will remain in use but will have a stronger impact on the total score relative to wall token identification.

Best,

Diego Garza Rodriguez on behalf of the 2026 RCJ Rescue Committee


I think that the implementation of the concentric circles is a bit excessive, at least for this year, because teams would have to totally rework part of their code and they don’t have much time to fine-tune it.


Hi @BruceWayne!

Could you share more about your thoughts? 🙂 Is it the case that your team already has a way to identify letters and hazmat signs, and this new victim type would limit your team's progress in other areas? Or is your team just starting to develop a robot and feels there are already a lot of changes to make, so that one more item like this victim would be overwhelming?

Thanks,

Diego Garza Rodriguez
2026 Committee


Hi, my team is still developing the robot, and when we saw this new draft we thought it might mean a big loss of time given the work we have already done; I suppose this is the case for other teams too. My suggestion is to keep the 2026 rules similar to last year's, without radical modifications, so that teams don't have to throw the progress they have already made out the window, and to implement this new system in 2027, so that teams know what they are going to face and can prepare accordingly.


I believe that the main problem encountered by the various teams is the limited time available to prepare solutions for the new final regulations.

The same problem arises every year.

This year, the proposed changes are very interesting and challenging. However, those who design the robot and the software, and who set up the test competition fields, the server, etc., have no certainty about the publication date of the final rules or what they will contain. Furthermore, I fear that this year they will be published very late, say at the end of January? Some qualifying competitions start in March. This means that sometimes there are only 50 days or so to redesign everything, which is too little time.

There are now two problems: the great challenge of finding a solution to these proposed changes, and the limited time available. I would like to remind everyone that this is enough time to be ready for the European or World Championships, but we have to be ready much earlier for the national competitions (April) and even earlier for the regional competitions (March).

My request is to give a definite date for the publication of the final rules, possibly by the end of October, and for this to become an established and reliable practice. I look forward to hearing your opinions and wish you all the best.


@Dieguinilombrin Interesting and very challenging proposals. I find point 5 particularly complicated; it will take time to implement.

I am very much in favour of only Setup A being valid, provided that it is standard in all competitions and not just at the World Championships. I would like to point out that at the last European Championships, the teams competed locally using their own notebooks instead of a Setup A connection. It would be appropriate to clarify once and for all what the correct setup should be.

Another danger I see on the horizon is that continually complicating the challenges, rather than simply modifying them without complicating them, will result in poor participation, with few teams. At the last European Championship, there were only seven! At previous ones, there were even fewer, including the Egyptians. In my experience with simulation, which began in 2023, I have always found few teams competing, and almost all of them in great difficulty.

In order to grow this speciality, it would be advisable to introduce an entry category, as in Soccer, Line, and Maze. The increase in the number of participants due to the introduction of the entry competition could then lead some teams to tackle the standard competition without immediately finding themselves facing a Himalayan peak to climb.


Hi @PeterParker,

We hear you, and we are currently looking for a way to create an entry rule set for Sim. In fact, we would like to make a call to action for volunteers to join the Entry Rule subcommittee (which since last year has been helping us create the Line and Maze entry rulesets).

Regarding which method to use, the only method proposed by the rules will be Setup A. We should clarify that other competitions might use other setups, because the only competition where the committee can enforce the rules is the international one. Most regions use the rules as guidelines for their competitions (and most use them completely as-is), but we have seen multiple times that local/regional competitions need to adapt the rules to their local setup and environment (of course notifying their competitors ahead of time).

Thanks!

Diego Garza Rodríguez
2026 Committee


Thank you for your clarifying response. The idea of an Entry subcommittee is excellent. If possible, I would like to be part of it.
I look forward to future developments.
Thank you.

Hi everyone,

2023 world winner here. First of all, I appreciate the work being done and the effort behind this competition. However, I am currently not seeing any draft rules or documentation attached, nor on the platforms I used to consult in previous years.

My first concern is not about the increased complexity of the new tasks, but rather the quantity and clarity of information returned by the sensors in this draft. In previous years, sensor precision was effectively perfect, so there was no need for the production-grade pipelines we now use in university and professional robotics competitions.

If the goal is to move closer to real-world conditions and introduce noise, then sensor quality should scale accordingly. From what I recall, the cameras were limited to 64×40 pixels with a narrow FOV, and the 4-layer LiDAR made reliable 3D letter scanning difficult. In practical contexts, 16-layer (or higher) LiDAR systems are inexpensive and often allow custom pitch configurations that better support these tasks.

GPS noise is another important aspect. During my competition years, GPS data was essentially perfect; a simple polar-to-Cartesian conversion was sufficient and odometry corrections were unnecessary. With realistic noise, wheel slip, and potential false positives, filtering (e.g., EKF) becomes essential. In that case, explicit specifications for sensor rates, noise models, and error profiles are needed, since these parameters significantly influence strategy and system design. If that is the intended direction, more time to adapt would also be necessary — two months may not be sufficient.

Given that the rules are not yet official, I think it would be reasonable if only one major shift were introduced this year: either the scoring system change or the SLAM/scoring complexity shift, but not both simultaneously.

Lastly, regarding path-selection tasks, their relevance could be emphasized through better map design. That problem already requires advanced knowledge in multiple areas of computer science, far beyond what other competitions typically demand.


Hi Mites! Regarding sensor noise, our plan is to make minor modifications that can be easily filtered out, and only on some sensors. We’ll gradually make it more complex in future competitions.
Regarding lidar, our tests with the 4-layer lidar have yielded good results for detecting 3D letters. You shouldn’t have any problems, especially if you combine it with the use of cameras. Thanks for your input!


Very interesting point of view. Thanks for sharing.

Hi, has a sample map with these modifications been released, and if not, when can we expect sample maps to be released? Our team really needs them to test the computer vision portion of the simulation competition. We would appreciate any direction on where we can find the maps, if and when they are released. Thanks!

Hi @charw !

We already have a first beta version of Erebus adapted to these draft rules, version 26.0.0, which you can find here: Releases · robocup-junior/erebus · GitHub

Since we’re still developing the map builder, you can find several maps ready to test the new challenges in the Worlds folder.

Thanks,

Diego Garza Rodriguez
2026 Committee
