Use of Huskylens to Detect Victims

Hi, would my team be allowed to use a Huskylens (an AI camera) to detect visual and coloured victims? As this camera has inbuilt object recognition and colour recognition features, I am unsure whether it is in breach of rule 3.2.4, which states,
“Teams are not permitted to use any commercially produced robot kits or sensors components that are specifically designed or marketed to complete any single major task of RoboCupJunior Rescue.”


I am not on the Technical Committee but I doubt they will allow it.

I had not seen this camera before, so I looked it up. Sometimes “AI” cameras still force you to do some programming/work, which might make them allowable in an RCJ competition.

According to the description… “Users can change various algorithms easily by pressing the function button.”

The videos they have do show it being trained for color and object detection just by pushing buttons and there doesn’t seem to be any programming whatsoever.

I have been mentoring RCJ teams for a while and the Technical Committee almost always rejects these devices. They will allow cameras to be trained, but some form of programming normally has to take place. For example, OpenCV or OpenMV cameras are allowed because some programming normally must happen to make them work.
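To illustrate what that programming typically involves, here is a minimal, self-contained sketch of threshold-based blob detection. This is plain Python, not OpenMV’s actual API; the threshold value and the tiny “frame” are made up. The point is that the calibration constant and the detection logic are the team’s own work, not a push-button feature:

```python
THRESHOLD = 128  # manually calibrated for the team's lighting conditions

# A tiny synthetic grayscale frame with two bright regions.
FRAME = [
    [0,   0,   0,   0,   0,   0],
    [0, 200, 210,   0,   0,   0],
    [0, 205, 220,   0,   0,   0],
    [0,   0,   0,   0, 180,   0],
    [0,   0,   0,   0, 190,   0],
]

def find_blobs(frame, threshold):
    """Return a list of blobs, each a set of (row, col) pixels above threshold."""
    seen = set()
    blobs = []
    for r, row in enumerate(frame):
        for c, px in enumerate(row):
            if px >= threshold and (r, c) not in seen:
                # Flood fill with 4-connectivity to collect one blob.
                stack, blob = [(r, c)], set()
                while stack:
                    y, x = stack.pop()
                    if (y, x) in seen:
                        continue
                    if (0 <= y < len(frame) and 0 <= x < len(frame[0])
                            and frame[y][x] >= threshold):
                        seen.add((y, x))
                        blob.add((y, x))
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
                blobs.append(blob)
    return blobs

blobs = find_blobs(FRAME, THRESHOLD)
print(len(blobs))  # the two separate bright regions
```

Even this toy version requires choosing the threshold, handling connectivity, and deciding what to do with the result, which is exactly the kind of work the push-button devices remove.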

Again I am not on the TC so you should wait to hear from them.


@Robosapien, @Dennisma
Interesting question and answer.
What about Pixetto? Same as Huskylens or OpenMV?
Committee: please help :wink:

Hi @PeterParker,

As @Dennisma said, we have always rejected such features, because victim detection is considered one of the important tasks of RoboCupJunior Rescue Maze. You will be allowed to use it if you code it on your own.

2022 committee

Thank you for your reply to my question; however, I am curious whether we are allowed to use the Huskylens in a similar way to what you have permitted with the OpenMV cameras in the Rescue Line competition.

In the Rescue Line competition, the use of the find_blobs() function and the find_circles() function of an OpenMV camera was permitted because,
“[The] robot will find green dots using the function, [the] robot [has] to judge in [the] program which side of an intersection, in front of an intersection, etc…
So, You will be able to use the cam and these libraries in the RoboCupJunior 2022 Bangkok.”
(I am unsure how to directly link the page so I am including the URL to the page here)
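A sketch of what that “judging in the program” step could look like, in plain Python. All names and values here are made up for illustration (a hypothetical frame width and blob x-coordinates); the camera only reports marker positions, and the robot’s own code decides what an intersection marker means:

```python
FRAME_WIDTH = 160  # assumed camera resolution for this example

def turn_decision(green_blob_centers_x):
    """Map green-marker x-positions to a turn, relative to the frame centre."""
    if not green_blob_centers_x:
        return "straight"
    left = any(x < FRAME_WIDTH / 2 for x in green_blob_centers_x)
    right = any(x >= FRAME_WIDTH / 2 for x in green_blob_centers_x)
    if left and right:
        return "u-turn"  # green markers on both sides of the line
    return "left" if left else "right"

print(turn_decision([30]))        # marker left of the line
print(turn_decision([30, 130]))   # markers on both sides
```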

Instead of using find_blobs(), would we be allowed to use the Huskylens to read the ID of an object through huskylens.get()? I believe this is very similar to what you have permitted in Rescue Line, because using the Huskylens in this way only tells the robot whether an object exists, not the object’s actual location or its distance from the robot.
From my understanding, the reason you allowed the find_blobs() and find_circles() functions to be used on the OpenMV camera was that they could not identify where the circles or blobs actually were, only that they existed, which is exactly the same as using the huskylens.get() function.
Would the Huskylens be allowed if it was used in this way?


Hi @mymama ,
What about OpenMV? Are we allowed to use it?
Thank you.

Again, I am not on the technical committee, but… OpenMV has been used in the past AND many soccer teams have been using it for years.

If you go to GitHub - RoboCupJuniorTC/awesome-rcj-soccer: A curated list of resources relevant to RoboCupJunior Soccer you can see last year’s soccer posters for Open (heavyweight) which requires them to look for an orange-colored soccer ball.

At least 3 of the teams that went to Bangkok used OpenMV.


Hi @Robosapien @Dennisma @PeterParker,

First of all sorry to keep you waiting.

So the biggest difference between OpenMV’s find_blobs() function and huskylens.get() is the use of AI. find_blobs(), find_lines() and other similar functions use computer vision algorithms to output the wanted objects, but they still require manual calibration (e.g. of thresholds) and additional processing of the outputs, and they still might not be 100% reliable if not configured correctly.
On the other hand, the Huskylens uses machine learning and, as mentioned by @Dennisma, “Users can change various algorithms easily by pressing the function button.” Therefore you as a student are not required to do any programming to get a very reliable output.

The same goes for color detection, for example. I hope everyone can see that using a button on the Huskylens to “learn” a color is not the same as manually calibrating/analysing RGB/HSV or any other values read from a pixel.
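To make the contrast concrete, here is a minimal sketch of what “manually calibrating HSV values” means, using only Python’s standard library. The hue window and saturation/value cutoffs below are assumed example numbers that a team would have to tune for its own lighting, which is exactly the work a learn button skips:

```python
import colorsys

# Hand-tuned calibration window (hue in degrees, sat/val in 0..1).
# These exact numbers are illustrative assumptions, not official values.
GREEN_HUE_RANGE = (80.0, 160.0)
MIN_SATURATION = 0.4
MIN_VALUE = 0.2

def is_green(r, g, b):
    """Classify an 8-bit RGB pixel using the hand-tuned HSV window."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    hue_deg = h * 360
    return (GREEN_HUE_RANGE[0] <= hue_deg <= GREEN_HUE_RANGE[1]
            and s >= MIN_SATURATION and v >= MIN_VALUE)

print(is_green(30, 200, 40))   # a bright green pixel
print(is_green(200, 30, 30))   # a red pixel
```

Getting these ranges right across venues and lighting conditions is the calibration/analysis work the committee is referring to.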

So to summarize: use of the OpenMV library is allowed, but the Huskylens’ AI feature IS NOT. Hope this also clears things up for any other similar devices.

2022 committee

P.S. Please note that we are not against the use of AI, quite the opposite. We definitely encourage teams to explore these exciting solutions and challenges, as long as they understand how they work and are the ones coding them :slight_smile:


Thank you for your clear answer.