Pixy2 camera usage

I would like to know if the use of PixyCam 2 is allowed.
https://pixycam.com/pixy2/

thanks.

Hi Joel, we asked ourselves the same question and were able to find numerous old forum entries via the search.

The camera is permitted as long as only the raw pixel data is used and none of the Pixy's integrated algorithms, such as its line-detection algorithm, are involved. That makes it not particularly beginner-friendly. Looking ahead, I would like a simple, permitted solution that works block-based with the Pixy. In other words: beginner-friendly.

Maybe someone has an idea for a simple block-based entry with a camera that is also permissible. Thank you

Best regards
Marvin

Hi @JoelHanerth and @m.bersiner,

as @m.bersiner correctly mentioned, we allow the use of the Pixy hardware, but not its software. The same goes for similar products, e.g. the Huskylens.

In our latest rules draft for 2024, we tried to make the entire Rescue Line challenge solvable without the need for a camera, to make the challenge more beginner-friendly. However, we still expect (and would like) more experienced teams to use cameras. Yet cameras are supposed to be a high-effort = high-reward concept. With the premade software contained in products such as the Pixy and Huskylens, this concept does not work: it would make these cameras both easier to use AND more effective than e.g. simple photosensors in the task of line following.

It does unfortunately mean there is no option to process camera data in a block-based programming language (as far as I know), but as I already stated, cameras are sensors expected to be used by more experienced students who already program in text-based languages. On the other hand, we do not expect you to implement all the image-processing algorithms on your own. We allow most image-processing libraries. A library I would recommend looking into (it's available for both C++ and Python) is OpenCV, which contains a lot of useful functions that can turn the array of raw data into something easier to work with, but it requires some testing and tuning of parameters rather than just pressing a button. Here is a short example of how to detect blobs.
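As a rough illustration of the tune-it-yourself workflow described above, here is a minimal Python sketch using OpenCV's SimpleBlobDetector. The threshold and area values are placeholders that you would need to tune for your own camera and lighting, and the frame is assumed to come from a saved image rather than a live camera:

```python
import cv2

# Load a frame (on a robot this would come from the camera instead)
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
assert frame is not None, "could not read frame.png"

# Configure the detector; all values below are placeholders to tune
params = cv2.SimpleBlobDetector_Params()
params.minThreshold = 10
params.maxThreshold = 200
params.filterByArea = True
params.minArea = 150          # ignore tiny noise blobs
params.filterByCircularity = True
params.minCircularity = 0.6   # keep roughly round blobs

detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(frame)

for kp in keypoints:
    x, y = kp.pt
    print(f"blob at ({x:.0f}, {y:.0f}), size {kp.size:.0f}")
```

This is exactly the trade-off mentioned above: the function does the heavy lifting, but you still have to understand and tune its parameters instead of pressing a button.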

Hope you find this answer helpful and best of luck!
Matej


So it is allowed to use the camera, but not its software with the preset values; in other words, ready-made aids are not allowed. But if I create my own algorithm for object detection, is that allowed?

If you create your own algorithm for object detection, it is absolutely allowed! Just note that if you decide to use an ML algorithm, we will require the training data / dataset you used as proof of your own work!

Good luck!

There was a request for a “simple block-based entry with a camera”. I’ve recently been working on machine vision capabilities for the ESP32-Cam, and it is now possible to perform some basic machine vision tasks in a block-based environment (…or Python if you prefer that).

This solution is similar to OpenCV and the OpenMV Cam (…which is allowed), in that it does not have any built-in functions that you can activate with a click. You’ll need to write code to perform the machine vision tasks and output the results. The library functions are similar to the OpenMV Cam’s (e.g. find_blobs and find_circles, which you need to configure), and are probably less “smart” (i.e. the OpenMV Cam does more things automatically).
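For illustration, here is roughly what a configured find_blobs call looks like in the OpenMV Cam’s MicroPython API (the block-based version is conceptually the same); the colour thresholds below are placeholders you’d tune for your own lighting:

```python
import sensor

# Standard OpenMV camera setup
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

# LAB colour threshold (L_min, L_max, A_min, A_max, B_min, B_max);
# placeholder values, tune for your own lighting and target colour
green_threshold = (30, 100, -64, -8, -32, 32)

while True:
    img = sensor.snapshot()
    # pixels_threshold/area_threshold filter out small noise blobs
    for blob in img.find_blobs([green_threshold],
                               pixels_threshold=100,
                               area_threshold=100):
        print("blob at", blob.cx(), blob.cy(), "pixels:", blob.pixels())
```

As with OpenCV, nothing here is automatic: you choose the thresholds and decide what to do with the results, which is what keeps it within the rules discussed above.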

Performance is rather poor (~5fps), but it’s a cheap way ($5) to add MV capabilities to a robot.

Demo video: https://youtu.be/i6OMb49KNfs
IoTy: https://github.com/QuirkyCort/IoTy (Internet-of-Things made easy)