Using a prebuilt camera is like using prebuilt kits

Changing the rules so that the ball is detected by vision sensors seemed like a good challenge for students. But now, with prebuilt cameras like the PixyCam detecting the ball, I think there is no real challenge left in detection, or at least no difference between detecting the ball with IR and with a prebuilt camera. I would like to hear comments from other teams: do you think it would be possible to set more constraints on using prebuilt sensors in the future?

Hi Vijay,

Interesting point. Are you questioning "prebuilt" in terms of the hardware/electronics, the embedded firmware, or the already existing software for finding shaped and colored objects?

Dear @elizabeth.mabrey,
I believe that RoboCup's rules should change in order to be more challenging. For example, if the rules prohibited teams from building their robots' vision sensing around pre-built cameras like the PixyCam, they would be forced to take on this challenge themselves and would, for example, learn something about OpenCV. This is a paragraph from the rules, as you know:
_"For the construction of a robot, any robot kit or building block may be used as long as the design and construction are primarily and substantially the original work of a team. This means that commercial kits may be used but must be substantially modified by the team. It is allowed neither to mainly follow a construction manual nor to just change unimportant parts."_

This rule is observed for other parts of the robot, and it should be observed for vision sensors as well.

Banning the PixyCam and requiring OpenCV won’t make the work more challenging. In the narrow domain of finding a red ball, both OpenCV and the PixyCam perform the SAME FUNCTION for the programmer.

You see… the PixyCam is a “smart camera” but with a very “simple library”… while OpenCV uses a “simple camera” but with a very VERY “smart library”.

From the programmer’s perspective, both PixyCam and OpenCV end up doing the same thing. They both simply return an array of blobs (called Blocks by the PixyCam… contours by OpenCV) that match the color red.

The programmer only has four things to do in both cases:

  • get an array of blobs that match the color red (this is what PixyCam and OpenCV do)
  • take the array of red blobs and determine (based on size) which one is the red ball.
  • figure out if the ball is to the left, right or center of the camera image.
  • run your motors in that direction

To prove my point let’s look at some very simple sample code of a Raspberry Pi using OpenCV and an Arduino using the PixyCam to find a red ball. Even though they are in different languages (C++ and Python) you can see they are almost exactly the same from a logical standpoint… especially in the loop() function where all the work takes place.

import numpy as np
import cv2

# BGR bounds for "red" (OpenCV images are BGR, not RGB)
redLowerBound = (0, 0, 140)
redUpperBound = (70, 70, 255)

def driveTowardsBall(centerX, ballX):
    # do something cheap here - not a real good algorithm
    if ballX < centerX / 2:
        print('Veer Left')
    elif ballX > centerX + (centerX / 2):
        print('Veer Right')
    else:
        print('Go Straight')

def setup():
    global cap, centerX
    cap = cv2.VideoCapture(0)
    if cap.isOpened():
        # let's calc center(X,Y) based on the image dimensions
        ret, image = cap.read()
        (centerX, centerY) = (image.shape[1] // 2, image.shape[0] // 2)

def loop():
    global cap, centerX
    # the next three lines are similar to calling getBlocks()
    ret, image = cap.read()
    ball = cv2.inRange(image, redLowerBound, redUpperBound)
    # find contours (blocks); [-2] picks the contour list in both OpenCV 3 and 4
    contours = cv2.findContours(ball, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    if len(contours) > 0:
        # find the block that is max in area and assume it is the red ball
        sortedContours = sorted(contours, key=cv2.contourArea, reverse=True)
        biggestRedObject = sortedContours[0]
        (ballX, ballY), radius = cv2.minEnclosingCircle(biggestRedObject)
        driveTowardsBall(centerX, ballX)
        # for debugging only - draw the circle so we can see the ball
        cv2.circle(image, (int(ballX), int(ballY)), int(radius), (255, 0, 0), 3)
    # for debugging only - show the image
    cv2.imshow("Ball", image)
    key = cv2.waitKey(1) & 0xFF
    return key != ord("q")

def main():
    setup()
    while loop():
        pass
    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    main()

Here is the same code for Arduino:

#include <SPI.h>
#include <Pixy.h>

Pixy pixy;
const int centerX = 320/2;

inline int block_area(const Block& b){
  return b.width * b.height;
}

void driveTowardsBall(int centerX, int ballX){
  // do something cheap here - not a real good algorithm
  if (ballX < centerX/2){
    Serial.println("Veer Left");
  } else if (ballX > centerX + (centerX/2)){
    Serial.println("Veer Right");
  } else {
    Serial.println("Go Straight");
  }
}

void setup(){
  Serial.begin(9600);
  pixy.init();
}

void loop(){
  int j;
  uint16_t blocks;
  blocks = pixy.getBlocks();
  if (blocks){
    int max_block = 0;
    int max_block_area = 0;
    for (j = 0; j < blocks; j++){
      // find the largest Block - assume it is the ball
      if (block_area(pixy.blocks[j]) > max_block_area){
        max_block = j;
        max_block_area = block_area(pixy.blocks[j]);
      }
    }
    int ballX = pixy.blocks[max_block].x + pixy.blocks[max_block].width/2;
    driveTowardsBall(centerX, ballX);
  }
}

All the real work is done in the loop() functions, and both sets of code essentially do the same thing. The programmer makes some calls to get back an array of blobs that match the color red. Then the programmer determines the largest blob, assuming it is the red ball. Then they determine the center of the ball (ballX) and figure out if that center is to the left, right or in the middle of the camera’s field of view. Finally, they simply turn towards the direction of the ball (note: this code assumes the ball is actually in front of us).

The only real difference is that the PixyCam must be “trained” to find red while in OpenCV the upper and lower bound for red must be passed to the library. A very subtle difference that is more logistical than programmatic.

So I strongly feel that banning the use of the PixyCam would be an arbitrary decision. It buys nothing… it doesn’t make the programming more challenging and it simply restricts what processing platforms students can use.

Hi @vijay and @elizabeth.mabrey,

Thank you for an interesting discussion!

Here is my take:

  1. I do not fully agree with @vijay that using a pre-built camera is like using pre-built kits.
  2. At the same time I also cannot agree with @elizabeth.mabrey in saying that both OpenCV and the PixyCam perform the same function for the programmer.

Let me describe why.

First, the big difference between pre-built kits and a pre-built camera like Pixy is that a pre-built kit cannot generally be used for anything but RCJ Soccer, whereas a pre-built camera like Pixy has a wide range of use cases, far beyond RCJ Soccer. With pre-built kits there is very little to no learning of transferable skills. In the case of Pixy, one does at least learn that there is a (pretty simple, not very precise and quite expensive) way of finding a blob of color in the scene, which can then be easily interfaced with the rest of the robot. Sure, not many will learn about the randomly seeded Growing Region algorithm that actually powers Pixy (see this thread for more details), but many more may get excited about vision technology, which will make them dig deeper in the end (at least so I hope :slight_smile:).

Using the same analogy as above, I see OpenCV or any other advanced CV library (dlib is just one of the popular examples) as one step further on the “reusability” ladder. One of the big differences between the Arduino and Python code above is the use of the cv2.minEnclosingCircle function. It returns the X,Y coordinates and the radius of a circle that encloses a given contour. I do not think this is something the Pixy firmware can do: as far as I know, it only returns the bounding box of a color blob (in the Python code above you could use cv2.boundingRect to achieve the same result). Thus, while in this small example Pixy and OpenCV work very similarly, in the grand scheme of things OpenCV provides much more flexibility as to what one can do with the input image / results of the computation. If we keep the educational goal in mind, I would personally prefer a “simple camera” with a very very “smart library” :slight_smile:

With regards to Pixy, the big difference it makes can be largely attributed to the ease of interfacing it provides (once again, at least in my view). Get a Pixy, “teach” it what colors to recognize, and in a few minutes you can get the results down to your IC and make decisions based on them. Put your OpenCV code together (whether in Python, C++ or something else) and you are left with a pretty difficult architectonic/logistical challenge: how do you interface this with the actual actuators on the lower level? I believe such a challenge is definitely worthy of the Open league :slight_smile:
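As an illustration of that interfacing step, here is a minimal sketch of how a Raspberry Pi running OpenCV might frame the ball position for a lower-level motor board over a serial link. The sync byte, message layout and checksum are all invented for this example; a real robot would define its own protocol:

```python
import struct

def pack_ball_message(ball_x: int, ball_y: int) -> bytes:
    """Frame the ball position for the low-level board.

    Hypothetical framing: a 0xAA sync byte, two little-endian int16
    coordinates, and a one-byte additive checksum over sync + payload.
    """
    payload = struct.pack('<hh', ball_x, ball_y)
    checksum = (0xAA + sum(payload)) & 0xFF
    return bytes([0xAA]) + payload + bytes([checksum])

# On the robot you would push each frame out over a serial port,
# e.g. with pyserial (not run here, requires hardware):
#   import serial
#   port = serial.Serial('/dev/ttyUSB0', 115200)
#   port.write(pack_ball_message(ball_x, ball_y))
print(pack_ball_message(160, 120).hex())  # -> 'aaa0007800c2'
```

Designing, debugging and documenting even a tiny protocol like this is exactly the kind of work the Pixy’s ready-made SPI/I2C interface does for you.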

As I said in this reply to a suggestion on the rules repo to ban devices like Pixy, the Open league has changed quite a lot in the past two years. Pixy (and friends) has helped to ease the transition, but the way I see it, the situation may change once again. Banning PixyCam would really be quite arbitrary, but it may be that the next version of the Soccer Open challenge will make it quite obsolete.

Once again, thank you for this interesting discussion and feel free to continue – I’ll do my best to respond in a more timely manner this time.

I do get how you feel about it, and I do not necessarily disagree with the spirit of the challenge. However, I think we must go by the evidence of application.

Let’s take the point about "Put your OpenCV code together (whether in Python, C++ or something else) and you are left with a pretty difficult architectonic/logistical challenge: …" By the same rationale, one could go as far as banning anyone from using pre-existing image-processing features. Where are we going to draw the boundary?

Perhaps, that should have gone under the category of hardware design!?

In the end, we both agree that banning a device like the Pixy Cam really would be arbitrary.

Sorry, this can be a really lengthy discussion indeed. Back to work for now.

I am sorry if I am reading your comment the wrong way, but it seems to imply that devices like the Pixy Cam would be banned. That is certainly not the case in the current draft of the rules, and I am also not aware of any material plan for banning them in the upcoming years. This is further supported by my agreeing that a ban like that would be arbitrary.

I would like to address one specific point in your post, namely

I do not understand exactly what point you are alluding to, but we can certainly debate to what extent it makes educational sense for the rules to force Soccer teams to “reinvent the wheel” (sometimes literally). My personal take is that the rules should not regulate more than they have to, for they should allow for as much innovation as possible. Using my heuristic of a “part not being RCJ Soccer only,” I would not be OK with teams using an industrially produced dribbler, but I am quite OK with industrially produced omniwheels.

Certainly, but I also believe it is quite necessary :slight_smile:.

I think banning the Pixy would just deepen the divide between beginner and advanced groups.
Do not forget that even if the coaches and organizers keep learning every year, the students keep changing all the time.
A new team is happy moving the robot without leaving the field and sometimes seeing the goals and the ball, while experienced teams can perform great tasks in predicting the ball’s movement and calculating smart robot moves.
At the European Open in Italy 2018 the “Pixy” teams had no chance against the “Raspberry Pi Cam” teams.
The team from Slovakia, using a 1000+ Euro NVIDIA-based camera solution with up to 122 fps, was dominating the open games.
So banning the Pixy will not increase the experienced teams’ effort to use their own camera system, as they do that anyway.
It would only keep new teams away from the game, as they will not manage to get all the new tasks running in one year.
Even more, I think this would only encourage experienced coaches to “help” their teams with an unwanted amount of professional work.


Totally agree with Stiebel and most of Marek’s comments. Stiebel’s opening paragraph makes an important point. Teachers, Coaches and Technical Committees must remember that we get novice students every year. The Pixy camera is just another sensor which takes advantage of existing libraries (Arduino) or EV3 blocks (Mindstorms). First or second year students will still have to face interesting challenges to produce functional robots…

Thanks for stopping by @profspina!

In the interest of completeness, would you mind sharing where you would disagree with my comments? It is quite probable I may have missed something, and so I would be very interested in seeing the issue from your point of view.

Thanks again!

– Marek

Regarding prebuilt camera discussion:
I believe that the ability to add vision SENSORS to our robots, PIXY or otherwise, was a soccer innovation waiting to happen. The low-cost PIXY-like cameras made this sensor available to most teams. I agree that Technical Committees should strive to keep rules as simple as possible; however, Stiebel’s reference to a team from Slovakia spending over 1000 Euros is of great concern. Such elitism does not do much to level the playing field or, for that matter, promote innovation in the game. I hope that team was registered for Open Soccer and that they got a thorough interview.

[quote=“mareksuppa, post:5, topic:453”]
Banning PixyCam would really be quite arbitrary, but it may be that the next version of the Soccer Open challenge will make it quite obsolete.
[/quote]

Marek: Can you elaborate on the quote above?

Yes technology will without doubt continue to provide us with newer and better sensors. For now, the PIXY vision sensor gives teams useful and affordable technology without diminishing the challenge of producing competitive robots.


What I meant by “Banning PixyCam would really be quite arbitrary, but it may be that the next version of the Soccer Open challenge will make it quite obsolete.” is very similar to what you mention in the last paragraph of your post: the technological landscape may change so drastically that the PixyCam would no longer be relevant. At the same time, as the Junior leagues try to get closer and closer to the Major ones, the Open league especially may need to step up in terms of the technology used. Combining these two factors would most probably make the PixyCam quite obsolete.

That being said, however, I do not think this will happen in the next couple of years.

Does that answer your question? Feel free to follow up with another one – I’ll happily continue the discussion here.


– Marek

Hi Marek,
Thanks for clarifying that point. I needed to confirm that I understood your comment correctly and that there were no modifications to RCJ vision restrictions in the foreseeable future. Technical evolution is both normal and expected; however, we must always remember the presence of new entry-level participants in RCJ activities.

Hi @profspina ,
Greetings from Slovakia.

We see no problem with using the Pixy cam. Using a USB/Ethernet/PCI Express camera and a Jetson as a solution for image processing is a step forward. We learned how to process raw data from the camera using CUDA and how to optimize for the GPU. We didn’t buy this camera; we received it as a gift from our sponsor (a Slovakia-based company), in the same way German, Swiss, Austrian and other teams received, for instance, Maxon/Faulhaber motors worth hundreds of euros as sponsor gifts. Still, we believe our robot is way less expensive than the average cost of one top-class RoboCup Junior robot. We think building our own image processing solution is of great importance, especially because moving from Junior to Major would require ditching the Pixy altogether. It is basically useless in any Major category.

We don’t have a mentor who is able to help us with technical stuff, so this isn’t our case. We had a regular interview at every single competition we participated in. Unwanted help should be visible in any well-done interview. I don’t think the other side understood the complexity of our solution.

Many thanks
Team Compotes

Hi Team Compotes,
just to make it clear: we absolutely agree that your team did an extraordinarily great job!
Using your team and your robot as an example was not meant to say your team was getting too much sponsoring or too much help. And I am sure your team members did that great stuff without too much help from a mentor!
Your team deserved to be European champion!

I just wanted to show by this example that using a Pixy is great for a start. It is easy to use, so teams on the edge between the lower leagues and 2vs2 have a chance to get quick results. And on the other hand, there is no question that a proper self-made system (like yours) is far better, so the use of a Pixy does not blur the results at the top of the league.

Roland, Coach of the Bohlebots.
European 3rd in Soccer 2vs2 with a Pixy, and adoring the Slovakian team for their performance
at the European Open becoming 1st!


I’m new to this type of discipline, so I’m trying to learn something about the software and the ball recognition.
I understand that OpenCV is a library which helps you do the job, but I can’t figure out whether it is allowed under the regulations, and what the differences are between it and the PixyCam.
Can someone help me clear up my ideas, please?

Hi @Blu3Wallaby,

First of all, please do not let the length or occurrence of strange words in this thread discourage you from trying out RoboCupJunior Soccer!

Yes, the use of OpenCV is currently well within the regulations, so feel free to use it to put together something for ball recognition.

You can find quite a lot of materials on this very topic online. I would also like to point out that a very active member of this forum, @elizabeth.mabrey, has shared a very nice set of materials in this thread on OpenCV, which may serve as a good starting point.

Hope this helps!

– Marek
