The preface of 2019's rules states that "The objective is to create a robotic performance of 1 to 2 minutes that uses technology to engage an audience." According to the rubrics, however, 60% of a team's assessment takes place off stage. Is the intention to place the focus more on teams using advanced/complex tech than on creating an engaging performance?
There also seem to be no safeguards preventing teams from presenting their technical abilities under the more controlled conditions of an interview rather than demonstrating that the tech holds up under the spotlight. Could/should a team earn credit for a use of technology that may not be reliable, or that malfunctioned during the actual performance(s)?
A team could easily use a product like OpenMV for vision recognition while understanding very little about how it works, yet potentially beat out another team that studied a basic color sensor in depth to accomplish a similar task. Are judges looking for the growth in understanding a team gains during a season, or simply looking to recognize teams that have figured out how to implement the latest and greatest?
In short, it seems a team can deliver a lackluster or even malfunctioning performance, with little understanding of the tech involved, and still be competitive as long as it can demonstrate something built from slightly modified sample code or the guided instructions that shipped with a product. Could there be any consideration for the 2020 season to align the rubrics more closely with the preface statement, and/or to clarify in the rules what teams should focus on?
Thank you for providing an opportunity to give feedback openly.