GameSense Feedback: Best Practices

How to give proper Feedback on your GameSense Camera Vision!

Written By Taylor Reed

Last updated 3 months ago

Hello Drawbridge Users,

For those of you using GameSense, I wanted to clarify how things have evolved over the last year and what that means for you, especially when it comes to using the Thumbs Up and Thumbs Down voting. I put the key takeaways in bold and added explanations and context to help everyone understand more about how the system works.

How we got here:

Until last spring, we were training camera models using a library that required over 3,000 sample images per objective. We had to label each image by drawing boxes around the relevant areas, and then mark whether it showed the "solved" or "unsolved" state. This took an extremely long time (hundreds of hours), but once it was completed, the models needed very little refinement because they had so much training data. This approach was highly reliable from day one, but it was not practical or scalable, because every venue has cameras and props in different locations, and sometimes different lighting. Once we knew we had to retrain for each venue, we went looking for an easier way.

In May, we started deploying GameSense using a custom model that used more advanced algorithms for matching images. We can now train a detection with just a handful of images. We still try to get different lighting and need to keep the cameras from switching between dark mode and daytime mode, but the training process became orders of magnitude easier and faster. We were able to deploy the remaining detections that people had ordered, and last month we added a web interface to allow you to do this yourself.

This new method means we can deploy much faster. It also means that the models may not be as accurate during those first few weeks. Therefore, it is really important to use the Thumbs Up and Down icons for the first 2-3 weeks of GameSense.

As a side note, please give GameSense up to 4 seconds to close the objective before you vote. Sometimes there are shadows, fog, or other reasons your eyes may be a bit faster in seeing a box open or a puzzle solved. GameSense is not meant to be faster on the trigger than a very attentive Gamemaster; it is meant to be more consistent than a Gamemaster trying to constantly keep up with multiple games. A bit slower, but slow and steady wins the race in this case. Guests do not normally ask for a hint within 3-4 seconds of completing an objective, so increasing the close time above 4 seconds would not serve the intent of GameSense.
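The close delay described above works like a simple confirmation window: the detector must report "solved" continuously for several seconds before the objective actually closes, so a momentary shadow or fog blip does not trigger it. Here is a minimal sketch of that idea. All names and the 4-second value are illustrative, not the actual GameSense code:

```python
import time

CONFIRM_SECONDS = 4.0  # hypothetical close delay, matching the guidance above


class ObjectiveCloser:
    """Close an objective only after the detector has reported
    'solved' continuously for confirm_seconds."""

    def __init__(self, confirm_seconds=CONFIRM_SECONDS, clock=time.monotonic):
        self.confirm_seconds = confirm_seconds
        self.clock = clock        # injectable clock, handy for testing
        self.solved_since = None  # when the current 'solved' streak began
        self.closed = False

    def update(self, detector_says_solved):
        """Feed one detector reading; returns True once the objective closes."""
        if self.closed:
            return True
        if not detector_says_solved:
            # A brief false reading (shadow, fog) resets the streak.
            self.solved_since = None
            return False
        if self.solved_since is None:
            self.solved_since = self.clock()
        if self.clock() - self.solved_since >= self.confirm_seconds:
            self.closed = True
        return self.closed
```

This is also why a momentary blip can make GameSense look a beat slower than an attentive Gamemaster: any interruption restarts the confirmation window.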

When you do click that Thumbs Up or Down icon, we log it in our database. Nothing happens automatically. Every Monday, we check results for every site for the last week. If we see more than an occasional Thumbs Down, we start recording game footage in those games. After a week or so (depending on game traffic) of recording new footage, we annotate and manually add several of these new images to the training data for the model. We may remove existing training images if we find they are not helping with accuracy. Then we update your camera detection model for that objective.
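The Monday review described above amounts to a threshold check over each site's vote log for the past week. Here is a hypothetical sketch of that check. The threshold value, function name, and data shape are all illustrative assumptions, not the actual pipeline:

```python
from collections import Counter

# Illustrative cutoff for "more than an occasional Thumbs Down" in one week.
WEEKLY_THUMBS_DOWN_THRESHOLD = 3  # hypothetical value


def detections_needing_footage(votes):
    """votes: iterable of (site, objective, vote) tuples for the past week,
    where vote is 'up' or 'down'. Returns the (site, objective) pairs whose
    Thumbs Down count exceeds the threshold, i.e. candidates for recording
    new game footage and retraining."""
    downs = Counter(
        (site, objective)
        for site, objective, vote in votes
        if vote == "down"
    )
    return sorted(
        key for key, count in downs.items()
        if count > WEEKLY_THUMBS_DOWN_THRESHOLD
    )
```

For example, a detection with five Thumbs Down in a week would be flagged for footage capture, while one with two would not.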

We do updates early in the week, not on Fridays or weekends unless it is an emergency.

During this initial period, it is very helpful to have as much voting data from users as we can get. If you are giving lots of Thumbs Down, give us a week or two as we work to capture more footage and retrain the model. Keep giving feedback as long as you feel the model is not as accurate as you would like.

Once GameSense is doing an accurate job, which can take anywhere from immediately to a month or two, you do not need to keep voting unless you start to see GameSense losing accuracy. Then use the Thumbs Down and it will stand out to us as something we need to look at. You do not need to give a Thumbs Up so that it doesn't "forget" what good looks like. It will never forget. That's the beauty of automation.

GameSense is 97%+ accurate. That still means that if we run 100 detections, 3 will not be correct. Keep in mind most games have 6-8 detections, so about once every 20 games you can have a GameSense detection that does not close when it should, or that closes too early. The root cause varies, but at some point the effort of adding more training images outweighs the benefit (the law of diminishing returns kicks in). We still believe this is much better than an average employee, and if it is becoming a problem we are happy to discuss specific concerns. This is why Autohint is never immediate and has a variable timeout: the Gamemaster should always take a quick glance and confirm that the hint being proposed is actually correct for where the team is in the game. For our Gamemasters, this takes less than 6 seconds, even when watching 6 games at once.

I hope this all makes sense. Our camera training process has evolved over time. We initially explored automatic retraining, but for now we are doing it manually in order to learn and improve from first-hand experience.

Thank you for using Drawbridge and GameSense! If you have any questions, please let us know.

-Kevin

Contact Us: support@drawbridgesolutions.com