The magic that delivers photos to every phone in a stadium in only a few minutes is a unique convergence of hardware, software, and a little human touch.

While we are proud of the technology and innovations that power SocialVenu, we want to acknowledge that all these fancy robots, open source electronics, and elegant code are just tools that highlight the really important part of our company: capturing those unreal moments we all experience at live events.

Most of the lights in a stadium are aimed at the action on the field, so shooting the stands presents tricky lighting conditions. Distance and available light determine the correct camera and lens combination. The courtside seats are often intensely brighter than the seats in the nosebleeds, so we customize our installs for each stadium with a mix of advanced optics.
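The idea of matching exposure to each seating zone can be sketched as a simple lookup. The zone names and exposure values below are purely illustrative, not our actual calibration data:

```python
# Hypothetical per-zone exposure presets. Brighter zones (courtside)
# get faster shutters and lower ISO; dim upper decks get the reverse.
EXPOSURE_PRESETS = {
    "courtside":  {"shutter_s": 1 / 500, "iso": 800},
    "lower_bowl": {"shutter_s": 1 / 250, "iso": 1600},
    "upper_deck": {"shutter_s": 1 / 125, "iso": 3200},
}

def preset_for(zone: str) -> dict:
    """Look up the exposure settings a camera should use for a zone."""
    return EXPOSURE_PRESETS[zone]
```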

It takes 150–200 photos to capture an entire stadium, and it would be prohibitively expensive to install that many cameras in each venue. The solution is to put each camera on a robotic gimbal and give it a sequence of photographs to capture. This cuts the number of cameras per stadium down to 10–14, which makes our installs much simpler.
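The arithmetic behind that reduction is straightforward. If each gimbal-mounted camera sweeps through roughly 15 shots, the camera count falls out of a ceiling division:

```python
import math

def cameras_needed(total_shots: int, shots_per_camera: int) -> int:
    """How many gimbal-mounted cameras cover the venue if each one
    sweeps through a fixed sequence of shots."""
    return math.ceil(total_shots / shots_per_camera)

# 150-200 photos at ~15 shots per camera works out to 10-14 units
cameras_needed(150, 15)  # → 10
cameras_needed(200, 15)  # → 14
```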

The issue we then run into is that each camera has a sequence of ~15 images to capture, but celebratory moments happen quickly. Our benchmark for fast is capturing the entire stadium in 5 seconds; after that the cheering subsides and we mostly capture people eating popcorn, checking email, or chatting with their neighbors. So we’ve specifically tuned our motors and optimized our code so that the robots can move into position, focus, and shoot in 300 ms.
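Those two numbers line up: 15 shots at 300 ms per shot is 4.5 seconds, just inside the 5-second window. A quick sanity check of the timing budget:

```python
def sweep_time_ms(shots: int, ms_per_shot: int) -> int:
    """Total time for one camera to move, focus, and shoot its sequence."""
    return shots * ms_per_shot

# 15 shots at 300 ms each = 4500 ms, inside the 5-second cheering window
assert sweep_time_ms(15, 300) <= 5000
```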

Once we’ve figured out how many (and which) bodies and lenses we’ll need to cover the whole stadium, the next step is fairly straightforward: install them. We pick the most suitable locations and mount the units, often in four clusters at the corners of the stadium. Then we embark on the tedious task of calibrating and indexing the entire venue.

The first step is calibration: programming each camera with its specific shot sequence. Each photo captures 100–150+ fans, but we need to ensure we aren’t sending people at the edges pictures of themselves with missing limbs, so we carefully overlap consecutive images.
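One way to check a calibration sequence is to verify that consecutive frames overlap by some minimum angle, so a fan straddling one frame's edge appears whole in the next. This is a sketch with made-up pan angles and field-of-view numbers, not our actual calibration tooling:

```python
def frames_overlap(pan_angles_deg, fov_deg, min_overlap_deg):
    """Check that each pair of consecutive shots in a sequence
    overlaps by at least min_overlap_deg degrees of pan."""
    for a, b in zip(pan_angles_deg, pan_angles_deg[1:]):
        # Right edge of frame a minus left edge of frame b
        overlap = (a + fov_deg / 2) - (b - fov_deg / 2)
        if overlap < min_overlap_deg:
            return False
    return True

# A 30-degree lens stepped 25 degrees at a time leaves 5 degrees of overlap
frames_overlap([0, 25, 50], fov_deg=30, min_overlap_deg=5)  # → True
frames_overlap([0, 28, 56], fov_deg=30, min_overlap_deg=5)  # → False
```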

Once we are sure that every seat is covered, we can move on to indexing, which looks at each step of the sequence and marks every seat within it. We’ve built a specialized admin tool for just this step, with drawing tools that let us efficiently tag every seat in a stadium.
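The output of indexing is essentially a map from each seat to the camera, sequence step, and crop region that contains it. The seat labels, IDs, and coordinates below are invented for illustration:

```python
from typing import NamedTuple, Tuple

class SeatTag(NamedTuple):
    """Where one seat appears in the capture sequence (illustrative)."""
    camera_id: int
    step: int                       # which shot in that camera's sequence
    box: Tuple[int, int, int, int]  # (x, y, w, h) crop region in the photo

# A hand-tagged index like the one the admin tool might produce
seat_index = {
    "Sec101-RowA-Seat1": SeatTag(camera_id=3, step=0, box=(120, 340, 80, 110)),
    "Sec101-RowA-Seat2": SeatTag(camera_id=3, step=0, box=(205, 338, 80, 110)),
}

def photo_for_seat(seat: str):
    """Look up which camera and sequence step photographed a seat."""
    tag = seat_index[seat]
    return tag.camera_id, tag.step
```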

At this point you may be wondering: how do the robots actually know when a big play is made and it’s time to start shooting? The answer is humans! A Gameday Operator watches every game from within the stadium and controls the entire system, running a custom tool we’ve built called the Venue Manager.

Each robot is equipped with an electronics system that interprets commands from the Gameday Operator and turns them into motion and shutter releases. These units combine several open source and proprietary components, but the two most notable are the Roboteq motor controller and the Raspberry Pi B+. The Pi is a mini computer that controls all the other parts, commanding the Roboteq to move the gimbal and instructing the camera to capture.
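The Pi's job boils down to a loop over the calibrated sequence: aim, then shoot. The actual wire protocol between the Pi, the motor controller, and the camera isn't something we can show here, so in this sketch `move_gimbal` and `fire_shutter` are stand-in callables injected by the caller:

```python
def run_sequence(sequence, move_gimbal, fire_shutter):
    """Step through a calibrated shot list: aim the gimbal, then shoot.

    sequence     -- list of (pan, tilt) targets from calibration
    move_gimbal  -- callable(pan, tilt); e.g. a serial command to the
                    motor controller (hardware details omitted here)
    fire_shutter -- callable(); e.g. a remote-release pulse to the camera
    """
    for pan, tilt in sequence:
        move_gimbal(pan, tilt)
        fire_shutter()

# Record what would be sent, in place of real hardware
log = []
run_sequence(
    [(0, 10), (25, 10)],
    move_gimbal=lambda p, t: log.append(("move", p, t)),
    fire_shutter=lambda: log.append(("shoot",)),
)
```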

As photos start coming in, the Venue Manager uploads them to the server, where they are cropped and distributed to fans.
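The cropping step can be pictured as cutting each tagged seat's region out of the full frame using the boxes recorded during indexing. A minimal sketch, treating a photo as a row-major grid of pixels:

```python
def crop(photo, box):
    """Cut one fan's region out of a full-frame photo.

    photo -- row-major grid of pixels (list of rows)
    box   -- (x, y, w, h) region from the seat index
    """
    x, y, w, h = box
    return [row[x:x + w] for row in photo[y:y + h]]

# A tiny 4x4 "photo" of labeled pixels; crop the 2x2 center
photo = [[col + str(row) for col in "abcd"] for row in range(4)]
crop(photo, (1, 1, 2, 2))  # → [['b1', 'c1'], ['b2', 'c2']]
```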