Brief VR Study: Technical Highlights
The Making of La Reina Del Sur VR Promo
Two Goats was tasked with an incredible project, working alongside NBCU, Google and Telemundo. We wrote, directed and produced a promotional short that challenged us in really exciting ways. The telenovela La Reina Del Sur (LRDS) has become a cult classic in the Hispanic market, and Teresa, the main character, has a massive fanbase. This motivated Telemundo to take a completely different approach to promoting LRDS: Teresa’s fans are rewarded with an immersive, never-before-seen VR experience that will make them sweat.
Fans get an immersive, nail-biting, nerve-wracking, blood-pumping experience as they live through a virtual reality simulation of being kidnapped and tortured by a psychopath and his best friend, “Mr. Giggles” the snake. The experience pushes viewers to the edge of their worst fears and phobias. At the end, fans are rewarded with an incredible opportunity: they are standing in front of Teresa at an exclusive location, surrounded by her bodyguards, as she speaks directly to them. They will never be this close to Teresa again.
This was such a fun project for us that we decided to dive deeper into the technical highlights, hopefully starting a conversation and getting some VR nerds to step in and give us feedback. We broke it down below into sections: Creative, Post Production, VFX/Online Edit and Sound. If we don’t work together towards growing this space, we’ll never see it evolve into what it can be. We hope you’ll appreciate these notes; please leave comments below. Enjoy!
Scenes 1 and 2 were shot in Madrid, Spain. We decided to shoot in quadrants with Panasonic GH5 cameras, using a mix of quadrants and green screen/CGI to recreate a fully realistic stereoscopic environment. This gave us some of the most impressive 360º quality on the market.
For this scene we also decided to use a mix of quadrants, photogrammetry and stage lighting to recreate a photorealistic stereoscopic environment.
Given that this was a partnership with Google, we were given two Google Yi Halos, which afforded us some simplicity in terms of portable on-location stereoscopic production with good natural light. We shot Scene 3 in Cartagena, Colombia with the Yi Halos. This setup allowed us to film the scene in minimal time while still achieving the highest Stereo-3D-360º quality, working right at the limits of Google’s recording technology in terms of light and positioning.
Google Jump System:
It was a real privilege to use the Google Jump system. This proprietary Google technology, not openly available, consists of the Yi Halo VR camera and access to Jump Assembler (Google’s Stereo-3D-360º cloud stitching platform).
The stereo quality of Google’s Jump system is at the top of the industry, built on Google’s proprietary image-analysis and depth algorithms, which result in some of the best Stereo-3D-360º imaging possible.
Custom-made Stereo-3D-360º rig:
For Scenes 1 and 2 we used a custom-made Stereo-3D-360º rig. The light and recording conditions of these two scenes led us to a different camera and stitching approach. However, having already used the Yi Halo on the first scene we recorded (Scene 3), we had to conform to some very high Google recording specifications.
Following some research we decided to film with a custom-designed and custom-built dual-camera rig. After considering several possible solutions, we settled on the Panasonic GH5 as our main camera body, paired with extreme fisheye lenses custom-produced for us in Japan.
With this setup we could keep within a reasonable stereo inter-axial distance (very important for the stereo imaging) while filming our sources at 4K @ 60 FPS directly to external recorders in ProRes 4:2:2 10-bit (which was also essential for the chroma keying involved in Scene 1).
The rig still gave us a final resolution of 6700×6700 @ 60 FPS, which exceeded Google’s technical specifications.
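To give a sense of why inter-axial distance matters so much for stereo comfort: for a simple parallel-axis stereo pair, on-screen parallax grows with inter-axial separation and shrinks with subject depth. A minimal sketch with hypothetical numbers (neither the inter-axial distance nor the focal length here are the rig’s actual values):

```python
# Illustrative only: approximate horizontal disparity for a
# parallel-axis stereo pair. All values are hypothetical, not
# the production rig's actual parameters.

def disparity_px(interaxial_m, focal_px, depth_m):
    """Horizontal parallax (pixels) of a point at depth_m:
    d = IA * f / Z for a parallel-axis pair."""
    return interaxial_m * focal_px / depth_m

IA = 0.064   # ~64 mm, close to average human interpupillary distance
F = 1200.0   # focal length expressed in pixels (hypothetical)

for z in (0.5, 2.0, 10.0):
    print(f"depth {z:4.1f} m -> disparity {disparity_px(IA, F, z):6.1f} px")
```

The takeaway is that near objects produce very large disparities, which is why a rig that drifts outside a sensible inter-axial range (or a stitch that shifts pixels near a seam) quickly becomes uncomfortable to view.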
Stereo-3D-360º stitching technique used on Scenes 1 and 2:
The technique used for stitching the footage captured with this custom-made rig was also based on a proprietary approach that mixes stereo 360º photo techniques with live-action stereo fisheye video.
One of the main problems with most stereo multi-camera rigs currently on the market is that they lack a good stereo reference, which produces uncomfortable stereo disparities when stitching (most noticeable at the stitching seams).
In our case, the use of a good stereo 360º photo base is a very specialised approach that not only provides a comfortable stereo frame from the start (matching the stereo quality of the Jump system at the very least), but also allows for a high-dynamic-range clean-plate photo image that is not easily achievable with other camera systems.
Scene 1: a full stereo 3D CGI with live-action composition:
This was a very challenging scene in which all acting was first captured on a full 360º chroma set. The stereo live capture was then composited in VFX with complete CGI stereo background reconstructions and blending. This required very advanced Stereo-3D-360º skills, for which new technical approaches were developed. The difficulty of this sort of scene makes it very rare in 360º VR productions.
VFX / ONLINE EDIT
We had to create a completely new pipeline to achieve what the piece needed. A single frame weighs around 150 MB at full resolution. This is a 60-frames-per-second piece, which is lovely at that frame rate because of the smoothness you feel within the environment, but huge in terms of material: one second of La Reina weighs 9 GB, and we are talking about a five-and-a-half-minute piece. Everything in a VR piece is huge in size, challenging, time-consuming, and has to be managed with great care. We therefore had to create a new pipeline for this project that would allow traditional goals to fit within a non-traditional process.
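The storage figures above can be sanity-checked with some quick arithmetic (decimal units, starting from the rough 150 MB-per-frame figure quoted in the text):

```python
# Sanity-checking the data budget quoted above (approximate,
# decimal GB/TB, using the rough 150 MB-per-frame figure).

frame_mb = 150          # one full-resolution frame, ~150 MB
fps = 60                # delivery frame rate
duration_s = 5.5 * 60   # five-and-a-half-minute piece, in seconds

per_second_gb = frame_mb * fps / 1000        # 9.0 GB per second
total_tb = per_second_gb * duration_s / 1000 # whole piece, one pass

print(f"{per_second_gb:.1f} GB/s, ~{total_tb:.1f} TB total")
# -> 9.0 GB/s, ~3.0 TB total
```

Roughly three terabytes for a single uncompressed pass of the finished piece, before counting source footage, intermediates and renders, which is why the pipeline had to be rebuilt around data handling.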
Another part of the challenge was creating a hyper-realistic environment: wherever you look, you feel as if you are there, alone with a maniac in the rainforest, and that feeling has to last for the length of the scene. The thrills and the interactions between our actors and the environment all have to work in stereo. Working in stereoscopic 3D differs greatly from traditional cinema because the viewer feels the depth of everything; if a CGI element is not placed in exactly the right spot within the environment, the spectator will feel dizzy, instantly sense that something is wrong, and the piece is ruined.
We also wanted interactions between our CGI, our actors and our set: the flying birds, the leaves so close to our platform, the electrocardiogram, and so on. A couple of wrong pixels to the left or the right will completely break the magic of the reality we were aiming to create, and in an instant it’s all gone. There was a good amount of trial and error on our part, but we achieved what we wanted to show.
Achieving a realistic rainforest in CGI, which also needed movement, was another highlight on the road to finishing the piece. Trees and plants are composed of several thousand leaves that move independently with the wind and have to be animated realistically for each plant. This gave the viewer an environment where they are surrounded by almost one hundred trees and plants recreated in 3D, for a fully immersive experience.
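The core idea behind believable foliage motion is that every leaf shares the same wind but responds with its own phase and amplitude, so nothing moves in lockstep. A minimal sketch of that idea (a toy model, not the production animation setup, and all parameter values are hypothetical):

```python
# A toy sketch (not the production toolchain) of per-leaf wind:
# each leaf sways at a shared wind frequency, but with its own
# random phase and amplitude, so plants never move in unison.
import math
import random

def leaf_sway(t, phase, amplitude, wind_hz=0.4):
    """Angular offset (degrees) of one leaf at time t (seconds)."""
    return amplitude * math.sin(2 * math.pi * wind_hz * t + phase)

random.seed(7)  # deterministic for this demo
leaves = [(random.uniform(0, 2 * math.pi),   # per-leaf phase
           random.uniform(2.0, 8.0))          # per-leaf amplitude, degrees
          for _ in range(5)]

for t in (0.0, 0.5):
    angles = [round(leaf_sway(t, p, a), 2) for p, a in leaves]
    print(f"t={t}s -> {angles}")
```

Scaled up to thousands of leaves across a hundred plants, this kind of decorrelated motion is what keeps a CGI forest from reading as a single rigid texture.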
SOUND
There are common challenges regarding audio on every 360º project; the main one is the absence of picture cuts. We are used to enhancing rhythm and narrative through edits and perspective changes, but 360º pieces are closer to a theatrical play than to cinema. In this particular case, the first scene sits somewhere between action and terror, so not being able to use cuts made it challenging to create the right atmosphere. The action also takes place in the jungle, where lots of sounds are needed to make it believable, so we had to drive the viewer’s attention through the important elements of the story, without losing the illusion of immersion, by raising and lowering the ambience and using POV sounds to transition between moments of confusion and fear.
The other common 360º challenge is dynamic range and mixing for various types of devices. With YouTube as the main platform for the piece, the playback system can be almost anything from desktop speakers to pro audio headphones. Dialogue in the first scene also goes from terror-screaming to whispering, so finding the right balance between preserving dynamic range and hitting levels that make the piece watchable in everyday spaces through regular headphones was a great challenge.
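That trade-off is essentially what a downward compressor negotiates: loud moments are pulled toward the quiet ones so the whole range fits small speakers. A toy sketch of the idea (the threshold and ratio here are hypothetical, not the values used in this mix):

```python
# Illustrative only: a toy downward compressor showing the
# trade-off described above -- taming screams so whispers stay
# audible on small speakers. Threshold/ratio are hypothetical.

def compress_db(level_db, threshold_db=-18.0, ratio=3.0):
    """Levels above the threshold are reduced by the given ratio;
    levels at or below it pass through unchanged."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

for label, lvl in (("whisper", -40.0), ("dialogue", -20.0), ("scream", -3.0)):
    print(f"{label:8s} {lvl:6.1f} dB -> {compress_db(lvl):6.1f} dB")
```

With these hypothetical settings the whisper is untouched while the scream drops by 10 dB, narrowing the gap between the extremes without flattening the mix entirely.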