"Virtual Reality Filmmaking presents a comprehensive guide to the use of virtual reality in filmmaking, including narrative, documentary, live event production, and more. Written by Celine Tricart, a filmmaker and an expert in new technologies, the book provides a ands-on guide to creative filmmaking in this exciting new medium, and includes coverage on how to make a film in VR from start to finish. Topics covered include: The history of VR; VR cameras; Game engines and interactive VR; The foundations of VR storytelling; Techniques for shooting in live action VR; VR postproduction and visual effects; VR distribution; Interviews with experts in the field including the Emmy-winning studios Felix&Paul and Oculus Story Studio, WeVR, Viacom, Fox Sports, Sundance’s New Frontier and more."
Acknowledgments
List of Abbreviations
Introduction and Definitions
Part I Theoretical and Technical Foundations
1 History of VR
with Eric Kurland
2 Live-Action VR Capture and Post-Production
3 Game Engines and Interactive VR
4 VR Headsets and Other Human-VR Interfaces
Part II Virtual Reality Storytelling
5 VR: A New Art?
6 VR as a Storytelling Tool
7 Make a Film in VR from Start to Finish
The Future of VR/Conclusion
Part I Theoretical and Technical Foundations
History of VR
with ERIC KURLAND
Eric Kurland is an award-winning independent filmmaker, past president of the LA 3-D Club, Director of the LA 3-D Movie Festival, and CEO of 3-D SPACE: The Center for Stereoscopic Photography, Art, Cinema, and Education. He has worked as 3D director on several music videos for the band OK Go, including the Grammy Award-nominated “All Is Not Lost.” He was the lead stereographer on the Academy Award-nominated 20th Century Fox theatrical short “Maggie Simpson in ‘The Longest Daycare’,” and served as the production lead on “The Simpsons VR” for Google Spotlight Stories. In 2014, he founded the non-profit organization 3-D SPACE, which will operate a 3D museum and educational center in Los Angeles.
Figure 1.1 Hugo Gernsback
While virtual reality is a relatively new innovation, the state of the art is greatly informed by the many forms of immersive media that have come before. For practically all of recorded history, humans have been trying to visually represent the world as we experience it. Primitive cave paintings, Egyptian hieroglyphs, and Renaissance frescos were early attempts to tell stories through images, and while these would not be considered true representations of reality, they do illustrate the historical desire to create visual and sensory experiences.
Figure 1.2 Wheatstone’s stereoscope
Another scientist, Sir David Brewster, refined the design of the stereoscope into a handheld device. Brewster’s “lenticular stereoscope” placed optical lenses onto a small box to allow the viewing of pairs of backlit glass photographic slides and front-lit photo prints. His viewer was introduced to the public at the Great Exhibition of 1851 in London, England, where it became very popular, helped greatly by an endorsement by Queen Victoria herself. An industry quickly developed to produce stereoscopes and 3D photographic images for viewing.
Figure 1.3 Brewster’s stereoscope
In the United States, author and physician Oliver Wendell Holmes, Sr. saw a need to produce a simpler, less expensive stereoscope for the masses. In 1861 he designed a stereoscope that could be manufactured easily, and specifically chose not to file a patent in order to encourage its mass production and use.
Figure 1.4 Holmes’ stereoscope
Throughout the latter half of the 19th century and until the 1920s, stereoscopes became a ubiquitous form of home entertainment. Companies such as the London Stereoscopic Company, the Keystone View Company, and Underwood & Underwood sent photographers around the globe to capture stereoscopic images, and millions of images were produced and sold. Stereo cards depicted all manner of subjects, from travel to exotic locations, to chronicles of news and current events, to entertaining comedy and dramatic narratives. Stereo viewers were also used in education, finding their way into the classroom to supplement lessons on geography, history, and science. Much of the appeal of these stereoscopic photographs was the ability of a 3D image to immerse the viewer and virtually transport them to faraway places that they would never be able to visit in person.
Figure 1.5 Stereoscopes in use in a classroom, 1908
Painter Robert Barker built a rotunda in London in 1793 specifically for the exhibition of his panoramic paintings. The popularity of Barker’s panorama led to competition, and many others were constructed throughout the 1800s. Historically, a panorama (sometimes referred to as a cyclorama) consisted of a large 360° painting, often including a three-dimensional faux terrain and foreground sculptural elements to enhance the illusion of depth and simulated reality, and building architecture designed to surround the spectator in the virtual environment.
Figure 1.6 A panorama
Figure 1.7 Illustrated London News “Grand Panorama of the Great Exhibition of All Nations,” 1851
Figure 1.8 Cinéorama, invented by Raoul Grimoin-Sanson for the Paris Exposition of 1900
The grand panoramas of the period created the illusion for the audience of standing in the middle of a landscape or scene while the depicted events were happening. These paintings in the round served both to entertain and to educate, often depicting grandiose locations or great historical events. Panoramas proved to be very successful venues, with over 100 documented locations in Europe and North America. Some notable installations include the Gettysburg and Atlanta Cycloramas, painted in 1883 and 1885, which depicted scenes from those American Civil War battles, and the Racławice Panorama in Poland, a massive painting 49 feet high and 374 feet long, painted by artists Jan Styka, Wojciech Kossak, and a team of assistants over the course of nine months from 1893 to 1894 to commemorate the 100th anniversary of the Polish Battle of Racławice. During their heyday in the Victorian period, the panoramas saw hundreds of thousands of guests each year.
The period following the Industrial Revolution brought about a new age of technological advances. The birth of cinema in the 1890s brought to the public a new form of media – the moving image. The apocryphal story of early filmmakers Auguste and Louis Lumière’s 1896 film “L’arrivée d’un train à La Ciotat” sending audiences screaming out of the theater, believing the on-screen train was going to hit them, may be an often-told myth, but it still demonstrates the sense of reality that early motion picture attendees reported experiencing.
Throughout the 20th century, cinematic developments such as color, sound, widescreen, and 3D added to the content creation toolset, as inventors sought new methods and technologies, first analog then digital, to build realistic immersive experiences. One early attempt was the Cinéorama, devised by Raoul Grimoin-Sanson for the Paris Exposition of 1900, which combined a panorama rotunda with cinema projection to simulate a ride in a hot-air balloon over Paris. Footage was first filmed with ten cameras mounted in a real hot-air balloon, and then presented using ten synchronized projectors, projecting onto screens arranged in a full 360° circle around the viewing platform. The platform itself was large enough that 200 spectators were able to experience the exhibit at the same time.
The 1927 French silent film “Napoleon” by director Abel Gance also used a multi-camera widescreen process. Gance wanted to bring a heightened impact to the climactic final battle scene and devised a special widescreen format, given the name Polyvision, which used three stacked cameras to shoot a panoramic view. Exhibition required three synchronized projectors to show the footage as a triptych on three horizontally placed screens, ultimately displaying an image that was four times wider than it was high. While the impact of such a wide-screen picture was dramatic, it was technically very difficult to display properly, as projector synchronization was complicated, and there was no practical method to hide the seams between the three projected frames.
In 1939, at the World’s Fair in New York, filmmaker and special effects expert Fred Waller introduced an immersive theater for the Petroleum Industry exhibit, called Vitarama, which used an array of 11 projectors to project a giant image onto a dome-like spherical screen designed by architect Ralph Walker.
In 1941, Waller took elements of his Vitarama immersive theater and invented a multi-projection simulator for the military. Called the Waller Flexible Gunnery Trainer, its purpose was to train airplane gunners under realistic conditions. They would learn to estimate quickly and accurately the range of a target, to track it, and to estimate the correct point of aim using non-computing sights. To create the footage for the machine, five cameras were mounted in the gun turret position of a bomber and filmed during flight. Combat situations were depicted by having “enemy” planes fly past the cameras. The Waller Flexible Gunnery Trainer itself used a special spherical screen designed by Ralph Walker, and five projectors to create a large tiled image that surrounded four gunner trainees. It featured a 150° horizontal field of view and a 75° vertical one. The trainees were seated at mock gun turrets and engaged in simulated combat with moving targets. In addition to the visual feedback on the spherical screen, the trainees also received audio via headphones, and vibration feedback through the dummy guns. A mechanical system kept score of their hits and misses and provided a final tally. Waller’s system was used by the US military during World War II, and the first installation was in Honolulu, Hawaii, following the Japanese attack on Pearl Harbor.
Figure 1.9 Waller Flexible Gunnery Trainer
Following World War II, Waller devised another multi-camera/multi-projector system for entertainment, and named it Cinerama (for cinema-panorama). Similar to the projection used for Gance’s “Napoleon,” Cinerama used a triptych of three projectors, projecting a seamed three-panel image onto a giant curved screen. A special system was designed to shoot for Cinerama, with three cameras mounted together at 48° angles to each other. The interlocked cameras each photographed one-third of the full wide scene, which filled a 146° arc. The Cinerama theaters used three synchronized projectors spread out across three projection booths, each projecting one-third of the full image onto a curved screen made of forward-facing vertical strips. A mechanical system of vibrating combs in the projectors was meant to limit the light output in the overlapping areas of the three panels, effectively blending the three images into one. In practice, this method did not prove to be overly effective, as parallax differences at the seams still made them apparent. Cinerama was also one of the first theater experiences to utilize stereo surround sound, with a total of seven tracks – five behind the screen and two in the back of the auditorium.
Figure 1.10 Waller’s Cinerama
The First Head-Mounted Displays (HMDs)
The View-Master was conceived by organ maker and stereo photographer William Gruber as a 20th-century redesign of the Victorian stereoscope. The View-Master provided a sequence of seven stereoscopic slide views on a backlit circular reel that could be manually advanced by the viewer. The View-Master was put into mass production by the Sawyers Company of Portland, Oregon, which also hired photographers to create content. As with the stereocards that preceded it, the stereo reels consisted of all manner of subject matter, from travel and scenic wonder, to popular culture and entertainment. Thanks in part to its small form factor and ease of use, it proved to be a very popular consumer product. The View-Master also found a market in education, being used extensively in medical training, where it could provide realistic views of human anatomy and disease. View-Masters have remained in production continuously since their launch, and while the size and shape of the viewers have changed over the years, the design of the reels themselves has changed very little, allowing nearly 80 years of content to continue to be viewed even in modern viewers.
Figure 1.11 The View-Master
Figure 1.12
In 1936, science-fiction writer and inventor Hugo Gernsback (after whom the Hugo Awards for sci-fi literature were named) conceived of what he called Tele-Eyeglasses. He described the future device as a pair of eyeglasses utilizing two miniature cathode ray tubes (CRTs) to provide television images to each eye. His device was intended to be connected by wire to a television set, which would receive and feed the signals to the wearer. Gernsback created mock-ups of his invention, but never produced a working prototype.
Inventor Morton Heilig may have been the first person actually to use the term “virtual reality,” which he used to describe “a simulation of reality.” Heilig designed a head-mounted CRT-based viewing system, which he patented in 1960, and he went a step further than Gernsback, actually building working prototypes of his invention. His device, which he named the Telesphere Mask, offered stereoscopic viewing of moving images by positioning two miniature televisions in front of the viewer’s eyes. Heilig used a special lens arrangement to bend the light from the TV tubes to be visible in the viewer’s peripheral vision, effectively providing a 140° angle of view both horizontally and vertically. The Telesphere also included built-in headphones to deliver binaural sound, and it had a pair of air discharge nozzles which, according to Heilig’s patent, could be used to “convey to the head of the spectator, air currents of varying velocities, temperatures and odors.”
Heilig also developed and built an immersive experiential device he called the Sensorama. Rather than a wearable viewer like the Telesphere, his Sensorama was essentially an immersive theater for an individual viewer, who would be seated inside the machine. Again, Heilig relied on optics to fill the viewer’s field of view with a moving 3D image, multiple speakers to provide surround sound, and air valves to simulate wind and deliver scents. The Sensorama also added the ability to vibrate the seat for a tactile element. Heilig designed his own 3D camera to capture content for the Sensorama, and produced five demonstration films, four of which were ride films depicting the point of view from a bicycle, motorcycle, go-kart, and helicopter. The fifth demo film was an interaction with a human subject, a belly dancer, and featured a fragrant perfume smell as she appeared to approach the viewer. Heilig’s intention was that his devices could be used for training and education, similar to how Waller’s flight simulators had been utilized, but Heilig was never able to make his inventions commercially viable.
Figure 1.13 Morton Heilig’s invention
Figure 1.14 Morton Heilig’s Sensorama
Ivan Sutherland is often referred to as “the father of computer graphics” thanks to his development of the first graphical user interface, Sketchpad, programmed while earning his PhD at the Massachusetts Institute of Technology in 1962. By 1968, Sutherland was an associate professor of electrical engineering at Harvard University, and he and his student Bob Sproull constructed what is considered the first true VR head-mounted display. Nicknamed the “Sword of Damocles” because the weight of the helmet required that it be suspended from a ceiling-mounted arm, Sutherland’s system was able to track the wearer’s head to determine where they were looking and then render a real-time stereoscopic image as a computer-generated vector wireframe. The HMD was somewhat see-through, so the user could see some of the room, making this a very early augmented reality system as well. Sutherland went on to teach at the University of Utah, where he partnered with another professor, David Evans, to found the computer graphics company Evans and Sutherland, which specialized in creating high-end computer-generated imagery for, among other purposes, flight simulators and planetariums. The company is still one of the largest distributors of media for full-dome immersive presentations.
Figure 1.15 The “Sword of Damocles”
Figure 1.16 Nintendo’s Virtual Boy
The 1980s saw the rise of the video game, and several manufacturers ventured into early digital attempts at interactive immersive games. The Vectrex system in 1981 offered a stereoscopic imager that used a head-mounted spinning mechanical shutter to display stereoscopic graphics on its vector display screen. Sega released a stereoscopic option for its gaming system which used liquid-crystal active shutter glasses. In 1993 Sega announced, but never actually released, an HMD-based VR system for its home video game console. Nintendo also released the ill-fated Virtual Boy game system, an attempt at a portable standalone stereoscopic VR game which ultimately failed due to its limited graphics and gameplay capabilities.
1991 saw a first attempt at a VR arcade with the Virtuality system and its game “Dactyl Nightmare.” Some 350 of these coin-operated game systems were produced and placed in shopping malls. Customers would pay their fee and step up onto a large round platform, surrounded by a magnetic ring. The player would don the VR visor and gun, and was required to wear a heavy belt at their waist containing the wires that tethered the devices to the computer controller (either an Amiga 3000 or a 486 computer). The graphics were very primitive polygons, but the tracking worked rather well for its time. Sega’s VR-1 motion simulator arcade attraction was introduced in 1994 in SegaWorld amusement arcades. It had the ability to track head movement and also featured stereoscopic 3D polygon graphics. In 2011, Nintendo released the 3DS handheld game system, which included a built-in stereoscopic camera, positional tracking, and a glasses-free autostereoscopic screen for AR gaming.
Theme Parks
Theme parks and entertainment venues have also made attempts at bringing VR to the public. The Disney VR lab developed a prototype of Aladdin’s magic carpet ride in 1994 and installed it at EPCOT in Orlando, Florida. They refined their HMD, and in 1998 opened DisneyQuest, a theme park specifically for VR, which featured the Aladdin VR attraction.
Circarama (later renamed Circle-Vision 360°) was a 360° panoramic theater, introduced at the Disneyland theme park in 1955, which used nine projectors and nine huge screens arranged in a circle around the audience.
Disney theme parks have also used stereoscopic films and moving theater seats to create immersive movies, such as their “Honey, I Shrunk the Audience,” which simulated the entire theater changing in scale. The “Soarin’ over California” ride (now remade as “Soarin’ Around the World”) at Disney’s California Adventure Park and EPCOT features large-format projection onto a giant dome screen to simulate flight for riders who are suspended in front of it. That attraction also releases various scents into the audience to enhance the realism of the environments. Universal Studios theme parks have also used stereoscopic film and immersive practices in many of their attractions. Their “T2 3-D: Battle across Time,” directed by James Cameron, combined live performance, environmental effects, and 3D film. The show culminated in a climactic scene projected in 3D across three giant surround screens. Universal’s “Back to the Future” ride, later replaced by “The Simpsons” ride, is another ride that suspends the audience in front of an 85-foot IMAX dome screen. Recently, Six Flags amusement parks, in a partnership with Samsung and Oculus, have even introduced VR rollercoaster rides to their parks, where the riders wear Samsung Gear VR HMDs to experience an interactive simulated adventure while actually riding in a moving rollercoaster car.
Figure 1.17 Circle-Vision 360°
Computer Power and Tracking
Figure 1.18 A CAVE system
By the 1990s, workstation computers were becoming powerful enough to render computer graphics in real time, and researchers found new ways to track users’ movements to create interactive immersive experiences, utilizing both HMDs and immersive rooms. In 1977, Daniel Sandin, Tom DeFanti, and Rich Sayre designed the Sayre Glove, the first wired glove, which allowed hand-gesture control of data by measuring finger flexion. The CAVE (Cave Automatic Virtual Environment) is an immersive room environment invented in 1991 by Sandin at the Electronic Visualization Laboratory at the University of Illinois, and developed by Sandin, David Pape, and Carolina Cruz-Neira. The CAVE is a cubic room using rear-projection material for walls. Real-time computer graphics were projected on the walls, and sometimes also on the floor and ceiling. The user would wear 3D glasses to view the stereoscopic projections, and the glasses were tracked within the space so that the projected images could be manipulated to match the viewer’s perspective as one moved around. This gave the illusion of actually standing in a real environment, with objects appearing to be floating in the virtual room. The tracking information was also used to create three-dimensional audio through multiple surround speakers. A control wand allowed the user to virtually interact with objects and to navigate through larger environments. A more recent iteration, the CAVE2 developed in 2012, uses liquid crystal displays (LCDs) instead of projection.
Figure 1.19
One of the most advanced immersive VR rooms is the StarCAVE, installed at the University of California, San Diego, under the supervision of Tom DeFanti. It utilizes 17 individual screens to create a pentagon-shaped five-walled space, each wall being three screens high, and includes two screens of floor projection. Some 34 individual projectors provide high-resolution left and right stereoscopic video for the screens. The upper and lower wall screens tilt toward the viewer at 15° angles, and the entryway wall rolls into place to completely enclose the user for a true 360° immersive VR experience.
Further development of data gloves was carried out by Thomas Zimmerman at the company VPL Research. VPL was founded by Jaron Lanier, who popularized the use of the term virtual reality in the late 1980s. VPL’s data glove added the ability to track hand position in addition to finger flex. This led to the development of the DataSuit, a full-body tracking system for VR. VPL became the first company to sell a commercial head-tracking HMD, and the DataGlove was licensed to Mattel in 1989 and released as the PowerGlove accessory for the Nintendo Entertainment System home video game console.
Figure 1.20 Fakespace’s “BOOM” system
In 1988, Mark Bolas, Ian McDowall, and Eric Lorimer founded the company Fakespace to engineer VR hardware and software for government and scientific use. Fakespace’s innovations included their own version of a data glove, the Pinch Glove, and a VR imaging and tracking system called the BOOM (Binocular Omni-Orientation Monitor), which placed a high-resolution computer monitor inside a stereoscope on the end of a boom arm. The user would look through the wide-angle optics and view stereoscopic real-time computer-generated imagery (CGI), while the arm sensors tracked the stereoscope’s position through six axes of motion.
The “VR for Everyone” Revolution
A “do-it-yourself” handheld VR viewer, called the FOV2GO, leveraged the computing and graphics power of available smartphones. It was developed in 2012 at the University of Southern California (USC) by a team of students and their instructor, multimedia innovator and educator Perry Hoberman, in conjunction with USC’s Institute for Creative Technologies. The FOV2GO led directly to Google’s open-source design for their Cardboard VR viewer, an inexpensive, paper-based VR headset that is compatible with practically any Android or iOS phone.
As computing power has continued to increase and computing devices have become miniaturized, the feasibility of a self-contained handheld device with high-resolution displays and a full complement of positional and rotational tracking sensors has become a reality. A new wave of public and corporate interest has spawned a new period of heavy development in virtual reality. Companies like Sony and HTC have released their own proprietary virtual reality systems. Oculus Rift, a consumer head-mounted VR display based on inexpensive cellular phone parts and originally developed in 2012 through a crowdfunding campaign on Kickstarter by its developer, Palmer Luckey, was purchased by Facebook for billions of dollars. It seems that new immersive devices and experiences, from personal VR HMDs to dedicated VR theaters, gaming centers, and theme parks, are being announced every day. None of these new inventions and innovations would be happening if it were not for their predecessors.
Figure 1.21
Figure 1.22
In many ways, things have come full circle: Google’s Cardboard, as an inexpensive mass-produced VR viewer, is almost identical in purpose and design to its analog ancestor, Holmes’s patent-free stereoscope from over 150 years ago.
Shari Frilot, Founder/Chief Curator of New Frontier at Sundance Film Festival
The first time I experienced VR, it was Nonny de la Peña’s “Hunger in Los Angeles” in 2011. I saw this piece on a really expensive headset. It was like a $50,000 headset, in line with the development of VR since the ’90s. When I selected “Hunger” for New Frontier, the owners of the headset were like, “Well, you are not taking our $50,000 headset to Sundance and putting hundreds of people inside it!” So Nonny and Palmer Luckey, who was her intern at the time, worked together to patch up what would eventually become the first prototype of the Oculus Rift. They showed up at Sundance with a headset that was basically duct tape and a cell phone, and it worked fine.
That was the only VR that had been shown anywhere since the ’90s, really. To my knowledge, it was the first time in a context like a film festival. In 2014, I invited five VR works. That’s when we really started to get lots and lots of lines.
In 2015, because of that level of engagement, I made the decision to put together a show that was primarily VR work. The works were coming from a lot of different places. There was a performance artist from Chile, there was an American filmmaker, a music video; there were a lot of different artists and imaginations and storytellers, a lot of different kinds of practitioners creating work for this technology. And it was really after that year, 2015, that the scene exploded at the festival. It spiked our attendance and just general interest in the showcase.
In the meanwhile, Google is making Cardboard, Oculus Rift is coming up with the DK2, and The New York Times is giving away a million Cardboard viewers to their subscribers. There’s this momentum that was starting to happen, but everybody was asking, “Well, where’s the content?”
Live-Action VR Capture and Post-Production

When shooting 360° live action, it is vital to use the exact same cameras, lenses, and settings to facilitate the process of creating the final 360° sphere. This process is called “stitching.” During stitching, footage from all the cameras (or the various takes of the same camera in the case of the nodal technique) is put together to recreate a full representation of the surrounding environment. In the example in Figure 2.3, footage from seven cameras (five all around, one facing the sky, and one facing down) is stitched together.
Figure 2.1 360° capture
Note: The pictures above show a top view of the two different techniques but are not exactly accurate: the number of cameras varies, as well as the way they are put together. Most VR cameras also feature top and bottom cameras.
Once stitched, the final shot can be rendered in various formats and played either in a VR headset or on a flat screen with the use of a VR player.
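The most common flat rendering format is the equirectangular projection, which is why the delivery resolutions quoted later in this chapter (such as 4096x2048) have a 2:1 aspect ratio: longitude maps linearly to x and latitude to y. As a minimal sketch of that mapping (the conventions and function name here are this example’s assumptions, not the book’s):

```python
import math

def direction_to_equirect(dx, dy, dz, width, height):
    """Map a unit view direction to pixel coordinates in a 2:1
    equirectangular frame (x = longitude, y = latitude).
    Conventions assumed by this sketch: +z forward, +x right, +y up."""
    lon = math.atan2(dx, dz)                   # -pi..pi around the horizon
    lat = math.asin(max(-1.0, min(1.0, dy)))   # -pi/2..pi/2 toward zenith/nadir
    u = (lon / (2.0 * math.pi) + 0.5) * width
    v = (0.5 - lat / math.pi) * height
    return u, v

# Looking straight ahead lands in the center of a 4096x2048 frame:
print(direction_to_equirect(0.0, 0.0, 1.0, 4096, 2048))  # (2048.0, 1024.0)
```

Stitching software effectively inverts this mapping for every output pixel, looking up the corresponding ray in whichever camera (or cameras, in the overlap zones) saw it.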
Figure 2.2 Nodal technique
Figure 2.3 The stitching process
Note: Certain VR rigs do not have individual cameras facing up and down. In this case, the result is called cylindrical VR, as opposed to spherical VR.
Production Hardware
VR Cameras
A number of VR cameras are currently available to buy or rent, from amateur-level to the most high-end rigs. Technical specifications (specs) constantly change and can be confusing and misleading, and it is often difficult to determine which camera is best to use. There are many considerations when it comes to choosing the right VR camera for your project, including the size of the camera, the weight of the camera, the number of camera modules, and the quality of the camera modules. Is the final project mono (2D) or stereo (3D)? The workflow can sometimes influence the choice of camera, as stitching is currently the most difficult/expensive part of creating VR. Just like when shooting in 2D, you are not going to use the same camera for every situation.
First, let’s study the most important technical specs of VR cameras impacting the quality of the final product.
There is a need for camera sensors that produce quality images in low-light situations. That narrows the field even more. A lot of content is being shot without any artificial lighting, because this is a medium where we see everything and a lot of projects do not have the budget to paint the lights out later. It’s kind of limiting.
Frame Rate

In cinema and television, we traditionally shoot and project/screen content at 24 frames per second (fps). At this frame rate, camera movement and movement within the frame typically result in a good amount of motion blur. In 3D films, this frame rate and the resulting blur can cause a visual discomfort known as stroboscopy. In order to avoid this issue, the capture and projection frame rate must be higher, either 48 fps or 60 fps, to reduce the motion blur. This technique is called HFR, for high frame rate. Experienced directors have already chosen to shoot in HFR for their 3D films: Peter Jackson filmed the entire “The Hobbit” trilogy at 48 fps, and James Cameron is considering shooting the “Avatar” sequels at 60 fps or higher.

In VR, things are different because of the use of a VR headset to display the content. It is common industry knowledge that the higher the frame rate, the better, so that it matches the display’s refresh rate for heightened realism and immersion. Entry-level headsets like the Samsung Gear VR have a 60Hz refresh rate, while high-end headsets like the PlayStation VR can clock in at 120Hz, depending on the game/application in use. In the best-case scenario, the frame rate of the 360° capture matches the end device’s refresh rate, but very few VR cameras are capable of shooting at such a high frame rate. To future-proof your content, it is advisable to shoot at 60 fps and above if possible.
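To make the refresh-rate matching concrete, here is a small hypothetical helper, not from the book (the 90 Hz figure for the Oculus Rift CV1 is added for context). Playback is smooth when each captured frame can be displayed a whole number of times, i.e. when the capture rate divides evenly into the display’s refresh rate:

```python
# Hypothetical helper: which capture frame rates play back without
# judder on a given headset? A rate is "smooth" when the refresh rate
# is a whole multiple of it, so every frame is shown for equal time.
HEADSET_REFRESH_HZ = {
    "Samsung Gear VR": 60,
    "Oculus Rift CV1": 90,   # added for context; not cited in the text
    "PlayStation VR": 120,
}

def smooth_capture_rates(refresh_hz, candidates=(24, 25, 30, 48, 50, 60, 90, 120)):
    return [fps for fps in candidates if refresh_hz % fps == 0]

for name, hz in HEADSET_REFRESH_HZ.items():
    print(f"{name} ({hz} Hz): {smooth_capture_rates(hz)}")
# 60 fps divides evenly into both 60 Hz and 120 Hz displays, which is
# one reason the text recommends shooting at 60 fps and above.
```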
Figure 2.4 Final output resolution vs headset resolution
Resolution

Resolution takes on a different meaning when it comes to VR. Each camera that composes the VR rig has its own resolution, but once stitched together, the final resolution is not a simple addition of all the pixels. A good stitch requires overlap between the cameras, hence shooting with four high-definition (HD) cameras will not add up to a final 4K file, but most likely a 2K file. While 2K might sound good enough, once again, things are different in VR. What really matters is the number of pixels composing the field of view of the participant.
Let’s take the example of the Oculus Rift CV1:
Horizontal field of view: approx. 90° (1/4 of the full 360° sphere).
Resolution: 2160x1200, hence 1080x1200 per eye.
To achieve an acceptable total resolution, the horizontal resolution of the final output should be at least 4 x 1080 = 4320 pixels. Of course, this number is going to change rapidly as new HD headsets arrive on the market; 8K headsets have already been announced.
The industry standard as of 2017 is to deliver a final 4K output (4096x2048) and, if possible, a 6K output (6144x3072) for future-proofing the content.
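The same arithmetic can be wrapped in a small function (hypothetical names, mine rather than the book’s) to estimate the output width any headset demands:

```python
import math

def required_equirect_width(per_eye_h_pixels, h_fov_degrees):
    """Horizontal resolution a full 360° output needs so that the slice
    visible inside the headset matches the panel's pixel count."""
    return math.ceil(per_eye_h_pixels * 360.0 / h_fov_degrees)

# Oculus Rift CV1 figures from the text: 1080 horizontal pixels per eye
# spread over roughly 90 degrees, i.e. a quarter of the sphere.
print(required_equirect_width(1080, 90))   # 4320, the 4 x 1080 above
```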
Sensor Size
The main effect of sensor size is on depth of field. Large sensors favor shallow depth of field, but the wider the lens, the less sensor size impacts it. There is effectively no depth of field on a fisheye lens, which is the most commonly used type of lens in VR. That being said, large sensors have some undeniable benefits, like their increased ability to catch photons, and hence better low-light performance.
Dynamic Range
Dynamic range in photography describes the ratio between the maximum and minimum measurable light intensities (white and black, respectively). Dynamic range is measured in F-stops by determining the number of stops a given sensor can “see” with detail between the blacks and the whites. The F-stop scale is exponential: in photographic terms, a stop is simply a halving or doubling of light. So if you want to increase your exposure by one stop, you can either double the length of your exposure or double the size of your aperture. The reverse is true if you want to reduce your exposure by a stop.
Figure 2.5 Comparison of dynamic ranges
As a reference, human vision has a dynamic range of about 15 stops, while high-end professional cameras like the ARRI Alexa or RED Weapon measure around 14 stops.
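Because each stop doubles the light, a stop count converts directly into a contrast ratio of 2 raised to that count. A minimal illustration:

```python
def contrast_ratio(stops):
    """A stop is a doubling of light, so n stops of dynamic range span
    a 2**n ratio between the darkest and brightest measurable values."""
    return 2 ** stops

print(contrast_ratio(14))  # 16384, roughly a high-end cinema camera
print(contrast_ratio(15))  # 32768, roughly human vision per the text
```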
Good dynamic range is vital when capturing 360° video, as many outdoor situations will have large differences in brightness, and lighting VR in a traditional way is challenging since everything is in the frame.
Compression
Compression codecs are used to encode the signal coming from the sensor into a file that is smaller and optimized compared to recording the “raw” signal. The quality of compression codecs varies, and a poor codec can greatly limit the dynamic range of the camera by “crushing” details in the shadows or highlights. Considering the difficulty of controlling the lighting in VR, it is best to shoot in RAW format or with the lightest compression possible. This will facilitate the color grading, especially the camera-matching process during stitching. The downside of a RAW workflow is increased post-production complexity and cost due to the size and nature of the files, which must be transcoded into a more manageable format.
VR Rig Design
Since a VR camera is made up of a number of individual sensors/cameras, the quantity and positioning of these sensors are very important factors in the quality of the final stitched image. For example, a VR camera made of only two sensors will be much easier to stitch than one made of 14 sensors (in this case there is technically only one stitch line to adjust), but it will have a much lower resolution/optical quality.
In the example in Figure 2.6, the left rig design is much more difficult to stitch than the right design, but the resolution of the final output is four times better (if the sensor is the same in each situation). Also, the optical quality of the extreme fisheye lenses needed in the case of the right design is often degraded compared to longer lenses, especially on the periphery of the field of view, resulting in a blurry effect around the stitch line zone.
Designing VR cameras is truly an art. Many manufacturers try to improve the quality/complexity ratio and also to reduce the minimum acceptable distance to the camera.
Eve Cohen, Director of Photography, “The Visitor VR”
Usually the choice of the VR camera comes down to where the final delivery of the project is going to be, whether that’s in a certain kind of headset or whether that’s just 360° video. And then the budget, ultimately. Usually the creative decision for the camera is pushed onto the back burner behind delivery and budget. After delivery and budget are figured out, then creative elements come into play, depending on how close you need to be able to get the camera to a certain object, or what you have to be able to do with the camera, whether that’s going to be movement, or a POV [point of view]-style camera. It’s similar to choosing lenses, I guess, in standard cinematography. I kind of think of each different VR rig as a different lens.
I want to have as much control over the image as I can as a cinematographer, so that I can really make sure it’s the right image for the project. To me it comes down to how much control I am going to need to have now versus how much control I am going to rely on later in post-production.
I was just grading something recently from a GoPro rig and none of them are matching. Not even the same cameras match between them, and I have zero control over that. That’s not ideal, but it might have been the ideal camera for that shoot, knowing that I’d have to put in more time later to kind of help match those up.
So when it comes to choosing a camera and what things I would look for, I guess it would be the kind of setting that it’s in. So if it was a daytime exterior, it might be different than if it was a night-time interior with very little light. There isn’t one array out there that can really cover everything. I prefer to use multiple cameras within a project. Again, kind of looking at each camera as if it’s a lens and choosing the best lens for that shot.
Figure 2.6 Examples of VR camera configurations
Minimum Acceptable Distance to the Camera
Due to the design of VR cameras, there are blind-spot zones around the stitch lines. These zones lie between the cameras themselves and extend out to where the optical axes of two adjacent lenses cross. If objects or actors were to cross or stand in these zones, the stitching would appear broken.
Figure 2.7 shows how different configurations can reduce or reshape the blind-spot zone. The third example shows two pairs of cameras arranged on top of each other. This configuration is very efficient at reducing the blind-spot zone, but introduces a vertical disparity between the cameras which is very difficult to fix during stitching. This type of configuration is therefore not commonly used.
As a general rule, the bigger the overlap between the lenses, the easier the stitch. On most current VR cameras, it is recommended not to have anything come closer than 5 feet from the camera around the stitch line zones. The minimum distance does not apply when standing right in front of one lens: in that case, the minimum focus distance is the limiting factor.
Figure 2.7 Different VR camera configurations have different minimum acceptable distances to the camera
Figure 2.8 Stitch lines, blind-spot zones, and minimum focus distance
The general rule when working with a VR camera is to identify where the stitch lines are and avoid staging anything important there.
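The 5-foot rule of thumb follows from simple geometry. Below is a minimal 2-D sketch (a simplified model assumed for illustration, not the book’s method): cameras sit evenly spaced on a ring, each pointing radially outward, and two adjacent lenses only start to see the same scene point beyond where their facing field-of-view edge rays cross.

```python
import math

def min_stitch_distance(num_cameras, lens_hfov_deg, ring_radius_m):
    """Distance from the rig center at which two adjacent lenses begin
    to overlap (simplified 2-D model: cameras evenly spaced on a ring,
    each pointing radially outward). Returns None if they never overlap."""
    spacing = math.radians(360.0 / num_cameras)  # angle between adjacent cameras
    hfov = math.radians(lens_hfov_deg)
    if hfov <= spacing:
        return None  # edge rays diverge: a permanent blind spot
    half_spacing = spacing / 2.0
    # Camera at bearing +half_spacing; its inner FOV edge ray heads back
    # toward the stitch line (bearing 0) and crosses it at distance d.
    t = ring_radius_m * math.sin(half_spacing) / math.sin(hfov / 2.0 - half_spacing)
    d = ring_radius_m * math.cos(half_spacing) + t * math.cos(half_spacing - hfov / 2.0)
    return d

# Six cameras with 120° lenses on a 6 cm radius ring:
print(min_stitch_distance(6, 120, 0.06))  # ~0.10 m from the rig center
```

Note that this gives only the point where overlap begins; a clean blend needs generous overlap well past it, which is why practical guidance like the 5-foot rule is far more conservative.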
Nodal Technique
Another way of shooting 360° is to use a nodal head and rotate a single camera around its nodal point. The nodal point is the center of the lens’s entrance pupil, a virtual aperture within the lens. This specific point is also known as the “no-parallax point.” This allows for a perfect and easy stitch, as there is no parallax between the various takes.
As the various slices of the full 360° sphere are shot separately, the blocking and staging are limited to the frame of each take. It is not possible to have actors walk around the camera, for example. However, some VR filmmakers use a nodal head and pan the camera to follow moving elements around, and then re-map the footage onto the final 360° sphere. This requires some visual effects (VFX) skills, but is an interesting way of shooting VR.
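As a rough illustration of planning a nodal shoot, here is a hypothetical helper (the function and the default overlap value are assumptions, not from the book) estimating how many takes a given lens needs to cover the full circle with enough overlap for blending:

```python
import math

def nodal_takes(lens_hfov_deg, overlap_deg=20):
    """Number of pans of a single camera on a nodal head needed to
    cover 360°, keeping `overlap_deg` of shared image between takes."""
    effective = lens_hfov_deg - overlap_deg
    if effective <= 0:
        raise ValueError("overlap must be smaller than the lens FOV")
    return math.ceil(360.0 / effective)

print(nodal_takes(100))  # a 100° lens with 20° overlap -> 5 takes
```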
Alex Vegh, Second Unit Director and Visual Effects Supervisor
There’s a couple of different approaches to solving the parallax issue. One approach is optical, where you’re trying to get the cameras as nodal as humanly possible. The closer the cameras are to nodal, the closer the objects can be to camera. The other one is more of a software-derived solution, where depth information is acquired from cameras and used to create geometry onto which the photography is projected. Our solution was to try and use as few cameras as possible, to have as few stitch lines as possible.
In a four-camera solution, each camera has to have a field of view of 180° to get the correct amount of overlap. We looked at many different lenses, including lenses from a company that specializes in back-up cameras for cars, to a 6mm Nikon made in the 1970s. They’re huge – they look like a giant pizza plate. They have a 220° field of view, so they see behind themselves. We ended up with a Canon 8–15mm fisheye zoom, which was fairly sharp – not necessarily the sharpest, but quite even across the entire lens. You wouldn’t have softness on the edges and sharpness in the center. That was very important when blending the images from each camera.
Figure 2.9 The Mill VR camera rig designed for “Help”
We tried two-camera, three-camera, and four-camera solutions. We decided on a four-camera rig to protect for resolution. That’s the other side of the story: a lot of the pre-made consumer-grade VR solutions that existed at the time did not have the resolution or color depth. Also, rolling shutter presents a large issue. We decided to go with four 6K RED Dragons for image quality and resolution.
The cameras were mounted sideways due to how the lenses projected. When the lens would project the image onto the chip, the top and bottom would get cropped a little bit, but the sides would get the full image. So we rotated them sideways because we wanted more overlap. Not as much overlap was needed on the top because you’d be seeing the rig there anyway (and it would be painted out). So we adjusted the camera rigs to maximize the overlap. Our primary concern was when a character crosses frame. We knew those would be our critical moments, and we wanted to give as much overlap as humanly possible.
Excerpt from “fxpodcast #294: Making the 360 degree short HELP,” Mike Seymour, Sydney University, Fxguide.com
Figure 2.10 A nodal head for panoramic photography
3D stitching is particularly difficult, as two different types of parallax come into play: the horizontal parallax between the two cameras in a 3D pair (left eye, right eye), and the parallax between the various pairs composing the VR camera. This results in 3D artifacts that appear around the stitch lines when stitching all the left cameras together and then all the right cameras together. Fixing these artifacts is a tedious process that requires a VFX pass.
Figure 2.11 The complexity of stitching 3D pairs of cameras
More and more VR companies are opting for a different method to obtain stereoscopic 3D VR, with the use of optical flow algorithms. Optical flow is mathematically trickier than other stitching solutions but delivers better results. The algorithms compute the left–right eye stereo disparity between the cameras and synthesize new views separately for the left and right eyes. It is similar to the method used to create alternative points of view between two camera positions, or to do time interpolation in editing. Optical flow for 3D VR remains an open research area, since it presents a lot of artifacts caused by occlusions – one camera not being able to see what an adjacent camera can see.
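The flow-based view synthesis described above can be sketched with an off-the-shelf dense-flow implementation. The snippet below is only an illustration of the principle (OpenCV’s Farneback flow plus a simple backward warp), not any vendor’s stitching pipeline; as the text notes, occluded regions, which one camera sees and its neighbor does not, are exactly where a warp like this breaks down.

```python
import cv2
import numpy as np

def synthesize_intermediate_view(img_a, img_b, alpha):
    """Warp img_a a fraction `alpha` of the way toward img_b using dense
    optical flow (alpha=0 returns img_a, alpha=1 approximates img_b).
    Occluded regions produce artifacts, as the text describes."""
    gray_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)
    # Dense flow such that img_a(p) roughly matches img_b(p + flow(p)).
    flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                        0.5, 4, 21, 3, 5, 1.1, 0)
    h, w = gray_a.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Backward warp: sample img_a partway back along the flow vectors.
    map_x = (grid_x - alpha * flow[..., 0]).astype(np.float32)
    map_y = (grid_y - alpha * flow[..., 1]).astype(np.float32)
    return cv2.remap(img_a, map_x, map_y, cv2.INTER_LINEAR)
```

For stereo VR, views like this would be synthesized separately for the left-eye and right-eye outputs, which is where the occlusion artifacts mentioned above become hardest to hide.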
Steve Schklair, Founder, 3ality Technica and 3mersiv
To me, real VR needs stereo. People call this virtual reality, but a 2D 360° image is far from virtual reality. It’s just a flat image projected onto a sphere that you can view in 360°. You at least need good stereoscopic images to come closer to the idea of VR. I have seen a lot of content that purports to be in 3D, but most of it is not good 3D, so it’s subtractive more than additive. It’s distracting. For me there’s a delineation between 360° video, 2D, 3D, and actual room scale where you can move around in the picture. So, 60 frames per second and 3D is the minimum requirement for 360° video, in my opinion.
VR systems using optical flow technology feature sphere-shaped cameras where the sensors are arranged in a circle, instead of in left/right pairs.
Figure 2.12 360RIZE 3DPro and 360RIZE 360Orb