A comparison of LED panels for use in Virtual Production

Left to right: Real in-camera Moire, synthetic Moire rendered in Houdini, and a synthetic panel rendered with a scattered pixel distribution to eliminate Moire.

Very happy to announce the completion of a Technical Report based on my LED panel experiments conducted in November 2021.

Title: A comparison of LED panels for use in Virtual Production: Findings and recommendations.

Abstract: We evaluate four LED panels from three vendors for use in Virtual Production under experimental conditions. Tests were designed to assess Moire and scan-line artefacts, light reflectivity and grazing angle. We discuss LED manufacturing processes and their implications for visual artefacts, power consumption and colour fidelity, and conclude with results and recommendations for LED vendors, practitioners and procurement teams in Virtual Production.

Paper: Accessible here.

Supporting Videos: Moire and Scanline, Track Test, Light Bounce Test, Grazing Angle Test

Executive Summary:
  1. Colour reproduction is very important, and is influenced by manufacturing decisions (e.g. binning). It is well-understood practice to replace panels in a set with panels from the same batch to ensure consistency. It would behoove any procurement team to investigate samples and (ideally) measure the colour reproduction in order to ensure it is appropriate for a particular application.
  2. Manufacturing decisions greatly influence the brightness and cost of LED panels. A panel built with brighter LEDs can be driven at lower power to achieve the same output, so power draw could theoretically be cut significantly by choosing alternative manufacturing approaches or selecting LEDs for brightness.
  3. By modifying the layout of pixels on the display, or adjusting the sampling strategy of the camera, Moire artefacts could potentially be eliminated altogether, although the manufacturing and rendering complexity of doing this has not been considered in this paper (a simple illustrative sketch follows this list).
  4. By publishing both the reflectance distribution function of the back panel and the emission patterns of the LEDs, manufacturers would allow studio planners to accurately simulate the panel and lighting design of a virtual production studio, eliminating the need to publish the contrast ratio.
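To make the sampling argument in point 3 concrete, here is a minimal 1-D sketch of how Moire arises when a camera samples a regular emitter grid, and how scattering (jittering) the emitter positions suppresses it. This is a toy illustration only, not the Houdini setup used for the synthetic renders in the report, and all pitches, spot sizes and the "Moire strength" metric are invented for the example.

```python
# Toy 1-D Moire sketch: a regular row of LED emitters sampled by a camera with a
# slightly different pixel pitch produces a low-frequency beat (Moire); scattering
# the emitter positions breaks the regularity and suppresses the beat.
import numpy as np

rng = np.random.default_rng(42)

PANEL_MM    = 500.0   # width of the simulated panel strip (mm)
LED_PITCH   = 2.3     # emitter pitch (mm), loosely based on a 2.3mm pitch panel
CAM_PITCH   = 2.0     # effective sensor sampling pitch projected onto the panel (mm)
SUPERSAMPLE = 2000    # fine grid used to rasterise the emitters

def panel_profile(jitter_mm=0.0):
    """Finely sampled 1-D intensity profile of the LED strip."""
    x = np.linspace(0.0, PANEL_MM, SUPERSAMPLE)
    centres = np.arange(LED_PITCH / 2, PANEL_MM, LED_PITCH)
    centres = centres + rng.uniform(-jitter_mm, jitter_mm, centres.shape)
    # Each emitter is a narrow Gaussian spot (sigma ~ 0.3 mm).
    profile = np.exp(-0.5 * ((x[:, None] - centres[None, :]) / 0.3) ** 2).sum(axis=1)
    return x, profile

def camera_sample(x, profile):
    """Box-average the profile into camera pixels of CAM_PITCH mm."""
    edges = np.arange(0.0, PANEL_MM + CAM_PITCH, CAM_PITCH)
    idx = np.digitize(x, edges)
    return np.array([profile[idx == i].mean() for i in range(1, len(edges))])

def moire_strength(sampled):
    """Std-dev of a smoothed (low-pass) version of the sampled image row:
    a crude proxy for the visibility of low-frequency Moire banding."""
    kernel = np.ones(5) / 5.0
    lowpass = np.convolve(sampled - sampled.mean(), kernel, mode="valid")
    return lowpass.std()

for jitter in (0.0, LED_PITCH / 2):   # regular grid vs. fully scattered emitters
    x, prof = panel_profile(jitter_mm=jitter)
    print(f"jitter = {jitter:.2f} mm  ->  moire strength = "
          f"{moire_strength(camera_sample(x, prof)):.3f}")
```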
Acknowledgements:
This work would not have been possible without the assistance of multiple parties. Edward Sedgley and Rehan Zia assisted in running the experiments and made many recommendations on the configuration. Most of the findings and recommendations result from conversations with the LED vendors and manufacturers, in particular:
  • Deric Zhou and Jason Yang (AOTO)
  • Liam Winter and Blaine Johnstone (PSCo Ltd)
  • Christian Czimny (Absen), and
  • Luc Neyt, Victor Kortekaas and David Morris (ROE Visual).
This project would not have been possible without the continued support of Richard Marshall (PEPLED) who arranged the wall test, supported the installation and provided technical support and advice throughout.

 

UK-China networking funding success

I’m really happy to report that our colleague Associate Professor Xiaosong Yang has been funded to develop a network to explore links between the UK and China in Cloud Based Virtual Film Production. The project commences during February, and we look forward to working with industry and academic partners in developing this essential network.

Virtual Production defines a set of new production practices where practitioners work in and interact directly with a virtual set. VP reduces the need to move crews and equipment to location and enables remote working in Virtual Reality, reducing COVID-19 risks, the environmental footprint and production costs, and upending the traditional production process by blurring the lines between production departments. The logical next step in the evolution of the discipline is to transition production in film, TV and broadcast media from practices that are mainly facilities-bound to working environments that are cloud-based and remotely collaborative.

The global Virtual Production market is expected to reach £2.2b by 2026, growing at a rate of 14.3%. UK studios and core technology providers are global leaders in this space, leading Digital Catapult and ScreenSkills to identify this as a critical growth area. As of November 2020, there were 150 Virtual Production studios in the world, and 70 new stages have been designated for construction across the UK by 2023. The UK is well positioned to provide technology leadership in this area.

Virtual production has also shown great potential in China, with its record-high box-office income and first-class artificial intelligence innovations and industry (speakers at the Shanghai International Film Festival). It bridges western technologies and Chinese culture and stories. In the past year the Chinese film industry has made enormous investments in building virtual production studios and organised the Shanghai International Virtual Production Summit in June 2021, with over 10,000 attendees and key speakers from around the world. China Film Virtual Stage (CFVS), the first virtual production stage in China, recently opened a brand new 800 square metre green screen stage to offer Chinese clients the most advanced virtual set shooting services for both movies and episodic TV shows.

The aim of this project is to explore and analyse how research and innovation collaboration between the UK and China in the film production industry currently works, how it works best, and how it could work in future. The main focus is on investigating the challenges and potential solutions for the next-generation technology – Cloud-based Virtual Film Production.

XR Stories funding success: Towards Zero Carbon Production

Apologies for the lack of communications – we’ve been busy!

I’m delighted to report that our project proposal “Towards Zero Carbon Production: A system dynamics model to inform and monitor energy policy and planning scenarios in Virtual Production” has been approved by XR Stories, and will hopefully commence early in 2022. A synopsis of the proposed project follows.

Virtual Production (VP) defines a set of production practices where practitioners work in and interact with virtual environments and technologies. It reduces the need to move crews and equipment to location and enables remote working in VR (Virtual Reality), dramatically reducing COVID-19 risks, the environmental footprint and production costs, and upending the traditional production process. It has been described as “one of the greatest technical advances of recent years and holds the key to improving sustainability in the creative industries.”

Virtual Production can be seen as a dynamic system with changing facilities, equipment, operations and personnel requirements. This allows us to use a system dynamics methodology: a computer-aided approach to modelling for policy analysis and design. It was developed to manage complex and intractable problems in an easy-to-communicate way, and combines both qualitative and quantitative analysis, enabling the testing of interventions and policy adaptations.
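To give a flavour of what a system dynamics model looks like in practice, the sketch below Euler-integrates a single stock (cumulative emissions from a production slate) driven by two flows (location shoot-days and virtual shoot-days) with one policy lever (the rate at which shoot-days move onto the virtual stage). All quantities and the structure are invented for illustration; this is not the model the project will build.

```python
# Minimal stock-and-flow sketch in the spirit of system dynamics.
# Stock:  cumulative CO2e emitted by a production slate.
# Flows:  emissions from location shoot-days vs. virtual-production shoot-days.
# Policy: the fraction of shoot-days moved onto the virtual stage over time.
# All numbers below are hypothetical.

def simulate(years=5, dt=0.25, vp_adoption_per_year=0.15):
    shoot_days_per_year  = 200.0   # hypothetical shoot-days in the slate
    co2_per_location_day = 5.0     # tonnes CO2e per location shoot-day (assumed)
    co2_per_vp_day       = 1.5     # tonnes CO2e per virtual shoot-day (assumed)

    vp_fraction = 0.1              # share of days currently shot virtually
    cumulative_co2 = 0.0           # the stock being accumulated

    t = 0.0
    while t < years:
        # Flows (rates per year) given the current state of the system.
        vp_days       = shoot_days_per_year * vp_fraction
        location_days = shoot_days_per_year * (1.0 - vp_fraction)
        emission_rate = (location_days * co2_per_location_day
                         + vp_days * co2_per_vp_day)

        # Euler-integrate the stock, then apply the policy lever.
        cumulative_co2 += emission_rate * dt
        vp_fraction = min(1.0, vp_fraction + vp_adoption_per_year * dt)
        t += dt

    return cumulative_co2

# Compare a "no intervention" scenario with an accelerated-adoption policy.
print("baseline:", round(simulate(vp_adoption_per_year=0.0), 1), "t CO2e over 5 years")
print("policy  :", round(simulate(vp_adoption_per_year=0.15), 1), "t CO2e over 5 years")
```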

This project will employ a researcher, who, working with specialists at Bournemouth University, will gather evidence through surveys, interviews, and literature reviews, and model the results using a Systems Dynamics (SD) approach. The goal is to provide a framework for interrogating the quantitative and qualitative data of sustainable practices and policy in Virtual Production.

Really looking forward to working with the team at XR Stories and our industry partners to look into this essential problem.

Get in touch with me if you want to know more!

– Richard

Frame Remapping for Virtual Production

One of the benefits of working with a fantastic multi-disciplinary team is the range of creative ideas we come up with. One idea is to utilise the high frame rates of LED displays to alternate between multiple images, in the same way that stereo shutter glasses alternate between left- and right-eye images in CAVEs and the now-defunct 3D TVs. By synchronising the cameras with one or more of the alternating feeds, a couple of applications arise (a sketch of the resulting sub-frame schedule follows the list):

  1. As the virtual camera renders the scene from the viewpoint of each physical camera, the view of each camera is different. By alternating between the camera views it is possible to ensure each camera is synchronised with its individual virtual camera view, enabling multi-camera configurations. This is particularly relevant in broadcast scenarios.
  2. By alternating between the camera view and a chroma green (or any other colour you desire) it is possible to capture a green screen image to assist in post-production VFX.
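The scheduling idea is simple to write down: with an n-way alternation the display runs at n times the camera base rate, and each camera shutter is timed so it only sees its own slot. The helper below is a hypothetical illustration of that bookkeeping, not Brompton's API; the actual assignment is configured on the LED processor.

```python
# Sketch of the sub-frame schedule implied by applications (1) and (2) above.
# With an n-way alternation each display refresh period is divided into n slots,
# and each camera is shuttered so that it only "sees" its own slot.

def subframe_schedule(sources, base_rate_hz=50):
    """Return (display refresh rate, slot -> source) for alternating sources."""
    multiplier = len(sources)
    display_rate_hz = base_rate_hz * multiplier
    schedule = {slot: source for slot, source in enumerate(sources)}
    return display_rate_hz, schedule

# Application (1): two tracked cameras, each with its own virtual-camera view.
print(subframe_schedule(["camera_A_view", "camera_B_view"]))
# Application (2): one camera view alternated with a chroma-green frame.
print(subframe_schedule(["camera_view", "chroma_green"]))
```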

Like all good ideas, it turns out that this technology has already been integrated into the Brompton controller, and details are described here. The feature, snappily titled “frame remapping”, allows you to quickly manipulate the signal to the LED wall by applying a frame multiplier which inserts additional frames into the stream. For a 50Hz video signal, adding an additional frame doubles it to 100Hz, doubling the data throughput to the LED display, which needs to be taken into account when configuring the wall. The Brompton also accounts for shutter speed/angle, which improves the alignment of the frames in camera (we found that an additional shutter angle offset was required, which we arrived at by trial and error). Note that there can be only one HDMI feed to the SX40 or S8 processors, so in order to flip between multiple camera feeds they need to be sent as a combined feed (for example, concatenated side by side in x) with an offset applied to separate the signals.
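Two of the practical constraints above lend themselves to quick back-of-envelope checks: the extra sub-frames multiply the pixel data the processor has to push to the wall, and side-by-side packing is just an x-offset calculation. The numbers and helper functions below are hypothetical examples, not Brompton specifications.

```python
# (a) Raw data rate to the wall scales with the frame multiplier.
# (b) Feeds packed side by side in one combined signal are separated by x offsets.

def wall_data_rate_gbps(width, height, bits_per_pixel, base_rate_hz, multiplier):
    """Approximate raw video data rate after frame remapping (Gbit/s)."""
    return width * height * bits_per_pixel * base_rate_hz * multiplier / 1e9

def side_by_side_offsets(combined_width, num_feeds):
    """x offsets at which each packed feed starts inside the combined signal."""
    feed_width = combined_width // num_feeds
    return [i * feed_width for i in range(num_feeds)]

# A 1920x1080, 10-bit-per-channel (30 bpp) feed at 50 Hz, with and without remapping.
print(wall_data_rate_gbps(1920, 1080, 30, 50, 1), "Gbps")   # single stream
print(wall_data_rate_gbps(1920, 1080, 30, 50, 2), "Gbps")   # doubled by remapping
# Two camera views packed into one 3840-pixel-wide combined feed.
print(side_by_side_offsets(3840, 2))                        # -> [0, 1920]
```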

An example of application (2) being tested is shown below, with the colour image on the Atomos monitor (above) and the green on the camera viewer:

Frame Remapping

Alternating the video feed with chroma green using Brompton’s Frame Remapping technique. Note the LED panel in the display is the ROE Black Onyx 2, although 4 panels were tested. Hay-bale test image and this image courtesy of Rehan Zia.

A few words of caution:

  • This is not easy on the eye, and the Brompton software clearly displays a severe epilepsy warning.
  • The display is also potentially distracting to the actors involved; it may be difficult to elicit the best performance when performers are faced with constant flickering.
  • Also bear in mind the data rate issue mentioned above, as this may require compromises to your wall layout, colour depth or other parameters to ensure the display receives a complete signal.

We have yet to integrate this technology into a full workflow, and will not be doing so for some time as these LED panels were only on loan for testing. However, it does have great potential, and I look forward to seeing whether (and how) VFX studios make use of this feature!

LED Comparison Testing

Apologies for the brief hiatus – the start of term and return to campus has sapped all available bandwidth!

We are very excited this week to be hosting LED panels from multiple vendors in order to better understand the differences between them and to develop testing methodologies to consistently evaluate them for use in Virtual Production. Participating providers are Absen, AOTO and ROE (being installed tomorrow). Controllers courtesy of Brompton. A massive thank you to Richard Marshall (PEPLED) for bringing this together.

Watch this space as further updates of our experimental setup will be forthcoming.

Absen panel layout

The Absen panel layout is a 2.6mm pitch surface-mount device (SMD).

AOTO panel layout

The AOTO panel is 2.3mm pitch with a 4-in-1 layout.

Experimental setup day 1

The setup to date, with the AOTO and Absen panels respectively. Natty Brompton controller rack courtesy of Ed Sedgley.

One Sky, One Destiny – Our real journey in a Virtual Production

We were given the amazing opportunity to test the new Virtual Production stage at BU. From the very first time we saw the huge LED screen with our own eyes, we knew that something phenomenal – yet quite hard – was going to happen. To be honest, we had a limited timeframe not only to actually shoot on the sound stage, but also to prepare all the 3D scenes in Unreal Engine. This forced us to really push all our creative and technical skills to finish our production, but we feel we learnt and grew thanks to this experience.

On one hand, it is hard and takes more time than usual to prepare and optimise all the assets, combine them in the scene file and run it multiple times to check for bugs. On the other hand, working in a Virtual Production environment allowed us to creatively explore the three-dimensional space of our scene and move from the setups we developed from our storyboards to new perspectives we found by being truly immersed in the digital set.

Virtual Production made us more aware of technical and artistic choices throughout almost the entire project, especially choices that would otherwise have cost us time re-adjusting scenes. For instance, in one of our hero shots, our protagonist is stranded on a desert planet hoping for rescue. We decided to change the depth of field for that Unreal scene but, naively, it ruined the colour match between our real sand and the digital one. The chromatic difference was so evident that it forced us to spend a long time recalibrating the colours of the scene and re-matching the values of the digital sand.

Several of our artistic decisions were dictated by the technical limitations of the LED screen itself. We needed to recreate the same outdoor lighting environment using powerful 750 W lights which, unfortunately, produced reflections on the screens, breaking the suspension of disbelief. Similarly, the layout and position of the panels restricted some of the main camera angles we had in mind. For example, the closer the camera was to the screens, the more moire we could see on the camera monitor, while the further we moved away from the screens, the less interactive lighting affected the actor (see attached illustration).

Notwithstanding all the difficulties, the whole project was something you can really call a creative process, something which is not very common these days, when digital artists sometimes lose themselves in default settings and familiar visuals. A whole new toolset forces you to find innovative creative methods on your own. The lack of literature on the subject guarantees a free artistic approach, even though more research is required to overcome all the little and big problems.

We feel extremely lucky to have had this opportunity, and we really think this is going to be a very important tool in the future (and the present) of visual effects, even though it is important to understand not only the strengths but also the limitations of every asset. Watch our final video here:

We will release a “Behind the scenes” video as well, so please, stay tuned.

P.S. We truly wish to thank the people who allowed us to work on this project. A big thanks to Richard Southern, who proposed this initiative to us, together with the incredible Oleg Fryazinov for the Unreal support and Edward Sedgley for the set dressing, mocap setup, camera assistance, morale support…

Our gratitude also goes to Neil Goridge, who guided us in polishing the photography and cinematography of our short film.

And lastly, thanks to Anurag Gautam, our great friend who never tired of constantly holding up heavy equipment and our spirits 🙂

Cast & Crew

  • Paolo Mercogliano – Director, UE Artist, Pipeline and Colour Supervisor
  • Diana Pelino – Producer, Lighting Artist, Editor
  • Anna Semple – 3D Artist, Photography, Actress
  • Miguel Pozas – 3D Artist, Actor
  • Joseph Adams – Technical Director, UE Artist
  • Nathalie Puetzer – Compositor, Camera operator

This article was contributed by Paolo Mercogliano and Diana Pelino.

Behold the parallax!

This is with a custom Unreal 4.25 setup using Composure and Motion Analysis (note: not using nDisplay). I’ll share this once it’s neat and tidy. Still loads to do: genlock, proper measurements, frustum feathering, proper zoom/aperture capture, etc. I’ll also be looking into different interpolation methods, as the current one causes some warping (although this isn’t likely to be noticeable during a shoot).

 

Here is the classic Subway scene, shot using Unreal nDisplay with Motion Analysis for camera tracking. Still no genlock on this one, but the frustum match is good.

More coming soon!

– Richard

Student Previsualisation Project

What follows is a report from a student group using the Green Screen as a Previsualisation studio.

Real-Time Previs Experience Feedback

We used two different scenes to test the Previsualisation setup in the Green Screen studio. We built an exterior environment with bright sunlight and sharp shadows, and an interior environment with more diffuse lighting.

With the Unreal Engine setup, we were able to get live feedback of a “slap comp”, so that all cast and crew could see the live-action footage overlaid on the CG background, with additional CG foreground elements. This approach helped us make more informed choices about framing, composition, camera movement and lighting.
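For readers unfamiliar with the term, a slap comp is just a quick, unrefined layering of the available elements. The sketch below shows the layer order described above (keyed live action over the CG background, with CG foreground elements on top) as a plain premultiplied-style “over” in NumPy; it is only an illustration of the layering, as on set the comp was assembled live in Unreal Engine.

```python
# Minimal "slap comp" layering sketch: CG foreground over keyed live action
# over CG background. Tiny constant-colour images stand in for real frames.
import numpy as np

def over(fg_rgb, fg_alpha, bg_rgb):
    """'Over' composite of a foreground (with matte) onto a background."""
    a = fg_alpha[..., None]
    return fg_rgb * a + bg_rgb * (1.0 - a)

h, w = 4, 4                                        # tiny placeholder frames
cg_background = np.full((h, w, 3), 0.2)            # CG environment render
live_action   = np.full((h, w, 3), 0.7)            # camera plate
live_key      = np.zeros((h, w))                   # matte from the green-screen key
live_key[1:3, 1:3] = 1.0                           # pretend the actor fills the centre
cg_foreground = np.full((h, w, 3), 0.5)            # CG foreground element
fg_alpha      = np.zeros((h, w)); fg_alpha[0, :] = 1.0   # a prop along the top row

slap = over(live_action, live_key, cg_background)  # keyed plate over CG background
slap = over(cg_foreground, fg_alpha, slap)         # CG foreground elements on top
print(slap[1, 1], slap[0, 0])                      # actor pixel, foreground pixel
```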

We prepared a custom output system so that we could simultaneously record the aforementioned “slap comp”; the background and foreground CG elements from Unreal (used to produce a more refined comp for our work-in-progress edit); the clean plate from the camera for the traditional post-production process; and the camera tracking information from the motion capture system.

We were given a great setup, but during the process we faced many technical issues, small and large. For example, some of our props (e.g. sand) reflected IR light, which caused a loss of accuracy in the motion capture system. To fix this, we had to reconfigure the IR cameras on set and do a lot of clean-up on the motion data. We came across many small problems with acquiring the Unreal Engine render passes in real time, managing colour space from one system to the other, image stream resolution compatibility… After the first full day of work we ended up with only fifteen seconds of footage, and it was not good enough to be used anyway, but eventually we got (almost) everything working.

Some of the outdoor scene assets are from a previous group project, and great thanks for those go to Dermott Burns, Edward Barnes, Alexandra Kim Bui, Alexander Lechev, Ollie Nicholls. Finally, of course a huge thank you to the people who gave us this incredible opportunity and helped us with any minute they had. Thank you to Richard Southern, Oleg Fryazinov, Neil Goridge, Edward Sedgley.  

Cast & Crew

  • Paolo Mercogliano – Director, UE Technician, Technology and pipeline supervisor 
  • Diana Pelino – Producer, Motion Capture Technician, VFX supervisor 
  • Joseph Adams – UE Technician 
  • Nathalie Puetzer – Camera operator, Compositor 
  • Miguel Pozas – 3D Assets, Effects, and Rendering, Actor 
  • Anna Semple – 3D Assets, Effects, and Rendering, Actress 

Media

 

In this video we demonstrate the progression from the slap comp in real time on set, to a more refined comp using the UE4 assets (this can be done in a few minutes, to have a shot to use in a work-in-progress edit), to the final piece with a traditional offline render.