
In-Depth with the C500 MKII: Part 4

Welcome to the final installment of the “In-Depth with the C500 MKII” series. Now that we’ve gone over the outer and inner workings of the camera, it’s time to see what to do with all this footage in post.


Post 4: Post-Production


Working with Cinema RAW Light

Canon’s Cinema RAW Light works a little differently from other RAW formats you might be used to working with. Depending on the software you are using, you will have more or fewer adjustment tools at your disposal.


If you are working with Cinema RAW Light clips directly in an NLE like Final Cut Pro, Adobe Premiere or Avid, these clips will handle exactly the same as an XF-AVC clip. You will not have control of ISO, White Balance or Gamma/Color Space like you would with other cameras’ RAW media. Keep in mind that while you cannot control individual setting elements, you are still working with a rich 12-bit 2.1 Gbps file in 5.9K Cinema RAW Light, as opposed to 10-bit 410 Mbps 4K files in XF-AVC. When working with C500 MKII Cinema RAW Light files in Adobe Premiere, make sure you are up to date with CC 2020 version 14.
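To put those bit rates in card terms, here is a quick back-of-the-envelope estimate (my own arithmetic, assuming decimal gigabytes and a constant bit rate, ignoring filesystem overhead, so treat these as ballpark figures rather than Canon's official record times):

```python
# Rough record-time estimate for a 512 GB card at the two bit rates above.
# Assumes decimal gigabytes and a constant data rate; real-world overhead
# is ignored, so these are ballpark figures only.

def record_minutes(card_gb: float, rate_mbps: float) -> float:
    """Approximate record time in minutes at a constant bit rate (Mb/s)."""
    card_bits = card_gb * 1e9 * 8
    return card_bits / (rate_mbps * 1e6) / 60

print(f"5.9K RAW (2.1 Gbps):  ~{record_minutes(512, 2100):.0f} min")  # ~33 min per card
print(f"4K XF-AVC (410 Mbps): ~{record_minutes(512, 410):.0f} min")   # ~167 min per card
```

In other words, the richer RAW file costs you roughly five times the card space of 4K XF-AVC, which matters later when we get to offload times.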


If you are working with Cinema RAW Light clips in DaVinci Resolve, you have a few options available to you. Once you load Cinema RAW Light clips into your timeline, you are able to change a few settings. Click on the Camera RAW tab to access them. Change the Decode Using option from Project to Clip to gain the ability to adjust White Balance, Tint, Exposure, Sharpness and a variety of other settings. The C500 MKII RAW files loaded with no issues in my version of Resolve Studio 15.

DaVinci Resolve Camera RAW

The other option you have is to tweak your Cinema RAW Light media in Canon’s Cinema RAW Development application. Here, you can change ALL of your settings and export the clips out in a variety of codecs for use in other NLEs or Color Correction applications. You can also simultaneously create proxy files if you wish.


If you have used Cinema RAW Development in the past for the C700, you will also need to update to version 2.4.





Working with XF-AVC

The C500 MKII’s XF-AVC files work exactly the same as those that come out of the C300 MKII. These can be edited natively in all NLEs.


A useful tool I have found for working with Canon XF-AVC files is the XF Utility application. Since a clip’s metadata cannot be read by any NLE, the utility is helpful for filling out camera reports, among a variety of other uses.


You can view all the metadata of a clip, such as Clip Name, Duration, Start & End TC, Resolution, Color Space, Gamma, White Balance and Lens Squeeze. You can also easily see the number of clips on a card, as well as their individual file sizes and the total size of the card.



You can view the clip with a LUT applied by pressing the LUT button on the bottom center of the viewer. As far as I can tell, this will only toggle a 709 LUT on and off, regardless of the LUT used in camera.






If you shot an anamorphic clip and designated the Squeeze Ratio to be saved in the metadata, the clip will automatically load de-squeezed. If you didn’t save the ratio in camera, you can press the Desqueeze Ratio button to set it.








Lastly, you can also pull frame grabs from a clip. Within the preferences, you can designate the format and quality of the still. Pressing the Grab Still Frame button saves that frame as a still image that can be sent to production for reference.




There are many other uses for Canon XF Utility in production. You can back up your footage using the application, zoom in on a clip to check focus, monitor isolated audio channels and more. I hope you explore the application to see how and where it can fit into your production and post-production workflow.





LUTs

Canon has re-published the LUTs that ship internally with the camera. Unlike the ones shipped with the C300 MKII, I actually find these LUTs translate well. They include a wide variety that suit practically any workflow and can be downloaded here. You can, of course, also create or download your own and load them into the camera. This is helpful if you are able to create a look for your project before you shoot: use those LUTs to monitor on set, bake them into proxies for production and post to edit from, and then pass the same LUT to the colorist to work from. This streamlines the process from beginning to end.


To import these LUTs into Adobe Premiere, navigate to the Applications folder on your computer and find Adobe Premiere. Right-click the application and choose Show Package Contents. This opens an additional folder where the Lumetri LUTs are stored. Drag and drop whichever LUTs you want access to into the Technical folder, restart Premiere and they will appear in the Lumetri Color Input LUT dropdown list.




To import LUTs into DaVinci Resolve, click the Settings icon in the lower right of the screen. Select Color Management and scroll down to Lookup Tables. First choose Open LUT Folder, where you can drag and drop whichever LUTs you want access to within Resolve. Then click Update Lists. You will not need to restart Resolve; your LUTs will appear in the 3D LUT option within the clip node.





Proxies

As we covered before, the C500 MKII is only capable of creating in-camera proxies when shooting in RAW. This is very disappointing, as I loved this feature on the C300 MKII with XF-AVC clips and used it on every production. Unless you are exclusively shooting in RAW, I would forgo recording in-camera proxies and create your own in DaVinci Resolve. Here is the process to do so.


#1: Bring all of your clips into the Media Pool and create a timeline of the clips. I have set my timeline settings to 2K 2048×1080, since this is what I want my proxies to be.


#2: Under the Color tab, I want to apply a REC709 LUT to all the clips. Select all the clips, right-click and under 3D LUT, apply the LUT you want to use.






#3: Go to the Deliver tab. Browse to where you want your proxy files to save and make sure Individual Clips is checked under Render. Under the Video tab, select your preferred format and confirm that your resolution is correct. If you did not set your timeline resolution to 2K in the beginning, you can change the output here.















Under the Audio tab, make sure Export Audio is checked so that your proxies will have their attached audio. You can also change this output if you’d like, but it’s probably not necessary.












Under the File tab, choose Source Name so that the proxies have the same name as the original, which you will need when re-linking your media later on.





















#4: Once all of your settings are correct, press the Add to Render Queue button and then press Start Render in the Render Queue. Your files will churn away and you’ll be ready to go.
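Because re-linking later depends on the proxies carrying the same Source Name as the originals, I like the idea of a quick sanity check that every clip actually got a proxy. Here is a hypothetical helper (the flat directory layout and matching by filename stem are my assumptions; adjust for your own folder structure):

```python
from pathlib import Path

def missing_proxies(originals_dir: str, proxies_dir: str) -> set:
    """Return filename stems of original clips that have no matching proxy.

    Matches by stem, so A001C001.MXF pairs with A001C001.mov regardless
    of the proxy container format chosen in the Deliver settings.
    """
    originals = {p.stem for p in Path(originals_dir).iterdir() if p.is_file()}
    proxies = {p.stem for p in Path(proxies_dir).iterdir() if p.is_file()}
    return originals - proxies
```

An empty set means every original has a matching proxy name; anything returned is a clip the render missed or renamed.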





If you are working with anamorphic files and creating proxies, refer to the Anamorphic De-Squeeze section below, since this process is slightly different.




Anamorphic De-Squeeze

Since the C500 MKII does not internally de-squeeze and record anamorphic footage like the Alexa, you will need to take a few steps to prep your footage. There are a few options for doing so, depending on your workflow. You can use DaVinci Resolve, Adobe Premiere or Canon’s Cinema RAW Development application. I am not an Avid or Final Cut user, but I believe the steps to work with your anamorphic footage will be very similar to the steps performed with Adobe Premiere.



DaVinci Resolve

#1: Once you have brought your anamorphic clips into the Media Pool, select them all, right-click and choose Clip Attributes. This will bring up a dialogue box where you will find Pixel Aspect Ratio. Change this to CinemaScope.









#2: Next, click the Settings icon in the lower right to bring up your Timeline settings. Depending on your workflow, there are a variety of options to choose from. In this example, I am working with 4096×1716 DCI Scope 2.39.



With your Timeline settings correct, highlight all the clips you want to use from the Media Pool, right-click and choose Create New Timeline Using Selected Clips.









#3: Because we are shooting 2x anamorphic on a 17:9 sensor, your clip will be much wider than a 2.39:1 aspect ratio. We will need to scale the footage to fit in the proper aspect ratio.



Click on your first clip and, under Transform, zoom the clip to 1.590. This will fill the frame with the clip.
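That 1.590 figure isn’t magic; it falls out of the geometry. Assuming the 5.9K frame is 5952×3140 (Canon’s published sensor mode) and that Resolve initially fits the de-squeezed image to the timeline width (my assumption about the fit behavior), the arithmetic looks like this:

```python
# Where the ~1.59 zoom comes from. Assumes a 5952x3140 5.9K frame and that
# Resolve initially fits the 2x de-squeezed image to the timeline width.
src_w, src_h = 5952, 3140      # 5.9K RAW frame
desq_w = src_w * 2             # 11904 px wide after 2x de-squeeze
tl_w, tl_h = 4096, 1716        # DCI Scope 2.39 timeline

fit_scale = tl_w / desq_w      # scale from fitting the wide image to 4096
fitted_h = src_h * fit_scale   # ~1080 px: letterboxed height after the fit
zoom = tl_h / fitted_h         # zoom needed to fill the 1716 px height

print(round(zoom, 2))          # -> 1.59
```

That matches the 1.590 set in the Transform controls, give or take rounding.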

















#4: Now you simply need to copy this scaling information to all the other clips in the timeline. Right-click the first clip that has been properly scaled and choose Copy. Then select the rest of the clips in the timeline, right-click and choose Paste Attributes. Be sure to check the box that says Zoom and click Apply.
















You will now see that all of your footage has been properly scaled in the timeline and you are ready to go.







Once these steps are done, you can begin editing, coloring, etc. If you are working on a longer project where you want access to the properly de-squeezed clips on their own, or are editing in another NLE and just using Resolve to confirm your clips, you can now export this timeline to stand-alone files or proxy files with a baked-in LUT. To do this, refer to the Proxy workflow outlined previously.



Adobe Premiere

#1: Once your anamorphic clips have been imported into the project, select them all, right-click and choose Modify > Interpret Footage. This will bring up a dialogue box where you will find Pixel Aspect Ratio. Change this to Anamorphic 2:1 (2.0).














#2: With your newly converted clips, select them all and drag them into the empty Timeline Panel to create a new timeline with the clips’ settings. Your footage will be very wide, as this is what 2x de-squeeze looks like when you shoot on a 17:9 sensor.

Right-click on the newly created timeline in the Project window and choose Sequence Settings. Change the Frame Size to 4096×1716 and change the Pixel Aspect Ratio to Square. Click OK.



Settings for 5.9K clips

#3: You will now need to scale your clips down to fit. In this example, with a 4K Anamorphic timeline, I will need to scale 5.9K footage to 55% and 4K footage to 80%.




Settings for 4K clips

You can either change each clip independently, or right-click a corrected clip and choose Copy, then select all the other clips in the timeline with the same resolution, right-click and choose Paste Attributes.




In the Paste Attributes dialogue, make sure only Motion is checked so you don’t accidentally paste other settings.
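The 55% and 80% figures also come straight from the frame heights. Premiere’s Scale is relative to the source frame, so fitting each clip’s height to the 1716-pixel timeline sets the percentage (the frame sizes are my assumption from the camera’s 5952×3140 and 4096×2160 modes):

```python
# Why ~55% and ~80%: scale each source height to fit the 1716 px timeline.
# Frame heights assume 5.9K = 5952x3140 and 4K DCI = 4096x2160.
timeline_h = 1716

for label, src_h in [("5.9K", 3140), ("4K", 2160)]:
    pct = timeline_h / src_h * 100
    print(f"{label}: {pct:.1f}%")   # 5.9K -> 54.6%, 4K -> 79.4%
```

Rounding up to 55% and 80% overshoots by a hair, which crops a few pixels top and bottom instead of leaving a thin letterbox line.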













You are now ready to go. This solution is great if you want to immediately start editing your project in the timeline. If you want to export these clips out to use as stand-alone clips, I would recommend using DaVinci Resolve to do so.



Cinema RAW Development

This is actually an incredibly useful tool for working with anamorphic clips out of the camera. It can help automate and simplify the conversion process with fewer steps than DaVinci Resolve or Adobe Premiere. The only thing to note is that this application only works with RAW clips, not XF-AVC clips.


#1: Navigate to find your footage in the upper left of the screen. Once you click on one of the anamorphic clips, you will see it load into the viewer window. In this window, you should see the Squeeze drop-down list automatically show x2.0. This is because you set the camera to record this metadata.




#2: Click the Settings icon in the Export window. This will open a dialogue box to set all of your preferences.







In the first File Settings tab, you can designate the Clip Name, Destination and Files to be exported (Full-Quality, Proxy or both). In this example, I will choose both.









In the second tab – Output Type (Full-Quality Clips), you can set the preferences for Output Settings (with a full array of options), Resolution Settings and Color Space / Gamma. Under Resolution Settings, this is where the magic happens.


Check the box for Start with 4:3 Image, leave Video Resizing at None (Original Size), select your De-Squeeze Ratio (x2.0) and choose 2.39:1 for Crop Aspect Ratio. This will create a cropped 2.39:1 clip that has been properly de-squeezed. This way, you can load these clips directly into your NLE and get straight to work.

In the third tab – Output Type (Proxy Clips), you can set Output Settings, Resolution Settings and Color Space / Gamma.


Under Resolution Settings, choose Full-Quality Files’ Resolution, as this will carry over the anamorphic de-squeeze ratio and 2.39:1 aspect ratio crop we did before. You can then decide to downscale the footage to a smaller resolution or keep it the same.

Lastly, you have the ability to keep the proxy clip in its native Color Space / Gamma or change it to REC709. Sadly, you cannot upload your own LUT in this process.


#3: Once all of your Output Settings are set, simply select all of your clips, click the Add to Export Queue button and select Export in the Export Queue window. These clips will crank along and you will end up with fully converted Full-Quality and Proxy files at the end.






Download Times


I decided to test two different offload methods onto two different hard drives. While there are definitely faster systems out there, I wanted to test the practical situations I most often find myself in. The industry standard for offloading media is software like ShotPut Pro that performs checksum verification. But I won’t lie: I also just drag and drop in macOS Finder to speed up the process. When you have a ton of cards at the end of the day, sometimes this is what you have to do. (I will note that whenever I use Finder, I do my due diligence and confirm file sizes and the number of files, and load the offloaded clips in Canon XF Utility to confirm I have picture and audio with no weird dropouts.)
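The read-back verification is the whole point of an offload tool. As a minimal sketch of the idea, not ShotPut Pro’s actual implementation, and using SHA-256 from the standard library in place of xxHash-64 (which needs a third-party package): copy the file, then re-read both sides and compare digests.

```python
import hashlib
import shutil
from pathlib import Path

def file_digest(path, chunk_size=1024 * 1024):
    """Hash a file in chunks so multi-gigabyte clips never sit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            h.update(block)
    return h.hexdigest()

def verified_copy(src, dst):
    """Copy src to dst, then re-read both files and compare checksums."""
    dst = Path(dst)
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)
    return file_digest(src) == file_digest(dst)
```

A Finder drag and drop skips exactly that second read pass over both copies, which is where most of the time difference comes from.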

I tested each offload method using a Glyph 10TB USB-C 7200rpm drive and a LaCie Rugged 4TB USB-C 5400rpm drive. Oftentimes, production sends you out with the LaCie shuttle drives because you can carry a ton of space in a small form factor. The fact that these are bus-powered is also key. But they are definitely slower. The Glyph drives are faster and come in higher capacities, but require AC power and more space in the suitcase.


If you look at the table below, you can see that regardless of the drive, ShotPut Pro with XXHash-64 checksum verification is about 2.4x slower than a simple Finder drag & drop, and the LaCie 5400rpm drive is about 1.6x slower than the Glyph 7200rpm drive. That’s something to keep in mind when planning your shoot and estimating how much media you will generate.


Downloading a full 512GB card is definitely a time suck, especially if you are using ShotPut Pro and a LaCie Rugged drive in the field. Be sure to keep this in mind when you decide to shoot everything in 5.9K RAW!

Offload Method                                          Glyph 10TB USB-C 7200rpm    LaCie Rugged 4TB USB-C 5400rpm
ShotPut Pro 6 w/ XXHash-64 Checksum                     1 hour, 44 minutes          2 hours, 51 minutes
macOS Finder Drag & Drop w/ no Checksum Verification    43 minutes                  1 hour, 9 minutes
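The multipliers quoted above are easy to verify from the table (times converted to minutes; the throughput estimate assumes the full 512GB card):

```python
# Offload times from the table, in minutes.
times = {
    ("checksum", "glyph"): 104,   # 1 h 44 min
    ("checksum", "lacie"): 171,   # 2 h 51 min
    ("finder",   "glyph"): 43,
    ("finder",   "lacie"): 69,    # 1 h 9 min
}

checksum_penalty = times[("checksum", "glyph")] / times[("finder", "glyph")]
drive_penalty = times[("checksum", "lacie")] / times[("checksum", "glyph")]
finder_throughput = 512_000 / (times[("finder", "glyph")] * 60)  # MB/s, 512 GB card

print(f"checksum vs drag & drop: {checksum_penalty:.1f}x slower")  # 2.4x
print(f"5400rpm vs 7200rpm:      {drive_penalty:.1f}x slower")     # 1.6x
print(f"Glyph drag & drop:       ~{finder_throughput:.0f} MB/s")   # ~198 MB/s
```

That effective ~200 MB/s drag-and-drop figure is a useful planning number: double it for checksum offloads, and add roughly 60% again for the slower 5400rpm drive.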


Conclusion

The files coming out of the C500 MKII have a lot going for them. They are deep, rich files with plenty of color depth that give you tons of room to work in post, and every application I tested read and played the clips with ease. Because these files are so rich, they are also large, especially when dealing with 5.9K RAW. This is definitely something to keep in mind, since it affects the entire workflow: you have less record time on the cards, and downloading takes a while, especially if you are doing a proper checksum verification.


It would certainly be nice to have a simpler anamorphic workflow, but with the process I discussed, it’s manageable in post.


Lastly, the biggest bummer for me is that none of the metadata of the clip is readable by anything except Canon’s proprietary software – XF Utility and Cinema RAW Development. It would be nice to have access to the Log and Gamma settings in an NLE or Resolve, but I suppose if we’ve made it this long without that feature, we can make it a little longer…


That concludes the “In-Depth with the C500 MKII” series. I hope this has been as informative for you as it has been for me. If you haven’t checked out the other posts in the series, I encourage you to go back and explore Physical and External Features, Internal Specifications and Footage Analysis.

