C++ Club Production

8 Apr 2026

I decided to describe my podcast production process. I'm cheap so I do everything myself. The following, combined with me having a $DAYJOB, should explain the long pauses between C++ Club episodes. It is also a reminder for myself, should I ever forget how to do it. Lastly, it may be helpful to someone producing their own podcast.

No generative AI is used in any part of the process. The only machine learning involved is transcribing the final audio, and I inspect and fix the resulting transcript manually afterwards.

Recording

During the meeting I record my 4K screen and rely on Zoom's video recording only as a backup, as its quality is not great. Same with audio: I record directly from my RØDE PodMic dynamic XLR microphone plugged into a TASCAM US-1x2 USB interface for better quality (44.1 kHz vs. the 32 kHz I get from Zoom). I also get local audio recordings from several other participants.

My camera is an Elgato Facecam (1080p60).

Editing

For editing C++ Club I use Apple Final Cut Pro X (FCPX) on a MacBook Pro with an M2 Max CPU and 64 GB RAM. (Boy am I glad I got those 64 GB of RAM back before AI consumed it all!)

Syncing the footage

After I get all the footage into FCPX and onto the magnetic timeline, I align the audio and video clips, then disable the backup clips but leave them in place, in case I discover any audio/video problems later during editing. Aligning the Zoom clips is easy, as they all have the same duration. The tricky part is aligning the local audio/video recordings with each other and with the Zoom timeline.

Once all clips are aligned I select all of them and create a compound clip that will be used for the cut.

The cut

The compound clip allows me to cut the footage non-destructively. I can remove undesirable audio segments, delete ums and errs, shorten long pauses and make the flow easier to listen to. (This reminds me I need to work on getting rid of filler words in my speech.)

If I discover something I need to fix in an individual source clip, like a bit of unexpected noise (a delivery driver ringing, for example, which did happen) or an audio overlap (where I unnecessarily mutter yes under my breath over the actual speaker), I can double-click the compound clip to expand it and get access to all the original clips, uncut as they were. This is a game changer for my workflow. I tried Adobe Premiere and DaVinci Resolve, but nothing compares to FCPX's magnetic timeline and compound clips.

Fixing audio

I usually do audio cleanup at the beginning, before the cut, but I often go back to tweak audio in the original clips as I edit, to try and make it sound better. Effects are applied non-destructively, directly onto the original clips. I let FCPX analyse and fix audio as the first step, then reduce or remove any noise, apply a bit of compression and some light channel EQ to enhance the voice, and add Apple's DeEsser 2 effect (got to work on those sibilants, sigh).

I also adjust levels so that integrated loudness (LUFS-I) is around -19 LUFS, the usual target for a mono podcast.
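Because LUFS is a decibel-relative scale, working out how much gain to apply is simple subtraction: applying a flat gain of (target − measured) dB shifts integrated loudness by the same amount. A minimal sketch (the measured value is illustrative):

```python
# Sketch: computing the flat gain offset needed to reach a loudness target.
# LUFS is a logarithmic (dB-relative) scale, so a gain of (target - measured)
# dB moves integrated loudness by exactly that amount.

TARGET_LUFS = -19.0  # common target for a mono podcast

def gain_to_target(measured_lufs: float, target_lufs: float = TARGET_LUFS) -> float:
    """Return the gain in dB to apply so measured loudness hits the target."""
    return target_lufs - measured_lufs

print(gain_to_target(-16.2))  # mix is too loud: negative gain, about -2.8 dB
print(gain_to_target(-22.5))  # mix is too quiet: positive gain, 3.5 dB
```

In practice a loudness meter (like the MultiMeter plugin mentioned below, or any LUFS meter) gives you the measured value, and a gain plugin applies the offset.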

Title

I put a simple title card on top of the compound clip and attach a fade effect to it.

Top-level clip

After finishing the cut and adding the title (and any other video overlays or effects, if necessary), I select the compound clip and all the additions and create a top-level compound clip of the correct duration.

As the final audio editing step, I add the iZotope RX 10 De-click effect to the final soundtrack to get rid of those annoying mouth clicks. Finally, I add the Gain and MultiMeter plugins to bring the final loudness to around -19 LUFS, and convert the audio to mono.

Chapters

Watching the top-level compound clip from start to finish, I add chapter markers to it, setting thumbnails that get converted to chapter images. These will be exported for both YouTube and the podcast.

Transcript

When I'm sure the cut is done and nothing in the clip is going to change, I use FCPX's Transcribe to Captions function. I don't trust YouTube or podcast apps to do automatic transcription for me: they mangle names and terminology. FCPX's transcription model runs locally and produces good results, but I still go through the generated text and fix names and technical terms ("see cash" becomes ccache, and so on).

Export

I export the entire top-level clip for YouTube at 1080p (as H.265 MP4/MOV), including audio (AAC at 192 kbps) and the transcript (VTT).

In addition, I export the audio separately as AAC with chapters. Unfortunately, exporting MP3 with chapters doesn't seem to work in FCPX, so I use Marco Arment's Forecast to import the resulting AAC. It reads the chapters and chapter images from the AAC file and exports everything to an MP3 file. I can also replace the chapter images generated by FCPX with something more topical, and set chapter URLs where applicable.
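As an aside, chapters can also be muxed into an M4A outside FCPX: ffmpeg accepts a plain-text FFMETADATA file describing chapter start and end times. A hedged sketch (file names and times are made up, and this doesn't replace Forecast's chapter-image handling):

```ini
;FFMETADATA1
; Chapter times are in TIMEBASE units (here, milliseconds).
[CHAPTER]
TIMEBASE=1/1000
START=0
END=95000
title=Intro
[CHAPTER]
TIMEBASE=1/1000
START=95000
END=3725000
title=Main discussion
; Remux into the exported AAC/M4A without re-encoding:
;   ffmpeg -i episode.m4a -i chapters.txt -map_metadata 1 -codec copy out.m4a
```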

To show chapters on YouTube, I export the FCPX project as XML and load it into the Creator's Best Friend app. Weird choice of name and icon, but it works perfectly, producing chapter marks in YouTube's format that I copy and append to the video description.
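That YouTube format is nothing more than "timestamp title" lines in the description, starting at 0:00. A small sketch of producing it (chapter titles and times are invented for illustration):

```python
# Sketch: formatting chapter start times (in seconds) as the plain-text
# "timestamp title" lines YouTube expects in a video description.
# The chapter data below is made up for illustration.

def fmt_ts(seconds: int) -> str:
    """Format seconds as M:SS, or H:MM:SS for chapters past the hour mark."""
    h, rem = divmod(seconds, 3600)
    m, s = divmod(rem, 60)
    return f"{h}:{m:02d}:{s:02d}" if h else f"{m}:{s:02d}"

def youtube_chapters(chapters: list[tuple[int, str]]) -> str:
    # YouTube requires the first chapter to start at 0:00.
    return "\n".join(f"{fmt_ts(start)} {title}" for start, title in chapters)

print(youtube_chapters([(0, "Intro"), (95, "News"), (3725, "Q&A")]))
# 0:00 Intro
# 1:35 News
# 1:02:05 Q&A
```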

YouTube allows me to upload a transcript file (VTT) and uses it instead of its own autogenerated transcript for subtitles.

RedCircle, my podcast host, lets me specify the URL of a VTT file as an Apple Podcasts-specific property, but unfortunately (as of iOS 26.4) Apple Podcasts ignores it and creates its own transcript, mangling people's names and butchering technical terminology.

Backup

I back up the completed podcast's sources, the FCPX project, and all exported files to tape. I use an old Quantum LTO-6 drive (a rebadged IBM) connected to an ATTO H644 SAS interface card installed in a StarTech Thunderbolt enclosure. A single LTO-6 cartridge holds 2.5 TB of uncompressed data (6.25 TB compressed) and has a shelf life of 30 years, which is plenty for my needs. I use the excellent Canister app by Hedge for LTO backups.
