OK, so I trained the model with extracted faces, one set (A) from the source video and the other set (B) for the swap. Training took much longer than expected (more than 20 hours), so I had to stop it early; I tried again and had to stop within a few hours. Question 1: can you use the data from a training session that was cancelled early, or do you need a complete training run, which can take days?

So I used the prematurely ended training data to carry on with my project. I set the path to my input directory and the path to my output directory, specified my model directory (which contains the training log from the interrupted session), and left everything at the default settings, except that I changed the Writer to FFmpeg and ticked the checkbox next to Swap Model. The converted video is just the original video with a faded, washed-out face, and the second attempt came out worse. Unfortunately there is no detailed explanation of the conversion process on the forum, unlike the other steps such as extraction and training, which are clearly explained in their own chapters. It also confuses me what happened to my B folder, the one with the photos of the face I want swapped onto the original: I never saw a file path for it during conversion, so I can only assume the program picks that up from the model directory. Having said all that, my second question is: what exactly am I doing wrong?

As long as the training data isn't corrupted (e.g. by a computer crash while the model was saving), you are fine. Clicking Stop is the correct way to end training, and picking it up again later is the recommended way to spread training over several sessions. We are working on a full conversion guide, but conversion is a fairly straightforward process compared to the other steps. I recommend starting with the preview tool, making sure the results look good there, and then reading the tooltips in the Convert section. The process is quite simple and I think you will be able to work it out (and you can ask questions if you run into problems). You don't need the B faces at all when converting. You only need to provide the frames you want the face swapped onto, the alignments for that source video, and the trained model. The model swaps from "memory" (that is what training is for), so it never needs to see the person you want to swap in at conversion time.

Thank you for your answer. So the only place I could have gone wrong is that Swap Model checkbox, since setting it the wrong way can cause problems further along. I am still having trouble with the conversion process, so my next step is to go back to the beginning, try to do everything by the book, and read the forum posts to work out what the problem is. In the meantime, I look forward to the new conversion guide when it becomes available. Thanks again.
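To make the "stopping and resuming is fine" point concrete, here is a minimal sketch of what resuming looks like from the command line (the GUI does the same thing underneath). The paths are made-up examples and the exact flag names may differ between faceswap versions, so treat this as illustrative rather than exact:

```python
# Sketch only: resuming training just means pointing "train" at the same
# model directory again. Paths are hypothetical; flag names may vary by
# faceswap version.
import subprocess

MODEL_DIR = "models/my_swap"   # same folder as the interrupted run
FACES_A = "faces/person_a"     # extracted faces for the A side
FACES_B = "faces/person_b"     # extracted faces for the B side

# faceswap loads the existing weights from MODEL_DIR and carries on from
# the last save, so a stopped session is not wasted.
subprocess.run([
    "python", "faceswap.py", "train",
    "-A", FACES_A,
    "-B", FACES_B,
    "-m", MODEL_DIR,
], check=True)
```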
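And a similar sketch for the convert step, mainly to show that the B faces folder is never passed in: only the source frames/video, its alignments, and the trained model are needed. Again, the paths are examples and the flags are assumptions that may vary by version:

```python
# Sketch only: converting needs the source frames/video, the alignments
# produced at extraction time (assumed to sit alongside the source), and
# the trained model. The B faces are not an input here.
import subprocess

SOURCE = "source/video_a.mp4"    # the video you want the face swapped onto
OUTPUT = "output/swapped.mp4"    # where the converted video goes
MODEL_DIR = "models/my_swap"     # the trained (even partially trained) model

subprocess.run([
    "python", "faceswap.py", "convert",
    "-i", SOURCE,
    "-o", OUTPUT,
    "-m", MODEL_DIR,
    "-w", "ffmpeg",              # write the result back out as a video
    # leave the swap direction at its default (A to B) unless you really
    # do want it reversed
], check=True)
```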