
Guide: Re-encoding videos via free command-line tools and GPU

Hello everyone,

I created this new topic to share some "basic scripts" to compress videos already on your PC.
The scripts are mostly aimed at Anime/Cartoons but can also be used for normal Movies and Series.
Kindly note that you might need to adjust certain settings to your preference; the scripts provided here are for reference.
Please be advised that all scripts use HEVC (10-bit) where possible. Scripts for H.264 might follow.

This topic mainly covers Intel iGPUs/dGPUs and Nvidia GPUs.
Due to AMD driver and related encoding issues, please use caution with AMD GPUs.

At this point, I would like to thank
@0x0x0x0x0 for providing the idea for the Intel "high" CQP settings and for Intel encoding testing,
@cartman0208 and @DeepSpace for testing some of the encodes, and
@Ch3vr0n for providing NVEncC hardware readouts of the RTX 4 series.

Let's proceed

Needed for Intel:
Rigaya's QSVEncC and the latest drivers, as well as a capable Intel iGPU or dGPU.
The latest version can be found here:
Code:
 https://github.com/rigaya/QSVEnc/releases
Please download and extract into a folder of your choice and remember the location.

Needed for Nvidia:
Rigaya's NVEncC and the latest drivers, as well as a capable Nvidia GPU.
The latest version can be found here:
Code:
 https://github.com/rigaya/NVEnc/releases
Please download and extract into a folder of your choice and remember the location.

Needed for AMD:
Rigaya's VCEEncC and the latest drivers, as well as a capable AMD GPU.
The latest version can be found here:
Code:
 https://github.com/rigaya/VCEEnc/releases
Please download and extract into a folder of your choice and remember the location.


For all scripts, please copy the text 1:1 and only change things you know the effect of, or if you would like to experiment with different settings.
Create a new TXT file for each script in the same folder as the extracted encoder.
For example, if you extracted QSVEncC to "D:\Encoder\QSVEncC\" the script should be saved there too.
Give each script a different name.


How to run the scripts?
After all the scripts you need have been stored in the folder and with separate naming, you can either create a batch file or run the command from the command line.

Run from the command line:
Go into the folder with the videos you want to convert and open a command line there. Ensure that the output folder exists and adjust the example below accordingly:

Code:
for %i in (*.mp4) do "E:\rigaya\NVEncC\NVEncC64.exe" -i "%i" --sub-source "%~dpni.srt" --option-file "E:\rigaya\NVEncC\example.txt" -o "F:\Re-encode\%~ni.mkv"

For a batch file:
Create a new text document.
Rename it with a .bat extension, e.g., EncodeVideosWithSubtitles.bat.
Right-click on the .bat file and select 'Edit', which should open the file in Notepad or another text editor.

Paste the following script:
Code:
@echo off
cd %~dp0
for %%i in (*.mp4) do (
    "E:\rigaya\NVEncC\NVEncC64.exe" -i "%%i" --sub-source "%%~dpni.en-us.srt" --option-file "E:\rigaya\NVEncC\example.txt" -o "F:\REencode\%%~ni.mkv"
)
pause

%~dp0 sets the current directory to where the batch file resides.
In a batch script, you must use double percent signs (%%) instead of a single % for loop variables.
The pause command at the end will keep the command window open after processing so you can see any errors or messages.

To use the script:

Place the .bat file in the directory containing your MP4 videos and subtitle files.
Double-click the .bat file to run it.
The script will process each MP4 file, encode it using the settings in example.txt, mux in the subtitles, and save the result in the output directory specified in the script (F:\REencode in this example).

Please note that you need to adjust the subtitle file name pattern to match the subtitles in your folder.
You can also force a subtitle and set metadata, for example:
Code:
--sub-source "%%~dpni.en-us.srt:disposition=forced;metadata=language=eng"


Disclaimer:

While I have dedicated a significant amount of time and effort to test everything detailed in this post, I cannot guarantee that there are no mistakes.
Human error is always a possibility, and this guide is no exception.
If you find any inaccuracies or errors, please kindly bring them to my attention.
I'm committed to rectifying any issues based on your feedback.
I genuinely appreciate constructive comments and suggestions that can help improve this guide further.
 
"Scripts"

Scripts for Intel:


1. General video compression
This will compress the video and copy the audio 1:1

Code:
--avhw
--codec  hevc
--quality best
--profile main10
--cqp 34:34:36
--i-adapt
--b-adapt
--scenario-info archive
--open-gop
--ref 8
--bframes 16
--output-depth 10
--pic-struct
--hevc-gpb
--vpp-deband
--audio-copy
--chapter-copy


2. Video compression with sharpening
This will sharpen and compress the video and copy the audio 1:1

Code:
--avhw
--codec  hevc
--quality best
--profile main10
--cqp 34:34:36
--i-adapt
--b-adapt
--scenario-info archive
--open-gop
--ref 8
--bframes 16
--output-depth 10
--pic-struct
--hevc-gpb
--vpp-deband
--vpp-edgelevel
--vpp-warpsharp
--audio-copy
--chapter-copy

3. Video compression with sharpening and upscaling
This will upscale to X*1080, sharpen and compress the video, and copy the audio 1:1.
X is calculated automatically to keep the aspect ratio (the negative value in --output-res tells the encoder to work out the width itself).

Code:
--avhw
--codec  hevc
--quality best
--profile main10
--cqp 34:34:36
--i-adapt
--b-adapt
--scenario-info archive
--open-gop
--ref 8
--bframes 16
--output-depth 10
--pic-struct
--hevc-gpb
--output-res -4x1080
--vpp-resize lanczos4
--vpp-deband
--vpp-edgelevel
--vpp-warpsharp
--audio-copy
--chapter-copy
 
Scripts for Nvidia:

1. General video compression
This will compress the video and copy the audio 1:1

Code:
--avhw
--codec hevc
--preset quality
--profile main10
--qp-init 20:23:25
--qp-min 18:21:23
--qp-max 22:25:27
--output-depth 10
--lookahead 32
--bframes 5
--ref 8
--multiref-l0 3
--multiref-l1 3
--bref-mode each
--mv-precision Q-pel
--pic-struct
--split-enc auto
--vpp-deband
--audio-copy
--chapter-copy




2. Video compression with sharpening
This will sharpen and compress the video and copy the audio 1:1

Code:
--avhw
--codec hevc
--preset quality
--profile main10
--qp-init 20:23:25
--qp-min 18:21:23
--qp-max 22:25:27
--output-depth 10
--lookahead 32
--bframes 5
--ref 8
--multiref-l0 3
--multiref-l1 3
--bref-mode each
--mv-precision Q-pel
--pic-struct
--split-enc auto
--vpp-deband
--vpp-edgelevel
--vpp-warpsharp
--audio-copy
--chapter-copy

3. Video compression with sharpening and upscaling
This will upscale to X*1080, sharpen and compress the video, and copy the audio 1:1.
X is calculated automatically to keep the aspect ratio (the negative value in --output-res tells the encoder to work out the width itself).

Code:
--avhw
--codec hevc
--preset quality
--profile main10
--qp-init 20:23:25
--qp-min 18:21:23
--qp-max 22:25:27
--output-depth 10
--lookahead 32
--bframes 5
--ref 8
--multiref-l0 3
--multiref-l1 3
--bref-mode each
--mv-precision Q-pel
--pic-struct
--split-enc auto
--output-res -4x1080
--vpp-resize lanczos4
--vpp-deband
--vpp-edgelevel
--vpp-warpsharp
--audio-copy
--chapter-copy
 
Scripts for AMD
USE WITH CAUTION DUE TO DRIVER ISSUES!

1. General video compression
This will compress the video and copy the audio 1:1

Code:
--avhw
--codec hevc
--preset slow
--profile main10
--cqp 19:21:24
--output-depth 10
--ref 5
--vpp-deband
--audio-copy
--chapter-copy


2. Video compression with sharpening
This will sharpen and compress the video and copy the audio 1:1

Code:
--avhw
--codec hevc
--preset slow
--profile main10
--cqp 19:21:24
--output-depth 10
--ref 5
--vpp-deband
--vpp-edgelevel
--vpp-warpsharp
--audio-copy
--chapter-copy

3. Video compression with sharpening and upscaling
This will upscale to X*1080, sharpen and compress the video, and copy the audio 1:1.
X is calculated automatically to keep the aspect ratio (the negative value in --output-res tells the encoder to work out the width itself).
RESIZING IS CURRENTLY NOT RECOMMENDED WITH AMD!

Code:
--avhw
--codec hevc
--preset slow
--profile main10
--cqp 19:21:24
--output-depth 10
--ref 5
--output-res -4x1080
--vpp-resize lanczos4
--vpp-deband
--vpp-edgelevel
--vpp-warpsharp
--audio-copy
--chapter-copy

INFO: Most AMD cards do not support B-frames; however, the B-frame value must still be included in --cqp (I:P:B).
I tested this with an RX 570 and an RX 6700 XT. Without that third value, there will be an error.
 
FAQ

Are these all the settings there are?
No, you can find more here:
Intel:
Code:
 https://github.com/rigaya/QSVEnc/blob/master/QSVEncC_Options.en.md
Nvidia:
Code:
 https://github.com/rigaya/NVEnc/blob/master/NVEncC_Options.en.md
AMD:
Code:
 https://github.com/rigaya/VCEEnc/blob/master/VCEEncC_Options.en.md


How do I know what settings my GPU supports?
You can run "--check-features" from the command line.
For example, run this from the command line within the encoder's folder:
Code:
QSVEncC.exe --check-features
NVEncC.exe --check-features
VCEEncC.exe --check-features


Why do the provided scripts look so limited?
They are for reference. You can include other options from the aforementioned links.
You are also able to adjust the settings to your liking and preferences.
There is no one-size-fits-all.
Especially for AMD, you might need to tinker with the settings for your GPU.


I don't understand why the sharpening settings have no values.
The tool will use standard values for most settings if no value is provided.

Here is an example:
Code:
--vpp-edgelevel strength=7.0,threshold=18.0,black=1,white=1
This filter is an edge enhancement filter. Its purpose is to make the edges in a video look sharper. Here's what each parameter does:

strength: Specifies the strength of the edge enhancement. A higher value will result in stronger sharpening.
threshold: This parameter determines the minimum difference between pixel values the filter considers an "edge." Only differences above this threshold will be enhanced.
black: This parameter amplifies darker edges. A value greater than 1 will amplify darker edges more, while a value less than 1 will reduce the amplification.
white: Similar to the black parameter, but for lighter edges. A value greater than 1 amplifies lighter edges more, while less than 1 reduces the amplification.



I want my stereo to be 5.1 is that possible?
Yes and no.
It is possible to upmix stereo to 5.1 and redistribute some of the sound, but if you have a sound system, it most likely already does something similar on its own.
Nevertheless, here is an example:
Code:
 --audio-filter "[0:a]pan=5.1|FL=FL|FR=FR|FC=0.5*FL+0.5*FR|LFE=0.1*FL+0.1*FR|BL=0.5*FL|BR=0.5*FR"

What does it do?
Code:
The filter is a pan filter, which is used to remap audio channels.

5.1: This indicates that the output will have a 5.1 surround sound configuration, which consists of:

FL (Front Left)
FR (Front Right)
FC (Front Center)
LFE (Low-Frequency Effects, commonly referred to as the subwoofer channel)
BL (Back Left or Rear Left)
BR (Back Right or Rear Right)
Each assignment after 5.1 indicates how the output channels will be created from the input channels:

FL=FL: The Front Left channel in the output will be the same as the Front Left channel in the input.
FR=FR: Similarly, the Front Right channel in the output will match the Front Right channel from the input.
FC=0.5*FL+0.5*FR: The Front Center channel will be a mix of 50% Front Left and 50% Front Right from the input.
LFE=0.1*FL+0.1*FR: The Low-Frequency Effects (subwoofer) channel will consist of 10% Front Left and 10% Front Right.
BL=0.5*FL: The Back Left channel will be 50% of the Front Left channel.
BR=0.5*FR: The Back Right channel will be 50% of the Front Right channel.

This filter aims to take a stereo input (assumed to have channels FL and FR) and transform it into a 5.1 surround sound output by designating specific parts of the stereo signal to the respective 5.1 channels. 
This can be used to upmix stereo audio to a 5.1 configuration. 
Note, however, that the actual spatialization won't be as authentic as true 5.1 recorded audio, but it provides a way to distribute stereo sound across a 5.1 speaker setup.
I recommend adding the options below to the script and removing --audio-copy for this to work. Adjust the bitrate to your liking.
Code:
--audio-bitrate 192
--audio-stream 5.1


I saw there is a debanding option, but no values are given.
As mentioned, if no values are given, the tool applies its built-in defaults for these settings.
However, here are some sample settings:
Code:
--vpp-deband range=30,sample=4,thre_y=20,thre_cb=20,thre_cr=20,dither_y=20,dither_c=20,rand_each_frame,blurfirst
--vpp-deband range=22,sample=1,thre_y=12,thre_cb=12,thre_cr=12,dither_y=10,dither_c=10,rand_each_frame
--vpp-deband range=24,sample=2,thre_y=10,thre_cb=10,thre_cr=10,dither_y=12,dither_c=12,rand_each_frame,blurfirst

What is a deband filter?
A deband filter aims to reduce banding artefacts in videos.
Banding often occurs in areas of the video that should show smooth gradients but instead show visible bands or stripes due to limitations in bit depth or compression.
The parameters control how this debanding operates.
Let's break down the parameters for the "--vpp-deband" filter:

range: Specifies the deband range. A higher value increases the range of the debanding effect.
sample: Sets the number of samples to be used for debanding.
thre_y, thre_cb, thre_cr: These are threshold values for the luma (Y) and chroma (Cb and Cr) channels. If the difference in the pixel value is below the threshold, the pixel will be processed as a banded pixel.
dither_y, dither_c: Amount of dithering for the luma (Y) and chroma (Cb and Cr) channels. Dithering adds a bit of noise to reduce visible banding.
rand_each_frame: If this is specified, random dithering will be recalculated for each frame. This is useful to avoid patterns from appearing in the dithering across frames.
blurfirst: If this is specified, the video will be blurred before debanding. This can help in some scenarios where the video has a significant amount of noise.

The three provided deband filters use varying parameters:

The first filter has a broader range of 30 and a higher sample value of 4. The thresholds and dithering levels are also set to higher values, and it utilizes the blurfirst option.
The second filter has a more moderate range of 22 and a sample value of 1. The thresholds and dithering levels are more conservative than the first filter's.
The third filter has a range of 24 with a sample value of 2. The thresholds are even more conservative than the second filter's, but the dithering levels are slightly higher. This filter also uses the blurfirst option.
Different parameters will be more effective for different types of video content and banding severity.
You can try the different filters on a particular video to determine which one gives the best result.

Deciding which deband filter to use depends on the source material and the specific issues you're trying to address.
Let's evaluate each filter's potential use case based on its parameters:

The first filter:
range=30, sample=4
thre_y=20, thre_cb=20, thre_cr=20
dither_y=20, dither_c=20
blurfirst

Potential Use Cases:
When dealing with severely banded content, especially when the banding encompasses a broad range.
When the video has significant noise or graininess, which the blurfirst option would reduce.
When you want to use more samples for debanding, which improves quality but may increase processing time.


The second filter:
range=22, sample=1
thre_y=12, thre_cb=12, thre_cr=12
dither_y=10, dither_c=10

Potential Use Cases:
For content with moderate banding issues, especially if you're looking for a balance between debanding quality and processing speed.
When the video has little noise, so the blurfirst option is not needed.
When processing speed is a concern, given the lower sample value.

The third filter:
range=24, sample=2
thre_y=10, thre_cb=10, thre_cr=10
dither_y=12, dither_c=12
blurfirst

Potential Use Cases:
For videos with a moderate level of banding but also noise, hence the inclusion of blurfirst.
When you're aiming for a middle ground between the first filter's aggressive settings and the second filter's more conservative settings.
When you want a slightly more effective debanding than the second filter without significantly increasing processing time.
In practice, the choice between these filters isn't always clear-cut. One might need to experiment and visualize the results to decide on the best filter.
A couple of factors to consider when deciding:

Video Source Quality: You might lean towards more conservative settings if you have a high-quality source with minimal compression artefacts.
In contrast, a lower-quality source might benefit from more aggressive debanding.

Output Intent: If you're aiming for a highly compressed web video, some finer details might not matter as much.
On the other hand, if you want to achieve the best possible quality for archival or high-definition outputs, you might opt for a more meticulous debanding process.

Always preview the results after applying any filter to ensure the visual quality meets your expectations.
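
If you want to compare the deband variants quickly without encoding full episodes, one approach is a short test encode. This is only a sketch reusing the command-line loop and folder layout from the beginning of this guide together with the --trim option explained further down in the FAQ; "deband-test.txt" is just a placeholder name for an option file containing one of the deband variants plus your other settings.
Code:
for %i in (*.mp4) do "E:\rigaya\NVEncC\NVEncC64.exe" -i "%i" --option-file "E:\rigaya\NVEncC\deband-test.txt" --trim 2000:3000 -o "F:\Re-encode\%~ni_preview.mkv"
Run it once per option file and compare the resulting short clips.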
 
FAQ

This is not enough for me. I need more filters and adjustments. What can I do?

For additional "improvements", especially for older Anime and Cartoons, I recommend using AviSynth or VapourSynth.

Here are some links to Avisynth, tools and filters.
1. AviSynth+ 64bit
Code:
 https://github.com/AviSynth/AviSynthPlus/releases
2. AvsPmod
Code:
 https://github.com/gispos/AvsPmod/releases
3. AviSynth Batch Scripter
Code:
 https://www.videohelp.com/software/AviSynth-Batch-Scripter
4. AviSynth Plugins:
a. Temporal Degrain
Code:
 http://avisynth.nl/index.php/Temporal_Degrain
and all required plugins (64-bit versions)
b. Stab
Code:
 http://avisynth.nl/index.php/Stab
and all required plugins (64-bit versions)
5. L-SMASH-Works
Code:
 https://github.com/HolyWu/L-SMASH-Works/releases/tag/20210423
(64-bit version)

Please install all the tools and extract all the plugins in the relevant folder.
 
Sample script for AviSynth.
Kindly note that this is just a basic script for reference only.
You can use your own AviSynth scripts or adjust the example shown here:

Code:
#load the video and audio via LWLibavVideoSource
v = LWLibavVideoSource("F:\S01E01_I.mkv")
a = LWLibavAudioSource("F:\S01E01_I.mkv")
audiodub(v,a)

#remove grain from source, enable GPU acceleration
TemporalDegrain(GPU=true,sigma=16,degrain=3,ov=4,blksize=16)

#stabilise video, recommended for older Anime/Cartoons where there is a lot of shaking, may introduce black borders
Stab(range=4, dxmax=8, dymax=8, mirror=15)

To feed an AviSynth script to one of the aforementioned encoders, remove "--avhw" from your option file (or create a new option file without it) and change the input file extension to .avs.
Example line for encoding:
Code:
for %i in (*.avs) do "E:\rigaya\NVEncC\NVEncC64.exe" -i "%i" --sub-source "%~dpni.srt" --option-file "E:\rigaya\NVEncC\example.txt" -o "F:\Re-encode\%~ni.mkv"
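
For reference, an option file for .avs input could simply be the general Nvidia script from above with the --avhw line removed. This is only a sketch; keep whichever of your own settings you need:
Code:
--codec hevc
--preset quality
--profile main10
--qp-init 20:23:25
--qp-min 18:21:23
--qp-max 22:25:27
--output-depth 10
--lookahead 32
--bframes 5
--ref 8
--audio-copy
--chapter-copy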

I recommend the AviSynth Batch Scripter mentioned in the tools above to create several AVS files without doing it manually.
To check how the settings affect your video, please use AvsPmod.
 
FAQ

Why does AviSynth take so long? I have a fast GPU.

Kindly note that, depending on the filters you use, the AviSynth script itself can slow down the process, regardless of how fast your GPU is.
 
Oh right, I haven't used it at all since we talked. I'll give the forced subs another try with this guide, but I am not sure when that will happen. Hopefully this or next week.
 
FAQ

I always have to encode the whole video to see the effect. Isn't there another way?


You can use, for example:
Code:
--trim 2000:3000
This will encode from frame #2000 to frame #3000.
You can change the values. Kindly ensure the values are within the frame count of the video.


When I run an encoding, I get some yellow text in the command line window. What is that?

Yellow text usually means that the encoder doesn't support a particular setting and changed it, or that one of the values you chose enabled another setting automatically.
An example of this would be
Code:
cop.SingleSeiNalUnit value changed off -> auto by driver


I see red text in the command line window, and nothing encodes. What is that?

Red text in the command line window means that one of the applied settings is not supported.
An example would be hardware decoding being enabled while the GPU's hardware decoder does not support the codec of the source video.
In this case, you would need to enable software decoding by switching the setting from:
Code:
before
--avhw

after
--avsw


I do not use Avisynth, but my encoding is still slow. I thought it was only slow when Avisynth was used.

Please be informed that the encoding speed depends on several factors:
1. The GPU used: an integrated GPU is, in general, slower than a dedicated GPU.
2. Encoding from and to the same drive can slow down the process. Please select another drive as the output drive.
3. Software decoding is enabled instead of hardware decoding. Software decoding relies on the performance of the CPU.
4. The settings selected for the encoding: more demanding settings reduce encoding speed.
5. There might be other factors that affect the encoding, for example, other software using the GPU, CPU, etc.
6. Please always check that you have the latest driver installed for your GPU.
 
Is there any other way to improve the speed of AviSynth?

Yes, the Prefetch() function in AviSynth+.

The Prefetch function in Avisynth+ is related to multithreading.
Traditional Avisynth is single-threaded, meaning it can only use one CPU core at a time, which can be quite slow for some processing tasks.
Avisynth+ introduced some multithreading features, among which is the Prefetch function.

Prefetch(N) allows Avisynth+ to request N frames ahead of the currently required frame.
By doing this, it makes better use of multi-core processors.
For example, if you have a filter chain processing video, and it's currently working on frame 10, if you use Prefetch(24), it would also be requesting frames 11 through 34 simultaneously.
This allows multiple CPU cores to work on different frames simultaneously.

In this case, 24 is the number of frames you're asking AviSynth+ to request ahead of the current frame.
In general, the ideal number depends on a few factors:
  1. The number of CPU cores you have:
    If you have a quad-core processor, you might request more frames than with a dual-core processor.
    Not every filter or operation in AviSynth can be efficiently multithreaded, so you might not always see a linear performance increase.

  2. The complexity of your script:
    If you're doing heavy processing on each frame, you might want to prefetch fewer frames because each frame will take longer to process.
    Conversely, you can prefetch more frames if you're doing light processing.

  3. Memory Usage:
    Prefetching many frames can use substantial memory, especially if you're working with high-resolution video.
    You don't want to request so many frames that your system runs out of RAM and starts using the hard drive for swapping, which would slow down processing significantly.
It's often a good idea to experiment with different values for Prefetch to see what gives you the best performance for your specific script and hardware configuration.

Kindly note that when AviSynth+ processes a script, it works in a "pull" mode.
This means that the final output (usually a player or encoder) requests a frame, and this request works its way back through your script, pulling the necessary data through each filter or operation in turn.

By placing Prefetch at the end of your script, you're ensuring that the prefetching applies to the entire processed clip, taking into account all the filters and operations you've defined in your script.
If you were to place Prefetch earlier in the script, only the operations up to the point where you inserted Prefetch would benefit from the multithreading.
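
To make the placement clear, here is a minimal sketch; the input path and filter calls are simply taken from the sample script above, and Prefetch(24) is only an example value, as discussed.
Code:
# load the source as shown earlier
LWLibavVideoSource("F:\S01E01_I.mkv")

# example filter chain
TemporalDegrain(GPU=true,sigma=16,degrain=3,ov=4,blksize=16)
Stab(range=4, dxmax=8, dymax=8, mirror=15)

# Prefetch goes last so it covers the whole chain above
Prefetch(24)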
 
Do you have an example of what an old Anime/Cartoon could look like after AviSynth+ processing?

Yes, here you go.
Kindly note that I placed the script used for this at the end, with some explanation of what does what.
I did not use any upscaling in that script.
UPDATE: To clarify, the script provided here does not generate the before/after comparison that you see in the samples; it only produces the "after" image, showing how the video changed and improved.

[Before/after screenshots attached in the original post.]

Code:
# Loading video and audio sources
v = LWLibavVideoSource("path_to_your_video.mkv")
a = LWLibavAudioSource("path_to_your_audio.mkv")

# Muxing the audio and video together
audiodub(v, a)

# Color space conversion: converting to RGB color space using the Rec. 709 matrix
ConvertToRGB(matrix="rec709")

# Adjusting the red, green, and blue channels separately.
# Decreasing the red channel's brightness by 15 units.
RGBAdjust(rb=-15, gb=0, bb=0)

# Adjusting the red, green, and blue channels by the same factor, effectively brightening the image.
RGBAdjust(r=250.0/220.0, g=250.0/220.0, b=250.0/220.0)

# Converting back to YV12 color space using the Rec. 709 matrix
ConvertToYV12(matrix="rec709")

# Applying a temporal denoising function. This function uses surrounding frames to denoise.
# GPU acceleration is enabled, and the noise standard deviation is set to 6.
TemporalDegrain(GPU=True, sigma=6)

# Video stabilization to correct for shaky camera movements.
# It looks within a range of 4 frames, and allows a maximum shift of 8 pixels horizontally and vertically.
# If stabilization requires a clip to be shifted to the point where the edge is visible,
# it mirrors the border pixels up to 15 pixels deep to fill in the gap.
Stab(range=4, dxmax=8, dymax=8, mirror=15)

# Prefetching frames for multithreading support. The next 24 frames are requested ahead of the current frame
# to better utilize multi-core processors.
Prefetch(24)
 
How much space can I save with this method?

When we talk about video compression, it's not just about reducing file sizes.
The ultimate goal is to ensure our videos look their best while taking up less space on our devices.

Sometimes, even after compression or "improvement," videos might not look great.
However, you can enhance video quality independently with the right tools and guidance, such as the provided script.

Here are a couple of things to consider when compressing videos:

  1. Starting Quality:
    Compressing a video effectively can be challenging if it is grainy or noisy. Instead of getting smaller, the file size might stay the same or even increase a bit.

  2. Compression Settings:
    It's like choosing the best setting for your camera. There are different options, and each can affect both the video's quality and its size. The trick is finding the right balance.
You might have stumbled upon terms like H.264 or H.265 when dealing with video compression. Both are "codecs"—think of them as languages or methods that videos speak to reduce their size while keeping good quality.

H.264 has been the go-to for many years and is widely accepted because it delivers decent video quality without hogging too much space. It was groundbreaking when it first came out, striking a balance between file size and video clarity.

H.265, also known as HEVC (High-Efficiency Video Coding), is the newer kid on the block.
Picture it as the more advanced sibling of H.264.
Not only does it retain the video quality of H.264, but it can also compress videos even further.
This means you get a similar (or even better) video than H.264 but in a smaller package.
It's particularly useful for UHD or 4K videos, as they can be quite sizeable.

Now, when we mention "10-bit", we're diving into the world of colour. In simple terms, 10-bit colour depth allows more colours to be displayed on the screen. This means smoother transitions, less "banding" (those annoying stripes you sometimes see in skies or gradients), and a richer visual experience.

For anime and cartoons, which often feature bold colours and clear contrasts, 10-bit is a blessing. Scenes with vibrant hues and smooth gradients look noticeably better, avoiding the pitfalls of banding or colour inconsistencies.
And for UHD content, which often comes in 10-bit by default, using a 10-bit compression approach retains the richness and depth of the visuals, ensuring viewers get the best experience possible.

In summary, while H.264 is a reliable workhorse, H.265 is a thoroughbred racehorse—faster and more efficient. And 10-bit?
That sprinkle of magic dust makes everything pop, especially for anime, cartoons, and UHD content.




EVERYTHING LOOKED WRONG when I tested the sample AviSynth script provided with the sample screenshots.

Please be aware that the provided scripts are just examples. They might require some tweaks to fit your specific needs.

While it's tempting to use some scripts directly without modification, double-checking the results after implementation is always a good practice.
Always ensure the output aligns with your expectations.
 
Can you please provide a sample script and screenshots for AviSynth with a grainy video?

Yes, here you go.
Kindly note that this uses several filters and was made in a rush, so you might need to tweak it to your liking.
You might also need to download additional AviSynth filters to use the sample script at the end.
No upscaling was used.

[Before/after screenshots attached in the original post.]



Code:
# Load your video and audio sources
v = LWLibavVideoSource("C:\path\to\your\video\sample.mp4")
a = LWLibavAudioSource("C:\path\to\your\audio\sample.mp4")

# Merge the video and audio streams
clip = AudioDub(v, a)

# FILTERS:

# FineDehalo: Used for removing halo artefacts that can appear around edges in a video.
# Halos are bright or dark outlines that can appear due to excessive sharpening or other processes.
clip = clip.FineDehalo(rx=2.5, ry=1.0)

# KNLMeansCL: A denoising filter. It helps reduce noise in a video.
# Noise can be random speckles or grains that appear in videos, especially in low-light conditions.
clip = clip.KNLMeansCL(d=1, a=1, h=3.0)

# SMDegrain: Another denoising filter. It can also be used to smooth out a video.
# This can help make the video look cleaner, especially if it has a lot of random noise.
clip = clip.SMDegrain(thsad=234, tr=2, PreFilter=4)

# Sharpen: Enhances the edges and contrast in a video.
# Helps make the video look crisp, especially if it's blurry.
clip = clip.Sharpen(0.3, 0.30)

# DeSpot: Helps in removing small spots or specks from the video.
# Useful for videos that have dust or dirt specks.
clip = clip.DeSpot()

# Prepare a denoised version of the clip for the Repair function
denoised_clip = clip.KNLMeansCL(d=1, a=1, h=3.0)

# Repair: Compares two clips (in this case, the processed clip and a denoised version)
# and tries to repair defects in the processed clip using the denoised version as a reference.
clip = Repair(clip, denoised_clip, 1)

# Prefetch is used to buffer frames in memory for faster processing.
# Setting it to 0 means no frames are buffered ahead of time. This uses less memory but might be slower.
clip.Prefetch(0)

return clip




Why don't you upscale via AviSynth?

You can do that if you like.
I prefer upscaling via QSVEncC or NVEncC during the encode.
You can test this and see which works for you.
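
If you do want to upscale inside AviSynth, a minimal sketch with the built-in Lanczos resizer would look like the line below. The 1440x1080 target is only an example, so pick a resolution that matches your source's aspect ratio, and place the line before Prefetch() if you use it.
Code:
# upscale to 1440x1080 with Lanczos (example values only)
LanczosResize(1440, 1080)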
 
Is there a method to remove film grain via the encoders without using AviSynth?

Yes, here are some suggestions for reducing film grain in videos:

  1. --vpp-convolution3d:
    This is a 3D noise reduction option designed to reduce temporal noise (noise that changes over time, like grain).
    Here's a recommended setting:
    --vpp-convolution3d matrix=simple,ythresh=5,cthresh=6,t_ythresh=4,t_cthresh=5
    The matrix=simple parameter uses an average of neighbouring pixels for denoising. The threshold values are slightly increased to have a more substantial noise removal effect but without going too high to avoid unwanted artefacts. Remember, these values are just a starting point; you might need to tweak them based on the results.

  2. --vpp-knn:
    This is a strong noise-reduction filter. Given that grain is a form of noise, you can use this filter to remove it:
    --vpp-knn radius=3,strength=0.12,lerp=0.15
    Here, the strength is slightly increased for more aggressive noise removal. The Lerp value is adjusted for better blending between original and noise-reduced pixels.

  3. --vpp-pmd:
    This method is designed to preserve edges while performing noise reduction:
    --vpp-pmd apply_count=3,strength=90,threshold=110
    This setting applies the filter three times for more vigorous noise removal, with a slightly reduced strength and edge detection threshold to maintain details.


    When using these filters, previewing the results is essential to ensure that the quality is maintained and artefacts are not introduced.
    Not all of the above filters may be needed, so it's a good practice to test each individually and in combination to achieve the desired outcome.
 
As requested, here is a sample script for a particular TV series with the below plot.
In a world where unexplained phenomena and government secrets intertwine, Agent M, a believer in the supernatural, and Agent S, a sceptic grounded in science, form an iconic duo. Together, they delve into the shadows of the unknown, unravelling mysteries that challenge the boundaries of reality.

Kindly note this script is based on two screenshots that have been provided, and it is for NVEnc.

Code:
--avhw
--codec hevc
--preset quality
--profile main10
--vbr-quality 26
--vbv-bufsize 6000
--multipass 2pass-full
--weightp
--aq
--aq-temporal
--hierarchial-p
--mv-precision Q-pel
--preset P7
--output-depth 10
--lookahead 32
--bframes 5
--ref 8
--multiref-l0 3
--multiref-l1 3
--bref-mode each
--mv-precision Q-pel
--pic-struct
--split-enc auto
--vpp-convolution3d matrix=simple
--vpp-knn radius=3,strength=0.10,lerp=0.1
--vpp-unsharp weight=1.0
--vpp-edgelevel strength=10.0,threshold=16.0
--vpp-deband range=15,sample=1,thre_y=15,thre_cb=15,thre_cr=15,dither_y=12,dither_c=12,rand_each_frame
--audio-copy
 
Can I convert a video to a Blu-ray-conformant m2ts?

Yes, with Intel QSVEnc and NVIDIA NVEnc, it is relatively easy.


Example for QSVEnc:
Code:
--avhw
--codec h264
--la 5000
--quality best
--max-bitrate 8000
--vbv-bufsize 30000
--la-depth 100
--la-window-size 0
--la-quality slow
--mv-scaling 3
--b-pyramid
--bluray
--audio-copy


Example for NVenc:
Code:
--avhw
--codec h264
--qvbr 26
--preset quality
--multipass 2pass-full
--max-bitrate 8000
--vbv-bufsize 30000
--aq
--aq-temporal
--lookahead 24
--mv-precision Q-pel
--bluray
--audio-copy

If the audio is not Blu-ray conformant, you must use AC3 or EAC3.
Replace
Code:
 --audio-copy
with
Code:
 --audio-codec ac3_fixed
or
Code:
--audio-codec eac3
Additionally, you need to set the bitrate and channel layout, for example:
Code:
--audio-bitrate 640
--audio-stream 5.1

The option
Code:
--bluray
will add or remove settings that do not fit the Blu-ray spec. The exception is the audio.

Also, check the resolution. You may need to add padding if the video is not conformant.

For example, suppose the video is 1920x860. Use the option
Code:
--vpp-pad <int>,<int>,<int>,<int>
which adds padding to the left, top, right and bottom (in pixels).
You can use MediaInfo and a calculator.
For the 1920x860 example, you calculate:
1080 - 860 = 220, then 220 / 2 = 110
(1080 because the target resolution should be 1920x1080.)
Therefore, the setting should look like this
Code:
 --vpp-pad 0,110,0,110
This will add padding (black bars) at the top and bottom of the video.

At the moment, no AMD VCEEnc settings are provided.

FYI:
You should be able to use PCM audio, too. However, please be informed that PCM will take up considerable space.
Option:
Code:
--audio-codec pcm_bluray
There should be no need to specify bitrate or channel layout. In the tests, this was auto-set.
You can try and experiment with the settings.
Example space:
EAC3 @340kbps 5.1 for 1 minute and 23 seconds = 3.81 MiB
PCM 5.1 for 1 minute and 23 seconds video = 68.7 MiB


Please ensure that the output file you encode has the .m2ts extension.
Example:
Code:
for %i in (*.mp4) do "NVEncC64.exe" -i "%i" --option-file "BluRay.txt" -o "%~ni.m2ts"
 
If you batch process videos with different resolutions, you should add some sort of algorithm to calculate and adjust the padding for each one. :giggle:
 
As mentioned, this is just an example; the point is specifically that the output extension should be .m2ts and not something else.


However, the only thing I could come up with in a few minutes is a drag-and-drop bat file.
You need to download the MediaInfo CLI version here:
Code:
 https://mediaarea.net/en/MediaInfo/Download/Windows
Please put it in any folder you like.
Create a new bat file, and copy and paste the text below into that new bat file.
Adjust variables to your needs.

Code:
@echo off
set MEDIAINFO_PATH=E:\rigaya\MediaInfo\MediaInfo.exe
set NVENCC_PATH=E:\rigaya\NVEncC\NVEncC64.exe

:choose_resolution
echo Select the target resolution:
echo 1. 1920x1080
echo 2. 1280x720
set /p choice="Enter your choice (1 or 2): "

if "%choice%"=="1" (
    set TARGET_WIDTH=1920
    set TARGET_HEIGHT=1080
) else if "%choice%"=="2" (
    set TARGET_WIDTH=1280
    set TARGET_HEIGHT=720
) else (
    echo Invalid choice. Please try again.
    goto choose_resolution
)

"%MEDIAINFO_PATH%" "--Inform=Video;%%Width%%+%%Height%%" %1 > "%~1.resolution.txt"

for /f "tokens=1,2 usebackq delims=+" %%A in ("%~1.resolution.txt") do (
    set Width=%%A
    set Height=%%B
)

set /a Width_ADJUST=(%TARGET_WIDTH%-%Width%)/2
set /a Height_ADJUST=(%TARGET_HEIGHT%-%Height%)/2

del "%~1.resolution.txt" > NUL 2>&1

"%NVENCC_PATH%" -i %1 --option-file "BluRay.txt" -o "%~1.out.m2ts" --vpp-pad %Width_ADJUST%,%Height_ADJUST%,%Width_ADJUST%,%Height_ADJUST%

PAUSE

And since I was at it, I also included the width variable in case you need black bars on the left and right, and a resolution selection in case you have, for example, a 1280x640 file.
Otherwise, you would get thick black bars all around when you drag and drop a 1280x640 file.


Once the encoding starts, you will see something like this in the CMD output of NVEncC:
Code:
Vpp Filters    cspconv(nv12 -> yv12)
               pad: [1920x816]->[1920x1080] (right=0, left=0, top=132, bottom=132)
               cspconv(yv12 -> nv12)
 
@cartman0208

I revised the script.

Updated Batch Script for Automated Video Processing: Key Changes Explained

1. Automated Resolution Selection
  • What's New: Previously, users had to choose the target resolution manually.
    Now, the script intelligently selects the appropriate resolution based on the dimensions of the input video file.
  • Script Note: The script now features an automated mechanism to determine the target resolution, eliminating manual input.
2. Aspect Ratio Analysis
  • What's New: I've introduced a new functionality to calculate and compare the aspect ratios.
    This ensures the selected resolution closely aligns with the aspect ratio of the input file, choosing between standard 1920x1080 and 1280x720 resolutions.
  • Script Note: The script includes calculations to compare aspect ratios, ensuring the most suitable standard resolution is selected.

3. Optimized Handling for Smaller Resolutions
  • What's New: To address the issue of excessive padding (adding black bars) when dealing with smaller resolutions,
    the script now includes a check for resolutions lower than 1280x720.
  • Script Note: There's a specific check to prevent unnecessary padding when padding smaller videos to larger standard resolutions.
    However, it's important to note that for very small files, such as a 720x576 video, the script will still add padding to match the resolution of 1280x720.

4. Removal of Manual Selection Prompt
  • What's New: I've removed the need for manual resolution selection, streamlining the process.
    The script now automatically determines the best resolution for the input file.
  • Script Note: The manual resolution selection step has been eliminated for a more automated experience.

Drag-and-Drop Functionality
  • This script is tailored for ease of use with a drag-and-drop feature.
    Drag your video file onto the batch file, which will automatically process the video.
    It determines the resolution and aspect ratio, then encodes the video using NVEncC to a Blu-ray-compatible resolution, applying appropriate padding when necessary.

Overall Enhancement
  • These updates significantly improve the script's functionality, making it more user-friendly and efficient.
    The automation of resolution selection and the optimization for standard Blu-ray resolutions, especially for smaller input files, ensure a smoother and more reliable video processing experience.

Revised script:

Code:
@echo off
REM Set the paths for MediaInfo and NVEncC
set MEDIAINFO_PATH=E:\rigaya\MediaInfo\MediaInfo.exe
set NVENCC_PATH=E:\rigaya\NVEncC\NVEncC64.exe

REM Use MediaInfo to get the resolution of the input file and write it to a temporary file
"%MEDIAINFO_PATH%" "--Inform=Video;%%Width%%x%%Height%%" %1 > "%~1.resolution.txt"

REM Read the resolution from the temporary file
for /f "tokens=1,2 delims=x" %%A in ('type "%~1.resolution.txt"') do (
    set Width=%%A
    set Height=%%B
)

REM Calculate the aspect ratio of the input file
set /a AspectRatioInput=10000*%Width%/%Height%

REM Define aspect ratios for standard resolutions
set /a AspectRatio1080=10000*1920/1080
set /a AspectRatio720=10000*1280/720

REM Calculate the difference in aspect ratios between the input file and standard resolutions
set /a Diff1080=%AspectRatioInput%-%AspectRatio1080%
if %Diff1080% LSS 0 set /a Diff1080=-%Diff1080%

set /a Diff720=%AspectRatioInput%-%AspectRatio720%
if %Diff720% LSS 0 set /a Diff720=-%Diff720%

REM Check if the input resolution is smaller than or equal to 1280x720
if %Width% LEQ 1280 if %Height% LEQ 720 (
    set TARGET_WIDTH=1280
    set TARGET_HEIGHT=720
    goto resolution_done
)

REM Otherwise choose the standard resolution with the closest aspect ratio
if %Diff1080% LEQ %Diff720% (
    set TARGET_WIDTH=1920
    set TARGET_HEIGHT=1080
) else (
    set TARGET_WIDTH=1280
    set TARGET_HEIGHT=720
)
:resolution_done

REM Calculate padding needed to adjust the video to the target resolution
set /a Width_ADJUST=(%TARGET_WIDTH%-%Width%)/2
set /a Height_ADJUST=(%TARGET_HEIGHT%-%Height%)/2

REM Delete the temporary file
del "%~1.resolution.txt" > NUL 2>&1

REM Run NVEncC with the calculated parameters
"%NVENCC_PATH%" -i %1 --option-file "E:\rigaya\BluRay.txt" -o "%~1.out.m2ts" --vpp-pad %Width_ADJUST%,%Height_ADJUST%,%Width_ADJUST%,%Height_ADJUST%

REM Pause the script to view any messages before closing
PAUSE


Please note that you may need to adjust certain variables in the script to match your system's configuration.
Key variables to review and modify if necessary include:
  1. MEDIAINFO_PATH and NVENCC_PATH: Set these to the paths where MediaInfo and NVEncC (or QSVEncC) are installed on your system. For example:
    Code:
    set MEDIAINFO_PATH=E:\path\to\MediaInfo\MediaInfo.exe
    set NVENCC_PATH=E:\path\to\NVEncC\NVEncC64.exe

  2. The command invoking NVEncC (or another encoder like QSVEncC if you prefer to use that).
    Ensure the paths to the option file location are correct for your setup:
    Code:
    "%NVENCC_PATH%" -i %1 --option-file "E:\path\to\your\options\file.txt" -o "%~1.out.m2ts" --vpp-pad %Width_ADJUST%,%Height_ADJUST%,%Width_ADJUST%,%Height_ADJUST%
If you use a different encoder or have your tools and files in other locations, update these paths accordingly.
This customization is crucial for the script to function correctly on your machine.


DISCLAIMER
Although I've tested this script extensively, unforeseen issues may still exist.
Please don't hesitate to let me know if you encounter any problems.
Additionally, while I'm considering adding batch processing for entire folders, due to time constraints this feature might not be available immediately.
 