
[optimisations] sluggish performance when download target is over the network

0x0x0x0x0

First off, this is something for the long term, definitely not something one would reasonably expect to get fixed in the near future, but nonetheless... I've been trying to figure out why my download traffic via cable looks like a rather angry hedgehog, and started playing with the download targets (Settings->Download->Downloaded media is saved to: ...). It transpires that AS suffers quite a penalty when writing to network mounts vis-à-vis local storage:

So, by a totally unscientific measure, a 2,823,437,162-byte download takes (this is just the time difference between the "download completed" and the first "Starting" [Notice] log entries):

local disk (Samsung 830): 1m56.330
SMB^* v3.1.1: 3m01.817
NFS* v4.1: 3m05.591

Interestingly, dumping to an iSCSI* target, the same file downloaded in 2m00.461.

^ SMB has DisableLargeMtu=0, DisableBandwidthThrottling=1
* 1GbE with 9000-byte MTU to the switch; switch uplink 10GbE (MTU 9000) to the back of the server (Chelsio T540); quad-channel DDR4 backstore (for iSCSI: raw image created on tmpfs, so two levels of translation; no hardware acceleration (too much hassle for this experiment))

The same file copied from the local store over SMB at 112,456,174 B/s and over NFS at 95,437,979 B/s, so it's not a LAN issue; the WAN link is about 55 MB/s.
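To put those numbers side by side: the implied average rate of the SMB download is a fraction of what a plain SMB copy achieves on the same link. A quick back-of-the-envelope check (figures taken from this post; Python used purely as a calculator):

```python
# Implied average rate of the SMB download vs. a plain SMB file copy,
# using the figures quoted above.
size = 2_823_437_162                     # bytes downloaded
smb_download = size / (3 * 60 + 1.817)   # 3m01.817 end to end, in B/s
smb_copy = 112_456_174                   # plain file copy over SMB, in B/s

print(f"SMB download: {smb_download / 1e6:.1f} MB/s")   # ~15.5 MB/s
print(f"SMB copy:     {smb_copy / 1e6:.1f} MB/s")       # ~112.5 MB/s
print(f"ratio:        {smb_copy / smb_download:.1f}x")  # ~7.2x
```

The raw copy being roughly 7x faster backs up the point that the LAN itself isn't the bottleneck.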

Just putting it out there; I know some of you mount destination storage over SMB/NFS, so there you are :)
 

Attachments

  • AnyStream_1.1.2.0.smb-nfs-local.astlog
    4 MB · Views: 0
  • AnyStream_1.1.2.0.iscsi.astlog
    1.4 MB · Views: 0
If you have not already done so, I would check to make sure that the NIC drivers are the current ones from the manufacturer. Don't rely on Windows to have the latest; I've seen many network speed issues fixed by updating the drivers.


 
Are the times you are quoting from the start of the download to the completion dialog box? With my system, when I had one set up downloading directly to my server, the time to 100% download complete was about the same as downloading directly to my computer's hard drive. However, the post-processing time was brutally long compared to the processing time from my computer's hard drive.
 
The times include post-processing time; I did that deliberately: the post-processing doesn't seem to make that noticeable an impact when you're using iSCSI.

Edit: if you take away processing times you get:

1st place: iSCSI (unsurprising, it's getting written to RAM, albeit with two levels of indirection on the server side plus local NTFS cluster-to-LBA translation) @ 1m38.705
2nd place: local Samsung 830 @ 1m48.074
3rd place: NFS* @ 2m12.654
4th place: SMB* @ 2m34.512


* both NFS and SMB are also written to RAM with fewer indirections than the iSCSI
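For scale, the processing-free times above translate into the following effective rates (back-of-the-envelope, same 2,823,437,162-byte file):

```python
# Effective throughput implied by the pure-download times above.
size = 2_823_437_162  # bytes

def rate(minutes, seconds):
    """Average throughput in MB/s for the pure download."""
    return size / (minutes * 60 + seconds) / 1e6

print(f"iSCSI:     {rate(1, 38.705):.1f} MB/s")  # ~28.6 MB/s
print(f"local SSD: {rate(1, 48.074):.1f} MB/s")  # ~26.1 MB/s
print(f"NFS:       {rate(2, 12.654):.1f} MB/s")  # ~21.3 MB/s
print(f"SMB:       {rate(2, 34.512):.1f} MB/s")  # ~18.3 MB/s
```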
 
I don't think this is something that can be addressed in AS at all ...
Depending on how an application writes its files (small or big packets) you might not gain anything from using a large MTU.
I noticed I get the best performance with MTU 1500 in my network, but I only use SMB on the 1Gb.
NFS is something for all-Linux environments, never my first choice for Windows.
iSCSI I'd need to test, but I guess it will be the fastest over the network.
But nothing beats a local SSD, at least in my 1Gb network...
 
I don't think this is something that can be addressed in AS at all ...
Depending on how an application writes its files (small or big packets) you might not gain anything from using a large MTU.
I noticed I get the best performance with MTU 1500 in my network, but I only use SMB on the 1Gb.
NFS is something for all-Linux environments, never my first choice for Windows.
iSCSI I'd need to test, but I guess it will be the fastest over the network.
But nothing beats a local SSD, at least in my 1Gb network...

If you are transferring files of any size, I would imagine the packets will be at max MTU, and if you are transferring stuff like movies, that's a lot of packets at 1500. Not long back I changed my MTU to 9000 and got a significant throughput increase when moving files between systems (using SMB). In fact it maxes out my NICs, whereas before at 1500 this was not the case. The smaller amount of overhead helps out a good bit.
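For reference, the framing overhead alone only buys a few percent. Assuming standard Ethernet framing (14 B header + 4 B FCS + 8 B preamble/SFD + 12 B inter-frame gap) and plain IPv4/TCP headers with no options, the protocol efficiency at the two MTUs works out roughly as:

```python
def efficiency(mtu):
    """Fraction of wire bytes carrying TCP payload, per full-size frame."""
    payload = mtu - 40        # minus 20 B IPv4 + 20 B TCP headers
    wire = mtu + 38           # plus Ethernet header/FCS/preamble/IFG
    return payload / wire

print(f"MTU 1500: {efficiency(1500):.1%}")  # ~94.9%
print(f"MTU 9000: {efficiency(9000):.1%}")  # ~99.1%
```

So the larger gains people see from jumbo frames tend to come from processing far fewer packets (less per-packet CPU and interrupt cost), not from the header overhead alone.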
 
If you are transferring files of any size, I would imagine the packets will be at max MTU, and if you are transferring stuff like movies, that's a lot of packets at 1500. Not long back I changed my MTU to 9000 and got a significant throughput increase when moving files between systems (using SMB). In fact it maxes out my NICs, whereas before at 1500 this was not the case. The smaller amount of overhead helps out a good bit.

You'll want to set DWORD at
Code:
HKLM\System\CurrentControlSet\Services\LanmanWorkstation\Parameters\DisableLargeMtu

to 0, otherwise Windows will cap transfers to 64 kB at a time; large MTU is only disabled by default on Win8 [1].
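For convenience, the same value expressed as a .reg file (a sketch only; back up the key first, and reboot or restart the Workstation service afterwards for it to take effect):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters]
; 0 = SMB large MTU enabled (transfers no longer capped at 64 kB)
"DisableLargeMtu"=dword:00000000
```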

Code:
1. https://docs.microsoft.com/en-us/windows-server/administration/performance-tuning/role/file-server/

I don't think this is something that can be addressed in AS at all ...

<snip>

I wouldn't be so sure: the fact that SMB is the worst performer on the bare download, with NFS winning over SMB at that stage, suggests that AS is doing lots of tiny writes, and probably in a blocking way; one way to work around that is writing large buffers and using async I/O, or completion ports...
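To illustrate the idea (a hypothetical sketch, not AS code): hand each downloaded segment to a bounded queue and let a dedicated writer thread flush coalesced multi-megabyte blocks, so the downloader never blocks on the (possibly network-mounted) filesystem.

```python
# Sketch: coalesce small segment writes into large blocks and overlap
# them with the download via a writer thread. All names are illustrative.
import queue
import threading

CHUNK = 4 * 1024 * 1024          # flush in 4 MiB blocks, not per-segment

def writer(path, q):
    """Drain the queue, writing to disk only in large coalesced blocks."""
    buf = bytearray()
    with open(path, "wb", buffering=0) as f:
        while True:
            piece = q.get()
            if piece is None:    # sentinel: flush the tail and stop
                if buf:
                    f.write(buf)
                return
            buf += piece
            if len(buf) >= CHUNK:
                f.write(buf)     # one big write instead of many tiny ones
                buf.clear()

def download(segments, path):
    """Feed segments (e.g. small HLS/DASH chunks) to the writer thread."""
    q = queue.Queue(maxsize=64)  # bounded, so memory use stays capped
    t = threading.Thread(target=writer, args=(path, q))
    t.start()
    for seg in segments:
        q.put(seg)               # returns immediately unless the queue is full
    q.put(None)
    t.join()
```

The bounded queue gives back-pressure: if the network mount stalls, the downloader eventually waits instead of buffering unboundedly, while in the common case writes and downloads overlap.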
 
I didn't see an angry hedgehog during download, but I did see a LONG wait during post-processing as it read and rewrote the destination file.
 
I didn't see an angry hedgehog during download, but I did see a LONG wait during post-processing as it read and rewrote the destination file.

I, too, get a flat line when I use VDSL that tops out at 90 Mb/s, but when I use cable capable of 550 Mb/s, I get a patent hedgehog; the long processing is there in both cases, though, unless I use iSCSI ;)
 
I played with a tweak for SMB (MaxCmds) and made a small tweak to the iSCSI server, and this is what I got for the same 2,823,437,162-byte file:

Code:
 MaxCmds | T i m e   t a k e n
         | Pure  DL | DL + Proc
---------+----------+----------
(dflt)15 | 2m38.104 | 3m20.828
    4096 | 2m25.966 | 2m53.484
    8192 | 2m38.769 | 3m24.643
   16384 | 2m31.783 | 3m14.919
        -+-        -+-
  iSCSI  | 1m12.927 | 1m35.600
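Reading the table as throughput (same file size as before), even the best MaxCmds setting leaves SMB at roughly half the iSCSI rate on the pure download:

```python
# Pure-download rates implied by the MaxCmds table above.
size = 2_823_437_162  # bytes

def rate(minutes, seconds):
    return size / (minutes * 60 + seconds) / 1e6

print(f"SMB, MaxCmds=15 (default): {rate(2, 38.104):.1f} MB/s")  # ~17.9 MB/s
print(f"SMB, MaxCmds=4096:         {rate(2, 25.966):.1f} MB/s")  # ~19.3 MB/s
print(f"iSCSI:                     {rate(1, 12.927):.1f} MB/s")  # ~38.7 MB/s
```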
 

Attachments

  • AnyStream_1.1.2.0.-maxcmds-iscsi.astlog
    6.7 MB · Views: 0