[optimisations] sluggish performance when download target is over the network

Discussion in 'AnyStream' started by 0x0x0x0x0, May 3, 2021 at 12:39 AM.

  1. 0x0x0x0x0

    0x0x0x0x0 Well-Known Member

    First off, this is a long-term thing, definitely not something one would reasonably expect to get fixed in the near future, but nonetheless... I've been trying to figure out why my download traffic via cable looks like a rather angry hedgehog, and started playing with the download targets (Settings->Download->Downloaded media is saved to: ...). It transpires that AS suffers quite a penalty when writing to network mounts vis-a-vis local storage:

    So, by a totally unscientific measure, a 2,823,437,162 byte download takes (this is just the time difference between the "download completed" and the first "Starting" [Notice] log entries):

    local disk (Samsung 830): 1m56.330
    SMB^* v3.1.1: 3m01.817
    NFS* v4.1: 3m05.591

    Interestingly, dumping to an iSCSI* target, same file downloaded in 2m00.461.

    ^smb has DisableLargeMtu=0, DisableBandwidthThrottling=1
    * 1Gbe with 9000 byte MTU to switch, switch uplink 10Gbe (w/ MTU=9000) to the back of the server (Chelsio T540), quad channel DDR4 backstore (for iSCSI: raw image created on tmpfs, so 2 levels of translation, no hardware acceleration (too much hassle for this experiment))

    The same file copied from the local store over SMB at 112,456,174 B/s, and over NFS at 95,437,979 B/s, so it's not a LAN issue, and the WAN link is about 55MB/s.
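    As a sanity check on that claim, here's the back-of-the-envelope arithmetic (file size and rates as above; the raw LAN transfer is a small fraction of the observed download times):

```python
SIZE = 2_823_437_162          # test file, bytes (from above)
rates = {
    "SMB copy": 112_456_174,  # B/s, measured plain file copy
    "NFS copy": 95_437_979,   # B/s
    "WAN link": 55_000_000,   # ~55 MB/s download capacity
}
for name, bps in rates.items():
    # ideal transfer time at that sustained rate
    print(f"{name}: {SIZE / bps:.1f} s")
```

    So even the slowest path moves the whole file in about half a minute; the extra minute-plus AS spends on SMB/NFS can't be the network itself.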

    Just putting it out there; I know some users mount their destination storage over SMB/NFS, so there you are :)
     

    Attached Files:

    Last edited: May 3, 2021 at 12:46 AM
  2. Don922

    Don922 Well-Known Member

    If you have not already done so, I would check to make sure that the NIC drivers are the current ones from the manufacturer. Don't rely on Windows to have the latest. I've seen many network speed issues fixed by updating the drivers.


    Sent from my iPhone using Tapatalk
     
  3. Watcher0363

    Watcher0363 Well-Known Member

    Are the times you are quoting from the start of the download to the completion dialog box? With my system, when I had one set up downloading directly to my server, the time to 100% download complete was about the same as downloading directly to my computer's hard drive. However, the post-processing time was brutally long compared to the processing time from my computer's hard drive.
     
  4. 0x0x0x0x0

    0x0x0x0x0 Well-Known Member

    The times include post-processing time; I did that deliberately: the post-processing doesn't seem to make that noticeable an impact when you're using iSCSI.

    Edit: if you take away processing times you get:

    1st place: iSCSI (unsurprising, it's getting written to RAM, albeit with 2 levels of indirection on the server side + local NTFS cluster => LBA translation) @ 1m38.705
    2nd place: local Samsung 830 @ 1m48.074
    3rd place: NFS* @ 2m12.654
    4th place: SMB* @ 2m34.512


    * both NFS and SMB are also written to RAM with fewer indirections than the iSCSI
     
    Last edited: May 3, 2021 at 2:30 AM
  5. cartman0208

    cartman0208 Member

    I don't think this is something that can be addressed in AS at all ...
    Depending on how an application writes its files (small or big chunks), you might not gain anything from using a large MTU.
    I noticed I get the best performance with MTU 1500 in my network, but I only use SMB on the 1Gb link.
    NFS is something for all-Linux environments, never my first choice for Windows.
    iSCSI I'd need to test, but I guess it would be the fastest over the network.
    But nothing beats a local SSD, at least in my 1Gb network...
     
  6. DarkQuark

    DarkQuark Well-Known Member

    If you are transferring files of any size, I would imagine the packets will be at the max MTU. And if you are transferring stuff like movies, that's a lot of packets at 1500. Not long back I changed my MTU to 9000 and got a significant throughput increase when moving files between systems (using SMB). In fact, it now maxes out my NICs, whereas at 1500 that was not the case. The smaller overhead helps out a good bit.
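    A rough calculation backs that up. Assuming plain Ethernet framing (38 bytes of wire overhead per frame: preamble, header, FCS, inter-frame gap; plus 40 bytes of IPv4+TCP headers, no options), jumbo frames cut the header tax roughly sixfold:

```python
WIRE_OVERHEAD = 38  # preamble + Ethernet header + FCS + inter-frame gap, bytes
HDRS = 40           # IPv4 (20) + TCP (20) headers, no options (assumption)

def efficiency(mtu: int) -> float:
    # fraction of on-the-wire bytes that is actual payload
    payload = mtu - HDRS
    return payload / (mtu + WIRE_OVERHEAD)

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {efficiency(mtu):.1%} payload")
```

    That's only ~5% of raw bandwidth reclaimed; the bigger win in practice is usually the ~6x fewer packets, and thus less per-packet CPU and interrupt work on both ends.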
     
  7. 0x0x0x0x0

    0x0x0x0x0 Well-Known Member

    You'll want to set DWORD at
    Code:
    HKLM\System\CurrentControlSet\Services\LanmanWorkstation\Parameters\DisableLargeMtu
    
    to 0; otherwise Windows will cap SMB transfers to 64 kB at a time. Large MTU is only disabled by default on Win8 [1].

    Code:
    1. https://docs.microsoft.com/en-us/windows-server/administration/performance-tuning/role/file-server/
    
    I wouldn't be so sure: the fact that SMB is the worst performer on the bare download, with NFS beating it at that stage, suggests that AS is doing lots of tiny writes, probably blocking on each one. One way to work around that is to write large buffers and use async I/O or completion ports...
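    As a stand-in illustration (Python, not AS's actual code, and the 4 KiB chunk size is my assumption): coalescing small writes in a user-space buffer before they hit the file system cuts the number of round trips to a network mount dramatically:

```python
import os
import tempfile

CHUNK = 4 * 1024         # hypothetical per-call write size (assumption, not AS's real number)
BUF = 4 * 1024 * 1024    # coalesce into 4 MiB before each real write

def tiny_writes(path: str, n: int) -> None:
    # buffering=0: one real write() per CHUNK -- each one a round trip on SMB/NFS
    chunk = b"\0" * CHUNK
    with open(path, "wb", buffering=0) as f:
        for _ in range(n):
            f.write(chunk)

def batched_writes(path: str, n: int) -> None:
    # same bytes, but buffered: ~n*CHUNK/BUF real writes instead of n
    chunk = b"\0" * CHUNK
    with open(path, "wb", buffering=BUF) as f:
        for _ in range(n):
            f.write(chunk)

# time these two against a network mount with the same n to see the gap
```

    On a local SSD the difference is small; against an SMB/NFS mount each unbuffered write can cost a network round trip, which would produce exactly the kind of penalty measured above.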
     
  8. ocd

    ocd Well-Known Member

    I didn't see an angry hedgehog during the download, but I did see a LONG wait during post-processing as it read and rewrote the destination file.
     
  9. 0x0x0x0x0

    0x0x0x0x0 Well-Known Member

    I, too, get a flat line when I use VDSL that tops out at 90 Mb/s, but when I use cable capable of 550 Mb/s, I get a patent hedgehog; the long processing happens in both cases, though, unless I use iSCSI ;)
     
  10. 0x0x0x0x0

    0x0x0x0x0 Well-Known Member

    I played with the MaxCmds tweak for SMB and made a small tweak to the iSCSI server, and this is what I got for the same 2,823,437,162 byte file:

    Code:
     MaxCmds |     Time taken
             | Pure DL  | DL + Proc
    ---------+----------+----------
    (dflt)15 | 2m38.104 | 3m20.828
        4096 | 2m25.966 | 2m53.484
        8192 | 2m38.769 | 3m24.643
       16384 | 2m31.783 | 3m14.919
            -+-        -+-
      iSCSI  | 1m12.927 | 1m35.600
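
    Subtracting the two columns isolates the post-processing share (a quick helper of mine for the "XmY.ZZZ" stamps, not anything from AS):

```python
def secs(t: str) -> float:
    # "2m38.104" -> 158.104 seconds
    m, s = t.split("m")
    return int(m) * 60 + float(s)

rows = {                    # MaxCmds: (pure DL, DL + processing), from the table above
    "15 (dflt)": ("2m38.104", "3m20.828"),
    "4096":      ("2m25.966", "2m53.484"),
    "8192":      ("2m38.769", "3m24.643"),
    "16384":     ("2m31.783", "3m14.919"),
    "iSCSI":     ("1m12.927", "1m35.600"),
}
for k, (dl, total) in rows.items():
    print(f"{k:>9}: processing {secs(total) - secs(dl):.3f} s")
```

    Notable: over SMB the processing step costs roughly 27-46 s regardless of MaxCmds, while on iSCSI it's about 23 s, i.e. MaxCmds barely touches the post-processing penalty.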
    
     

    Attached Files: