Cache not working with S3 buckets

I’ve set the NetDrive cache to 500 GB, but after running a lot of operations (a script pulls files and processes them, then copies them to another folder), I see that files that were just pulled are being removed almost instantly and the cache folder is nearly empty. I noticed this by refreshing the cache folder: a JPG file that was recently pulled shows up, but after refreshing again it’s gone.

This creates unnecessary overhead by constantly issuing pull/list requests to Amazon, which increases my AWS bill.


A user can set the cache expiration time manually by editing ndfs.ini.

The file path is “C:\Program Files (x86)\Bdrive\NetDrive3\x64\ndfs.ini” on a 64-bit OS,
or “C:\Program Files (x86)\Bdrive\NetDrive3\ndfs.ini” on a 32-bit OS.

The sample file is attached. You need to rename it to ndfs.ini.

ndfs.txt (160 Bytes)

I just did that and restarted both NetDrive services, but it’s not honoring the cache settings:
files are still being removed almost instantly.

As I’m running a 64-bit OS, the file is saved at
C:\Program Files (x86)\Bdrive\NetDrive3\x64\ndfs.ini

[CACHE]
notify_cache_path_size=10
clear_used_block=43200
clear_unused_block=3600
clear_used_block_when_quota_exceed=3600
clear_unused_block_when_quota_exceed=900
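For what it’s worth, the [CACHE] section above can be sanity-checked with a quick `configparser` read, e.g. to confirm NetDrive is actually seeing the file you edited. Note that treating the `clear_*` values as seconds is my assumption based on the key names (43200 s = 12 h), not documented behavior:

```python
import configparser

# Same [CACHE] section as in the post above; in practice you would call
# config.read(r"C:\Program Files (x86)\Bdrive\NetDrive3\x64\ndfs.ini")
SAMPLE = """\
[CACHE]
notify_cache_path_size=10
clear_used_block=43200
clear_unused_block=3600
clear_used_block_when_quota_exceed=3600
clear_unused_block_when_quota_exceed=900
"""

config = configparser.ConfigParser()
config.read_string(SAMPLE)
cache = config["CACHE"]

# Assumption: clear_* values are in seconds.
hours = int(cache["clear_used_block"]) / 3600
print(f"used blocks kept for {hours:g} h")  # 43200 s = 12 h
```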


It would be great if NetDrive handled caching differently,
i.e.
cache all files without a time limit and, as the allocated drive space fills up, evict only as many files as necessary (least recently accessed first) to make room for new files.

Just a thought
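The eviction policy suggested above is essentially a size-bounded LRU cache. A minimal sketch of the idea (hypothetical names, not NetDrive’s actual implementation):

```python
from collections import OrderedDict

class LRUCache:
    """Size-bounded cache: entries live forever unless the total size
    would exceed the quota, in which case the least-recently-accessed
    entries are evicted first. Sketch only, not NetDrive internals."""

    def __init__(self, quota_bytes):
        self.quota = quota_bytes
        self.used = 0
        # path -> size; insertion order tracks recency (oldest first)
        self.entries = OrderedDict()

    def access(self, path, size):
        """Return True on a cache hit, False on a miss (file is pulled)."""
        if path in self.entries:
            # Hit: mark as most recently used.
            self.entries.move_to_end(path)
            return True
        # Miss: evict least-recently-used entries until the new file fits.
        while self.entries and self.used + size > self.quota:
            _, evicted_size = self.entries.popitem(last=False)
            self.used -= evicted_size
        self.entries[path] = size
        self.used += size
        return False

# A 500 GB quota, as in the original report:
cache = LRUCache(500 * 2**30)
```

With this policy, nothing is removed on a timer; repeated accesses to the same files stay cache hits, and Amazon only sees requests for genuinely new data.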

Thank you for your feedback. We appreciate your input, and we will take it into consideration.

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.