
I can't say whether rclone runs lighter than any other given sync manager (because I'd need to be familiar with that sync agent to comment), but you can certainly have it run very light and use fairly trivial resources if you configure it that way (the default settings tend to be very light already).

These optional flags may be worth considering:

--track-renames is very useful to enable, as it allows rclone to re-organize already-uploaded files when you move them around locally instead of blindly re-uploading them. It compares hashes and avoids re-uploading data unnecessarily if you happen to rename or reorganize the location of your local files. This does however require hashes to identify files, so unless you use a filesystem whose hashes are compatible with your cloud service, rclone will need to fully read the files on each sync to generate them. That means it may not be ideal if you want to sync very many files very often.

--backup-dir (see the usage doc) is worth looking into if you want an extra layer of protection in case you accidentally delete something locally and rclone happens to sync right after (thus also deleting the cloud file). Use it or not depending on what you think is appropriate for your use-case.

If you have need for it, you can also apply filtering to ignore certain types of files, or certain folders inside a directory structure you otherwise want to sync, etc.

So TL;DR - rclone is well suited for the job and has lots of flexibility. I'm not sure how much more there is to add unless you need step-by-step instructions.
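To make that concrete, here's a rough sketch of what a sync command using those flags might look like (the local path, remote name, and exclude patterns are placeholders; adjust them to your own setup):

```
rclone sync /home/user/documents remote:backup/documents \
  --track-renames \
  --exclude "*.tmp" \
  --exclude ".cache/**" \
  --backup-dir remote:backup/.trash
```

Note that --backup-dir has to point at a directory on the same remote as the destination (and one that doesn't overlap it); anything deleted or overwritten by the sync gets moved there instead of being removed outright. Older rclone versions may not allow combining --track-renames with --backup-dir, so check the docs for your version.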

The easiest thing to do is just run a script on a timer (cron) that performs an rclone sync to your remote, as you already said.
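As a minimal sketch, the crontab entry could look something like this (the schedule, paths, and log file location are again just examples):

```
# Sync to the cloud every hour, on the hour, and keep a log for troubleshooting
0 * * * * /usr/bin/rclone sync /home/user/documents remote:backup/documents --log-file /home/user/rclone-sync.log --log-level INFO
```

The log file makes it easy to check later whether the scheduled syncs are actually running and what they did.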
