Cloud


 

You are strongly advised to use a linked cloud account. This profile settings page can also use and create shared settings.

 

SyncBackPro can back up and synchronize files with the following cloud storage services:

 

Amazon S3™ (and files stored on Glacier™ via Amazon S3, as well as storage types such as One-Zone Infrequent Access). S3 compatible services include Cloudflare R2, Storj, DreamObjects, S3ForMe, Dunkel, HiCloud, Wasabi, Oracle, IDrive, IBM, Contabo and others.

Backblaze™ B2

Box

Citrix ShareFile™

Dropbox™ (including support for Dropbox Business)

Egnyte™

Google Drive™ (including support for Team Drives)

Google Storage™

Google Photos™. This cloud service has its own settings page.

Microsoft Azure™ Blob Storage (including support for Hot, Cool and Archive Tiers)

Microsoft OneDrive™

Microsoft OneDrive for Business (Office 365)

Microsoft SharePoint™ (Office 365)

OpenStack compatible services, e.g. Rackspace™

OVH™ (via Rackspace/OpenStack)

pCloud™

SugarSync™

WebDAV

 

This means your files will be securely stored on their servers; having an offsite backup of your files is highly recommended. Before you can use this feature you must create an account on the relevant cloud storage service. Once you've created an account you will receive your account details, which you will need in order to use their service.

 

There are some cloud services which are compatible with Amazon S3. To use such a service with SyncBackPro you simply need to set the Cloud Service and Service URL as appropriate.

 

If you want to download files from a web server, using HTTP, see HTTP.

 


Important: It is highly recommended that you use a linked cloud account when using cloud services. This is especially important if you're using Box™. The exception to this is when using multiple profiles in parallel with the same Egnyte account (see below for details).

 

Destination/right files are on a cloud storage server: If ticked, then the destination/right is a compatible cloud storage service, i.e. you are backing up to or synchronizing with a cloud service.

 

 

Server Connection Details

 

Cloud Service: Select the appropriate type of cloud storage service that is to be used.

 


 

The cloud service forces file versioning: Some cloud services forcibly enable their own file versioning. Click this link to go to the Versioning settings page to configure automatic purging of excess version files.

 

S3 compatible service: If you are using an S3 compatible cloud storage service, e.g. DreamObjects, S3for.Me, Oracle Cloud, etc., then tick this checkbox. This switches off certain features that are usually only available on the Amazon S3 servers themselves.

 

Use Identity V3 API (required for some OpenStack services): If you are using an OpenStack compatible cloud storage service then you may need to tick this checkbox. Check with your cloud service if you are not sure.

 

Use my Dropbox Teams root folder (only valid when using Dropbox Business): If you are using Dropbox Business, and want SyncBackPro to copy files and folders from your team's root folder (instead of your home folder), then select this option. Note that your home folder is a folder within your team's folder. Your home folder typically contains only your personal files and folders, whereas the team's root folder contains your home folder as well as team folders (folders shared between members of your team). If you change this setting then the cloud database will be deleted. You will also need to change your file & folder selections.

 

Ignore OneNote files: If you are using OneDrive Personal, OneDrive for Business or SharePoint, and want SyncBackPro to ignore OneNote files stored on the cloud storage service, then select this option.

 

Service URL: If you are using Amazon S3 or Microsoft Azure then it is recommended that you leave this setting as [default]. You only need to change this setting if you are using a service which is compatible with Amazon S3 or Microsoft Azure. For example, DreamObjects is an S3 compatible service. In these cases you must use the URL supplied by the compatible service (otherwise it would connect to the Amazon or Microsoft servers). If you are using OpenStack then you must enter the service's Identity V2 URL (the authorization URL). If you are using Microsoft Azure, and want to use a Shared Access Signature (SAS), then supply it here. Note that it must include the container.

 

Site Path / Domain: If you are using SharePoint, and you want to access a path or sub-site within your site, then enter the path to it here. For example, if you want to access https://mycompany.sharepoint.com/path/to/destination then enter path/to/destination. As another example, to access https://mycompany.sharepoint.com/subsite, enter subsite. This setting is optional, and if you don't want to access a path or sub-site, then you should leave it as [default]. For Egnyte, you need to supply your domain name (not the entire URL, e.g. mydomain and not mydomain.egnyte.com).

 

Project ID: If you are using Google Storage, then enter the project ID here. You can retrieve your project ID from your Google Storage console.

 

Tenant / Project: If you are using an OpenStack compatible cloud service then you may need to specify a tenant/project. Check with your cloud service if you are not sure.

 

Username / Access Key ID / Account Name / Account ID: Depending on the cloud service, this is essentially the cloud login username. On Amazon S3 this is called the access key, which is used to connect to the S3™ service. Microsoft Azure™ calls this the account. This setting is not relevant for some cloud services, e.g. Google Drive™. For those cloud services you should use a linked cloud account or click the Authorize button. For non-OAuth cloud services, e.g. Amazon S3, you can use a secret for the username.

 


With Google Storage you can use a Service Account Private Key file instead of OAuth authentication. This is recommended as it allows for parallel file transfers.

 

Password / Secret Access Key / Primary Access Key / API Key / Application Key / Access Token: Depending on the cloud service, this is essentially the cloud login password. On Amazon S3 this is called the secret key. Microsoft Azure calls this the access key. You can optionally have SyncBackPro prompt you for the password instead of entering it here. This setting is not relevant for some cloud services, e.g. Google Drive™. For those cloud services you should use a linked cloud account or click the Authorize button. For non-OAuth cloud services, e.g. Amazon S3, you can use a secret for the password.

 

Prompt for the password when run (profile will fail if run unattended): If this option is enabled then every time the profile is run SyncBackPro will prompt you for the password. If the profile is being run unattended, then no prompt will be displayed and the profile run will fail. This setting is not relevant for some cloud services, e.g. Google Drive™.

 

Use encrypted (https) connection: If this option is enabled then all communication with the cloud servers is encrypted. This does not mean the files are encrypted, it means that all the communication is encrypted. Note that encrypting the connection may reduce performance. If you want to store your files encrypted you must use the encryption settings. This setting is not relevant for Google Drive™, OneDrive™, Dropbox™ or Box because an encrypted connection is always used.

 

Use my account: This button is not relevant for some cloud services, e.g. Amazon S3™. For the other cloud services (e.g. Google Drive™), clicking this button sets the profile to use your linked cloud account.

 

Authorize: This button is not relevant for some cloud services, e.g. Amazon S3™. For the other cloud services (e.g. Google Drive™) you must click this button to allow SyncBackPro to connect to your cloud service. Depending on the cloud service, you'll need to log in to your cloud service via a web browser and then enter an authorization code into SyncBackPro. It is strongly recommended that you use a linked cloud account.

 

Delete DB: This button is not relevant for some cloud services, e.g. Amazon S3. SyncBackPro keeps a local (and optionally remote) database for storing the details of the files on the cloud service. This database is used to store details that cannot be stored on the cloud service. For example, some cloud services do not allow the last modification date & time of a file to be changed. To get around such limitations SyncBackPro keeps a record of what those details are. By clicking this button SyncBackPro will delete the local and remote database. This means you will lose all the details stored in that database and so the next profile run may result in files being copied or deleted as the information to make those decisions has been deleted.

 

Bucket / Container / Library / Team Drive: On many of the enterprise cloud services, e.g. Amazon S3, all files must be stored within a bucket. Microsoft Azure has the same concept but calls it a container. Google Drive can optionally use a Team Drive (part of GSuite). SharePoint also uses Libraries, which are optional. You can have multiple buckets/containers (like you can have multiple drives on a computer) but a profile can only backup/sync with one bucket/container (other profiles can of course use other buckets/containers). An Amazon S3 bucket name must be globally unique (meaning nobody else can use the same bucket name). A Microsoft Azure container name does not need to be globally unique. Buckets/containers need to adhere to some naming restrictions (these are restrictions of the service and not of SyncBackPro):

 

Amazon S3 bucket naming rules for buckets created outside of the US Standard location are:

 

Must be globally unique, i.e. you cannot have the same bucket name as someone else

The maximum length is 63 bytes and the minimum length is 3 bytes

Must start with a lowercase letter or a number

Can only contain lowercase letters, numbers, periods (.), and dashes (-)

Cannot contain consecutive periods, e.g. a bucket cannot be called bad..name

Cannot contain a dash next to a period, e.g. a bucket cannot be called bad.-name

Must end with a lowercase letter or a number

Must not be formatted as an IP address (e.g., 192.168.5.4)

 

Amazon S3 bucket naming rules for buckets created in the default US Standard location have more relaxed bucket naming rules (see below). However, it is strongly recommended that you stick to the stricter naming rules as it gives you greater flexibility and compatibility with name servers, web sites, other utilities, etc.:

 

Must be globally unique, i.e. you cannot have the same bucket name as someone else

The maximum length is 255 bytes and the minimum length is 3 bytes

Can only contain letters (upper or lower case), numbers, periods (.), dashes (-), and underscores

 

Microsoft Azure container naming rules are:
 

The maximum length is 63 bytes and the minimum length is 3 bytes

Must start with a lowercase letter or a number

Can only contain lowercase letters, numbers, dashes (-), and underscores

Cannot contain consecutive dashes, e.g. a container cannot be called bad--name

Must not end with a dash, e.g. a container cannot be called bad-name-

 

Backblaze B2 bucket naming rules are:

 

Bucket names are globally unique. This means if another B2 user has created a bucket named, for example, myphotos, then you cannot create a bucket named myphotos

 

Each Backblaze B2 account can have a maximum of 100 buckets.

 

Bucket names must be a minimum of 6 characters long and a maximum of 50 characters long.

 

Bucket names can consist of numbers (0-9), letters (a-z) and "-" (dash). No other characters are valid, including "_" (underscore).

 

Bucket names are case insensitive, meaning that "MYPhotos" is the same as "myphotos".

 

Bucket names that start with "b2-" are reserved by Backblaze and cannot be used

 

Google Storage bucket naming rules are:

 

Bucket names must contain only lowercase letters, numbers, dashes (-), underscores (_), and dots (.). Names containing dots require verification.

Bucket names must start and end with a number or letter.

Bucket names must contain 3 to 63 characters. Names containing dots can contain up to 222 characters, but each dot-separated component can be no longer than 63 characters.

Bucket names cannot be represented as an IP address in dotted-decimal notation (for example, 192.168.5.4).

Bucket names cannot begin with the "goog" prefix.

Bucket names cannot contain "google" or close misspellings of "google".

Also, for DNS compliance and future compatibility, you should not use underscores (_) or have a period adjacent to another period or dash. For example, ".." or "-." or ".-" are not valid in DNS names.

 

For Google Drive, you must use the Google web interface to create Team Drives (which are optional and part of the GSuite service). For SharePoint you must use the relevant Microsoft tools (or web services) to create Libraries (which are optional).

 


On Amazon S3, it is possible to restrict a user from listing all the buckets. If so when you click the Refresh button then you will get an Access Denied error message. To manually add the bucket to the list right-click on the Bucket/Container list and select Add bucket from the pop-up menu. You can then manually type in the name of the bucket you want to use. Note that S3 is case sensitive so you should double-check that you have typed in the bucket name correctly.

 

 

Empty folders and Amazon S3, Azure, Backblaze B2, OpenStack and Google Storage

 

Cloud storage services Amazon S3, Microsoft Azure, Backblaze B2, OpenStack (Rackspace) and Google Storage, store files as objects. Each object has a unique name within its bucket or container. For easy reference, objects are named just like files on a local file system, e.g. \My Documents\Bank\statement.txt. The key difference is that the objects are not stored in folders, although they look like they are, and these cloud systems do not have folders.

 

For example, you could have an object called \Level1\Level2\file.txt and another one called \Level1\Level2. They have no relationship to each other. You could delete \Level1\Level2 and \Level1\Level2\file.txt would still exist. Also, if you have an object called \Level1\Level2\file.txt it does not mean there is a folder \Level1\Level2 or \Level1. Some services give the impression, via their web interface, that you can create folders, but what they actually do is create empty objects to make it appear there is a folder.

 

Because of this some options in SyncBackPro are not available when using these cloud systems. For example, you cannot create empty directories.
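The flat namespace described above can be illustrated with a simple sketch (using an in-memory dictionary to stand in for a bucket; the object names are hypothetical):

```python
# Illustration of a flat object namespace: there are no real folders,
# only object names that happen to contain path separators.
bucket = {
    r"\Level1\Level2": b"",               # an empty object that merely *looks* like a folder
    r"\Level1\Level2\file.txt": b"data",  # an unrelated object with a longer name
}

# Deleting the "folder-like" object does not affect the other object:
del bucket[r"\Level1\Level2"]
assert r"\Level1\Level2\file.txt" in bucket

# There is no object for \Level1 at all, even though file.txt's name implies it:
assert r"\Level1" not in bucket
```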

 

 

Amazon S3 Bucket Names and Locations

 

Files within a bucket can be accessed via a web browser, so you may want to keep this in mind when deciding on a bucket name (i.e. use the stricter naming rules). For example, if you created a bucket called companyname.com then you could access the files in that bucket using the URL http://companyname.com.s3.amazonaws.com/filename. By default files created in a bucket cannot be accessed via a browser because the default access policy is private. You can change this on the advanced settings page.
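For illustration, the URL pattern in the example above can be built as follows (a hypothetical helper; the bucket and file names are placeholders):

```python
def s3_virtual_hosted_url(bucket: str, key: str) -> str:
    """Build the virtual-hosted-style URL for a publicly readable S3 object,
    matching the http://bucket.s3.amazonaws.com/filename pattern above."""
    return f"http://{bucket}.s3.amazonaws.com/{key}"

# e.g. for the bucket "companyname.com" and the object "filename":
url = s3_virtual_hosted_url("companyname.com", "filename")
# url == "http://companyname.com.s3.amazonaws.com/filename"
```

Remember that the object is only reachable this way if its access policy allows public reads; the default policy is private.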

 

Another important factor is that buckets are location dependent, which means the files in a bucket are physically located in a specific place. When you create a bucket you can choose its physical location. Obviously performance is going to be affected by where you are accessing the files in the bucket from and where the bucket is located.

 

 

Amazon Glacier and Azure Archive objects

 

Amazon S3 allows for objects to be moved to Glacier. In this situation an entry for the object is kept in S3 but the actual object's contents are stored in the Glacier archival system. Glacier objects cannot be manipulated using S3. All that can be done with them is to delete them or request a temporary copy for later retrieval. The temporary copy is automatically deleted by Amazon S3 after a user-specified number of days (the original Glacier file is not deleted, just the temporary copy). As Glacier is an archival system, it typically takes 3 to 5 hours for a temporary copy of the object to be retrieved.

 

If a file is stored on Glacier, and needs to be accessed by SyncBackPro, then a request will be sent for a temporary copy of the file. An entry will be recorded in the log file to note this. When the profile is next run SyncBackPro will check to see if the temporary copy is available, and if so, it will use it as required.

 

Azure has a similar feature with its Archive storage tier, and it is handled by SyncBackPro in the same transparent way as Glacier objects, i.e. a request is made to the cloud service to restore the object from cold/archive storage.

 

 

Google Storage

 

When creating a bucket in Google Storage there are three types of buckets that can be created:

 

Nearline: A Nearline bucket is similar to Amazon's Glacier. Nearline Storage enables you to store data that is long-lived but infrequently accessed. Nearline data has the same durability and comparable availability as Standard storage but with lower storage costs. Nearline Storage is appropriate for storing data in scenarios where slightly lower availability and slightly higher latency (typically just a few seconds) is an acceptable trade-off for lowered storage costs.

 

Durable Reduced Availability: A DRA bucket is similar to Amazon's Reduced Redundancy Storage. Durable Reduced Availability Storage enables you to store data at lower cost, with the trade-off of lower availability than standard Google Cloud Storage. DRA storage is appropriate for storing data that is particularly cost-sensitive, or for which some unavailability is acceptable. DRA buckets can also be created in regions, i.e. there are a wider range of locations that the bucket can be created in.

 

Standard: A normal bucket where the data is stored as per normal.

 

 

Backblaze B2

 

First, it's important to understand that Backblaze B2 is not the same as the Backblaze backup service. You cannot access your Backblaze backup files using B2. Backblaze B2 is a cloud storage service, provided by Backblaze, that is similar to others like Amazon S3 and Google Storage. Although it is similar, there are some very important differences that must be considered before using it:

 

B2 should be thought of as an archiving system. Once a file is uploaded to B2 it cannot be modified. You can upload a replacement file, but the existing file itself cannot be modified.

Files in B2 cannot be copied, renamed or moved, which means safe copying and versioning cannot be used. However, versioning is built into B2 and enabled by default. See below.

The meta-data for a file cannot be changed. Meta-data is data about the file, e.g. its hash value, last modification date & time, etc. This means you cannot choose the action to use a source files details, for example.

 

Within the B2 web interface, you can set the life-cycle rules for files stored within a bucket. This gives you fine grained control over what files to keep and for how long. Within SyncBackPro you can also specify how many versions to keep, with the default being 32. If set to zero then no versions are kept.

 

The current version of SyncBackPro cannot restore versions from B2. To do this you must use the Backblaze B2 web interface.

 

 

Microsoft Azure $root container

 

The name $root is a special container name in Microsoft Azure. A root container serves as a default container for your storage account. A storage account may have one root container. The root container must be explicitly created and must be named $root. A blob (file) stored in the root container may be addressed without referencing the root container name, so that a blob can be addressed at the top level of the storage account hierarchy. For example, you can now reference a blob that resides in the root container in the following manner:

 

 http://myaccount.blob.core.windows.net/mywebpage.html

 

 

OpenStack and large files

 

When using OpenStack, and you are uploading large files (over 10MBytes), then the file will be uploaded in parts. This improves the upload and download speed and also allows larger files to be uploaded. If you are using a browser, or other software, to view your files on your OpenStack service (e.g. Rackspace), then the parts will have names like B30D35E0-A13B-4DEB-B9C4-88ED11D7DCBE.Part1.DO_NOT_REMOVE. Do not delete these files. If you are using versioning, then these file parts remain in their original upload folder and are not moved to the versions sub-folder ($SBV$).
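As a rough illustration, assuming a fixed 10 MB part size (the threshold mentioned above; the actual part size used by SyncBackPro is internal to the program), the number of parts for a given file size can be estimated:

```python
import math

PART_SIZE = 10 * 1024 * 1024  # assumed 10 MB part size, per the threshold above

def estimated_part_count(file_size_bytes: int) -> int:
    """Estimate how many parts a large-file upload would be split into."""
    return max(1, math.ceil(file_size_bytes / PART_SIZE))

# e.g. a 25 MB file would upload in 3 parts
parts = estimated_part_count(25 * 1024 * 1024)
```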

 

 

What format are Google Docs files downloaded in?

 

Google Docs files stored on Google Drive do not have a defined type. For example, if you create a document file on Google Docs then it can be exported in several formats. This causes a problem for SyncBackPro because the Google Docs file has no defined size (it is reported as having no size) as the size depends on what format it is exported in. When SyncBackPro downloads a Google Docs file it will store it locally using the Microsoft format, e.g. .docx for document files. If the Microsoft format is not available then the first export format available for that file is used.

 

Google Docs files stored on Google Drive do not record milliseconds. This means the last modification date & time comparison must allow for at least 1 second of difference.

 

 

Upgrading Dropbox and OneDrive Cloud API

 

When you upgrade from an earlier version of SyncBackPro, or import a profile from an earlier version of SyncBackPro, and the profile is using Dropbox, OneDrive (Personal or Business) or SharePoint, then SyncBackPro will continue using the old (legacy) interface with that cloud service. This ensures your profile continues to work. However, it is recommended that you update the profile to use the new cloud API. See the Upgrading Cloud Service section for details.

 

 

Which cloud storage service should I use?

 

When deciding on which cloud storage service to use you should base it on what is most important to you:

 

1. Price: The cloud services frequently change their pricing, often reducing it. Also, the cost may depend on what storage class you use for the object and/or bucket/container. Pricing may be based on where (regionally) you decide to store your files. Some cloud services provide free storage up to a certain level. The cloud service may also charge differently depending on if it's an upload or a download. Working out costs can be complex.

 

2. Speed: The upload and download performance largely depends on where your files are physically located. The closer they are to you, the faster they can be accessed. If possible, try the services using a typical set of files (at the same time of day you would use the service) to see any differences in performance.

 

3. Size: Microsoft Azure can store files up to 200GB in size, and for Amazon S3 it is 5TB. The limit for Box depends on the type of account you have, e.g. 250MB for personal and 5GB for Enterprise. These limits can change so please verify with the cloud service.

 

4. Security: The security and integrity of your files may be paramount. It is impossible to say which service is more secure. You can reduce security risks by telling SyncBackPro to encrypt your files.

 

5. Meta-data: Many of the professional/business cloud services support storing meta-data for files. This is data about the files, e.g. the real file size if the file is stored compressed. If a cloud service supports meta-data then a cloud database is not required. This can make things much simpler and result in fewer issues.

 

 

Large file uploads and security time-outs

 

For some cloud services, e.g. Google Drive and Box, if it takes a long time to upload a file then it may fail because the security token has expired. When SyncBackPro connects to a cloud service it is given a security token that is passed back to the cloud service with every call made. The security tokens are usually only valid for 60 minutes (although this is service specific), but they can be refreshed and are refreshed automatically by SyncBackPro. However, if it takes longer to upload a file than the security token is valid for, then the upload will always fail, as the security token will have expired by the time the upload has completed, and the cloud service only checks the security token once the upload has completed. This is a limitation of those cloud services that use expiring security tokens.
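As a back-of-the-envelope sketch, you can estimate whether a single upload is likely to outlive a security token (the 60-minute lifetime is the typical value mentioned above; the upload rate is an assumption you would measure yourself):

```python
# Rough check of whether a single upload may outlive a security token.
# The 60-minute lifetime is typical but service-specific, and the upload
# rate is an assumption, not something SyncBackPro reports.
TOKEN_LIFETIME_SECONDS = 60 * 60

def upload_may_outlive_token(file_size_bytes: int, upload_bytes_per_sec: float) -> bool:
    """Return True if the estimated upload time exceeds the token lifetime."""
    estimated_seconds = file_size_bytes / upload_bytes_per_sec
    return estimated_seconds > TOKEN_LIFETIME_SECONDS

# e.g. a 5 GB file at 1 MB/s takes roughly 85 minutes, longer than a 60-minute token
risky = upload_may_outlive_token(5 * 1024**3, 1 * 1024**2)
```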

 

 

Naming restrictions

 

All of the cloud services have restrictions on what characters can be used in a file or folder name. These restrictions may be more restrictive than Windows and may change over time. For example, at time of writing a semi-colon cannot be used in any file or folder names in Microsoft OneDrive. As these rules are likely to change, SyncBack does not check names to see if they meet the rules and instead leaves that to the cloud service itself. You may need to rename your files to meet their naming restrictions.

 

Dropbox also filters out some files and will not allow them to be uploaded (see https://www.dropbox.com/en/help/145 for more details). For example, if you attempt to upload a file called desktop.ini to Dropbox then it will return the error message "The file desktop.ini is on the ignored file list, so it was not saved.". You cannot force Dropbox to accept those files. The only option is to deselect the files, or filter them out, in your profile so that SyncBackPro doesn't even try to upload them.

 

 

Cloud database

 

Many of the professional/business cloud services support storing meta-data for files. This is data about the files, e.g. the real file size if the file is stored compressed. However, the consumer-oriented cloud services often do not support this (with the exception of Dropbox and Google Drive). SyncBack needs to store important data about the files, e.g. the last modification date and time. If the cloud service does not support meta-data then that data must be stored locally in a database. This can result in problems if that database is deleted, as that important data is then lost. For example, if you backup to Box, and store the files compressed, then SyncBack needs to store the actual uncompressed size of the original source file. This is so it can detect changes in that file. That data is stored in the database as Box does not support meta-data. If you delete the database then SyncBack no longer knows the true size of the compressed file on Box. You can rebuild the database by copying the source file properties to the destination (via the Differences window), but you will need to know which files have changed and which have not.

 

 

Egnyte Performance

 

We always recommend using a linked cloud account when using cloud services except in one specific use case: if you have multiple profiles that run in parallel (at the same time), and use the same Egnyte account, then you will get better performance by not using a linked cloud account. It is better to authorize each profile with Egnyte so each profile gets its own security access token. The reason for this is due to how Egnyte limits usage. If your Egnyte profiles are not run in parallel, i.e. not at the same time (serially, one after the other), then it is still recommended to use a linked cloud account.

 

 

 

All Content: 2BrightSparks Pte Ltd © 2003-2024