Amazon S3
The S3 backend can be used with a number of different providers. Paths are specified as `remote:bucket` (or `remote:` for the `lsd` command). You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.
Once you have made a remote (see the provider-specific sections above), you can use it like this:
See all buckets
rclone lsd remote:
Make a new bucket
rclone mkdir remote:bucket
List the contents of a bucket
rclone ls remote:bucket
Sync `/home/local/directory` to the remote bucket, deleting any excess files in the bucket.
rclone sync --interactive /home/local/directory remote:bucket
Configuration
Here is an example of making an s3 configuration for the AWS S3 provider. Most applies to the other providers as well; any differences are described below.
First run
rclone config
This will guide you through an interactive setup process.
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...
\ "s3"
[snip]
Storage> s3
Choose your S3 provider.
Choose a number from below, or type in your own value
1 / Amazon Web Services (AWS) S3
\ "AWS"
2 / Ceph Object Storage
\ "Ceph"
3 / DigitalOcean Spaces
\ "DigitalOcean"
4 / Dreamhost DreamObjects
\ "Dreamhost"
5 / IBM COS S3
\ "IBMCOS"
6 / Minio Object Storage
\ "Minio"
7 / Wasabi Object Storage
\ "Wasabi"
8 / Any other S3 compatible provider
\ "Other"
provider> 1
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
\ "false"
2 / Get AWS credentials from the environment (env vars or IAM)
\ "true"
env_auth> 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> XXX
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> YYY
Region to connect to.
Choose a number from below, or type in your own value
/ The default endpoint - a good choice if you are unsure.
1 | US Region, Northern Virginia, or Pacific Northwest.
| Leave location constraint empty.
\ "us-east-1"
/ US East (Ohio) Region
2 | Needs location constraint us-east-2.
\ "us-east-2"
/ US West (Oregon) Region
3 | Needs location constraint us-west-2.
\ "us-west-2"
/ US West (Northern California) Region
4 | Needs location constraint us-west-1.
\ "us-west-1"
/ Canada (Central) Region
5 | Needs location constraint ca-central-1.
\ "ca-central-1"
/ EU (Ireland) Region
6 | Needs location constraint EU or eu-west-1.
\ "eu-west-1"
/ EU (London) Region
7 | Needs location constraint eu-west-2.
\ "eu-west-2"
/ EU (Frankfurt) Region
8 | Needs location constraint eu-central-1.
\ "eu-central-1"
/ Asia Pacific (Singapore) Region
9 | Needs location constraint ap-southeast-1.
\ "ap-southeast-1"
/ Asia Pacific (Sydney) Region
10 | Needs location constraint ap-southeast-2.
\ "ap-southeast-2"
/ Asia Pacific (Tokyo) Region
11 | Needs location constraint ap-northeast-1.
\ "ap-northeast-1"
/ Asia Pacific (Seoul)
12 | Needs location constraint ap-northeast-2.
\ "ap-northeast-2"
/ Asia Pacific (Mumbai)
13 | Needs location constraint ap-south-1.
\ "ap-south-1"
/ Asia Pacific (Hong Kong) Region
14 | Needs location constraint ap-east-1.
\ "ap-east-1"
/ South America (Sao Paulo) Region
15 | Needs location constraint sa-east-1.
\ "sa-east-1"
region> 1
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
endpoint>
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
1 / Empty for US Region, Northern Virginia, or Pacific Northwest.
\ ""
2 / US East (Ohio) Region.
\ "us-east-2"
3 / US West (Oregon) Region.
\ "us-west-2"
4 / US West (Northern California) Region.
\ "us-west-1"
5 / Canada (Central) Region.
\ "ca-central-1"
6 / EU (Ireland) Region.
\ "eu-west-1"
7 / EU (London) Region.
\ "eu-west-2"
8 / EU Region.
\ "EU"
9 / Asia Pacific (Singapore) Region.
\ "ap-southeast-1"
10 / Asia Pacific (Sydney) Region.
\ "ap-southeast-2"
11 / Asia Pacific (Tokyo) Region.
\ "ap-northeast-1"
12 / Asia Pacific (Seoul)
\ "ap-northeast-2"
13 / Asia Pacific (Mumbai)
\ "ap-south-1"
14 / Asia Pacific (Hong Kong)
\ "ap-east-1"
15 / South America (Sao Paulo) Region.
\ "sa-east-1"
location_constraint> 1
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
1 / Owner gets FULL_CONTROL. No one else has access rights (default).
\ "private"
2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
\ "public-read"
/ Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
3 | Granting this on a bucket is generally not recommended.
\ "public-read-write"
4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
\ "authenticated-read"
/ Object owner gets FULL_CONTROL. Bucket owner gets READ access.
5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ "bucket-owner-read"
/ Both the object owner and the bucket owner get FULL_CONTROL over the object.
6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ "bucket-owner-full-control"
acl> 1
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
1 / None
\ ""
2 / AES256
\ "AES256"
server_side_encryption> 1
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
1 / Default
\ ""
2 / Standard storage class
\ "STANDARD"
3 / Reduced redundancy storage class
\ "REDUCED_REDUNDANCY"
4 / Standard Infrequent Access storage class
\ "STANDARD_IA"
5 / One Zone Infrequent Access storage class
\ "ONEZONE_IA"
6 / Glacier Flexible Retrieval storage class
\ "GLACIER"
7 / Glacier Deep Archive storage class
\ "DEEP_ARCHIVE"
8 / Intelligent-Tiering storage class
\ "INTELLIGENT_TIERING"
9 / Glacier Instant Retrieval storage class
\ "GLACIER_IR"
storage_class> 1
Remote config
Configuration complete.
Options:
- type: s3
- provider: AWS
- env_auth: false
- access_key_id: XXX
- secret_access_key: YYY
- region: us-east-1
- endpoint:
- location_constraint:
- acl: private
- server_side_encryption:
- storage_class:
Keep this "remote" remote?
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d>
Modification times and hashes
Modification times
The modification time is stored as metadata on the object as `X-Amz-Meta-Mtime`, as a floating point number of seconds since the epoch, accurate to 1 ns.
If the modification time needs to be updated rclone will attempt a server-side copy to update it, provided the object can be copied in a single part. If the object is larger than 5 GiB or is in Glacier or Glacier Deep Archive storage, the object will be uploaded rather than copied.
Note that reading this from the object takes an additional `HEAD` request, as the metadata isn't returned in object listings.
Hashes
For small objects which weren't uploaded as multipart uploads (objects sized below `--s3-upload-cutoff` if uploaded with rclone) rclone uses the `ETag:` header as an MD5 checksum.
However, for objects which were uploaded as multipart uploads or with server side encryption (SSE-AWS or SSE-C), the `ETag` header is no longer the MD5 sum of the data, so rclone adds an additional piece of metadata, `X-Amz-Meta-Md5chksum`, which is a base64 encoded MD5 hash (in the same format as is required for `Content-MD5`). You can check this value manually with `base64 -d` and `hexdump`:
echo 'VWTGdNx3LyXQDfA0e2Edxw==' | base64 -d | hexdump
or you can use `rclone check` to verify the hashes are OK.
For large objects, calculating this hash can take some time, so the addition of this hash can be disabled with `--s3-disable-checksum`. This will mean that these objects do not have an MD5 checksum.
Note that reading this from the object takes an additional `HEAD` request, as the metadata isn't returned in object listings.
Reducing costs
Avoiding HEAD requests to read the modification time
By default, rclone will use the modification time of objects stored in S3 for syncing. This is stored in object metadata, which unfortunately takes an extra HEAD request to read, and that can be expensive (in time and money).
The modification time is used by default for all operations that require checking the time a file was last updated. It allows rclone to treat the remote more like a true filesystem, but it is inefficient on S3 because it requires an extra API call to retrieve the metadata.
The extra API calls can be avoided when syncing (using `rclone sync` or `rclone copy`) in a few different ways, each with its own tradeoffs.
`--size-only`
- Only checks the size of files.
- Uses no extra transactions.
- If the file doesn't change size then rclone won't detect it has changed.
rclone sync --size-only /path/to/source s3:bucket
`--checksum`
- Checks the size and MD5 checksum of files.
- Uses no extra transactions.
- The most accurate detection of changes possible.
- Will cause the source to read an MD5 checksum which, if it is a local disk, will cause lots of disk activity.
- If the source and destination are both S3 this is the **recommended** flag to use for maximum efficiency.
rclone sync --checksum /path/to/source s3:bucket
`--update --use-server-modtime`
- Uses no extra transactions.
- The modification time becomes the time the object was uploaded.
- For many operations this is sufficient to determine whether it needs uploading.
- Using `--update` along with `--use-server-modtime` avoids the extra API call and uploads files whose local modification time is newer than the time it was last uploaded.
- Files created with timestamps in the past will be missed by the sync.
rclone sync --update --use-server-modtime /path/to/source s3:bucket
These flags can and should be used in combination with `--fast-list` - see below.
If using `rclone mount` or any command using the VFS (e.g. `rclone serve`) then you might want to consider using the VFS flag `--no-modtime`, which will stop rclone reading the modification time for every object. You could also use `--use-server-modtime` if you are happy with the modification times of the objects being the time of upload.
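For example, a mount that skips the per-object modification time reads might look like this (the remote name and mount point are placeholders):
rclone mount --no-modtime remote:bucket /mnt/bucket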
Avoiding GET requests to read directory listings
Rclone's default directory traversal is to process each directory individually. This takes one API call per directory. Using the `--fast-list` flag will read all info about the objects into memory first, using a smaller number of API calls (one per 1000 objects). See the rclone docs for more details.
rclone sync --fast-list --checksum /path/to/source s3:bucket
`--fast-list` trades off API transactions for memory use. As a rough guide rclone uses 1k of memory per object stored, so using `--fast-list` on a sync of a million objects will use roughly 1 GiB of RAM.
If you are only copying a small number of files into a big repository then using `--no-traverse` is a good idea. This finds objects directly instead of through directory listings. You can do a "top-up" sync very cheaply by using `--max-age` and `--no-traverse` to copy only recent files, e.g.
rclone copy --max-age 24h --no-traverse /path/to/source s3:bucket
You'd then do a full `rclone sync` less often.
Note that `--fast-list` isn't required in the top-up sync.
Avoiding HEAD requests after PUT
By default, rclone will HEAD every object it uploads. It does this to check the object got uploaded correctly.
You can disable this with the `--s3-no-head` option - see there for more details.
Setting this flag increases the chance of undetected upload failures.
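For example, to skip the post-upload HEAD check on a copy (paths are placeholders):
rclone copy --s3-no-head /path/to/source s3:bucket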
Increasing performance
Using server-side copy
If you are copying objects between S3 buckets in the same region, you should use server-side copy. This is much faster than downloading and re-uploading the objects, as no data is transferred.
For rclone to use server-side copy, you must use the same remote for the source and destination.
rclone copy s3:source-bucket s3:destination-bucket
When using server-side copy, the performance is limited by the rate at which rclone issues API requests to S3. See below for how to increase the number of API requests rclone makes.
Increasing the rate of API requests
You can increase the rate of API requests to S3 by increasing the parallelism using the `--transfers` and `--checkers` options.
Rclone uses very conservative defaults for these settings, as not all providers support high rates of requests. Depending on your provider, you can increase the number of transfers and checkers significantly.
For example, with AWS S3 you can increase the number of checkers to a value like 200. If you are doing a server-side copy, you can also increase the number of transfers to 200.
rclone sync --transfers 200 --checkers 200 --checksum s3:source-bucket s3:destination-bucket
You will need to experiment with these values to find the optimal settings for your setup.
Data integrity
Rclone does its best to verify every part of an upload or download to an S3 provider using various hashes.
Every HTTP transaction to/from the provider has an `X-Amz-Content-Sha256` or a `Content-Md5` header to guard against corruption of the HTTP body. The HTTP headers are protected by the signature passed in the `Authorization` header.
All communications with the provider are over https for encryption and additional error protection.
Single part uploads
- Rclone uploads single part objects with a `Content-Md5` computed from the MD5 hash read from the source. The provider checks this is correct on receipt of the data.
- Rclone then does a HEAD request (disable with `--s3-no-head`) to read back the `ETag`, which is the MD5 of the file, and checks it against what was sent.
Note that if the source does not have an MD5 then single part uploads will not have hash protection. In this case it is recommended to use `--s3-upload-cutoff 0` so that all files are uploaded as multipart uploads.
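For example, to force every file to be uploaded as a multipart upload (paths are placeholders):
rclone copy --s3-upload-cutoff 0 /path/to/source s3:bucket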
Multipart uploads
For files above `--s3-upload-cutoff` rclone splits the file into multiple parts for upload.
- Each part is protected by both `X-Amz-Content-Sha256` and `Content-Md5`.
When rclone has finished uploading all the parts, it completes the upload by sending:
- the MD5 hash of each part
- the number of parts
- all protected by `X-Amz-Content-Sha256`.
The provider checks all the MD5s it has received against the ones rclone sent, and returns OK if they agree.
Rclone then does a HEAD request (disable with `--s3-no-head`) and checks the ETag is as expected (in this case it should be the MD5 sum of the MD5 sums of all the parts, with the number of parts appended).
If the source has an MD5 sum then rclone attaches `X-Amz-Meta-Md5chksum`, as the `ETag` of a multipart upload can't easily be compared with the file: the chunk size must be known in order to calculate it.
Downloads
Rclone checks the MD5 hash of the data downloaded against either the ETag or the `X-Amz-Meta-Md5chksum` metadata (if present), which rclone uploads with multipart uploads.
Further checks
At each stage hashes of everything are sent and checked by rclone and by the provider. For extra security rclone deliberately does a HEAD request on each object after upload to check it arrived safely. (You can disable this with `--s3-no-head`.)
If you require further assurance that your data is intact you can use `rclone check` to compare the local and remote hashes.
And if you are feeling ultra paranoid use `rclone check --download`, which will download the files and compare them with the local copies. (Note that this doesn't use disk to do this - it streams them in memory.)
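A minimal sketch of both checks (paths are placeholders):
# compare hashes between a local tree and a bucket
rclone check /path/to/local remote:bucket
# download and compare actual contents, streaming in memory
rclone check --download /path/to/local remote:bucket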
Versioning
When bucket versioning is enabled (this can be done with rclone's `rclone backend versioning` command), rclone creates a new version whenever it uploads a new version of a file. Likewise when you delete a file, the old version will be marked hidden and will still be available.
Old versions of files, where available, are visible using the `--s3-versions` flag.
It is also possible to view a bucket as it was at a certain point in time, using the `--s3-version-at` flag. This will show the file versions as they were at that time, showing files that have been deleted afterwards, and hiding files that were created since.
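For example, to list a bucket as it was 24 hours ago (the bucket name is a placeholder):
rclone --s3-version-at 24h ls s3:bucket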
If you wish to remove all the old versions, you can use the `rclone backend cleanup-hidden remote:bucket` command, which will delete all the old hidden versions of files, leaving the current ones intact. You can also supply a path, in which case only old versions under that path will be deleted, e.g. `rclone backend cleanup-hidden remote:bucket/path/to/stuff`.
When you `purge` a bucket, the current and the old versions will be deleted, and then the bucket itself will be deleted.
However `delete` will cause the current versions of the files to become hidden old versions.
Here is a session showing the listing and retrieval of an old version, followed by a cleanup of the old versions.
Show the current version and all the versions with the `--s3-versions` flag.
$ rclone -q ls s3:cleanup-test
9 one.txt
$ rclone -q --s3-versions ls s3:cleanup-test
9 one.txt
8 one-v2016-07-04-141032-000.txt
16 one-v2016-07-04-141003-000.txt
15 one-v2016-07-02-155621-000.txt
Retrieve an old version
$ rclone -q --s3-versions copy s3:cleanup-test/one-v2016-07-04-141003-000.txt /tmp
$ ls -l /tmp/one-v2016-07-04-141003-000.txt
-rw-rw-r-- 1 ncw ncw 16 Jul 2 17:46 /tmp/one-v2016-07-04-141003-000.txt
Clean up all the old versions and show that they've gone.
$ rclone -q backend cleanup-hidden s3:cleanup-test
$ rclone -q ls s3:cleanup-test
9 one.txt
$ rclone -q --s3-versions ls s3:cleanup-test
9 one.txt
Versions naming caveat
When using the `--s3-versions` flag, rclone relies on the file name to work out whether objects are versions or not. Version names are created by inserting a timestamp between the file name and its extension.
9 file.txt
8 file-v2023-07-17-161032-000.txt
16 file-v2023-06-15-141003-000.txt
If there are real files present with the same names as versions, then the behaviour of `--s3-versions` can be unpredictable.
Cleanup
If you run `rclone cleanup s3:bucket` then it will remove all pending multipart uploads older than 24 hours. You can use the `--interactive`/`-i` or `--dry-run` flag to see exactly what it will do. If you want more control over the expiry date then run `rclone backend cleanup s3:bucket -o max-age=1h` to remove all uploads older than one hour. You can use `rclone backend list-multipart-uploads s3:bucket` to see the pending multipart uploads.
Restricted filename characters
S3 allows any valid UTF-8 string as a key.
Invalid UTF-8 bytes will be replaced, as they can't be used in XML.
The following characters are replaced since they cause problems when dealing with the REST API:
Character | Value | Replacement |
---|---|---|
NUL | 0x00 | ␀ |
/ | 0x2F | ／ |
The following file names are also encoded, as they don't seem to work with the SDK properly:
File name | Replacement |
---|---|
. | ． |
.. | ．． |
Multipart uploads
rclone supports multipart uploads with S3, which means that it can upload files bigger than 5 GiB.
Note that files uploaded both with multipart upload and through crypt remotes do not have MD5 checksums.
rclone switches from single part uploads to multipart uploads at the point specified by `--s3-upload-cutoff`. This can be a maximum of 5 GiB and a minimum of 0 (i.e. always upload multipart files).
The chunk sizes used in the multipart upload are specified by `--s3-chunk-size`, and the number of chunks uploaded concurrently is specified by `--s3-upload-concurrency`.
Multipart uploads will use `--transfers` * `--s3-upload-concurrency` * `--s3-chunk-size` extra memory. Single part uploads do not use extra memory.
Single part transfers can be faster or slower than multipart transfers depending on your latency to S3 - the more latency, the more likely single part transfers will be faster.
Increasing `--s3-upload-concurrency` will increase throughput (8 would be a sensible value), and increasing `--s3-chunk-size` also increases throughput (16M would be sensible). Increasing either of these will use more memory. The default values are high enough to gain most of the possible performance without using too much memory.
Buckets and regions
With Amazon S3 you can list buckets (`rclone lsd`) using any region, but you can only access the content of a bucket from the region it was created in. If you attempt to access a bucket from the wrong region you will get the error `incorrect region, the bucket is not in 'XXX' region`.
Authentication
There are a number of ways to supply rclone with a set of AWS credentials, with and without using the environment.
The different authentication methods are tried in this order:
- Directly in the rclone configuration file (`env_auth = false` in the config file):
  - `access_key_id` and `secret_access_key` are required.
  - `session_token` can be optionally set when using AWS STS.
- Runtime configuration (`env_auth = true` in the config file):
  - Export the following environment variables before running rclone:
    - Access Key ID: `AWS_ACCESS_KEY_ID` or `AWS_ACCESS_KEY`
    - Secret Access Key: `AWS_SECRET_ACCESS_KEY` or `AWS_SECRET_KEY`
    - Session Token: `AWS_SESSION_TOKEN` (optional)
  - Or, use a named profile:
    - Profile files are standard files used by the AWS CLI tools.
    - By default it will use the profile in your home directory (e.g. `~/.aws/credentials` on unix based systems) and the "default" profile. To change these, set the following environment variables or config keys:
      - `AWS_SHARED_CREDENTIALS_FILE` to control which file, or the `shared_credentials_file` config key.
      - `AWS_PROFILE` to control which profile to use, or the `profile` config key.
  - Or, run rclone in an ECS task with an IAM role (AWS only).
  - Or, run rclone on an EC2 instance with an IAM role (AWS only).
  - Or, run rclone in an EKS pod with an IAM role that is associated with a service account (AWS only).
  - Or, use process credentials to read the config from an external program.
With `env_auth = true`, rclone (which uses the Go v2 SDK) should support all authentication methods that the `aws` CLI tool and the other AWS SDKs support.
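A minimal runtime-credentials sketch, assuming the remote is configured with `env_auth = true` (the key values and remote name are placeholders):
export AWS_ACCESS_KEY_ID=XXX
export AWS_SECRET_ACCESS_KEY=YYY
rclone lsd remote: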
If none of these options end up providing rclone with AWS credentials, then the S3 interaction will be unauthenticated (see the anonymous access section for more info).
S3 permissions
When using the `sync` subcommand of rclone, the following minimum permissions are required on the bucket being written to:
- `ListBucket`
- `DeleteObject`
- `GetObject`
- `PutObject`
- `PutObjectACL`
- `CreateBucket` (unless using `s3-no-check-bucket`)
When using the `lsd` subcommand, the `ListAllMyBuckets` permission is required.
Example policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::USER_SID:user/USER_NAME"
},
"Action": [
"s3:ListBucket",
"s3:DeleteObject",
"s3:GetObject",
"s3:PutObject",
"s3:PutObjectAcl"
],
"Resource": [
"arn:aws:s3:::BUCKET_NAME/*",
"arn:aws:s3:::BUCKET_NAME"
]
},
{
"Effect": "Allow",
"Action": "s3:ListAllMyBuckets",
"Resource": "arn:aws:s3:::*"
}
]
}
Notes on the above policy:
- This is a policy that can be used when creating a bucket. It assumes that `USER_NAME` has been created.
- The `Resource` entry must include both resource ARNs, as one implies the bucket and the other implies the bucket's objects.
- When using `s3-no-check-bucket` and the bucket already exists, the `"arn:aws:s3:::BUCKET_NAME"` entry doesn't have to be included.
For reference, here's an Ansible script that will generate one or more buckets that will work with `rclone sync`.
Key Management System (KMS)
If you are using server-side encryption with KMS then you must make sure rclone is configured with `server_side_encryption = aws:kms`, otherwise you will find you can't transfer small objects - these will create checksum errors.
Glacier and Glacier Deep Archive
You can upload objects using the GLACIER storage class, or transition them to GLACIER using a lifecycle policy. The bucket can still be synced or copied into normally, but if rclone tries to access data in the GLACIER storage class you will see an error like the one below.
2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file
In this case you need to restore the object(s) in question before using rclone.
Note that rclone only speaks the S3 API, not the Glacier Vault API, so rclone cannot directly access Glacier Vaults.
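For example, a restore of a single archived object can be kicked off with the backend command described later in this document (the bucket and path are placeholders):
rclone backend restore s3:bucket/path/to/file -o priority=Standard -o lifetime=1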
Object-lock enabled S3 bucket
According to AWS's documentation on S3 Object Lock:
If you configure a default retention period on a bucket, requests to upload objects in such a bucket must include the Content-MD5 header.
As mentioned in the Hashes section, small files that are not uploaded as multipart use a different tag, causing the upload to fail. A simple solution is to set `--s3-upload-cutoff 0` and force all the files to be uploaded as multipart.
Standard options
Here are the standard options specific to s3 (Amazon S3 Compliant Storage Providers).
--s3-disable-http2
Disable usage of http2 for S3 backends.
Properties:
- Config: disable_http2
- Env Var: RCLONE_S3_DISABLE_HTTP2
- Type: bool
- Default: false
--s3-download-url
Custom endpoint for downloads.
This is usually set to a CloudFront CDN URL, as AWS S3 offers cheaper egress for data downloaded through the CloudFront network.
Properties:
- Config: download_url
- Env Var: RCLONE_S3_DOWNLOAD_URL
- Type: string
- Required: false
--s3-directory-markers
Upload an empty object with a trailing slash when a new directory is created.
Empty folders are unsupported for bucket based remotes; this option creates an empty object ending with "/" to persist the folder.
Properties:
- Config: directory_markers
- Env Var: RCLONE_S3_DIRECTORY_MARKERS
- Type: bool
- Default: false
--s3-use-multipart-etag
Whether to use the ETag in multipart uploads for verification.
This should be true, false, or left unset to use the default for the provider.
Properties:
- Config: use_multipart_etag
- Env Var: RCLONE_S3_USE_MULTIPART_ETAG
- Type: Tristate
- Default: unset
--s3-use-unsigned-payload
Whether to use an unsigned payload in PutObject.
Rclone has to avoid the AWS SDK seeking the body when calling PutObject. The AWS provider can add checksums in the trailer to avoid seeking, but other providers can't.
This should be true, false, or left unset to use the default for the provider.
Properties:
- Config: use_unsigned_payload
- Env Var: RCLONE_S3_USE_UNSIGNED_PAYLOAD
- Type: Tristate
- Default: unset
--s3-use-presigned-request
Whether to use a presigned request or PutObject for single part uploads.
If this is false, rclone will use PutObject from the AWS SDK to upload an object.
Versions of rclone < 1.59 use presigned requests to upload a single part object, and setting this flag to true will re-enable that functionality. This shouldn't be necessary except in exceptional circumstances or for testing.
Properties:
- Config: use_presigned_request
- Env Var: RCLONE_S3_USE_PRESIGNED_REQUEST
- Type: bool
- Default: false
--s3-versions
Include old versions in directory listings.
Properties:
- Config: versions
- Env Var: RCLONE_S3_VERSIONS
- Type: bool
- Default: false
--s3-version-at
Show file versions as they were at the specified time.
The parameter should be a date ("2006-01-02"), datetime ("2006-01-02 15:04:05") or a duration for that long ago (e.g. "100d" or "1h").
Note that when using this, no file write operations are permitted, so you can't upload or delete files.
See the time option docs for valid formats.
Properties:
- Config: version_at
- Env Var: RCLONE_S3_VERSION_AT
- Type: Time
- Default: off
--s3-version-deleted
Show deleted file markers when using versions.
This shows deleted file markers in listings when using `--s3-versions`. These will appear as 0 size files. The only operation which can be performed on them is deletion.
Deleting a delete marker will reveal the previous version of the file.
Deleted files will always show with a timestamp.
Properties:
- Config: version_deleted
- Env Var: RCLONE_S3_VERSION_DELETED
- Type: bool
- Default: false
--s3-decompress
If set, this will decompress gzip encoded objects.
It is possible to upload objects to S3 with "Content-Encoding: gzip" set. Normally rclone will download these files as compressed objects.
If this flag is set then rclone will decompress these files with "Content-Encoding: gzip" as they are received. This means that rclone can't check the size and hash, but the file contents will be decompressed.
Properties:
- Config: decompress
- Env Var: RCLONE_S3_DECOMPRESS
- Type: bool
- Default: false
--s3-might-gzip
Set this if the backend might gzip objects.
Normally providers will not alter objects when they are downloaded. If an object was not uploaded with `Content-Encoding: gzip` then it won't be set on download.
However some providers may gzip objects even if they weren't uploaded with `Content-Encoding: gzip` (e.g. Cloudflare).
A symptom of this would be receiving errors like:
ERROR corrupted on transfer: sizes differ NNN vs MMM
If you set this flag and rclone downloads an object with Content-Encoding: gzip set and chunked transfer encoding, then rclone will decompress the object on the fly.
If this is set to unset (the default) then rclone will choose what to apply according to the provider setting, but you can override rclone's choice here.
Properties:
- Config: might_gzip
- Env Var: RCLONE_S3_MIGHT_GZIP
- Type: Tristate
- Default: unset
--s3-use-accept-encoding-gzip
Whether to send the `Accept-Encoding: gzip` header.
By default, rclone will append `Accept-Encoding: gzip` to the request to download compressed objects whenever possible.
However some providers such as Google Cloud Storage may alter the HTTP headers, breaking the signature of the request.
A symptom of this would be receiving errors like:
SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided.
In this case, you might want to try disabling this option.
Properties:
- Config: use_accept_encoding_gzip
- Env Var: RCLONE_S3_USE_ACCEPT_ENCODING_GZIP
- Type: Tristate
- Default: unset
--s3-no-system-metadata
Suppress setting and reading of system metadata.
Properties:
- Config: no_system_metadata
- Env Var: RCLONE_S3_NO_SYSTEM_METADATA
- Type: bool
- Default: false
--s3-sts-endpoint
Endpoint for STS (Security Token Service) (deprecated).
Leave blank if using AWS to use the default endpoint for the region.
Properties:
- Config: sts_endpoint
- Env Var: RCLONE_S3_STS_ENDPOINT
- Provider: AWS
- Type: string
- Required: false
--s3-use-already-exists
Set if rclone should report BucketAlreadyExists errors on bucket creation.
At some point during the evolution of the s3 protocol, AWS started returning an `AlreadyOwnedByYou` error when attempting to create a bucket the user already owned, rather than a `BucketAlreadyExists` error.
Unfortunately exactly what has been implemented by s3 clones is a little inconsistent: some return `AlreadyOwnedByYou`, some return `BucketAlreadyExists` and some return no error at all.
This is important to rclone because on many operations (unless `--s3-no-check-bucket` is used) rclone ensures the bucket exists by creating it.
If rclone knows the provider can return `AlreadyOwnedByYou` or returns no error, then it can report `BucketAlreadyExists` errors when the user attempts to create a bucket not owned by them. Otherwise rclone ignores the `BucketAlreadyExists` error, which can lead to confusion.
This should be automatically set correctly for all providers rclone knows about - please make a bug report if not.
Properties:
- Config: use_already_exists
- Env Var: RCLONE_S3_USE_ALREADY_EXISTS
- Type: Tristate
- Default: unset
--s3-use-multipart-uploads
Set if rclone should use multipart uploads.
You can change this if you want to disable the use of multipart uploads. This shouldn't be necessary in normal operation.
This should be automatically set correctly for all providers rclone knows about - please make a bug report if not.
Properties:
- Config: use_multipart_uploads
- Env Var: RCLONE_S3_USE_MULTIPART_UPLOADS
- Type: Tristate
- Default: unset
--s3-directory-bucket
Set to use AWS Directory Buckets.
If you are using an AWS Directory Bucket then set this flag.
This will ensure no `Content-Md5` headers are sent and ensure `ETag` headers are not interpreted as MD5 checksums. `X-Amz-Meta-Md5chksum` will be set on all objects whether single or multipart uploaded.
This also sets `no_check_bucket = true`.
Note that Directory Buckets do not support:
- Versioning
- `Content-Encoding: gzip`
Rclone limitations with Directory Buckets:
- rclone does not support creating directory buckets with `rclone mkdir`
- nor does it support removing them with `rclone rmdir` yet
- directory buckets do not appear when doing `rclone lsf` at the top level
- rclone can't remove auto-created directories yet. In theory this should work with `directory_markers = true` but it doesn't.
- directories don't seem to appear in recursive (ListR) listings.
Properties:
- Config: directory_bucket
- Env Var: RCLONE_S3_DIRECTORY_BUCKET
- Provider: AWS
- Type: bool
- Default: false
--s3-sdk-log-mode
Set to debug the SDK.
This can be set to a comma separated list of the following functions:
Signing
Retries
Request
RequestWithBody
Response
ResponseWithBody
DeprecatedUsage
RequestEventMessage
ResponseEventMessage
Use `Off` to disable and `All` to set all log levels. You will need to use `-vv` to see the debug level logs.
Properties:
- Config: sdk_log_mode
- Env Var: RCLONE_S3_SDK_LOG_MODE
- Type: Bits
- Default: Off
--s3-description
Description of the remote.
Properties:
- Config: description
- Env Var: RCLONE_S3_DESCRIPTION
- Type: string
- Required: false
Metadata
User metadata is stored as x-amz-meta- keys. S3 metadata keys are case insensitive and are always returned in lower case.
Here are the possible system metadata items for the s3 backend.
Name | Help | Type | Example | Read Only |
---|---|---|---|---|
btime | Time of file birth (creation), read from the Last-Modified header | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | Y |
cache-control | Cache-Control header | string | no-cache | N |
content-disposition | Content-Disposition header | string | inline | N |
content-encoding | Content-Encoding header | string | gzip | N |
content-language | Content-Language header | string | en-US | N |
content-type | Content-Type header | string | text/plain | N |
mtime | Time of last modification, read from rclone metadata | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | N |
tier | Tier of the object | string | GLACIER | Y |
See the metadata docs for more info.
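To inspect these items on individual objects, `rclone lsjson` with its `--metadata` flag can be used; a minimal sketch with placeholder names:
rclone lsjson --metadata remote:bucket/file.txt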
Backend commands
Here are the commands specific to the s3 backend.
Run them with:
rclone backend COMMAND remote:
The help below will explain what arguments each command takes.
See the backend command docs for more info on how to pass options and arguments.
These can be run on a running backend using the rc command backend/command.
restore
Restore objects from GLACIER or INTELLIGENT-TIERING archive tier
rclone backend restore remote: [options] [<arguments>+]
This command can be used to restore one or more objects from GLACIER to normal storage, or from the INTELLIGENT-TIERING Archive Access / Deep Archive Access tier to the Frequent Access tier.
Usage examples:
rclone backend restore s3:bucket/path/to/object -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY
This command also obeys the filters. Test first with the `--interactive`/`-i` or `--dry-run` flags:
rclone --interactive backend restore --include "*.txt" s3:bucket/path -o priority=Standard -o lifetime=1
All the objects shown will be marked for restore, then:
rclone backend restore --include "*.txt" s3:bucket/path -o priority=Standard -o lifetime=1
It returns a list of status dictionaries with `Remote` and `Status` keys. The `Status` will be `OK` if it was successful, or an error message if not.
[
{
"Status": "OK",
"Remote": "test.txt"
},
{
"Status": "OK",
"Remote": "test/file4.txt"
}
]
Options:
- "description": The optional description for the job.
- "lifetime": Lifetime of the active copy in days; ignored for INTELLIGENT-TIERING storage.
- "priority": Priority of restore: Standard|Expedited|Bulk
restore-status
Show the restore status for objects being restored from GLACIER or INTELLIGENT-TIERING storage
rclone backend restore-status remote: [options] [<arguments>+]
This command can be used to show the status of objects being restored from GLACIER to normal storage, or from the INTELLIGENT-TIERING Archive Access / Deep Archive Access tier to the Frequent Access tier.
Usage examples:
rclone backend restore-status s3:bucket/path/to/object
rclone backend restore-status s3:bucket/path/to/directory
rclone backend restore-status -o all s3:bucket/path/to/directory
This command does not obey the filters.
It returns a list of status dictionaries.
[
{
"Remote": "file.txt",
"VersionID": null,
"RestoreStatus": {
"IsRestoreInProgress": true,
"RestoreExpiryDate": "2023-09-06T12:29:19+01:00"
},
"StorageClass": "GLACIER"
},
{
"Remote": "test.pdf",
"VersionID": null,
"RestoreStatus": {
"IsRestoreInProgress": false,
"RestoreExpiryDate": "2023-09-06T12:29:19+01:00"
},
"StorageClass": "DEEP_ARCHIVE"
},
{
"Remote": "test.gz",
"VersionID": null,
"RestoreStatus": {
"IsRestoreInProgress": true,
"RestoreExpiryDate": "null"
},
"StorageClass": "INTELLIGENT_TIERING"
}
]
Options:
- "all": if set then show all objects, not just ones with restore status.
list-multipart-uploads
List the unfinished multipart uploads
rclone backend list-multipart-uploads remote: [options] [<arguments>+]
This command lists the unfinished multipart uploads in JSON format.
rclone backend list-multipart s3:bucket/path/to/object
It returns a dictionary of buckets with values as lists of unfinished multipart uploads.
You can call it with no bucket, in which case it lists all buckets, with a bucket, or with a bucket and path.
{
"rclone": [
{
"Initiated": "2020-06-26T14:20:36Z",
"Initiator": {
"DisplayName": "XXX",
"ID": "arn:aws:iam::XXX:user/XXX"
},
"Key": "KEY",
"Owner": {
"DisplayName": null,
"ID": "XXX"
},
"StorageClass": "STANDARD",
"UploadId": "XXX"
}
],
"rclone-1000files": [],
"rclone-dst": []
}
cleanup
Remove unfinished multipart uploads.
rclone backend cleanup remote: [options] [<arguments>+]
This command removes unfinished multipart uploads of age greater than max-age, which defaults to 24 hours.
Note that you can use `--interactive`/`-i` or `--dry-run` with this command to see what it would do.
rclone backend cleanup s3:bucket/path/to/object
rclone backend cleanup -o max-age=7w s3:bucket/path/to/object
Durations are parsed as in the rest of rclone: 2h (2 hours), 7d (7 days), 7w (7 weeks), etc.
Options:
- "max-age": Max age of uploads to delete
cleanup-hidden
Remove old versions of files.
rclone backend cleanup-hidden remote: [options] [<arguments>+]
This command removes any old hidden versions of files on a versions-enabled bucket.
Note that you can use `--interactive`/`-i` or `--dry-run` with this command to see what it would do.
rclone backend cleanup-hidden s3:bucket/path/to/dir
versioning
Set/get versioning support for a bucket.
rclone backend versioning remote: [options] [<arguments>+]
This command sets versioning support if a parameter is passed, and then returns the current versioning status for the bucket supplied.
rclone backend versioning s3:bucket # read status only
rclone backend versioning s3:bucket Enabled
rclone backend versioning s3:bucket Suspended
It may return "Enabled", "Suspended" or "Unversioned". Note that once versioning has been enabled, the status can't be set back to "Unversioned".
set
Set command for updating the config parameters.
rclone backend set remote: [options] [<arguments>+]
This set command can be used to update the config parameters for a running s3 backend.
Usage examples:
rclone backend set s3: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
rclone rc backend/command command=set fs=s3: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
rclone rc backend/command command=set fs=s3: -o session_token=X -o access_key_id=X -o secret_access_key=X
The option keys are named as they are in the config file.
This rebuilds the connection to the s3 backend when it is called with the new parameters. Only new parameters need be passed, as the values will default to those currently in use.
It doesn't return anything.
Anonymous access to public buckets
If you want to use rclone to access a public bucket, configure with a blank `access_key_id` and `secret_access_key`. Your config should end up looking like this:
[anons3]
type = s3
provider = AWS
Then use it as normal with the name of the public bucket, e.g.
rclone lsd anons3:1000genomes
You will be able to list and copy data but not upload it.
You can also do this entirely on the command line:
rclone lsd :s3,provider=AWS:1000genomes
Providers
AWS S3
This is the provider used as the main example and described in the configuration section above.
AWS Directory Buckets
From rclone v1.69 Directory Buckets are supported.
You will need to set the `directory_buckets = true` config parameter or use `--s3-directory-buckets`.
Note that rclone cannot yet:
- Create directory buckets
- List directory buckets
See the --s3-directory-buckets flag for more info.
AWS Snowball Edge
AWS Snowball is a hardware appliance used for transferring bulk data back to AWS. Its main software interface is S3 object storage.
To use rclone with an AWS Snowball Edge device, configure as standard for an 'S3 Compatible Service'.
If using rclone pre v1.59 be sure to set `upload_cutoff = 0`, otherwise you will run into authentication header issues as the Snowball device does not support query parameter based authentication.
With rclone v1.59 or later, setting `upload_cutoff` should not be necessary.
eg.
[snowball]
type = s3
provider = Other
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
endpoint = http://[IP of Snowball]:8080
upload_cutoff = 0
Ceph
Ceph is an open-source, unified, distributed storage system designed for excellent performance, reliability and scalability. It has an S3 compatible object storage interface.
To use rclone with Ceph, configure as above but leave the region blank and set the endpoint. You should end up with something like this in your config:
[ceph]
type = s3
provider = Ceph
env_auth = false
access_key_id = XXX
secret_access_key = YYY
region =
endpoint = https://ceph.endpoint.example.com
location_constraint =
acl =
server_side_encryption =
storage_class =
If you are using an older version of CEPH (e.g. 10.2.x Jewel) and a version of rclone before v1.59, then you may need to supply the parameter `--s3-upload-cutoff 0` or put this in the config file as `upload_cutoff 0` to work around a bug which causes uploading of small files to fail.
Also note that Ceph sometimes puts `/` in the passwords it gives to users. If you read the secret access key using the command line tools you will get a JSON blob with the `/` escaped as `\/`. Make sure you only write `/` in the secret access key.
E.g. the dump from Ceph looks something like this (irrelevant keys removed):
{
"user_id": "xxx",
"display_name": "xxxx",
"keys": [
{
"user": "xxx",
"access_key": "xxxxxx",
"secret_key": "xxxxxx\/xxxx"
}
],
}
Because this is a JSON dump, it encodes the `/` as `\/`, so if you use the secret key as `xxxxxx/xxxx` it will work fine.
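If you have `jq` available it will unescape the string for you when printing raw output; a sketch assuming the JSON above came from `radosgw-admin` (the user id is a placeholder):
radosgw-admin user info --uid=xxx | jq -r '.keys[0].secret_key'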
Cloudflare R2
Cloudflare R2 Storage allows developers to store large amounts of unstructured data without the costly egress bandwidth fees associated with typical cloud storage services.
Here is an example of making a Cloudflare R2 configuration. First run:
rclone config
This will guide you through an interactive setup process.
Note that all buckets are private, and all are stored in the same "auto" region. It is necessary to use Cloudflare Workers to share the content of a bucket publicly.
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> r2
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
...
XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, Magalu, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS and Wasabi
\ (s3)
...
Storage> s3
Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
...
XX / Cloudflare R2 Storage
\ (Cloudflare)
...
provider> Cloudflare
Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
1 / Enter AWS credentials in the next step.
\ (false)
2 / Get AWS credentials from the environment (env vars or IAM).
\ (true)
env_auth> 1
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> ACCESS_KEY
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> SECRET_ACCESS_KEY
Option region.
Region to connect to.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / R2 buckets are automatically distributed across Cloudflare's data centers for low latency.
\ (auto)
region> 1
Option endpoint.
Endpoint for S3 API.
Required when using an S3 clone.
Enter a value. Press Enter to leave empty.
endpoint> https://ACCOUNT_ID.r2.cloudflarestorage.com
Edit advanced config?
y) Yes
n) No (default)
y/n> n
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
This will leave your config looking something like:
[r2]
type = s3
provider = Cloudflare
access_key_id = ACCESS_KEY
secret_access_key = SECRET_ACCESS_KEY
region = auto
endpoint = https://ACCOUNT_ID.r2.cloudflarestorage.com
acl = private
Now run `rclone lsf r2:` to see your buckets, and `rclone lsf r2:bucket` to look within a bucket.
For R2 tokens with the "Object Read & Write" permission, you may also need to add `no_check_bucket = true` for object uploads to work correctly.
Note that Cloudflare decompresses files uploaded with `Content-Encoding: gzip` by default, which is a deviation from what AWS does. If this is causing a problem then upload the files with `--header-upload "Cache-Control: no-transform"`.
A consequence of this is that `Content-Encoding: gzip` will never appear in the metadata on Cloudflare.
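For example, to upload with that header (the local path and bucket name are placeholders):
rclone copy --header-upload "Cache-Control: no-transform" /path/to/files r2:bucket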
Dreamhost
Dreamhost DreamObjects is an object storage system based on CEPH.
To use rclone with Dreamhost, configure as above but leave the region blank and set the endpoint. You should end up with something like this in your config:
[dreamobjects]
type = s3
provider = DreamHost
env_auth = false
access_key_id = your_access_key
secret_access_key = your_secret_key
region =
endpoint = objects-us-west-1.dream.io
location_constraint =
acl = private
server_side_encryption =
storage_class =
Google Cloud Storage
Google Cloud Storage is an S3-interoperable object storage service from Google Cloud Platform.
To connect to Google Cloud Storage you will need an access key and a secret key. These can be retrieved by creating an HMAC key.
[gs]
type = s3
provider = GCS
access_key_id = your_access_key
secret_access_key = your_secret_key
endpoint = https://storage.googleapis.com
Note that `--s3-versions` does not work with GCS when it needs to do directory paging. Rclone returns the error:
s3 protocol error: received versions listing with IsTruncated set with no NextKeyMarker
This is Google bug #312292516.
DigitalOcean Spaces
Spaces is an S3-interoperable object storage service from cloud provider DigitalOcean.
To connect to DigitalOcean Spaces you will need an access key and a secret key. These can be retrieved on the "Applications & API" page of the DigitalOcean control panel. They will be needed when prompted by `rclone config` for your `access_key_id` and `secret_access_key`.
When prompted for a `region` or `location_constraint`, press enter to use the default value. The region must be included in the `endpoint` setting (e.g. `nyc3.digitaloceanspaces.com`). The default values can be used for other settings.
Going through the whole process of creating a new remote by running `rclone config`, each prompt should be answered as shown below:
Storage> s3
env_auth> 1
access_key_id> YOUR_ACCESS_KEY
secret_access_key> YOUR_SECRET_KEY
region>
endpoint> nyc3.digitaloceanspaces.com
location_constraint>
acl>
storage_class>
The resulting configuration file should look like:
[spaces]
type = s3
provider = DigitalOcean
env_auth = false
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
region =
endpoint = nyc3.digitaloceanspaces.com
location_constraint =
acl =
server_side_encryption =
storage_class =
Once configured, you can create a new Space and begin copying files. For example:
rclone mkdir spaces:my-new-space
rclone copy /path/to/files spaces:my-new-space
Huawei OBS
Object Storage Service (OBS) provides stable, secure, efficient and easy-to-use cloud storage that lets you store virtually any volume of unstructured data in any format and access it from anywhere.
OBS provides an S3 interface; you can copy and modify the following configuration and add it to your rclone configuration file.
[obs]
type = s3
provider = HuaweiOBS
access_key_id = your-access-key-id
secret_access_key = your-secret-access-key
region = af-south-1
endpoint = obs.af-south-1.myhuaweicloud.com
acl = private
Or you can also configure via the interactive command line:
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> obs
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...
\ (s3)
[snip]
Storage> s3
Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
[snip]
9 / Huawei Object Storage Service
\ (HuaweiOBS)
[snip]
provider> 9
Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
1 / Enter AWS credentials in the next step.
\ (false)
2 / Get AWS credentials from the environment (env vars or IAM).
\ (true)
env_auth> 1
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> your-access-key-id
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> your-secret-access-key
Option region.
Region to connect to.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / AF-Johannesburg
\ (af-south-1)
2 / AP-Bangkok
\ (ap-southeast-2)
[snip]
region> 1
Option endpoint.
Endpoint for OBS API.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / AF-Johannesburg
\ (obs.af-south-1.myhuaweicloud.com)
2 / AP-Bangkok
\ (obs.ap-southeast-2.myhuaweicloud.com)
[snip]
endpoint> 1
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
/ Owner gets FULL_CONTROL.
1 | No one else has access rights (default).
\ (private)
[snip]
acl> 1
Edit advanced config?
y) Yes
n) No (default)
y/n>
--------------------
[obs]
type = s3
provider = HuaweiOBS
access_key_id = your-access-key-id
secret_access_key = your-secret-access-key
region = af-south-1
endpoint = obs.af-south-1.myhuaweicloud.com
acl = private
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:
Name Type
==== ====
obs s3
e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q
IBM Cloud Object Storage (S3)
Information stored with IBM Cloud Object Storage is encrypted and dispersed across multiple geographic locations, and accessed through an implementation of the S3 API. This service makes use of the distributed storage technologies provided by IBM's Cloud Object Storage System (formerly Cleversafe). For more information visit: (http://www.ibm.com/cloud/object-storage)
To configure access to IBM COS S3, follow the steps below:
- Run `rclone config` and select `n` for a new remote.
2018/02/14 14:13:11 NOTICE: Config file "C:\\Users\\a\\.config\\rclone\\rclone.conf" not found - using defaults
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
- Enter the name for the configuration
name> <YOUR NAME>
- Select “s3” storage.
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...
\ "s3"
[snip]
Storage> s3
- Select IBM COS as the S3 Storage Provider.
Choose the S3 provider.
Choose a number from below, or type in your own value
1 / Choose this option to configure Storage to AWS S3
\ "AWS"
2 / Choose this option to configure Storage to Ceph Systems
\ "Ceph"
3 / Choose this option to configure Storage to Dreamhost
\ "Dreamhost"
4 / Choose this option to the configure Storage to IBM COS S3
\ "IBMCOS"
5 / Choose this option to the configure Storage to Minio
\ "Minio"
Provider>4
- Enter the Access Key and Secret.
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> <>
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> <>
- Specify the endpoint for IBM COS. For Public IBM COS, choose from the option below. For On Premise IBM COS, enter an endpoint address.
Endpoint for IBM COS S3 API.
Specify if using an IBM COS On Premise.
Choose a number from below, or type in your own value
1 / US Cross Region Endpoint
\ "s3-api.us-geo.objectstorage.softlayer.net"
2 / US Cross Region Dallas Endpoint
\ "s3-api.dal.us-geo.objectstorage.softlayer.net"
3 / US Cross Region Washington DC Endpoint
\ "s3-api.wdc-us-geo.objectstorage.softlayer.net"
4 / US Cross Region San Jose Endpoint
\ "s3-api.sjc-us-geo.objectstorage.softlayer.net"
5 / US Cross Region Private Endpoint
\ "s3-api.us-geo.objectstorage.service.networklayer.com"
6 / US Cross Region Dallas Private Endpoint
\ "s3-api.dal-us-geo.objectstorage.service.networklayer.com"
7 / US Cross Region Washington DC Private Endpoint
\ "s3-api.wdc-us-geo.objectstorage.service.networklayer.com"
8 / US Cross Region San Jose Private Endpoint
\ "s3-api.sjc-us-geo.objectstorage.service.networklayer.com"
9 / US Region East Endpoint
\ "s3.us-east.objectstorage.softlayer.net"
10 / US Region East Private Endpoint
\ "s3.us-east.objectstorage.service.networklayer.com"
11 / US Region South Endpoint
[snip]
34 / Toronto Single Site Private Endpoint
\ "s3.tor01.objectstorage.service.networklayer.com"
endpoint>1
- Specify a IBM COS Location Constraint. The location constraint must match endpoint when using IBM Cloud Public. For on-prem COS, do not make a selection from this list, hit enter
1 / US Cross Region Standard
\ "us-standard"
2 / US Cross Region Vault
\ "us-vault"
3 / US Cross Region Cold
\ "us-cold"
4 / US Cross Region Flex
\ "us-flex"
5 / US East Region Standard
\ "us-east-standard"
6 / US East Region Vault
\ "us-east-vault"
7 / US East Region Cold
\ "us-east-cold"
8 / US East Region Flex
\ "us-east-flex"
9 / US South Region Standard
\ "us-south-standard"
10 / US South Region Vault
\ "us-south-vault"
[snip]
32 / Toronto Flex
\ "tor01-flex"
location_constraint>1
- Specify a canned ACL. IBM Cloud (Storage) supports “public-read” and “private”. IBM Cloud(Infra) supports all the canned ACLs. On-Premise COS supports all the canned ACLs.
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
1 / Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS
\ "private"
2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS
\ "public-read"
3 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS
\ "public-read-write"
4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS
\ "authenticated-read"
acl> 1
- Review the displayed configuration and accept to save the “remote” then quit. The config file should look like this
[xxx]
type = s3
Provider = IBMCOS
access_key_id = xxx
secret_access_key = yyy
endpoint = s3-api.us-geo.objectstorage.softlayer.net
location_constraint = us-standard
acl = private
- Execute rclone commands
1) Create a bucket.
rclone mkdir IBM-COS-XREGION:newbucket
2) List available buckets.
rclone lsd IBM-COS-XREGION:
-1 2017-11-08 21:16:22 -1 test
-1 2018-02-14 20:16:39 -1 newbucket
3) List contents of a bucket.
rclone ls IBM-COS-XREGION:newbucket
18685952 test.exe
4) Copy a file from local to remote.
rclone copy /Users/file.txt IBM-COS-XREGION:newbucket
5) Copy a file from remote to local.
rclone copy IBM-COS-XREGION:newbucket/file.txt .
6) Delete a file on remote.
rclone delete IBM-COS-XREGION:newbucket/file.txt
IBM IAM authentication
If using IBM IAM authentication with IBM API KEY you need to fill in these additional parameters
- Select false for env_auth
- Leave
access_key_id
andsecret_access_key
blank - Paste your
ibm_api_key
Option ibm_api_key.
IBM API Key to be used to obtain IAM token
Enter a value of type string. Press Enter for the default (1).
ibm_api_key>
- Paste your
ibm_resource_instance_id
Option ibm_resource_instance_id.
IBM service instance id
Enter a value of type string. Press Enter for the default (2).
ibm_resource_instance_id>
- In advanced settings type true for
v2_auth
Option v2_auth.
If true use v2 authentication.
If this is false (the default) then rclone will use v4 authentication.
If it is set then rclone will use v2 authentication.
Use this only if v4 signatures don't work, e.g. pre Jewel/v10 CEPH.
Enter a boolean value (true or false). Press Enter for the default (true).
v2_auth>
IDrive e2
Here is an example of making an IDrive e2 configuration. First run:
rclone config
This will guide you through an interactive setup process.
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
Enter name for new remote.
name> e2
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...
\ (s3)
[snip]
Storage> s3
Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
[snip]
XX / IDrive e2
\ (IDrive)
[snip]
provider> IDrive
Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
1 / Enter AWS credentials in the next step.
\ (false)
2 / Get AWS credentials from the environment (env vars or IAM).
\ (true)
env_auth>
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> YOUR_ACCESS_KEY
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> YOUR_SECRET_KEY
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
/ Owner gets FULL_CONTROL.
1 | No one else has access rights (default).
\ (private)
/ Owner gets FULL_CONTROL.
2 | The AllUsers group gets READ access.
\ (public-read)
/ Owner gets FULL_CONTROL.
3 | The AllUsers group gets READ and WRITE access.
| Granting this on a bucket is generally not recommended.
\ (public-read-write)
/ Owner gets FULL_CONTROL.
4 | The AuthenticatedUsers group gets READ access.
\ (authenticated-read)
/ Object owner gets FULL_CONTROL.
5 | Bucket owner gets READ access.
| If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ (bucket-owner-read)
/ Both the object owner and the bucket owner get FULL_CONTROL over the object.
6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ (bucket-owner-full-control)
acl>
Edit advanced config?
y) Yes
n) No (default)
y/n>
Configuration complete.
Options:
- type: s3
- provider: IDrive
- access_key_id: YOUR_ACCESS_KEY
- secret_access_key: YOUR_SECRET_KEY
- endpoint: q9d9.la12.idrivee2-5.com
Keep this "e2" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
IONOS Cloud
IONOS S3 Object Storage is a service offered by IONOS for storing and accessing unstructured data. To connect to the service, you will need an access key and a secret key. These can be found in the Data Center Designer, by selecting Manager resources > Object Storage Key Manager.
Here is an example of a configuration. First run `rclone config`. This will walk you through an interactive setup process. Type `n` to add the new remote, and then enter a name:
Enter name for new remote.
name> ionos-fra
Type `s3` to choose the connection type:
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...
\ (s3)
[snip]
Storage> s3
Type `IONOS`:
Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
[snip]
XX / IONOS Cloud
\ (IONOS)
[snip]
provider> IONOS
Press Enter to choose the default option, Enter AWS credentials in the next step:
Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
1 / Enter AWS credentials in the next step.
\ (false)
2 / Get AWS credentials from the environment (env vars or IAM).
\ (true)
env_auth>
Enter your Access Key and Secret key. These can be retrieved in the Data Center Designer, click on the menu “Manager resources” / “Object Storage Key Manager”.
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> YOUR_ACCESS_KEY
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> YOUR_SECRET_KEY
Choose the region where your bucket is located:
Option region.
Region where your bucket will be created and your data stored.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / Frankfurt, Germany
\ (de)
2 / Berlin, Germany
\ (eu-central-2)
3 / Logrono, Spain
\ (eu-south-2)
region> 2
Choose the endpoint from the same region:
Option endpoint.
Endpoint for IONOS S3 Object Storage.
Specify the endpoint from the same region.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / Frankfurt, Germany
\ (s3-eu-central-1.ionoscloud.com)
2 / Berlin, Germany
\ (s3-eu-central-2.ionoscloud.com)
3 / Logrono, Spain
\ (s3-eu-south-2.ionoscloud.com)
endpoint> 1
Press Enter to choose the default option or choose the desired ACL setting:
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
/ Owner gets FULL_CONTROL.
1 | No one else has access rights (default).
\ (private)
/ Owner gets FULL_CONTROL.
[snip]
acl>
Press Enter to skip the advanced config:
Edit advanced config?
y) Yes
n) No (default)
y/n>
Press Enter to save the configuration, and then `q` to quit the configuration process:
Configuration complete.
Options:
- type: s3
- provider: IONOS
- access_key_id: YOUR_ACCESS_KEY
- secret_access_key: YOUR_SECRET_KEY
- endpoint: s3-eu-central-1.ionoscloud.com
Keep this "ionos-fra" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
Done! Now you can try some commands (for macOS, use `./rclone` instead of `rclone`).
- Create a bucket (the name must be unique within the whole IONOS S3)
rclone mkdir ionos-fra:my-bucket
- List available buckets
rclone lsd ionos-fra:
- Copy a file from local to remote
rclone copy /Users/file.txt ionos-fra:my-bucket
- List contents of a bucket
rclone ls ionos-fra:my-bucket
- Copy a file from remote to local
rclone copy ionos-fra:my-bucket/file.txt
Minio
Minio is an object storage server built for cloud application developers and devops.
It is very easy to install and provides an S3 compatible server which can be used by rclone.
To use it, install Minio following the instructions here.
When it configures itself, Minio will print something like this:
Endpoint: http://192.168.1.106:9000 http://172.23.0.1:9000
AccessKey: USWUXHGYZQYFYFFIT3RE
SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Region: us-east-1
SQS ARNs: arn:minio:sqs:us-east-1:1:redis arn:minio:sqs:us-east-1:2:redis
Browser Access:
http://192.168.1.106:9000 http://172.23.0.1:9000
Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
$ mc config host add myminio http://192.168.1.106:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Object API (Amazon S3 compatible):
Go: https://docs.minio.io/docs/golang-client-quickstart-guide
Java: https://docs.minio.io/docs/java-client-quickstart-guide
Python: https://docs.minio.io/docs/python-client-quickstart-guide
JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
.NET: https://docs.minio.io/docs/dotnet-client-quickstart-guide
Drive Capacity: 26 GiB Free, 165 GiB Total
These details need to go into `rclone config` like this. Note that it is important to put the region in as stated above.
env_auth> 1
access_key_id> USWUXHGYZQYFYFFIT3RE
secret_access_key> MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
region> us-east-1
endpoint> http://192.168.1.106:9000
location_constraint>
server_side_encryption>
Which makes the config file look like this
[minio]
type = s3
provider = Minio
env_auth = false
access_key_id = USWUXHGYZQYFYFFIT3RE
secret_access_key = MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
region = us-east-1
endpoint = http://192.168.1.106:9000
location_constraint =
server_side_encryption =
So once set up, for example, to copy files into a bucket
rclone copy /path/to/files minio:bucket
Outscale
OUTSCALE Object Storage (OOS) is an enterprise-grade, S3-compatible storage service provided by OUTSCALE, a brand of Dassault Systèmes. For more information about OOS, see the official documentation.
Here is an example of an OOS configuration that you can paste into your rclone configuration file:
[outscale]
type = s3
provider = Outscale
env_auth = false
access_key_id = ABCDEFGHIJ0123456789
secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
region = eu-west-2
endpoint = oos.eu-west-2.outscale.com
acl = private
You can also run `rclone config` to go through the interactive setup process:
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
Enter name for new remote.
name> outscale
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
X / Amazon S3 Compliant Storage Providers including AWS, ...Outscale, ...and others
\ (s3)
[snip]
Storage> outscale
Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
[snip]
XX / OUTSCALE Object Storage (OOS)
\ (Outscale)
[snip]
provider> Outscale
Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
1 / Enter AWS credentials in the next step.
\ (false)
2 / Get AWS credentials from the environment (env vars or IAM).
\ (true)
env_auth>
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> ABCDEFGHIJ0123456789
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Option region.
Region where your bucket will be created and your data stored.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / Paris, France
\ (eu-west-2)
2 / New Jersey, USA
\ (us-east-2)
3 / California, USA
\ (us-west-1)
4 / SecNumCloud, Paris, France
\ (cloudgouv-eu-west-1)
5 / Tokyo, Japan
\ (ap-northeast-1)
region> 1
Option endpoint.
Endpoint for S3 API.
Required when using an S3 clone.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / Outscale EU West 2 (Paris)
\ (oos.eu-west-2.outscale.com)
2 / Outscale US east 2 (New Jersey)
\ (oos.us-east-2.outscale.com)
3 / Outscale EU West 1 (California)
\ (oos.us-west-1.outscale.com)
4 / Outscale SecNumCloud (Paris)
\ (oos.cloudgouv-eu-west-1.outscale.com)
5 / Outscale AP Northeast 1 (Japan)
\ (oos.ap-northeast-1.outscale.com)
endpoint> 1
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
If the acl is an empty string then no X-Amz-Acl: header is added and
the default (private) will be used.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
/ Owner gets FULL_CONTROL.
1 | No one else has access rights (default).
\ (private)
[snip]
acl> 1
Edit advanced config?
y) Yes
n) No (default)
y/n> n
Configuration complete.
Options:
- type: s3
- provider: Outscale
- access_key_id: ABCDEFGHIJ0123456789
- secret_access_key: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
- endpoint: oos.eu-west-2.outscale.com
Keep this "outscale" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
Qiniu Cloud Object Storage (Kodo)
Qiniu Cloud Object Storage (Kodo) is a completely independently researched core technology which is proven by repeated customer experience, and has occupied an absolutely leading market position. Kodo can be widely applied to mass data management.
To configure access to Qiniu Kodo, follow the steps below:
- Run `rclone config` and select `n` for a new remote.
rclone config
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
- Give the name of the configuration. For example, name it ‘qiniu’.
name> qiniu
- Select `s3` storage.
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...
\ (s3)
[snip]
Storage> s3
- Select `Qiniu` provider.
Choose a number from below, or type in your own value
1 / Amazon Web Services (AWS) S3
\ "AWS"
[snip]
22 / Qiniu Object Storage (Kodo)
\ (Qiniu)
[snip]
provider> Qiniu
- Enter your SecretId and SecretKey of Qiniu Kodo.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
\ "false"
2 / Get AWS credentials from the environment (env vars or IAM)
\ "true"
env_auth> 1
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
access_key_id> AKIDxxxxxxxxxx
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
secret_access_key> xxxxxxxxxxx
- Select the endpoint for Qiniu Kodo. This is the standard endpoint for each region.
/ The default endpoint - a good choice if you are unsure.
1 | East China Region 1.
| Needs location constraint cn-east-1.
\ (cn-east-1)
/ East China Region 2.
2 | Needs location constraint cn-east-2.
\ (cn-east-2)
/ North China Region 1.
3 | Needs location constraint cn-north-1.
\ (cn-north-1)
/ South China Region 1.
4 | Needs location constraint cn-south-1.
\ (cn-south-1)
/ North America Region.
5 | Needs location constraint us-north-1.
\ (us-north-1)
/ Southeast Asia Region 1.
6 | Needs location constraint ap-southeast-1.
\ (ap-southeast-1)
/ Northeast Asia Region 1.
7 | Needs location constraint ap-northeast-1.
\ (ap-northeast-1)
[snip]
endpoint> 1
Option endpoint.
Endpoint for Qiniu Object Storage.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / East China Endpoint 1
\ (s3-cn-east-1.qiniucs.com)
2 / East China Endpoint 2
\ (s3-cn-east-2.qiniucs.com)
3 / North China Endpoint 1
\ (s3-cn-north-1.qiniucs.com)
4 / South China Endpoint 1
\ (s3-cn-south-1.qiniucs.com)
5 / North America Endpoint 1
\ (s3-us-north-1.qiniucs.com)
6 / Southeast Asia Endpoint 1
\ (s3-ap-southeast-1.qiniucs.com)
7 / Northeast Asia Endpoint 1
\ (s3-ap-northeast-1.qiniucs.com)
endpoint> 1
Option location_constraint.
Location constraint - must be set to match the Region.
Used when creating buckets only.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / East China Region 1
\ (cn-east-1)
2 / East China Region 2
\ (cn-east-2)
3 / North China Region 1
\ (cn-north-1)
4 / South China Region 1
\ (cn-south-1)
5 / North America Region 1
\ (us-north-1)
6 / Southeast Asia Region 1
\ (ap-southeast-1)
7 / Northeast Asia Region 1
\ (ap-northeast-1)
location_constraint> 1
- Choose acl and storage class.
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
/ Owner gets FULL_CONTROL.
1 | No one else has access rights (default).
\ (private)
/ Owner gets FULL_CONTROL.
2 | The AllUsers group gets READ access.
\ (public-read)
[snip]
acl> 2
The storage class to use when storing new objects in Qiniu.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Standard storage class
\ (STANDARD)
2 / Infrequent access storage mode
\ (LINE)
3 / Archive storage mode
\ (GLACIER)
4 / Deep archive storage mode
\ (DEEP_ARCHIVE)
[snip]
storage_class> 1
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n> n
Remote config
--------------------
[qiniu]
- type: s3
- provider: Qiniu
- access_key_id: xxx
- secret_access_key: xxx
- region: cn-east-1
- endpoint: s3-cn-east-1.qiniucs.com
- location_constraint: cn-east-1
- acl: public-read
- storage_class: STANDARD
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:
Name Type
==== ====
qiniu s3
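Once saved, the qiniu remote can be used like any other. For example, to copy a local directory into a Kodo bucket (my-bucket here is just a placeholder for one of your buckets):
rclone copy /path/to/files qiniu:my-bucket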
RackCorp
RackCorp Object Storage is an S3 compatible object storage platform from your trusted cloud provider, RackCorp. The service is fast, reliable, well priced, and located in many strategic locations unserviced by others, to ensure you can maintain data sovereignty.
Before you can use RackCorp Object Storage, you'll need to "sign up" for an account on our "portal". Next you can easily create an access key, a secret key and buckets in your location of choice. These details are required in the configuration steps that follow, when rclone config asks for your access_key_id and secret_access_key.
Your config should end up looking like this:
[RCS3-demo-config]
type = s3
provider = RackCorp
env_auth = true
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region = au-nsw
endpoint = s3.rackcorp.com
location_constraint = au-nsw
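A quick way to check that the remote works is to list your buckets, using the remote name from the example above:
rclone lsd RCS3-demo-config: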
Rclone Serve S3
Rclone can serve any remote over the S3 protocol. For details see the rclone serve s3 documentation.
For example, to serve remote:path over S3, run the server like this:
rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY remote:path
This will be compatible with an rclone remote which is defined like this:
[serves3]
type = s3
provider = Rclone
endpoint = http://127.0.0.1:8080/
access_key_id = ACCESS_KEY_ID
secret_access_key = SECRET_ACCESS_KEY
use_multipart_uploads = false
Note that setting use_multipart_uploads = false
is to work around
a bug which will be fixed in due course.
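With the server from the example running, the serves3 remote defined above can be used like any other remote. For instance (bucket here stands for a top-level directory under remote:path):
rclone lsd serves3:
rclone copy /path/to/files serves3:bucket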
Scaleway
The Scaleway Object Storage platform allows you to store anything from backups, logs and web assets to documents and photos. Files can be uploaded from the Scaleway console, or transferred through the API and CLI, or by using any S3-compatible tool.
Scaleway provides an S3 interface which can be configured for use with rclone like this:
[scaleway]
type = s3
provider = Scaleway
env_auth = false
endpoint = s3.nl-ams.scw.cloud
access_key_id = SCWXXXXXXXXXXXXXX
secret_access_key = 1111111-2222-3333-44444-55555555555555
region = nl-ams
location_constraint = nl-ams
acl = private
upload_cutoff = 5M
chunk_size = 5M
copy_cutoff = 5M
Scaleway Glacier is the low-cost S3 Glacier alternative from Scaleway. It works the same way as S3 and accepts "GLACIER" as its storage_class.
So you can configure your remote with the storage_class = GLACIER option to upload files directly to Scaleway Glacier. Keep in mind that in this state you can't read files back directly; you will need to restore them to the "STANDARD" storage class before you can read them (see the "restore" section above).
Seagate Lyve Cloud
Seagate Lyve Cloud is an S3 compatible object storage platform from Seagate intended for enterprise use.
Here is a config run through for a remote called remote - you may choose a different name of course. Note that to create access keys and secret keys you will need to create a service account first.
$ rclone config
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Choose s3 backend
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...
\ (s3)
[snip]
Storage> s3
Choose LyveCloud as S3 provider
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
[snip]
XX / Seagate Lyve Cloud
\ (LyveCloud)
[snip]
provider> LyveCloud
Take the default (just press enter) to enter access key and secret in the config file.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
1 / Enter AWS credentials in the next step.
\ (false)
2 / Get AWS credentials from the environment (env vars or IAM).
\ (true)
env_auth>
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> XXX
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> YYY
Leave region blank
Region to connect to.
Leave blank if you are using an S3 clone and you don't have a region.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
/ Use this if unsure.
1 | Will use v4 signatures and an empty region.
\ ()
/ Use this only if v4 signatures don't work.
2 | E.g. pre Jewel/v10 CEPH.
\ (other-v2-signature)
region>
Choose an endpoint from the list
Endpoint for S3 API.
Required when using an S3 clone.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / Seagate Lyve Cloud US East 1 (Virginia)
\ (s3.us-east-1.lyvecloud.seagate.com)
2 / Seagate Lyve Cloud US West 1 (California)
\ (s3.us-west-1.lyvecloud.seagate.com)
3 / Seagate Lyve Cloud AP Southeast 1 (Singapore)
\ (s3.ap-southeast-1.lyvecloud.seagate.com)
endpoint> 1
Leave location constraint blank
Location constraint - must be set to match the Region.
Leave blank if not sure. Used when creating buckets only.
Enter a value. Press Enter to leave empty.
location_constraint>
Choose default ACL (private).
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
/ Owner gets FULL_CONTROL.
1 | No one else has access rights (default).
\ (private)
[snip]
acl>
And the config file should end up looking like this:
[remote]
type = s3
provider = LyveCloud
access_key_id = XXX
secret_access_key = YYY
endpoint = s3.us-east-1.lyvecloud.seagate.com
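You can then, for example, make a bucket and copy files into it (my-bucket is a placeholder name):
rclone mkdir remote:my-bucket
rclone copy /path/to/files remote:my-bucket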
SeaweedFS
SeaweedFS is a distributed storage system for blobs, objects, files, and data lakes, with O(1) disk seek and a scalable file metadata store. It has an S3 compatible object storage interface. SeaweedFS can also act as a gateway to remote S3 compatible object stores, caching data and metadata with asynchronous write back for fast local access speed and minimized access cost.
Assuming the SeaweedFS instance has been configured with weed shell as follows:
> s3.bucket.create -name foo
> s3.configure -access_key=any -secret_key=any -buckets=foo -user=me -actions=Read,Write,List,Tagging,Admin -apply
{
"identities": [
{
"name": "me",
"credentials": [
{
"accessKey": "any",
"secretKey": "any"
}
],
"actions": [
"Read:foo",
"Write:foo",
"List:foo",
"Tagging:foo",
"Admin:foo"
]
}
]
}
To use rclone with SeaweedFS, the above configuration should end up with something like this in your config:
[seaweedfs_s3]
type = s3
provider = SeaweedFS
access_key_id = any
secret_access_key = any
endpoint = localhost:8333
So once set up, for example, to copy files into a bucket:
rclone copy /path/to/files seaweedfs_s3:foo
Selectel
Selectel Cloud Storage is an S3 compatible storage system featuring triple redundancy storage, automatic scaling, high availability and a comprehensive IAM system.
Selectel has a section on their website for configuring rclone which shows how to make the right API keys.
From rclone v1.69 Selectel is a supported operator - please choose the Selectel provider type.
Note that you should use "virtual hosted" access for the buckets (which is the recommended default), not "path style".
You can use rclone config to make a new remote like this:
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
Enter name for new remote.
name> selectel
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
XX / Amazon S3 Compliant Storage Providers including ..., Selectel, ...
\ (s3)
[snip]
Storage> s3
Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
[snip]
XX / Selectel Object Storage
\ (Selectel)
[snip]
provider> Selectel
Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
1 / Enter AWS credentials in the next step.
\ (false)
2 / Get AWS credentials from the environment (env vars or IAM).
\ (true)
env_auth> 1
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> ACCESS_KEY
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> SECRET_ACCESS_KEY
Option region.
Region where your data is stored.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / St. Petersburg
\ (ru-1)
region> 1
Option endpoint.
Endpoint for Selectel Object Storage.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / Saint Petersburg
\ (s3.ru-1.storage.selcloud.ru)
endpoint> 1
Edit advanced config?
y) Yes
n) No (default)
y/n> n
Configuration complete.
Options:
- type: s3
- provider: Selectel
- access_key_id: ACCESS_KEY
- secret_access_key: SECRET_ACCESS_KEY
- region: ru-1
- endpoint: s3.ru-1.storage.selcloud.ru
Keep this "selectel" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
And your config should end up looking like this:
[selectel]
type = s3
provider = Selectel
access_key_id = ACCESS_KEY
secret_access_key = SECRET_ACCESS_KEY
region = ru-1
endpoint = s3.ru-1.storage.selcloud.ru
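The remote can then be used as usual, for example to sync a local directory to a bucket (my-bucket is a placeholder), deleting any excess files:
rclone sync --interactive /home/local/directory selectel:my-bucket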
Wasabi
Wasabi is a cloud-based object storage service for a broad range of applications and use cases. Wasabi is designed for individuals and organizations that require a high-performance, reliable, and secure data storage infrastructure at minimal cost.
Wasabi provides an S3 interface which can be configured for use with rclone like this.
No remotes found, make a new one?
n) New remote
s) Set configuration password
n/s> n
name> wasabi
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 (also Dreamhost, Ceph, ChinaMobile, ArvanCloud, Minio, Liara)
\ "s3"
[snip]
Storage> s3
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
\ "false"
2 / Get AWS credentials from the environment (env vars or IAM)
\ "true"
env_auth> 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> YOURACCESSKEY
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> YOURSECRETACCESSKEY
Region to connect to.
Choose a number from below, or type in your own value
/ The default endpoint - a good choice if you are unsure.
1 | US Region, Northern Virginia, or Pacific Northwest.
| Leave location constraint empty.
\ "us-east-1"
[snip]
region> us-east-1
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
Specify if using an S3 clone such as Ceph.
endpoint> s3.wasabisys.com
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
1 / Empty for US Region, Northern Virginia, or Pacific Northwest.
\ ""
[snip]
location_constraint>
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
1 / Owner gets FULL_CONTROL. No one else has access rights (default).
\ "private"
[snip]
acl>
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
1 / None
\ ""
2 / AES256
\ "AES256"
server_side_encryption>
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
1 / Default
\ ""
2 / Standard storage class
\ "STANDARD"
3 / Reduced redundancy storage class
\ "REDUCED_REDUNDANCY"
4 / Standard Infrequent Access storage class
\ "STANDARD_IA"
storage_class>
Remote config
--------------------
[wasabi]
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region = us-east-1
endpoint = s3.wasabisys.com
location_constraint =
acl =
server_side_encryption =
storage_class =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
This will leave the config file looking like this.
[wasabi]
type = s3
provider = Wasabi
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region =
endpoint = s3.wasabisys.com
location_constraint =
acl =
server_side_encryption =
storage_class =
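The remote then works like any other S3 remote, for example (my-bucket is a placeholder):
rclone ls wasabi:my-bucket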
Alibaba OSS
Here is an example of making an Alibaba Cloud (Aliyun) OSS configuration. First run:
rclone config
This will guide you through an interactive setup process.
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> oss
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...
\ "s3"
[snip]
Storage> s3
Choose your S3 provider.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Amazon Web Services (AWS) S3
\ "AWS"
2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun
\ "Alibaba"
3 / Ceph Object Storage
\ "Ceph"
[snip]
provider> Alibaba
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
\ "false"
2 / Get AWS credentials from the environment (env vars or IAM)
\ "true"
env_auth> 1
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
access_key_id> accesskeyid
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
secret_access_key> secretaccesskey
Endpoint for OSS API.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / East China 1 (Hangzhou)
\ "oss-cn-hangzhou.aliyuncs.com"
2 / East China 2 (Shanghai)
\ "oss-cn-shanghai.aliyuncs.com"
3 / North China 1 (Qingdao)
\ "oss-cn-qingdao.aliyuncs.com"
[snip]
endpoint> 1
Canned ACL used when creating buckets and storing or copying objects.
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Owner gets FULL_CONTROL. No one else has access rights (default).
\ "private"
2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
\ "public-read"
/ Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
[snip]
acl> 1
The storage class to use when storing new objects in OSS.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Default
\ ""
2 / Standard storage class
\ "STANDARD"
3 / Archive storage mode.
\ "GLACIER"
4 / Infrequent access storage mode.
\ "STANDARD_IA"
storage_class> 1
Edit advanced config? (y/n)
y) Yes
n) No
y/n> n
Remote config
--------------------
[oss]
type = s3
provider = Alibaba
env_auth = false
access_key_id = accesskeyid
secret_access_key = secretaccesskey
endpoint = oss-cn-hangzhou.aliyuncs.com
acl = private
storage_class = Standard
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
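You can then use the oss remote as usual, for example (my-bucket is a placeholder):
rclone lsd oss:
rclone copy /path/to/files oss:my-bucket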
China Mobile Ecloud Elastic Object Storage (EOS)
Here is an example of making a China Mobile Ecloud Elastic Object Storage (EOS) configuration. First run:
rclone config
This will guide you through an interactive setup process.
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> ChinaMobile
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
...
XX / Amazon S3 Compliant Storage Providers including AWS, ...
\ (s3)
...
Storage> s3
Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
...
4 / China Mobile Ecloud Elastic Object Storage (EOS)
\ (ChinaMobile)
...
provider> ChinaMobile
Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
1 / Enter AWS credentials in the next step.
\ (false)
2 / Get AWS credentials from the environment (env vars or IAM).
\ (true)
env_auth>
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> accesskeyid
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> secretaccesskey
Option endpoint.
Endpoint for China Mobile Ecloud Elastic Object Storage (EOS) API.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
/ The default endpoint - a good choice if you are unsure.
1 | East China (Suzhou)
\ (eos-wuxi-1.cmecloud.cn)
2 / East China (Jinan)
\ (eos-jinan-1.cmecloud.cn)
3 / East China (Hangzhou)
\ (eos-ningbo-1.cmecloud.cn)
4 / East China (Shanghai-1)
\ (eos-shanghai-1.cmecloud.cn)
5 / Central China (Zhengzhou)
\ (eos-zhengzhou-1.cmecloud.cn)
6 / Central China (Changsha-1)
\ (eos-hunan-1.cmecloud.cn)
7 / Central China (Changsha-2)
\ (eos-zhuzhou-1.cmecloud.cn)
8 / South China (Guangzhou-2)
\ (eos-guangzhou-1.cmecloud.cn)
9 / South China (Guangzhou-3)
\ (eos-dongguan-1.cmecloud.cn)
10 / North China (Beijing-1)
\ (eos-beijing-1.cmecloud.cn)
11 / North China (Beijing-2)
\ (eos-beijing-2.cmecloud.cn)
12 / North China (Beijing-3)
\ (eos-beijing-4.cmecloud.cn)
13 / North China (Huhehaote)
\ (eos-huhehaote-1.cmecloud.cn)
14 / Southwest China (Chengdu)
\ (eos-chengdu-1.cmecloud.cn)
15 / Southwest China (Chongqing)
\ (eos-chongqing-1.cmecloud.cn)
16 / Southwest China (Guiyang)
\ (eos-guiyang-1.cmecloud.cn)
17 / Northwest China (Xian)
\ (eos-xian-1.cmecloud.cn)
18 / Yunnan China (Kunming)
\ (eos-yunnan.cmecloud.cn)
19 / Yunnan China (Kunming-2)
\ (eos-yunnan-2.cmecloud.cn)
20 / Tianjin China (Tianjin)
\ (eos-tianjin-1.cmecloud.cn)
21 / Jilin China (Changchun)
\ (eos-jilin-1.cmecloud.cn)
22 / Hubei China (Xiangyan)
\ (eos-hubei-1.cmecloud.cn)
23 / Jiangxi China (Nanchang)
\ (eos-jiangxi-1.cmecloud.cn)
24 / Gansu China (Lanzhou)
\ (eos-gansu-1.cmecloud.cn)
25 / Shanxi China (Taiyuan)
\ (eos-shanxi-1.cmecloud.cn)
26 / Liaoning China (Shenyang)
\ (eos-liaoning-1.cmecloud.cn)
27 / Hebei China (Shijiazhuang)
\ (eos-hebei-1.cmecloud.cn)
28 / Fujian China (Xiamen)
\ (eos-fujian-1.cmecloud.cn)
29 / Guangxi China (Nanning)
\ (eos-guangxi-1.cmecloud.cn)
30 / Anhui China (Huainan)
\ (eos-anhui-1.cmecloud.cn)
endpoint> 1
Option location_constraint.
Location constraint - must match endpoint.
Used when creating buckets only.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / East China (Suzhou)
\ (wuxi1)
2 / East China (Jinan)
\ (jinan1)
3 / East China (Hangzhou)
\ (ningbo1)
4 / East China (Shanghai-1)
\ (shanghai1)
5 / Central China (Zhengzhou)
\ (zhengzhou1)
6 / Central China (Changsha-1)
\ (hunan1)
7 / Central China (Changsha-2)
\ (zhuzhou1)
8 / South China (Guangzhou-2)
\ (guangzhou1)
9 / South China (Guangzhou-3)
\ (dongguan1)
10 / North China (Beijing-1)
\ (beijing1)
11 / North China (Beijing-2)
\ (beijing2)
12 / North China (Beijing-3)
\ (beijing4)
13 / North China (Huhehaote)
\ (huhehaote1)
14 / Southwest China (Chengdu)
\ (chengdu1)
15 / Southwest China (Chongqing)
\ (chongqing1)
16 / Southwest China (Guiyang)
\ (guiyang1)
17 / Northwest China (Xian)
\ (xian1)
18 / Yunnan China (Kunming)
\ (yunnan)
19 / Yunnan China (Kunming-2)
\ (yunnan2)
20 / Tianjin China (Tianjin)
\ (tianjin1)
21 / Jilin China (Changchun)
\ (jilin1)
22 / Hubei China (Xiangyan)
\ (hubei1)
23 / Jiangxi China (Nanchang)
\ (jiangxi1)
24 / Gansu China (Lanzhou)
\ (gansu1)
25 / Shanxi China (Taiyuan)
\ (shanxi1)
26 / Liaoning China (Shenyang)
\ (liaoning1)
27 / Hebei China (Shijiazhuang)
\ (hebei1)
28 / Fujian China (Xiamen)
\ (fujian1)
29 / Guangxi China (Nanning)
\ (guangxi1)
30 / Anhui China (Huainan)
\ (anhui1)
location_constraint> 1
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
/ Owner gets FULL_CONTROL.
1 | No one else has access rights (default).
\ (private)
/ Owner gets FULL_CONTROL.
2 | The AllUsers group gets READ access.
\ (public-read)
/ Owner gets FULL_CONTROL.
3 | The AllUsers group gets READ and WRITE access.
| Granting this on a bucket is generally not recommended.
\ (public-read-write)
/ Owner gets FULL_CONTROL.
4 | The AuthenticatedUsers group gets READ access.
\ (authenticated-read)
/ Object owner gets FULL_CONTROL.
acl> private
Option server_side_encryption.
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / None
\ ()
2 / AES256
\ (AES256)
server_side_encryption>
Option storage_class.
The storage class to use when storing new objects in ChinaMobile.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / Default
\ ()
2 / Standard storage class
\ (STANDARD)
3 / Archive storage mode
\ (GLACIER)
4 / Infrequent access storage mode
\ (STANDARD_IA)
storage_class>
Edit advanced config?
y) Yes
n) No (default)
y/n> n
--------------------
[ChinaMobile]
type = s3
provider = ChinaMobile
access_key_id = accesskeyid
secret_access_key = secretaccesskey
endpoint = eos-wuxi-1.cmecloud.cn
location_constraint = wuxi1
acl = private
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
Leviia Cloud Object Storage
Leviia Object Storage, backup and secure your data in a 100% French cloud, independent of GAFAM (Google, Apple, Facebook, Amazon, Microsoft).
To configure access to Leviia, follow the steps below:
- Run rclone config and select n for a new remote.
rclone config
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
- Give the name of the configuration. For example, name it 'leviia'.
name> leviia
- Select s3 storage.
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...
\ (s3)
[snip]
Storage> s3
- Select Leviia provider.
Choose a number from below, or type in your own value
1 / Amazon Web Services (AWS) S3
\ "AWS"
[snip]
15 / Leviia Object Storage
\ (Leviia)
[snip]
provider> Leviia
- Enter your Leviia SecretId and SecretKey.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
\ "false"
2 / Get AWS credentials from the environment (env vars or IAM)
\ "true"
env_auth> 1
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
access_key_id> ZnIx.xxxxxxxxxxxxxxx
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
secret_access_key> xxxxxxxxxxx
- Select endpoint for Leviia.
/ The default endpoint
1 | Leviia.
\ (s3.leviia.com)
[snip]
endpoint> 1
- Choose acl.
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
/ Owner gets FULL_CONTROL.
1 | No one else has access rights (default).
\ (private)
/ Owner gets FULL_CONTROL.
2 | The AllUsers group gets READ access.
\ (public-read)
[snip]
acl> 1
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n> n
Remote config
--------------------
[leviia]
- type: s3
- provider: Leviia
- access_key_id: ZnIx.xxxxxxx
- secret_access_key: xxxxxxxx
- endpoint: s3.leviia.com
- acl: private
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:
Name Type
==== ====
leviia s3
Liara
Here is an example of making a Liara Object Storage configuration. First run:
rclone config
This will guide you through an interactive setup process.
No remotes found, make a new one?
n) New remote
s) Set configuration password
n/s> n
name> Liara
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 (also Dreamhost, Ceph, ChinaMobile, ArvanCloud, Liara, Minio)
\ "s3"
[snip]
Storage> s3
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
\ "false"
2 / Get AWS credentials from the environment (env vars or IAM)
\ "true"
env_auth> 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> YOURACCESSKEY
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> YOURSECRETACCESSKEY
Region to connect to.
Choose a number from below, or type in your own value
/ The default endpoint
1 | US Region, Northern Virginia, or Pacific Northwest.
| Leave location constraint empty.
\ "us-east-1"
[snip]
region>
Endpoint for S3 API.
Leave blank if using Liara to use the default endpoint for the region.
Specify if using an S3 clone such as Ceph.
endpoint> storage.iran.liara.space
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
1 / Owner gets FULL_CONTROL. No one else has access rights (default).
\ "private"
[snip]
acl>
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
1 / None
\ ""
2 / AES256
\ "AES256"
server_side_encryption>
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
1 / Default
\ ""
2 / Standard storage class
\ "STANDARD"
storage_class>
Remote config
--------------------
[Liara]
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
endpoint = storage.iran.liara.space
location_constraint =
acl =
server_side_encryption =
storage_class =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
This will leave the config file looking like this.
[Liara]
type = s3
provider = Liara
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region =
endpoint = storage.iran.liara.space
location_constraint =
acl =
server_side_encryption =
storage_class =
Linode
Here is an example of making a Linode Object Storage configuration. First run:
rclone config
This will guide you through an interactive setup process.
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
Enter name for new remote.
name> linode
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...Linode, ...and others
\ (s3)
[snip]
Storage> s3
Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
[snip]
XX / Linode Object Storage
\ (Linode)
[snip]
provider> Linode
Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
1 / Enter AWS credentials in the next step.
\ (false)
2 / Get AWS credentials from the environment (env vars or IAM).
\ (true)
env_auth>
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> ACCESS_KEY
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> SECRET_ACCESS_KEY
Option endpoint.
Endpoint for Linode Object Storage API.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / Amsterdam (Netherlands), nl-ams-1
\ (nl-ams-1.linodeobjects.com)
2 / Atlanta, GA (USA), us-southeast-1
\ (us-southeast-1.linodeobjects.com)
3 / Chennai (India), in-maa-1
\ (in-maa-1.linodeobjects.com)
4 / Chicago, IL (USA), us-ord-1
\ (us-ord-1.linodeobjects.com)
5 / Frankfurt (Germany), eu-central-1
\ (eu-central-1.linodeobjects.com)
6 / Jakarta (Indonesia), id-cgk-1
\ (id-cgk-1.linodeobjects.com)
7 / London 2 (Great Britain), gb-lon-1
\ (gb-lon-1.linodeobjects.com)
8 / Los Angeles, CA (USA), us-lax-1
\ (us-lax-1.linodeobjects.com)
9 / Madrid (Spain), es-mad-1
\ (es-mad-1.linodeobjects.com)
10 / Melbourne (Australia), au-mel-1
\ (au-mel-1.linodeobjects.com)
11 / Miami, FL (USA), us-mia-1
\ (us-mia-1.linodeobjects.com)
12 / Milan (Italy), it-mil-1
\ (it-mil-1.linodeobjects.com)
13 / Newark, NJ (USA), us-east-1
\ (us-east-1.linodeobjects.com)
14 / Osaka (Japan), jp-osa-1
\ (jp-osa-1.linodeobjects.com)
15 / Paris (France), fr-par-1
\ (fr-par-1.linodeobjects.com)
16 / São Paulo (Brazil), br-gru-1
\ (br-gru-1.linodeobjects.com)
17 / Seattle, WA (USA), us-sea-1
\ (us-sea-1.linodeobjects.com)
18 / Singapore, ap-south-1
\ (ap-south-1.linodeobjects.com)
19 / Singapore 2, sg-sin-1
\ (sg-sin-1.linodeobjects.com)
20 / Stockholm (Sweden), se-sto-1
\ (se-sto-1.linodeobjects.com)
21 / Washington, DC, (USA), us-iad-1
\ (us-iad-1.linodeobjects.com)
endpoint> 5
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
If the acl is an empty string then no X-Amz-Acl: header is added and
the default (private) will be used.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
/ Owner gets FULL_CONTROL.
1 | No one else has access rights (default).
\ (private)
[snip]
acl>
Edit advanced config?
y) Yes
n) No (default)
y/n> n
Configuration complete.
Options:
- type: s3
- provider: Linode
- access_key_id: ACCESS_KEY
- secret_access_key: SECRET_ACCESS_KEY
- endpoint: eu-central-1.linodeobjects.com
Keep this "linode" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
This will leave the config file looking like this.
[linode]
type = s3
provider = Linode
access_key_id = ACCESS_KEY
secret_access_key = SECRET_ACCESS_KEY
endpoint = eu-central-1.linodeobjects.com
Magalu
Here is an example of making a Magalu Object Storage configuration. First run:
rclone config
This will guide you through an interactive setup process.
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
Enter name for new remote.
name> magalu
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...Magalu, ...and others
\ (s3)
[snip]
Storage> s3
Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
[snip]
XX / Magalu Object Storage
\ (Magalu)
[snip]
provider> Magalu
Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
1 / Enter AWS credentials in the next step.
\ (false)
2 / Get AWS credentials from the environment (env vars or IAM).
\ (true)
env_auth> 1
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> ACCESS_KEY
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> SECRET_ACCESS_KEY
Option endpoint.
Endpoint for Magalu Object Storage API.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / São Paulo, SP (BR), br-se1
\ (br-se1.magaluobjects.com)
2 / Fortaleza, CE (BR), br-ne1
\ (br-ne1.magaluobjects.com)
endpoint> 2
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
If the acl is an empty string then no X-Amz-Acl: header is added and
the default (private) will be used.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
/ Owner gets FULL_CONTROL.
1 | No one else has access rights (default).
\ (private)
[snip]
acl>
Edit advanced config?
y) Yes
n) No (default)
y/n> n
Configuration complete.
Options:
- type: s3
- provider: Magalu
- access_key_id: ACCESS_KEY
- secret_access_key: SECRET_ACCESS_KEY
- endpoint: br-ne1.magaluobjects.com
Keep this "magalu" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
This will leave the config file looking like this.
[magalu]
type = s3
provider = Magalu
access_key_id = ACCESS_KEY
secret_access_key = SECRET_ACCESS_KEY
endpoint = br-ne1.magaluobjects.com
ArvanCloud
ArvanCloud Object Storage goes beyond the limited traditional file storage. It gives you access to backup and archived files and allows sharing. Files such as profile images in apps, images sent by users, or scanned documents can be stored securely and easily in our Object Storage service.
ArvanCloud provides an S3 interface which can be configured for use with rclone like this.
No remotes found, make a new one?
n) New remote
s) Set configuration password
n/s> n
name> ArvanCloud
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 (also Dreamhost, Ceph, ChinaMobile, ArvanCloud, Liara, Minio)
\ "s3"
[snip]
Storage> s3
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
\ "false"
2 / Get AWS credentials from the environment (env vars or IAM)
\ "true"
env_auth> 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> YOURACCESSKEY
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> YOURSECRETACCESSKEY
Region to connect to.
Choose a number from below, or type in your own value
/ The default endpoint - a good choice if you are unsure.
1 | US Region, Northern Virginia, or Pacific Northwest.
| Leave location constraint empty.
\ "us-east-1"
[snip]
region>
Endpoint for S3 API.
Leave blank if using ArvanCloud to use the default endpoint for the region.
Specify if using an S3 clone such as Ceph.
endpoint> s3.arvanstorage.com
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
1 / Empty for Iran-Tehran Region.
\ ""
[snip]
location_constraint>
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
1 / Owner gets FULL_CONTROL. No one else has access rights (default).
\ "private"
[snip]
acl>
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
1 / None
\ ""
2 / AES256
\ "AES256"
server_side_encryption>
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
1 / Default
\ ""
2 / Standard storage class
\ "STANDARD"
storage_class>
Remote config
--------------------
[ArvanCloud]
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region = ir-thr-at1
endpoint = s3.arvanstorage.com
location_constraint =
acl =
server_side_encryption =
storage_class =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
This will leave the config file looking like this.
[ArvanCloud]
type = s3
provider = ArvanCloud
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region =
endpoint = s3.arvanstorage.com
location_constraint =
acl =
server_side_encryption =
storage_class =
Tencent Cloud Object Storage (COS)
Tencent Cloud Object Storage (COS) is a distributed storage service offered by Tencent Cloud for unstructured data. It is secure, stable, massive, convenient, low-delay and low-cost.
To configure access to Tencent COS, follow the steps below:
- Run rclone config and select n for a new remote.
rclone config
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
- Give the name of the configuration. For example, name it ‘cos’.
name> cos
- Select s3 storage.
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...
\ "s3"
[snip]
Storage> s3
- Select TencentCOS provider.
Choose a number from below, or type in your own value
1 / Amazon Web Services (AWS) S3
\ "AWS"
[snip]
11 / Tencent Cloud Object Storage (COS)
\ "TencentCOS"
[snip]
provider> TencentCOS
- Enter your Tencent Cloud SecretId and SecretKey.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
\ "false"
2 / Get AWS credentials from the environment (env vars or IAM)
\ "true"
env_auth> 1
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
access_key_id> AKIDxxxxxxxxxx
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
secret_access_key> xxxxxxxxxxx
- Select endpoint for Tencent COS. These are the standard endpoints for the different regions.
1 / Beijing Region.
\ "cos.ap-beijing.myqcloud.com"
2 / Nanjing Region.
\ "cos.ap-nanjing.myqcloud.com"
3 / Shanghai Region.
\ "cos.ap-shanghai.myqcloud.com"
4 / Guangzhou Region.
\ "cos.ap-guangzhou.myqcloud.com"
[snip]
endpoint> 4
- Choose acl and storage class.
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Owner gets FULL_CONTROL. No one else has access rights (default).
\ "default"
[snip]
acl> 1
The storage class to use when storing new objects in Tencent COS.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Default
\ ""
[snip]
storage_class> 1
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n> n
Remote config
--------------------
[cos]
type = s3
provider = TencentCOS
env_auth = false
access_key_id = xxx
secret_access_key = xxx
endpoint = cos.ap-guangzhou.myqcloud.com
acl = default
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:
Name Type
==== ====
cos s3
Netease NOS
For Netease NOS, configure as per the configurator rclone config, setting the provider to Netease. This will automatically set force_path_style = false which is necessary for it to work properly.
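As a rough sketch, the resulting config file entry might look like this (the remote name nos, the credentials and the endpoint value are placeholders - use the endpoint for your NOS region):
[nos]
type = s3
provider = Netease
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
endpoint = YOUR_NOS_ENDPOINT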
Petabox
Here is an example of making a Petabox configuration. First run:
rclone config
This will guide you through an interactive setup process.
No remotes found, make a new one?
n) New remote
s) Set configuration password
n/s> n
Enter name for new remote.
name> My Petabox Storage
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...
\ "s3"
[snip]
Storage> s3
Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
[snip]
XX / Petabox Object Storage
\ (Petabox)
[snip]
provider> Petabox
Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
1 / Enter AWS credentials in the next step.
\ (false)
2 / Get AWS credentials from the environment (env vars or IAM).
\ (true)
env_auth> 1
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> YOUR_ACCESS_KEY_ID
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> YOUR_SECRET_ACCESS_KEY
Option region.
Region where your bucket will be created and your data stored.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / US East (N. Virginia)
\ (us-east-1)
2 / Europe (Frankfurt)
\ (eu-central-1)
3 / Asia Pacific (Singapore)
\ (ap-southeast-1)
4 / Middle East (Bahrain)
\ (me-south-1)
5 / South America (São Paulo)
\ (sa-east-1)
region> 1
Option endpoint.
Endpoint for Petabox S3 Object Storage.
Specify the endpoint from the same region.
Choose a number from below, or type in your own value.
1 / US East (N. Virginia)
\ (s3.petabox.io)
2 / US East (N. Virginia)
\ (s3.us-east-1.petabox.io)
3 / Europe (Frankfurt)
\ (s3.eu-central-1.petabox.io)
4 / Asia Pacific (Singapore)
\ (s3.ap-southeast-1.petabox.io)
5 / Middle East (Bahrain)
\ (s3.me-south-1.petabox.io)
6 / South America (São Paulo)
\ (s3.sa-east-1.petabox.io)
endpoint> 1
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
If the acl is an empty string then no X-Amz-Acl: header is added and
the default (private) will be used.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
/ Owner gets FULL_CONTROL.
1 | No one else has access rights (default).
\ (private)
/ Owner gets FULL_CONTROL.
2 | The AllUsers group gets READ access.
\ (public-read)
/ Owner gets FULL_CONTROL.
3 | The AllUsers group gets READ and WRITE access.
| Granting this on a bucket is generally not recommended.
\ (public-read-write)
/ Owner gets FULL_CONTROL.
4 | The AuthenticatedUsers group gets READ access.
\ (authenticated-read)
/ Object owner gets FULL_CONTROL.
5 | Bucket owner gets READ access.
| If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ (bucket-owner-read)
/ Both the object owner and the bucket owner get FULL_CONTROL over the object.
6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ (bucket-owner-full-control)
acl> 1
Edit advanced config?
y) Yes
n) No (default)
y/n> No
Configuration complete.
Options:
- type: s3
- provider: Petabox
- access_key_id: YOUR_ACCESS_KEY_ID
- secret_access_key: YOUR_SECRET_ACCESS_KEY
- region: us-east-1
- endpoint: s3.petabox.io
Keep this "My Petabox Storage" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
This will leave the config file looking like this.
[My Petabox Storage]
type = s3
provider = Petabox
access_key_id = YOUR_ACCESS_KEY_ID
secret_access_key = YOUR_SECRET_ACCESS_KEY
region = us-east-1
endpoint = s3.petabox.io
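Note that since this remote name contains spaces, it has to be quoted on the command line, e.g.:
rclone lsd "My Petabox Storage:"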
Storj
Storj is a decentralized cloud storage which can be used through its native protocol or an S3 compatible gateway.
The S3 compatible gateway is configured using rclone config with a type of s3 and a provider name of Storj. Here is an example run of the configurator:
Type of storage to configure.
Storage> s3
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
1 / Enter AWS credentials in the next step.
\ (false)
2 / Get AWS credentials from the environment (env vars or IAM).
\ (true)
env_auth> 1
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> XXXX (as shown when creating the access grant)
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> XXXX (as shown when creating the access grant)
Option endpoint.
Endpoint of the Shared Gateway.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / EU1 Shared Gateway
\ (gateway.eu1.storjshare.io)
2 / US1 Shared Gateway
\ (gateway.us1.storjshare.io)
3 / Asia-Pacific Shared Gateway
\ (gateway.ap1.storjshare.io)
endpoint> 1 (as shown when creating the access grant)
Edit advanced config?
y) Yes
n) No (default)
y/n> n
Note that S3 credentials are generated when you create an access grant.
Backend quirks
- --chunk-size is forced to be 64 MiB or greater. This will use more memory than the default of 5 MiB.
- Server-side copy is disabled as it isn't currently supported in the gateway.
- GetTier and SetTier are not supported.
Backend bugs
Due to issue #39 uploading multipart files via the S3 gateway causes them to lose their metadata. For rclone's purpose this means that the modification time is not stored, nor is any MD5SUM (if one is available from the source).
This has the following consequences:
- Using rclone rcat will fail as the metadata doesn't match after upload.
- Uploading files with rclone mount will fail for the same reason.
  - This can be worked around by using --vfs-cache-mode writes or --vfs-cache-mode full, or by setting --s3-upload-cutoff large.
- Files uploaded via a multipart upload won't have their modtimes.
  - This means that rclone sync will likely keep trying to upload files bigger than --s3-upload-cutoff.
  - This can be worked around with --checksum or --size-only, or by setting --s3-upload-cutoff large.
  - The maximum value for --s3-upload-cutoff is 5 GiB though.
- 这意味着
A general purpose workaround is to set --s3-upload-cutoff 5G. This means that rclone will upload files smaller than 5 GiB as single parts. Note that this can be set in the config file with upload_cutoff = 5G or configured in the advanced settings. If you regularly transfer files larger than 5 GiB then using --checksum or --size-only in rclone sync is the recommended workaround.
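As a sketch, with upload_cutoff set the config file entry might look like this (the remote name and credentials are placeholders):
[storj-gateway]
type = s3
provider = Storj
access_key_id = XXXX
secret_access_key = XXXX
endpoint = gateway.eu1.storjshare.io
upload_cutoff = 5G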
Comparison with the native protocol
Use the native protocol to take advantage of client-side encryption as well as to achieve the best possible download performance. Uploads will be erasure-coded locally, thus a 1 GB upload will result in 2.68 GB of data being uploaded to storage nodes across the network.
Use this backend and the S3 compatible Hosted Gateway to increase upload performance and reduce the load on your systems and network. Uploads will be encrypted and erasure-coded server-side, thus a 1 GB upload will result in only 1 GB of data being uploaded to storage nodes across the network.
For a more detailed comparison please check the documentation of the storj backend.
Memory usage
The most common cause of rclone using lots of memory is a single directory with millions of files in it. Although S3 doesn't really have the concept of directories, rclone does the sync on a directory by directory basis to be compatible with normal filing systems.
Rclone loads each directory into memory as rclone objects. Each rclone object takes 0.5k-1k of memory, so approximately 1 GB per 1,000,000 files, and the sync for that directory does not begin until it is entirely loaded in memory. So the sync can take a long time to start for large directories.
To sync a directory with 100,000,000 files in it you would need approximately 100 GB of memory. This can be hard to provision in some circumstances, so there is a workaround which requires a bit of scripting (a sketch is shown below).
At some point rclone will gain a sync mode which is effectively this workaround but built into rclone.
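A minimal sketch of such a script, assuming a source src:bucket and a destination dst:bucket (both placeholders): it lists the files once, splits the listing into chunks, and copies each chunk without re-scanning the directory:
rclone lsf --files-only -R src:bucket | sort > src
split -l 10000 src src_chunk_
for f in src_chunk_*; do
    rclone copy --files-from-raw "$f" --no-traverse src:bucket dst:bucket
done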
Limitations
The S3 backend does not support the rclone about command. Backends without this capability cannot determine the free space for an rclone mount, or use the mfs (most free space) policy as a member of an rclone union remote.
See List of backends that do not support rclone about and rclone about
Synology C2 Object Storage
Synology C2 Object Storage provides a secure, S3-compatible, and cost-effective cloud storage solution without API request fees, download fees, or deletion penalties.
The S3 compatible gateway is configured using rclone config with a type of s3 and a provider name of Synology. Here is an example run of the configurator.
First run:
rclone config
This will guide you through an interactive setup process.
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
Enter name for new remote.
name> syno
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
XX / Amazon S3 Compliant Storage Providers including AWS, ...
\ "s3"
Storage> s3
Choose your S3 provider.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
24 / Synology C2 Object Storage
\ (Synology)
provider> Synology
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
\ "false"
2 / Get AWS credentials from the environment (env vars or IAM)
\ "true"
env_auth> 1
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
access_key_id> accesskeyid
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
secret_access_key> secretaccesskey
Region where your data is stored.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / Europe Region 1
\ (eu-001)
2 / Europe Region 2
\ (eu-002)
3 / US Region 1
\ (us-001)
4 / US Region 2
\ (us-002)
5 / Asia (Taiwan)
\ (tw-001)
region> 1
Option endpoint.
Endpoint for Synology C2 Object Storage API.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / EU Endpoint 1
\ (eu-001.s3.synologyc2.net)
2 / US Endpoint 1
\ (us-001.s3.synologyc2.net)
3 / TW Endpoint 1
\ (tw-001.s3.synologyc2.net)
endpoint> 1
Option location_constraint.
Location constraint - must be set to match the Region.
Leave blank if not sure. Used when creating buckets only.
Enter a value. Press Enter to leave empty.
location_constraint>
Edit advanced config? (y/n)
y) Yes
n) No
y/n> y
Option no_check_bucket.
If set, don't attempt to check the bucket exists or create it.
This can be useful when trying to minimise the number of transactions
rclone does if you know the bucket exists already.
It can also be needed if the user you are using does not have bucket
creation permissions. Before v1.52.0 this would have passed silently
due to a bug.
Enter a boolean value (true or false). Press Enter for the default (true).
no_check_bucket> true
Configuration complete.
Options:
- type: s3
- provider: Synology
- region: eu-001
- endpoint: eu-001.s3.synologyc2.net
- no_check_bucket: true
Keep this "syno" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
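The syno remote can then be used as usual, for example (my-bucket is a placeholder):
rclone copy /path/to/files syno:my-bucket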