【DNM】* *: support meta with s3 #66
DanielZhangQD wants to merge 3 commits into pingcap:master from DanielZhangQD:metas3
Conversation
Codecov Report
    @@            Coverage Diff            @@
    ##           master      #66     +/-   ##
    ==========================================
    - Coverage    62.2%   59.19%    -3.01%
    ==========================================
      Files          22       23       +1
      Lines        2643     2779     +136
    ==========================================
    + Hits         1644     1645       +1
    - Misses        768      901     +133
    - Partials      231      233       +2
Continue to review full report at Codecov.
@overvenus @tennix @onlymellb PTAL the new commit with aws-sdk-go.
@overvenus @kennytm @tennix @onlymellb PTAL the new commit with go cloud dev kit, thanks!
kennytm
left a comment
Is it possible to add a test case? Use an S3 mock, minio, or whatever...
    if aerr, ok := err.(awserr.Error); ok {
        switch aerr.Code() {
        case s3.ErrCodeNoSuchBucket, notFound:
            return createS3Bucket(svc, bucket)
I'm not sure if it is desired to create a bucket by default
It's better that users create the bucket by themselves and BR just uses it.
but here BR creates a bucket if it's missing
I think we can try to create the bucket if it's missing, but we should add documentation telling users to create it themselves. The minimal permission required is to read and write objects in the specified bucket.
This avoids starting over again if users forgot to create the bucket.
Will make it clear in the document that:
- Users are suggested to create the bucket by themselves.
- If not, BR will create it automatically with the default configuration, and users have to handle the ACLs and any other attributes they want to set by themselves.
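The check-then-create flow agreed above can be sketched in Go. This is a minimal illustrative sketch, not the PR's actual code: `ensureBucket` and its `head`/`create` callbacks are hypothetical names standing in for the aws-sdk-go `HeadBucket`/`CreateBucket` calls, so the control flow is testable without network access.

```go
package main

import (
	"errors"
	"fmt"
)

// errNoSuchBucket stands in for the AWS error code
// s3.ErrCodeNoSuchBucket returned when a bucket is missing.
var errNoSuchBucket = errors.New("NoSuchBucket")

// ensureBucket checks whether the bucket exists and, as a
// fallback, creates it with the default configuration (users
// then own ACLs and other attributes). head and create
// abstract the S3 client calls.
func ensureBucket(bucket string, head, create func(string) error) error {
	err := head(bucket)
	if err == nil {
		return nil // bucket exists, nothing to do
	}
	if errors.Is(err, errNoSuchBucket) {
		// Bucket is missing: create it automatically.
		return create(bucket)
	}
	return err // e.g. permission denied: surface to the user
}

func main() {
	var created []string
	head := func(b string) error {
		if b == "pre-existing" {
			return nil
		}
		return errNoSuchBucket
	}
	create := func(b string) error {
		created = append(created, b)
		return nil
	}
	_ = ensureBucket("pre-existing", head, create)
	_ = ensureBucket("missing", head, create)
	fmt.Println(created) // only the missing bucket is created
}
```

With this shape, a pre-created bucket only needs object read/write permission, while the create path is exercised solely when the head check reports the bucket as absent.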
    prs, ok := qs[providerKey]
    if ok && len(prs) >= 1 {
        sqs.provider = prs[0]
    }
Perhaps use the URL scheme `oss://bucket` for Aliyun OSS and `s3://bucket` for AWS S3 instead? Automatically set the endpoint to `oss-$region.aliyuncs.com` if the URL scheme is `oss://`.
Thanks for sharing this info. I'd like to test Aliyun and change the necessary code in another issue; currently the focus is on AWS S3 and Ceph.
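The scheme-to-endpoint mapping suggested above could look like the following sketch; `endpointForScheme` is a hypothetical helper for illustration, not code from the PR.

```go
package main

import "fmt"

// endpointForScheme derives a default endpoint from the URL
// scheme: "oss" maps to the regional Aliyun OSS endpoint,
// while "s3" returns "" so the AWS SDK picks its default
// regional endpoint.
func endpointForScheme(scheme, region string) string {
	if scheme == "oss" {
		return fmt.Sprintf("oss-%s.aliyuncs.com", region)
	}
	return ""
}

func main() {
	fmt.Println(endpointForScheme("oss", "cn-hangzhou"))
	// prints oss-cn-hangzhou.aliyuncs.com
}
```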
What I expect would be:

- Remove `provider`
- One of `region` or `endpoint` must be supplied
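The expectation above can be sketched as a small parser over the storage URL; this is an illustrative sketch (the names `s3Config` and `parseStorageURL` are hypothetical), using only the standard library's `net/url`.

```go
package main

import (
	"fmt"
	"net/url"
)

// s3Config holds the fields parsed from a storage URL such as
// s3://bucket?region=us-west-2 or
// s3://bucket?endpoint=http://127.0.0.1:9000 (e.g. Ceph/minio).
type s3Config struct {
	bucket   string
	region   string
	endpoint string
}

// parseStorageURL parses the storage argument with no provider
// key, requiring that one of region or endpoint is supplied.
func parseStorageURL(raw string) (*s3Config, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return nil, err
	}
	if u.Scheme != "s3" {
		return nil, fmt.Errorf("unsupported scheme %q", u.Scheme)
	}
	qs := u.Query()
	cfg := &s3Config{
		bucket:   u.Host,
		region:   qs.Get("region"),
		endpoint: qs.Get("endpoint"),
	}
	if cfg.region == "" && cfg.endpoint == "" {
		return nil, fmt.Errorf("one of region or endpoint must be supplied")
	}
	return cfg, nil
}

func main() {
	cfg, err := parseStorageURL("s3://mybucket?region=us-west-2")
	if err != nil {
		panic(err)
	}
	fmt.Println(cfg.bucket, cfg.region)
}
```

AWS S3 users would pass `region`, while Ceph or minio users would pass an explicit `endpoint`, which is why requiring one of the two covers both cases without a separate provider field.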
For the internal protocol, I think using the unified URL as gocloud does should be OK. For the user interface, I agree we can make it less verbose by adopting oss/s3/ceph/minio etc.
I don't see how `provider` can simplify config for Ceph or Minio.
I will update this if necessary when the interface between BR and TiKV is determined.
@kennytm comments addressed, PTAL again, thanks!
    sqs.bucket = u.Host

    if u.Path != "" {
        log.Info("path in storage arg is ignored", zap.String("path", u.Path))
    -    log.Info("path in storage arg is ignored", zap.String("path", u.Path))
    +    log.Warn("path in storage arg is ignored", zap.String("path", u.Path))

Let's make this a warning.
#88 has updated the storage interface and reorganized the code structure, so this PR is closed and another one will be submitted.
Support storing backup meta with S3.
Here are the commands to use: