turned off S3 bucket validation #2126
Conversation
Overriding the default security here doesn't quite feel right to me. I'd like to make sure there aren't any other alternatives first... Have to admit I've never actually had a need to use the
I think @stmarier understands this a lot better than I do, but I don't believe this is security-related. For our particular use case, I think it's easier to use S3 with key pairs than https. But yeah, I agree with you that the https protocol is more generally useful.
I think @mheilman summed it up pretty well. To my knowledge there's nothing inherently secure or insecure about either approach. Perhaps it might be more appropriate to move the try/except to the
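The try/except move suggested above might look roughly like this sketch. The helper name, the fallback-on-error behavior, and the `ImportError` guard are my assumptions for illustration, not code from this PR:

```python
# Hedged sketch: attempt boto's validated bucket lookup first, and fall
# back to an unvalidated handle if the caller lacks access to the bucket
# root. `get_bucket_lenient` is a hypothetical helper, not conda's code.
try:
    from boto.exception import S3ResponseError
except ImportError:
    # Fallback so the sketch stays self-contained without boto installed.
    class S3ResponseError(Exception):
        pass


def get_bucket_lenient(conn, bucket_name):
    """Return a bucket handle, preferring boto's validated lookup."""
    try:
        # validate=True makes boto issue a request against the bucket
        # root, which requires permissions on s3://<bucket>/ itself.
        return conn.get_bucket(bucket_name, validate=True)
    except S3ResponseError:
        # No access to the bucket root: hand back an unvalidated handle.
        # Any real permission problem surfaces on the first key access.
        return conn.get_bucket(bucket_name, validate=False)
```

This keeps the validated call as the common path while letting users with prefix-only permissions through, instead of flipping the default for everyone.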
It's not security related. If you pass `validate=False`, boto skips the initial check against the bucket root. When setting `validate=True` (the default), boto lists the bucket, which is what requires permissions on the bucket root itself.
Got it. @mheilman, could you amend the PR with that suggestion?
force-pushed from d545c42 to a93835b
The inspection completed: 2 updated code elements
Hi there, thank you for your contribution to Conda! This pull request has been automatically locked since it has not had recent activity after it was closed. Please open a new issue or pull request if needed.
Currently, for s3 channels, conda uses boto's default `validate=True` for `get_bucket` (cf. boto docs) when grabbing packages from s3. This requires one to have permissions both for the root of the bucket (e.g., `s3://my-bucket/`) as well as for the specific key where the conda repository lives (e.g., `s3://my-bucket/path/to/conda/`).

It seems better to use `validate=False` so that one can put a conda repo in a place on s3 where not all the users would have to have permissions for the root of the bucket.

Also, the current error message if you don't have permissions for the root of the bucket is fairly confusing. And where that error is printed, it's actually catching a 404 error, even though it's ultimately a permissions-related error raised by boto (IIRC, something like `A client error (AccessDenied) occurred when calling the ListObjects operation: Access Denied`, which is what I get if I do `aws s3 ls s3://my-bucket/`).
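The behavior described above could be sketched roughly as follows. The helper function, the bucket name, and the key path are illustrative placeholders, not conda's actual fetch code:

```python
# Hedged sketch of fetching a key with validate=False: boto skips the
# initial request against the bucket root, so the caller only needs read
# permission on the repo prefix, not on s3://<bucket>/ itself.
def fetch_key_without_validation(conn, bucket_name, key_name):
    """Fetch a key via an unvalidated bucket handle (may return None)."""
    bucket = conn.get_bucket(bucket_name, validate=False)
    # get_key returns None if the key is missing or unreadable.
    return bucket.get_key(key_name)
```

In real use, `conn` would be a `boto.connect_s3()` connection; passing a connection in keeps the permission-sensitive call in one place.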