
s3cmd v2.0.0 with Python 2.7.9 using SSL behind Squid Proxy, getting "ERROR: S3 error: 400 (Bad Request)" #905

@dannyk81


Hey guys,

I'm running s3cmd v2.0.0 on Debian Jessie (8.7) with Python 2.7.9.

We have a Squid proxy in place, and when setting up s3cmd with SSL (use_https = True), s3cmd fails with "ERROR: S3 error: 400 (Bad Request)". With use_https = False everything works perfectly.
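For reference, the relevant pieces of my configuration (values match the debug output further down) boil down to:

```ini
; Relevant ~/.s3cfg keys for this setup
use_https = True
proxy_host = fpx.prd.mia.novumproject.com
proxy_port = 8080
check_ssl_certificate = True
check_ssl_hostname = True
```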

UPDATE: I performed two tests.
UPDATE 2: Added tests 3-5.

  1. I installed aws-cli (v1.11.127) on the same node and it works just fine (I verified it uses our proxy and HTTPS):

$ aws s3 ls
2017-07-20 21:04:20 *****-backups-***
2017-07-26 01:22:42 *****-backups-***

  2. I configured the server not to use a proxy, and it works too.

  3. Tried with Python 3 (3.4.2) -> doesn't work.

  4. Tried with buckets in different regions (Ireland, Oregon) -> doesn't work.

  5. Upgraded Squid from 3.4.8 to 3.5.25 -> didn't help.

Based on the above, it seems to be some combination of s3cmd and Squid specifically that doesn't work. However, we have many other HTTPS connections going through this proxy using the CONNECT method without any issues...
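For context, the tunnelling path s3cmd takes here is the standard library's CONNECT support (httplib in Python 2, http.client in Python 3). A minimal sketch of the same setup, using the proxy host and port from my config (this only builds the connection object; no bytes are sent until a request is made):

```python
import http.client

# Point the connection at the proxy itself; s3cmd does the Python 2
# (httplib) equivalent of this when proxy_host/proxy_port are set.
conn = http.client.HTTPSConnection("fpx.prd.mia.novumproject.com", 8080)

# Ask the proxy to establish a CONNECT tunnel to S3; TLS is then
# negotiated end-to-end with s3.amazonaws.com through that tunnel.
conn.set_tunnel("s3.amazonaws.com", 443)

# A real request would now flow through the tunnel:
# conn.request("GET", "/")
# resp = conn.getresponse()
```

If this plain-stdlib version also fails through Squid while curl succeeds, that would point at how the tunnel is set up rather than at s3cmd's request itself.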


Here is a debug snippet:

$ s3cmd ls --debug
DEBUG: s3cmd version 2.0.0
DEBUG: ConfigParser: Reading file '/home/backup/.s3cfg'
DEBUG: ConfigParser: access_key->AK...17_chars...A
DEBUG: ConfigParser: access_token->
DEBUG: ConfigParser: add_encoding_exts->
DEBUG: ConfigParser: add_headers->
DEBUG: ConfigParser: bucket_location->US
DEBUG: ConfigParser: ca_certs_file->
DEBUG: ConfigParser: cache_file->
DEBUG: ConfigParser: check_ssl_certificate->True
DEBUG: ConfigParser: check_ssl_hostname->True
DEBUG: ConfigParser: cloudfront_host->cloudfront.amazonaws.com
DEBUG: ConfigParser: default_mime_type->binary/octet-stream
DEBUG: ConfigParser: delay_updates->False
DEBUG: ConfigParser: delete_after->False
DEBUG: ConfigParser: delete_after_fetch->False
DEBUG: ConfigParser: delete_removed->False
DEBUG: ConfigParser: dry_run->False
DEBUG: ConfigParser: enable_multipart->True
DEBUG: ConfigParser: encoding->UTF-8
DEBUG: ConfigParser: encrypt->False
DEBUG: ConfigParser: expiry_date->
DEBUG: ConfigParser: expiry_days->
DEBUG: ConfigParser: expiry_prefix->
DEBUG: ConfigParser: follow_symlinks->False
DEBUG: ConfigParser: force->False
DEBUG: ConfigParser: get_continue->False
DEBUG: ConfigParser: gpg_command->/usr/bin/gpg
DEBUG: ConfigParser: gpg_decrypt->%(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
DEBUG: ConfigParser: gpg_encrypt->%(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
DEBUG: ConfigParser: gpg_passphrase->f0...3_chars...R
DEBUG: ConfigParser: guess_mime_type->True
DEBUG: ConfigParser: host_base->s3.amazonaws.com
DEBUG: ConfigParser: host_bucket->%(bucket)s.s3.amazonaws.com
DEBUG: ConfigParser: human_readable_sizes->False
DEBUG: ConfigParser: invalidate_default_index_on_cf->False
DEBUG: ConfigParser: invalidate_default_index_root_on_cf->True
DEBUG: ConfigParser: invalidate_on_cf->False
DEBUG: ConfigParser: kms_key->
DEBUG: ConfigParser: limit->-1
DEBUG: ConfigParser: limitrate->0
DEBUG: ConfigParser: list_md5->False
DEBUG: ConfigParser: log_target_prefix->
DEBUG: ConfigParser: long_listing->False
DEBUG: ConfigParser: max_delete->-1
DEBUG: ConfigParser: mime_type->
DEBUG: ConfigParser: multipart_chunk_size_mb->15
DEBUG: ConfigParser: multipart_max_chunks->10000
DEBUG: ConfigParser: preserve_attrs->True
DEBUG: ConfigParser: progress_meter->True
DEBUG: ConfigParser: proxy_host->fpx.prd.mia.novumproject.com
DEBUG: ConfigParser: proxy_port->8080
DEBUG: ConfigParser: put_continue->False
DEBUG: ConfigParser: recursive->False
DEBUG: ConfigParser: recv_chunk->65536
DEBUG: ConfigParser: reduced_redundancy->False
DEBUG: ConfigParser: requester_pays->False
DEBUG: ConfigParser: restore_days->1
DEBUG: ConfigParser: restore_priority->Standard
DEBUG: ConfigParser: secret_key->9U...37_chars...A
DEBUG: ConfigParser: send_chunk->65536
DEBUG: ConfigParser: server_side_encryption->False
DEBUG: ConfigParser: signature_v2->False
DEBUG: ConfigParser: simpledb_host->sdb.amazonaws.com
DEBUG: ConfigParser: skip_existing->False
DEBUG: ConfigParser: socket_timeout->300
DEBUG: ConfigParser: stats->False
DEBUG: ConfigParser: stop_on_error->False
DEBUG: ConfigParser: storage_class->
DEBUG: ConfigParser: urlencoding_mode->normal
DEBUG: ConfigParser: use_http_expect->False
DEBUG: ConfigParser: use_https->True
DEBUG: ConfigParser: use_mime_magic->True
DEBUG: ConfigParser: verbosity->WARNING
DEBUG: ConfigParser: website_endpoint->http://%(bucket)s.s3-website-%(location)s.amazonaws.com/
DEBUG: ConfigParser: website_error->
DEBUG: ConfigParser: website_index->index.html
DEBUG: Updating Config.Config cache_file ->
DEBUG: Updating Config.Config follow_symlinks -> False
DEBUG: Updating Config.Config verbosity -> 10
DEBUG: Unicodising 'ls' using UTF-8
DEBUG: Command: ls
DEBUG: CreateRequest: resource[uri]=/
DEBUG: Using signature v2
DEBUG: SignHeaders: u'GET\n\n\n\nx-amz-date:Fri, 28 Jul 2017 01:12:54 +0000\n/'
DEBUG: Processing request, please wait...
DEBUG: get_hostname(None): s3.amazonaws.com
DEBUG: ConnMan.get(): creating new connection: proxy://fpx.prd.mia.novumproject.com:8080
DEBUG: Using ca_certs_file None
DEBUG: httplib.HTTPSConnection() has only context
DEBUG: proxied HTTPSConnection(fpx.prd.mia.novumproject.com, 8080)
DEBUG: tunnel to s3.amazonaws.com, None
DEBUG: format_uri(): /
DEBUG: Sending request method_string='GET', uri=u'/', headers={'Authorization': u'AWS AKIAJC6JE7KDKR2N4WKA:7e0JA1RN3bcDPMUlFyekxjUcllk=', 'x-amz-date': 'Fri, 28 Jul 2017 01:12:54 +0000'}, body=(0 bytes)
DEBUG: ConnMan.put(): closing proxy connection (keep-alive not yet supported)
DEBUG: Response:
{'data': '',
 'headers': {'connection': 'close',
             'date': 'Fri, 28 Jul 2017 01:12:53 GMT',
             'server': 'AmazonS3',
             'transfer-encoding': 'chunked'},
 'reason': 'Bad Request',
 'status': 400}
DEBUG: S3Error: 400 (Bad Request)
DEBUG: HttpHeader: transfer-encoding: chunked
DEBUG: HttpHeader: date: Fri, 28 Jul 2017 01:12:53 GMT
DEBUG: HttpHeader: connection: close
DEBUG: HttpHeader: server: AmazonS3
ERROR: S3 error: 400 (Bad Request)
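For anyone comparing against the SignHeaders line above: signature v2 is just a base64-encoded HMAC-SHA1 of that string-to-sign with the secret key, so it can be sanity-checked offline. A quick sketch (the key below is a placeholder, not my real secret key):

```python
import base64
import hashlib
import hmac

def sign_v2(secret_key, string_to_sign):
    """AWS signature v2: base64(HMAC-SHA1(secret_key, string_to_sign))."""
    digest = hmac.new(secret_key.encode("utf-8"),
                      string_to_sign.encode("utf-8"),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode("ascii")

# The exact string-to-sign from the SignHeaders debug line above:
string_to_sign = "GET\n\n\n\nx-amz-date:Fri, 28 Jul 2017 01:12:54 +0000\n/"

signature = sign_v2("placeholder-secret-key", string_to_sign)
```

With the real secret key this should reproduce the signature in the Authorization header, and a mismatched signature would normally come back as 403 (SignatureDoesNotMatch) rather than 400, which suggests signing itself is probably not the problem here.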

This seems like an issue with our proxy, but I can't figure out what exactly is wrong.
