
Allow an init.d-like dir to be used to execute scripts upon startup #1018

Merged
whummer merged 1 commit into localstack:master from shenie:init.d on Feb 19, 2019

Conversation

@shenie
Contributor

@shenie shenie commented Nov 23, 2018

Primarily this can be used to create resources on LocalStack, which is why the AWS CLI is also installed: scripts running inside the container can use it.

It solves problems like #1014
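For illustration, a minimal init script of the kind this PR enables might look like the following. The resource names are hypothetical (not from this PR), and the `awslocal` calls are guarded so the script only attempts them where the CLI actually exists, i.e. inside the container:

```shell
#!/usr/bin/env bash
# Hypothetical /docker-entrypoint-initaws.d/01-create-resources.sh sketch.
# Resource names below are examples only.

create_resources() {
  awslocal sqs create-queue --queue-name example-queue
  awslocal s3 mb s3://example-bucket
}

if command -v awslocal >/dev/null 2>&1; then
  create_resources
else
  echo "awslocal not available; run this inside the localstack container"
fi
```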


@coveralls

coveralls commented Nov 23, 2018

Coverage Status

Coverage remained the same at 72.714% when pulling b46a2c4 on shenie:init.d into a403747 on localstack:master.

@snorkypie

Could we please merge this? This is the way to do it.

@shenie shenie force-pushed the init.d branch 3 times, most recently from b487125 to b46a2c4 on February 18, 2019 at 20:11
@shenie
Contributor Author

shenie commented Feb 18, 2019

@whummer any feedback on this PR?

@whummer
Member

whummer commented Feb 18, 2019

Sorry for the long delay on this PR. Looks great - only a minor comment: is the pip installation required in the Dockerfile, @shenie? Thanks

@whummer whummer merged commit 4544ad5 into localstack:master Feb 19, 2019
@shenie shenie deleted the init.d branch February 19, 2019 01:20
@gopinath-langote

@shenie
How about giving the option to specify the file(s)/directory in the docker-compose configuration?

Using localstack for multiple projects locally, each with different scripts, is not possible with the current solution. If we allow developers to pass an option in docker-compose, we can achieve different configurations for different projects.

Any thoughts?

I can implement this and create a PR if you want :)

Regards,
Gopinath

@shenie
Contributor Author

shenie commented Feb 21, 2019

@gopinath-langote can I assume each project has its own docker-compose.yaml with a localstack service? If so, then you can have each project perform whatever configuration it needs by keeping scripts inside the project folder, e.g. <project-root>/localstack, and mounting it like this:

  localstack:
    image: localstack/localstack
    volumes:
      - $PWD/localstack:/docker-entrypoint-initaws.d
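To make the layout concrete, the snippet above assumes a per-project directory of scripts something like this (directory and script names are illustrative, not from the thread):

```shell
# Illustrative per-project layout for the compose volume mount above.
mkdir -p myproject/localstack
cat > myproject/localstack/init.sh <<'EOF'
#!/usr/bin/env bash
awslocal sqs create-queue --queue-name myproject-queue
EOF
chmod +x myproject/localstack/init.sh
ls myproject/localstack
```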

@gopinath-langote

@shenie

This way, we have a dependency on the AWS CLI to create resources on LocalStack, which means we need the AWS CLI installed on the agents running the CI tool, right?

@shenie
Contributor Author

shenie commented Feb 23, 2019

The aws CLI and awslocal CLI are both available in the localstack image, so your scripts can use them without installing anything.

@gopinath-langote

@shenie
Ohh I see.

Good, this will solve all the problems.

Thanks

Stovoy pushed a commit to Nextdoor/localstack that referenced this pull request Feb 27, 2019
@drewboardman

I'm attempting to use this feature, but I'm not seeing anything in my docker-compose output that indicates it's creating an SQS queue.

## docker-compose
version: "3.3"
services:
  localstack:
    image: localstack/localstack:0.11.0
    ports:
      - "4566-4599:4566-4599"
      - "${PORT_WEB_UI-8080}:${PORT_WEB_UI-8080}"
    environment:
      - SERVICES=s3,sqs
      - DEBUG=1
      - DATA_DIR=/tmp/localstack/data
      - PORT_WEB_UI=${PORT_WEB_UI- }
      - AWS_ACCESS_KEY_ID=dummy
      - AWS_SECRET_ACCESS_KEY=dummy
    volumes:
      - ./aws-test-setup:/docker-entrypoint-initaws.d:ro

My script looks like this (in aws-test-setup/create_queue.sh):

#!/usr/bin/env bash
# Run the app using local stack SQS and S3

export SQS_QUEUE_URL=http://localhost:4566/000000000000/integration
export SQS_ENDPOINT=http://localhost:4566
export SQS_REGION=us-east-1
# AWS_PROFILE needs to be set, but dev isn't actually used
export AWS_PROFILE=dev

# Create the queue
aws --endpoint-url $SQS_ENDPOINT sqs create-queue --queue-name default --region us-east-1

# Clear any existing messages from the queue
aws --endpoint-url $SQS_ENDPOINT sqs purge-queue --region us-east-1 --queue-url $SQS_QUEUE_URL

@mgagliardo
Contributor

mgagliardo commented May 10, 2021

Hello @drewboardman, 2 things:

  1. Does your script have execute permissions? (i.e. chmod +x script.sh)
  2. Can you remove the :ro from the mountpoint?
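As an aside, the first point can be checked from the host before the container ever starts; a throwaway sketch (the directory and file names are made up):

```shell
# Throwaway demo: make an init script executable and verify the bit is set.
mkdir -p demo-init
printf '#!/usr/bin/env bash\necho hello\n' > demo-init/create_queue.sh
chmod +x demo-init/*.sh
test -x demo-init/create_queue.sh && echo "executable: yes"
```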

@mgagliardo
Contributor

mgagliardo commented May 10, 2021

Ah, also: I have tested your setup, and it seems the AWS_PROFILE is not needed. You could fix this by using awslocal.

Output:

localstack_1  | 15:25:01.726 [main] INFO  org.elasticmq.server.Main$ - === ElasticMQ server (0.15.5) started in 4755 ms ===
localstack_1  | 2021-05-10 15:25:01,772:API: 127.0.0.1 - - [10/May/2021 15:25:01] "GET / HTTP/1.1" 200 -
localstack_1  | 2021-05-10 15:25:01,772:API: 127.0.0.1 - - [10/May/2021 15:25:01] "GET / HTTP/1.1" 200 -
localstack_1  | 2021-05-10 15:25:01,772:API: 127.0.0.1 - - [10/May/2021 15:25:01] "GET / HTTP/1.1" 200 -
localstack_1  | Ready.
localstack_1  | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initaws.d/create_queue.sh
localstack_1  | Traceback (most recent call last):
localstack_1  |   File "/usr/bin/aws", line 27, in <module>
localstack_1  |     sys.exit(main())
localstack_1  |   File "/usr/bin/aws", line 23, in main
localstack_1  |     return awscli.clidriver.main()
localstack_1  |   File "/opt/code/localstack/.venv/lib/python3.8/site-packages/awscli/clidriver.py", line 69, in main
localstack_1  |     rc = driver.main()
localstack_1  |   File "/opt/code/localstack/.venv/lib/python3.8/site-packages/awscli/clidriver.py", line 203, in main
localstack_1  |     command_table = self._get_command_table()
localstack_1  |   File "/opt/code/localstack/.venv/lib/python3.8/site-packages/awscli/clidriver.py", line 112, in _get_command_table
localstack_1  |     self._command_table = self._build_command_table()
localstack_1  |   File "/opt/code/localstack/.venv/lib/python3.8/site-packages/awscli/clidriver.py", line 129, in _build_command_table
localstack_1  |     self.session.emit('building-command-table.main',
localstack_1  |   File "/opt/code/localstack/.venv/lib/python3.8/site-packages/botocore/session.py", line 674, in emit
localstack_1  |     return self._events.emit(event_name, **kwargs)
localstack_1  |   File "/opt/code/localstack/.venv/lib/python3.8/site-packages/botocore/hooks.py", line 356, in emit
localstack_1  |     return self._emitter.emit(aliased_event_name, **kwargs)
localstack_1  |   File "/opt/code/localstack/.venv/lib/python3.8/site-packages/botocore/hooks.py", line 228, in emit
localstack_1  |     return self._emit(event_name, kwargs)
localstack_1  |   File "/opt/code/localstack/.venv/lib/python3.8/site-packages/botocore/hooks.py", line 211, in _emit
localstack_1  |     response = handler(**kwargs)
localstack_1  |   File "/opt/code/localstack/.venv/lib/python3.8/site-packages/awscli/customizations/preview.py", line 69, in mark_as_preview
localstack_1  |     service_name=original_command.service_model.service_name,
localstack_1  |   File "/opt/code/localstack/.venv/lib/python3.8/site-packages/awscli/clidriver.py", line 328, in service_model
localstack_1  |     return self._get_service_model()
localstack_1  |   File "/opt/code/localstack/.venv/lib/python3.8/site-packages/awscli/clidriver.py", line 345, in _get_service_model
localstack_1  |     api_version = self.session.get_config_variable('api_versions').get(
localstack_1  |   File "/opt/code/localstack/.venv/lib/python3.8/site-packages/botocore/session.py", line 237, in get_config_variable
localstack_1  |     return self.get_component('config_store').get_config_variable(
localstack_1  |   File "/opt/code/localstack/.venv/lib/python3.8/site-packages/botocore/configprovider.py", line 293, in get_config_variable
localstack_1  |     return provider.provide()
localstack_1  |   File "/opt/code/localstack/.venv/lib/python3.8/site-packages/botocore/configprovider.py", line 390, in provide
localstack_1  |     value = provider.provide()
localstack_1  |   File "/opt/code/localstack/.venv/lib/python3.8/site-packages/botocore/configprovider.py", line 451, in provide
localstack_1  |     scoped_config = self._session.get_scoped_config()
localstack_1  |   File "/opt/code/localstack/.venv/lib/python3.8/site-packages/botocore/session.py", line 337, in get_scoped_config
localstack_1  |     raise ProfileNotFound(profile=profile_name)
localstack_1  | botocore.exceptions.ProfileNotFound: The config profile (dev) could not be found

Looking at your script, the SQS queue URL is wrongly declared (the queue is named default, but you put integration in the URL), and the AWS_PROFILE is not needed.

$ awslocal sqs list-queues
{
    "QueueUrls": [
        "http://localhost:4566/queue/default"
    ]
}
#!/usr/bin/env bash
# Run the app using local stack SQS and S3

export SQS_QUEUE_URL=http://localhost:4566/queue/default
export SQS_ENDPOINT=http://localhost:4566
export SQS_REGION=us-east-1
# AWS_PROFILE is not needed

# Create the queue
aws --endpoint-url $SQS_ENDPOINT sqs create-queue --queue-name default --region us-east-1

# Clear any existing messages from the queue
aws --endpoint-url $SQS_ENDPOINT sqs purge-queue --region us-east-1 --queue-url $SQS_QUEUE_URL

@drewboardman

Hello @drewboardman, 2 things:

1. Does your script have execute permissions? (i.e. `chmod +x script.sh`)

2. Can you remove the `:ro` from the mountpoint?

Thanks, I got the script running. However, I have another issue. I'm running this container with Testcontainers in an integration test. In the docker-compose output, the container reports Ready before the script posted above executes. This means the test attempts to execute against a queue that doesn't exist, resulting in TestFailed(Ordinal(0, 3),AWS.SimpleQueueService.NonExistentQueue
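One workaround (a sketch, not something from this thread) is to have the test runner poll until the queue actually exists before starting the suite. A generic retry helper in shell, with the awslocal probe shown as an assumed usage:

```shell
#!/usr/bin/env bash
# Generic retry helper: run a probe command until it succeeds or the
# retry budget is exhausted; returns non-zero on timeout.
wait_for() {
  local retries=$1; shift
  local attempt=0
  until "$@" >/dev/null 2>&1; do
    attempt=$((attempt + 1))
    if [ "$attempt" -ge "$retries" ]; then
      return 1
    fi
    sleep 1
  done
}

# Assumed usage against a running LocalStack (queue name is hypothetical):
#   wait_for 30 awslocal sqs get-queue-url --queue-name default
```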

@mgagliardo
Contributor

@drewboardman Why don't you create the queue in the integration test instead of using such a script? You can see some examples at https://github.com/localstack/localstack-java-utils

@drewboardman

@drewboardman Why don't you create the queue in the integration test instead of using such a script? You can see some examples at https://github.com/localstack/localstack-java-utils

That's what I originally was doing; however, this creates a flaky scenario where the queue gets created twice. I don't want test code to fail for reasons that aren't related to the code it's supposed to test.
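For what it's worth, a guard of this shape makes an init script tolerant of the queue already existing; the helper name, endpoint default, and queue name are all hypothetical, not something from this thread:

```shell
#!/usr/bin/env bash
# Hypothetical idempotent helper: create the queue only if it is absent.
# SQS_ENDPOINT defaults to a common LocalStack edge port (an assumption).
ensure_queue() {
  local name=$1
  local endpoint="${SQS_ENDPOINT:-http://localhost:4566}"
  if aws --endpoint-url "$endpoint" sqs get-queue-url \
      --queue-name "$name" >/dev/null 2>&1; then
    echo "queue $name already exists"
  else
    aws --endpoint-url "$endpoint" sqs create-queue --queue-name "$name"
  fi
}
```

Called as `ensure_queue default`, it is safe to run from both the init script and the test setup without the double-creation race mattering.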
